Question
We need to compute the Jaro-Winkler distance between strings in an Apache Spark Dataset. We are new to Spark, and after searching the web we could not find much, so it would be great if you could guide us. We thought of using flatMap, then realized it wouldn't help; we then tried a couple of foreach loops but could not figure out how to go forward, since each string has to be compared against all the others, as in the dataset below.
RowFactory.create(0, "Hi I heard about Spark"),
RowFactory.create(1, "I wish Java could use case classes"),
RowFactory.create(2, "Logistic,regression,models,are,neat"));
Example Jaro-Winkler scores across all strings found in the above DataFrame:
Distance score between labels 0,1 -> 0.56
Distance score between labels 0,2 -> 0.77
Distance score between labels 0,3 -> 0.45
Distance score between labels 1,2 -> 0.77
Distance score between labels 2,3 -> 0.79
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import info.debatty.java.stringsimilarity.JaroWinkler;

public class JaroTestExample {

    public static void main(String[] args) {
        System.setProperty("hadoop.home.dir", "C:\\winutil");
        // Create the context first so the local[*] master is picked up by the session below.
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("SparkJdbcDs").setMaster("local[*]"));
        SparkSession spark = SparkSession.builder()
                .appName("JavaTokenizerExample").getOrCreate();

        // Sanity check of the library on two string pairs.
        JaroWinkler jw = new JaroWinkler();
        // substitution of s and t
        System.out.println(jw.similarity("My string", "My tsring"));
        // substitution of s and n
        System.out.println(jw.similarity("My string", "My ntrisg"));

        List<Row> data = Arrays.asList(
                RowFactory.create(0, "Hi I heard about Spark"),
                RowFactory.create(1, "I wish Java could use case classes"),
                RowFactory.create(2, "Logistic,regression,models,are,neat"));
        StructType schema = new StructType(new StructField[] {
                new StructField("label", DataTypes.IntegerType, false, Metadata.empty()),
                new StructField("sentence", DataTypes.StringType, false, Metadata.empty()) });
        Dataset<Row> sentenceDataFrame = spark.createDataFrame(data, schema);

        // Stuck here: how do we score every sentence against every other one?
        sentenceDataFrame.show();
    }
}
Answer 1:
A cross join in Spark can be done with the code below:

Dataset2Object = Dataset1Object.crossJoin(Dataset2Object);

In Dataset2Object you then get every combination of record pairs, which is what you need. flatMap won't be helpful in this case. Remember to use spark-sql_2.11 version 2.1.0, since Dataset.crossJoin was introduced in Spark 2.1.0.
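For completeness, here is a minimal sketch of how the crossJoin approach could look end to end, assuming Spark 2.1+ and the info.debatty string-similarity library already used in the question. The class name JaroCrossJoinSketch and the UDF name "jaroWinkler" are made up for illustration; they are not part of the original answer.

import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF2;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import info.debatty.java.stringsimilarity.JaroWinkler;
import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

public class JaroCrossJoinSketch {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("JaroCrossJoinSketch").master("local[*]").getOrCreate();

        List<Row> data = Arrays.asList(
                RowFactory.create(0, "Hi I heard about Spark"),
                RowFactory.create(1, "I wish Java could use case classes"),
                RowFactory.create(2, "Logistic,regression,models,are,neat"));
        StructType schema = new StructType(new StructField[] {
                new StructField("label", DataTypes.IntegerType, false, Metadata.empty()),
                new StructField("sentence", DataTypes.StringType, false, Metadata.empty()) });
        Dataset<Row> sentences = spark.createDataFrame(data, schema);

        // Register a UDF that scores two strings; the name "jaroWinkler" is arbitrary.
        spark.udf().register("jaroWinkler",
                (UDF2<String, String, Double>) (s1, s2) -> new JaroWinkler().similarity(s1, s2),
                DataTypes.DoubleType);

        // Rename the columns on each side so the cross join produces distinct names.
        Dataset<Row> left = sentences.toDF("label1", "sentence1");
        Dataset<Row> right = sentences.toDF("label2", "sentence2");

        // crossJoin yields every combination of rows; the filter keeps each
        // unordered pair once and drops self-comparisons.
        Dataset<Row> scored = left.crossJoin(right)
                .filter(col("label1").lt(col("label2")))
                .withColumn("score",
                        callUDF("jaroWinkler", col("sentence1"), col("sentence2")));

        scored.show(false);
        spark.stop();
    }
}

The label1 < label2 filter is one design choice: it produces exactly the pairwise scores the question asks for (0-1, 0-2, 1-2) without duplicate or self pairs; drop it if you want the full cartesian product.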
Source: https://stackoverflow.com/questions/41701733/jaro-winkler-score-calculation-in-apache-spark