Preparing data for LDA in Spark

One way to handle this:

  • make sure that the spark-csv package is available
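
    One way to do this (a sketch; the exact artifact version must match your Spark and Scala builds, and 2.10/1.5.0 below is only an example) is to pass the package to spark-shell at startup:

    spark-shell --packages com.databricks:spark-csv_2.10:1.5.0
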
  • load the data into a DataFrame and select the columns of interest

    import sqlContext.implicits._ // enables the $"colName" column syntax

    val df = sqlContext.read
        .format("com.databricks.spark.csv")
        .option("header", "true")
        .option("inferSchema", "true") // optional; providing an explicit schema is preferred
        .option("delimiter", "\t")
        .load("foo.csv")
        .select($"doc".cast("long").alias("doc"), $"term")
    
  • index the term column:

    import org.apache.spark.ml.feature.StringIndexer
    import org.apache.spark.sql.functions.{count, lit}

    val indexer = new StringIndexer()
      .setInputCol("term")
      .setOutputCol("termIndexed")

    val indexed = indexer.fit(df)
      .transform(df)
      .drop("term")
      .withColumn("termIndexed", $"termIndexed".cast("integer"))
      .groupBy($"doc", $"termIndexed")
      .agg(count(lit(1)).cast("double").alias("cnt")) // cast before alias so the column keeps the name "cnt"
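
    If you later need to map indices back to the original terms (for instance when inspecting LDA topics), it can help to keep the fitted model around; a small sketch, assuming you reuse a single fitted StringIndexerModel (the vocabulary name is only illustrative):

    val indexerModel = indexer.fit(df)
    val vocabulary: Array[String] = indexerModel.labels // position i holds the term indexed as i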
    
  • convert to a pair RDD:

    import org.apache.spark.sql.Row
    
    val pairs = indexed.rdd.map { case Row(doc: Long, term: Int, cnt: Double) =>
      (doc, (term, cnt))
    }
    
  • group by doc:

    val docs = pairs.groupByKey // RDD[(Long, Iterable[(Int, Double)])]
    
  • create feature vectors:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.sql.functions.max
    
    val n = indexed.select(max($"termIndexed")).first.getInt(0) + 1 // vocabulary size: largest index + 1
    
    val docsWithFeatures = docs.mapValues(vs => Vectors.sparse(n, vs.toSeq))
    
  • now you have all you need to create LabeledPoints or apply additional processing (see the sketch below)
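
    For instance, docsWithFeatures already has the RDD[(Long, Vector)] shape that the MLlib LDA implementation expects; a minimal sketch (k = 10 is an arbitrary choice):

    import org.apache.spark.mllib.clustering.LDA

    docsWithFeatures.cache() // LDA makes multiple passes over the corpus

    val ldaModel = new LDA()
      .setK(10) // number of topics; an arbitrary example value
      .run(docsWithFeatures)

    From here, ldaModel.describeTopics can be combined with the indexer's labels to print topics with human-readable terms.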
