From TF-IDF to LDA clustering in Spark (PySpark)

Submitted by 那年仲夏 on 2019-12-05 16:57:48
zero323

LDA expects an (id, features) pair as input, so assuming that KeyIndex serves as an ID:

from pyspark.mllib.clustering import LDA
from pyspark.sql.functions import col

k = ...  # number of clusters

# MLlib's RDD-based LDA.train expects an RDD of [id, vector] pairs,
# so convert the DataFrame to an RDD before mapping.
corpus = indexed_data.select(col("KeyIndex").cast("long"), "features").rdd.map(list)
model = LDA.train(corpus, k=k)

LDA does not take the TF-IDF matrix as input; it only accepts the raw term-frequency (TF) matrix, e.g. the output of CountVectorizer. For example:

from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, CountVectorizer, StopWordsRemover
from pyspark.ml.clustering import LDA

tokenizer = Tokenizer(inputCol="hashTagDocument", outputCol="words")

stopWordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered",
                                    stopWords=stopwords)

vectorizer = CountVectorizer(inputCol="filtered", outputCol="features",
                             vocabSize=40000, minDF=5)

lda = LDA(k=..., featuresCol="features")  # set k to the number of topics

pipeline = Pipeline(stages=[tokenizer, stopWordsRemover, vectorizer, lda])
pipelineModel = pipeline.fit(corpus)

pipelineModel.stages