Question
I'm looking at the documentation for Statistics.corr in PySpark: https://spark.apache.org/docs/1.1.0/api/python/pyspark.mllib.stat.Statistics-class.html#corr.
Why does the correlation here result in NaN?
>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.stat import Statistics
>>> rdd = sc.parallelize([Vectors.dense([1, 0, 0, -2]), Vectors.dense([4, 5, 0, 3]),
...                       Vectors.dense([6, 7, 0, 8]), Vectors.dense([9, 0, 0, 1])])
>>> pearsonCorr = Statistics.corr(rdd)
>>> print str(pearsonCorr).replace('nan', 'NaN')
[[ 1.          0.05564149         NaN  0.40047142]
 [ 0.05564149  1.                 NaN  0.91359586]
 [        NaN         NaN  1.                 NaN]
 [ 0.40047142  0.91359586         NaN  1.        ]]
Answer 1:
It is pretty simple. The Pearson correlation coefficient is defined as follows:

ρ(X, Y) = cov(X, Y) / (σ_X · σ_Y)

Since the standard deviation of the third column ([0, 0, 0, 0]) is equal to 0, the denominator is zero and the whole equation results in NaN.
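To see the mechanics without Spark, here is a minimal sketch that recomputes one of the NaN entries by hand with NumPy (the column indices refer to the data above; the names data, x, and z are ours, not from the question):

import numpy as np

# The four vectors from the question, stacked as rows; the third column is all zeros.
data = np.array([[1, 0, 0, -2],
                 [4, 5, 0, 3],
                 [6, 7, 0, 8],
                 [9, 0, 0, 1]], dtype=float)

x = data[:, 0]  # first column
z = data[:, 2]  # constant third column, whose std is 0

# Pearson: cov(X, Z) / (sigma_X * sigma_Z)
cov_xz = ((x - x.mean()) * (z - z.mean())).mean()
denom = x.std() * z.std()  # 0.0, because z.std() == 0.0

print(cov_xz / denom)  # 0.0 / 0.0 -> nan (NumPy emits a RuntimeWarning)

Any entry of the correlation matrix involving that column hits the same zero denominator, which is why the entire third row and column (apart from the diagonal, which is 1 by definition) come out as NaN.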
Source: https://stackoverflow.com/questions/36728771/why-does-this-example-result-in-nan