Problems with pySpark columnSimilarities

Asked by 春和景丽 on 2021-01-17 01:55

tl;dr How do I use pySpark to compare the similarity of rows?

I have a numpy array where I would like to compare the similarity of each row to one another.

1 Answer
  • 2021-01-17 02:31

    First, the columnSimilarities method returns only the off-diagonal entries of the upper triangular portion of the similarity matrix. Since the 1's along the diagonal are absent, entire rows of the resulting similarity matrix can be all 0's.
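    The shape of the result can be illustrated without pyspark at all. This is a plain-NumPy sketch (not the Spark implementation) showing that keeping only entries (i, j) with i < j leaves the last row of the matrix entirely zero:

```python
import numpy as np

# Plain-NumPy illustration (not pyspark): columnSimilarities keeps only
# the entries (i, j) with i < j -- the strict upper triangle, no diagonal.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

norms = np.linalg.norm(X, axis=0)          # column norms
full = (X.T @ X) / np.outer(norms, norms)  # full cosine similarity matrix
upper = np.triu(full, k=1)                 # what columnSimilarities keeps

# The last row of the strict upper triangle is always all zeros, because
# there is no j > i when i is the last index -- hence whole rows of 0's.
print(upper)
```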

    Second, a pyspark RowMatrix doesn't have meaningful row indices. So when converting from a CoordinateMatrix to a RowMatrix, the i value in each MatrixEntry is mapped to whatever is convenient (probably some incrementing index). What is likely happening is that the rows that are all 0's are simply ignored and the matrix is squished vertically when you convert it to a RowMatrix.
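    The "squishing" can be sketched in plain Python (this is a toy model of the behavior, not the actual pyspark conversion code). Rows with no entries simply never appear when you only keep the rows that exist, whereas an index-preserving structure retains them:

```python
# Toy model of why a RowMatrix "squishes" empty rows while an
# IndexedRowMatrix preserves them. entries mimics the MatrixEntry
# list of a 3x3 CoordinateMatrix holding a strict upper triangle.
entries = [(0, 1, 0.5), (0, 2, 0.8), (1, 2, 0.3)]
n = 3

# RowMatrix-like: only rows that have entries survive; indices are lost.
present = sorted({i for i, _, _ in entries})
row_matrix = [[v for ii, j, v in entries if ii == i] for i in present]

# IndexedRowMatrix-like: every row keeps its original index.
indexed = {i: [0.0] * n for i in range(n)}
for i, j, v in entries:
    indexed[i][j] = v

# Row 2 (all zeros) vanished from row_matrix but survives in indexed.
print(len(row_matrix), len(indexed))
```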

    It probably makes sense to inspect the dimensions of the similarity matrix immediately after computing it with columnSimilarities. You can do this with the numRows() and numCols() methods.

    # exact is the CoordinateMatrix returned by columnSimilarities()
    print(exact.numRows(), exact.numCols())
    

    Other than that, it does sound like you need to transpose your matrix to get the correct vector similarities. Furthermore, if there is some reason that you need this in a RowMatrix-like form, you could try using an IndexedRowMatrix, which does have meaningful row indices and would preserve the row index from the original CoordinateMatrix upon conversion.
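    The transpose step works because columnSimilarities compares columns, so the column similarities of the transpose are the row similarities of the original. A plain-NumPy sketch of that identity (not pyspark code):

```python
import numpy as np

# Sketch: to compare ROWS with a column-similarity routine, feed it
# the TRANSPOSE -- columns of X.T are the rows of X.
X = np.random.default_rng(0).random((4, 3))

def col_cosine(M):
    """Full cosine similarity matrix between the columns of M."""
    norms = np.linalg.norm(M, axis=0)
    return (M.T @ M) / np.outer(norms, norms)

row_sims = col_cosine(X.T)  # similarities between the rows of X
# row_sims[i, k] is the cosine similarity of X[i] and X[k]
print(row_sims.shape)
```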
