matrix-factorization

Python non-negative matrix factorization that handles both zeros and missing data?

不羁岁月 submitted on 2019-11-29 23:14:05
I'm looking for an NMF implementation with a Python interface that handles both missing data and zeros. I don't want to impute my missing values before starting the factorization; I want them to be ignored in the minimized function. It seems that neither scikit-learn, nor nimfa, nor graphlab, nor mahout offers such an option. Thanks! Using this Matlab-to-Python code conversion sheet I was able to rewrite NMF from the Matlab toolbox library. I had to decompose a 40k x 1k matrix with a sparsity of 0.7%. Using 500 latent features, my machine took 20 minutes for 100 iterations. Here is the method: import
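The answer's code is cut off above, but the approach it describes (rewriting NMF so that missing entries are ignored in the objective while observed zeros are still fitted) is commonly done with masked multiplicative updates. Below is a minimal illustrative sketch, not the poster's actual code; the function name and defaults are mine:

```python
import numpy as np

def masked_nmf(X, mask, k, n_iter=100, eps=1e-9):
    """NMF that ignores missing entries: minimize ||mask * (X - W @ H)||_F^2.

    mask is 1 where X is observed and 0 where it is missing; observed zeros
    have mask = 1 and X = 0, so they are fitted rather than ignored.
    """
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Multiplicative updates with the mask applied to X and to W @ H,
        # so unobserved entries contribute nothing to either update.
        WH = mask * (W @ H)
        H *= (W.T @ (mask * X)) / (W.T @ WH + eps)
        WH = mask * (W @ H)
        W *= ((mask * X) @ H.T) / (WH @ H.T + eps)
    return W, H
```

Because updates are multiplicative from a positive random start, W and H stay non-negative throughout; for a 40k x 1k matrix one would use sparse matrix products rather than dense arrays, but the masking idea is the same.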

Write a trackable R function that mimics LAPACK's dgetrf for LU factorization

好久不见. submitted on 2019-11-29 14:10:16
There is no LU factorization function in R core. Although such a factorization is a step of solve, it is not made explicitly available as a stand-alone function. Can we write an R function for this? It needs to mimic the LAPACK routine dgetrf. The Matrix package has an lu function, which is good, but it would be better if we could write a trackable R function that can factorize the matrix up to a certain column/row and return the intermediate result, then continue the factorization from an intermediate result to another column/row or to the end. This function would be useful for both educational and
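The question asks for R, but the algorithm dgetrf implements (Gaussian elimination with partial pivoting, storing L and U in place in a single array) is language-agnostic. The "trackable" idea can be sketched as a function that eliminates only a given column range and can be called again to resume; the interface below is illustrative, shown in Python:

```python
import numpy as np

def lu_step(A, piv, start, stop):
    """Run dgetrf-style elimination on columns start..stop-1 of A in place.

    A holds L (strictly below the diagonal, unit diagonal implicit) and U
    (upper triangle) mixed together, as LAPACK stores them; piv records the
    row swap chosen at each step (0-based here). Calling this in pieces gives
    the "trackable" behavior: factorize up to a column, inspect A, resume.
    """
    n = A.shape[0]
    for k in range(start, min(stop, n)):
        # Partial pivoting: bring the largest entry of column k to the pivot.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        piv[k] = p
        if p != k:
            A[[k, p], :] = A[[p, k], :]  # swap full rows, as dgetrf does
        if A[k, k] != 0.0:
            A[k + 1:, k] /= A[k, k]                      # column of L
            A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k],  # rank-1 update
                                          A[k, k + 1:])
    return A, piv
```

Because each step only touches the trailing submatrix (plus whole-row swaps), stopping after column j and resuming later produces exactly the same factors as one uninterrupted call, which is what makes the intermediate state meaningful for teaching.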

Use coo_matrix in TensorFlow

时光总嘲笑我的痴心妄想 submitted on 2019-11-29 08:55:26
I'm doing a matrix factorization in TensorFlow, and I want to use coo_matrix from scipy.sparse because it uses less memory and makes it easy to put all my data into my matrix for training. Is it possible to use coo_matrix to initialize a variable in TensorFlow? Or do I have to create a session and feed the data into TensorFlow using sess.run() with feed_dict? I hope that you understand my question and my problem; otherwise comment and I will try to fix it. The closest thing TensorFlow has to scipy.sparse.coo_matrix is tf.SparseTensor, which is the sparse equivalent of tf.Tensor. It
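The answer is truncated above, but the mapping between the two formats is mechanical: tf.SparseTensor takes an [nnz, 2] int64 array of (row, col) indices, a values vector, and a dense shape, all of which a coo_matrix already holds. A small helper (the function name is mine) might look like:

```python
import numpy as np
from scipy.sparse import coo_matrix

def coo_to_sparse_tensor_args(coo):
    """Convert a scipy coo_matrix into the (indices, values, dense_shape)
    triple that tf.SparseTensor expects.

    tf.SparseTensor wants indices as an [nnz, 2] int64 array of (row, col)
    pairs, so we stack the coo matrix's row and col vectors column-wise.
    """
    indices = np.stack([coo.row, coo.col], axis=1).astype(np.int64)
    values = coo.data
    dense_shape = np.asarray(coo.shape, dtype=np.int64)
    return indices, values, dense_shape
```

One would then call tf.SparseTensor(*coo_to_sparse_tensor_args(m)). Note that a tf.Variable cannot be initialized directly from a SparseTensor; the sparse tensor is typically consumed by ops such as tf.sparse.sparse_dense_matmul, or its three component arrays are fed through placeholders via feed_dict in a TF 1.x session, as the question suspects.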

Cholesky decomposition of sparse matrices using permutation matrices

痴心易碎 submitted on 2019-11-27 21:33:15
I am interested in the Cholesky decomposition of large sparse matrices. The problem I'm having is that the Cholesky factors are not necessarily sparse (just as the product of two sparse matrices is not necessarily sparse). For example, for a matrix with non-zeros only along the first row, first column, and diagonal, the Cholesky factors have 100% fill-in (the lower and upper triangles are 100% dense). In the image below, gray is non-zero and white is zero. One solution I'm aware of is to find a permutation matrix P and do the Cholesky decomposition of P^T A P. For example with the same
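The fill-in the question describes, and how a reordering removes it, can be demonstrated with a small dense "arrowhead" example (numpy only; for genuinely large sparse problems one would use a fill-reducing ordering such as AMD via CHOLMOD/scikit-sparse rather than a hand-picked permutation):

```python
import numpy as np

def nnz_lower(L, tol=1e-12):
    """Count structurally nonzero entries of a triangular factor."""
    return int((np.abs(L) > tol).sum())

n = 8
# SPD arrowhead matrix: nonzeros only on the first row, first column,
# and diagonal (strictly diagonally dominant, hence positive definite).
A = n * np.eye(n)
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n

L = np.linalg.cholesky(A)        # factor of A: lower triangle fills in completely

perm = np.arange(n)[::-1]        # P^T A P = reverse the ordering
B = A[np.ix_(perm, perm)]        # the arrow now points at the last row/column
L_perm = np.linalg.cholesky(B)   # factor of B: no fill-in at all

print(nnz_lower(L), nnz_lower(L_perm))  # full triangle vs. only 2n - 1 entries
```

Eliminating the dense first row/column first couples every remaining variable, so L is fully dense; eliminating it last leaves the other rows untouched, so L_perm keeps exactly the arrowhead's 2n - 1 nonzeros.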
