I have 60,000 documents which I processed in gensim and got a 60000×300 matrix. I exported this as a CSV file. When I import this into ELKI, it fails with an error.
This sounds strange, but I found a solution to this issue by opening the exported CSV file, doing Save As, and saving it again as a CSV file. While the size of the original file is 437 MB, the second file is 163 MB. I had used the numpy function np.savetxt for saving the doc2vec vectors, so it seems to be a Python issue rather than an ELKI issue.
Edit: The above solution is not useful. Instead, I exported the doc2vec output created with the gensim library while explicitly specifying the format of the values as %1.22e, i.e. the values are exported in exponential notation with 22 digits after the decimal point. Below is the entire code.
import numpy as np

textVect = model.docvecs.doctag_syn0  # document vectors, shape (60000, 300)
np.savetxt(r'D:\Backup\expo22.csv', textVect, delimiter=',', fmt='%1.22e')
The CSV file created this way runs without any issue in the ELKI environment.
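As a quick sanity check (a minimal sketch; the path and expected shape are taken from above), you can reload the exported file and confirm that every row has all 300 columns before handing it to ELKI:

import numpy as np

# loadtxt raises a ValueError if any row has a different number of columns.
data = np.loadtxt(r'D:\Backup\expo22.csv', delimiter=',')
print(data.shape)  # expected: (60000, 300)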
The error (which took me a bit to understand when I saw it the first time) says that your data has the "shape"

variable,mindim=266,maxdim=300

I.e. some lines have only 266 columns while others have 300. This may be a file format issue, for example due to NaN values, missing values, or similar bad characters.
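To find the offending lines, a minimal sketch (assuming a plain comma-separated file with no quoted fields; the path is the one from the question) is to count the fields on each line:

with open(r'D:\Backup\expo22.csv') as f:
    for lineno, line in enumerate(f, start=1):
        ncols = len(line.rstrip('\n').split(','))
        if ncols != 300:
            print(lineno, ncols)  # lines that do not have all 300 columns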
You get this error if you try to run an algorithm such as k-means that assumes the data comes from an R^d vector space (that is the NumberVector,field requirement), because the input data does not meet this requirement.
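One way to rule this out on the Python side (a sketch; textVect is the doc2vec matrix from the answer above) is to check the array for NaN or infinite entries before exporting, since those would be written as non-numeric tokens in the CSV:

import numpy as np

# Indices of rows containing NaN or inf; np.savetxt would write these as
# non-numeric tokens that a CSV parser cannot read as vector components.
bad_rows = np.where(~np.isfinite(textVect).all(axis=1))[0]
print(bad_rows)  # an empty array means the matrix is clean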