Problem with big data (?) during computation of sequence distances using TraMineR

Question


I am trying to run an optimal matching analysis using TraMineR, but I seem to be hitting a limit related to the size of my dataset. I have a large dataset of employment spells covering several European countries: more than 57,000 sequences, each 48 units long and built from 9 distinct states. To give an idea of the data, here is the head of the sequence object employdat.sts:

[1] EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-...  
[2] EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-...  
[3] ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-...  
[4] ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-...  
[5] EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-EF-...  
[6] ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-ST-...  

In the more compact SPS (state/duration) format, the same sequences read:

Sequence               
[1] "(EF,48)"              
[2] "(EF,48)"              
[3] "(ST,48)"              
[4] "(ST,36)-(MS,3)-(EF,9)"
[5] "(EF,48)"              
[6] "(ST,24)-(EF,24)"

After passing this sequence object to the seqdist() function, I get the following error message:

employdat.om <- seqdist(employdat.sts, method="OM", sm="CONSTANT", indel=4)    
[>] creating 9x9 substitution-cost matrix using 2 as constant value  
[>] 57160 sequences with 9 distinct events/states  
[>] 12626 distinct sequences  
[>] min/max sequence length: 48/48  
[>] computing distances using OM metric  
Error in .Call(TMR_cstringdistance, as.integer(dseq), as.integer(dim(dseq)),  : negative length vectors are not allowed

Is this error related to the huge number of distinct, long sequences? I am using an x64 machine with 4 GB of RAM, and a machine with 8 GB of RAM reproduced the error. Incidentally, running the same analysis separately for each country (using the same syntax with a country index) worked well and produced meaningful results.


Answer 1:


I had never seen this error before, but it is most likely an integer overflow caused by the sheer number of sequences: a full 57160 x 57160 distance matrix holds about 3.3 billion values, more than the 2^31 - 1 elements a single R vector can hold, so the requested length wraps around to a negative number. There are at least two things you can try:

  • Use the argument full.matrix=FALSE in seqdist() (see its help page). It computes only the lower triangle of the distance matrix and returns a "dist" object that can be passed directly to hclust().
  • Aggregate identical sequences: you have only 12626 distinct sequences among your 57160 cases. Compute the distances between the distinct sequences, cluster them using weights (the number of times each distinct sequence appears in the dataset), and then map the clustering back onto your original dataset. This can be done quite easily with the WeightedCluster library; the first appendix of the WeightedCluster manual provides a step-by-step guide, and the procedure is also described at http://mephisto.unige.ch/weightedcluster. A sketch of both approaches follows below.
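To illustrate, here is a minimal sketch of both options. It assumes that employdat is the data frame from which employdat.sts was built and that seq.cols indexes its 48 state columns; both names are placeholders to adapt to your data, and the WeightedCluster manual remains the authoritative reference:

## Option 1: compute only the lower triangle and get a "dist" object back
employdat.om <- seqdist(employdat.sts, method = "OM",
                        sm = "CONSTANT", indel = 4, full.matrix = FALSE)
clust <- hclust(employdat.om, method = "ward.D")

## Option 2: aggregate identical sequences with WeightedCluster
library(WeightedCluster)
agg <- wcAggregateCases(employdat[, seq.cols])   # seq.cols is a placeholder
unique.sts <- seqdef(employdat[agg$aggIndex, seq.cols],
                     weights = agg$aggWeights)
unique.om <- seqdist(unique.sts, method = "OM", sm = "CONSTANT", indel = 4)
wclust <- hclust(as.dist(unique.om), method = "ward.D",
                 members = agg$aggWeights)
## map the clustering of the 12626 distinct sequences back to all 57160 cases
employdat$cluster4 <- cutree(wclust, k = 4)[agg$disaggIndex]

With only 12626 distinct sequences, the distance matrix stays far below the vector-length limit that triggers the error above.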

Hope this helps.




Answer 2:


An easy solution that often works well is to analyze only a sample of your data. For instance,

employdat.sts <- employdat.sts[sample(nrow(employdat.sts), 5000), ]

would extract a random sample of 5000 sequences. Exploring a sample of that size should be largely sufficient to uncover the characteristics of your sequences, including their diversity.

To improve representativeness, you can even resort to some stratified sampling (e.g., by first or last state, or by some covariates available in your data set). Since you have the original data set at hand, you can fully control the random sampling design.
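For instance, here is a hedged sketch of stratifying by the last observed state; position 48 and the cap of 600 sequences per stratum are illustrative choices only:

## draw up to 600 sequences from each last-state stratum
last.state <- as.matrix(employdat.sts)[, 48]
strata <- split(seq_along(last.state), last.state)
idx <- unlist(lapply(strata,
                     function(i) i[sample.int(length(i), min(length(i), 600))]))
employdat.sample <- employdat.sts[idx, ]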



Source: https://stackoverflow.com/questions/15929936/problem-with-big-data-during-computation-of-sequence-distances-using-tramine
