R - Calculate difference (similarity measure) between similar datasets


Question


I have seen many questions that touch on this topic but haven't yet found an answer. If I have missed a question that does answer this one, please mark this as a duplicate and point me to that question.

Scenario: We have a benchmark dataset and several imputation methods. We systematically delete values from the benchmark and then apply two different imputation methods, so we end up with a benchmark, imputedData1 and imputedData2.
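To make the setup concrete, here is a minimal sketch of that workflow, using column-mean imputation purely as a stand-in for a real imputation method (the object names are for illustration only):

set.seed(1)
benchmark   <- mtcars                                    # complete benchmark dataset
withMissing <- benchmark
withMissing[sample(nrow(benchmark), 3), "mpg"] <- NA     # systematically delete some values
# naive stand-in for a real imputation method: fill with the column mean
imputedData1 <- withMissing
imputedData1$mpg[is.na(imputedData1$mpg)] <- mean(withMissing$mpg, na.rm = TRUE)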

Question: Is there a function that can produce a single number representing the difference between the benchmark and imputedData1, and/or between the benchmark and imputedData2? I.e. function(benchmark, imputedData1) = 3.3 and function(benchmark, imputedData2) = 2.8.

Note: The datasets are numerical and have the same dimensions. The method should work at the data level if possible (i.e. not creating a regression and comparing regressions, unless that approach can work with ANY numerical dataset).

Reproducible datasets; they differ only in the first row:

benchmark:

> head(mtcars,n=10)
                   mpg cyl  disp  hp drat    wt  qsec vs am gear carb
Mazda RX4         21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag     21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
Datsun 710        22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive    21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
Valiant           18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
Duster 360        14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
Merc 240D         24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
Merc 230          22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
Merc 280          19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4

imputedData1:

> head(imputedData1, n=10)
                   mpg cyl  disp  hp drat    wt  qsec vs am gear carb
Mazda RX4         22.0   4 108.0 100 3.90 2.200 16.46  0  1    4    4
Mazda RX4 Wag     21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
Datsun 710        22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive    21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
Valiant           18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
Duster 360        14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
Merc 240D         24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
Merc 230          22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
Merc 280          19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4

imputedData2:

> head(imputedData2, n=10)
                   mpg cyl  disp  hp drat    wt  qsec vs am gear carb
Mazda RX4         18.0   6 112.0 105 3.90 2.620 16.46  0  0    3    4
Mazda RX4 Wag     21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
Datsun 710        22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive    21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
Valiant           18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
Duster 360        14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
Merc 240D         24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
Merc 230          22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
Merc 280          19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4

I have tried RMSE (root mean squared error), but it didn't work very well, so I am trying to find other ways to tackle this problem.
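For reference, a plain RMSE over all cells can be computed roughly as in the sketch below (not the exact code used); a likely reason it works poorly here is that columns with larger units, such as disp and hp, dominate the result.

rmse <- function(bench, imputed) sqrt(mean(as.matrix(imputed - bench)^2))
rmse(benchmark, imputedData1)   # one number per imputed dataset
rmse(benchmark, imputedData2)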


Answer 1:


You could also check out the ftsa package. It provides about 20 error measures. In your case a scaled error such as MASE makes sense, since the units differ from column to column.

library(ftsa)
error(forecast=unlist(imputedData1),true=unlist(bench), 
          insampletrue = unlist(bench), method = "mase")
[1] 0.035136

error(forecast=unlist(imputedData2),true=unlist(bench), 
          insampletrue = unlist(bench), method = "mase")
[1] 0.031151

data

bench <- read.table(text='mpg cyl  disp  hp drat    wt  qsec vs am gear carb
21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4',header=TRUE,stringsAsFactors=FALSE)

imputedData1 <- read.table(text='mpg cyl  disp  hp drat    wt  qsec vs am gear carb
22.0   4 108.0 100 3.90 2.200 16.46  0  1    4    4
21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4',header=TRUE,stringsAsFactors=FALSE)

imputedData2 <- read.table(text='mpg cyl  disp  hp drat    wt  qsec vs am gear carb
18.0   6 112.0 105 3.90 2.620 16.46  0  0    3    4
21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4',header=TRUE,stringsAsFactors=FALSE)



Answer 2:


One possible way is to calculate a norm of the difference between the two matrices and prefer the imputation method that minimises this value. There are different matrix norms for different purposes; as a starting point, see https://en.wikipedia.org/wiki/Matrix_norm.

In the absence of any specifics about your data I can't really say which norm you should choose, but one option is to build your own index that averages across several matrix norms and to select the imputation method that minimises this average (a sketch follows below). Or you could just eyeball the norms; with any luck, one of the methods is a clear winner across most or all of them.
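A rough sketch of that idea in base R, using norm() on the difference matrices (the bench/imputedData1/imputedData2 objects are the ones from the first answer; averaging these particular norms is just one possible index, not a canonical one):

D1 <- as.matrix(imputedData1 - bench)
D2 <- as.matrix(imputedData2 - bench)
types <- c("O", "I", "F", "M")    # one-norm, infinity-norm, Frobenius, max-modulus
sapply(types, function(ty) norm(D1, type = ty))
sapply(types, function(ty) norm(D2, type = ty))
mean(sapply(types, function(ty) norm(D1, type = ty)))   # crude combined index for imputedData1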




Answer 3:


A simple implementation of what was discussed in the comments, which gives a result of the same order of magnitude as P Lapointe's answer, just FYI.

library(magrittr)
# centre each column by the benchmark's column means, then scale by its column sds
center_and_reduce_df <- function(df, bm){
  centered <- mapply('-', df, sapply(bm, mean)) %>% as.data.frame(stringsAsFactors = FALSE)
  reduced  <- mapply('/', centered, sapply(bm, sd)) %>% as.data.frame(stringsAsFactors = FALSE)
  reduced
}
# mean squared difference on the standardised scale (bm = bench, id1 = imputedData1)
mean(as.matrix(center_and_reduce_df(id1, bm) - center_and_reduce_df(bm, bm))^2) # 0.03083166
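Applied to both imputed datasets (a usage sketch, assuming bm/id1/id2 are simply aliases for the bench, imputedData1 and imputedData2 objects from the first answer):

bm  <- bench
id1 <- imputedData1
id2 <- imputedData2
mean(as.matrix(center_and_reduce_df(id1, bm) - center_and_reduce_df(bm, bm))^2)  # imputedData1 vs benchmark
mean(as.matrix(center_and_reduce_df(id2, bm) - center_and_reduce_df(bm, bm))^2)  # imputedData2 vs benchmark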



Answer 4:


Not quite sure what you mean by "difference", but if you just want to know how much each cell differs from the corresponding cell on average (given the matrices are of the same shape and have identical cols/rows), you could use the absolute difference, the Euclidean distance, or the Kolmogorov-Smirnov distance, depending again on what you mean by "difference".

abs(head(mtcars) - (head(mtcars)*0.5)) # differences by cell
mean( as.matrix(abs(head(mtcars) - (head(mtcars)*0.5)))) # mean abs difference
dist( t(data.frame(as.vector(as.matrix(head(mtcars))), (as.vector(as.matrix(head(mtcars)*0.5)))))) # Euclidean; remove t() to see element by element
ks.test( as.vector(as.matrix(head(mtcars))), (as.vector(as.matrix(head(mtcars)*0.5))))$statistic  # K-S


Source: https://stackoverflow.com/questions/44907523/r-calculate-difference-similarity-measure-between-similar-datasets
