Question
I have a data frame with on the order of 20 numeric columns, each containing a significant number of NA values. I would like to select a subset of these columns that gives me the most rows with zero NA values. An exhaustive search would take a lot of computing time, so is there a better way to get a good approximation?
Here is an example with a smaller data frame (completely arbitrary):
set.seed(2)
foo = as.data.frame(matrix(rnorm(200), nrow = 20))  # 20 rows x 10 columns of N(0,1) draws
foo[sapply(foo, function(x) x > abs(x[1]))] = NA    # NA out values above each column's |first value|
foo = foo[-1, ]                                     # drop the threshold row itself
round(foo, 3)
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
2 0.185 -1.200 -1.959 NA -1.696 0.261 0.139 0.410 -0.638 -1.262
3 NA 1.590 -0.842 -0.703 -0.533 -0.314 NA -0.807 -0.268 0.392
4 -1.130 1.955 NA 0.158 -1.372 -0.750 -0.431 0.086 0.360 -1.131
5 -0.080 0.005 NA 0.506 -2.208 -0.862 -1.044 NA -1.313 0.544
6 0.132 -2.452 NA -0.820 NA NA 0.538 -0.654 -0.884 NA
7 0.708 0.477 -0.305 -1.999 -0.653 0.940 -0.670 NA NA 0.025
8 -0.240 -0.597 -0.091 -0.479 -0.285 NA 0.639 0.550 -2.099 0.515
9 NA 0.792 -0.184 0.084 -0.387 -0.421 -1.724 -0.807 -1.239 -0.654
10 -0.139 0.290 -1.199 -0.895 0.387 -0.351 -1.742 -0.997 NA 0.504
11 0.418 0.739 -0.838 -0.921 NA -1.027 0.690 NA NA -1.272
12 NA 0.319 NA 0.330 NA -0.251 0.331 -0.169 NA -0.077
13 -0.393 1.076 -0.562 -0.142 -1.184 0.472 0.871 NA 0.057 -1.345
14 -1.040 -0.284 NA 0.435 -1.358 NA -2.016 -0.844 0.324 -0.266
15 NA -0.777 -1.048 -0.054 -1.513 0.564 1.213 NA -0.905 NA
16 -2.311 -0.596 -1.966 -0.907 -1.253 0.456 1.200 -1.343 -0.652 0.701
17 0.879 -1.726 -0.323 1.304 NA NA 1.032 NA -0.262 -0.443
18 0.036 -0.903 NA 0.772 0.008 NA 0.786 0.464 -0.935 -0.789
19 NA -0.559 NA 1.053 -0.843 0.107 NA 0.268 NA -0.857
20 0.432 -0.247 NA -1.410 -0.601 -0.783 -1.454 NA -1.624 -0.746
dim(na.omit(foo))
[1] 1 10
Here is how I've formulated an exhaustive search:
best.list = list()
for (i in 5:ncol(foo)) {
  # get the best subset of each size
  collist = combn(ncol(foo), i)                                    # all column subsets of size i
  numobs = apply(collist, 2, function(x) nrow(na.omit(foo[, x])))  # complete rows for each subset
  cat("for subset size", i, "most complete obs is", max(numobs), "\n")
  best = which(numobs == max(numobs))[1]
  best.list = c(best.list, list(collist[, best]))
}
For example, best.list[[1]] tells me that if I keep 5 columns I can have 12 complete observations (rows with zero NAs), and that columns 1, 2, 4, 7, and 10 are the ones I should choose.
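A quick sanity check of those numbers (a minimal sketch, assuming the loop above has been run so that best.list[[1]] holds the size-5 subset):
best.list[[1]]
# [1]  1  2  4  7 10
nrow(na.omit(foo[, best.list[[1]]]))
# [1] 12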
While this works for very small data frames, it quickly becomes prohibitive with larger ones. Is there a way in R to efficiently estimate the best subset of a given size? The only thing I've been able to find is the subselect package, though I can't figure out how to implement its methods for the problem at hand.
Answer 1:
Not sure if this is the complete solution, but if you want fast results, data.table and a shadow matrix are the most probable ingredients.
library(data.table)
df = data.table(foo) # your foo dataframe, converted to data.table
y = sort(unlist(df[, lapply(.SD, function(x) sum(is.na(x)))]))  # nr of NA per column, increasing
setcolorder(df, names(y))  # reorder columns: fewest NA first
df[, idx := rowSums(is.na(df))] # count nr of NA in rows
df = df[order(idx),] # sort by nr of NA in rows
df[, idx := NULL] # idx not needed anymore
# now your data.table is sorted: columns with least NA to the left,
# rows with least NA on top
# shadow matrix: 1 = value present, 0 = NA
x = data.table(abs(!is.na(df)))
y = as.data.table(t(x))        # transpose so each original row becomes a column
y = y[, lapply(.SD, cumprod)]  # cumulative product across each row, left to right
y = as.data.table(t(y))        # transpose back
y[, lapply(.SD, sum)]
# nr of complete cases when keeping the first 1, 2, ..., 10 (reordered) columns:
# V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
# 1: 19 18 16 14 11 10 7 5 2 1
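To turn that cumulative count into an actual selection, one possible follow-up (a sketch against the reordered df above; k and keep are just illustrative names) is to take the first k column names as the size-k subset:
k = 5
keep = names(df)[1:k]       # the k columns with the fewest NAs
keep
nrow(na.omit(foo[, keep]))  # 11, matching the 5th entry of the table above
Because columns are only ever added in this fixed order, the counts are a greedy approximation: for 5 columns it yields 11 complete rows, whereas the exhaustive search in the question found 12.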
Answer 2:
Old post, but there's a built-in function that does this. I'll bet it's quite efficient:
df_noNAs <- df[complete.cases(df[,1:20]),]
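Since complete.cases() returns a logical vector, it can also simply count the complete rows for any candidate column subset; a small sketch using the question's foo and the size-5 subset quoted there (cols is just an illustrative name):
cols = c(1, 2, 4, 7, 10)
sum(complete.cases(foo[, cols]))
# [1] 12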
Source: https://stackoverflow.com/questions/24016612/subset-of-data-frame-columns-to-maximize-complete-observations