I am looking for a method to merge two larger data frames (size > 1 million rows / 300 KB RData file) that is efficient both in computing resources and in learning/implementation effort.
Here are some timings for the data.table vs. data.frame approaches. Using data.table is much faster. Regarding memory, I can report informally that the two methods are very similar (within 20%) in RAM use.
library(data.table)
set.seed(1234)
n = 1e6
data_frame_1 = data.frame(id=paste("id_", 1:n, sep=""),
                          factor1=sample(c("A", "B", "C"), n, replace=TRUE))
data_frame_2 = data.frame(id=sample(data_frame_1$id),
                          value1=rnorm(n))
data_table_1 = data.table(data_frame_1, key="id")
data_table_2 = data.table(data_frame_2, key="id")
system.time(df.merged <- merge(data_frame_1, data_frame_2))
# user system elapsed
# 17.983 0.189 18.063
system.time(dt.merged <- merge(data_table_1, data_table_2))
# user system elapsed
# 0.729 0.099 0.821
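For reference, the same merge can also be written as a keyed join using data.table's X[Y] bracket syntax, which is the idiom the next example uses. This is just a sketch on the tables defined above; since every id here appears in both tables, the join carries the same columns (id, factor1, value1) as the merge() call:
## Equivalent keyed join: look up the rows of data_table_2 in data_table_1
## on the shared key "id"
system.time(dt.joined <- data_table_1[data_table_2])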
Here's the obligatory data.table example:
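The snippet below refers to a data.frame called test from the original question. Purely as a stand-in (the column names, sizes, and values here are my own assumptions), something along these lines makes it self-contained:
## Hypothetical stand-in for the questioner's 'test' data.frame:
## eleven columns read in as factors -- numbers in columns 1:5 and 7:10,
## an id in column 6, and TRUE/FALSE strings in column 11
set.seed(42)
num_as_factor <- function(n=10) factor(round(runif(n), 3))
test <- data.frame(n1=num_as_factor(), n2=num_as_factor(), n3=num_as_factor(),
                   n4=num_as_factor(), n5=num_as_factor(),
                   id=factor(paste("id_", 1:10, sep="")),
                   n6=num_as_factor(), n7=num_as_factor(), n8=num_as_factor(),
                   n9=num_as_factor(),
                   flag=factor(sample(c("TRUE", "FALSE"), 10, replace=TRUE)))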
library(data.table)
## Fix up your example data.frame so that the columns aren't all factors
## (not necessary, but shows that data.table can now use numeric columns as keys)
cols <- c(1:5, 7:10)
test[cols] <- lapply(cols, FUN=function(X) as.numeric(as.character(test[[X]])))
test[11] <- as.logical(test[[11]])
## Create two data.tables with which to demonstrate a data.table merge
dt <- data.table(test, key=names(test))
dt2 <- copy(dt)
## Add to each one a unique non-keyed column
dt$X <- seq_len(nrow(dt))
dt2$Y <- rev(seq_len(nrow(dt)))
## Merge them based on the keyed columns (in both cases, all but the last) to ...
## (1) create a new data.table
dt3 <- dt[dt2]
## (2) or, possibly minimizing memory usage, just add column Y from dt2 to dt by reference
dt[dt2, Y := i.Y]
Do you have to do the merge in R? If not, merge the underlying data files with a simple file concatenation and then load the result into R. (I realize this may not apply to your situation, but if it does, it could save you a lot of headache.)
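A minimal sketch of that idea, assuming the two data sets live in headerless CSV files on a system with a Unix-style cat (the file names are placeholders):
## Combine the raw files outside R, then read the result once
system("cat part1.csv part2.csv > combined.csv")  # hypothetical file names
combined <- read.csv("combined.csv", header=FALSE)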