I have successfully connected a local R 3.1.2 (RStudio on Win7 64-bit) to a remote Hive server using RJDBC:
library(RJDBC)
.jinit()
dir <- "E:/xxx/jars"
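For completeness, here is a sketch of what the full connection can look like. It assumes HiveServer2, that all the Hive JDBC driver jars sit under `dir`, and that `host`, the port `10000`, `user`, and `password` are placeholders you replace with your own values:

```r
# Sketch of a full RJDBC connection to HiveServer2.
# Assumptions: the Hive JDBC jars are under `dir`; host/port/user/password
# below are placeholders, not values from the original post.
library(RJDBC)
.jinit()
dir <- "E:/xxx/jars"
jars <- list.files(dir, pattern = "\\.jar$", full.names = TRUE)

# org.apache.hive.jdbc.HiveDriver is the HiveServer2 driver class;
# identifier.quote = "`" matches HiveQL's backtick quoting.
drv <- JDBC("org.apache.hive.jdbc.HiveDriver",
            classPath = jars,
            identifier.quote = "`")
conn <- dbConnect(drv, "jdbc:hive2://host:10000/default", "user", "password")
```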
Years later I still don't have a full solution, but here is another partial one. It only works for writing small data frames, and how small varies with 32- vs. 64-bit R and with Mac vs. Windows.
First, convert the data frame into a single character vector of SQL value tuples:
data2hodoop <- paste0( apply(dataframe, 1, function(x) paste0("('", paste0(x, collapse = "', '"), "')")), collapse = ", ")
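To see what that expression actually builds, here is a toy illustration with a made-up two-row data frame (the frame `df` and its columns are mine, not from the original post):

```r
# Toy example of the tuple string that data2hodoop ends up holding.
# Note: apply() coerces the data frame to a character matrix, so the
# integer column arrives as the strings "1" and "2".
df <- data.frame(id = 1:2, name = c("ann", "bob"))

data2hodoop <- paste0(
  apply(df, 1, function(x)
    paste0("('", paste0(x, collapse = "', '"), "')")),
  collapse = ", ")

# data2hodoop is now: ('1', 'ann'), ('2', 'bob')
```

Everything is quoted as a string literal, which is fine for Hive's implicit conversions on simple types but worth knowing about if your table has non-string columns.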
Then use INSERT to write the rows into Hadoop:
dbSendQuery(conn, paste0("INSERT INTO ", tbname, " VALUES ", data2hodoop, ";"))
On my PC (Win7 64-bit, 16 GB RAM), if the vector 'data2hodoop' grows beyond about 50 MB, R fails with the error "C stack usage xxx is too close to the limit". On my Mac the limit is even lower, and I have not found a way to raise it.
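Since the error comes from building one enormous SQL string, a workaround is to send the rows in several smaller INSERT statements. Here is a sketch of that idea; `insert_in_batches` and `batch_indices` are helper names I made up, and the code assumes RJDBC is loaded and `conn`/`tbname` are as in the snippets above:

```r
# Split row indices 1..n into groups of at most batch_size.
batch_indices <- function(n, batch_size) {
  split(seq_len(n), ceiling(seq_len(n) / batch_size))
}

# Send one INSERT per batch so no single SQL string gets large enough
# to hit the C stack limit. Assumes library(RJDBC) is already loaded
# and conn is an open connection.
insert_in_batches <- function(conn, tbname, dataframe, batch_size = 1000) {
  for (idx in batch_indices(nrow(dataframe), batch_size)) {
    chunk <- dataframe[idx, , drop = FALSE]
    values <- paste0(
      apply(chunk, 1, function(x)
        paste0("('", paste0(x, collapse = "', '"), "')")),
      collapse = ", ")
    dbSendQuery(conn, paste0("INSERT INTO ", tbname, " VALUES ", values))
  }
}
```

Tune `batch_size` to stay well under whatever string size triggers the stack error on your platform; each batch still pays one round trip to Hive, so very small batches will be slow.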