Question
Hi, first of all, I have already searched Stack Overflow and Google and found posts such as this one: Quickly reading very large tables as dataframes. While those are helpful and well answered, I'm looking for more information.
I am looking for the best way to read/import "big" data that can go up to 50-60GB.
I am currently using the fread() function from data.table, and it is the fastest function I know of at the moment. The PC/server I work on has a good CPU (workstation) and 32 GB of RAM, but data over 10 GB, sometimes with close to a billion observations, still takes a long time to read.
We already have SQL databases, but for some reasons we have to work with big data in R.
Is there a way to speed up R, or an even better option than fread(), when it comes to huge files like this?
Thank you.
Edit: fread("data.txt", verbose = TRUE)
omp_get_max_threads() = 2
omp_get_thread_limit() = 2147483647
DTthreads = 0
RestoreAfterFork = true
Input contains no \n. Taking this to be a filename to open
[01] Check arguments
Using 2 threads (omp_get_max_threads()=2, nth=2)
NAstrings = [<<NA>>]
None of the NAstrings look like numbers.
show progress = 1
0/1 column will be read as integer
[02] Opening the file
Opening file C://somefolder/data.txt
File opened, size = 1.083GB (1163081280 bytes).
Memory mapped ok
[03] Detect and skip BOM
[04] Arrange mmap to be \0 terminated
\n has been found in the input and different lines can end with different line endings (e.g. mixed \n and \r\n in one file). This is common and ideal.
[05] Skipping initial rows if needed
Positioned on line 1 starting: <<ID,Dat,No,MX,NOM_TX>>
[06] Detect separator, quoting rule, and ncolumns
Detecting sep automatically ...
sep=',' with 100 lines of 5 fields using quote rule 0
Detected 5 columns on line 1. This line is either column names or first data row. Line starts as: <<ID,Dat,No,MX,NOM_TX>>
Quote rule picked = 0
fill=false and the most number of columns found is 5
[07] Detect column types, good nrow estimate and whether first row is column names
Number of sampling jump points = 100 because (1163081278 bytes from row 1 to eof) / (2 * 5778 jump0size) == 100647
Type codes (jump 000) : 5A5AA Quote rule 0
Type codes (jump 100) : 5A5AA Quote rule 0
'header' determined to be true due to column 1 containing a string on row 1 and a lower type (int32) in the rest of the 10054 sample rows
=====
Sampled 10054 rows (handled \n inside quoted fields) at 101 jump points
Bytes from first data row on line 2 to the end of last row: 1163081249
Line length: mean=56.72 sd=20.65 min=25 max=128
Estimated number of rows: 1163081249 / 56.72 = 20506811
Initial alloc = 41013622 rows (20506811 + 100%) using bytes/max(mean-2*sd,min) clamped between [1.1*estn, 2.0*estn]
=====
[08] Assign column names
[09] Apply user overrides on column types
After 0 type and 0 drop user overrides : 5A5AA
[10] Allocate memory for the datatable
Allocating 5 column slots (5 - 0 dropped) with 41013622 rows
[11] Read the data
jumps=[0..1110), chunk_size=1047820, total_size=1163081249
|--------------------------------------------------|
|==================================================|
Read 20935277 rows x 5 columns from 1.083GB (1163081280 bytes) file in 00:31.484 wall clock time
[12] Finalizing the datatable
Type counts:
2 : int32 '5'
3 : string 'A'
=============================
0.007s ( 0%) Memory map 1.083GB file
0.739s ( 2%) sep=',' ncol=5 and header detection
0.001s ( 0%) Column type detection using 10054 sample rows
1.809s ( 6%) Allocation of 41013622 rows x 5 cols (1.222GB) of which 20935277 ( 51%) rows used
28.928s ( 92%) Reading 1110 chunks (0 swept) of 0.999MB (each chunk 18860 rows) using 2 threads
+ 26.253s ( 83%) Parse to row-major thread buffers (grown 0 times)
+ 2.639s ( 8%) Transpose
+ 0.035s ( 0%) Waiting
0.000s ( 0%) Rereading 0 columns due to out-of-sample type exceptions
31.484s Total
Answer 1:
You can use select = columns to load only the relevant columns, without saturating your memory. For example:
dt <- fread("./file.csv", select = c("column1", "column2", "column3"))
I have used read.delim() to read a file that fread() could not load completely, so you could convert your data to .txt and use read.delim().
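A minimal sketch of that fallback, assuming a tab-delimited .txt export (the file name and separator are placeholders, not values from the question):
# base R fallback: slower than fread(), but sometimes more tolerant of awkward files
# "data.txt" and sep = "\t" are assumptions -- adjust to your actual export
df <- read.delim("data.txt", sep = "\t", header = TRUE, stringsAsFactors = FALSE)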
However, why don't you open a connection to the SQL server you're pulling your data from? You can open connections to SQL servers with library(odbc) and write your query like you normally would, which lets you keep memory usage under control. Check out this short introduction to odbc.
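A minimal sketch of that approach with DBI and odbc; the driver, server, database, and query below are placeholders, not values from the question:
library(DBI)
# connection details are hypothetical -- use your own DSN/driver settings
con <- dbConnect(odbc::odbc(),
                 Driver   = "SQL Server",
                 Server   = "my-server",
                 Database = "my-db")
# push filtering/aggregation to the database and pull back only what you need
dt <- dbGetQuery(con, "SELECT column1, column2 FROM big_table WHERE column3 > 0")
dbDisconnect(con)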
Answer 2:
Assuming you want your file fully read into R, using a database or choosing a subset of columns/rows won't help much.
What can help in such a case is to:
- ensure that you are using a recent version of data.table
- ensure that an optimal number of threads is set: use setDTthreads(0L) to use all available threads; by default data.table uses 50% of the available threads (see the sketch after this list)
- check the output of fread(..., verbose=TRUE), and possibly add it to your question here
- put your file on a fast disk, or a RAM disk, and read from there
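A short sketch of the threading check above (the file name is a placeholder):
library(data.table)
setDTthreads(0L)              # use all available threads (default is 50%)
getDTthreads(verbose = TRUE)  # confirm how many threads data.table will use
# verbose = TRUE prints a timing breakdown like the one in the question's edit
dt <- fread("data.txt", verbose = TRUE)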
If your data has a lot of distinct character values, you might not be able to get great speed, because populating R's internal global character cache is single-threaded; parsing can go fast, but creating the character vector(s) will be the bottleneck.
Source: https://stackoverflow.com/questions/56396770/is-there-a-faster-way-than-fread-to-read-big-data