This is more of a question on understanding than programming. I am quite new to Pandas and SQL. I am using pandas to read data from SQL with some specific chunksize. When I run the query with a chunksize, is the whole result of the SQL statement kept in memory, or does the dataframe only hold one chunk at a time?
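For example, a call like the following (the connection, table, and column names are just placeholders):

import pandas as pd
import sqlite3

con = sqlite3.connect("example.db")  # placeholder; any DB-API/SQLAlchemy connection works
# with chunksize, read_sql_query returns an iterator of DataFrames, 10000 rows each
result = pd.read_sql_query("SELECT name, birthdate FROM table1", con, chunksize=10000)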
The chunksize argument is basically there to stop your machine from running out of memory when you have a massive query. (For more details you can look at the pandas/io/sql.py module; it is well documented.) Let's consider two options for writing the chunks out and what happens in each case:
Out to CSV
for i, chunk in enumerate(pd.read_sql_query(sql, con, chunksize=10000)):
    # write the header only for the first chunk, then append rows
    chunk.to_csv(tablename + ".csv", mode='a', sep=',', encoding='utf-8',
                 header=(i == 0))
Or out to Parquet

count = 0
folder_path = 'path/to/output'
for chunk in pd.read_sql_query(sql, con, chunksize=10000):
    # each chunk becomes its own numbered part file
    file_path = folder_path + '/part.%s.parquet' % count
    chunk.to_parquet(file_path, engine='pyarrow')
    count += 1
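If you later need everything in a single dataframe, the part files can be read back and concatenated. A minimal sketch, assuming the folder layout written above:

import glob
import pandas as pd

# collect the part files in numeric order (plain sorting would put part.10 before part.2)
parts = sorted(glob.glob('path/to/output/part.*.parquet'),
               key=lambda p: int(p.split('.')[-2]))
df = pd.concat((pd.read_parquet(p, engine='pyarrow') for p in parts), ignore_index=True)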
When you do not provide a chunksize, the full result of the query is put in a dataframe at once.

When you do provide a chunksize, the return value of read_sql_query is an iterator of multiple dataframes. This means that you can iterate through it like:
for df in result:
    print(df)
and in each step df is a dataframe (not an array!) that holds the data of a part of the query. See the docs on this: http://pandas.pydata.org/pandas-docs/stable/io.html#querying
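Because only one chunk is materialized at a time, you can process each dataframe and let it be garbage collected before the next one is fetched. A small sketch (amount is a made-up numeric column):

total_rows = 0
running_sum = 0.0
for chunk in pd.read_sql_query(sql, con, chunksize=10000):
    total_rows += len(chunk)
    running_sum += chunk['amount'].sum()  # 'amount' is a hypothetical column
print(total_rows, running_sum)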
To answer your question regarding memory, you have to know that there are two steps in retrieving the data from the database: execute and fetch.
First the query is executed (result = con.execute(...)) and then the data are fetched from that result set as a list of tuples (data = result.fetchmany(size) in DB-API terms). When fetching you can specify how many rows to retrieve at a time, and that is what pandas does when you provide a chunksize.
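At the DB-API level the pattern looks roughly like this (a sketch of the two steps, not pandas' actual code):

cursor = con.cursor()                                 # raw DB-API cursor
cursor.execute("SELECT name, birthdate FROM table1")  # step 1: execute
while True:
    rows = cursor.fetchmany(10000)                    # step 2: fetch chunksize rows
    if not rows:
        break
    # pandas builds a DataFrame from each batch of tuples at this point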
But many database drivers already pull the full result set into memory during the execute step, not only when fetching the data. So in that regard chunksize does not necessarily help with memory usage; it mainly means the copying of the data into a DataFrame happens in steps while iterating.
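If the driver supports it, you can ask for a server-side cursor so that even the execute step streams instead of buffering. A sketch with SQLAlchemy and PostgreSQL (the connection string is a placeholder, and whether stream_results works depends on the driver):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host/dbname")  # placeholder DSN
with engine.connect() as con:
    # request a server-side cursor so rows are not preloaded on execute
    streaming = con.execution_options(stream_results=True)
    for chunk in pd.read_sql_query(sql, streaming, chunksize=10000):
        print(len(chunk))  # each chunk is fetched on demand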