I have a dataframe in Python. Can I write this data to Redshift as a new table? I have successfully created a db connection to Redshift and am able to execute simple sql queries.
I tried using pandas df.to_sql(), but it was tremendously slow. It was taking me well over 10 minutes to insert 50 rows. See this open issue (as of writing).
I tried using odo from the blaze ecosystem (as per the recommendations in the issue discussion), but faced a ProgrammingError which I didn't bother to investigate.
Finally what worked:
import psycopg2

# Fill in the blanks for the conn object
conn = psycopg2.connect(user='user',
                        password='password',
                        host='host',
                        dbname='db',
                        port=666)
cursor = conn.cursor()

# np_data is the data to insert, as a numpy array of rows
# Adjust the ... placeholders according to the number of columns
args_str = b','.join(cursor.mogrify("(%s,%s,...)", x) for x in tuple(map(tuple, np_data)))
cursor.execute("insert into table (a,b,...) VALUES " + args_str.decode("utf-8"))
cursor.close()
conn.commit()
conn.close()
Yep, plain old psycopg2. This is for a numpy array, but converting from a df to an ndarray shouldn't be too difficult. This gave me around 3k rows/minute.
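For completeness, a minimal sketch of that conversion, assuming your DataFrame is called df and its column order matches the target table:

import pandas as pd

# placeholder DataFrame standing in for your own df
df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})

# plain ndarray of rows; use this as np_data in the snippet above
# (df.values on older pandas versions)
np_data = df.to_numpy()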
However, the fastest solution, as per recommendations from other teammates, is to use the COPY command after dumping the dataframe as a TSV/CSV into an S3 bucket and then copying it over. You should investigate this if you're copying really huge datasets. (I will update here if and when I try it out.)
The pandas_redshift package handles exactly this write-to-S3-then-COPY workflow:
import pandas_redshift as pr

pr.connect_to_redshift(dbname = <dbname>,
                       host = <host>,
                       port = <port>,
                       user = <user>,
                       password = <password>)

pr.connect_to_s3(aws_access_key_id = <aws_access_key_id>,
                 aws_secret_access_key = <aws_secret_access_key>,
                 bucket = <bucket>,
                 subdirectory = <subdirectory>)

# Write the DataFrame to S3 and then to redshift
pr.pandas_to_redshift(data_frame = data_frame,
                      redshift_table_name = 'gawronski.nba_shots_log')
Details: https://github.com/agawronski/pandas_redshift
You can use to_sql to push data to a Redshift database. I've been able to do this using a connection to my database through a SQLAlchemy engine. Just be sure to set index = False in your to_sql call. The table will be created if it doesn't exist, and you can specify whether you want the call to replace the table, append to the table, or fail if the table already exists.
from sqlalchemy import create_engine
import pandas as pd
conn = create_engine('postgresql://username:password@yoururl.com:5439/yourdatabase')
df = pd.DataFrame([{'A': 'foo', 'B': 'green', 'C': 11},{'A':'bar', 'B':'blue', 'C': 20}])
df.to_sql('your_table', conn, index=False, if_exists='replace')
Note that you may need to pip install psycopg2 in order to connect to Redshift through SQLAlchemy.
to_sql Documentation
For the purpose of this conversation, Postgres = Redshift. You have two options:
Option 1:
From Pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql
The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for MySQL.
Writing DataFrames
Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().
id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True
In [437]: data.to_sql('data', engine)
With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This can be avoided by setting the chunksize parameter when calling to_sql. For example, the following writes data to the database in batches of 1000 rows at a time:
In [438]: data.to_sql('data_chunked', engine, chunksize=1000)
Option 2
Or you can simply roll your own. If you have a dataframe called data, simply loop over it using iterrows:
for index, row in data.iterrows():
    # add this row to your table here
then add each row to your database. I would use copy instead of insert for each row, as it will be much faster (a short sketch follows the link below).
http://initd.org/psycopg/docs/usage.html#using-copy-to-and-copy-from
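A minimal sketch of that copy pattern, using psycopg2's copy_from to stream the whole DataFrame through a single COPY. The connection details and my_table are placeholders; note that Redshift's own COPY only reads from sources such as S3, DynamoDB, or remote hosts, so this STDIN-based variant is really for Postgres-compatible endpoints, and the S3-based approaches below are the usual Redshift route:

import io
import pandas as pd
import psycopg2

# placeholder connection and DataFrame; substitute your own
conn = psycopg2.connect(dbname='db', user='user', password='password',
                        host='host', port=5439)
data = pd.DataFrame([{'A': 'foo', 'B': 'green', 'C': 11},
                     {'A': 'bar', 'B': 'blue', 'C': 20}])

# serialise the DataFrame to an in-memory CSV and load it with one COPY,
# which is much faster than one INSERT per row
buf = io.StringIO()
data.to_csv(buf, index=False, header=False)
buf.seek(0)
with conn.cursor() as cursor:
    cursor.copy_from(buf, 'my_table', sep=',', columns=('a', 'b', 'c'))
conn.commit()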
I used to rely on the pandas to_sql() function, but it is just too slow. I have recently switched to doing the following:
import pandas as pd
import s3fs  # great module which allows you to read/write to s3 easily
import sqlalchemy

df = pd.DataFrame([{'A': 'foo', 'B': 'green', 'C': 11}, {'A': 'bar', 'B': 'blue', 'C': 20}])

# dump the dataframe to S3 as a headerless csv
s3 = s3fs.S3FileSystem(anon=False)
filename = 'my_s3_bucket_name/file.csv'
with s3.open(filename, 'w') as f:
    df.to_csv(f, index=False, header=False)

con = sqlalchemy.create_engine('postgresql://username:password@yoururl.com:5439/yourdatabase')

# make sure the schema for mytable exists
# if you need to clear out mytable but keep its definition, leave the DELETE mytable in
# if you only want to append, removing the DELETE mytable should work
con.execute("""
DELETE mytable;
COPY mytable
from 's3://%s'
iam_role 'arn:aws:iam::xxxx:role/role_name'
csv;""" % filename)
The IAM role has to allow Redshift access to S3; see the Redshift COPY documentation for more details.
I found that for a 300KB file (a 12000x2 dataframe) this takes 4 seconds, compared to the 8 minutes I was getting with the pandas to_sql() function.
Assuming you have access to S3, this approach should work:
Step 1: Write the DataFrame as a csv to S3 (I use the AWS SDK boto3 for this)
Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a create table script and push it to Redshift to create an empty table
Step 3: Send a copy command from your Python environment to Redshift to copy data from S3 into the empty table created in step 2
Works like a charm every time.
Step 4: Before your cloud storage folks start yelling at you, delete the csv from S3
If you see yourself doing this several times, wrapping all four steps in a function keeps it tidy.
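A rough sketch of such a function, assuming boto3 and psycopg2. The bucket, key, table, and IAM role arguments are placeholders, and the varchar-only column types are a simplification you would replace with types derived from df.dtypes:

import io
import boto3
import pandas as pd
import psycopg2

def df_to_redshift(df, bucket, key, conn, table, iam_role):
    # Step 1: write the DataFrame as a headerless csv to S3
    s3 = boto3.client('s3')
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

    with conn.cursor() as cursor:
        # Step 2: create an empty table matching the DataFrame's columns
        # (column types are simplified here; derive real ones from df.dtypes)
        cols = ', '.join('{} varchar(256)'.format(c) for c in df.columns)
        cursor.execute('create table if not exists {} ({})'.format(table, cols))

        # Step 3: COPY from S3 into the empty table
        cursor.execute("copy {} from 's3://{}/{}' iam_role '{}' csv;".format(
            table, bucket, key, iam_role))
    conn.commit()

    # Step 4: delete the csv from S3
    s3.delete_object(Bucket=bucket, Key=key)

Deriving real column types (and sort/dist keys) from the DataFrame is worth doing before using something like this on anything serious.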