Redshift COPY operation doesn't work in SQLAlchemy

隐瞒了意图╮ 2021-01-17 19:18

I'm trying to do a Redshift COPY in SQLAlchemy.

The following SQL correctly copies objects from my S3 bucket into my Redshift table when I execute it in psql:
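Something along these lines (a sketch based on the command quoted in the answers below; bucket, key prefix, and credentials are placeholders):

    COPY posts
    FROM 's3://mybucket/the/key/prefix'
    WITH CREDENTIALS 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretaccesskey'
    JSON AS 'auto';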

3 Answers
  • 2021-01-17 20:00

    I have had success using the core expression language and Connection.execute() (as opposed to the ORM and sessions) to copy delimited files to Redshift with the code below. Perhaps you could adapt it for JSON.

    from sqlalchemy import text

    def copy_s3_to_redshift(conn, s3path, table, aws_access_key, aws_secret_key, delim='\t', uncompress='auto', ignoreheader=None):
        """Copy a TSV file from S3 into redshift.
    
        Note the CSV option is not used, so quotes and escapes are ignored.  Empty fields are loaded as null.
        Does not commit a transaction.
        :param Connection conn: SQLAlchemy Connection
        :param str uncompress: None, 'gzip', 'lzop', or 'auto' to autodetect from `s3path` extension.
        :param int ignoreheader: Ignore this many initial rows.
        :return: Whatever a copy command returns.
        """
        if uncompress == 'auto':
            uncompress = 'gzip' if s3path.endswith('.gz') else 'lzop' if s3path.endswith('.lzo') else None
    
        copy = text("""
            copy "{table}"
            from :s3path
            credentials 'aws_access_key_id={aws_access_key};aws_secret_access_key={aws_secret_key}'
            delimiter :delim
            emptyasnull
            ignoreheader :ignoreheader
            compupdate on
            comprows 1000000
            {uncompress};
            """.format(uncompress=uncompress or '', table=text(table), aws_access_key=aws_access_key, aws_secret_key=aws_secret_key))    # copy command doesn't like table name or keys single-quoted
        return conn.execute(copy, s3path=s3path, delim=delim, ignoreheader=ignoreheader or 0)
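
    A minimal usage sketch, assuming SQLAlchemy 1.x and the sqlalchemy-redshift dialect; the connection URL, bucket, table name, and keys below are placeholders, not values from the original post. Since the helper does not commit, the caller manages the transaction:

    from sqlalchemy import create_engine

    engine = create_engine('redshift+psycopg2://user:password@example-cluster:5439/dev')  # placeholder URL

    with engine.connect() as conn:
        trans = conn.begin()                           # the helper deliberately does not commit
        copy_s3_to_redshift(
            conn,
            s3path='s3://mybucket/data/posts.tsv.gz',  # .gz suffix -> gzip is autodetected
            table='posts',
            aws_access_key='myaccesskey',
            aws_secret_key='mysecretaccesskey',
            ignoreheader=1,                            # skip one header row
        )
        trans.commit()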
    
  • 2021-01-17 20:05

    I basically had the same problem, though in my case the code looked more like:

    engine = create_engine('...')
    engine.execute(text("COPY posts FROM 's3://mybucket/the/key/prefix' WITH CREDENTIALS 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretaccesskey' JSON AS 'auto';"))
    

    Stepping through with pdb, the problem was clearly the lack of a .commit() being invoked. I don't know why session.commit() is not working in your case (maybe the session "lost track" of the commands it sent?), so this might not actually fix your problem.

    Anyhow, as explained in the SQLAlchemy docs:

    Given this requirement, SQLAlchemy implements its own “autocommit” feature which works completely consistently across all backends. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE [...] If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend.

    So there are two solutions, either:

    • text("COPY posts FROM 's3://mybucket/the/key/prefix' WITH CREDENTIALS 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretaccesskey' JSON AS 'auto';").execution_options(autocommit=True) (a full sketch follows this list), or
    • get a fixed version of the redshift dialect... I just opened a PR about it
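
    A minimal sketch of the first option, assuming SQLAlchemy 1.x and the sqlalchemy-redshift dialect; the connection URL and credentials are placeholders:

    from sqlalchemy import create_engine, text

    engine = create_engine('redshift+psycopg2://user:password@example-cluster:5439/dev')  # placeholder URL

    copy_stmt = text(
        "COPY posts FROM 's3://mybucket/the/key/prefix' "
        "WITH CREDENTIALS 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretaccesskey' "
        "JSON AS 'auto';"
    ).execution_options(autocommit=True)  # SQLAlchemy commits right after this statement runs

    engine.execute(copy_stmt)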
  • 2021-01-17 20:10

    Adding a commit to the end of the COPY worked for me:

    <your copy sql>;commit;
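
    For completeness, a sketch of what that looks like from SQLAlchemy; the engine URL is a placeholder and the COPY text mirrors the one quoted in the earlier answer:

    from sqlalchemy import create_engine, text

    engine = create_engine('redshift+psycopg2://user:password@example-cluster:5439/dev')  # placeholder URL
    engine.execute(text(
        "COPY posts FROM 's3://mybucket/the/key/prefix' "
        "WITH CREDENTIALS 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretaccesskey' "
        "JSON AS 'auto'; commit;"  # trailing commit makes Redshift commit the load itself
    ))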
    