How to unload a table on RedShift to a single CSV file?

[Backend] Asked by 醉话见心 on 2021-02-02 11:36

I want to migrate a table from Amazon Redshift to MySQL, but using "unload" generates multiple data files that are hard to import into MySQL directly.

Is there any way to unload the table to a single CSV file?

5 Answers
  •  清酒与你
    2021-02-02 12:17

    This is an old question at this point, but I feel like all the existing answers are slightly misleading. If your question is, "Can I absolutely 100% guarantee that Redshift will ALWAYS unload to a SINGLE file in S3?", the answer is simply NO.

    That being said, for most cases, you can generally limit your query in such a way that you'll end up with a single file. Per the documentation (https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html), the main factor in limiting the number of files you generate is the actual raw size in bytes of your export (NOT the number of rows). The limit on the size of an output file generated by the Redshift UNLOAD command is 6.2GB.

    So if you want to try to guarantee that you get a single output file from UNLOAD, here's what you should try:

    • Specify PARALLEL OFF. Parallel is "ON" by default and will generally write to multiple files unless you have a tiny cluster (The number of output files with "PARALLEL ON" set is proportional to the number of slices in your cluster). PARALLEL OFF will write files serially to S3 instead of in parallel and will only spill over to using multiple files if you exceed the size limit.
    • Limit the size of your output. The raw size of the data must be less than 6.2GB if you want a single file, so you need to give your query a more restrictive WHERE clause or use a LIMIT clause to keep the number of records down. Unfortunately, neither of these techniques is perfect, since rows can be of variable size. It's also not clear to me whether the GZIP option affects the spillover limit (i.e., whether 6.2GB is the pre-GZIP or post-GZIP size limit).
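    Before running the UNLOAD, you can sanity-check whether your export is likely to stay under the cap. This is a rough sketch, not anything Redshift provides: the average row size is a hypothetical estimate you would measure yourself (e.g., from a small sample export), and because rows are variable-length the check is approximate.

```python
# Rough pre-check: will the export likely fit in one UNLOAD output file?
# 6.2 GB is the documented per-file size cap for UNLOAD output.
SINGLE_FILE_LIMIT_BYTES = 6.2 * 1024**3

def fits_in_single_file(row_count, avg_row_bytes):
    """Estimate whether row_count rows of ~avg_row_bytes each stay under
    the per-file cap. Treat as a heuristic, not a guarantee, since row
    sizes vary and the GZIP interaction is undocumented."""
    return row_count * avg_row_bytes < SINGLE_FILE_LIMIT_BYTES
```

    If the check fails, tighten the WHERE clause or split the export by a range column before unloading.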

    For me, the UNLOAD command that ended up generating a single CSV file in most cases was:

    UNLOAD
    ('SELECT <columns> FROM <table> WHERE <condition>')
    TO 's3://<bucket>/<prefix>'
    CREDENTIALS 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
    DELIMITER AS ','
    ADDQUOTES
    NULL AS ''
    PARALLEL OFF;

    The other nice side effect of PARALLEL OFF is that it will respect your ORDER BY clause if you have one and generate the files in an order that keeps all the records ordered, even across multiple output files.
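    Because of that ordering guarantee, even when a PARALLEL OFF unload does spill into multiple part files (suffixed 000, 001, ...), you can rebuild a single ordered CSV by downloading the parts and concatenating them in filename order. A minimal local sketch, assuming the parts have already been downloaded and share a common filename prefix (the prefix and paths here are hypothetical):

```python
import glob
import shutil

def merge_unload_parts(prefix, out_path):
    """Concatenate downloaded UNLOAD part files (prefix000, prefix001, ...)
    into a single file. Lexicographic filename order matches the serial
    write order of PARALLEL OFF, so row ordering is preserved."""
    parts = sorted(glob.glob(prefix + "*"))
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
    return parts
```

    The same idea works server-side with `cat part* > merged.csv`, or you can point MySQL's LOAD DATA at each part file in turn instead of merging at all.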

    Addendum: There seems to be some folkloric knowledge around using LIMIT 2147483647 to force the leader node to do all the processing and generate a single output file, but this doesn't seem to be actually documented anywhere in the Redshift documentation and as such, relying on it seems like a bad idea since it could change at any time.
