Question
Is there a way to do a SQL dump from Amazon Redshift?
Could you use the SQL workbench/J client?
Answer 1:
We are currently using Workbench/J successfully with Redshift.
Regarding dumps: at the moment there is no schema export tool available in Redshift (pg_dump doesn't work), although data can always be extracted via queries.
Hope this helps.
EDIT: Remember that details like sort and distribution keys are not reflected in the code generated by Workbench/J. Take a look at the system table pg_table_def to see information on every column, including whether it is a sortkey or distkey. Documentation on that table:
http://docs.aws.amazon.com/redshift/latest/dg/r_PG_TABLE_DEF.html
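For example, a quick way to inspect those attributes is a query like the following (my_table is a placeholder name; note that pg_table_def only shows tables in schemas on your search_path):

```sql
-- List each column of my_table with its type and key attributes.
-- sortkey is a position (0 = not part of the sort key); distkey is boolean.
select "column", type, distkey, sortkey
from pg_table_def
where tablename = 'my_table';
```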
Answer 2:
pg_dump of schemas may not have worked in the past, but it does now:
pg_dump -Cs -h my.redshift.server.com -p 5439 database_name > database_name.sql
CAVEAT EMPTOR: pg_dump still produces some Postgres-specific syntax, and also neglects the Redshift SORTKEY and DISTSTYLE definitions for your tables.
Another decent option is to use the published AWS admin script views for generating your DDL. They handle SORTKEY/DISTSTYLE, but I've found them to be buggy when it comes to capturing all FOREIGN KEYs, and they don't handle table permissions/owners. Your mileage may vary.
To get a dump of the data itself, you still need to use the UNLOAD command on each table unfortunately.
Here's a way to generate it. Be aware that select * syntax will fail if your destination table does not have the same column order as your source table:
select
ist.table_schema,
ist.table_name,
'unload (''select col1,col2,etc from "' || ist.table_schema || '"."' || ist.table_name || '"'')
to ''s3://SOME/FOLDER/STRUCTURE/' || ist.table_schema || '.' || ist.table_name || '__''
credentials ''aws_access_key_id=KEY;aws_secret_access_key=SECRET''
delimiter as '',''
gzip
escape
addquotes
null as ''''
--encrypted
--parallel off
--allowoverwrite
;'
from information_schema.tables ist
where ist.table_schema not in ('pg_catalog')
order by ist.table_schema, ist.table_name
;
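For illustration, each row the query above returns is one ready-to-edit UNLOAD statement, roughly of this shape (schema, table, column list, bucket path, and credentials below are placeholders carried over from the generator):

```sql
-- Example of one generated statement, for a hypothetical table public.my_table.
unload ('select col1,col2,etc from "public"."my_table"')
to 's3://SOME/FOLDER/STRUCTURE/public.my_table__'
credentials 'aws_access_key_id=KEY;aws_secret_access_key=SECRET'
delimiter as ','
gzip
escape
addquotes
null as ''
;
```

You would then replace the col1,col2,etc placeholder with the table's actual column list before running it.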
Answer 3:
If you're on a Mac, Postico works great for this: just right-click the table and click export.
Answer 4:
Yes, you can do so in several ways.
UNLOAD() to an S3 bucket - that's the best option. You can then get your data onto almost any other machine. (More info here: http://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html)
Pipe the contents of your table to a data file using a Linux instance. Running:
$> psql -t -A -F 'your_delimiter' -h 'hostname' -d 'database' -U 'user' -c "select * from myTable" >> /home/userA/tableDataFile
will do the trick for you.
Source: https://stackoverflow.com/questions/15440794/is-there-a-way-to-do-a-sql-dump-from-amazon-redshift