Question
I've noticed that AWS Redshift recommends different column compression encodings from the ones that it automatically creates when loading data (via COPY) to an empty table.
For example, I have created a table and loaded data from S3 as follows:
CREATE TABLE Client (Id varchar(511), ClientId integer, CreatedOn timestamp,
UpdatedOn timestamp, DeletedOn timestamp, LockVersion integer,
RegionId varchar(511), OfficeId varchar(511), CountryId varchar(511),
FirstContactDate timestamp, DidExistPre boolean, IsActive boolean,
StatusReason integer, CreatedById varchar(511), IsLocked boolean,
LockType integer, KeyWorker varchar(511), InactiveDate timestamp,
Current_Flag varchar(511));
Table Client created
Execution time: 0.3s
copy Client from 's3://<bucket-name>/<folder>/Client.csv'
credentials 'aws_access_key_id=<access key>;aws_secret_access_key=<secret>'
csv fillrecord truncatecolumns ignoreheader 1
timeformat as 'YYYY-MM-DDTHH:MI:SS' gzip acceptinvchars compupdate on
region 'ap-southeast-2';
Warnings:
Load into table 'client' completed, 24284 record(s) loaded successfully.
Load into table 'client' completed, 6 record(s) were loaded with replacements made for ACCEPTINVCHARS. Check 'stl_replacements' system table for details.
0 rows affected. COPY executed successfully.
Execution time: 3.39s
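(Since the warning points at STL_REPLACEMENTS, the substitutions can be inspected directly. A minimal sketch against that system table, using the columns AWS documents for it:

-- Show the most recent ACCEPTINVCHARS substitutions.
select query, session, filename, line_number, colname, raw_line
from stl_replacements
order by query desc
limit 10;
)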
Having done this I can look at the column compression encodings that have been applied by COPY:
select "column", type, encoding, distkey, sortkey, "notnull"
from pg_table_def where tablename = 'client';
Giving:
╔══════════════════╦═════════════════════════════╦══════════╦═════════╦═════════╦═════════╗
║ column           ║ type                        ║ encoding ║ distkey ║ sortkey ║ notnull ║
╠══════════════════╬═════════════════════════════╬══════════╬═════════╬═════════╬═════════╣
║ id               ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ clientid         ║ integer                    ║ delta    ║ false   ║ 0       ║ false   ║
║ createdon        ║ timestamp without time zone ║ lzo      ║ false   ║ 0       ║ false   ║
║ updatedon        ║ timestamp without time zone ║ lzo      ║ false   ║ 0       ║ false   ║
║ deletedon        ║ timestamp without time zone ║ none     ║ false   ║ 0       ║ false   ║
║ lockversion      ║ integer                    ║ delta    ║ false   ║ 0       ║ false   ║
║ regionid         ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ officeid         ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ countryid        ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ firstcontactdate ║ timestamp without time zone ║ lzo      ║ false   ║ 0       ║ false   ║
║ didexistprecirts ║ boolean                    ║ none     ║ false   ║ 0       ║ false   ║
║ isactive         ║ boolean                    ║ none     ║ false   ║ 0       ║ false   ║
║ statusreason     ║ integer                    ║ none     ║ false   ║ 0       ║ false   ║
║ createdbyid      ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ islocked         ║ boolean                    ║ none     ║ false   ║ 0       ║ false   ║
║ locktype         ║ integer                    ║ lzo      ║ false   ║ 0       ║ false   ║
║ keyworker        ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
║ inactivedate     ║ timestamp without time zone ║ lzo      ║ false   ║ 0       ║ false   ║
║ current_flag     ║ character varying(511)     ║ lzo      ║ false   ║ 0       ║ false   ║
╚══════════════════╩═════════════════════════════╩══════════╩═════════╩═════════╩═════════╝
I can then do:
analyze compression client;
Giving:
╔════════╦══════════════════╦══════════╦═══════════════════╗
║ table  ║ column           ║ encoding ║ est_reduction_pct ║
╠════════╬══════════════════╬══════════╬═══════════════════╣
║ client ║ id               ║ zstd     ║ 40.59             ║
║ client ║ clientid         ║ delta    ║ 0.00              ║
║ client ║ createdon        ║ zstd     ║ 19.85             ║
║ client ║ updatedon        ║ zstd     ║ 12.59             ║
║ client ║ deletedon        ║ raw      ║ 0.00              ║
║ client ║ lockversion      ║ zstd     ║ 39.12             ║
║ client ║ regionid         ║ zstd     ║ 54.47             ║
║ client ║ officeid         ║ zstd     ║ 88.84             ║
║ client ║ countryid        ║ zstd     ║ 79.13             ║
║ client ║ firstcontactdate ║ zstd     ║ 22.31             ║
║ client ║ didexistprecirts ║ raw      ║ 0.00              ║
║ client ║ isactive         ║ raw      ║ 0.00              ║
║ client ║ statusreason     ║ raw      ║ 0.00              ║
║ client ║ createdbyid      ║ zstd     ║ 52.43             ║
║ client ║ islocked         ║ raw      ║ 0.00              ║
║ client ║ locktype         ║ zstd     ║ 63.01             ║
║ client ║ keyworker        ║ zstd     ║ 38.79             ║
║ client ║ inactivedate     ║ zstd     ║ 25.40             ║
║ client ║ current_flag     ║ zstd     ║ 90.51             ║
╚════════╩══════════════════╩══════════╩═══════════════════╝
i.e. quite different results.
I'm keen to know why this might be. I get that ~24K records is fewer than the 100K that AWS specifies as the minimum for a meaningful compression analysis sample, but it still seems strange that COPY and ANALYZE COMPRESSION are giving different results for the same 24K-row table.
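(As an aside, ANALYZE COMPRESSION samples rows from each slice, and the sample size can be raised with the COMPROWS option; when omitted, it defaults to 100,000 rows per slice. A minimal sketch:

-- Ask for a larger sample, split across the cluster's slices.
analyze compression client comprows 1000000;
)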
Answer 1:
COPY doesn't currently recommend ZSTD, which is why the recommended compression settings are different.
If you're looking to apply compression on permanent tables where you want to maximize compression (use the least amount of space), setting ZSTD across the board will give you close to optimal compression, as sketched below.
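For example, a minimal sketch (client_zstd is a hypothetical table name and the column list is abbreviated; COMPUPDATE OFF keeps COPY from replacing the declared encodings):

-- Declare ZSTD explicitly on every column.
CREATE TABLE client_zstd (
    Id        varchar(511) ENCODE zstd,
    ClientId  integer      ENCODE zstd,
    CreatedOn timestamp    ENCODE zstd
    -- ...remaining columns, each with ENCODE zstd
);

-- Load without automatic compression analysis.
copy client_zstd from 's3://<bucket-name>/<folder>/Client.csv'
credentials 'aws_access_key_id=<access key>;aws_secret_access_key=<secret>'
csv gzip compupdate off region 'ap-southeast-2';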
The reason RAW is coming back on some columns is that, in this case, there is no advantage to applying compression: the column occupies the same number of 1 MB blocks with and without it. If you know the table will grow, it makes sense to apply compression to those columns as well.
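You can verify the block counts yourself; a sketch using the standard STV_BLOCKLIST and STV_TBL_PERM system tables:

-- Count 1 MB blocks per column for the 'client' table.
select b.col, count(*) as blocks
from stv_blocklist b
join stv_tbl_perm p on b.tbl = p.id
where p.name = 'client'
group by b.col
order by b.col;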
Source: https://stackoverflow.com/questions/45093279/redshift-copy-creates-different-compression-encodings-from-analyze