Question
I want to track username changes over time.
I have the following users table in Redshift:
id | username | valid_from | valid_to   | current
---+----------+------------+------------+--------
1  | joe1     | 2015-01-01 | 2015-01-15 | No
1  | joe2     | 2015-01-15 | NULL       | Yes
My source data is from RDS Postgres. I'm thinking of several options on how to handle this:
1) Create a users_history table and start tracking changes inside the RDS Postgres db. This requires making changes to my app, and the table could potentially get huge.
2) Have an ETL process query the users source table every 5 minutes or so for new changes (sorted by updated_at) and dump them to DynamoDB.
3) Have an ETL process dump the data to S3, COPY it into a staging table inside Redshift, and run the update queries there.
Can you advise on what is scalable and easily maintainable in the long run? Keep in mind these tables can be massive, and I'll be tracking SCD for many tables.
Thanks.
Update 1: I chatted with AWS support and they showed me this, seems like a good solution: http://docs.aws.amazon.com/redshift/latest/dg/merge-specify-a-column-list.html
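The pattern in that doc is: load the incoming changes into a staging table, close out the rows that changed, then insert the new versions. A minimal sketch of that merge, using the users table from the question, is below. It uses Python with SQLite only so it runs standalone; on Redshift the staging table would be filled via COPY from S3, and the UPDATE/INSERT pair would be the same SQL. The staging row and the assumption that staging holds only changed rows are illustrative, not from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE users (
    id INTEGER, username TEXT,
    valid_from TEXT, valid_to TEXT, current TEXT
);
CREATE TABLE users_staging (
    id INTEGER, username TEXT, updated_at TEXT
);
-- Existing history for user 1 (from the question)
INSERT INTO users VALUES (1, 'joe1', '2015-01-01', '2015-01-15', 'No');
INSERT INTO users VALUES (1, 'joe2', '2015-01-15', NULL, 'Yes');
-- Hypothetical incoming change (on Redshift this arrives via COPY from S3)
INSERT INTO users_staging VALUES (1, 'joe3', '2015-02-01');
""")

# Step 1: close out the current row for each user whose username changed,
# stamping valid_to with the change's updated_at.
cur.execute("""
UPDATE users
SET valid_to = (SELECT s.updated_at FROM users_staging s WHERE s.id = users.id),
    current = 'No'
WHERE current = 'Yes'
  AND EXISTS (SELECT 1 FROM users_staging s
              WHERE s.id = users.id AND s.username <> users.username)
""")

# Step 2: insert the new versions as the current rows.
# Assumes users_staging contains only rows that actually changed.
cur.execute("""
INSERT INTO users (id, username, valid_from, valid_to, current)
SELECT s.id, s.username, s.updated_at, NULL, 'Yes'
FROM users_staging s
""")
conn.commit()

for row in cur.execute("SELECT * FROM users ORDER BY valid_from"):
    print(row)
```

Running batches like this on a schedule (rather than row-by-row updates) plays to Redshift's strengths, since each merge touches the table once regardless of how many users changed.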
Answer 1:
In terms of SQL/ETL implementation, Redshift supports anything that RDS will support, so your decision should be based on the constraints and expectations of the database.
Redshift is a read-optimized system, so updates every few minutes will likely slow it down for query purposes (micro-ETLs are not recommended on Redshift).
On the other hand, if your tables are likely to be huge, Redshift will perform better than most row-store databases (like MySQL, Postgres, etc.). This performance gap will widen as your data grows, since Redshift is designed for larger scales than traditional systems.
Source: https://stackoverflow.com/questions/34297538/how-to-handle-slowly-changing-dimension-type-2-in-redshift