How do we optimise a Spark job if the base table has 130 billion records?

别跟我提以往 2020-12-19 09:00

We are joining multiple tables and doing complex transformations and enrichments. The base table has around 130 billion records. How can we optimise the Spark job?
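At this scale, a common bottleneck is data skew on the join key: a few hot keys land in one partition and a single task runs far longer than the rest. One widely used mitigation is key salting: append a random bucket id to each key on the large side, replicate the small side across all buckets, and join on the salted key so the hot key's rows spread over many partitions. The sketch below shows the idea in plain Python (the table names, keys, and bucket count are hypothetical; in Spark you would do the same with `withColumn` plus an exploded salt range, or enable adaptive skew-join handling):

```python
import random

SALT_BUCKETS = 4  # with 130B rows you would likely use a much larger value

# Hypothetical skewed fact rows: (join_key, value) -- "hot_key" dominates.
fact = [("hot_key", i) for i in range(8)] + [("rare_key", 99)]

# Small dimension table: join_key -> attribute.
dim = {"hot_key": "A", "rare_key": "B"}

# 1) Salt the large side: append a random bucket id to each key.
salted_fact = [((k, random.randrange(SALT_BUCKETS)), v) for k, v in fact]

# 2) Replicate the small side across every bucket so each salted
#    fact key still finds its match.
salted_dim = {(k, b): attr
              for k, attr in dim.items()
              for b in range(SALT_BUCKETS)}

# 3) Join on the salted key; the hot key's work is now spread
#    across up to SALT_BUCKETS partitions instead of one.
joined = [(k, v, salted_dim[(k, b)]) for (k, b), v in salted_fact]
```

Salting trades a small amount of extra data on the replicated side for even task sizes on the large side; combine it with broadcasting genuinely small tables and partition pruning on the base table for the biggest wins.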
