Spark SQL “Limit”

Submitted by 随声附和 on 2019-12-23 16:28:29

Question


Environment: Spark 1.6 on Hadoop, Hortonworks Data Platform 2.5

I have a table with 10 billion records, and I would like to move 300 million of them into a temporary table.

sqlContext.sql("select ....from my_table limit 300000000")
  .repartition(50)
  .write.saveAsTable("temporary_table")

I noticed that the LIMIT keyword makes Spark use only one executor: the 300 million records are all shuffled to a single node and written back to Hadoop from there. How can I avoid this single-node reduce while still taking just 300 million records and keeping more than one executor involved? I would like all nodes to write into Hadoop.
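A minimal way to observe the behaviour, assuming the same spark-shell session (the query and the partition check are illustrative, not from the original post):

val limited = sqlContext.sql("select * from my_table limit 300000000")
// LIMIT shuffles the whole result into a single partition,
// so one executor ends up holding all 300 million rows
println(limited.rdd.partitions.length)  // typically prints 1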

Can sampling help me with that? If so, how?


Answer 1:


Sampling can be used in the following ways (3 percent of 10 billion is roughly the requested 300 million rows):

select .... from my_table TABLESAMPLE(3 PERCENT)

or

select .... from my_table TABLESAMPLE(300000000 ROWS)
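If the SQL dialect in use does not accept TABLESAMPLE, the same idea works through the DataFrame API. A minimal sketch, assuming a Spark 1.6 spark-shell and the table and target names from the question; the 0.03 fraction and the seed are assumptions matching the 3 percent above:

// Sample ~3% of the rows (≈300 million of 10 billion) without a LIMIT,
// so the work stays spread across all executors.
val df = sqlContext.table("my_table")
val sampled = df.sample(withReplacement = false, fraction = 0.03, seed = 42)
sampled.repartition(50).write.saveAsTable("temporary_table")

Because sampling is probabilistic, the result will be close to, but not exactly, 300 million rows.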


Source: https://stackoverflow.com/questions/42512790/spark-sql-limit
