How to convert a 500GB SQL table into Apache Parquet?

Asked by 暖寄归人 on 2021-02-05 21:56

Perhaps this is well documented, but I am getting very confused about how to do this (there are many Apache tools).

When I create an SQL table, I create the table using the fo

2 Answers
  • 2021-02-05 22:06

    The odbc2parquet command line tool might also be helpful in some situations.

    # -vvv                verbose log output, reassuring during large downloads
    # query               subcommand for fetching data and storing it
    # --batch-size        batch size in rows
    # --batches-per-file  omit this flag to store the entire query result in a single file
    # out.par             path to the output Parquet file
    odbc2parquet -vvv query \
        --connection-string "${ODBC_CONNECTION_STRING}" \
        --batch-size 100000 \
        --batches-per-file 100 \
        out.par \
        "SELECT * FROM YourTable"
    
  • 2021-02-05 22:08

    Apache Spark can be used to do this:

    1. Load your table from MySQL via JDBC.
    2. Save it as a Parquet file.
    

    Example:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # The MySQL JDBC driver must be available on Spark's classpath.
    df = spark.read.jdbc("YOUR_MYSQL_JDBC_CONN_STRING", "YOUR_TABLE",
                         properties={"user": "YOUR_USER", "password": "YOUR_PASSWORD"})
    df.write.parquet("YOUR_HDFS_FILE")
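
    For a table as large as 500GB, a single JDBC connection will be slow; Spark's jdbc reader can split the read across parallel partitions via the column/lowerBound/upperBound/numPartitions parameters. A minimal sketch, assuming a numeric primary key column named id (adjust the column name, bounds, and partition count to your table):

    # Continues from the example above (reuses the same SparkSession).
    df = spark.read.jdbc(
        "YOUR_MYSQL_JDBC_CONN_STRING",
        "YOUR_TABLE",
        column="id",              # numeric column to partition on (assumed name)
        lowerBound=1,             # minimum value of that column
        upperBound=500_000_000,   # maximum value of that column (adjust)
        numPartitions=200,        # number of parallel JDBC reads
        properties={"user": "YOUR_USER", "password": "YOUR_PASSWORD"},
    )
    df.write.parquet("YOUR_HDFS_FILE")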
    