Creating a Hive table using parquet file metadata

面向向阳花 · 2021-02-01 11:01

I wrote a DataFrame out as a parquet file, and I would like to read that file with Hive, using the metadata from the parquet file.
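For reference, a minimal sketch of the write step (Spark 1.6-era API; the output path is a hypothetical example, not from the original post):

    import org.apache.spark.sql.SQLContext

    // Assumes a spark-shell-style SparkContext `sc`.
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val dataFrame = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("x", "y")
    // Hypothetical output directory; the parquet metadata files land here.
    dataFrame.write.parquet("/tmp/people.parquet")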

Output from the parquet write:

    _co
6 Answers
  •  一整个雨季 · 2021-02-01 11:34

    I'd just like to expand on James Tobin's answer. Each column of a DataFrame's schema is a StructField, and its data type's simpleString form already matches Hive's type names, so no string replacements are needed.

    // Tested on Spark 1.6.0.

    import org.apache.spark.sql.DataFrame

    // Builds a CREATE TABLE statement from a DataFrame's schema.
    // simpleString renders each Catalyst type in Hive-compatible form
    // (e.g. IntegerType -> "int"), which we upper-case for the DDL.
    def dataFrameToDDL(dataFrame: DataFrame, tableName: String): String = {
        val columns = dataFrame.schema.map { field =>
            "  " + field.name + " " + field.dataType.simpleString.toUpperCase
        }

        s"CREATE TABLE $tableName (\n${columns.mkString(",\n")}\n)"
    }

    This solves the IntegerType problem: simpleString already renders IntegerType as int, so there is nothing to replace.

    scala> val dataFrame = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("x", "y")
    dataFrame: org.apache.spark.sql.DataFrame = [x: int, y: string]
    
    scala> print(dataFrameToDDL(dataFrame, "t"))
    CREATE TABLE t (
      x INT,
      y STRING
    )
    

    This should work with any DataFrame, not just one backed by Parquet (for example, I'm using it with a JDBC DataFrame).
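    To tie this back to the original question, here is a sketch of an external-table variant, so Hive reads the parquet files where Spark wrote them. The dataFrameToExternalDDL name and its location parameter are my additions; STORED AS PARQUET and LOCATION are standard Hive DDL:

    // Emits an EXTERNAL TABLE over an existing parquet directory, so Hive
    // reads the files in place. `location` is whatever directory
    // DataFrame.write.parquet was given.
    def dataFrameToExternalDDL(dataFrame: DataFrame,
                               tableName: String,
                               location: String): String =
        dataFrameToDDL(dataFrame, tableName)
            .replaceFirst("CREATE TABLE", "CREATE EXTERNAL TABLE") +
            s"\nSTORED AS PARQUET\nLOCATION '$location'"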

    As an added bonus, if your target DDL supports nullable columns, you can extend the function by checking StructField.nullable.
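
    A minimal sketch of that extension (assuming the target dialect accepts column-level NOT NULL, which older Hive versions do not):

    import org.apache.spark.sql.types.StructField

    // Maps one schema field to a column definition, appending NOT NULL
    // when the field is declared non-nullable.
    def fieldToColumnDDL(field: StructField): String = {
        val nullability = if (field.nullable) "" else " NOT NULL"
        "  " + field.name + " " + field.dataType.simpleString.toUpperCase + nullability
    }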
