Creating a Hive table using Parquet file metadata

面向向阳花 2021-02-01 11:01

I wrote a DataFrame out as a Parquet file, and I would like to read that file in Hive, using the metadata from the Parquet file to create the Hive table.

Output from the Parquet write (the directory listing that followed is truncated):

    _co
6 Answers
  • 2021-02-01 11:17

    Here's a solution I've come up with to get the metadata from parquet files in order to create a Hive table.

    First, start a spark-shell (or compile it all into a JAR and run it with spark-submit, but the shell is so much easier):

    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.DataFrame

    // Read the Parquet summary file; the DataFrame only needs to carry the schema.
    val df = sqlContext.parquetFile("/path/to/_common_metadata")

    def creatingTableDDL(tableName: String, df: DataFrame): String = {
      val cols = df.dtypes
      var ddl1 = "CREATE EXTERNAL TABLE " + tableName + " ("
      // Look at the column names and data types and put them into a string,
      // stripping the "Type" suffix from the Spark type names.
      val colCreate = (for (c <- cols) yield (c._1 + " " + c._2.replace("Type", ""))).mkString(", ")
      ddl1 += colCreate + ") STORED AS PARQUET LOCATION '/wherever/you/store/the/data/'"
      ddl1
    }

    val test_tableDDL = creatingTableDDL("test_table", df)
    

    It will provide you with the data types that Hive will use for each column, as they are stored in Parquet, e.g.: CREATE EXTERNAL TABLE test_table (COL1 Decimal(38,10), COL2 String, COL3 Timestamp) STORED AS PARQUET LOCATION '/path/to/parquet/files'
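
    If the spark-shell was started with Hive support (so that sqlContext is actually a HiveContext), the generated statement can be run directly from the same shell; a minimal sketch:

    // Sketch: execute the generated DDL from the shell.
    // Assumes sqlContext is a HiveContext, which it is in a Hive-enabled spark-shell.
    sqlContext.sql(test_tableDDL)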

  • 2021-02-01 11:19

    Actually, Impala supports

    CREATE TABLE LIKE PARQUET
    

    (with no columns section at all):

    http://www.cloudera.com/content/www/en-us/documentation/archive/impala/2-x/2-1-x/topics/impala_create_table.html

    Your question is tagged "hive" and "spark", and I don't see this implemented in Hive, but if you use CDH, it may be what you were looking for.

  • 2021-02-01 11:25

    A small improvement over Victor's answer (adding backtick quotes around field.name), modified to bind the table to a local Parquet file (tested on Spark 1.6.1). A usage sketch follows the notes below.

    def dataFrameToDDL(dataFrame: DataFrame, tableName: String, absFilePath: String): String = {
      val columns = dataFrame.schema.map { field =>
        "  `" + field.name + "` " + field.dataType.simpleString.toUpperCase
      }
      s"CREATE EXTERNAL TABLE $tableName (\n${columns.mkString(",\n")}\n) STORED AS PARQUET LOCATION '$absFilePath'"
    }
    

    Also notice that:

    • A HiveContext is needed, since SQLContext does not support creating external tables.
    • The path to the Parquet folder must be an absolute path.
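
    To use it, something like the following should work (a sketch, not from the original answer; the path and table name are placeholders, and sqlContext is assumed to be a HiveContext, as noted above):

    // Usage sketch: generate the DDL from a Parquet folder and run it.
    val parquetPath = "/absolute/path/to/parquet/folder"   // placeholder absolute path
    val df = sqlContext.read.parquet(parquetPath)
    sqlContext.sql(dataFrameToDDL(df, "my_table", parquetPath))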
  • 2021-02-01 11:33

    I had the same question. It might be hard to implement from the practical side, though, as Parquet supports schema evolution:

    http://www.cloudera.com/content/www/en-us/documentation/archive/impala/2-x/2-0-x/topics/impala_parquet.html#parquet_schema_evolution_unique_1

    For example, you could add a new column to your table without touching the data that is already in it; only new data files will carry the new metadata (which stays compatible with the previous version).

    Schema merging has been switched off by default since Spark 1.5.0, since it is a "relatively expensive operation" (http://spark.apache.org/docs/latest/sql-programming-guide.html#schema-merging), so inferring the most recent schema may not be as simple as it sounds. Quick-and-dirty approaches are still possible, though, e.g. by parsing the output of

    $ parquet-tools schema /home/gz_files/result/000000_0
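
    If you do want Spark to reconcile the schemas of all data files when reading, the mergeSchema option documented in that guide can be switched back on per read; a minimal sketch (the directory path is assumed from the example above):

    // Re-enable schema merging for a single read (off by default since Spark 1.5.0).
    val merged = sqlContext.read.option("mergeSchema", "true").parquet("/home/gz_files/result")
    merged.printSchema()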
    
  • 2021-02-01 11:34

    I'd just like to expand on James Tobin's answer. The StructField class (via dataType.simpleString) gives you type names Hive understands, without doing string replacements.

    // Tested on Spark 1.6.0.
    
    import org.apache.spark.sql.DataFrame
    
    def dataFrameToDDL(dataFrame: DataFrame, tableName: String): String = {
        val columns = dataFrame.schema.map { field =>
            "  " + field.name + " " + field.dataType.simpleString.toUpperCase
        }
    
        s"CREATE TABLE $tableName (\n${columns.mkString(",\n")}\n)"
    }
    

    This solves the IntegerType problem (simpleString yields int, which upper-cases to INT, rather than Integer).

    scala> val dataFrame = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("x", "y")
    dataFrame: org.apache.spark.sql.DataFrame = [x: int, y: string]
    
    scala> print(dataFrameToDDL(dataFrame, "t"))
    CREATE TABLE t (
      x INT,
      y STRING
    )
    

    This should work with any DataFrame, not just with Parquet. (e.g., I'm using this with a JDBC DataFrame.)

    As an added bonus, if your target DDL supports nullable columns, you can extend the function by checking StructField.nullable.
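
    A minimal sketch of that extension (not part of the original answer; it assumes the target DDL dialect accepts NOT NULL column constraints):

    import org.apache.spark.sql.DataFrame

    // Hypothetical variant: append NOT NULL for non-nullable fields.
    def dataFrameToDDLWithNullability(dataFrame: DataFrame, tableName: String): String = {
        val columns = dataFrame.schema.map { field =>
            val constraint = if (field.nullable) "" else " NOT NULL"
            "  " + field.name + " " + field.dataType.simpleString.toUpperCase + constraint
        }
        s"CREATE TABLE $tableName (\n${columns.mkString(",\n")}\n)"
    }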

  • 2021-02-01 11:40

    I would like to expand on James' answer.

    The following code will work for all data types, including ARRAY, MAP and STRUCT.

    Tested on Spark 2.2; the table name in the snippet below is a placeholder.

    // Spark 2.x: read the Parquet file through the SparkSession.
    val df = spark.read.parquet("parquetFilePath")
    val schema = df.schema
    val columns = schema.fields
    val tableName = "hive_test1"   // placeholder table name
    var ddl1 = "CREATE EXTERNAL TABLE " + tableName + " ("
    // dataType.sql renders complex types too, e.g. ARRAY<...>, MAP<...>, STRUCT<...>.
    val cols = (for (column <- columns) yield column.name + " " + column.dataType.sql).mkString(",")
    ddl1 = ddl1 + cols + " ) STORED AS PARQUET LOCATION '/tmp/hive_test1/'"
    spark.sql(ddl1)
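
    For a quick check that complex types come out right, one could build a small DataFrame with nested columns and print the generated column list (a sketch; the case classes and field names are made up):

    // Hypothetical check: array, map and struct columns rendered via dataType.sql.
    case class Inner(a: Int, b: String)
    case class Rec(id: Int, tags: Seq[String], props: Map[String, Int], nested: Inner)

    val sample = spark.createDataFrame(Seq(Rec(1, Seq("x"), Map("k" -> 1), Inner(2, "y"))))
    sample.schema.fields.foreach(f => println(f.name + " " + f.dataType.sql))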
    