Building a StructType from a dataframe in pyspark

2021-02-04 06:45

I am new to Spark and Python and am facing difficulty building a schema from a metadata file that can be applied to my data file. Scenario: Metadata file for the data file (csv

4 Answers
  • 2021-02-04 07:19

    The df.schema attribute of a PySpark DataFrame returns the StructType.

    Given your df:

    +--------------------+---------------+
    |                name|           type|
    +--------------------+---------------+
    |                  id|  IntegerType()|
    |          created_at|TimestampType()|
    |          updated_at|   StringType()|
    +--------------------+---------------+

    Type:

    df.schema
    

    Result:

    StructType(
     List(
      StructField(id,IntegerType,true),
      StructField(created_at,TimestampType,true),
      StructField(updated_at,StringType,true)
     )
    )
    
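    If you want to reuse that schema elsewhere, the returned StructType can be passed straight back to a reader. A minimal sketch, assuming an active SparkSession named spark and a hypothetical file path:

    # Reuse the StructType returned by df.schema when reading another
    # file with the same layout (the path below is hypothetical)
    existing_schema = df.schema
    new_df = spark.read.csv("/path/to/other_data.csv", schema=existing_schema, header=True)
    new_df.printSchema()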
  • 2021-02-04 07:24

    The steps below can be followed to build a schema from DataType objects and apply it when reading a file:

    from pyspark.sql.types import StructType, StructField, IntegerType, StringType

    # Define the schema as a list of StructField objects
    data_schema = [
        StructField("age", IntegerType(), True),
        StructField("name", StringType(), True)
    ]
    final_struct = StructType(fields=data_schema)

    # Apply the schema while reading the file
    df = spark.read.json('/home/abcde/Python-and-Spark-for-Big-Data-master/Spark_DataFrames/people.json', schema=final_struct)

    df.printSchema()
    
    root
     |-- age: integer (nullable = true)
     |-- name: string (nullable = true)
    
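    Since the question mentions a CSV data file, the same StructType can be passed to the CSV reader as well. A small sketch with a hypothetical path:

    # The same schema object works for CSV input too (path is hypothetical)
    csv_df = spark.read.csv('/path/to/people.csv', schema=final_struct, header=True)
    csv_df.printSchema()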
  • 2021-02-04 07:26
    // Reorder df1's columns to match the column order of df2
    val columns: Array[String] = df1.columns
    val reorderedColumnNames: Array[String] = df2.columns // or do the reordering you want
    val result: DataFrame = df1.select(reorderedColumnNames.head, reorderedColumnNames.tail: _*)
    
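    For completeness, a rough PySpark equivalent of the same reordering (a sketch, assuming df1 and df2 are existing DataFrames):

    # Reorder df1's columns to match the column order of df2
    reordered_column_names = df2.columns
    result = df1.select(*reordered_column_names)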
  • 2021-02-04 07:32

    The fields argument has to be a list of StructField objects whose data types are actual DataType objects. This:

    .map(lambda l:([StructField(l.name, l.type, 'true')]))
    

    generates, after collect, a list of lists of tuples (Rows) of DataType (list[list[tuple[DataType]]]), not to mention that the nullable argument should be a boolean, not a string.

    Your second attempt:

    .map(lambda l: ("StructField(" + l.name + "," + l.type + ",true)")).
    

    generates, after collect, a list of str objects.

    The correct schema for the record you've shown should look more or less like this:

    from pyspark.sql.types import *
    
    StructType([
        StructField("id", IntegerType(), True),
        StructField("created_at", TimestampType(), True),
        StructField("updated_at", StringType(), True)
    ])
    

    Although using distributed data structures for a task like this is serious overkill, not to mention inefficient, you can adjust your first solution as follows:

    StructType([
        StructField(name, eval(type), True) for (name, type) in  df.rdd.collect()
    ])
    

    but it is not particularly safe (eval). It could be easier to build the schema from JSON / a dictionary. Assuming you have a function which maps from a type description to a canonical type name:

    def get_type_name(s: str) -> str:
        """
        >>> get_type_name("int")
        'integer'
        """
        _map = {
            'int': IntegerType().typeName(),
            'timestamp': TimestampType().typeName(),
            # ...
        } 
        return _map.get(s, StringType().typeName())
    

    You can build a dictionary of the following shape:

    schema_dict = {'fields': [
        {'metadata': {}, 'name': 'id', 'nullable': True, 'type': 'integer'},
        {'metadata': {}, 'name': 'created_at', 'nullable': True, 'type': 'timestamp'}
    ], 'type': 'struct'}
    

    and feed it to StructType.fromJson:

    StructType.fromJson(schema_dict)
    
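    Putting the pieces together: a minimal sketch (assuming the metadata DataFrame df has the name and type columns shown earlier, and that the type column holds short descriptions such as 'int' or 'timestamp') that builds such a dictionary from the collected rows and converts it with StructType.fromJson:

    from pyspark.sql.types import StructType, IntegerType, TimestampType, StringType

    def get_type_name(s: str) -> str:
        # Map a short type description to a canonical Spark type name
        # (re-stated here so the snippet is self-contained)
        _map = {
            'int': IntegerType().typeName(),
            'timestamp': TimestampType().typeName(),
        }
        return _map.get(s, StringType().typeName())

    # Collect the (name, type) rows from the metadata DataFrame and
    # build the dictionary shape expected by StructType.fromJson
    schema_dict = {
        'type': 'struct',
        'fields': [
            {'metadata': {}, 'name': col_name, 'nullable': True,
             'type': get_type_name(col_type)}
            for (col_name, col_type) in df.collect()
        ]
    }

    schema = StructType.fromJson(schema_dict)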