Select columns in PySpark DataFrame

小鲜肉 2021-02-03 21:45

I am looking for a way to select columns of my dataframe in PySpark. For the first row, I know I can use df.first(), but I am not sure about columns, given that they do not have column names.

6 Answers
  • 2021-02-03 22:28

    Try something like this:

    df.select([c for c in df.columns if c in ['_2','_4','_5']]).show()
    
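    Note that the comprehension keeps the DataFrame's own column order and silently skips any name that is not actually present. If a missing column should raise an error instead, the list can be passed to select directly (a small sketch):

    df.select(['_2', '_4', '_5']).show()   # raises AnalysisException if any column is missing
    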
  • 2021-02-03 22:30

    You can use an array and unpack it inside the select:

    cols = ['_2','_4','_5']
    df.select(*cols).show()
    
  • 2021-02-03 22:37

    Use df.schema.names:

    spark.version
    # u'2.2.0'
    
    df = spark.createDataFrame([("foo", 1), ("bar", 2)])
    df.show()
    # +---+---+ 
    # | _1| _2|
    # +---+---+
    # |foo|  1| 
    # |bar|  2|
    # +---+---+
    
    df.schema.names
    # ['_1', '_2']
    
    for i in df.schema.names:
      # df_new = df.withColumn(i, [do-something])
      print(i)
    # _1
    # _2
    
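    As a purely hypothetical illustration of the commented-out withColumn step (the [do-something] placeholder), the loop could, for example, cast every column to string:

    import pyspark.sql.functions as F
    
    df_new = df
    for c in df.schema.names:
        # hypothetical transformation: replace each column with its string-cast version
        df_new = df_new.withColumn(c, F.col(c).cast("string"))
    df_new.printSchema()
    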
  • 2021-02-03 22:43

    The select method accepts column names (str) or column expressions (Column), passed either individually or as a list. To select columns you can use:

    -- column names (strings):

    df.select('col_1','col_2','col_3')
    

    -- column objects:

    import pyspark.sql.functions as F
    
    df.select(F.col('col_1'), F.col('col_2'), F.col('col_3'))
    
    # or
    
    df.select(df.col_1, df.col_2, df.col_3)
    
    # or
    
    df.select(df['col_1'], df['col_2'], df['col_3'])
    

    -- a list of column names or column objects:

    df.select(*['col_1','col_2','col_3'])
    
    #or
    
    df.select(*[F.col('col_1'), F.col('col_2'), F.col('col_3')])
    
    #or 
    
    df.select(*[df.col_1, df.col_2, df.col_3])
    

    The star operator * can be omitted when passing a list, since select accepts a list directly; it is used above only to stay consistent with functions like drop, which do not accept a list as a parameter.

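    A minimal sketch of that difference (assuming columns col_1, col_2 and col_3 exist):

    cols = ['col_1', 'col_2', 'col_3']
    
    df.select(cols)     # works: select accepts a list directly
    df.select(*cols)    # also works: the list is unpacked into positional arguments
    
    df.drop(*cols)      # drop takes individual names/columns, so the list must be unpacked
    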
  • 2021-02-03 22:49

    First two columns and the first 5 rows:

     df.select(df.columns[:2]).take(5)
    
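    Note that take(5) returns the rows to the driver as a list of Row objects; to just display them, show(5) can be used instead (a quick sketch):

    rows = df.select(df.columns[:2]).take(5)   # list of Row objects on the driver
    df.select(df.columns[:2]).show(5)          # prints the first 5 rows
    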
  • 2021-02-03 22:52

    The dataset in ss.csv contains some columns I am interested in:

    ss_ = spark.read.csv("ss.csv", header=True, inferSchema=True)
    ss_.columns
    
    ['Reporting Area', 'MMWR Year', 'MMWR Week', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Current week', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Current week, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Previous 52 weeks Med', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Previous 52 weeks Med, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Previous 52 weeks Max', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Previous 52 weeks Max, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Cum 2018', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Cum 2018, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Cum 2017', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Cum 2017, flag', 'Shiga toxin-producing Escherichia coli, Current week', 'Shiga toxin-producing Escherichia coli, Current week, flag', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Med', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Med, flag', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Max', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Max, flag', 'Shiga toxin-producing Escherichia coli, Cum 2018', 'Shiga toxin-producing Escherichia coli, Cum 2018, flag', 'Shiga toxin-producing Escherichia coli, Cum 2017', 'Shiga toxin-producing Escherichia coli, Cum 2017, flag', 'Shigellosis, Current week', 'Shigellosis, Current week, flag', 'Shigellosis, Previous 52 weeks Med', 'Shigellosis, Previous 52 weeks Med, flag', 'Shigellosis, Previous 52 weeks Max', 'Shigellosis, Previous 52 weeks Max, flag', 'Shigellosis, Cum 2018', 'Shigellosis, Cum 2018, flag', 'Shigellosis, Cum 2017', 'Shigellosis, Cum 2017, flag']
    
    

    but I only need a few:

    columns_lambda = lambda k: k.endswith(', Current week') or k == 'Reporting Area' or k == 'MMWR Year' or k == 'MMWR Week'
    

    Passing this predicate to filter yields an iterator over the matching column names, which is then materialized into a list:

    sss = filter(columns_lambda, ss_.columns)
    to_keep = list(sss)
    

    The list of desired columns is unpacked as arguments to the DataFrame's select method, which returns a DataFrame containing only those columns:

    dfss = ss_.select(*to_keep)
    dfss.columns
    

    The result:

    ['Reporting Area',
     'MMWR Year',
     'MMWR Week',
     'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)†, Current week',
     'Shiga toxin-producing Escherichia coli, Current week',
     'Shigellosis, Current week']
    

    df.select() has a complementary method, DataFrame.drop (http://spark.apache.org/docs/2.4.1/api/python/pyspark.sql.html#pyspark.sql.DataFrame.drop), which removes the listed columns instead of keeping them.

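    As a sketch of that complementary approach, reusing the to_keep list from above, the same result could be obtained by dropping every unwanted column:

    dfss = ss_.drop(*[c for c in ss_.columns if c not in to_keep])
    dfss.columns
    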