Filter Pyspark dataframe column with None value


I'm trying to filter a PySpark DataFrame that has None as a row value:

df.select('dt_mvmt').distinct().collect()

[Row(dt_mvmt=u'2016-03-27'),
 Row(dt_mvmt=None),
 ...]
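
Here is a minimal way to reproduce this (just a sketch; it assumes an existing SparkSession named spark and uses the dt_mvmt column from above):

from pyspark.sql import Row

# hypothetical sample data: a few date strings plus one None value
data = [Row(dt_mvmt=u'2016-03-27'), Row(dt_mvmt=u'2016-03-28'),
        Row(dt_mvmt=None), Row(dt_mvmt=u'2016-03-31')]
df = spark.createDataFrame(data)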
10 Answers
  • 2020-11-29 18:15

    If you want to stick with Pandas-like syntax, this worked for me.

    df = df[df.dt_mvmt.isNotNull()]
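
    A small usage sketch in the same bracket-indexing style (dt_mvmt is the column from the question; the variable names kept and dropped are just illustrative); the inverse selection works the same way:

    kept = df[df.dt_mvmt.isNotNull()]     # rows where dt_mvmt has a value
    dropped = df[df.dt_mvmt.isNull()]     # rows where dt_mvmt is None/null
    kept.show()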
    
  • 2020-11-29 18:16

    To obtain entries whose values in the dt_mvmt column are not null, use:

    df.filter("dt_mvmt is not NULL")
    

    and for entries that are null:

    df.filter("dt_mvmt is NULL")
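
    If you prefer plain SQL, the same predicates work through a temporary view as well; a small sketch (the view name mvmt is made up here):

    df.createOrReplaceTempView("mvmt")
    spark.sql("SELECT * FROM mvmt WHERE dt_mvmt IS NOT NULL").show()
    spark.sql("SELECT * FROM mvmt WHERE dt_mvmt IS NULL").show()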
    
  • 2020-11-29 18:17

    If you want to filter out records having a None value in a column, see the example below:

    df = spark.createDataFrame([[123, "abc"], [234, "fre"], [345, None]], ["a", "b"])
    

    Now filter out the records with null values:

    df = df.filter(df.b.isNotNull())
    
    df.show()
    

    If you instead want to drop those records from the DataFrame with na.drop, see below:

    df1 = df.na.drop(subset=['b'])
    
    df1.show()
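
    On this sample data both calls keep the same two rows, so show() should print something like:

    +---+---+
    |  a|  b|
    +---+---+
    |123|abc|
    |234|fre|
    +---+---+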
    
  • 2020-11-29 18:21

    You can use Column.isNull / Column.isNotNull:

    from pyspark.sql.functions import col

    df.where(col("dt_mvmt").isNull())

    df.where(col("dt_mvmt").isNotNull())
    

    If you want to simply drop NULL values, you can use na.drop with the subset argument:

    df.na.drop(subset=["dt_mvmt"])
    

    Equality-based comparisons with NULL won't work, because in SQL NULL is undefined, so any attempt to compare it with another value returns NULL:

    sqlContext.sql("SELECT NULL = NULL").show()
    ## +-------------+
    ## |(NULL = NULL)|
    ## +-------------+
    ## |         null|
    ## +-------------+
    
    
    sqlContext.sql("SELECT NULL != NULL").show()
    ## +-------------------+
    ## |(NOT (NULL = NULL))|
    ## +-------------------+
    ## |               null|
    ## +-------------------+
    

    The only valid way to compare a value with NULL is IS / IS NOT, which are equivalent to the isNull / isNotNull method calls.
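
    A quick sketch of the same point from the DataFrame API side (assuming the dt_mvmt column from the question): an == None comparison builds a predicate that evaluates to NULL for every row, so where() keeps nothing, while isNull() behaves as expected:

    from pyspark.sql.functions import col

    df.where(col("dt_mvmt") == None).count()   # 0 -- the NULL comparison is never true
    df.where(col("dt_mvmt").isNull()).count()  # actual number of rows with a null dt_mvmt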

  • 2020-11-29 18:22

    There are multiple ways to remove or filter out the null values from a column in a DataFrame.

    Let's create a simple DataFrame with the code below:

    from pyspark.sql.functions import col
    from pyspark.sql.types import StringType
    date = ['2016-03-27', '2016-03-28', '2016-03-29', None, '2016-03-30', '2016-03-31']
    df = spark.createDataFrame(date, StringType())
    

    Now you can try one of the approaches below to filter out the null values.

    # Approach - 1
    df.filter("value is not null").show()
    
    # Approach - 2
    df.filter(col("value").isNotNull()).show()
    
    # Approach - 3
    df.filter(df["value"].isNotNull()).show()
    
    # Approach - 4
    df.filter(df.value.isNotNull()).show()
    
    # Approach - 5
    df.na.drop(subset=["value"]).show()
    
    # Approach - 6
    df.dropna(subset=["value"]).show()
    
    # Note: You can also use where() instead of filter().
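
    All of these should return the same five non-null rows on the sample data above. As a quick sanity check (just a sketch reusing the df defined earlier):

    df.filter("value is not null").count()    # 5
    df.na.drop(subset=["value"]).count()      # 5 -- same rows, different API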
    

    You can also check the section "Working with NULL Values" on my blog for more information.

    I hope it helps.

  • 2020-11-29 18:25

    Just use the isNotNull function:

    df.filter(df.dt_mvmt.isNotNull()).count()
    