Custom Partitioner in PySpark 2.1.0


I read that RDDs with the same partitioner will be co-located. This is important to me because I want to join several large Hive tables that are not partitioned. My theory is …

1 Answer

    This happens because you are not applying partitionBy to a key-value-pair RDD. Your RDD must consist of key-value pairs, and the partition function you pass to partitionBy must return an integer. I don't have sample data for your Hive table, so let's demonstrate with the Hive table below.

    I created the following DataFrame from a Hive table:

    df = spark.table("udb.emp_details_table")
    df.show()
    +------+--------+--------+----------------+
    |emp_id|emp_name|emp_dept|emp_joining_date|
    +------+--------+--------+----------------+
    |     1|     AAA|      HR|      2018-12-06|
    |     1|     BBB|      HR|      2017-10-26|
    |     2|     XXX|   ADMIN|      2018-10-22|
    |     2|     YYY|   ADMIN|      2015-10-19|
    |     2|     ZZZ|      IT|      2018-05-14|
    |     3|     GGG|      HR|      2018-06-30|
    +------+--------+--------+----------------+
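
    If you don't have a comparable Hive table handy, here is a minimal sketch that recreates the same rows directly, assuming an existing SparkSession named spark (the column names just mirror the output above):

    import datetime

    # Recreate the sample rows without a Hive table; emp_id is kept as a
    # string, matching what df.rdd yields from the table above.
    rows = [
        (u'1', u'AAA', u'HR', datetime.date(2018, 12, 6)),
        (u'1', u'BBB', u'HR', datetime.date(2017, 10, 26)),
        (u'2', u'XXX', u'ADMIN', datetime.date(2018, 10, 22)),
        (u'2', u'YYY', u'ADMIN', datetime.date(2015, 10, 19)),
        (u'2', u'ZZZ', u'IT', datetime.date(2018, 5, 14)),
        (u'3', u'GGG', u'HR', datetime.date(2018, 6, 30)),
    ]
    df = spark.createDataFrame(rows, ['emp_id', 'emp_name', 'emp_dept', 'emp_joining_date'])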
    

    Now I want to partition my DataFrame so that rows with the same key stay in one partition. Since partitionBy can only be applied to an RDD, I first convert the DataFrame to an RDD.

    myrdd = df.rdd
    newrdd = myrdd.partitionBy(10, lambda k: int(k[0]))
    newrdd.take(10)
    

    I got the same error:

     File "/usr/hdp/current/spark2-client/python/pyspark/rdd.py", line 1767, in add_shuffle_key
        for k, v in iterator:
    ValueError: too many values to unpack 
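
    The traceback is easy to reproduce in plain Python: each element of df.rdd is a Row with four fields, but the shuffle code tries to unpack every element into exactly two values, a key and a value. For example:

    # A 4-field row cannot be unpacked into the (key, value) pair
    # that the shuffle machinery expects.
    row = (1, 'AAA', 'HR', '2018-12-06')
    k, v = row  # raises ValueError: too many values to unpack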
    

    Hence, we need to convert the RDD into key-value pairs before using partitionBy:

    keypair_rdd = myrdd.map(lambda x: (x[0], x[1:]))
    

    Now the RDD consists of key-value pairs, and the data can therefore be distributed across partitions by key. Collecting it (keypair_rdd.collect()) shows:

    [(u'1', (u'AAA', u'HR', datetime.date(2018, 12, 6))), 
    (u'1', (u'BBB', u'HR', datetime.date(2017, 10, 26))), 
    (u'2', (u'XXX', u'ADMIN', datetime.date(2018, 10, 22))), 
    (u'2', (u'YYY', u'ADMIN', datetime.date(2015, 10, 19))), 
    (u'2', (u'ZZZ', u'IT', datetime.date(2018, 5, 14))), 
    (u'3', (u'GGG', u'HR', datetime.date(2018, 6, 30)))]
    

    Using partitionBy on the key-value RDD now:

    newrdd = keypair_rdd.partitionBy(5, lambda k: int(k))
    

    Let's take a look at the partitions. The data is grouped, and rows with the same key are now stored in the same partition; two of the five partitions are empty.

    >>> print("Partitions structure: {}".format(newrdd.glom().map(len).collect()))
    Partitions structure: [0, 2, 3, 1, 0]
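
    The empty slots follow from the arithmetic PySpark applies: the supplied function is evaluated on each key and the result is taken modulo the number of partitions. A quick check (a sketch, not part of the original session):

    # Map each key to its partition index: partitionFunc(key) % numPartitions.
    for key in [u'1', u'2', u'3']:
        print("{} -> {}".format(key, int(key) % 5))
    # '1' -> 1, '2' -> 2, '3' -> 3; partitions 0 and 4 stay empty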
    

    Now let's say I want to partition my data with a custom scheme. I've created the function below to keep keys 1 and 3 in the same partition:

    import random

    def partitionFunc(key):
        # keys 1 and 3 always go to partition 0; everything else is
        # scattered randomly across partitions 1 and 2
        if key == 1 or key == 3:
            return 0
        else:
            return random.randint(1, 2)

    newrdd = keypair_rdd.partitionBy(5, lambda k: partitionFunc(int(k)))

    >>> print("Partitions structure: {}".format(newrdd.glom().map(len).collect()))
    Partitions structure: [3, 3, 0, 0, 0]
    

    As you can see, keys 1 and 3 are stored in one partition and the rest in another. Note that random.randint makes the placement of the other keys non-deterministic, so the exact structure can vary between runs.

    I hope this helps. Try partitionBy on your own DataFrame: convert it to a key-value-pair RDD first, and make sure the partition function you supply returns an integer.
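
    Coming back to the question's original goal, joining large tables: if both sides are partitioned with an equal partitioner (the same partition function object and the same partition count), the join can reuse that layout instead of shuffling both sides again. Below is a minimal sketch, assuming a second, hypothetical table udb.emp_salary_table keyed the same way; it uses PySpark's default portable_hash so that the partitioner also matches the one the join applies internally:

    from pyspark.rdd import portable_hash

    num_partitions = 5

    # udb.emp_salary_table is a hypothetical second table for illustration.
    rdd_a = spark.table("udb.emp_details_table").rdd \
        .map(lambda x: (x[0], x[1:])) \
        .partitionBy(num_partitions, portable_hash)
    rdd_b = spark.table("udb.emp_salary_table").rdd \
        .map(lambda x: (x[0], x[1:])) \
        .partitionBy(num_partitions, portable_hash)

    # Both RDDs now carry equal partitioners (same function, same count),
    # so matching keys are already co-located and the join below can
    # reuse the existing partitioning rather than repartitioning again.
    joined = rdd_a.join(rdd_b, num_partitions)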
