getting the new row id from pySpark SQL write to remote mysql db (JDBC)

Submitted by 萝らか妹 on 2020-01-05 06:31:12

Question


I am using pyspark-sql to create rows in a remote mysql db, using JDBC.

I have two tables, parent_table(id, value) and child_table(id, value, parent_id), so each row in parent_table may have any number of rows in child_table associated with it.

Now I want to create some new data and insert it into the database. I'm using the code guidelines here for the write operation, but I would like to be able to do something like:

parentDf = sc.parallelize([(5,), (6,), (7,)]).toDF(('value',))
parentWithIdDf = parentDf.write.mode('append') \
                         .format("jdbc") \
                         .option("url", "jdbc:mysql://" + host_name + "/" + db_name) \
                         .option("dbtable", table_name) \
                         .option("user", user_name) \
                         .option("password", password_str) \
                         .save()
# The assignment on the previous line is wrong, as pyspark.sql.DataFrameWriter#save doesn't return anything.

I would like a way for the last line of code above to return a DataFrame with the new row ids for each row so I can do

childDf = parentWithIdDf.rdd.flatMap(lambda x: [[8, x[0]], [9, x[0]]]).toDF(('value', 'parent_id'))
childDf.write.mode('append')...

meaning that at the end I would have in my remote database

parent_table
+----+-------+
| id | value |
+----+-------+
| 1  |   5   |
| 2  |   6   |
| 3  |   7   |
+----+-------+

child_table
+----+-------+-----------+
| id | value | parent_id |
+----+-------+-----------+
| 1  |   8   |     1     |
| 2  |   9   |     1     |
| 3  |   8   |     2     |
| 4  |   9   |     2     |
| 5  |   8   |     3     |
| 6  |   9   |     3     |
+----+-------+-----------+

As noted in the comment in the first code snippet above, pyspark.sql.DataFrameWriter#save doesn't return anything (per its documentation), so how can I achieve this?

Am I doing something completely wrong? It looks like there is no way to get data back from a Spark action (which save is), while I would like to use this action as a transformation, which leads me to think I may be thinking about all of this the wrong way.


Answer 1:


A simple answer is to use a timestamp plus an auto-incrementing number to create a unique ID. This only works if only one server is writing at a given instant. :)
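A minimal sketch of that idea, assuming a single writer at a time: the ids are built from a millisecond timestamp plus a per-row counter (which assumes fewer than a million rows per batch and BIGINT id columns), so they are known on the Spark side before the write and the child rows can reference them. host_name, db_name, user_name and password_str are the question's variables; the table and column names follow the question, and the child ids are left for MySQL to auto-increment as in the expected output.

# A sketch under the assumptions above: single writer, ids generated in Spark.
import time

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

jdbc_url = "jdbc:mysql://" + host_name + "/" + db_name
jdbc_props = {"user": user_name, "password": password_str}

# Prefix a per-row counter with the current time in milliseconds, so the
# parent ids are known before anything is written to MySQL.
batch = int(time.time() * 1000) * 10**6
parentWithIdDf = (spark.createDataFrame([(5,), (6,), (7,)], ['value'])
                  .rdd.zipWithIndex()
                  .map(lambda pair: (batch + pair[1], pair[0]['value']))
                  .toDF(('id', 'value')))

parentWithIdDf.write.mode('append').jdbc(jdbc_url, 'parent_table', properties=jdbc_props)

# The ids are still in parentWithIdDf, so the child rows can reference them
# directly; MySQL auto-increments the child ids.
childDf = (parentWithIdDf.rdd
           .flatMap(lambda row: [(8, row['id']), (9, row['id'])])
           .toDF(('value', 'parent_id')))

childDf.write.mode('append').jdbc(jdbc_url, 'child_table', properties=jdbc_props)

Generating the ids in Spark sidesteps the original problem: nothing has to come back from save, because nothing about the parent rows is unknown before the write.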



Source: https://stackoverflow.com/questions/52184502/getting-the-new-row-id-from-pyspark-sql-write-to-remote-mysql-db-jdbc
