getting the new row id from pySpark SQL write to remote mysql db (JDBC)
Question

I am using pyspark-sql to create rows in a remote MySQL database over JDBC. I have two tables, `parent_table(id, value)` and `child_table(id, value, parent_id)`, so each row in `parent_table` may have as many rows in `child_table` associated with it as needed. Now I want to create some new data and insert it into the database. I'm following the code guidelines here for the write operation, but I would like to be able to do something like:

```python
parentDf = sc.parallelize([(5,), (6,), (7,)]).toDF(('value',))
parentWithIdDf =
```
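Since Spark's `DataFrameWriter.jdbc` does not surface the keys the database generates, one conceivable workaround is to write the parent rows, read the table back to recover the ids MySQL assigned, and join those onto the child rows before writing them. A minimal sketch, assuming an auto-increment `id` column on `parent_table` and hypothetical connection details (`url`, `props`, and the sample data are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical connection details -- adjust to your environment.
url = "jdbc:mysql://db-host:3306/mydb"
props = {"user": "myuser", "password": "secret",
         "driver": "com.mysql.jdbc.Driver"}

# Parent rows to insert; MySQL assigns the auto-increment id on write.
parent_df = spark.createDataFrame([(5,), (6,), (7,)], ["value"])
parent_df.write.jdbc(url=url, table="parent_table",
                     mode="append", properties=props)

# Read the table back to recover the ids MySQL generated.
parent_with_id = spark.read.jdbc(url=url, table="parent_table",
                                 properties=props)

# Build child rows by joining on the recovered parent ids.
child_values = spark.createDataFrame([(5, "a"), (6, "b")],
                                     ["parent_value", "value"])
child_df = (child_values
            .join(parent_with_id,
                  child_values.parent_value == parent_with_id.value)
            .select(parent_with_id.id.alias("parent_id"),
                    child_values.value))
child_df.write.jdbc(url=url, table="child_table",
                    mode="append", properties=props)
```

Note that this round trip only identifies the new rows correctly if `value` (or some other written column) uniquely identifies them; with duplicate values you would need a client-generated key, such as a UUID column, to match rows back up.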