I'd like to save data in a Spark (v 1.3.0) DataFrame to a Hive table using PySpark.
The documentation states:
\"spark.sql.hive.convertMetasto
metadata doesn't already exist. In other words, it will add any partitions that exist on HDFS but not in metastore, to the hive metastore.
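
For context, here is a minimal sketch of the kind of write I have in mind, using the Spark 1.3 API (HiveContext, DataFrame.saveAsTable, registerTempTable); the table, app, and column names are hypothetical:

```python
from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="save-df-to-hive")
# A HiveContext (not a plain SQLContext) is needed to talk to the
# Hive metastore in Spark 1.3.
sqlContext = HiveContext(sc)

# Hypothetical example data; in my case the DataFrame already exists.
df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Option 1: save directly as a metastore table. In Spark 1.3 the
# method lives on the DataFrame itself (the df.write accessor only
# arrives in 1.4). The data is stored in a Spark SQL-specific layout
# (Parquet by default), which Hive itself may not read directly.
df.saveAsTable("my_table", mode="overwrite")

# Option 2: register a temporary table and create the Hive table
# with a HiveQL CTAS, which yields a table plain Hive can also read.
df.registerTempTable("df_tmp")
sqlContext.sql("CREATE TABLE my_table_hql AS SELECT * FROM df_tmp")
```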