Identify Partition Key Columns from a Table Using PySpark
Question: I need help finding the unique partition column names for a Hive table using PySpark. The table may have multiple partition columns, and preferably the output should be a list of the partition columns for the Hive table. It would be great if the result also included the datatypes of the partitioned columns. Any suggestions will be helpful.

Answer 1: It can be done using `desc` as shown below:

```python
>>> df = spark.sql("""desc test_dev_db.partition_date_table""")
>>> df.show(truncate=False)
+-----------
```
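Building on that, here is a minimal sketch of one way to extract just the partition columns and their datatypes as a list by parsing the `desc` output, which lists them after a `# Partition Information` marker row. The table name is carried over from the answer above and is illustrative; adapt it to your own database and table.

```python
# Collect the rows of the DESCRIBE output; each row has
# col_name, data_type, and comment fields.
rows = spark.sql("desc test_dev_db.partition_date_table").collect()

partition_cols = []
in_partition_section = False
for row in rows:
    name = (row.col_name or "").strip()
    if name == "# Partition Information":
        in_partition_section = True   # partition columns follow this marker
        continue
    if in_partition_section:
        if name.startswith("#") or not name:
            continue                  # skip the "# col_name" header row and blanks
        partition_cols.append((name, row.data_type))

print(partition_cols)  # e.g. [('year', 'string'), ('month', 'string')]
```

Alternatively, `spark.catalog.listColumns("partition_date_table", "test_dev_db")` returns `Column` objects with an `isPartition` flag, which avoids parsing the `desc` output entirely.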