I tried a simple example like:
data = sqlContext.read.format("csv").option("header", "true").option("inferSchema", "true").load("/databricks-datasets/
Since my input file contained tabs, removing the tabs/spaces from the header, or reading the file with an explicit tab separator, made the result display correctly.
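For the option-chained reader in the first snippet, a minimal sketch of the same read with the tab separator passed as an option (the path here is just a placeholder):

# same read as above, but with an explicit tab separator
# ("/path/to/SalesLTProduct.txt" is a placeholder path)
data = spark.read.format("csv") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("sep", "\t") \
    .load("/path/to/SalesLTProduct.txt")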
My example:
saledf = spark.read.csv("SalesLTProduct.txt", header=True, inferSchema=True, sep='\t')
saledf.printSchema()
root
|-- ProductID: string (nullable = true)
|-- Name: string (nullable = true)
|-- ProductNumber: string (nullable = true)
saledf.describe('ProductNumber').show()
+-------+-------------+
|summary|ProductNumber|
+-------+-------------+
| count| 295|
| mean| null|
| stddev| null|
| min| BB-7421|
| max| WB-H098|
+-------+-------------+
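As a side note, mean and stddev come back null here because ProductNumber (and ProductID above) were read as strings. A quick sketch of casting a column to a numeric type before describing it, assuming the standard pyspark.sql.functions import:

from pyspark.sql.functions import col

# cast ProductID to an integer so describe() can compute mean/stddev
saledf.withColumn("ProductID", col("ProductID").cast("int")) \
      .describe("ProductID") \
      .show()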