Question
The code below works and creates a Spark DataFrame from a text file. However, I'm trying to use the header option to treat the first row as the header, and for some reason it doesn't seem to be happening. I cannot understand why! It must be something obvious, but I cannot solve it.
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.master("local").appName("Word Count")\
...     .config("spark.some.config.option", "some-value")\
...     .getOrCreate()
>>> df = spark.read.option("header", "true")\
...     .option("delimiter", ",")\
...     .option("inferSchema", "true")\
...     .text("StockData/ETFs/aadr.us.txt")
>>> df.take(3)
Returns the following:
[Row(value=u'Date,Open,High,Low,Close,Volume,OpenInt'), Row(value=u'2010-07-21,24.333,24.333,23.946,23.946,43321,0'), Row(value=u'2010-07-22,24.644,24.644,24.362,24.487,18031,0')]
>>>df.columns
Returns the following:
['value']
Answer 1:
Issue
The issue is that you are using the .text API call instead of .csv or .load. If you read the .text API documentation, it says:
def text(self, paths):
    """Loads text files and returns a :class:`DataFrame` whose schema starts with a
    string column named "value", and followed by partitioned columns if there are any.

    Each line in the text file is a new row in the resulting DataFrame.

    :param paths: string, or list of strings, for input path(s).

    >>> df = spark.read.text('python/test_support/sql/text-test.txt')
    >>> df.collect()
    [Row(value=u'hello'), Row(value=u'this')]
    """
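To illustrate, here is a minimal sketch (reusing the file path from the question) showing that .text loads each raw line into a single string column named "value", so the header, delimiter, and inferSchema options you pass are effectively ignored by this source:

# .text reads raw lines; header/delimiter/inferSchema have no effect here
df_text = spark.read.option("header", "true").text("StockData/ETFs/aadr.us.txt")
df_text.printSchema()
# root
#  |-- value: string (nullable = true)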
Solution using .csv
Change the .text function call to .csv and you should be fine:
df = spark.read.option("header", "true") \
.option("delimiter", ",") \
.option("inferSchema", "true") \
.csv("StockData/ETFs/aadr.us.txt")
df.show(2, truncate=False)
which should give you
+-------------------+------+------+------+------+------+-------+
|Date |Open |High |Low |Close |Volume|OpenInt|
+-------------------+------+------+------+------+------+-------+
|2010-07-21 00:00:00|24.333|24.333|23.946|23.946|43321 |0 |
|2010-07-22 00:00:00|24.644|24.644|24.362|24.487|18031 |0 |
+-------------------+------+------+------+------+------+-------+
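Since inferSchema is enabled, Spark should also infer the column types from the data. A quick way to confirm is to print the schema; the types below are what I'd expect for this file, assuming the Date column is parsed as a timestamp (consistent with the output above):

df.printSchema()
# root
#  |-- Date: timestamp (nullable = true)
#  |-- Open: double (nullable = true)
#  |-- High: double (nullable = true)
#  |-- Low: double (nullable = true)
#  |-- Close: double (nullable = true)
#  |-- Volume: integer (nullable = true)
#  |-- OpenInt: integer (nullable = true)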
Solution using .load
.load assumes the file is in Parquet format unless a format is specified, so you also need to define the format explicitly:
df = spark.read\
.format("com.databricks.spark.csv")\
.option("header", "true") \
.option("delimiter", ",") \
.option("inferSchema", "true") \
.load("StockData/ETFs/aadr.us.txt")
df.show(2, truncate=False)
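As a side note, the CSV reader is built into Spark since 2.0, so (assuming you are on Spark 2.x, as the question suggests) you can use the short format name instead of the external Databricks package name:

# built-in CSV data source, no external package needed in Spark 2.x
df = spark.read\
    .format("csv")\
    .option("header", "true")\
    .option("delimiter", ",")\
    .option("inferSchema", "true")\
    .load("StockData/ETFs/aadr.us.txt")

df.show(2, truncate=False)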
I hope the answer is helpful.
Source: https://stackoverflow.com/questions/49471192/spark-2-3-0-read-text-file-with-header-option-not-working