I am trying to parse a fixed-width text file.
My text file looks like the following, and I need a row ID, a date, a string, and an integer:
00101292017you1234
00201302017 me5678
I can read the text file into an RDD using sc.textFile(path), and I can call createDataFrame with a parsed RDD and a schema. It's the parsing in between those two steps that I'm stuck on.
Spark's Column.substr method can handle fixed-width columns, for example:
df = spark.read.text("/tmp/sample.txt")
df.select(
    df.value.substr(1, 3).alias('id'),
    df.value.substr(4, 8).alias('date'),
    df.value.substr(12, 3).alias('string'),
    df.value.substr(15, 4).cast('integer').alias('integer')
).show()
will result in:
+---+--------+------+-------+
| id| date|string|integer|
+---+--------+------+-------+
|001|01292017| you| 1234|
|002|01302017| me| 5678|
+---+--------+------+-------+
Once the columns are split, you can reformat them and use them as in a normal Spark DataFrame.
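For instance, to turn the date string into a real date column, a minimal sketch (assuming the dates are in MMddyyyy order, as in the sample data) could look like:

from pyspark.sql import functions as F

parsed = df.select(
    df.value.substr(1, 3).alias('id'),
    df.value.substr(4, 8).alias('date'),
    df.value.substr(12, 3).alias('string'),
    df.value.substr(15, 4).cast('integer').alias('integer')
)

# Parse the MMddyyyy string into a DateType column (format is an assumption).
typed = parsed.withColumn('date', F.to_date(F.col('date'), 'MMddyyyy'))
typed.printSchema()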
I want to automate this process, as the number of columns will differ between files:
df.value.substr(1, 3).alias('id'),
df.value.substr(4, 8).alias('date'),
df.value.substr(12, 3).alias('string'),
df.value.substr(15, 4).cast('integer').alias('integer')
I created a Python function that generates these expressions from a schema file, but when I call
df.select("my automated string").show()
it throws an AnalysisException.
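The usual cause of that AnalysisException is that select received the generated expressions as one literal string, so Spark looks for a single column whose name is the entire string. Build Column objects instead and unpack them into select. A minimal sketch, assuming a hypothetical schema list of (name, start, length, type) tuples read from your schema file:

from pyspark.sql import functions as F

# Hypothetical schema derived from the schema file: (name, start, length, type).
schema = [
    ('id', 1, 3, 'string'),
    ('date', 4, 8, 'string'),
    ('string', 12, 3, 'string'),
    ('integer', 15, 4, 'integer'),
]

# Build real Column expressions instead of one big string.
cols = [
    F.col('value').substr(start, length).cast(dtype).alias(name)
    for name, start, length, dtype in schema
]

df.select(*cols).show()

Alternatively, keep generating strings but pass them to selectExpr, which parses SQL expressions such as "substring(value, 1, 3) AS id".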
Source: https://stackoverflow.com/questions/41944689/pyspark-parse-fixed-width-text-file