Question
I am trying to read my delimited file, which is tab-separated, but I am not able to read all of the records.
Here are my input records (columns separated by tabs):
head1 head2 head3
a b c
a2 a3 a4
a1 "b1 "c1
My code:
var inputDf = sparkSession.read
.option("delimiter","\t")
.option("header", "true")
// .option("inferSchema", "true")
.option("nullValue", "")
.option("escape","\"")
.option("multiLine", true)
.option("nullValue", null)
.option("nullValue", "NULL")
.schema(finalSchema)
.csv("file:///C:/Users/prhasija/Desktop/retriedAddresses_4.txt")
// .csv(inputPath)
.na.fill("")
// .repartition(4)
println(inputDf.count)
Output:
2 records
Why is it not returning 3 as the count?
Answer 1:
I think you need to add the following options to your read: .option("escape", "\\") and .option("quote", "\\"). Pointing the quote character at a backslash (a character that does not occur in the data) effectively disables quote handling, so a stray " is read as literal text instead of opening an unterminated quoted field that swallows the rest of the file.
val test = spark.read
.option("header", true)
.option("quote", "\\")
.option("escape", "\\")
.option("delimiter", ",")
.csv(".../test.csv")
Here is the test CSV I used it on:
a,b,c
1,b,a
5,d,e
5,"a,"f
Full output:
scala> val test = spark.read.option("header", true).option("quote", "\\").option("escape", "\\").option("delimiter", ",").csv("./test.csv")
test: org.apache.spark.sql.DataFrame = [a: string, b: string ... 1 more field]
scala> test.show
+---+---+---+
| a| b| c|
+---+---+---+
| 1| b| a|
| 5| d| e|
| 5| "a| "f|
+---+---+---+
scala> test.count
res11: Long = 3
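Applied to the tab-separated input from the question, the same idea would look roughly like this (a sketch, not tested against your file; the path comes from the question, and the explicit three-column schema is an assumption standing in for the undefined finalSchema):
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val spark = SparkSession.builder().appName("tsv-read").master("local[*]").getOrCreate()

// Assumed schema: three string columns matching the sample header row.
val schema = StructType(Seq(
  StructField("head1", StringType),
  StructField("head2", StringType),
  StructField("head3", StringType)
))

val inputDf = spark.read
  .option("delimiter", "\t")
  .option("header", "true")
  // Point quote/escape at a character that never appears in the data,
  // so the stray double quotes in row 3 are kept as literal text.
  .option("quote", "\\")
  .option("escape", "\\")
  .schema(schema)
  .csv("file:///C:/Users/prhasija/Desktop/retriedAddresses_4.txt")

println(inputDf.count) // expected: 3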
Source: https://stackoverflow.com/questions/52995878/escape-quotes-is-not-working-in-spark-2-2-0-while-reading-csv