Adding a custom delimiter adds double quotes in the final Spark data frame CSV output

Submitted by 放肆的年华 on 2019-12-02 16:56:23

Question


I have a data frame in which I am replacing the default delimiter , with |^|. It works fine and I get the expected result, except where a , appears in the records. For example, I have one such record like the one below:

4295859078|^|914|^|INC|^|Balancing Item - Non Operating Income/(Expense),net|^||^||^|IIII|^|False|^||^||^||^||^|False|^||^||^||^||^|505096|^|505074|^|505074|^|505096|^|505096|^||^|505074|^|True|^||^|3014960|^||^|I|!|

So there is a , in the 4th field.

This is what I am doing to replace the , :

// Replace nulls with "" and concatenate every column except
// DataPartition into one |^|-delimited string column.
val dfMainOutputFinal = dfMainOutput.na.fill("").select(
  $"DataPartition",
  $"StatementTypeCode",
  concat_ws("|^|", dfMainOutput.schema.fieldNames.filter(_ != "DataPartition").map(c => col(c)): _*).as("concatenated"))

// Build the header line from the original column names, ending in |!|.
val headerColumn = df.columns.filter(v => !v.contains("^") && !v.contains("_c")).toSeq
val header = headerColumn.dropRight(1).mkString("", "|^|", "|!|")

// Drop the literal string "null" and rename the column to the header line.
val dfMainOutputFinalWithoutNull = dfMainOutputFinal
  .withColumn("concatenated", regexp_replace(col("concatenated"), "null", ""))
  .withColumnRenamed("concatenated", header)


dfMainOutputFinalWithoutNull.repartition(1).write.partitionBy("DataPartition","StatementTypeCode")
  .format("csv")
  .option("nullValue", "")
  .option("header", "true")
  .option("codec", "gzip")
  .save("s3://trfsmallfffile/FinancialLineItem/output")

And I get output like this in the saved part file:

"4295859078|^|914|^|INC|^|Balancing Item - Non Operating Income/(Expense),net|^||^||^|IIII|^|false|^||^||^||^||^|false|^||^||^||^||^|505096|^|505074|^|505074|^|505096|^|505096|^||^|505074|^|true|^||^|3014960|^||^|I|!|"

My problem is the " at the start and end of the result.

If I remove the comma, then I get the correct result, like below:

4295859078|^|914|^|INC|^|Balancing Item - Non Operating Income/(Expense)net|^||^||^|IIII|^|false|^||^||^||^||^|false|^||^||^||^||^|505096|^|505074|^|505074|^|505096|^|505096|^||^|505074|^|true|^||^|3014960|^||^|I|!|

Answer 1:


This is a standard CSV feature. If there is an occurrence of the delimiter in the actual data (referred to as delimiter collision), the field is enclosed in quotes.
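For illustration, here is a minimal sketch of that behavior (the column name and output path are made up), e.g. in spark-shell, where spark and its implicits are available:

import spark.implicits._

// One value is plain, the other contains the default delimiter ",".
val demo = Seq("plain value", "value,with,commas").toDF("col")

// Besides the header, the part file contains:
//   plain value
//   "value,with,commas"   <- quoted due to delimiter collision
demo.write.option("header", "true").csv("/tmp/collision-demo")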

You can try

df.write.option("delimiter", somechar)

where somechar should be a character that doesn't occur in your data.
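As a sketch, here is that suggestion applied to the write from the question; "\u0001" is an arbitrary control character assumed not to occur in the data. With a delimiter that never appears inside a field, the writer has no reason to quote anything:

dfMainOutputFinalWithoutNull.repartition(1)
  .write.partitionBy("DataPartition", "StatementTypeCode")
  .format("csv")
  .option("delimiter", "\u0001")  // assumed absent from the data
  .option("nullValue", "")
  .option("header", "true")
  .option("codec", "gzip")
  .save("s3://trfsmallfffile/FinancialLineItem/output")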

EDIT:

A more robust solution would be to disable quoting entirely, since you are writing a dataframe with only one column.

dfMainOutputFinalWithoutNull.repartition(1)
  .write.partitionBy("DataPartition", "StatementTypeCode")
  .format("csv")
  .option("nullValue", "")
  .option("quoteMode", "NONE")
  // .option("delimiter", ";")  // assuming ";" is not present in the data
  .option("header", "true")
  .option("codec", "gzip")
  .save("s3://trfsmallfffile/FinancialLineItem/output")
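Note that quoteMode comes from the Databricks spark-csv package; the CSV source built into Spark 2.x ignores it. If you are on the built-in source, a comparable effect is commonly achieved by pointing the quote option at a character that never appears in the data, e.g. the NUL character. A sketch under that assumption:

dfMainOutputFinalWithoutNull.repartition(1)
  .write.partitionBy("DataPartition", "StatementTypeCode")
  .format("csv")
  .option("nullValue", "")
  .option("quote", "\u0000")  // effectively disables quoting, assuming NUL never occurs in the data
  .option("header", "true")
  .option("codec", "gzip")
  .save("s3://trfsmallfffile/FinancialLineItem/output")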


Source: https://stackoverflow.com/questions/47002414/adding-custom-delimiter-adds-double-quotes-in-the-final-spark-data-frame-csv-out
