Terribly new to Spark, Hive, big data, Scala and all. I'm trying to write a simple function that takes an sqlContext, loads a CSV file from S3 and returns a DataFrame.
If you check the GitHub page, there is a delimiter parameter for spark-csv (as you also noted). Use it like this:
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")      // Use first line of all files as header
  .option("inferSchema", "true") // Automatically infer data types
  .option("delimiter", "\u0001")
  .load("cars.csv")
With Spark 2.x and the built-in CSV API, use the sep option:
val df = spark.read
  .option("sep", "\u0001")
  .csv("path_to_csv_files")
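To answer the question as originally posed, the reader can be wrapped in a small helper that takes the session and a path and returns a DataFrame. A minimal sketch, assuming Spark 2.x and that an S3 filesystem connector (e.g. s3a) is configured; the function name, default delimiter, and path are illustrative placeholders:

import org.apache.spark.sql.{DataFrame, SparkSession}

// Loads a delimited CSV from S3 (or any supported filesystem)
// and returns a DataFrame. `path` would be something like
// "s3a://my-bucket/data/cars.csv" (placeholder).
def loadCsv(spark: SparkSession, path: String, sep: String = ","): DataFrame =
  spark.read
    .option("header", "true")      // first line is the header
    .option("inferSchema", "true") // infer column types
    .option("sep", sep)            // field delimiter, e.g. "\u0001"
    .csv(path)

Note that inferSchema makes an extra pass over the data to determine column types; for large S3 datasets, supplying an explicit schema via .schema(...) avoids that second read.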