I have a Spark job (on 1.4.1) receiving a stream of Kafka events. I would like to save them continuously as Parquet on Tachyon.
val lines = KafkaUtils.createStream(ssc, zkQuorum, groupId, topicMap).map(_._2)
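Roughly what I have in mind for the write side (sc, ssc, zkQuorum, groupId, topicMap and the tachyon:// path are just placeholders):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

lines.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // append each micro-batch as Parquet on Tachyon
    rdd.map(Tuple1.apply).toDF("line")
      .write.mode("append").parquet("tachyon://host:19998/events")
  }
}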
setting "parquet.enable.summary-metadata" as text ("false" and not false) seems to work for us.
By the way, Spark does use the _common_metadata file (we copy it over manually for repetitive jobs).
Spark 2.0 doesn't save metadata summaries by default any more; see SPARK-15719.
If you are working with data hosted in S3, you may still find Parquet performance hurt by Parquet itself trying to scan the tail of every object to check its schema. That can be disabled explicitly:
sparkConf.set("spark.sql.parquet.mergeSchema", "false")