I have a Spark job that reads data from a configuration file. This file is a typesafe config file.
The code that reads the config uses the Typesafe Config API (a `Config` object loaded in the driver).
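For reference, here is a minimal sketch of what such driver code might look like, using the Typesafe Config library (the com.typesafe:config artifact). The object name and the config key are made-up examples, not something from the original code:

```scala
import com.typesafe.config.{Config, ConfigFactory}

object AppConfig {
  // ConfigFactory.load() reads application.conf from the classpath.
  // When the file is shipped with --files, it lands in the YARN
  // container's working directory, which is typically on the classpath.
  val config: Config = ConfigFactory.load()

  // "myapp.input.path" is a hypothetical key, used here for illustration.
  val inputPath: String = config.getString("myapp.input.path")
}
```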
So with a little digging into the Spark 1.6.1 source code, I found the solution.
These are the steps you need to take to get both log4j.xml and application.conf picked up by your application when submitting to YARN in cluster mode:
--files "$ROOT_DIR/application.conf,$LOG4J_FULL_PATH/log4j.xml"
(separate the files with a comma)

--conf spark.driver.extraJavaOptions="-Dlog4j.configuration=file:log4j.xml"
- notice that once you pass a file with --files, you can refer to it by its file name alone, without any path

Note: I haven't tried it, but from what I saw, if you're trying to run in client mode, the spark.driver.extraJavaOptions setting should be replaced by the --driver-java-options flag.
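Putting the flags together, a full submit command might look like the sketch below. The main class and jar name are placeholders, and ROOT_DIR and LOG4J_FULL_PATH are assumed to point at the directories holding your config files:

```shell
# Hypothetical spark-submit invocation (Spark 1.6.x, YARN cluster mode).
# com.example.MyApp and my-app-assembly.jar are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --files "$ROOT_DIR/application.conf,$LOG4J_FULL_PATH/log4j.xml" \
  --conf spark.driver.extraJavaOptions="-Dlog4j.configuration=file:log4j.xml" \
  my-app-assembly.jar
```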
That's it. It's so simple, and I wish these things were documented better. I hope this answer helps someone.
Cheers