I have a Scala Maven project that uses Spark, and I am trying to implement logging using Logback. I am compiling my application to a jar and deploying it to an EC2 instance.
I packaged logback and log4j-to-slf4j, along with my other dependencies and src/main/resources/logback.xml, in a fat jar.
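For reference, a minimal sketch of how the application obtains its logger, assuming all logging goes through the SLF4J API so that the bundled Logback binding serves it at runtime (the object name and message are illustrative):

import org.slf4j.{Logger, LoggerFactory}

object MySparkJob {
  // SLF4J facade; Logback handles this call when it is the binding on the classpath.
  private val log: Logger = LoggerFactory.getLogger(getClass)

  def main(args: Array[String]): Unit = {
    log.info("Application started")  // formatted according to src/main/resources/logback.xml
  }
}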
When I run spark-submit with
--conf "spark.driver.userClassPathFirst=true" \
--conf "spark.executor.userClassPathFirst=true"
all logging is handled by logback.
I had the same problem: I was trying to use a logback config file. I tried many permutations, but I could not get it to work.
I was accessing logback through grizzled-slf4j using this SBT dependency:
"org.clapper" %% "grizzled-slf4j" % "1.3.0",
Once I added the log4j config file:
src/main/resources/log4j.properties
my logging worked fine.
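For illustration, grizzled-slf4j is a thin Scala wrapper over the SLF4J API, so it logs through whichever backend is configured. A minimal usage sketch (the class name and message are made up):

import grizzled.slf4j.Logging

class WordCounter extends Logging {
  // The Logging trait provides a logger for the enclosing class plus convenience methods.
  def run(): Unit = {
    info("Starting word count")  // by-name message, only evaluated if INFO is enabled
  }
}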
I had encountered a very similar problem. Our build was similar to yours (but we used sbt) and is described in detail here: https://stackoverflow.com/a/45479379/1549135
Running this solution locally worked fine, but spark-submit would ignore all the exclusions and the new logging framework (logback), because Spark's classpath has priority over the deployed jar. And since Spark's classpath contains log4j 1.2.xx, it would simply load it and ignore our setup.
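For context, the build we had in place was roughly the following sketch (the Spark version and exact coordinates are illustrative rather than copied from the linked answer): log4j 1.2 and the slf4j-log4j12 binding are excluded, and log4j-over-slf4j plus logback are pulled in instead.

// build.sbt (sketch): keep log4j 1.2 and its SLF4J binding out of the fat jar,
// and route any remaining log4j calls to SLF4J/logback via log4j-over-slf4j.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-core" % "2.2.0" % Provided)
    .exclude("log4j", "log4j")
    .exclude("org.slf4j", "slf4j-log4j12"),
  "org.slf4j"      % "log4j-over-slf4j" % "1.7.25",
  "ch.qos.logback" % "logback-classic"  % "1.2.3"
)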
I have used several sources, but quoting the Spark 1.6.1 docs (this applies to the latest Spark / 2.2.0 as well):
spark.driver.extraClassPath
Extra classpath entries to prepend to the classpath of the driver. Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-class-path command line option or in your default properties file.
spark.executor.extraClassPath
Extra classpath entries to prepend to the classpath of executors. This exists primarily for backwards-compatibility with older versions of Spark. Users typically should not need to set this option.
What is not written here, though, is that extraClassPath takes precedence over Spark's default classpath!
So now the solution should be quite obvious. Put these jars in one local folder:
- log4j-over-slf4j-1.7.25.jar
- logback-classic-1.2.3.jar
- logback-core-1.2.3.jar
Then prepend that folder to both the driver and executor classpaths when running spark-submit:
libs="/absolute/path/to/libs/*"
spark-submit \
...
--master yarn \
--conf "spark.driver.extraClassPath=$libs" \
--conf "spark.executor.extraClassPath=$libs" \
...
/my/application/application-fat.jar \
param1 param2
I am just not yet sure if you can put those jars on HDFS. We have them locally next to the application jar.
Strangely enough, using Spark 1.6.1 I have also found this option in the docs:
spark.driver.userClassPathFirst, spark.executor.userClassPathFirst
(Experimental) Whether to give user-added jars precedence over Spark's own jars when loading classes in the driver. This feature can be used to mitigate conflicts between Spark's dependencies and user dependencies. It is currently an experimental feature. This is used in cluster mode only.
But simply setting:
--conf "spark.driver.userClassPathFirst=true" \
--conf "spark.executor.userClassPathFirst=true" \
did not work for me. So I am gladly using extraClassPath!
Cheers!
logback.xml
If you face any problems getting Spark to load your logback.xml, my question here might help you out:
Pass system property to spark-submit and read file from classpath or custom path
After much struggle I've found another solution: library shading. After shading org.slf4j, my application logs are separated from Spark's logs. Furthermore, the logback.xml in my application jar is honored.
Here you can find information on library shading in sbt; in this case it comes down to putting:
assemblyShadeRules in assembly += ShadeRule.rename("org.slf4j.**" -> "your_favourite_prefix.@0").inAll
in your build.sbt settings.
Side note: If you are not sure whether shading actually happened, open your jar in some archive browser and check whether the directory structure reflects the shaded one; in this case your jar should contain the path /your_favourite_prefix/org/slf4j, but not /org/slf4j.
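For context, a minimal sketch of the build pieces involved; the plugin version and the shaded prefix are illustrative. Your source keeps importing org.slf4j as before, since the rename is applied to the bytecode only when the assembly is built:

// project/plugins.sbt (sketch): shading is a feature of the sbt-assembly plugin.
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

// build.sbt (sketch): rewrite org.slf4j packages inside the fat jar so the
// application's SLF4J/logback stack cannot collide with the one Spark ships.
assemblyShadeRules in assembly += ShadeRule
  .rename("org.slf4j.**" -> "your_favourite_prefix.@0")
  .inAll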
I had to modify the solution presented by Atais to get it working in cluster mode. This worked for me:
libs="/absolute/path/to/libs/*"
spark-submit \
--master yarn \
--deploy-mode cluster \
... \
--jars $libs \
--conf spark.driver.extraClassPath=log4j-over-slf4j-1.7.25.jar:logback-classic-1.2.3.jar:logback-core-1.2.3.jar:logstash-logback-encoder-6.4.jar \
--conf spark.executor.extraClassPath=log4j-over-slf4j-1.7.25.jar:logback-classic-1.2.3.jar:logback-core-1.2.3.jar:logstash-logback-encoder-6.4.jar \
/my/application/application-fat.jar \
param1 param2
The underlying reason was that in cluster mode the jars were not available on all nodes; they had to be shipped explicitly with --jars and, even then, added to the classpath by filename via extraClassPath.
Update: I refined the solution further. You can also pass the jars as a list of URLs, i.e. --jars url1,url2,url3. These jars still have to be added to the classpath to take priority over log4j.