How can I run a custom Spark version in headless mode on HDP?
Question: How can I run Spark in headless mode? Currently, I am running Spark on an HDP 2.6.4 cluster (i.e. Spark 2.2 is installed by default). I have downloaded a Spark 2.4.1 / Scala 2.11 release in headless mode (i.e. with no Hadoop jars bundled in) from https://spark.apache.org/downloads.html. The exact name of the package is: pre-built with Scala 2.11 and user-provided Hadoop.

Now, when trying to run it, I follow https://spark.apache.org/docs/latest/hadoop-provided.html:

export SPARK_DIST_CLASSPATH=$(hadoop classpath)
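For context, a minimal sketch of how the headless build is typically wired up against an existing cluster's Hadoop, following the hadoop-provided.html approach above. The install path for the unpacked release and the HDP config directory are assumptions here, not taken from the question; adjust them to your layout.

```sh
# Sketch only: paths are illustrative, not from the original question.

# Where the unpacked "without-hadoop" release lives (hypothetical location).
export SPARK_HOME=/opt/spark-2.4.1-bin-without-hadoop

# Reuse the cluster's Hadoop configuration so Spark can locate HDFS and YARN
# (this is the standard config directory on an HDP node).
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Put the cluster's Hadoop jars on Spark's classpath, as hadoop-provided.html
# describes; this is the step quoted in the question.
export SPARK_DIST_CLASSPATH=$(hadoop classpath)

# Smoke test against YARN using the SparkPi example that ships with the release.
$SPARK_HOME/bin/spark-submit \
  --master yarn \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_2.11-2.4.1.jar 10
```

The idea behind the "user-provided Hadoop" build is that Spark itself ships no Hadoop jars, so everything Hadoop-related (client libraries and configuration) must come from the cluster installation via SPARK_DIST_CLASSPATH and HADOOP_CONF_DIR.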