I just copied the Spark Streaming Kafka wordcount Python code and used spark-submit to run it on a Spark cluster, but it fails with the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o23.loadClass.
: java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
I did build the jar spark-streaming-kafka-assembly_2.10-1.4.0-SNAPSHOT.jar, and I used the following command to submit the job: bin/spark-submit /data/spark-1.3.0-bin-hadoop2.4/wordcount.py --master spark://192.168.100.6:7077 --jars /data/spark-1.3.0-bin-hadoop2.4/kafka-assembly/target/spark-streaming-kafka-assembly_*.jar
Thanks in advance!
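For reference, the script is essentially the standard Kafka wordcount example; a minimal sketch of that kind of script (the ZooKeeper address and topic name below are placeholders, not my exact values) looks like this:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="PythonStreamingKafkaWordCount")
ssc = StreamingContext(sc, 1)  # 1-second batches

zkQuorum, topic = "192.168.100.6:2181", "test"  # placeholders
# This call is where the driver tries to load KafkaUtilsPythonHelper on the JVM side
kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})
lines = kvs.map(lambda x: x[1])
counts = lines.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.pprint()

ssc.start()
ssc.awaitTermination()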
Actually, I just realized you have put --jars after the script. The jar files will not be included unless they are specified before the script name. So use spark-submit --jars spark-streaming-kafka-assembly_2.10-1.3.1.jar Script.py instead of spark-submit Script.py --jars spark-streaming-kafka-assembly_2.10-1.3.1.jar.
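Applied to the command in your question, that would look something like the line below (the paths are taken from your question; note that --master also has to come before the script for the same reason, since everything after the script name is passed to the script as application arguments):

bin/spark-submit --master spark://192.168.100.6:7077 --jars /data/spark-1.3.0-bin-hadoop2.4/kafka-assembly/target/spark-streaming-kafka-assembly_*.jar /data/spark-1.3.0-bin-hadoop2.4/wordcount.py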
I had to reference a number of jars in my command to get this to work. Maybe try referencing the jars explicitly; it might not be picking them up correctly from the assembly jar you built.
/opt/spark/spark-1.3.1-bin-hadoop2.6/bin/spark-submit --jars /root/spark-streaming-kafka_2.10-1.3.1.jar,/usr/hdp/2.2.4.2-2/kafka/libs/kafka_2.10-0.8.1.2.2.4.2-2.jar,/usr/hdp/2.2.4.2-2/kafka/libs/zkclient-0.3.jar,/root/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar kafka_wordcount.py kafkaAddress:2181 topicName
Actually, it looks like it's not picking up this jar: kafka_2.10-0.8.1.2.2.4.2-2.jar
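One way to confirm whether a given jar actually contains the class the driver fails to load is to list its contents, for example (using the assembly jar path from the question):

jar tf /data/spark-1.3.0-bin-hadoop2.4/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.4.0-SNAPSHOT.jar | grep KafkaUtilsPythonHelper

If nothing is printed, the class is missing from that jar and spark-submit cannot be expected to find it, no matter how the jar is passed.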
Source: https://stackoverflow.com/questions/29485175/spark-submit-failed-with-spark-streaming-workdcount-python-code