Error while running PySpark DataProc Job due to python version


Question


I created a Dataproc cluster using the following command:

gcloud dataproc clusters create datascience \
--initialization-actions \
    gs://dataproc-initialization-actions/jupyter/jupyter.sh

However, when I submit my PySpark job, I get the following error:

Exception: Python in worker has different version 3.4 than that in driver 3.7, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
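One way to confirm the mismatch the error describes is to compare the interpreters on the master and on a worker directly. A minimal sketch, assuming the default Dataproc node names (<cluster>-m, <cluster>-w-0) and SSH access to the nodes (you may also need to pass --zone):

# Python used on the master node
gcloud compute ssh datascience-m --command 'python --version'

# Python used on the first worker node
gcloud compute ssh datascience-w-0 --command 'python --version'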

Any Thoughts?


Answer 1:


This is due to a difference in the Python versions between the master and the worker nodes. By default, the Jupyter image installs the latest version of Miniconda, which uses Python 3.7, while the workers are still using the default Python 3.6.

Solution: specify the Miniconda version when creating the cluster, i.e. install Python 3.6 on the master node as well:

gcloud dataproc clusters create example-cluster --metadata=MINICONDA_VERSION=4.3.30
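For the cluster in the question, that metadata flag would be combined with the Jupyter initialization action. A sketch, assuming the Jupyter action forwards MINICONDA_VERSION to the Miniconda install and that 4.3.30 ships Python 3.6:

gcloud dataproc clusters create datascience \
    --metadata=MINICONDA_VERSION=4.3.30 \
    --initialization-actions gs://dataproc-initialization-actions/jupyter/jupyter.sh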

Note:

  • this may need updating to provide a more sustainable way of managing the environment



Answer 2:


We have fixed this now -- thanks for the intermediate workaround, @brotich. Check out the discussion in #300.

PR #306 keeps Python at the same version that was already installed (3.6) and installs packages on all nodes, ensuring that the master and worker Python environments stay identical.

As a side effect, you can now choose your Python version by passing an argument to the conda init action, e.g. --metadata 'CONDA_PACKAGES="python==3.5"'.

PR #311 pins Miniconda to a particular version (currently 4.5.4) so that issues like this are avoided in the future. You can use --metadata 'MINICONDA_VERSION=latest' to keep the old behavior of always downloading the latest Miniconda.
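Putting these options together, a cluster with an explicitly chosen Python version might be created roughly as follows. This is only a sketch, assuming the Jupyter action passes the CONDA_PACKAGES metadata through to the conda setup it runs:

gcloud dataproc clusters create datascience \
    --metadata 'CONDA_PACKAGES="python==3.5"' \
    --initialization-actions gs://dataproc-initialization-actions/jupyter/jupyter.sh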




Answer 3:


UPDATE THE SPARK ENVIRONMENT TO USE PYTHON 3.7:

Open a new terminal and type the following command:

export PYSPARK_PYTHON=python3.7

This will ensure that the worker nodes use Python 3.7 (the same as the driver) and not the default Python 3.4.
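A slightly fuller sketch sets both sides explicitly and persists the setting in Spark's environment file so it survives new shells; the spark-env.sh path below is an assumption and may differ on your installation:

export PYSPARK_PYTHON=python3.7
export PYSPARK_DRIVER_PYTHON=python3.7

# Assumed Spark config location; adjust to your installation
echo 'export PYSPARK_PYTHON=python3.7' | sudo tee -a /etc/spark/conf/spark-env.sh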

DEPENDING ON WHICH VERSIONS OF PYTHON YOU HAVE, YOU MAY NEED TO INSTALL OR UPDATE ANACONDA:

(To install see: https://www.digitalocean.com/community/tutorials/how-to-install-anaconda-on-ubuntu-18-04-quickstart)

Make sure you have Anaconda 4.1.0 or higher. Open a new terminal and check your conda version by typing:

conda --version


If you are below Anaconda 4.1.0, type conda update conda.

  1. Next, check whether you have the nb_conda_kernels library by typing

conda list


  2. If you don’t see nb_conda_kernels, type

conda install nb_conda_kernels


  3. If you are using Python 2 and want a separate Python 3 environment, type the following:

conda create -n py36 python=3.6 ipykernel

py36 is the name of the environment. You could name it anything you want.

Alternatively, if you are using Python 3 and want a separate Python 2 environment, type the following:

conda create -n py27 python=2.7 ipykernel

py27 is the name of the environment. It uses Python 2.7.

  4. Ensure the Python versions were installed successfully, then close the terminal. Open a new terminal and type pyspark. You should see the new environments appear.
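As a final sanity check, a sketch of launching PySpark against one of the new environments (the activation command depends on your conda version):

source activate py36    # or: conda activate py36 on conda 4.4+
pyspark
# The startup banner should report the matching interpreter,
# e.g. "Using Python version 3.6.x ..."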


Source: https://stackoverflow.com/questions/51427175/error-while-running-pyspark-dataproc-job-due-to-python-version
