Submit a Python project to Dataproc job

Asked 2021-01-20 18:16

I have a Python project whose folder has the following structure:

main_directory - lib - lib.py
               - run - script.py

script.py

2 Answers
  • 2021-01-20 18:33

    To zip the dependencies -

    cd base-path-to-python-modules
    zip -qr deps.zip ./* -x script.py
    

    Copy deps.zip to HDFS or GCS, and use its URI when submitting the job as shown below.
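
    For example, the archive can be staged in GCS with the google-cloud-storage client. This is a minimal sketch; the bucket name and object path are placeholders, not values from the question:

    from google.cloud import storage

    # Upload deps.zip so the Dataproc job can reference it by gs:// URI.
    # 'your-bucket' and 'deps/deps.zip' are placeholders -- substitute your own.
    storage_client = storage.Client()
    bucket = storage_client.bucket('your-bucket')
    blob = bucket.blob('deps/deps.zip')
    blob.upload_from_filename('deps.zip')
    print(f'Uploaded gs://{bucket.name}/{blob.name}')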

    Submit the Python project (PySpark) using Dataproc's Python client library:

    from google.cloud import dataproc_v1
    from google.cloud.dataproc_v1.gapic.transports import (
        job_controller_grpc_transport)
    
    region = <cluster region>
    cluster_name = <your cluster name>
    project_id = <gcp-project-id>
    
    job_transport = (
        job_controller_grpc_transport.JobControllerGrpcTransport(
            address='{}-dataproc.googleapis.com:443'.format(region)))
    dataproc_job_client = dataproc_v1.JobControllerClient(job_transport)
    
    job_file = <gs://bucket/path/to/main.py or hdfs://file/path/to/main/job.py>
    
    # command-line arguments for the main job file
    args = ['arg1', 'arg2']
    
    # required only if the main python job file imports from other modules;
    # each can be a .py, .zip, or .egg file
    additional_python_files = ['hdfs://path/to/deps.zip', 'gs://path/to/moredeps.zip']
    
    job_details = {
        'placement': {
            'cluster_name': cluster_name
        },
        'pyspark_job': {
            'main_python_file_uri': job_file,
            'args': args,
            'python_file_uris': additional_python_files
        }
    }
    
    res = dataproc_job_client.submit_job(project_id=project_id,
                                         region=region, 
                                         job=job_details)
    job_id = res.reference.job_id
    
    print(f'Submitted dataproc job id: {job_id}')
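
    To block until the job finishes, the same client can be polled. A minimal sketch, assuming the pre-2.0 google-cloud-dataproc client used above and the JobStatus.State enum names; the 10-second interval is an arbitrary choice:

    import time

    def wait_for_job(client, project_id, region, job_id):
        """Poll the job until it reaches a terminal state (sketch)."""
        while True:
            job = client.get_job(project_id, region, job_id)
            state = job.status.State.Name(job.status.state)
            if state == 'ERROR':
                raise RuntimeError(job.status.details)
            if state == 'DONE':
                return job
            time.sleep(10)  # poll every 10 seconds

    final_job = wait_for_job(dataproc_job_client, project_id, region, job_id)
    print(f'Job {job_id} finished')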
    
  • 2021-01-20 18:51

    If you want to preserve the project structure when submitting a Dataproc job, you should package your project into a .zip file and pass it with the --py-files parameter when submitting the job:

    gcloud dataproc jobs submit pyspark --cluster=$CLUSTER_NAME --region=$REGION \
      --py-files libs.zip \
      run/script.py
    

    To create the zip archive, run:

    cd main_directory/
    zip -r libs.zip . -x run/script.py
    

    Refer to this blog post for more details on how to package dependencies in a zip archive for PySpark jobs.
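
    With the archive on the --py-files path, modules under lib/ can be imported directly in run/script.py. A minimal sketch, assuming lib/ contains an __init__.py and that lib.py defines a hypothetical helper() function:

    from pyspark.sql import SparkSession
    from lib import lib  # resolved from libs.zip passed via --py-files

    spark = SparkSession.builder.appName('example-job').getOrCreate()
    # helper() is a hypothetical function defined in lib/lib.py
    result = lib.helper(spark)
    print(result)
    spark.stop()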
