When running a PySpark job on a Dataproc cluster like this
gcloud --project <project_name> dataproc jobs submit pyspark --cluster <cluster_name> <python_script>
my print statements don't show up in my terminal.
Is there any way to output data onto the terminal in PySpark when running jobs on the cloud?
Edit: I would like to print/log info from within my transformation. For example:
def print_funct(l):
    print(l)
    return l

rddData.map(lambda l: print_funct(l)).collect()
should print every line of data in the RDD rddData.
Doing some digging, I found this answer about logging; however, testing it gave me the results described in this question, whose answer states that logging isn't possible within the transformation.
Printing or logging inside of a transform will end up in the Spark executor logs, which can be accessed through your Application's AppMaster or HistoryServer via the YARN ResourceManager Web UI.
You could alternatively collect the information you are printing alongside your output (e.g. in a dict or tuple). You could also stash it away in an accumulator and then print it from the driver.
If you are doing a lot of print-statement debugging, you might find it faster to SSH into your master node and use the pyspark REPL or IPython to experiment with your code. This would also allow you to use the --master local flag, which would make your print statements appear in stdout.
Source: https://stackoverflow.com/questions/37407256/pyspark-print-to-console