Question
I'm running a memory-intensive job in which I've created a pipeline that consists of:
1. Finding the best number of bins using Shimazaki and Shinomoto's bin-width algorithm.
2. Creating a new column by bucketizing the same column with the respective bin values found above.
3. Calculating the Weight of Evidence through 8 sequential SQL queries.
Config: Python - 3.6
Spark - 2.3
Environment - Standalone machine (16 GB RAM and 500 GB HDD with i7 processor)
IDE - Pycharm
The job completes successfully and works as expected, but it emits the ERROR and WARNING below.
Any clue why I'm getting these? Is there a tweak I need to make so Spark uses the available memory optimally at spark-submit time?
FYI: I'm currently launching the job with PyCharm's Run button rather than spark-submit, though internally it does the same thing.
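From what I've read, the dropped-event messages mean the listener bus queue (capped by `spark.scheduler.listenerbus.eventqueue.capacity`, default 10000 in Spark 2.3) filled up faster than the listeners could drain it. If I were to switch to spark-submit, I assume the tweak would look roughly like this (the script name and memory values below are placeholders, and I haven't confirmed this silences the warning):

```shell
spark-submit \
  --driver-memory 8g \
  --conf spark.scheduler.listenerbus.eventqueue.capacity=20000 \
  my_pipeline.py
```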
2018-05-25 18:13:06 ERROR AsyncEventQueue:70 - Dropping event from queue appStatus. This likely means one of the listeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler.
2018-05-25 18:13:07 WARN AsyncEventQueue:66 - Dropped com.codahale.metrics.Counter@4382d088 events from appStatus since Thu Jan 01 05:30:00 IST 1970.
Source: https://stackoverflow.com/questions/50530679/spark-2-3-asynceventqueue-error-and-warning