Question
I'm running an HTTP-triggered Cloud Function. The script was tested locally under Flask and ran perfectly. For an unknown reason, the function suddenly stops and I get "Error: could not handle the request". When I checked the function logs, there was no error message or any indication of a crash. All I see is the output printed by the program, down to the point where it stops.
I allocated the maximum memory size (4 GB) and reduced the amount of data in my request (shorter time span, simpler breakdown). Yet the problem persists.
This is the part of the code where it stops (a data request to the Twitter API):
def getAsyncData(account, entity, date_from, date_to, metric_groups, network, segmented_by=False, country_lst=None):
    entity_dict = entity.active_entities(account, date_from, date_to)
    if entity_dict == []:
        return None
    print(json.dumps(entity_dict, indent=4, sort_keys=True))
    df = pd.DataFrame.from_dict(entity_dict)
    entity_arr = df['entity_id'].tolist()
    print(entity_arr)
    queued_job_ids = []
    for chunk_ids in getChunks(entity_arr, n=20):
        if (segmented_by == 'LOCATIONS') or (segmented_by is None) or (segmented_by == 'PLATFORMS'):
            queued_job_ids.append(
                entity.queue_async_stats_job(account=account, ids=chunk_ids, metric_groups=metric_groups,
                                             start_time=date_from,
                                             end_time=date_to,
                                             granularity=GRANULARITY.DAY,
                                             segmentation_type=segmented_by,
                                             placement=network).id)
        elif segmented_by == 'REGIONS':
            for country in country_lst:
                queued_job_ids.append(
                    entity.queue_async_stats_job(account=account, ids=chunk_ids, metric_groups=metric_groups,
                                                 start_time=date_from,
                                                 end_time=date_to,
                                                 granularity=GRANULARITY.DAY,
                                                 placement=network,
                                                 segmentation_type=segmented_by,
                                                 country=country).id)
    print(queued_job_ids)
    if queued_job_ids == []:
        return None
    # let the job complete
    seconds = 10
    time.sleep(seconds)
    while True:
        async_stats_job_results = entity.async_stats_job_result(account, job_ids=queued_job_ids)
        if all(result.status == 'SUCCESS' for result in async_stats_job_results):
            break
    async_data = []
    for result in async_stats_job_results:
        async_data.append(entity.async_stats_job_data(account, url=result.url))
    print(json.dumps(async_data, indent=4, sort_keys=True))
    return async_data
The last print I see in the log is queued_job_ids; after that, nothing: no error or any other activity. Other functions run this very same piece of code and work fine. What could be the reason? Any thoughts?
Answer 1:
There was an issue back in August where Cloud Functions did not show any logs. You can confirm whether you are affected by the same issue by redeploying your Cloud Function as follows:
gcloud functions deploy func --set-env-vars USE_WORKER_V2=true,PYTHON37_DRAIN_LOGS_ON_CRASH_WAIT_SEC=5 --runtime=python37
Please review this Issue Tracker for additional information.
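Independently of the redeploy, you can also pull the function's logs straight from the CLI to rule out a problem with the log viewer. A quick sketch (here func stands in for your function's name):
gcloud functions logs read func --limit 50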
If, after redeploying the function, you are still not able to see any logs, I would recommend reporting this in a new Issue Tracker. As suggested in the comments, you could use a wrapper as a workaround. Here's a quick example:
import logging
import traceback

def try_catch_log(wrapped_func):
    def wrapper(*args, **kwargs):
        try:
            response = wrapped_func(*args, **kwargs)
        except Exception:
            # Replace newlines so the whole traceback lands in one log entry.
            error_message = traceback.format_exc().replace('\n', ' ')
            logging.error(error_message)
            return 'Error'
        return response
    return wrapper

# Example hello world function
@try_catch_log
def hello_world(request):
    request_args = request.args
    print(0 / 0)  # deliberately raises ZeroDivisionError to demonstrate the wrapper
    return 'Hello World!'
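Applied to the question's code, the same decorator would go on the HTTP entry point, so the traceback of whatever kills getAsyncData is written to the log before the instance dies. A minimal sketch, assuming an entry point named main and that the arguments are prepared as in the asker's code (both assumptions are illustrative, not taken from the question):
@try_catch_log
def main(request):
    # Any uncaught exception raised inside getAsyncData is now recorded as a
    # single logging.error entry instead of vanishing with the crashed instance.
    data = getAsyncData(account, entity, date_from, date_to, metric_groups, network)
    return 'OK' if data is not None else 'No data'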
Source: https://stackoverflow.com/questions/64663500/google-cloud-functions-crashes-without-an-error-message