How to handle backpressure using Google Cloud Functions

Submitted by 白昼怎懂夜的黑 on 2019-12-01 23:00:40

This functionality is not available for Google Cloud Functions. Instead, since you want to control the pace at which the system opens concurrent tasks, Task Queues are the solution.

Push queues dispatch requests at a reliable, steady rate. They guarantee reliable task execution. Because you can control the rate at which tasks are sent from the queue, you can control the workers' scaling behavior and hence your costs.

In your case, you can control the rate at which the downstream consumer service is called.
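For example, with App Engine push queues the dispatch rate is declared in a queue.yaml file. A minimal sketch with illustrative values (the queue name and numbers are placeholders, not recommendations):

```yaml
queue:
- name: downstream-throttle     # hypothetical queue name
  rate: 5/s                     # dispatch at most 5 tasks per second
  bucket_size: 10               # allow short bursts up to 10 tasks
  max_concurrent_requests: 10   # cap in-flight calls to the consumer
```

Deploying this queue and enqueuing work into it means the downstream consumer is never called faster than the declared rate, regardless of how fast tasks arrive.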

You can set the number of "Function invocations per second" with quotas. It's documented here: https://cloud.google.com/functions/quotas#rate_limits

The documentation tells you how to increase it, but you can also decrease it to achieve the kind of throttling that you are looking for.
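If a quota change is not practical, the same effect can be approximated on the calling side. A minimal token-bucket sketch that caps calls per second before invoking the function (the class and parameters are illustrative, not a Google API):

```python
import time

class TokenBucket:
    """Client-side throttle approximating an invocations-per-second cap."""

    def __init__(self, rate_per_s: float, capacity: int):
        self.rate = rate_per_s      # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until one token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            # Sleep just long enough for the next token to accrue.
            time.sleep((1.0 - self.tokens) / self.rate)
```

Each caller would invoke `bucket.acquire()` before triggering the function; note this only throttles a single client process, whereas the quota applies project-wide.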

You can control the pace at which Cloud Functions are triggered by controlling the triggers themselves. For example, if you have set "new file creation in a bucket" as the trigger for your Cloud Function, then by controlling how many new files are created in that bucket you can manage concurrent execution. Such solutions are not perfect, though, because a Cloud Function sometimes fails and is restarted automatically (if you have configured it to retry) without you having any control over it. In effect, the number of active function instances will sometimes be higher than you planned. What AWS offers is a neat feature, though.
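The idea above, pacing the trigger rather than the function, can be sketched as a producer that releases files in small batches. Here `upload` is a hypothetical callable that writes one object to the bucket; the batch size and pause are illustrative:

```python
import time
from typing import Callable, Iterable

def upload_in_batches(filenames: Iterable[str],
                      upload: Callable[[str], None],
                      batch_size: int = 10,
                      pause_s: float = 1.0) -> None:
    """Release files in small batches so at most roughly batch_size
    function invocations are triggered at a time."""
    names = list(filenames)
    for i in range(0, len(names), batch_size):
        for name in names[i:i + batch_size]:
            upload(name)        # hypothetical: writes the object, firing the trigger
        time.sleep(pause_s)     # let the triggered functions drain before the next batch
```

As noted, retries can still push the real instance count above the batch size, so this only bounds the rate of new triggers, not the true concurrency.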

This is now possible with the current gcloud beta! You can set a maximum number of instances that can run at once:

gcloud beta functions deploy FUNCTION_NAME --max-instances 10 FLAGS...

See docs https://cloud.google.com/functions/docs/max-instances
