Best practice for rate limiting users of a REST API?

Asked by 被撕碎了的回忆 on 2020-12-12 18:13

I am putting together a REST API and, as I'm unsure how it will scale or what the demand for it will be, I'd like to be able to rate limit uses of it as well as to be able …

3 Answers
  •  有刺的猬
    2020-12-12 19:00

    This is all done with an outer webserver that listens to the world (I recommend nginx or lighttpd).

    Regarding rate limits, nginx can enforce them, e.g. 50 requests/minute per IP; anything over that gets a 503 page, which you can customize.
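    A minimal sketch of such a per-IP limit using nginx's `limit_req` module (the zone name, upstream address, and paths are made up for illustration):

    ```nginx
    # Sketch only: names, addresses, and paths are hypothetical.
    http {
        # Shared-memory zone keyed by client IP: 50 requests per minute.
        limit_req_zone $binary_remote_addr zone=api_limit:10m rate=50r/m;

        upstream app_servers {
            server 127.0.0.1:8080;          # your app server(s)
        }

        server {
            listen 80;

            location / {
                limit_req zone=api_limit burst=10 nodelay;
                limit_req_status 503;       # answer over-limit requests with 503
                error_page 503 /503.html;   # serve the customized page
                proxy_pass http://app_servers;
            }

            location = /503.html {
                root /var/www/errors;       # custom error page lives here
                internal;
            }
        }
    }
    ```

    The `burst` parameter lets short spikes through before rejecting, which is usually friendlier to well-behaved API clients.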

    Regarding expected temporary downtime, in the Rails world this is done via a special maintenance.html page. Some automation creates or symlinks that file when the Rails app servers go down. I'd recommend relying not on the file's presence, but on the actual availability of the app server.
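    Checking actual availability can be sketched in nginx like this: fall back to a maintenance page only when the upstream really fails, rather than when a file happens to exist (the upstream name and paths are hypothetical):

    ```nginx
    # Sketch: serve the maintenance page on real upstream failure.
    location / {
        proxy_pass http://app_servers;            # hypothetical upstream
        proxy_intercept_errors on;                # let error_page handle upstream errors
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        root /var/www/errors;
        internal;
    }
    ```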

    But really, you can start/stop services without losing any connections at all. I.e., you can run a separate instance of the app server on a different UNIX socket/IP port and have the balancer (nginx/lighty/haproxy) use that new instance too. Then you shut down the old instance and all clients are served by the new one only. No connections lost. Of course, this scenario is not always possible; it depends on the type of change you introduced in the new version.
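    The swap described above can be sketched as an nginx upstream change followed by a graceful reload (the socket paths are invented):

    ```nginx
    # After starting the new app instance on its own socket:
    upstream app_servers {
        server unix:/run/app_new.sock;          # new instance takes traffic
        server unix:/run/app_old.sock down;     # old instance marked down, then stopped
    }
    ```

    `nginx -s reload` applies the new upstream without dropping in-flight requests: old worker processes finish their current connections before exiting.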

    haproxy is a balancer-only solution. It can balance requests to the app servers in your farm extremely efficiently.
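    A minimal haproxy setup for that role might look like the following (section names and addresses are made up):

    ```
    # Sketch of a minimal haproxy frontend/backend pair.
    frontend api_in
        bind *:80
        default_backend api_servers

    backend api_servers
        balance roundrobin                 # spread requests across the farm
        server app1 10.0.0.11:8080 check   # 'check' enables health checking
        server app2 10.0.0.12:8080 check
    ```

    With `check` enabled, haproxy stops sending traffic to an app server that fails its health check, which pairs well with the rolling-restart approach above.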

    For a quite big service you end up with something like:

    • api.domain resolving round-robin to N balancers
    • each balancer proxying requests to M webservers for static content and P app servers for dynamic content. Oh well, your REST API doesn't have static files, does it?
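    The "api.domain resolving round-robin to N balancers" step is just multiple A records for the same name; resolvers rotate through them (names and addresses below are invented):

    ```
    ; Sketch: round-robin A records for the API hostname.
    api.example.com.  300  IN  A  203.0.113.10
    api.example.com.  300  IN  A  203.0.113.11
    api.example.com.  300  IN  A  203.0.113.12
    ```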

    For a quite small service (under 2K rps) all balancing is done inside one or two webservers.
