User rate limit exceeded after a few requests

血红的双手。 Submitted on 2019-12-04 07:42:27

403: User Rate Limit Exceeded is basically flood protection.

{
 "error": {
  "errors": [
   {
    "domain": "usageLimits",
    "reason": "userRateLimitExceeded",
    "message": "User Rate Limit Exceeded"
   }
  ],
  "code": 403,
  "message": "User Rate Limit Exceeded"
 }
}

You need to slow down. Implementing exponential backoff as you have done is the correct course of action.
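For reference, a minimal sketch of what exponential backoff can look like in Python. make_request is a hypothetical placeholder for whatever Drive call you are making, and the HttpError handling assumes the google-api-python-client library:

import random
import time

from googleapiclient.errors import HttpError

def request_with_backoff(make_request, max_retries=5):
    # Retry with exponentially growing waits (1s, 2s, 4s, ...) plus random jitter.
    for attempt in range(max_retries):
        try:
            return make_request()                       # any Drive API call
        except HttpError as e:
            if e.resp.status != 403:                    # only back off on rate limits
                raise
            time.sleep(2 ** attempt + random.random()) # wait 2^attempt seconds + jitter
    raise RuntimeError("still rate limited after %d retries" % max_retries)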

Google is not perfect at counting the requests, so counting them yourself isn't really going to help. Sometimes you can get away with 15 requests a second; other times you can only get 7.

You should also remember that you are competing with the other people using the server. If there is a lot of load on the server, one of your requests may take longer while another may not. Don't run on the hour; that's when most people have cron jobs set up to extract.

Note: In the Google Developer Console, under the project where you have enabled the Drive API, go to the Quota tab and click the pencil icon next to

Queries per 100 seconds per user

and

Queries per 100 seconds

You can increase them both. One is user based, the other is project based: each user can make X requests in 100 seconds, and your project can make Y requests per 100 seconds.

Note: I have no idea how high you can set yours. This is my dev account, so it may have some beta access; I can't remember.

pinoyyid

See "403 rate limit after only 1 insert per second" and "403 rate limit on insert sometimes succeeds".

The key points are:

  • back off, but do not implement exponential backoff! That will simply kill your application throughput.

  • instead you need to proactively throttle your requests to prevent the 403s from occurring. In my testing I've found that the maximum sustainable throughput is about 1 transaction every 1.5 seconds (see the sketch after this list).

  • batching makes the problem worse because the batch is unpacked before the 403 is raised, i.e. a batch of 10 is interpreted as 10 rapid transactions, not 1.
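To make the proactive throttling concrete, here is a minimal sketch that spaces requests at least 1.5 seconds apart; drive_call is a hypothetical placeholder for your actual request:

import time

MIN_INTERVAL = 1.5                   # seconds between requests, per the figure above
_last_call = [0.0]                   # timestamp of the previous request

def throttled(drive_call):
    # Sleep just long enough to keep at least MIN_INTERVAL between requests.
    wait = _last_call[0] + MIN_INTERVAL - time.monotonic()
    if wait > 0:
        time.sleep(wait)
    _last_call[0] = time.monotonic()
    return drive_call()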

Try this algorithm:

import time

# upload_queue: your list of pending files; upload(): your Drive upload call,
# assumed here to return False when it is rejected with a 403 rate limit.
delay = 0.0                             # start with no backoff
while upload_queue:                     # while there are files in the upload queue
    time.sleep(delay)                   # back off
    file = upload_queue[0]
    if not upload(file):                # try the upload; rejected with a rate limit?
        delay += 2.0                    # add 2s so the next attempt will be OK
    else:
        upload_queue.pop(0)             # not rejected, so mark the file as done
        if delay > 0:                   # if we are currently backing off
            delay = max(0.0, delay - 0.2)   # reduce the backoff delay

# You can play with the 2s and 0.2s to optimise throughput.
# The key is to do all you can to avoid the 403s.

One thing to be aware of is that there was (is?) a bug with Drive where an upload sometimes gets rejected with a 403, but, despite sending the 403, Drive goes ahead and creates the file anyway. The symptom is duplicated files. So to be extra safe, after a 403 you should check whether the file is actually there. The easiest way to do this is to use pre-allocated IDs, or to add your own opaque ID to a property.
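As a sketch of the pre-allocated ID approach with the Drive v3 Python client (creds and report.csv are placeholders): files.generateIds reserves an ID up front, which you can probe with files.get after a 403:

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

service = build("drive", "v3", credentials=creds)    # creds: your OAuth credentials

# Reserve an ID up front so a "403 but created anyway" upload can be detected.
file_id = service.files().generateIds(count=1).execute()["ids"][0]

try:
    service.files().create(
        body={"id": file_id, "name": "report.csv"},
        media_body="report.csv",
    ).execute()
except HttpError as e:
    if e.resp.status == 403:
        # Check whether Drive created the file despite the 403.
        try:
            service.files().get(fileId=file_id).execute()
            print("upload succeeded despite the 403")           # file exists
        except HttpError:
            print("file missing; safe to retry with the same ID")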
