Resque: time-critical jobs that are executed sequentially per user

Submitted by 故事扮演 on 2019-12-09 13:15:38

Question


My application creates resque jobs that must be processed sequentially per user, and they should be processed as fast as possible (1 second maximum delay).

An example: job1 and job2 are created for user1 and job3 for user2. Resque can process job1 and job3 in parallel, but job1 and job2 should be processed sequentially.

I have a few different ideas for a solution:

  • I could use different queues (e.g. queue_1 ... queue_10) and start a worker for each queue (e.g. rake resque:work QUEUE=queue_1). Users are assigned to a queue/worker at runtime (e.g. on login, every day, etc.).
  • I could use dynamic "user queues" (e.g. queue_#{user.id}) and try to extend resque so that only one worker can process a queue at a time (as asked in Resque: one worker per queue).
  • I could put the jobs in a non-resque queue and use a "per-user meta job" with resque-lock (https://github.com/defunkt/resque-lock) that handles those jobs.
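For the first option, the queue assignment can be as simple as hashing the user id onto a fixed pool of queues. This is only a minimal sketch of the idea; the pool size and helper name are hypothetical, not part of Resque:

```ruby
# Sketch: assign each user to one of a fixed pool of queues, so that all
# jobs for a given user land on the same queue (and therefore run
# sequentially, provided exactly one worker consumes each queue).
QUEUE_COUNT = 10  # hypothetical pool size, matching queue_1 ... queue_10

def queue_for_user(user_id)
  "queue_#{(user_id % QUEUE_COUNT) + 1}"
end

queue_for_user(17)  # => "queue_8" -- every job for user 17 goes here
```

A modulo assignment is stable across restarts, but note that it cannot rebalance load: a handful of very active users can still saturate a single queue.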

Do you have any experience with one of those scenarios in practice? Or do you have other ideas that might be worth thinking about? I would appreciate any input, thank you!


Answer 1:


Thanks to @Isotope's answer, I finally arrived at a solution that seems to work, using resque-retry and locks in Redis:

class MyJob
  extend Resque::Plugins::Retry

  # re-enqueue the job immediately when a lock timeout occurs
  @retry_delay = 0
  # effectively no limit, but bounded in case the lock is never cleared
  @retry_limit = 10000
  # only retry on lock timeouts
  @retry_exceptions = [Redis::Lock::LockTimeout]

  def self.perform(user_id, ...)
    # Lock the job for given user. 
    # If there is already another job for the user in progress, 
    # Redis::Lock::LockTimeout is raised and the job is requeued.
    Redis::Lock.new("my_job.user##{user_id}", 
      :expiration => 1, 
      # We don't want to wait for the lock, just requeue the job as fast as possible
      :timeout => 0.1
    ).lock do
      # do your stuff here ...
    end
  end
end

Here I am using Redis::Lock from https://github.com/nateware/redis-objects (it encapsulates the pattern from http://redis.io/commands/setex).
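To make the lock semantics above concrete, here is a toy in-memory illustration of what an expiring lock provides. This is not the redis-objects implementation (the real Redis::Lock stores the expiry in Redis so it works across processes and machines); every name here is hypothetical:

```ruby
# Toy expiring lock: a second caller trying to lock the same key while it
# is held gets an immediate "LockTimeout" error, mirroring how the job
# above gets requeued instead of running concurrently for the same user.
class ToyLock
  @locks = {}  # key => epoch time at which the lock expires

  def self.lock(key, expiration:)
    now = Time.now.to_f
    held_until = @locks[key]
    raise "LockTimeout" if held_until && held_until > now
    @locks[key] = now + expiration
    begin
      yield
    ensure
      @locks.delete(key)  # release the lock even if the block raises
    end
  end
end

ToyLock.lock("my_job.user#42", expiration: 1) { "work" }  # => "work"
```

The real SETEX-based pattern gets the expiry for free from Redis, which is what prevents a crashed worker from holding the lock forever.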




Answer 2:


I've done this before.

The best way to ensure sequential processing for things like this is to have the end of job1 enqueue job2. job1 and job2 can then go in the same queue or different queues; it won't matter for ordering, so that part is up to you.
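The chaining pattern above can be sketched as two Resque-style job classes where the last thing job1 does is enqueue job2. In a real app the enqueue call would be `Resque.enqueue(JobTwo, user_id)`; here a plain in-memory array stands in for the Resque queue so the flow is visible, and the class names are hypothetical:

```ruby
# Sketch: because JobOne enqueues JobTwo as its final step, JobTwo can
# never start before JobOne has finished for that user.
QUEUE = []  # stand-in for the Resque queue

class JobOne
  def self.perform(user_id)
    # ... job 1 work for user_id ...
    QUEUE << [JobTwo, user_id]  # real code: Resque.enqueue(JobTwo, user_id)
  end
end

class JobTwo
  def self.perform(user_id)
    # ... job 2 work for user_id ...
  end
end

JobOne.perform(1)
QUEUE  # => [[JobTwo, 1]]
```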

Any other solution, such as queuing job1 and job2 at the same time but telling job2 to start 0.5 seconds later, would result in race conditions, so that's not recommended.

Having job1 trigger job2 is also really easy to do.

If you want another option for the sake of it: my final suggestion would be to bundle both jobs into a single job and add a parameter indicating whether the second part should also be triggered.

e.g.

def my_job(id, *args, do_job_two: false)
  # ... job 1 stuff ...
  if do_job_two
    # ... job 2 stuff ...
  end
end


Source: https://stackoverflow.com/questions/10054248/resque-time-critical-jobs-that-are-executed-sequentially-per-user
