The Sidekiq documentation suggests I can only control Sidekiq's global concurrency, not concurrency per queue. I'm raising the question here in the hope that there's a solution for per-queue control.
To handle per-queue concurrency you can use the sidekiq-limit-fetch gem, which caps the maximum number of threads a queue can use.
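For example (a sketch based on the gem's README; the queue names and file name here are just placeholders), limits can be set dynamically from Ruby once the gem is loaded:
# config/initializers/sidekiq_limit_fetch.rb (hypothetical file name)
# Cap how many jobs from each queue may run at once, across all Sidekiq processes.
Sidekiq::Queue['important'].limit = 5
Sidekiq::Queue['matrices'].limit = 1   # effectively serializes this queue
The gem's README also documents a static form of the same limits in config/sidekiq.yml.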
Using Heroku I was able to control concurrency per queue by setting an environment variable in the Procfile and then reading it in a sidekiq.rb initializer:
Sidekiq.configure_server do |config|
  # Prefer the per-process override from the Procfile, then the app-wide setting, then 1
  config.options[:concurrency] = (ENV['SIDEKIQ_WORKERS_PROCFILE'] || ENV['SIDEKIQ_WORKERS'] || 1).to_i
  ...
end
SIDEKIQ_WORKERS_PROCFILE is set in the Procfile for one queue; the other queues use SIDEKIQ_WORKERS, which is set in the Heroku settings.
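For reference, the app-wide default is just a regular Heroku config var, set with the standard Heroku CLI, e.g.:
heroku config:set SIDEKIQ_WORKERS=3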
I'm not sure whether this will be of any help in your scenario, though.
UPDATE
To clarify: the idea relies on deploying to Heroku, with every queue processed in a separate dyno. Each process still uses Sidekiq's global concurrency setting; the dynos are just a workaround that does the job in my use case.
My Procfile looks like this:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
default: env HEROKU_PROCESS=default bundle exec sidekiq -c 5
important: env HEROKU_PROCESS=important bundle exec sidekiq -q important -c 5
instant: env HEROKU_PROCESS=instant bundle exec sidekiq -q instant -c 5
matrices: env HEROKU_PROCESS=matrices SIDEKIQ_WORKERS_PROCFILE=1 bundle exec sidekiq -q matrices -c 1
You can see that the matrices worker has the SIDEKIQ_WORKERS_PROCFILE variable set to 1, which makes it possible to run that queue's worker with a different concurrency. The variable is read by the sidekiq.rb initializer. Note that there is also the -c 1 option; I don't know whether that matters, however.
The initializer is shown above.
With everything set up, the Sidekiq dashboard shows the matrices queue running 1 thread while the other queues use 3 (the SIDEKIQ_WORKERS environment variable is set to 3 in the Heroku settings).
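If you'd rather verify this from a console than from the dashboard, something like this should work (assuming the 'queues' and 'concurrency' attributes exposed by Sidekiq's process API):
require 'sidekiq/api'

# Print each running Sidekiq process with its queues and thread count
Sidekiq::ProcessSet.new.each do |process|
  puts "#{process['queues'].join(', ')}: #{process['concurrency']} threads"
end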
The Enterprise version of Sidekiq has enhanced concurrency control:
https://github.com/mperham/sidekiq/wiki/Ent-Rate-Limiting
In particular, you can limit concurrency of arbitrary jobs by count or by interval, e.g. up to 20 jobs at a time, or up to 20 jobs per interval (60 seconds, 1 hour, etc.). The interval can be rolling or clock-aligned.
Used appropriately, this can satisfy the per-queue control you're asking about, but it is much more flexible, because you can control concurrency by your own groupings. For example, you could specify that up to 30 jobs per second may hit PayPal. Or you could specify 30 jobs per second per state for PayPal and 15 simultaneous jobs per state for Stripe. (Assuming 'state' is an attribute in your data, of course.)
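As a rough sketch of what that could look like with the limiter API described on the wiki page above (the job class, limiter names, and counts are purely illustrative):
class PaypalChargeJob
  include Sidekiq::Worker

  # At most 20 PayPal calls running at the same time (concurrent limiter)...
  PAYPAL_LIMITER = Sidekiq::Limiter.concurrent('paypal', 20)
  # ...or, alternatively, at most 30 jobs per rolling 60-second window:
  # PAYPAL_LIMITER = Sidekiq::Limiter.window('paypal', 30, 60)

  def perform(account_id)
    PAYPAL_LIMITER.within_limit do
      # talk to PayPal here
    end
  end
end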
There's no limit to how finely you can define your groups.