resque

GitHub's Redis and Resque failure behavior?

Submitted by 不打扰是莪最后的温柔 on 2019-12-10 12:38:39
Question: Does anyone have any insight into how GitHub deals with the potential failure or temporary unavailability of a Redis server when using Resque? Others seem to have put together semi-complicated solutions as a holdover for redis-cluster using ZooKeeper (see https://github.com/ryanlecompte/redis_failover and Solutions for resque failover redis). Others seem to have a 'poor man's failover' that switches the slave to the master on first sight of connectivity issues without coordination
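A minimal sketch of the 'poor man's failover' idea mentioned above: try the primary connection, fall back to a replica on error. The method name and the rescue of a generic StandardError are illustrative assumptions (a real setup would rescue the Redis client's specific connection error class); this is not GitHub's actual approach.

```ruby
# Hypothetical fallback wrapper: yield the primary client; on a
# connection-style error, retry the same operation against a replica.
# A production version would rescue Redis::CannotConnectError rather
# than StandardError and coordinate the switch across processes.
def with_redis_failover(primary, replica)
  yield primary
rescue StandardError => e
  warn "primary unavailable (#{e.class}); falling back to replica"
  yield replica
end
```

As the question notes, doing this per-process without coordination risks split-brain: two workers may disagree about which node is currently the master.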

Locating and removing a delayed resque job

Submitted by 梦想与她 on 2019-12-10 11:56:16
Question: I have a Resque job that got caught up in some bad code and is infinitely getting re-enqueued after failing repeatedly. I'd like to remove the job manually, somehow, but I'm not sure what the name of the job is in the Redis namespace. It isn't in 'failed' because I'm catching the actual exception. In the exception handler, I add the job back to the Resque queue using Resque.enqueue_in(). How do I figure out what the name of the job is in Redis so I can delete the key/job and stop it from ever happening? Answer 1:
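Because the job was re-enqueued with Resque.enqueue_in(), it lives in resque-scheduler's delayed store rather than a normal queue: a sorted set of timestamps plus `delayed:<epoch>` lists of JSON payloads (key names per resque-scheduler's conventions; verify against your version). The supported deletion call is Resque.remove_delayed(MyJob, *args). A sketch of the payload string you would be matching against, with class and args purely illustrative:

```ruby
require 'json'

# resque-scheduler stores each delayed job as a JSON blob inside a
# Redis list keyed "delayed:<epoch>". Matching this exact string is
# how the job can be located (and LREM'd) by hand. Field names follow
# resque-scheduler's format but may vary by version.
def delayed_payload(klass, args, queue: 'default')
  JSON.generate('class' => klass.to_s, 'args' => args, 'queue' => queue)
end
```

Resque.remove_delayed(MyJob, 42) performs this matching for you across all scheduled timestamps, so hand-editing Redis is rarely necessary.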

Generating a PDF using Prawn in the background with Resque

Submitted by 安稳与你 on 2019-12-10 09:34:28
Question: I am trying to create a PDF document in the background via a Resque background job. My code for creating the PDF is in a Rails helper method that I want to use in the Resque worker like:

    class DocumentCreator
      @queue = :document_creator_queue
      require "prawn"

      def self.perform(id)
        @doc = Document.find(id)
        Prawn::Document.generate('test.pdf') do |pdf|
          include ActionView::Helpers::DocumentHelper
          create_pdf(pdf)
        end
      end
    end

The create_pdf method is from the DocumentHelper but I am getting this error:
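The likely culprit is the `include` inside the Prawn block: `include` mixes a module into a class, not into the enclosing method's scope, so create_pdf never becomes callable from self.perform. One fix is to `extend` the helper module onto the worker class itself. A minimal sketch with the question's helper stubbed in pure Ruby (DocumentHelper and create_pdf stand in for the real Rails helper):

```ruby
# Stub of the helper from the question, reduced to pure Ruby so the
# pattern is visible: mix the module into the worker class with
# `extend` so its methods are available to self.perform.
module DocumentHelper
  def create_pdf(pdf)
    pdf << 'rendered'        # the real helper would draw with Prawn
  end
end

class DocumentCreator
  extend DocumentHelper      # class-level mixin, unlike `include` in a block

  def self.perform(pdf)
    create_pdf(pdf)          # now resolves to the helper method
  end
end
```

In the real worker you would `extend ActionView::Helpers::DocumentHelper` on the class and call create_pdf(pdf) inside the Prawn::Document.generate block.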

Resque on Heroku cedar stack: worker count still shows a worker after it terminates

Submitted by 假如想象 on 2019-12-09 18:29:39
Question: I have successfully run Resque on the Heroku cedar stack and mounted the web interface in Rails. When I start the worker, everything works fine and the worker processes jobs. But when I kill the worker, Resque still thinks the worker is available. When I start another worker, it then thinks there are 2 workers, but in fact only one is running. I also notice from http://devcenter.heroku.com/articles/ps that Heroku sends SIGTERM when killing a worker, and if that does not terminate it, it then sends
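Resque workers register themselves in Redis and only deregister on a clean shutdown; a worker killed before it can react leaves a stale registration behind, which is what the dashboard keeps counting. A minimal sketch of the signal side of this, using only Ruby's Signal API (the real fix is to let the worker handle TERM gracefully, or to prune stale entries afterwards):

```ruby
# Sketch: trap SIGTERM (what Heroku sends on shutdown) and flip a
# flag so a work loop could finish its current job, deregister in
# Redis, and exit cleanly instead of leaving a stale worker record.
shutdown = false
Signal.trap('TERM') { shutdown = true }

Process.kill('TERM', Process.pid)  # simulate Heroku stopping the dyno
sleep 0.1                          # give the handler a moment to run
# a real worker loop would check `shutdown` between jobs and exit
```

Stale records can also be cleared from a console with Resque.workers.each(&:unregister_worker), but that unregisters live workers too, so run it only while no workers are running.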

Efficiently reschedule ActiveJob (resque/sidekiq)

Submitted by 帅比萌擦擦* on 2019-12-09 13:50:03
Question: I'm playing with a Rails 4.2 app which uses ActiveJob backed by resque/sidekiq for email scheduling. When a user creates a newsletter campaign, a new job is created and scheduled for a certain date. That's all great, but what happens when the user changes the delivery date? In that case every job could check whether it should be delivered, so invalid jobs would be ignored and only the last one would be executed. This could work, but if a user made 1k edits, that would push 1k-1 invalid jobs
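The self-check idea from the question can be sketched without any queue at all: stamp each enqueue with a version, and have the job deliver only if its version still matches the campaign's current one. Campaign and the delivery_version field are illustrative stand-ins for the question's newsletter model:

```ruby
# Each reschedule bumps delivery_version and enqueues a job carrying
# the new value; stale jobs see a mismatch and no-op. This tolerates
# the "1k edits" case at the cost of 1k-1 cheap no-op executions.
Campaign = Struct.new(:delivery_version)

def perform_delivery(campaign, enqueued_version)
  return :skipped unless campaign.delivery_version == enqueued_version
  :delivered  # the real job would send the newsletter here
end
```

A Sidekiq-backed setup can avoid the no-op jobs entirely by finding and deleting the old entry in the scheduled set before enqueuing the new one, but that API is backend-specific.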

Resque: time-critical jobs that are executed sequentially per user

Submitted by 故事扮演 on 2019-12-09 13:15:38
Question: My application creates Resque jobs that must be processed sequentially per user, and they should be processed as fast as possible (1 second maximum delay). An example: job1 and job2 are created for user1, and job3 for user2. Resque can process job1 and job3 in parallel, but job1 and job2 should be processed sequentially. I have different thoughts about a solution: I could use different queues (e.g. queue_1 ... queue_10) and start a worker for each queue (e.g. rake resque:work QUEUE=queue_1).
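The multiple-queue idea generalizes to hashing each user onto one of N queues: all of a user's jobs land on the same queue, and with exactly one worker per queue they run in order, while different users' queues proceed in parallel. A sketch, with N=10 mirroring the queue_1 ... queue_10 naming above:

```ruby
require 'zlib'

# Deterministically map a user to one of `shards` queues. Jobs for
# the same user always share a queue (ordering preserved by its
# single worker); unrelated users usually land on different queues.
def queue_for(user_id, shards = 10)
  "queue_#{Zlib.crc32(user_id.to_s) % shards + 1}"
end
```

Enqueue with Resque.enqueue_to(queue_for(user.id), MyJob, ...). The trade-off is head-of-line blocking: one slow job delays every other user hashed to the same queue.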

Jobs processing in background from web application

Submitted by 一个人想着一个人 on 2019-12-09 06:09:16
Question: I want to schedule and run a lot of jobs in the background during a web application's execution. The web app is built on top of Symfony 2 and Doctrine 2. I know the job processing can be done with libraries like Resque or Sidekiq. However, these libraries and my application are written in different languages, so I am wondering how I can run Sidekiq jobs written in Ruby that integrate with my app written in PHP. What I'm asking myself is whether the only way to do this is rewriting a large
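One common bridge is that Sidekiq's queue is just Redis: a job is a JSON blob LPUSHed onto `queue:<name>`, so a PHP app can enqueue work for Ruby workers with any Redis client. A Ruby sketch of the payload (field names follow Sidekiq's documented job format, but check your Sidekiq version; HardWorker in the test is illustrative):

```ruby
require 'json'
require 'securerandom'

# Build the JSON blob Sidekiq expects on the "queue:<name>" list.
# A PHP client would produce the same structure and push it with,
# e.g., $redis->lpush('queue:default', $payload).
def sidekiq_payload(klass, args, queue: 'default')
  JSON.generate(
    'class'      => klass,                # Ruby worker class name
    'args'       => args,
    'queue'      => queue,
    'jid'        => SecureRandom.hex(12), # unique job id
    'retry'      => true,
    'created_at' => Time.now.to_f
  )
end
```

With this, only the workers themselves need to be Ruby; the Symfony side just writes to Redis.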

Rails.root points to the wrong directory in production during a Resque job

Submitted by a 夏天 on 2019-12-08 16:25:14
Question: I have two jobs that are queued simultaneously, and one worker runs them in succession. Both jobs copy some files from the builds/ directory in the root of my Rails project and place them into a temporary folder. The first job always succeeds, no matter which job runs first. The second one receives this error when trying to copy the files: No such file or directory - /Users/apps/Sites/my-site/releases/20130829065128/builds/foo
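The releases/20130829065128 segment in the error is the giveaway: with a Capistrano-style deploy layout, a long-running worker keeps the Rails.root it was booted with, which points into an old release directory that may since have been cleaned up. One workaround is to resolve paths through the deploy's `current` symlink at job time. A sketch, with the directory layout assumed from the error message:

```ruby
require 'pathname'

# Given an app root like .../my-site/releases/20130829065128, walk up
# past the release directory and back down through the `current`
# symlink so the path always targets the newest release.
def builds_dir(app_root)
  Pathname.new(app_root).join('..', '..', 'current', 'builds').expand_path
end
```

The cleaner long-term fix is restarting workers on every deploy so their cached Rails.root is always fresh.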

Rails, Heroku, Unicorn & Resque - how to choose the amount of web workers / resque workers?

Submitted by *爱你&永不变心* on 2019-12-08 14:50:27
I've just switched to using Unicorn on Heroku. I'm also going to switch from delayed_job to Resque and use the setup described at http://bugsplat.info/2011-11-27-concurrency-on-heroku-cedar.html What I don't understand from this is how config/unicorn.rb:

    worker_processes 3
    timeout 30
    @resque_pid = nil

    before_fork do |server, worker|
      @resque_pid ||= spawn("bundle exec rake " + \
        "resque:work QUEUES=scrape,geocode,distance,mailer")
    end

translates into: "This will actually result in six processes in each web dyno: 1 unicorn master, 3 unicorn web workers, 1 resque worker, 1 resque child worker
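The six-process count follows from when before_fork runs: once per unicorn worker (three times here), but the @resque_pid ||= memo guard lets spawn fire only on the first call, so the dyno gets exactly one rake resque:work process (which itself forks a short-lived child per job). The guard can be demonstrated in plain Ruby, with 4242 standing in for the real spawned pid:

```ruby
# Simulate unicorn calling the before_fork hook once per worker:
# `||=` short-circuits after the first assignment, so the spawn
# stand-in runs exactly once even though the hook fires three times.
@resque_pid = nil
spawn_count = 0

3.times do
  @resque_pid ||= begin
    spawn_count += 1
    4242  # stand-in for spawn("bundle exec rake resque:work ...")
  end
end
```

Tally per dyno: 1 unicorn master + 3 unicorn web workers + 1 resque worker + 1 resque child per in-flight job = six.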
