Is this expected behaviour for Delayed Job, Rails and Mandrill?

Submitted by 旧时模样 on 2020-01-06 02:18:08

Question


Related to this question, which explains the origins of the controller logic, I have a question about background jobs with Delayed Job before I push it to GitHub and re-deploy.

The model and scopes work, the controller logic works, and the conditionals in the e-mail text.erb files work. Users are either readers or subscribers and can set their e-mail preferences on their "My Account" page: [Articles & Updates, Just Articles, No E-mail, etc.]. Delayed Job is set up and processing everything in the background, keeping the front end nice and fast as always, and Mandrill SMTP receives it all correctly and sends the e-mails out promptly.

The main logic block in article_controller does this to send the right e-mails to the right users:

if @article.update(article_params) && @article.status == 'published' && @article.created_at.today?
  User.wantsarticles.editor.each do |user|
    ArticleMailer.delay.send_article_full(@article, user)
  end
  User.wantsarticles.subscribers.each do |user|
    ArticleMailer.delay.send_article_full(@article, user)
  end
  User.wantsarticles.readers.each do |user|
    ArticleMailer.delay.send_article_teaser(@article, user)
  end
  format.html { redirect_to :action => 'admin', notice: 'Article was successfully updated.' }
  format.json { render :show, status: :ok, location: @article }
else
  format.html { redirect_to :action => 'admin', notice: 'Article was successfully updated.' }
  format.json { render :show, status: :ok, location: @article }
end

Looking at the Rails and Delayed Job logs, though, with a test set of just a few users (5-10): when the logic decides three e-mails need to be sent out, Rails performs three INSERTs into the DJ table, and DJ then does this for each one:

Job NewsitemMailer.send_article_full (id=21) RUNNING
Job NewsitemMailer.send_article_full (id=21) COMPLETED after 0.8950

And then when it has finished, it reports back with:

3 jobs processed at 0.9039 j/s, 0 failed

And in the Mandrill logs, each e-mail sent gets its own API "success/fail" entry.

So: is this correct/expected behaviour for Delayed Job? Should it be creating one job for each e-mail, or processing them in some other way? Will this way of doing things break the server when we start sending X thousand e-mails instead of three or ten?


Answer 1:


This is the default behaviour of delayed_job. You call ArticleMailer.delay three times, so it queues three separate jobs, and hence you see 3 jobs processed.
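To make that one-job-per-call behaviour concrete, here is a minimal plain-Ruby sketch. It is not delayed_job itself: the `TinyQueue` class and its method names are invented for illustration, standing in for the jobs table and the worker loop.

```ruby
# Minimal in-memory stand-in for delayed_job's jobs table, invented
# for illustration: each .delay-style call inserts exactly one row.
class TinyQueue
  Job = Struct.new(:method_name, :args)

  def initialize
    @jobs = []
  end

  # Mimics ArticleMailer.delay.send_article_full(article, user):
  # one enqueue call == one row in the jobs table (one INSERT).
  def enqueue(method_name, *args)
    @jobs << Job.new(method_name, args)
  end

  # Mimics the worker loop: pops and "performs" every queued job,
  # returning the processed count ("3 jobs processed ... 0 failed").
  def work_off
    processed = 0
    until @jobs.empty?
      @jobs.shift
      processed += 1
    end
    processed
  end

  def size
    @jobs.size
  end
end

queue = TinyQueue.new
recipients = %w[editor subscriber reader]

# Three recipients -> three separate enqueue calls, as in the controller.
recipients.each { |u| queue.enqueue(:send_article_full, u) }

puts queue.size     # 3 rows inserted, one per e-mail
puts queue.work_off # 3 jobs processed, matching the DJ log
```

So the three INSERTs, three RUNNING/COMPLETED log pairs, and three Mandrill API entries are all the same fact seen from three places: one job per `.delay` call.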

I think you should also look at the handle_asynchronously feature of delayed_job.
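handle_asynchronously replaces a method with a version that enqueues a background job instead of running inline. A sketch of how it might look here, assuming the delayed_job gem is installed; the `notify_subscribers` method is invented for illustration and is not in the original code:

```ruby
# Sketch only: assumes the delayed_job gem and a Rails app.
# notify_subscribers is a hypothetical method, not from the question.
class Article < ApplicationRecord
  def notify_subscribers
    User.wantsarticles.subscribers.each do |user|
      # .deliver_now sends inline; the whole loop runs inside the job.
      ArticleMailer.send_article_full(self, user).deliver_now
    end
  end

  # Any caller of article.notify_subscribers now enqueues ONE job
  # that performs the whole loop, instead of one job per e-mail.
  handle_asynchronously :notify_subscribers
end
```

Note the trade-off: wrapping the loop this way batches all subscribers into a single job, so a failure partway through retries the whole batch, whereas one-job-per-e-mail (as in your controller) gives per-recipient retry granularity.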

Also, if you plan to process a huge number of emails, I would suggest exploring other options such as Resque, Sidekiq, or beanstalkd.

They handle large job volumes better than delayed_job. delayed_job is simple and easy to set up, but it can run into performance problems at scale.
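For comparison, the same per-recipient job could be sketched as a Sidekiq worker. This is only a sketch under assumptions: it requires the sidekiq gem and a Redis server, and `user.reader?` is a hypothetical predicate standing in for whatever distinguishes readers from subscribers in your app.

```ruby
# Sketch only: assumes the sidekiq gem and a running Redis instance.
require 'sidekiq'

class ArticleEmailJob
  include Sidekiq::Worker
  sidekiq_options queue: :mailers, retry: 3

  # Sidekiq arguments should be simple values (ids, booleans), not
  # ActiveRecord objects, so records are re-fetched inside perform.
  def perform(article_id, user_id, teaser)
    article = Article.find(article_id)
    user    = User.find(user_id)
    if teaser
      ArticleMailer.send_article_teaser(article, user).deliver_now
    else
      ArticleMailer.send_article_full(article, user).deliver_now
    end
  end
end

# Enqueueing from the controller, in place of ArticleMailer.delay.
# (user.reader? is a hypothetical predicate for illustration.)
# ArticleEmailJob.perform_async(@article.id, user.id, user.reader?)
```

The shape is the same as your current code (one job per recipient), but the queue lives in Redis rather than your SQL database, which is largely why these libraries cope better with thousands of jobs.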

See this to get an overview.



Source: https://stackoverflow.com/questions/32219759/is-this-expected-behaviour-for-delayed-job-rails-and-mandrill
