worker

Is it feasible to run multiple processes on a Heroku dyno?

泪湿孤枕 submitted on 2019-12-01 03:36:19
I am aware of the memory limitations of the Heroku platform, and I know that it is far more scalable to separate an app into web and worker dynos. However, I would still like to run asynchronous tasks alongside the web process for testing purposes. Dynos are costly and I would like to prototype on the free instance that Heroku provides. Are there any issues with spawning a new job as a process or subprocess in the same dyno as a web process? On the newer Cedar stack, there are no issues with spawning multiple processes. Each dyno is a virtual machine and has no particular limitations except in …
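A minimal sketch of the subprocess approach in Python (the helper names and expressions are illustrative, not Heroku APIs): each job becomes its own OS process, and all of them count against the single dyno's memory quota alongside the web process.

```python
import subprocess
import sys

def spawn_job(expr):
    # Spawn a job as a separate OS process, as a web request handler might;
    # the child shares the dyno's RAM quota with the web process.
    return subprocess.Popen(
        [sys.executable, "-c", f"print({expr})"],
        stdout=subprocess.PIPE, text=True)

def run_jobs(exprs):
    # Fire off all jobs, then collect their output. In a real app the web
    # process would not block on them like this; it would fire and forget.
    procs = [spawn_job(e) for e in exprs]
    return [p.communicate()[0].strip() for p in procs]
```

The same fire-and-collect shape applies whether the child is a Python one-liner, a shell command, or a queue consumer.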

differentiate driver code and work code in Apache Spark

笑着哭i submitted on 2019-11-30 19:55:46
In an Apache Spark program, how do we know which part of the code will execute in the driver program and which part will execute on the worker nodes? It is actually pretty simple. Everything that happens inside the closure created by a transformation happens on a worker. This means that anything passed inside map(...) , filter(...) , mapPartitions(...) , groupBy*(...) , or aggregateBy*(...) is executed on the workers. That includes reading data from persistent storage or remote sources. Actions like count , reduce(...) , and fold(...) are usually executed on both the driver and the workers. Heavy …
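The driver/worker split can be illustrated with a plain-Python analogy (this is not Spark code; a thread pool stands in for Spark's remote executors): the closure handed to map runs on a worker, while the final aggregation happens back on the "driver" thread.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # In Spark, the body of a closure passed to map(...) ships to and runs
    # on a worker node; here the analogue is a pool thread, identified by
    # its thread ident rather than a remote executor.
    return (x * x, threading.get_ident())

def run_pipeline(data):
    driver_ident = threading.get_ident()
    with ThreadPoolExecutor(max_workers=2) as pool:
        mapped = list(pool.map(transform, data))   # "transformation": runs on workers
    total = sum(v for v, _ in mapped)              # final aggregation: back on the driver
    return total, driver_ident, {ident for _, ident in mapped}
```

Comparing the idents makes the point concrete: the transform bodies never execute on the driver's own thread.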

How to close Smtp connection in SwiftMailer

老子叫甜甜 submitted on 2019-11-30 13:44:25
Question: I use SwiftMailer to send emails from a gearman worker process, using the Swift_SmtpTransport class. The problem is that if the worker process stays idle for some time, the SwiftMailer SMTP connection times out. When the next job arrives, SwiftMailer fails to send emails because the connection has timed out. Ideally, I would want to close the SMTP connection after every job, but I'm unable to locate an API in the class which does this specifically. Neither does unset() …

Starting multiple upstart instances automatically

☆樱花仙子☆ submitted on 2019-11-30 10:20:31
Question: We use PHP gearman workers to run various tasks in parallel. Everything works just fine, and I have a silly little shell script to spin them up when I want them. Being a programmer (and therefore lazy), I wanted to see if I could spin these up via an upstart script. I figured out how to use the instance stanza, so I could start them with an instance number: description "Async insert workers" author "Mike Grunder" env SCRIPT_PATH="/path/to/my/script" instance $N script php $SCRIPT_PATH/worker …

How to close Smtp connection in SwiftMailer

老子叫甜甜 submitted on 2019-11-30 08:38:00
I use SwiftMailer to send emails from a gearman worker process, using the Swift_SmtpTransport class. The problem is that if the worker process stays idle for some time, the SwiftMailer SMTP connection times out. When the next job arrives, SwiftMailer fails to send emails because the connection has timed out. Ideally, I would want to close the SMTP connection after every job, but I'm unable to locate an API in the class which does this specifically. Nor does unset()ting the object work, since this is a static class. Answer (cernio): There is a crude option: stop the transport explicitly. On …
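The fix the answer points at — stopping the transport explicitly after each job so the next job opens a fresh connection instead of reusing a timed-out one — can be sketched in Python. StubTransport is a hypothetical stand-in for Swift_SmtpTransport (in SwiftMailer itself this reportedly corresponds to calling stop() on the transport; a real Python port would wrap smtplib.SMTP instead):

```python
class StaleConnectionError(Exception):
    """Raised when the server has closed an idle connection."""

class StubTransport:
    """Hypothetical stand-in for an SMTP transport with start/stop semantics."""
    def __init__(self):
        self.started = False
        self.sent = 0

    def start(self):
        self.started = True      # a real transport would open the socket here

    def stop(self):
        self.started = False     # a real transport would QUIT and close here

    def send(self, message):
        if not self.started:
            raise StaleConnectionError("connection closed")
        self.sent += 1

def process_job(transport, message):
    # Open a fresh connection per job and always tear it down afterwards,
    # so an idle worker never holds a connection long enough to time out.
    if not transport.started:
        transport.start()
    try:
        transport.send(message)
    finally:
        transport.stop()
```

The cost is one SMTP handshake per job, which is usually negligible next to the job itself.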

Elastic Beanstalk Worker's SQS daemon getting 504 gateway timeout after 1 minute

老子叫甜甜 submitted on 2019-11-30 05:47:58
I have an Elastic Beanstalk worker that can only run one task at a time, and each task takes some time (from a few minutes to, hopefully, less than 30 minutes), so I'm queuing my tasks on SQS. On my worker configuration, I have: HTTP connections: 1; Visibility timeout: 3600; Error visibility timeout: 300; (under "Advanced") Inactivity timeout: 1800. The problem is that there seems to be a 1-minute timeout (on nginx?) that overrides the "Inactivity timeout", returning a 504 (Gateway Timeout). This is what I can find in the aws-sqsd.log file: 2016-02-03T16:16:27Z init: initializing aws-sqsd 2.0 …
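nginx's default proxy_read_timeout is 60 seconds, which matches the observed one-minute 504 between nginx and the application behind it. One common way to raise it in an Elastic Beanstalk environment is an .ebextensions file override; the file names and paths below are assumptions that depend on your platform version, so treat this as a sketch rather than a drop-in fix:

```yaml
# .ebextensions/nginx-timeout.config (hypothetical name)
# Writes an nginx conf.d snippet raising the proxy timeouts to match
# the worker's 1800 s inactivity timeout.
files:
  "/etc/nginx/conf.d/proxy_timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      proxy_read_timeout 1800s;
      proxy_send_timeout 1800s;
```

After a deploy, confirm the snippet actually landed in nginx's active configuration, since platform versions differ in which conf.d directories they include.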

Starting multiple upstart instances automatically

╄→гoц情女王★ submitted on 2019-11-29 19:53:46
We use PHP gearman workers to run various tasks in parallel. Everything works just fine, and I have a silly little shell script to spin them up when I want them. Being a programmer (and therefore lazy), I wanted to see if I could spin these up via an upstart script. I figured out how to use the instance stanza, so I could start them with an instance number: description "Async insert workers" author "Mike Grunder" env SCRIPT_PATH="/path/to/my/script" instance $N script php $SCRIPT_PATH/worker.php end script And this works great, to start them like so: sudo start async-worker N=1 sudo start async …
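The job stanza flattened into the excerpt above, reconstructed, plus a hypothetical companion job that addresses the "automatically" part by starting a fixed set of instances at boot (the paths, job names, and instance count are assumptions):

```
# /etc/init/async-worker.conf — the instance job from the question, reconstructed
description "Async insert workers"
author "Mike Grunder"

env SCRIPT_PATH="/path/to/my/script"
instance $N

script
    php $SCRIPT_PATH/worker.php
end script
```

```
# /etc/init/async-workers.conf — hypothetical parent job: its pre-start loop
# spins up one numbered instance of async-worker per worker wanted
start on runlevel [2345]

pre-start script
    for i in 1 2 3 4; do
        start async-worker N=$i
    done
end script
```

With this shape, `sudo start async-workers` (or a reboot) brings up all instances, and each one can still be stopped individually with `stop async-worker N=2`.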

Worker design pattern

会有一股神秘感。 submitted on 2019-11-29 14:50:20
Question: What is the "worker" design pattern? Answer 1: It could be that you are after the worker thread pattern, where you use a queue to schedule tasks that you want processed "offline" by a worker thread. Some solutions use a pool of worker threads instead of a single thread, achieving performance gains through parallelisation. Answer 2: The worker design pattern. Problem: you have a small object which is data, and a large set of operations which could be performed on that object. You want to keep the …
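A minimal sketch of the worker-thread pattern Answer 1 describes, assuming a shared task queue and a None sentinel per worker for shutdown (the squaring stands in for real work):

```python
import queue
import threading

def worker_loop(tasks, results):
    # Each worker pulls tasks off the shared queue until it sees the
    # None sentinel, then exits; results go onto a second queue.
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put(task * task)   # stand-in for the real "offline" work

def run_pool(items, num_workers=3):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker_loop, args=(tasks, results))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    for item in items:
        tasks.put(item)
    for _ in threads:              # one shutdown sentinel per worker
        tasks.put(None)
    for t in threads:
        t.join()
    return sorted(results.get() for _ in items)
```

The queue is the whole pattern: producers never touch threads directly, and scaling up means changing num_workers, not the calling code.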

How to make EventSource available inside SharedWorker in FireFox?

廉价感情. submitted on 2019-11-29 14:37:21
I am trying to implement Server-Sent Events (SSE) inside a SharedWorker. The implementation works with no problems in Google Chrome; however, it does not work in Firefox at all. When I try to get it to work in Firefox, I get this error in the console: error { target: SharedWorker, isTrusted: true, message: "ReferenceError: EventSource is not defined", filename: "https://example.com/add-ons/icws/js/worker.js", lineno: 28, colno: 0, currentTarget: SharedWorker, eventPhase: 2, bubbles: false, cancelable: true, defaultPrevented: false } How can I make EventSource available inside the SharedWorker …
