I'm using web workers to do some CPU-intensive work, but have the requirement that the worker respond to messages from the parent script while it is still processing.
Having the same problem, I searched the Web Workers draft and found something in the Processing model section, steps 9 to 12. As far as I understand, a worker that starts processing a task will not process another one until the first is completed. So, if you don't care about stopping and resuming a task, nciagra's answer should give better performance than rescheduling each iteration of the task.
Still investigating, though.
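To illustrate what rescheduling each iteration inside the worker might look like, here is a minimal sketch; the message names, chunk size, and the "cancel" command are assumptions for illustration, not from the question. The idea is that the long task is split into chunks and the worker yields back to its own event loop between chunks, so messages from the parent can be handled while the work is still in progress.

    // worker.js (illustrative sketch)
    let cancelled = false;

    self.onmessage = function (e) {
      if (e.data.cmd === 'start') {
        cancelled = false;
        processChunk(0, e.data.total);
      } else if (e.data.cmd === 'cancel') {
        cancelled = true; // picked up before the next chunk runs
      }
    };

    function processChunk(i, total) {
      if (cancelled || i >= total) {
        self.postMessage({ done: true, processed: i });
        return;
      }
      const end = Math.min(i + 1000, total);
      for (; i < end; i++) {
        // ...one unit of CPU-intensive work per iteration...
      }
      // Reschedule the next chunk instead of looping straight through,
      // so queued messages get a chance to be handled in between.
      setTimeout(function () { processChunk(i, total); }, 0);
    }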
I ran into this issue myself when playing with workers for the first time. I also debated using setInterval, but I felt that this would be a rather hacky approach to the problem (and I had already gone this way for my emulated multithreading). Instead, I settled on terminating the workers from the main thread (worker.terminate()) and recreating them if the task they are involved in needs to be interrupted. Garbage collection etc. seemed to be handled in my testing.
If there is data from these tasks that you want to save, you can post it back to the main thread for storage at regular intervals; and if there is some logic you want to apply when deciding whether to terminate, you can post the relevant data back often enough to support it.
Spawning subworkers would lead to the same set of issues anyway: you'd still have to terminate the subworkers (or create new ones) according to some logic, and I'm not sure they're as well supported (in Chrome, for example).
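For reference, a rough sketch of the terminate-and-recreate approach described above; the file name heavy-worker.js and the message shapes are assumptions made for illustration.

    // main.js (illustrative sketch)
    let worker = null;
    let lastProgress = null;

    function startWorker(task) {
      worker = new Worker('heavy-worker.js');
      worker.onmessage = function (e) {
        // The worker posts partial results at regular intervals,
        // so nothing important is lost if we later kill it.
        lastProgress = e.data;
      };
      worker.postMessage(task);
    }

    function interruptWorker(newTask) {
      if (worker) {
        worker.terminate(); // hard stop; the old worker is cleaned up
        worker = null;
      }
      // Recreate a fresh worker, optionally resuming from the saved progress.
      startWorker({ task: newTask, resumeFrom: lastProgress });
    }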
James
A worker can spawn sub-workers. You can have your main worker act as your message queue: when it receives a request for a long-running operation, it spawns a sub-worker to process that data. The sub-worker then sends the results back to the main worker, which removes the request from the queue and returns the results to the main thread. That way your main worker is always free to listen for new messages, and you have complete control over the queue.
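A minimal sketch of that queue pattern might look like this; the file names, the jobId field, and the one-job-at-a-time policy are assumptions for illustration.

    // queue-worker.js (illustrative sketch)
    const queue = [];
    let busy = false;

    self.onmessage = function (e) {
      queue.push(e.data); // always free to accept new requests
      processNext();
    };

    function processNext() {
      if (busy || queue.length === 0) return;
      busy = true;
      const job = queue.shift();

      // Sub-worker does the heavy lifting so this worker never blocks.
      const sub = new Worker('long-task-worker.js');
      sub.onmessage = function (e) {
        self.postMessage({ jobId: job.id, result: e.data }); // relay to main thread
        sub.terminate();
        busy = false;
        processNext(); // pull the next job off the queue
      };
      sub.postMessage(job);
    }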
--Nick