When using the Disruptor, one or more consumers may lag behind, and a single slow consumer can stall the whole application, because producers cannot wrap the ring buffer past the slowest gating consumer.
Keeping that in mind, use a set of identical EventHandlers. To avoid more than one EventHandler acting on a single event, I use the following approach.
Create a thread pool whose size equals the number of cores in the system:
Executor executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()); // a thread pool to which we can assign tasks
Then create an array with one handler per core:
HttpEventHandler[] handlers = new HttpEventHandler[Runtime.getRuntime().availableProcessors()];
for (int i = 0; i < Runtime.getRuntime().availableProcessors(); i++) {
    handlers[i] = new HttpEventHandler(i);
}
disruptor.handleEventsWith(handlers);
In the EventHandler
public void onEvent(HttpEvent event, long sequence, boolean endOfBatch) throws InterruptedException
{
    // id is the index (0 .. cores-1) passed into this handler's constructor
    if (sequence % Runtime.getRuntime().availableProcessors() == id) {
        System.out.println("-----On event Triggered on thread " + Thread.currentThread().getName() + " on sequence " + sequence + " -----");
        // your event handler logic
    }
}
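For reference, here is a minimal sketch of how these pieces might be wired together. The ring-buffer size of 1024 and the no-argument HttpEvent constructor are assumptions, and the Executor-taking Disruptor constructor comes from older 3.x releases (newer versions take a ThreadFactory instead):

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;

public class HttpDisruptorSetup {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        Executor executor = Executors.newFixedThreadPool(cores); // runs the handler threads

        // assumed: HttpEvent has a no-arg constructor; 1024 is an arbitrary power-of-two buffer size
        Disruptor<HttpEvent> disruptor = new Disruptor<>(HttpEvent::new, 1024, executor);

        HttpEventHandler[] handlers = new HttpEventHandler[cores];
        for (int i = 0; i < cores; i++) {
            handlers[i] = new HttpEventHandler(i);
        }
        disruptor.handleEventsWith(handlers);

        RingBuffer<HttpEvent> ringBuffer = disruptor.start();

        // publish a few events; each one is acted on by exactly one handler
        // because of the sequence % cores == id check in onEvent
        for (int i = 0; i < 10; i++) {
            long seq = ringBuffer.next();
            try {
                HttpEvent e = ringBuffer.get(seq); // populate the event here
            } finally {
                ringBuffer.publish(seq);
            }
        }
    }
}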
Try to move the slow part to another thread (I/O, anything that is not an O(1) or O(log n) calculation, etc.), or apply some kind of back pressure when the consumer is overloaded (by yielding or temporarily parking producers, replying with 503 or 429 status codes, etc.): http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html
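As a rough illustration of producer-side back pressure: the producer can refuse work instead of blocking when the ring buffer is full, and the caller can then reply with 503 or 429. The setPayload method on HttpEvent is a hypothetical setter, not part of the Disruptor API:

import com.lmax.disruptor.InsufficientCapacityException;
import com.lmax.disruptor.RingBuffer;

public class BackPressurePublisher {
    // Returns false when the ring buffer has no free slot, so the caller
    // can reply with HTTP 503 or 429 instead of queueing more work.
    public static boolean tryPublish(RingBuffer<HttpEvent> ringBuffer, String payload) {
        try {
            long seq = ringBuffer.tryNext(); // throws if the buffer is full
            try {
                ringBuffer.get(seq).setPayload(payload); // setPayload is a hypothetical setter on HttpEvent
            } finally {
                ringBuffer.publish(seq);
            }
            return true;
        } catch (InsufficientCapacityException e) {
            return false;
        }
    }
}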
Generally speaking, use a WorkerPool to let multiple pooled worker threads share the work of a single consumer, which is good if you have tasks that are independent and of potentially variable duration (e.g. some short tasks, some longer).
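A minimal sketch of that approach, assuming Disruptor 3.x (HttpWorkHandler is an illustrative name): each event is delivered to exactly one of the pooled workers, so no modulo check is needed.

import com.lmax.disruptor.WorkHandler;

public class HttpWorkHandler implements WorkHandler<HttpEvent> {
    @Override
    public void onEvent(HttpEvent event) throws Exception {
        // process the event; the WorkerPool dispatches each event
        // to only one of the pooled worker threads
    }
}

// registration, e.g. with two pooled workers:
// disruptor.handleEventsWithWorkerPool(new HttpWorkHandler(), new HttpWorkHandler());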
The other option is to have multiple independent workers process the events in parallel, with each worker handling only the events whose sequence matches its index modulo N (e.g. two threads, where one processes odd and the other processes even sequence numbers). This works great if your tasks have consistent processing durations, and it allows batching to work very efficiently too.
Another thing to consider is that the consumer can do "batching", which is especially useful in, for example, auditing. If your consumer has 10 events waiting, rather than writing 10 entries to an audit log independently, you can collect all 10 events and write them at the same time. In my experience this more than covers the need to run multiple threads.
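A rough sketch of that batching pattern, using the endOfBatch flag the Disruptor already passes to every handler (the audit-log write itself is left as an assumption):

import java.util.ArrayList;
import java.util.List;

import com.lmax.disruptor.EventHandler;

public class AuditingEventHandler implements EventHandler<HttpEvent> {
    private final List<HttpEvent> batch = new ArrayList<>();

    @Override
    public void onEvent(HttpEvent event, long sequence, boolean endOfBatch) {
        batch.add(event);
        if (endOfBatch) {
            writeAuditLog(batch); // one write for the whole batch instead of one per event
            batch.clear();
        }
    }

    private void writeAuditLog(List<HttpEvent> events) {
        // hypothetical: append all collected events to the audit log in a single I/O call
    }
}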