Question
My Java EE 6 web application has reached the maximum number of JMS messages that can be sent in a single transaction, and I need to send them across several transactions. What would be the best way of doing this when transactions are managed by the container? Is it OK to use the same MessageProducer across different transactions (using an EJB method annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW))?
I'm using Glassfish v3 and OpenMQ.
The problem with the maximum number of messages per transaction in OpenMQ is covered in this SO question: Maximum number of messages sent to a Queue in OpenMQ?
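For illustration, the kind of REQUIRES_NEW batch sender described above could look roughly like this (the class name, method name, JNDI names, and batch handling are assumptions, not working code from my application):

    import java.util.List;
    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    @Stateless
    public class BatchSender {

        @Resource(mappedName = "jms/myConnectionFactory") // illustrative JNDI name
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "jms/finalQueue")          // illustrative JNDI name
        private Queue finalQueue;

        // Each call runs in its own container-managed transaction,
        // so each batch stays under the per-transaction message limit.
        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void sendBatch(List<String> payloads) throws Exception {
            Connection connection = connectionFactory.createConnection();
            try {
                // transacted flag and acknowledge mode are ignored inside a JTA transaction
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(finalQueue);
                for (String payload : payloads) {
                    producer.send(session.createTextMessage(payload));
                }
            } finally {
                connection.close();
            }
        }
    }

Note that this sketch creates a new session and producer per call rather than reusing one MessageProducer across transactions, which is part of what I am asking about.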
Answer 1:
If you're unable to find a way to accomplish this with container-managed transactions in your application server, and you don't want to use programmatic transactions, you can consider using the Aggregator and/or Splitter enterprise integration patterns.
In your producer, aggregate your individual messages or objects into one composite message. On the consumer end, split out the composite message for appropriate processing.
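For instance, a minimal sketch of both sides, assuming the individual payloads are Serializable and using an ObjectMessage to carry the whole batch (class and method names are illustrative):

    import java.util.ArrayList;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.ObjectMessage;
    import javax.jms.Session;

    public class CompositeMessageExample {

        // Aggregator side: send one composite message instead of many individual
        // ones, so only a single send counts against the per-transaction limit.
        public void sendComposite(Session session, MessageProducer producer,
                                  ArrayList<String> payloads) throws Exception {
            ObjectMessage composite = session.createObjectMessage();
            composite.setObject(payloads); // ArrayList is Serializable
            producer.send(composite);
        }

        // Splitter side (for example in an MDB's onMessage): unpack the
        // composite message and process each individual item.
        @SuppressWarnings("unchecked")
        public void receiveComposite(Message message) throws Exception {
            ArrayList<String> payloads = (ArrayList<String>) ((ObjectMessage) message).getObject();
            for (String payload : payloads) {
                // process each individual payload here
            }
        }
    }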
Answer 2:
@Theo: great. So basically your current business flow is:
Front-end (JSF UI) operation --> EJB layer (transaction starts) --> publish messages onto FINAL QUEUE.
Now here is one way of solving your problem: you can define a split queue in between. So the flow could be either:
Front-end (JSF UI) operation --> EJB layer --> SPLIT QUEUE --> MDB --> FINAL QUEUE
or:
Front-end (JSF UI) operation --> SPLIT QUEUE --> EJB layer --> FINAL QUEUE
So the basic idea is: you have a limit of n messages per transaction, and you have m messages to publish, where m > n and m > 0. By introducing a SPLIT QUEUE in the middle, you can split the m messages into roughly m/n blocks (rounded up) of at most n messages each (a small helper for this is sketched below). I had done this in one of my projects. Let me know if you have any questions.
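A minimal sketch of that splitting step, assuming the block size n is whatever your per-transaction limit allows (the class and method names are made up for the example):

    import java.util.ArrayList;
    import java.util.List;

    public class Splitter {

        // Split m items into ceil(m / n) blocks of at most n items each,
        // e.g. 1000 identifiers with n = 200 become 5 blocks.
        public static <T> List<List<T>> splitIntoBlocks(List<T> items, int n) {
            List<List<T>> blocks = new ArrayList<List<T>>();
            for (int start = 0; start < items.size(); start += n) {
                int end = Math.min(start + n, items.size());
                blocks.add(new ArrayList<T>(items.subList(start, end)));
            }
            return blocks;
        }
    }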
@Theo: I am not very clear on what Asynkronos mentioned, but here is what I meant. Your JSF UI operation results in (let us say) 1000 JMS messages to your FINAL queue. Now let us suppose there is a limit of 200 JMS messages per transaction. What you can possibly do is:
Though I don't know your data model/business flow completely, here is a synopsis. Let me assume your JMS message's payload data comes from a set of tables which can be uniquely identified by an identifier Un. So you have 1000 JMS messages to be published, identified by U1, U2, ..., U1000. You can define an internal SPLIT queue. In your Java code, split the 1000 identifiers into blocks of 200 each: {U1...U200}, {U201...U400}, {U401...U600}, {U601...U800} and {U801...U1000}. You can publish each of these lists of identifiers onto your SPLIT queue as a java.util.List payload. You can then define an MDB listening on the SPLIT queue with transaction attribute REQUIRES_NEW. In the MDB code, you get the list of identifiers and loop over it:
public void onMessage(Message m) {
    try {
        ObjectMessage objectMsg = (ObjectMessage) m;
        // the payload is the java.util.List of identifiers published onto the SPLIT queue
        java.util.List<String> identifiers = (java.util.List<String>) objectMsg.getObject();
        // open a JMS session / producer for the FINAL queue here
        for (String identifier : identifiers) {
            // fetch data from the DB for this particular identifier
            // prepare the output JMS payload for that identifier
            // publish the JMS message onto the FINAL queue
        }
    } catch (JMSException e) {
        throw new RuntimeException(e); // or handle/log as appropriate
    }
}
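For completeness, the publishing side that feeds the SPLIT queue could look roughly like this (the class name, JNDI names, and injected resources are assumptions; the blocks of identifiers come from the splitting described above):

    import java.util.ArrayList;
    import java.util.List;
    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.ObjectMessage;
    import javax.jms.Queue;
    import javax.jms.Session;

    @Stateless
    public class SplitQueuePublisher {

        @Resource(mappedName = "jms/myConnectionFactory") // illustrative JNDI name
        private ConnectionFactory connectionFactory;

        @Resource(mappedName = "jms/splitQueue")          // illustrative JNDI name
        private Queue splitQueue;

        // Publishes each block of identifiers (e.g. 200 per block) as one
        // ObjectMessage; the MDB shown above unpacks the List and fans out
        // the real messages to the FINAL queue.
        public void publishBlocks(List<ArrayList<String>> identifierBlocks) throws Exception {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(splitQueue);
                for (ArrayList<String> block : identifierBlocks) {
                    ObjectMessage msg = session.createObjectMessage();
                    msg.setObject(block); // the MDB reads this List back via getObject()
                    producer.send(msg);
                }
            } finally {
                connection.close();
            }
        }
    }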
Hope this clarifies.