Why does the Batch scope behave strangely when trying to load huge records - Mule ESB

Submitted by 久未见 on 2019-12-06 15:17:56

Question


I'm facing an issue in the Process Records phase of a batch job; kindly suggest a fix. I'm trying to load a file of some KB (about 5000 records). The success scenario works fine. But if an error happens in the input phase on the first run and the flow stops, then on the second run against the same file Mule stops executing in the Process Records step: nothing runs after the loading phase. Please find the runtime logs below:

11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Starting loading phase for   instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'
11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Finished loading phase for instance ae67601a-5fbe-11e4-bc4d-f0def1ed6871 of job order. 5000 records were loaded
11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Started execution of instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'

It stops right after the instance starts, and I'm not sure what is happening here. When I stop the flow and delete the .mule folder from the workspace, it works again. My guess is that the temporary queue Mule uses in the loading phase is not being deleted automatically when an exception happens in the input phase, but I'm not sure this is the real cause.

I can't go and delete the .mule folder every time in a real-time environment.

Could anyone please suggest what causes this strange behavior, and how I can get rid of this issue? Please find the config XML:

  <batch:job name="test">
    <batch:threading-profile poolExhaustedAction="WAIT"/>
    <batch:input>

        <component class="com.ReadFile" doc:name="File Reader"/>
        <mulexml:jaxb-xml-to-object-transformer returnClass="com.dto" jaxbContext-ref="JAXB_Context" doc:name="XML to JAXB Object"/>
        <component class="com.Transformer" doc:name="Java"/>
    </batch:input>
    <batch:process-records>
        <batch:step name="Batch_Step" accept-policy="ALL">
            <batch:commit doc:name="Batch Commit" streaming="true">

                <logger message="************after Data mapper" level="INFO" doc:name="Logger"/>
                <data-mapper:transform config-ref="Orders_Pojo_To_XML"  stream="true" doc:name="Transform_CanonicalToHybris"/>
                <file:outbound-endpoint responseTimeout="10000" doc:name="File" path="#[sessionVars.uploadFilepath]"/>
            </batch:commit>
        </batch:step>
    </batch:process-records>
    <batch:on-complete>

       <set-payload value="BatchJobInstanceId: #[payload.batchJobInstanceId+'\n'], Number of TotalRecords: #[payload.totalRecords+'\n'], Number of loadedRecords: #[payload.loadedRecords+'\n'], ProcessedRecords: #[payload.processedRecords+'\n'], Number of successful Records: #[payload.successfulRecords+'\n'], Number of failed Records: #[payload.failedRecords+'\n'], ElapsedTime: #[payload.elapsedTimeInMillis+'\n'], InputPhaseException: #[payload.inputPhaseException+'\n'], LoadingPhaseException: #[payload.loadingPhaseException+'\n'], CompletePhaseException: #[payload.onCompletePhaseException+'\n']" doc:name="Set Batch Result"/>

        <logger message="afterSetPayload: #[payload]" level="INFO" doc:name="Logger"/> 

        <flow-ref name="log" doc:name="Logger" />     

    </batch:on-complete>
  </batch:job>

I've been stuck with this behavior for quite a few days. Your help will be much appreciated. Version: 3.5.1. Thanks in advance.


Answer 1:


Set max-failed-records to -1 so that the batch job will continue even when a record throws an exception:

  <batch:job name="test" max-failed-records="-1">
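As a minimal sketch (assuming the rest of the job from the question stays unchanged), the attribute goes on the opening batch:job element; with max-failed-records="-1" the job instance keeps processing the remaining records instead of halting when a record fails:

  <batch:job name="test" max-failed-records="-1">
      <batch:threading-profile poolExhaustedAction="WAIT"/>
      <!-- input, process-records and on-complete phases exactly as in the question -->
  </batch:job>

Note that the step in the question already uses accept-policy="ALL", so failed records will still flow through the step once the job is allowed to continue.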

In a real-time environment you won't face the situation of having to clean the .mule folder.

This happens only when you are working with Anypoint Studio.



Source: https://stackoverflow.com/questions/26642320/why-batch-scope-behave-strange-when-trying-to-load-a-huge-records-mule-esb
