Question
I have a Spring Batch application that triggers a job to transfer bulk data from one database to another through an API call. All jobs are configured for parallel processing using partitioning (master/slave steps), and the application is deployed on OpenShift. I need to autoscale the application based on the load during job execution. Even though I have used the OpenShift autoscale feature, I still don't see any improvement in job performance: PODs are created, but only one POD is actually utilized. How can I fix this issue? How do I split jobs among PODs?
Answer 1:
In a remote partitioning setup, the master step sends `StepExecutionRequest`s to a configurable queue (let's call it `requests`). Worker steps are listeners on this queue. The master step can be configured to either (see the configuration sketch after this list):
- Aggregate replies from workers on a configurable queue (let's call it `replies`)
- Poll the job repository to check the status of workers
With this in mind, autoscaling such a setup depends on how you define your PODs. For example, if you run one or multiple workers in the same POD, you can autoscale that deployment as the size of the `requests` queue grows.
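On the worker side, each POD would run a configuration along these lines and listen on the shared `requests` queue, which is what actually spreads the partitions across PODs as they scale out. Again a hedged sketch under the same assumptions: the injected reader/writer beans, the `Map`-based record type, and the chunk size are placeholders for your actual database-to-database transfer logic.

```java
import java.util.Map;

import org.springframework.batch.core.Step;
import org.springframework.batch.integration.config.annotation.EnableBatchIntegration;
import org.springframework.batch.integration.partition.RemotePartitioningWorkerStepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;

@Configuration
@EnableBatchIntegration
public class WorkerStepConfig {

    // Every worker POD listens on the same "requests" queue, so each
    // partition is picked up by whichever POD is free. This is what
    // splits the job across PODs when the deployment scales out.
    @Bean
    public DirectChannel requests() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel replies() {
        return new DirectChannel();
    }

    // The reader/writer beans and record type are placeholders for the
    // actual source-to-target transfer logic (illustrative assumption).
    @Bean
    public Step workerStep(RemotePartitioningWorkerStepBuilderFactory factory,
                           ItemReader<Map<String, Object>> reader,
                           ItemWriter<Map<String, Object>> writer) {
        return factory.get("workerStep")
                .inputChannel(requests())   // receive StepExecutionRequests
                .outputChannel(replies())   // report partition results to the manager
                .<Map<String, Object>, Map<String, Object>>chunk(100)
                .reader(reader)
                .writer(writer)
                .build();
    }
}
```

With this split, the worker deployment can be scaled (for example, on the depth of the `requests` queue) independently of the single manager, and each new POD immediately starts consuming pending partitions.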
Hope this helps.
Source: https://stackoverflow.com/questions/54667737/how-to-autoscale-the-spring-batch-application-in-openshift