I have a requirement where I am reading a text file placed on a Unix server, 19 GB in size and containing around 115 million records. My Spring Batch job (launcher) is getting triggered by Au
In a project I worked on, we had to transfer 5 billion records from DB2 to Oracle, with quite complex transformation logic. During the transformation, the data was written out to files about 4 times. We were able to insert about 50'000 records at a time into the Oracle DB. From that point of view, doing it in under 4 hours seems realistic.
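To illustrate the kind of batched writing described above, here is a minimal sketch of flushing records in fixed-size batches, the way a JDBC batch insert (`addBatch`/`executeBatch`) or a Spring Batch chunk-oriented step does. The class and method names are made up for the example; the `flush` callback stands in for the actual database write:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchWriter {

    // Collects records and hands them to `flush` in fixed-size batches,
    // so the expensive operation (e.g. executeBatch + commit) runs once
    // per batch instead of once per record.
    static <T> int writeInBatches(Iterable<T> records, int batchSize,
                                  Consumer<List<T>> flush) {
        List<T> buffer = new ArrayList<>(batchSize);
        int batches = 0;
        for (T r : records) {
            buffer.add(r);
            if (buffer.size() == batchSize) {
                flush.accept(buffer);            // e.g. statement.executeBatch()
                buffer = new ArrayList<>(batchSize);
                batches++;
            }
        }
        if (!buffer.isEmpty()) {                 // flush the final partial batch
            flush.accept(buffer);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 120_000; i++) records.add(i);

        int[] written = {0};
        int batches = writeInBatches(records, 50_000, b -> written[0] += b.size());
        System.out.println(batches + " batches, " + written[0] + " records");
    }
}
```

With a real JDBC connection, `flush` would bind each record's fields to a `PreparedStatement`, call `addBatch()`, then `executeBatch()` and commit once per batch; the batch size (50'000 in our case) is something you tune against your database and network.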
You didn't state where exactly your bottlenecks are, but here are some ideas.
HTH.