Question
I have a requirement to read a file containing around 1 million records, one record per line. Each record must be validated and then saved to the Redis cache. I implemented this in the traditional way, reading each line and saving it to Redis individually, but this hurts performance badly. I then learned about the Redis pipeline feature, which would let me process records in batches (say 10k at a time) to improve performance.
How can I use this feature in my scenario? A small, simple example would be appreciated. I am using Redis-2.1.1.RELEASE and Spring Boot 2.0.8.RELEASE.
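One way to approach this, sketched below, is `RedisTemplate.executePipelined(...)` from Spring Data Redis, which queues all commands issued inside the callback and flushes them to the server in a single round trip. The `BulkLoader` class, the `isValid`/`keyFor` helpers, and the 10k batch size are illustrative assumptions, not part of the original question; a real loader would plug in its own validation and key scheme.

```java
import java.util.List;
import org.springframework.data.redis.connection.StringRedisConnection;
import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.StringRedisTemplate;

// Sketch, assuming a configured StringRedisTemplate bean is injected.
public class BulkLoader {
    private static final int BATCH_SIZE = 10_000; // illustrative batch size

    private final StringRedisTemplate redisTemplate;

    public BulkLoader(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void load(List<String> records) {
        for (int start = 0; start < records.size(); start += BATCH_SIZE) {
            List<String> batch =
                records.subList(start, Math.min(start + BATCH_SIZE, records.size()));
            // executePipelined opens a pipeline, runs the callback, and
            // flushes all queued commands in one round trip per batch.
            redisTemplate.executePipelined((RedisCallback<Object>) connection -> {
                StringRedisConnection conn = (StringRedisConnection) connection;
                for (String record : batch) {
                    if (isValid(record)) {                // hypothetical validation
                        conn.set(keyFor(record), record); // queued, not sent yet
                    }
                }
                return null; // pipelined callbacks must return null
            });
        }
    }

    // Placeholder helpers -- replace with real validation and key derivation.
    private boolean isValid(String record) { return !record.isEmpty(); }
    private String keyFor(String record)   { return "record:" + record.hashCode(); }
}
```

With a batch size of 10k, a 1-million-record file produces 100 pipeline flushes instead of 1 million individual round trips, which is typically where the speedup comes from.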
Source: https://stackoverflow.com/questions/57572975/how-to-use-spring-data-redis-pipeline-to-process-1-million-records