Our current caching implementation stores large amounts of data in report objects (50 MB in some cases).
We've moved from an in-memory cache to a file cache and use ProtoBuf for serialization.
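For reference, a minimal sketch of that file-cache + ProtoBuf setup. Here `report_pb2.Report` is a hypothetical message generated from a .proto file, and the cache directory and key scheme are assumptions for illustration:

```python
# Minimal file-cache sketch: ProtoBuf-serialized report objects stored on disk.
import hashlib
from pathlib import Path

import report_pb2  # hypothetical generated protobuf module

CACHE_DIR = Path("/var/cache/reports")

def cache_path(report_id: str) -> Path:
    # Hash the id so arbitrary ids map to safe file names.
    return CACHE_DIR / hashlib.sha256(report_id.encode()).hexdigest()

def save_report(report_id: str, report: report_pb2.Report) -> None:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cache_path(report_id).write_bytes(report.SerializeToString())

def load_report(report_id: str) -> report_pb2.Report | None:
    path = cache_path(report_id)
    if not path.exists():
        return None
    report = report_pb2.Report()
    report.ParseFromString(path.read_bytes())
    return report
```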
Redis is actually not designed for storing large objects (many MB each), because it is essentially a single-threaded server: one request for a large value is fast enough, but several concurrent requests will be slow, since they are all processed by that one thread. Recent versions have added some optimizations, but the core request processing is still single-threaded.
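A small sketch (using the redis-py client, which is assumed installed; key names and payload sizes are made up) that illustrates the effect: several threads fetching a multi-megabyte key all end up waiting on Redis's single command-processing thread, so the per-request latency for large values grows with concurrency far more than it does for small values.

```python
# Compare per-GET latency for a small vs. a large value under concurrency.
import threading
import time

import redis

r = redis.Redis(host="localhost", port=6379)  # redis-py clients are thread-safe

# Hypothetical payloads: a tiny value and a ~5 MB blob standing in for a report.
r.set("report:small", b"x" * 1_000)
r.set("report:large", b"x" * 5_000_000)

def timed_get(key: str, samples: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(samples):
        r.get(key)
    return (time.perf_counter() - start) / samples

def measure_concurrent(key: str, threads: int = 8) -> None:
    results = []
    def worker():
        results.append(timed_get(key))
    workers = [threading.Thread(target=worker) for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    avg_ms = sum(results) / len(results) * 1000
    print(f"{key}: avg {avg_ms:.2f} ms per GET across {threads} threads")

measure_concurrent("report:small")
measure_concurrent("report:large")
```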
The speed of RAM and memory bandwidth seem less critical for overall performance, especially for small objects; for large objects (>10 KB) they may become noticeable. It is usually not cost-effective to buy expensive, faster memory modules just to optimize Redis (see https://redis.io/topics/benchmarks).
So you can use jumbo frames or buy faster memory if that is an option, but in practice it will not help significantly. Consider using Memcached instead: it is multi-threaded and can be scaled out horizontally to hold large amounts of data, whereas Redis scales mainly via master-slave (master-replica) replication.
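A minimal sketch of the scale-out idea with pymemcache's `HashClient`, which distributes keys across several memcached nodes (the host names here are assumptions). Note that memcached's default item size limit is 1 MB, so 50 MB reports would need to be split into chunks or the limit raised with memcached's -I option:

```python
# Distribute cached reports across several memcached nodes.
from pymemcache.client.hash import HashClient

client = HashClient([
    ("cache-node-1", 11211),  # hypothetical memcached hosts
    ("cache-node-2", 11211),
    ("cache-node-3", 11211),
])

serialized_report = b"..."  # e.g. the ProtoBuf bytes from the file-cache sketch
client.set("report:1234", serialized_report, expire=3600)
cached = client.get("report:1234")
```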