Why does a 500MB Redis dump.rdb file take about 5.0GB of memory?

孤城傲影 2021-02-13 17:41

Actually, I have 3 Redis instances and I put them together into this 500MB+ dump.rdb. The Redis server can read this dump.rdb and everything seems to be OK. Then I noticed that loading it takes about 5.0GB of memory, roughly ten times the size of the file. Why does it need so much more memory than the dump takes on disk?

2 Answers
  • 2021-02-13 17:45

    There may be more to it, but I believe Redis compresses the dump files.
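
    To verify that dump compression is active, you can check the `rdbcompression` setting, which defaults to `yes`. A minimal sketch with the redis-py client against a local instance (the connection details are assumptions):

    ```python
    import redis

    # Connect to a local Redis instance (adjust host/port as needed).
    r = redis.Redis(host="localhost", port=6379)

    # "rdbcompression" controls whether string values are LZF-compressed
    # when the RDB file is written; it is "yes" by default.
    print(r.config_get("rdbcompression"))  # e.g. {'rdbcompression': 'yes'}
    ```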

  • 2021-02-13 18:10

    The ratio of memory to dump size depends on the data types Redis uses internally.

    For small objects (hashes, lists and sorted sets), Redis uses ziplists to encode the data. For small sets made up only of integers, Redis uses intsets. Ziplists and intsets are stored on disk in the same format as they are stored in memory, so you'd expect roughly a 1:1 ratio if your data uses these encodings.
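
    As a quick check, OBJECT ENCODING shows which internal encoding a key uses. A minimal sketch with redis-py (the key names are made up; recent Redis versions report "listpack" where older ones report "ziplist"):

    ```python
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # A small hash keeps the compact encoding (ziplist, or listpack on Redis 7+).
    r.delete("small:hash")
    r.hset("small:hash", mapping={"f1": "a", "f2": "b"})
    print(r.object("encoding", "small:hash"))    # b'ziplist' or b'listpack'

    # A small set containing only integers uses the intset encoding.
    r.delete("small:ints")
    r.sadd("small:ints", 1, 2, 3)
    print(r.object("encoding", "small:ints"))    # b'intset'
    ```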

    For larger objects, the in-memory representation is completely different from the on-disk representation. The on-disk format is compressed, has no pointers, and doesn't have to deal with memory fragmentation. So if your objects are large, a 10:1 memory-to-disk ratio is normal and expected.
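
    One way to see this gap on a single key is to compare MEMORY USAGE (the in-memory footprint) with the serialized length reported by DEBUG OBJECT, which is close to what the key costs inside dump.rdb. A minimal sketch with redis-py; the key, field count and values are illustrative, DEBUG may be disabled on managed servers, and repetitive values compress unusually well, so the exact ratio will vary:

    ```python
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Build a hash large enough to leave the compact ziplist/listpack encoding.
    r.delete("big:hash")
    r.hset("big:hash", mapping={f"field:{i}": f"value-{i}" * 8 for i in range(20_000)})

    in_memory = r.memory_usage("big:hash")                    # bytes used in RAM
    on_disk = r.debug_object("big:hash")["serializedlength"]  # approx. RDB size
    print(f"in memory:  {in_memory}")
    print(f"serialized: {on_disk}")
    print(f"ratio:      {in_memory / on_disk:.1f}:1")
    ```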

    If you want to know which objects eat up memory, use redis-rdb-tools to profile your data (disclaimer: I am the author of this tool). From there, follow the memory optimization notes on redis.io, as well as the memory optimization wiki entry on redis-rdb-tools.
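
    For reference, the memory report in redis-rdb-tools is generated from the command line roughly like this (the file names here are placeholders):

    ```
    rdb -c memory dump.rdb -f memory_report.csv
    ```

    The resulting CSV lists each key with an estimated in-memory size, so you can sort it to find the biggest consumers.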
