Caching a 302MB object

Backend · Unresolved
Asked by 你的背包 on 2021-01-05 21:51

I have an object (actually, an array of objects) that's 302MB. When I try to cache it with memcached it doesn't work, no matter how much memory I give memcached, apparently.

4 Answers
  • 2021-01-05 22:01

    Alternatively, you can change the limit quickly by editing the configuration file [/etc/memcached.conf] and adding:

     # Increase the per-item size limit
     -I 512M


    Afterwards, restart your service.
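    As a sketch of that edit, here is the same change scripted against a stand-in file so nothing system-wide is touched (on Debian/Ubuntu the real file is /etc/memcached.conf; the /tmp path below is purely illustrative):

    ```shell
    # Append the size limit to a copy of the config file.
    # /tmp/memcached.conf.example is a stand-in for /etc/memcached.conf.
    conf=/tmp/memcached.conf.example
    printf '%s\n' '# Increase the per-item size limit' '-I 512M' >> "$conf"

    # Confirm the option landed in the file.
    grep -- '-I 512M' "$conf"

    # Then apply it to the real file and restart the service, e.g.:
    #   sudo systemctl restart memcached
    ```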

  • 2021-01-05 22:02

    There's no way that you should need a 302MB instance of an object (or array of objects) to be cached.

    Without seeing the code, I can't suggest how...but there has to be a good way to refactor so that you're caching on a smaller scale.

  • 2021-01-05 22:02

    You could perhaps use shared memory (but with that size I would vote against it) or use a RAM drive.

    Is this for one webserver or are you looking for a common cache for multiple web servers?

    I agree with the others that you probably need another approach. Try to explain what kind of data it is that you want to cache.

  • 2021-01-05 22:19

    Quoting

    15.5.5.4: What is the max size of an object you can store in memcache and is that configurable?

    The default maximum object size is 1MB. In memcached 1.4.2 and later you can change the maximum size of an object using the -I command line option.

    For versions before this, to increase this size, you have to re-compile memcached. You can modify the value of the POWER_BLOCK within the slabs.c file within the source.

    In memcached 1.4.2 and higher you can configure the maximum supported object size by using the -I command-line option. For example, to increase the maximum object size to 5MB:

     $ memcached -I 5m
    

    However, even if you increase the limit, this is hardly a good choice IMO. A better idea would be to break the object apart into smaller pieces and then cache individual parts of it.
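    A minimal sketch of that chunking idea, using a plain dict as a stand-in cache so it runs anywhere (with a real memcached client you would replace the dict assignments with `set`/`get` calls; the function names here are illustrative, not part of any client library's API):

    ```python
    import pickle

    # Stay safely under the 1MB default item limit (leave headroom for key overhead).
    CHUNK_SIZE = 1024 * 1024 - 1024

    def set_chunked(cache, key, obj):
        """Serialize obj, split it into sub-1MB chunks, and store each chunk."""
        blob = pickle.dumps(obj)
        chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
        for i, chunk in enumerate(chunks):
            cache[f"{key}:{i}"] = chunk
        cache[key] = len(chunks)  # the main key records how many chunks exist

    def get_chunked(cache, key):
        """Reassemble the chunks back into the original object, or None if absent."""
        count = cache.get(key)
        if count is None:
            return None
        blob = b"".join(cache[f"{key}:{i}"] for i in range(count))
        return pickle.loads(blob)

    cache = {}
    data = list(range(500_000))          # a few MB once pickled
    set_chunked(cache, "bigarray", data)
    assert get_chunked(cache, "bigarray") == data
    ```

    Note that with a real cache the chunks can be evicted independently, so a production version would also need to handle a partially missing set of chunks (e.g. by treating any missing chunk as a full cache miss).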

    Quoting Why are items limited to 1 megabyte in size?

    Short answer: Because of how the memory allocator's algorithm works.

    Long answer: Memcached's memory storage engine (which will be pluggable/adjusted in the future...), uses a slabs approach to memory management. Memory is broken up into slab chunks of varying sizes, starting at a minimum number and ascending by a growth factor up to the largest possible value.

    Say the minimum value is 400 bytes, and the maximum value is 1 megabyte, and the factorial is 1.20:

    slab 1 - 400 bytes
    slab 2 - 480 bytes
    slab 3 - 576 bytes
    ... etc.

    The larger the slab, the more of a gap there is between it and the previous slab. So the larger the maximum value the less efficient the memory storage is. Memcached also has to pre-allocate some memory for every slab that exists, so setting a smaller factorial with a larger max value will require even more overhead.
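    The growth pattern described above can be sketched numerically. This is a toy calculation only; real memcached also accounts for item headers and chunk alignment, so the actual slab class sizes differ:

    ```python
    def slab_classes(minimum=400, factor=1.20, maximum=1024 * 1024):
        """List slab class sizes from `minimum` up to `maximum`, growing by `factor`."""
        sizes = []
        size = minimum
        while size < maximum:
            sizes.append(size)
            size = int(size * factor)
        sizes.append(maximum)  # cap at the largest possible value
        return sizes

    sizes = slab_classes()
    print(sizes[:3])   # -> [400, 480, 576], matching the example above
    # The absolute gap between adjacent classes grows with size, which is the
    # wasted space when an item lands in a class larger than it needs.
    print(len(sizes), "slab classes, largest two:", sizes[-2], sizes[-1])
    ```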

    There are other reasons why you wouldn't want to do that... If we're talking about a web page and you're attempting to store/load values that large, you're probably doing something wrong. At that size it'll take a noticeable amount of time to load and unpack the data structure into memory, and your site will likely not perform very well.

    If you really do want to store items larger than 1MB, you can recompile memcached with an edited slabs.c:POWER_BLOCK value, or use the inefficient malloc/free backend. Other suggestions include a database, MogileFS, etc.
