How to check heap size for a process on Linux

清歌不尽 · 2020-12-01 08:24

I was writing some code and it kept crashing. Later, after digging through the dumps, I realized I was overshooting the maximum heap limit (life would have been easier if I had added …

5 Answers
  • 2020-12-01 09:04

    Heap and memory management is a facility provided by your C library (likely glibc). It maintains the heap and returns chunks of memory to you every time you do a malloc(). It doesn't know of any heap size limit: every time you request more memory than is available on the heap, it just goes and asks the kernel for more (either using sbrk() or mmap()).

    By default, the kernel will almost always give you more memory when asked. This means that malloc() will always return a valid address. It's only when you refer to an allocated page for the first time that the kernel will actually bother to find a page for you. If it finds that it cannot hand you one, it runs the OOM killer, which, according to a measure called badness (which includes your process's and its children's virtual memory sizes, nice level, overall running time, etc.), selects a victim and sends it SIGKILL. This memory management technique is called overcommit and is used by the kernel when /proc/sys/vm/overcommit_memory is 0 or 1. See overcommit-accounting in the kernel documentation for details.

    By writing 2 into /proc/sys/vm/overcommit_memory you can disable overcommit. If you do that, the kernel will actually check whether it has the memory before promising it, and malloc() will return NULL when no more memory is available.

    You can also set a limit on the virtual memory a process can allocate with setrlimit() and RLIMIT_AS, or with the ulimit -v command. Regardless of the overcommit setting described above, if the process tries to allocate more memory than this limit, the kernel will refuse and malloc() will return NULL. Note that in modern Linux kernels (including the entire 2.6.x series) the limit on the resident set size (setrlimit() with RLIMIT_RSS or the ulimit -m command) is ineffective.
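
    For illustration, here is a minimal sketch of setting that limit programmatically with setrlimit(); the 256 MiB cap and the 512 MiB request are arbitrary values chosen only to show the effect:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
      /* Cap this process's virtual address space at 256 MiB (illustrative value). */
      struct rlimit rl = { .rlim_cur = 256UL * 1024 * 1024,
                           .rlim_max = 256UL * 1024 * 1024 };
      if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
      }

      /* A request larger than the limit should now fail with NULL,
         regardless of the overcommit setting. */
      void *p = malloc(512UL * 1024 * 1024);
      printf("malloc(512 MiB) %s\n", p ? "succeeded" : "returned NULL");
      free(p);
      return 0;
    }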

    The session below was run on kernel 2.6.32 with 4GB RAM and 8GB swap.

    $ cat bigmem.c
    #include <stdlib.h>
    #include <stdio.h>
    
    int main() {
      int i = 0;
      for (; i < 13*1024; i++) {
        void* p = malloc(1024*1024);
        if (p == NULL) {
          fprintf(stderr, "malloc() returned NULL on %dth request\n", i);
          return 1;
        }
      }
      printf("Allocated it all\n");
      return 0;
    }
    $ cc -o bigmem bigmem.c
    $ cat /proc/sys/vm/overcommit_memory
    0
    $ ./bigmem
    Allocated it all
    $ sudo bash -c "echo 2 > /proc/sys/vm/overcommit_memory"
    $ cat /proc/sys/vm/overcommit_memory
    2
    $ ./bigmem
    malloc() returned NULL on 8519th request
    $ sudo bash -c "echo 0 > /proc/sys/vm/overcommit_memory"
    $ cat /proc/sys/vm/overcommit_memory
    0
    $ ./bigmem
    Allocated it all
    $ ulimit -v $(( 1024*1024 ))
    $ ./bigmem
    malloc() returned NULL on 1026th request
    $
    

    In the example above swapping or OOM kill could never occur, but this would change significantly if the process actually tried to touch all the memory allocated.

    To answer your question directly: unless you have the virtual memory limit explicitly set with the ulimit -v command, there is no heap size limit other than the machine's physical resources or the logical limit of your address space (relevant on 32-bit systems). Your glibc will keep allocating memory on the heap and will request more and more from the kernel as your heap grows. Eventually you may end up swapping badly if all physical memory is exhausted. Once the swap space is exhausted, some process (chosen by the badness heuristic described above) will be killed by the kernel's OOM killer.

    Note, however, that memory allocation may fail for many more reasons than lack of free memory, fragmentation or reaching a configured limit. The sbrk() and mmap() calls used by glibc's allocator have their own failure modes, e.g. the program break reached another, already allocated address (e.g. shared memory or a page previously mapped with mmap()), or the process's maximum number of memory mappings has been exceeded.
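
    To illustrate that last point, here is a rough sketch (not part of the original answer) that keeps creating one-page anonymous mappings until mmap() refuses; alternating the protection flags is just a trick to stop the kernel from merging adjacent mappings into a single VMA, so the per-process mapping limit (see /proc/sys/vm/max_map_count) is what the loop eventually runs into:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
      long page = sysconf(_SC_PAGESIZE);
      long count = 0;

      for (;;) {
        /* Alternate protections so neighbouring mappings cannot be merged
           into one VMA; each successful call then counts as one mapping. */
        int prot = (count % 2) ? PROT_READ : (PROT_READ | PROT_WRITE);
        void *p = mmap(NULL, page, prot,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
          perror("mmap");   /* typically ENOMEM once max_map_count is reached */
          break;
        }
        count++;
      }
      printf("created %ld mappings before failure\n", count);
      return 0;
    }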

  • 2020-12-01 09:10

    I'd like to add one point to the previous answers.

    Apps have the illusion that malloc() returns 'solid' blocks; in reality, a buffer may be scattered across many physical pages of RAM. The crucial fact here is this: the virtual memory of a process, whether it holds code or something like a large array, must be contiguous. Even admitting that code and data are kept separate, a large array, char str[universe_size], must occupy contiguous virtual addresses.

    Now: can a single app enlarge the heap arbitrarily, to allocate such an array?

    The answer could be 'yes' if there were nothing else running on the machine. The heap can be ridiculously huge, but it must have boundaries. At some point, calls to sbrk() (in Linux, the function that, in short, 'enlarges' the heap) will stumble onto an area already reserved for another mapping (shared libraries, mmap()ed regions, and so on).

    This link provides some interesting and clarifying examples; check it out. I did not find the corresponding info for Linux.

  • 2020-12-01 09:20

    The heap usually is as large as the addressable virtual memory on your architecture.

    You should check your system's current limits with the ulimit -a command and look for this line: max memory size (kbytes, -m) 3008828. On my OpenSuSE 11.4 x86_64 box with ~3.5 GiB of RAM this says I have roughly 3 GiB of RAM per process.

    Then you can truly test your system using this simple program to check max usable memory per process:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
      size_t oneHundredMiB = 100 * 1048576;  /* step size: 100 MiB */
      size_t maxMemMiB = 0;                  /* current request size, in bytes */
      void *memPointer = NULL;

      do {
        if (memPointer != NULL) {
          /* The previous request succeeded: report it, touch every byte so
             the pages are really backed by RAM/swap, then release the block. */
          printf("Max Tested Memory = %zu\n", maxMemMiB);
          memset(memPointer, 0, maxMemMiB);
          free(memPointer);
        }
        maxMemMiB += oneHundredMiB;          /* grow the request by 100 MiB */
        memPointer = malloc(maxMemMiB);
      } while (memPointer != NULL);

      printf("Max Usable Memory approx = %zu\n", maxMemMiB - oneHundredMiB);
      return 0;
    }
    

    This program requests memory in 100 MiB increments: it prints the amount currently allocated, writes zeros over it, then frees the memory. When the system can't give any more, malloc() returns NULL and the program displays the final maximum usable amount of RAM.

    The caveat is that your system will start to swap heavily in the final stages. Depending on your system configuration, the kernel might also decide to kill some processes. I use 100 MiB increments so there is some breathing room for other apps and the system. You should close anything that you don't want crashing.

    That being said, on the system where I'm writing this nothing crashed, and the program above reports almost the same value as ulimit -a. The difference is that it actually tested the memory and, by means of memset(), confirmed the memory was handed out and used.

    For comparison, on an Ubuntu 10.04 x86 VM with 256 MiB of RAM and 400 MiB of swap, the ulimit report was memory size (kbytes, -m) unlimited and my little program reported 524,288,000 bytes, which is roughly the combined RAM and swap, discounting the RAM used by other software and the kernel.

    Edit: As Adam Zalcman wrote, ulimit -m is no longer honoured on newer 2.6-and-up Linux kernels, so I stand corrected. But ulimit -v is honoured. For practical results you should replace -m with -v and look for virtual memory (kbytes, -v) 4515440. It seems mere chance that my SuSE box had the -m value coinciding with what my little utility reported. You should remember that this is virtual memory assigned by the kernel; if physical RAM is insufficient it will take swap space to make up for it.

    If you want to know how much physical RAM is available without disturbing any process or the system, you can use

    long total_available_ram = sysconf(_SC_AVPHYS_PAGES) * sysconf(_SC_PAGESIZE);

    This will exclude cache and buffer memory, so the number can be far smaller than the actual available memory. OS caches can be quite large and their eviction can give the needed extra memory, but that is handled by the kernel.
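
    To make that self-contained, here is a minimal complete program around that call (the _SC_PHYS_PAGES line is added only so the total can be compared against the available figure):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
      long page_size   = sysconf(_SC_PAGESIZE);
      long phys_pages  = sysconf(_SC_PHYS_PAGES);   /* total physical pages     */
      long avail_pages = sysconf(_SC_AVPHYS_PAGES); /* currently available pages */

      long long total_ram     = (long long)phys_pages  * page_size;
      long long available_ram = (long long)avail_pages * page_size;

      printf("total RAM:     %lld MiB\n", total_ram     / (1024 * 1024));
      printf("available RAM: %lld MiB\n", available_ram / (1024 * 1024));
      return 0;
    }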

  • 2020-12-01 09:21

    I think your original problem was that malloc failed to allocate the requested memory on your system.

    Why this happened is specific to your system.

    When a process is loaded, it is allocated memory up to a certain address, which is the program break for the process. Beyond that address the memory is unmapped for the process. So when the process "hits" the break, it requests more memory from the system, and one way to do this is via the system call sbrk().
    malloc() would do that under the hood, but on your system for some reason it failed.
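
    As a rough illustration (my own sketch, not part of the original answer), you can watch the break move by reading it with sbrk(0) before and after a small allocation; the 64 KiB size is chosen to stay under glibc's default mmap threshold so the request is actually served from the heap:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
      void *before = sbrk(0);        /* current program break               */
      void *p = malloc(64 * 1024);   /* small enough to extend the heap
                                        rather than being served by mmap() */
      void *after = sbrk(0);

      printf("break before: %p\n", before);
      printf("break after:  %p\n", after);
      free(p);
      return 0;
    }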

    There could be many reasons for this, for example:
    1) I think in Linux there is a limit on the maximum memory size; I think it is ulimit, and perhaps you hit it. Check whether such a limit is set.
    2) Perhaps your system was too loaded.
    3) Your program does bad memory management and you end up with fragmented memory, so malloc() cannot get the chunk size you requested.
    4) Your program corrupts malloc's internal data structures, i.e. bad pointer usage.
    etc.

  • 2020-12-01 09:22

    You can find the process id of your webapp/java process from top. Use jmap -heap <pid> to get the heap allocation. I tested this on an AWS EC2 instance for Elastic Beanstalk and it gives the allocated heap. Here is the detailed answer: Xmx settings in Elastic Beanstalk through environment properties.
