How can I measure the actual memory usage of an application or process?

心在旅途 2020-11-22 03:01

This question is covered here in great detail.

How do you measure the memory usage of an application or process in Linux?


30 Answers
  • 2020-11-22 03:34

    Check out this shell script that reports memory usage by application in Linux.

    It is also available on GitHub, and in a version that does not need paste or bc.
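    Since the linked script may move, here is a minimal sketch of the same idea — grouping resident set size by command name using only ps and awk, with no paste or bc (note that RSS double-counts shared memory across processes; see the smem answer below for PSS, which does not):

    ```shell
    # Sum resident set size (RSS, in KiB) per command name, largest first.
    ps -eo rss=,comm= |
    awk '{ rss[$2] += $1 }
         END { for (cmd in rss) printf "%8d KiB  %s\n", rss[cmd], cmd }' |
    sort -rn
    ```

    The trailing `=` in `-o rss=,comm=` suppresses the header row, so the output can be piped straight into awk.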

  • 2020-11-22 03:34

    /proc/<pid>/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual usage, as it counts only pages that have been touched (faulted in).
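    One quick way to total those N<node>= fields is with awk. The sketch below runs on a fabricated sample line so it is self-contained; in practice you would point the awk program at /proc/<pid>/numa_maps on a NUMA-enabled kernel:

    ```shell
    # Sum the per-node page counts (N0=, N1=, ...) across all mappings.
    # The printf line is a fabricated sample; real input comes from
    # /proc/<pid>/numa_maps.
    printf '7f2c00000000 default anon=25 dirty=25 N0=20 N1=5 kernelpagesize_kB=4\n' |
    awk '{ for (i = 1; i <= NF; i++)
               if ($i ~ /^N[0-9]+=/) { split($i, kv, "="); pages[kv[1]] += kv[2] } }
         END { for (n in pages) printf "%s: %d pages\n", n, pages[n] }'
    ```

    For the sample line this prints `N0: 20 pages` and `N1: 5 pages` (order may vary, since awk's `for (n in pages)` iteration order is unspecified). Multiply the page counts by the kernel page size (commonly 4 KiB) to get bytes.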

  • 2020-11-22 03:36

    If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.

    gcore <pid>
    

    Check the size of the generated core file to get a good idea how much memory a particular process is using.

    This won't work too well if the process is using hundreds of megabytes or gigabytes, as generating the core can take several seconds or minutes depending on I/O performance. While the core is being written, the process is stopped (or "frozen") to prevent memory changes. So be careful.

    Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.

  • 2020-11-22 03:37

    Another vote for Valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by Valgrind.

    I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)

  • 2020-11-22 03:41

    There isn't an easy way to calculate this exactly, but some people have come up with good approximations:

    • ps_mem.py
    • ps_mem.py at GitHub
  • 2020-11-22 03:41

    Use smem, an alternative to ps that calculates the USS and PSS per process. You probably want the PSS.

    • USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but it is helpful when you want to ignore shared memory.

    • PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.

    How this compares to RSS as reported by ps and other utilities:

    • RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP (FastCGI/FPM) processes.

    Note: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line the way you might use ps -A v, then you don't need to install the recommended matplotlib dependency.
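    If smem isn't installed, the numbers it reports can be approximated directly from /proc: each process's smaps file lists Pss and the private (unshared) pages per mapping. A sketch, assuming a kernel with smaps support (smem itself additionally handles swap and per-user rollups):

    ```shell
    # Sum USS (private pages) and PSS for one process from /proc/<pid>/smaps.
    # On newer kernels, /proc/<pid>/smaps_rollup provides the same totals
    # pre-summed in a single read.
    pid=$$   # example target: the current shell
    awk '/^Pss:/           { pss += $2 }
         /^Private_Clean:/ { uss += $2 }
         /^Private_Dirty:/ { uss += $2 }
         END { printf "USS: %d KiB\nPSS: %d KiB\n", uss, pss }' "/proc/$pid/smaps"
    ```

    PSS will always be at least as large as USS, since it is USS plus each shared page divided by the number of processes sharing it.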
