How to find MAX memory from docker stats?

庸人自扰 2021-02-18 15:26

With docker stats you can see the memory usage of a container over time.

Is there a way to find out what the highest memory usage value was while the container was running?

4 Answers
  • 2021-02-18 15:45

    In my case I wanted to monitor a docker container which runs the tests for my web application. The test suite is pretty big; it includes JavaScript tests in a real browser and consumes a significant amount of both memory and time.

    Ideally, I wanted to watch the current memory usage in real time, but also keep the history for later analysis.

    I ended up using a modified and simplified version of Keiran's solution:

    CONTAINER=$(docker ps -q -f name=CONTAINER_NAME)
    FORMAT='{{.MemPerc}}\t{{.MemUsage}}\t{{.Name}}'
    
    docker stats --format $FORMAT $CONTAINER | sed -u 's/\x1b\[[0-9;]*[a-zA-Z]//g' | tee stats
    

    Notes:

    • CONTAINER=$(docker ps -q -f name=NAME) # find container by name, but there are other options
    • FORMAT='{{.MemPerc}}...' # MemPerc goes first (handy for sorting); otherwise you can be creative
    • sed -u # the -u flag is important, it turns off buffering
    • | sed -u 's/\x1b\[[0-9;]*[a-zA-Z]//g' # removes ANSI escape sequences
    • | tee stats # not only show real time, but also write into the stats file
    • I Ctrl-C manually when it's ready – not ideal, but OK for me
    • after that it's easy to find the max with something like sort -n stats | tail (see the sketch below)
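
    For example, a minimal sketch for pulling the peak out of that stats file afterwards (assuming the MemPerc-first, tab-separated format above):

    # strip the '%' sign and sort numerically on the first column; the last line is the peak sample
    tr -d '%' < stats | sort -k1,1n | tail -n 1
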
  • 2021-02-18 15:55

    You can use this command:

    docker stats --no-stream | awk '{ print $3 }' | sed '1d'|sort | tail -1
    

    It will print the highest memory usage among the running containers.

    Let me explain the command:

     --no-stream :          disable streaming stats and only pull the first result
     awk '{ print $3 }' :   print the MEM USAGE column
     sed '1d' :             delete the first line (the header row)
     sort :                 sort the results
     tail -1 :              print the last entry, i.e. the highest value
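
    If your version of docker supports --format (an assumption about your setup), a variant of the same idea that avoids guessing the column number and sorts the human-readable sizes numerically could look like this:

    # print only MEM USAGE and the container name; GNU sort -h understands
    # suffixes like MiB/GiB, so the last line is the biggest consumer in this snapshot
    docker stats --no-stream --format '{{.MemUsage}}\t{{.Name}}' | sort -h -k1,1 | tail -n 1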
    
  • 2021-02-18 15:57

    I took a sampling script from here and combined it with the per-container aggregation by @pl_rock. But be careful: the sort command only compares string values, so the results are often wrong (good enough for my purposes, though). Also note that docker sometimes reports implausible numbers (e.g. more allocated memory than the physical RAM).

    Here is the script:

    #!/bin/bash
    
    "$@" & # Run the given command line in the background.
    pid=$!
    
    echo "" > stats
    
    while true; do
      sleep 1
      sample="$(ps -o rss= $pid 2> /dev/null)" || break
    
      docker stats --no-stream --format "{{.MemUsage}} {{.Name}} {{.Container}}" | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0 }' >> stats
    done
    
    for containerid in $(awk '/.+/ { print $7 }' stats | sort | uniq)
    do
        # sort -h compares the human-readable sizes (MiB/GiB) numerically,
        # so the last line for each container is its peak sample
        grep "$containerid" stats | sort -k3 -h | tail -n 1
    done
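
    Hypothetical usage, assuming the script above is saved as monitor.sh: the arguments are the command line to run, and the per-second samples end up in the stats file next to it.

    chmod +x monitor.sh
    # my-test-image and the container name are placeholders for your own setup
    ./monitor.sh docker run --rm --name my-tests my-test-image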
    
  • 2021-02-18 15:59

    If you need to find the peak usage you are better off requesting the .MemPerc option and calculating based on the total memory (unless you restricted the memory available to the container). .MemUsage has units which change during the life of the container, and that messes with the result.

    docker stats --format 'CPU: {{.CPUPerc}}\tMEM: {{.MemPerc}}'
    

    You can stream an ongoing log to a file (or script).

    To get just the max memory as originally requested:

    (timeout 120 docker stats --format '{{.MemPerc}}' <CONTAINER_ID> \
      | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' ; echo) \
      | tr -d '%' | sort -k1,1n | tail -n 1
    

    And then you can ask the system for its total RAM (again assuming you didn't limit the RAM available to docker) and calculate:

    awk '/MemTotal/ {print $2}' /proc/meminfo
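
    Putting the two together, a sketch of the calculation (PEAK_PERC is a placeholder for the percentage obtained above; MemTotal in /proc/meminfo is reported in KiB):

    PEAK_PERC=42.5                                        # hypothetical value from the pipeline above
    TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    # peak MiB = total RAM (KiB) * percent / 100 / 1024
    awk -v p="$PEAK_PERC" -v t="$TOTAL_KB" 'BEGIN { printf "peak ~ %.1f MiB\n", t * p / 100 / 1024 }'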
    

    You would need to know roughly how long the container is going to run when using timeout as above. Alternatively, if docker stats is started in the background by a script (without the timeout), the script can kill it once the container has completed, as sketched below.
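
    A minimal sketch of that background approach (CONTAINER_ID and the log file name are placeholders):

    docker stats --format '{{.MemPerc}}' "$CONTAINER_ID" \
      | sed -u 's/\x1b\[[0-9;]*[a-zA-Z]//g' > memperc.log &
    STATS_PID=$!

    docker wait "$CONTAINER_ID"   # blocks until the container exits
    kill "$STATS_PID"

    tr -d '%' < memperc.log | sort -k1,1n | tail -n 1   # peak percentage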

    ...

    This command allows you to generate a time-series of the cpu/memory load:

    (timeout 20 docker stats --format \
      'CPU: {{.CPUPerc}}\tMEM: {{.MemPerc}}' <CONTAINER_ID> \
      | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' ; echo) \
      | gzip -c > monitor.log.gz
    

    Note that it pipes into gzip. In this form you get roughly 2 rows per second, so the file would grow large rapidly if you didn't compress it.
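
    To pull the peak back out of the compressed log later, something like this works (assuming the CPU/MEM format used above, where the MEM percentage is the fourth whitespace-separated field):

    zcat monitor.log.gz \
      | awk '{ gsub("%", "", $4); if ($4 + 0 > max) max = $4 + 0 } END { print "peak MEM%:", max }'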

    I'd advise using this for benchmarking and troubleshooting rather than on production containers.
