Analyze Kubernetes pod OOMKilled

萌比男神i 2021-02-10 02:01

We got an OOMKilled event on our K8s pods. In the case of such an event, we want to run a native memory analysis command BEFORE the pod is evicted. Is it possible to add such a hook?

1 Answer
  •  挽巷 2021-02-10 02:19

    Looks like it is almost impossible to handle.

    Based on an answer on GitHub about gracefully stopping on OOM kill:

    It is not possible to change OOM behavior currently. Kubernetes (or runtime) could provide your container a signal whenever your container is close to its memory limit. This will be on a best effort basis though because memory spikes might not be handled on time.

    Here is what the official documentation says:

    If the node experiences a system OOM (out of memory) event prior to the kubelet being able to reclaim memory, the node depends on the oom_killer to respond. The kubelet sets an oom_score_adj value for each container based on the quality of service for the Pod.
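
    As an aside (not part of the quoted docs), here is a minimal Python sketch showing how to look at the scores the kernel actually uses when choosing a victim: every Linux process exposes oom_score and oom_score_adj under /proc, so the adjustment the kubelet assigned is visible from inside the container. Running such a check inside the pod is my own suggestion, not something the documentation prescribes.

    ```python
    # Minimal sketch: read the kernel's OOM scoring for the current process.
    # /proc/<pid>/oom_score and /proc/<pid>/oom_score_adj are standard Linux
    # files; the kubelet sets oom_score_adj based on the Pod's QoS class.

    from pathlib import Path

    def read_oom_scores(pid: str = "self") -> dict:
        """Return the OOM score and adjustment for the given PID."""
        base = Path("/proc") / pid
        return {
            "oom_score": int((base / "oom_score").read_text()),
            "oom_score_adj": int((base / "oom_score_adj").read_text()),
        }

    if __name__ == "__main__":
        scores = read_oom_scores()
        print(f"oom_score={scores['oom_score']} "
              f"oom_score_adj={scores['oom_score_adj']}")
    ```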

    So, as you can see, you don't have much of a chance to handle it. There is a long article about OOM handling; I will quote just a small part here, about out-of-memory handling in the memory controller:

    Unfortunately, there may not be much else that this process can do to respond to an OOM situation. If it has locked its text into memory with mlock() or mlockall(), or it is already resident in memory, it is now aware that the memory controller is out of memory. It can't do much of anything else, though, because most operations of interest require the allocation of more memory.

    The only thing I can offer is to get data from cAdvisor (where you can see OOM Killer events) or from the Kubernetes API, and run your command when the metrics show that you are very close to running out of memory; a sketch of that polling approach is below. I am not sure you will have time to do anything after you receive the OOM Killer event itself.
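
    As an illustration of that last idea, here is a minimal watchdog sketch that polls the container's cgroup memory usage and fires an analysis command before the limit is reached. It assumes cgroup v1 paths under /sys/fs/cgroup/memory, and the analysis command shown (jcmd for a JVM workload) is only a placeholder; adjust both for your runtime.

    ```python
    # Sketch of a watchdog that runs a native-memory analysis command when
    # container memory usage approaches the cgroup limit. Paths assume
    # cgroup v1; on cgroup v2 use memory.current / memory.max instead.

    import subprocess
    import time

    CGROUP_DIR = "/sys/fs/cgroup/memory"      # cgroup v1 mount point (assumption)
    THRESHOLD = 0.90                          # trigger at 90% of the limit
    POLL_SECONDS = 5
    # Placeholder analysis command; replace with whatever dump you need.
    ANALYSIS_CMD = ["jcmd", "1", "VM.native_memory", "summary"]

    def read_bytes(name: str) -> int:
        with open(f"{CGROUP_DIR}/{name}") as f:
            return int(f.read().strip())

    def main() -> None:
        limit = read_bytes("memory.limit_in_bytes")
        triggered = False
        while True:
            usage = read_bytes("memory.usage_in_bytes")
            if not triggered and usage >= THRESHOLD * limit:
                # Best effort: a fast memory spike can still outrun this check.
                subprocess.run(ANALYSIS_CMD, check=False)
                triggered = True
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()
    ```

    This is still best effort, for the same reason given in the GitHub answer above: a sudden spike can push the container past its limit between two polls.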
