The resource limits of a Pod have been set as:

```yaml
resources:
  limits:
    cpu: 500m
    memory: 5Gi
```

and there's 10G of memory left on the node.
Kubernetes resource specifications have two fields, `requests` and `limits`.

`limits` place a cap on how much of a resource a container can use. For memory, a container that exceeds its limit will be OOM killed. For CPU, its usage may be throttled.
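For illustration, here is a minimal Pod spec that sets only `limits` (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      limits:
        cpu: 500m          # CPU usage above this may be throttled
        memory: 5Gi        # exceeding this gets the container OOM killed
```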
`requests` are different: they ensure that the node the Pod is placed on has at least that much capacity available for it. If you want your Pods to be able to grow to a particular size without the node running out of resources, specify a request of that size. This limits how many Pods you can schedule, though -- a node with 10G of allocatable memory will only fit two Pods with a 5G memory request.
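As a sketch, adding a `requests` block to the container above tells the scheduler to set that much aside on whichever node the Pod lands on; with a 5Gi memory request, a node with 10Gi allocatable fits at most two such Pods:

```yaml
    resources:
      requests:
        cpu: 250m
        memory: 5Gi        # the scheduler reserves this much node memory for the Pod
      limits:
        cpu: 500m
        memory: 5Gi
```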
Kubernetes supports Quality of Service (QoS) classes. If your Pods have `limits` set (and their `requests` equal those limits, which is what you get by default when only limits are specified), they belong to the Guaranteed class, and the likelihood of them getting killed due to system memory pressure is extremely low. If the Docker daemon or some other daemon running on the node consumes a lot of memory, that is when Guaranteed Pods can still get killed.
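For example (names are again placeholders), a Pod whose containers all have `requests` equal to `limits` for both CPU and memory is classified as Guaranteed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 5Gi
      limits:
        cpu: 500m          # requests == limits for every container => Guaranteed
        memory: 5Gi
```

You can check the class Kubernetes assigned with `kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'`.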
The Kubernetes scheduler does take memory capacity and already-requested (allocated) memory into account when scheduling. For instance, you cannot schedule more than two Pods each requesting 5G on a node with 10G of allocatable memory.

Actual memory usage is not currently considered by Kubernetes for scheduling purposes; scheduling is based on requests, not observed consumption.
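As a sketch of that distinction, a container like the following counts for only 1Gi against the node's allocatable memory at scheduling time, even if its actual usage later grows toward the 5Gi limit (values are illustrative):

```yaml
    resources:
      requests:
        memory: 1Gi        # only this figure counts toward scheduling decisions
      limits:
        memory: 5Gi        # the container may actually use up to this much at runtime
```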