k8s - livenessProbe vs readinessProbe

Submitted by 回眸只為那壹抹淺笑 on 2021-02-17 21:09:51

Question


Consider a pod that has a health check set up via an HTTP endpoint /health on port 80, and that takes almost 60 seconds to actually become ready and serve traffic.

readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 60
livenessProbe:
  httpGet:
    path: /health
    port: 80

Questions:

  • Is my above config correct for the given requirement?
  • Does the liveness probe start working only after the pod becomes ready? In other words, I assume the readiness probe's job is complete once the pod is ready, and after that the livenessProbe takes over the health check. In that case, I could ignore initialDelaySeconds for the livenessProbe. If they are independent, what is the point of running the livenessProbe check when the pod itself is not ready?
  • Check this documentation. What do they mean by

If you want your Container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

I was assuming the running pod will take itself down only if the livenessProbe fails, not the readinessProbe. The doc says otherwise.

Clarify!


Answer 1:


Liveness probes check whether the container has started and is alive. If that isn't the case, Kubernetes will eventually restart the container.

Readiness probes, in turn, also check dependencies such as database connections or other services your container depends on to do its work. As a developer, you have to invest more implementation time here than for the liveness probes: you have to expose an endpoint that also checks the mentioned dependencies when queried.

Your current configuration uses a health endpoint of the kind usually used by liveness probes. It probably doesn't check whether your service is really ready to take traffic.

Kubernetes relies on the readiness probes. During a rolling update, it will keep the old container up and running until the new service declares that it is ready to take traffic. Therefore the readiness probes have to be implemented correctly.
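
To make the split concrete, here is a minimal sketch of the two-endpoint setup this answer describes; /live and /ready are hypothetical endpoint names, not from the original question:

livenessProbe:
  httpGet:
    path: /live        # only checks that the process is responding (assumed endpoint)
    port: 80
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready       # also verifies DB connections / downstream services (assumed endpoint)
    port: 80
  periodSeconds: 5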




Answer 2:


Let me answer the second question first:

Does the liveness probe start working only after the pod becomes ready? In other words, I assume the readiness probe's job is complete once the pod is ready. After that, the livenessProbe takes care of the health check.

The intuitive understanding is that the liveness probe starts checking only after the readiness probe has succeeded, but it turns out that is not the case: the two probes run independently. An issue was opened about this challenge (you can look it up here), and the problem was eventually solved by adding startup probes.

To sum up:

  • livenessProbe

livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.

  • readinessProbe

readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.

  • startupProbe

startupProbe: Indicates whether the application within the Container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a startup probe, the default state is Success.

Look here for more details.
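
For completeness, a hedged sketch of how a startup probe (available since Kubernetes 1.16) could cover the 60-second warm-up from the question, with the other probes disabled until startup succeeds; the threshold values are assumptions:

startupProbe:
  httpGet:
    path: /health
    port: 80
  failureThreshold: 12   # allow up to 12 * 5s = 60s for the app to start
  periodSeconds: 5
livenessProbe:           # only starts running after the startup probe succeeds
  httpGet:
    path: /health
    port: 80
  periodSeconds: 10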




Answer 3:


Readiness and liveness probes may seem to behave the same, and they do perform the same types of checks. But the action they take in case of failure is different.

A failing readiness probe cuts off traffic from the Service, so the Service only ever sends requests to healthy pods. A failing liveness probe, by contrast, restarts the pod; it does nothing to the Service, which continues to send requests to the pod as usual as long as the pod is in 'available' status.

It is recommended to use both probes!!
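
As an illustration of the two failure actions, a minimal sketch; the thresholds are assumptions, not values from the question:

readinessProbe:          # on failure: the pod is removed from Service endpoints
  httpGet:
    path: /health
    port: 80
  periodSeconds: 5
  failureThreshold: 3    # ~15s of consecutive failures cut off traffic
livenessProbe:           # on failure: the kubelet restarts the container
  httpGet:
    path: /health
    port: 80
  periodSeconds: 10
  failureThreshold: 3    # ~30s of consecutive failures trigger a restart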

Check here for detailed explanation with code samples.




Answer 4:


The Kubernetes platform has capabilities for validating container applications, called health checks. Liveness is proof that the container is alive, and readiness is proof that the pod is ready to serve. These features are designed to prevent service downtime and inconsistent behavior by enabling restarts when needed. Kubernetes uses liveness to know when to restart the container, which can resolve many problems. Kubernetes uses readiness to know when the container is available to accept requests; the pod is considered ready when all of its containers are ready. Therefore, when the pod takes a long time to initialize (cache warm-up, DB schema setup, etc.), it is recommended to increase initialDelaySeconds, as in the sketch below.
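
A hedged sketch of that recommendation, using the 60-second initialization time from the question:

livenessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 60   # don't kill the container while it is still warming up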




Answer 5:


Liveness probes are a relatively specialized tool, and you probably don't want one at all. However, they run totally independently, AFAIK.




Answer 6:


I'd post this as a comment, but it's too long, so let's make it a full answer.

Is my above config correct for the given requirement?

IMHO no: you are missing initialDelaySeconds on both probes, and liveness and readiness probably should not call the same endpoint. I'd use the suggestions from @fgul; a sketch of the adjusted config follows.
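
Here /live and /ready are hypothetical endpoint names (per @fgul's point about separate endpoints), and 60s matches the warm-up time from the question:

readinessProbe:
  httpGet:
    path: /ready        # dependency-aware readiness endpoint (assumed name)
    port: 80
  initialDelaySeconds: 60
livenessProbe:
  httpGet:
    path: /live         # bare liveness endpoint (assumed name)
    port: 80
  initialDelaySeconds: 60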

Does the liveness probe start working only after the pod becomes ready? In other words, I assume the readiness probe's job is complete once the pod is ready, and after that the livenessProbe takes over the health check. In that case, I could ignore initialDelaySeconds for the livenessProbe. If they are independent, what is the point of running the livenessProbe check when the pod itself is not ready?

I think you were thinking of the startupProbe; again, @fgul described what does what, so there is no point in me repeating it.

I was assuming the running pod will take itself down only if the livenessProbe fails, not the readinessProbe. The doc says otherwise.

The pod can be restarted based only on the livenessProbe, never the readinessProbe.

I'd think twice before binding a readiness probe to external services (checking that they are alive, as @randy advised), especially in high-load services:

Let's assume you have defined a deployment with lots of pods that connect to a database and process lots of requests. Now the database goes down. The readiness probe also checks the DB connection, so it marks all of the pods as "out of service". Then the DB comes back up. The pods' readiness probes will start to pass, but not instantly and not on all pods at once: the pods will be marked "Ready" one after another. That can be too slow. The moment the first pod is marked ready, ALL of the traffic will be sent to that one pod alone. It might end in a situation where the "waking up" pods are killed by the traffic one after another.

For that kind of situation, I'd say the readiness probe should check only pod-internal things and not care about external services. The Kubernetes endpoint will return an error, and either the clients can tolerate a failing service (this is called "designed for failure") or the load balancer/ingress can cover it.



Source: https://stackoverflow.com/questions/55423405/k8s-livenessprobe-vs-readinessprobe
