I have the following setup:
A docker image omg/telperion
on docker hub
A Kubernetes cluster (4 nodes, each with ~50 GB RAM) with plenty of resources
CrashLoopBackOff
means that a pod crashes right after it starts. Kubernetes restarts the pod, it crashes again, and this cycle repeats in a loop.
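A quick way to confirm this is to list the pods and look at the STATUS and RESTARTS columns. The pod name below is illustrative, not taken from your cluster:

```shell
# A crash-looping pod shows CrashLoopBackOff and a growing restart count
kubectl get pods

# Illustrative output:
# NAME                        READY   STATUS             RESTARTS   AGE
# telperion-7d9c6bbf5-x2k8q   0/1     CrashLoopBackOff   5          4m
```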
You can check the pod's logs for errors with kubectl logs &lt;pod-name&gt; -n &lt;namespace&gt; --previous
--previous shows you the logs of the previous instantiation of the container
Next, you can check the "State"/"Last State" reasons and the "Events" section by describing the pod: kubectl describe pod &lt;pod-name&gt; -n &lt;namespace&gt;
Sometimes the issue is caused by too little memory or CPU being allocated to the application.
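If the container is being killed for exceeding its memory limit, the last state reason will typically be OOMKilled, and raising the limits can help. The deployment name `telperion` and the resource values below are assumptions for illustration:

```shell
# Check whether the previous container instance was OOM-killed
kubectl describe pod <pod-name> | grep -i "oomkilled"

# Raise the resource requests/limits on the deployment
# (values here are examples; tune them to your application)
kubectl set resources deployment telperion \
  --requests=cpu=250m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi
```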
You can access the logs of your pods with
kubectl logs [podname] -p
the -p (short for --previous) option reads the logs of the previous (crashed) instance
If the crash comes from the application, you should have useful logs in there.