My kubernetes pods keep crashing with “CrashLoopBackOff” but I can't find any log

耶瑟儿~ 2020-11-29 22:40

This is what I keep getting:

[root@centos-master ~]# kubectl get pods
NAME               READY     STATUS             RESTARTS   AGE
nfs-server-h6nw8   1/1       Running            0          1h
nfs-web-07rxz      0/1       CrashLoopBackOff   8          16m
nfs-web-fdr9h      0/1       CrashLoopBackOff   8          16m


        
15 Answers
  • 2020-11-29 23:05

    I solved this problem by removing a space between the quotes and the command value inside the array. The container was exiting right after it started because, with the stray space, there was no valid executable for it to run inside the container.

    ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
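
    For context, here is a minimal sketch of where that array lives in a pod spec; the pod name and the busybox image are placeholders, not from the original answer:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod          # placeholder name
    spec:
      containers:
        - name: hello
          image: busybox       # assumed image, for illustration only
          # no stray spaces inside the quoted values:
          command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']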
    
  • 2020-11-29 23:08

    I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was getting killed by the k8s cluster because it had completed all of its tasks. I managed to keep the pod running by simply starting it with a command that never stops on its own, as in:

    kubectl run YOUR_POD_NAME -n YOUR_NAMESPACE --image SOME_PUBLIC_IMAGE:latest --command -- tail -f /dev/null
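
    With the pod kept alive this way, subsequent exec calls work as intended; for example (pod and namespace names are placeholders, as above):

    kubectl exec -it YOUR_POD_NAME -n YOUR_NAMESPACE -- sh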
    
  • 2020-11-29 23:10

    From this page: the container exits once it has run all of its commands to completion, and Kubernetes treats that exit as a crash and restarts it. Either make your service run in the foreground, or create a keep-alive script (a sketch follows the example below). Either way, Kubernetes will then see your application as running. Note that plain Docker does not treat this as a problem; it is only Kubernetes that expects a long-running process.

    Update (an example):

    Here's how to avoid CrashLoopBackOff when launching a Netshoot container:

    kubectl run netshoot --image nicolaka/netshoot -- sleep infinity
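
    And a minimal sketch of the keep-alive script mentioned above, assuming a hypothetical my-service binary as the real workload:

    #!/bin/sh
    # 'my-service' is a hypothetical workload binary, used for illustration.
    # Start the actual workload in the background...
    my-service &
    # ...then block forever so PID 1 never exits and Kubernetes
    # keeps reporting the container as Running.
    tail -f /dev/null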
    
  • 2020-11-29 23:13

    I had the same issue and finally resolved it. I am not using a docker-compose file; I just added this line to my Dockerfile and it worked.

    ENV CI=true
    

    Reference: https://github.com/GoogleContainerTools/skaffold/issues/3882
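
    For context, a sketch of where that line can sit in a Dockerfile; the node base image and npm commands are assumptions for a typical React app, not from the original answer:

    FROM node:14-alpine   # assumed base image
    ENV CI=true           # reported workaround: stops the dev server from exiting when no TTY is attached
    WORKDIR /app
    COPY . .
    RUN npm install
    CMD ["npm", "start"]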

  • 2020-11-29 23:14

    My pod kept crashing and I was unable to find the cause. Luckily, there is a place where Kubernetes saves all the events that occurred before my pod crashed.

    To see these events, sorted by timestamp, run:

    kubectl get events --sort-by=.metadata.creationTimestamp
    

    Make sure to add a --namespace mynamespace argument to the command if needed.

    The events shown in the output of the command showed me why my pod kept crashing.
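
    Relatedly, once you know which pod is crashing, the output of the previous (crashed) attempt is often still retrievable (pod name is a placeholder):

    kubectl logs YOUR_POD_NAME --previous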

  • 2020-11-29 23:14

    In your yaml file, add command and args lines:

    ...
    containers:
      - name: api
        image: localhost:5000/image-name
        command: [ "sleep" ]
        args: [ "infinity" ]
    ...
    

    Works for me.
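
    This works because command overrides the image's ENTRYPOINT and args overrides its CMD, so the container runs a process that never exits. A quick way to confirm (pod name is a placeholder):

    kubectl get pod YOUR_POD_NAME -w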
