My kubernetes pods keep crashing with “CrashLoopBackOff” but I can't find any log

耶瑟儿~ 2020-11-29 22:40

This is what I keep getting:

[root@centos-master ~]# kubectl get pods
NAME               READY     STATUS             RESTARTS   AGE
nfs-server-h6nw8   1/1           
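
The usual first stops for a CrashLoopBackOff pod are kubectl logs <pod> --previous (output of the previous, crashed run) and kubectl describe pod <pod> (exit code and events). If even those are empty, you can at least pick the crashing pods out of the listing. A minimal sketch, using a made-up sample of kubectl get pods output (on a live cluster you would pipe the real command instead):

```shell
# Hypothetical sample of `kubectl get pods` output; on a real cluster use:
#   kubectl get pods --no-headers
sample='NAME               READY   STATUS             RESTARTS   AGE
nfs-server-h6nw8   0/1     CrashLoopBackOff   5          1h
nfs-web-07rxz      1/1     Running            0          1h'

# Print the name of every pod whose STATUS column is CrashLoopBackOff.
crashing=$(printf '%s\n' "$sample" | awk '$3 == "CrashLoopBackOff" {print $1}')
echo "$crashing"

# Then, for each name printed:
#   kubectl logs <name> --previous    # logs of the crashed container
#   kubectl describe pod <name>      # exit code, events, restart count
```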


15 Answers
  • 2020-11-29 23:30

    I observed the same issue and added a command and args block to the YAML file. I'm copying a sample of my YAML file for reference:

        apiVersion: v1
        kind: Pod
        metadata:
          labels:
            run: ubuntu
          name: ubuntu
          namespace: default
        spec:
          containers:
          - image: gcr.io/ow/hellokubernetes/ubuntu
            imagePullPolicy: Never
            name: ubuntu
            resources:
              requests:
                cpu: 100m
            command: ["/bin/sh"]
            args: ["-c", "while true; do echo hello; sleep 10; done"]
          dnsPolicy: ClusterFirst
          enableServiceLinks: true
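
The command/args pair above works because the shell loop becomes PID 1 of the container and never exits, so Kubernetes has nothing to restart. A bounded variant of the same loop (three iterations and no sleep, so it actually terminates) shows the behaviour locally:

```shell
# Bounded stand-in for the container's keep-alive loop; the in-cluster form,
# `while true; do echo hello; sleep 10; done`, never returns.
out=$(sh -c 'i=0; while [ "$i" -lt 3 ]; do echo hello; i=$((i+1)); done')
echo "$out"
```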
    
  • 2020-11-29 23:30

    In my case the problem was what Steve S. mentioned:

    The pod is crashing because it starts up then immediately exits, thus Kubernetes restarts and the cycle continues.

    Namely, I had a Java application whose main threw an exception (and something overrode the default uncaught-exception handler so that nothing was logged). The solution was to wrap the body of main in try { ... } catch and print the exception; that way I could find out what was wrong and fix it.

    (Another cause could be something in the app calling System.exit; you could use a custom SecurityManager with an overridden checkExit to prevent (or log the caller of) exit; see https://stackoverflow.com/a/5401319/204205.)
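
Either failure mode (an uncaught exception or an explicit System.exit) ends the container's main process, and with the default restartPolicy: Always, Kubernetes restarts the container regardless of the exit status; the status itself is recorded under "Last State" in kubectl describe pod. A minimal simulation of the process side:

```shell
# A main process that dies immediately (here with status 1) is what drives
# the restart/backoff cycle; `kubectl describe pod` shows this status under
# "Last State: Terminated".
code=0
sh -c 'exit 1' || code=$?   # the `|| code=$?` keeps this safe under `set -e`
echo "exit code: $code"
```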

  • 2020-11-29 23:31

    Whilst troubleshooting the same issue, I found no logs when using kubectl logs <pod_id>. So I SSHed into the node instance to try running the container with plain Docker. To my surprise, that failed as well.

    When starting a container from the image with:

    docker run -it faulty:latest /bin/sh
    

    and poking around I found that it wasn't the latest version.

    A faulty version of the docker image was already available on the instance.

    When I removed the faulty:latest image with:

    docker rmi faulty:latest
    

    everything started to work.
