I am new to Kubernetes and I have an issue with my pods. When I run the command
kubectl get pods
I get this result:
NAME                     READY     STATUS             RESTARTS   AGE
You can delete and re-create the resource in one step:
$ kubectl replace --force -f <resource-file>
If all goes well, you should see something like:
<resource-type> <resource-name> deleted
<resource-type> <resource-name> replaced
Details of this can be found in the Kubernetes documentation, on the "manage-deployment" and "kubectl-cheatsheet" pages at the time of writing.
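For example, with a hypothetical manifest file nginx-pod.yaml describing a Pod named nginx (the names here are just for illustration), the flow looks roughly like this; the exact output wording varies between kubectl versions:
$ kubectl replace --force -f nginx-pod.yaml
pod "nginx" deleted
pod/nginx replaced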
If the Pod is part of a Deployment or Service, deleting it will restart the Pod and, potentially, place it onto another node:
$ kubectl delete po $POD_NAME
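For example, assuming the Deployment's Pods carry a label like app=web (the pod name and label here are made up for illustration), you can delete one Pod and watch the controller schedule a replacement:
$ kubectl delete po web-5d78f9c8b7-abcde
$ kubectl get po -l app=web -w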
Replace it if it's an individual Pod:
$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -
If you don't have the YAML file:
kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
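If you also want to keep a copy of the manifest for next time, you can dump it to a file first and replace from that (the file name is just an example):
kubectl get pod PODNAME -n NAMESPACE -o yaml > pod.yaml
kubectl replace --force -f pod.yaml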
Try deleting the pod; the image will be pulled again when the pod is re-created:
kubectl delete pod <pod_name> -n <namespace_name>
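Assuming the pod is managed by a Deployment (or another controller) so that a new pod is created after the delete, you can confirm the image pull is being retried from the new pod's events:
kubectl describe pod <pod_name> -n <namespace_name>
The Events section at the bottom should show Pulling/Pulled entries, or the back-off reason if the pull fails again.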
Usually, in the case of "ImagePullBackOff", the pull is retried automatically after a few seconds/minutes. If you want to retry immediately, you can delete the old pod and recreate it. The one-line command to delete and recreate the pod is:
kubectl replace --force -f <yml_file_describing_pod>
First, try to see what's wrong with the pod:
kubectl logs -p <your_pod>
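The -p (--previous) flag shows the logs of the previous container instance, which helps when the pod keeps restarting. If there are no logs at all (e.g. the image never started), the pod's events are the next place to look, for example:
kubectl get events --field-selector involvedObject.name=<your_pod>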
In my case it was a problem with the YAML file.
So, I needed to correct the configuration file and replace it:
kubectl replace --force -f <yml_file_describing_pod>
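As a sketch of the kind of YAML mistake that causes this (the manifest below and its image tag are assumptions, not from the original question), even a small typo in the image name is enough to produce ImagePullBackOff:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: nginx:1.25  # a typo here, e.g. "ngnix:1.25", would cause ImagePullBackOff
Once the file is corrected, the replace command above re-creates the pod from it.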