Question
I'm trying to modify the running state of my pod (managed by a Deployment controller), both from the command line via kubectl patch
and from the k8s Python client API. Neither of them seems to work.
From the command line, I tried both a strategic merge patch and a JSON merge patch, but neither works. For example, I'm trying to patch the pod conditions to set the status
field to False:
kubectl -n foo-ns patch pod foo-pod-18112 -p '{
"status": {
"conditions": [
{
"type": "PodScheduled",
"status": "False"
},
{
"type": "Ready",
"status": "False"
},
{
"type": "ContainersReady",
"status": "False"
},
{
"type": "Initialized",
"status": "False"
}
],
"phase": "Running"
}
}' --type merge
From the Python API:
from kubernetes import client, config

# load the local kubeconfig and build a CoreV1 client
config.load_kube_config()
v1 = client.CoreV1Api()
podname = "foo-pod-18112"

# definition of various pod conditions
ready_true = { "type": "Ready", "status": "True" }
ready_false = { "type": "Ready", "status": "False" }
scheduled_true = { "type": "PodScheduled", "status": "True" }
cont_ready_true = { "type": "ContainersReady", "status": "True" }
cont_ready_false = { "type": "ContainersReady", "status": "False" }
initialized_true = { "type": "Initialized", "status": "True" }
initialized_false = { "type": "Initialized", "status": "False" }

# patch the pod's status subresource with the desired conditions and phase
patch = {"status": { "conditions": [ready_false, initialized_false, cont_ready_false, scheduled_true ], "phase" : "Running" }}
p_status = v1.patch_namespaced_pod_status(podname, "default", body=patch)
When I run the above snippet, I don't see any errors, and the response p_status
has all the pod conditions modified as applied in the patch,
but I don't see any events from the API server related to this pod status change.
Maybe the Deployment controller is rolling back the changes to a working config? I'm looking for a way to patch the pod conditions and test whether my custom controller (not related to the question) is able to see those new pod conditions.
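For reference, a minimal sketch of how such a status change could be observed from the client side, assuming the same kubeconfig, pod name, and "default" namespace as in the snippet above; it uses the kubernetes Python client's watch API and prints the pod's conditions whenever the API server reports a change:

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream pod events for up to 30 seconds and print the conditions of the
# target pod whenever the API server reports a modification.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    pod = event["object"]
    if pod.metadata.name == "foo-pod-18112":
        conditions = {c.type: c.status for c in (pod.status.conditions or [])}
        print(event["type"], pod.status.phase, conditions)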
Answer 1:
You should not.
Clients write the desired state in the spec: part, and controllers write the status: part.
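As a minimal sketch of the client side of that split, assuming a hypothetical Deployment named foo-deployment owns the pod: the client patches the Deployment's spec, and the kubelet and controllers keep writing the pod's status to reflect what actually happens.

from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Clients express desired state through spec; here, scale the (hypothetical)
# owning Deployment down to zero replicas. The pod's status is then updated
# by the kubelet and controllers, not by this client.
apps_v1.patch_namespaced_deployment(
    name="foo-deployment",      # hypothetical Deployment name
    namespace="foo-ns",
    body={"spec": {"replicas": 0}},
)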
Source: https://stackoverflow.com/questions/64881584/how-to-programmatically-modify-a-running-k8s-pod-status-conditions