Running kubectl patch --local fails due to missing config

Submitted by 妖精的绣舞 on 2021-02-08 11:33:01

Question


I have a GitHub Actions workflow that substitutes a value in a deployment manifest. I use kubectl patch --local=true to update the image. This used to work flawlessly until now. Today the workflow started to fail with a Missing or incomplete configuration info error.

I am running kubectl with the --local flag, so a config should not be needed. Does anyone know why kubectl suddenly started requiring a config? I can't find any useful info in the Kubernetes GitHub issues, and hours of googling didn't help.

Output of the failed step in GitHub Actions workflow:

Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml

error: Missing or incomplete configuration info.  Please point to an existing, complete config file:


  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
Error: Process completed with exit code 1.

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", 
GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean", 
BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc", 
Platform:"linux/amd64"}

Answer 1:


You still need to set the config to access the Kubernetes cluster. Even though you are modifying the file locally, you are still executing a kubectl command that has to be run against the cluster. By default, kubectl looks for a file named config in the $HOME/.kube directory.

The error "current-context is not set" indicates that there is no current context set for the cluster, so kubectl cannot be executed against one. You can create a context for a Service Account using this tutorial.
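A minimal sketch of building such a kubeconfig context for a Service Account; the cluster name, server URL, user name, and token below are placeholders, not values from the question:

```shell
# All names, the server URL, and the token are hypothetical placeholders.
kubectl config set-cluster my-cluster \
  --server=https://my-cluster.example.com \
  --certificate-authority=ca.crt

kubectl config set-credentials ci-bot \
  --token="$SA_TOKEN"                       # Service Account token

kubectl config set-context ci \
  --cluster=my-cluster --user=ci-bot

kubectl config use-context ci               # sets current-context
```

After `use-context`, the "current-context is not set" error should no longer appear, because kubectl now has a complete config to fall back on.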




Answer 2:


As a workaround I installed kind (the job does take longer to finish, but at least it works, and the cluster can be reused for e2e tests later).

Added this step:

- name: Setup kind
  uses: engineerd/setup-kind@v0.5.0

- name: Check cluster
  run: kubectl version

Also pass --dry-run=client as an option to your kubectl command.
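Applied to the patch command from the question, the workflow step might look like this (the step name is made up; the file name, patch payload, and image tag are the ones from the question):

```yaml
- name: Patch image tag
  run: |
    kubectl patch --local=true --dry-run=client -f authserver-deployment.yaml \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
      -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
```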

I do realize this is not the proper solution.




Answer 3:


Exporting the KUBERNETES_MASTER environment variable should do the trick:

$ export KUBERNETES_MASTER=localhost:8081  # port 8081 chosen arbitrarily, just to show the variable is picked up

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate
:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8081 was refused - did you specify the right host or port?

# Notice the port 8081 in the error message ^^^^^^

Now patch should also work as before:

$ kubectl patch --local=true -f testnode.yaml -p '{"metadata":{"managedFields":[]}}'    # to get the file content use  -o yaml 
node/k8s-w1 patched

Alternatively, you can update kubectl to a later version (v1.18.8 works fine even without the trick).
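In a GitHub Actions workflow, the same trick might look like this (the step name is made up; the patch command is the one from the question, and the KUBERNETES_MASTER value is a dummy endpoint that is never contacted when --local is used):

```yaml
- name: Patch image tag locally
  env:
    KUBERNETES_MASTER: localhost:8081  # dummy endpoint; never contacted with --local
  run: |
    kubectl patch --local=true -f authserver-deployment.yaml \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' \
      -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
```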

Explanation:

The change was likely introduced by PR #86173, "stop defaulting kubeconfig to http://localhost:8080".

The change was reverted for later 1.18.x versions in PR #90243, "Revert 'stop defaulting kubeconfig to http://localhost:8080'"; see issue #90074, "kubectl --local requires a valid kubeconfig file", for the details.



Source: https://stackoverflow.com/questions/65522661/running-kubectl-patch-local-fails-due-to-missing-config
