I have followed the Hello Node tutorial on http://kubernetes.io/docs/hellonode/.
When I run:
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node
I get:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The issue is that your kubeconfig is not set up correctly.
To auto-generate it run:
gcloud container clusters get-credentials "CLUSTER NAME"
This worked for me.
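For example, with a hypothetical cluster named hello-cluster in zone us-central1-b (substitute your own cluster name and zone):
gcloud container clusters get-credentials hello-cluster --zone us-central1-b
# Confirm kubectl now targets the cluster instead of localhost:8080
kubectl config current-context
kubectl get nodes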
I got this issue when using "Bash on Windows" (WSL) with Azure Kubernetes Service:
az aks get-credentials -n <myCluster> -g <myResourceGroup>
The config file is auto-generated and placed in ~/.kube/config of the OS the command runs under (Windows in my case), not in the WSL home directory.
To solve this, run from the Bash command line:
cp <yourWindowsPathToConfigPrintedFromAboveCommand> ~/.kube/config
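For example, with a hypothetical Windows username alice (use the actual path printed by the az command above; WSL mounts the C: drive under /mnt/c):
mkdir -p ~/.kube
cp /mnt/c/Users/alice/.kube/config ~/.kube/config
kubectl cluster-info   # should no longer point at localhost:8080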
After running the "kubeadm init" command, Kubernetes asks you to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
But if you run these commands as a regular user, you will get "The connection to the server localhost:8080 was refused - did you specify the right host or port?" when you then try to use kubectl as the root user, and vice versa. So run kubectl as the same user who executed the commands above.
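If you really do need kubectl as root instead, the kubeadm output also offers the alternative of pointing KUBECONFIG at the admin config for that shell session:
export KUBECONFIG=/etc/kubernetes/admin.conf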
I had the same error; this worked for me. Run:
minikube status
If the response is:
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
run minikube start. Once the status shows:
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
you can proceed.
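To double-check that kubectl now points at the minikube cluster rather than localhost:8080 (assuming the default context name "minikube"):
kubectl config current-context   # should print "minikube"
kubectl get nodes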
Make sure your gcloud config is set to the right project:
gcloud config set project [PROJECT_ID]
List the clusters in the account:
gcloud container clusters list
Check the output:
NAME           LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION  NUM_NODES  STATUS
alpha-cluster  asia-south1-a  1.9.7-gke.6     35.200.254.78  f1-micro      1.9.7-gke.6   3          RUNNING
Run the following command:
gcloud container clusters get-credentials your-cluster-name --zone your-zone --project your-project
Fetching cluster endpoint and auth data.
kubeconfig entry generated for alpha-cluster.
Now kubectl commands such as kubectl get nodes -o wide should work. You should be good to go.
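As an optional check, the context that get-credentials generates is named with the pattern gke_<project>_<zone>_<cluster>, so you can confirm kubectl is pointed at it (example name below is illustrative):
kubectl config current-context
# e.g. gke_my-project_asia-south1-a_alpha-cluster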
If you created a cluster on AWS using kops, then kops creates ~/.kube/config
for you, which is nice. But if someone else needs to connect to that cluster, then they also need to install kops so that it can generate the kubeconfig for them:
# Reuse the AWS credentials already configured via `aws configure`
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

# Short alias for the context; CLUSTER_FULL_NAME, CLUSTER_REGION and
# KOPS_STATE_STORE must already be set to your cluster's values
export CLUSTER_ALIAS=kubernetes-cluster

kubectl config set-context ${CLUSTER_ALIAS} \
    --cluster=${CLUSTER_FULL_NAME} \
    --user=${CLUSTER_FULL_NAME}
kubectl config use-context ${CLUSTER_ALIAS}

kops export kubecfg --name ${CLUSTER_FULL_NAME} \
    --region=${CLUSTER_REGION} \
    --state=${KOPS_STATE_STORE}
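As a quick sanity check (assuming the variables above are set), the other user can verify the exported context works:
kubectl config get-contexts
kubectl --context=${CLUSTER_ALIAS} get nodes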