I have a Kubernetes cluster on Azure, and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster. I want to give each team their own kubeconfig, scoped to their namespace. How can I do that?
You can create a separate kubeconfig for each service account. Look up the name of the secret that holds the service account's token, then generate the file:

# your server URL goes here
server=https://localhost:8443
# the name of the secret containing the service account token goes here
name=default-token-sg96k

# the CA certificate stays base64-encoded (certificate-authority-data expects base64);
# the token and namespace are decoded for use as plain values
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: ${namespace}
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > sa.kubeconfig
kubectl can also be configured to use a service account directly. To do so, you need the cluster URL, the cluster CA certificate, and the service account token:
KUBE_API_EP='URL+PORT'
KUBE_API_TOKEN='TOKEN'
KUBE_CERT='REDACTED'

# write the CA certificate to a file so it can be embedded into the kubeconfig
echo "$KUBE_CERT" > deploy.crt

kubectl config set-cluster k8s --server=https://$KUBE_API_EP \
    --certificate-authority=deploy.crt \
    --embed-certs=true
kubectl config set-credentials gitlab-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user gitlab-deployer
kubectl config use-context k8s
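If you do not already have the token and CA certificate at hand, one way to read them before running the commands above is from the service account's token secret, as in the first snippet. A sketch, where the secret name and namespace are placeholders for your own:

# list the secrets in the team's namespace to find the token secret name
secret=default-token-sg96k
KUBE_API_TOKEN=$(kubectl -n test-namespace get secret "$secret" -o jsonpath='{.data.token}' | base64 --decode)
KUBE_CERT=$(kubectl -n test-namespace get secret "$secret" -o jsonpath='{.data.ca\.crt}' | base64 --decode)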
The resulting configuration is stored in ~/.kube/config. The cluster can now be accessed with:
kubectl --context=k8s get pods -n test-namespace
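You can also check what the service account is allowed to do in that namespace with kubectl's built-in auth subcommand:

kubectl --context=k8s auth can-i --list -n test-namespace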
Add the --insecure-skip-tls-verify flag if the API server uses a self-signed certificate and you have not configured its CA certificate.
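For example, a minimal sketch of the set-cluster step without a CA certificate (this disables TLS verification, so only use it for testing):

kubectl config set-cluster k8s --server=https://$KUBE_API_EP \
    --insecure-skip-tls-verify=true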