kubeadm

Installing Kubernetes inside Docker containers

断了今生、忘了曾经 Submitted on 2019-12-07 03:08:32
Install Kubernetes inside Docker containers, using DinD (Docker in Docker). kubeadm-dind-cluster: a Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes. Based on kubeadm and DIND (Docker in Docker). It supports both local workflows and workflows that use powerful remote machines/cloud instances for building Kubernetes, starting test clusters, and running e2e tests. If you're an application developer, you may be better off with Minikube because it's more mature and less dependent on the local environment, but if you're feeling adventurous you may give kubeadm-dind-cluster a try,
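
For reference, bringing up such a cluster usually amounts to fetching one of the project's versioned scripts and running its up subcommand. A minimal sketch; the script path and name follow the repository's README of that era and may have moved between releases:

    # fetch a versioned cluster script from the kubeadm-dind-cluster repo
    # (location is an assumption based on the project's historical layout)
    wget https://raw.githubusercontent.com/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.13.sh
    chmod +x dind-cluster-v1.13.sh

    ./dind-cluster-v1.13.sh up      # bring up a multi-node cluster inside Docker containers
    ./dind-cluster-v1.13.sh down    # stop it, keeping state for a faster next "up"
    ./dind-cluster-v1.13.sh clean   # remove everything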

Kubernetes dashboard showing Unauthorized

人盡茶涼 Submitted on 2019-12-06 19:32:33
I configured a Kubernetes cluster with one master and 4 worker nodes using the kubeadm tool, locally. All nodes are running fine. I deployed an app and am able to access it from a browser. I have tried many ways to create a dashboard using kubectl, but I have failed.

TRY 1: ran kubectl proxy directly with the command below, then tried to access the dashboard at http://172.20.22.101:8001/api/v1 , but it says Unauthorized:

    $ sudo kubectl proxy --address="172.20.22.101" -p 8001

TRY 2: created a dashboard-admin.yaml file with the content below:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
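
The excerpt stops before the rest of the manifest, but a dashboard-admin binding of this shape typically grants the dashboard's service account the cluster-admin role. A minimal sketch, assuming the dashboard runs under the stock kubernetes-dashboard service account in kube-system (the binding name is illustrative):

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: dashboard-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin            # full cluster rights; fine for a lab, not for production
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard     # service account the dashboard pod runs as
      namespace: kube-system

Applied with kubectl apply -f dashboard-admin.yaml, after which the dashboard can authenticate with that account's token.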

Kubernetes WatchConnectionManager: Exec Failure: HTTP 403

纵然是瞬间 Submitted on 2019-12-06 14:18:15
I'm experiencing "Error: Expected HTTP 101 response but was '403 Forbidden'". After I set up a new Kubernetes cluster using kubeadm, with a single master and two workers, submitting a PySpark sample app produced the error message below.

spark-submit command:

    spark-submit --master k8s://master-host:port \
      --deploy-mode cluster --name test-pyspark \
      --conf spark.kubernetes.container.image=mm45/pyspark-k8s-example:2.4.1 \
      --conf spark.kubernetes.pyspark.pythonVersion=3 \
      --conf spark.executor.instances=1 \
      --conf spark.executor.memory=1000m \
      --conf spark.driver.memory=1000m \
      --conf spark.executor
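
One common cause of a 403 on the driver's watch connection is RBAC: the driver pod's service account is not allowed to watch and manage executor pods. A minimal sketch of the usual remedy, assuming the default namespace and a service account named spark (both illustrative):

    # create a service account for the Spark driver and let it manage pods
    kubectl create serviceaccount spark
    kubectl create clusterrolebinding spark-role \
      --clusterrole=edit --serviceaccount=default:spark

    # then tell spark-submit to run the driver under that account:
    #   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark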

Is there a best practice to reboot a cluster

折月煮酒 Submitted on 2019-12-06 09:34:07
I followed Alex Ellis' excellent tutorial that uses kubeadm to spin up a K8s cluster on Raspberry Pis. It's unclear to me what the best practice is when I wish to power-cycle the Pis. I suspect sudo systemctl reboot is going to result in problems. I'd prefer not to delete and recreate the cluster each time, starting with kubeadm reset. Is there a way I can shut down and restart the machines without deleting the cluster? Thanks! Source: https://stackoverflow.com/questions/48362855/is-there-a-best-practice-to-reboot-a-cluster
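
A graceful per-node power cycle usually follows the drain/uncordon pattern rather than deleting anything. A minimal sketch; the node name is illustrative:

    # evict workloads and mark the node unschedulable
    # (--delete-local-data is needed if pods use emptyDir volumes)
    kubectl drain pi-node-1 --ignore-daemonsets --delete-local-data

    # reboot the machine; the kubelet and the node object survive this
    sudo systemctl reboot

    # once it is back, allow scheduling on it again
    kubectl uncordon pi-node-1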

Requests timing out when accessing a Kubernetes clusterIP service

前提是你 Submitted on 2019-12-06 05:19:38
Question: I am looking for help troubleshooting this basic scenario, which isn't working: three nodes installed with kubeadm on VirtualBox VMs running on a MacBook:

    sudo kubectl get nodes
    NAME                STATUS   ROLES    AGE   VERSION
    kubernetes-master   Ready    master   4h    v1.10.2
    kubernetes-node1    Ready    <none>   4h    v1.10.2
    kubernetes-node2    Ready    <none>   34m   v1.10.2

The VirtualBox VMs have 2 adapters: 1) host-only, 2) NAT. The node IPs as seen from the guest computers are: kubernetes-master (192.168.56.3), kubernetes-node1 (192.168.56.4
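
With this dual host-only/NAT layout, a frequent culprit is the kubelet advertising the NAT address instead of the host-only one, which breaks cross-node service traffic. A sketch of the usual fix, pinning each node's address explicitly; the drop-in file location varies by kubeadm version and distro, and the address shown is illustrative:

    # on each VM, tell the kubelet which interface address to advertise
    echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.56.4' | sudo tee /etc/default/kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

    # verify the InternalIP now matches the host-only adapter
    kubectl get nodes -o wide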

K8s Dashboard not logging in (k8s version 1.11)

风流意气都作罢 Submitted on 2019-12-05 18:05:14
I set up a K8s (1.11) cluster using the kubeadm tool, with one master and one node. I deployed the dashboard UI there:

    kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Then I created a service account (following this link: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user ):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kube-system

and:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind:
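
For completeness, once the sample user exists, the dashboard login token is read from the service account's secret. A sketch following the same wiki guide; the grep/awk pipeline assumes default secret naming:

    # print the bearer token for the admin-user service account
    kubectl -n kube-system describe secret \
      $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The "token:" value in the output is what the dashboard login screen expects.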

How to use kubeadm upgrade to change some features in kubeadm-config

我怕爱的太早我们不能终老 Submitted on 2019-12-05 10:26:22
I want to install kube-prometheus on my existing Kubernetes cluster (v1.10). Before that, the doc says I need to change the listen address of the controller-manager/scheduler from 127.0.0.1 to 0.0.0.0, and it recommends using kubeadm config upgrade to change these settings:

    controllerManagerExtraArgs:
      address: 0.0.0.0
    schedulerExtraArgs:
      address: 0.0.0.0

After reading the doc, I tried the command below, but it didn't work:

    kubeadm upgrade --feature-gates controllerManagerExtraArgs.address=0.0.0.0

I know I can use kubectl -n kube-system edit cm kubeadm-config to modify the ConfigMap directly, just want to
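
These fields are not feature gates; they belong in the kubeadm configuration object, which is normally fed to kubeadm upgrade apply via --config. A minimal sketch for the v1.10-era API (the file name and target patch version are illustrative):

    # kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    controllerManagerExtraArgs:
      address: 0.0.0.0
    schedulerExtraArgs:
      address: 0.0.0.0

Then apply it so kubeadm regenerates the static pod manifests with the new flags:

    sudo kubeadm upgrade apply v1.10.2 --config kubeadm-config.yaml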

How to add roles to nodes in Kubernetes?

梦想的初衷 Submitted on 2019-12-03 00:22:27
When I provision a Kubernetes cluster using kubeadm, my nodes get tagged with role <none>. It's a known bug in Kubernetes, and a PR is currently in progress. However, I would like to know if there is an option to add a role name manually for the node?

    root@ip-172-31-14-133:~# kubectl get nodes
    NAME               STATUS   ROLES    AGE   VERSION
    ip-172-31-14-133   Ready    master   19m   v1.9.3
    ip-172-31-6-147    Ready    <none>   16m   v1.9.3

Answer: A node role is just a label with the format node-role.kubernetes.io/<role>. You can add this yourself with kubectl label.

ram dhakne: This worked for me: kubectl label node cb2.4xyz.couchbase.com node
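
Following that answer, the complete labeling command looks like the sketch below; the node and role names are illustrative:

    # assign the "worker" role to a node; the trailing "=" sets an empty label value
    kubectl label node ip-172-31-6-147 node-role.kubernetes.io/worker=

    # the ROLES column now reports "worker"
    kubectl get nodes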

kubectl not able to pull the image from private repository

喜你入骨 Submitted on 2019-12-02 20:56:44
I am running a kubeadm alpha version to set up my Kubernetes cluster. From Kubernetes, I am trying to pull Docker images hosted in a Nexus repository. Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this? Details are in https://github.com/kubernetes/kubernetes/issues/41536

Pod definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      labels:
        name: test
    spec:
      containers:
      - image: 123.456.789.0:9595/test
        name: test
        ports:
        - containerPort: 8443
      imagePullSecrets:
      - name: my-secret

You need to refer to the
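
The imagePullSecrets entry only works if a matching secret actually exists in the pod's namespace. A minimal sketch of creating it; the registry address is copied from the pod spec above, and the credentials are placeholders:

    # create a docker-registry secret the pod can reference as "my-secret"
    kubectl create secret docker-registry my-secret \
      --docker-server=123.456.789.0:9595 \
      --docker-username=<nexus-user> \
      --docker-password=<nexus-password> \
      --docker-email=<email>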