kubespray

Why can't I access a Kubernetes pod from another node's IP?

Submitted by 我们两清 on 2020-08-10 18:51:46
Question: I've installed a Kubernetes cluster with the help of Kubespray. The cluster has 3 nodes (2 masters & 1 worker): node1 - 10.1.10.110, node2 - 10.1.10.111, node3 - 10.1.10.112.

    $ kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    node1   Ready    master   12d   v1.18.5
    node2   Ready    master   12d   v1.18.5
    node3   Ready    <none>   12d   v1.18.5

I deployed this pod on node1 (10.1.10.110) and exposed a NodePort service as shown:

    NAMESPACE   NAME        READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
    default     pod/httpd
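For context: a NodePort service makes kube-proxy publish the chosen port on every node, so the pod should normally be reachable via any node's IP, not only the node hosting it. A minimal sketch of such a service, assuming the pod is labeled app: httpd and serves HTTP on port 80 (the label, service name, and nodePort are illustrative, not taken from the question):

    apiVersion: v1
    kind: Service
    metadata:
      name: httpd-nodeport
    spec:
      type: NodePort
      selector:
        app: httpd            # assumed pod label
      ports:
        - port: 80            # service port inside the cluster
          targetPort: 80      # container port
          nodePort: 30080     # must fall in the default 30000-32767 range

If curl http://<any-node-ip>:30080 answers only on the hosting node, inter-node pod traffic (CNI overlay or host firewall) is the usual culprit rather than the service itself.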

Kubespray dashboard shows "forbidden" warning popups

Submitted by 怎甘沉沦 on 2020-05-15 08:45:05
Question: I am trying to set up a new Kubernetes cluster on one machine with Kubespray (commit 7e84de2ae116f624b570eadc28022e924bd273bc). After running the playbook (on a fresh Ubuntu 16.04), I open the dashboard and see these warning popups:

    - configmaps is forbidden: User "system:serviceaccount:default:default" cannot list configmaps in the namespace "default"
    - persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims in the namespace
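These messages are RBAC denials: the dashboard is authenticating as the default:default service account, which has no roles bound to it. One quick way to confirm the diagnosis in a throwaway cluster is to grant that account the built-in read-only view role (a lab-only sketch; binding broad roles to the default service account is not advisable in production, and the binding name here is arbitrary):

    kubectl create clusterrolebinding dashboard-default-view \
      --clusterrole=view \
      --serviceaccount=default:default

If the popups disappear afterwards, the proper fix is a dedicated service account for the dashboard with a suitably scoped role.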

No route to host from some Kubernetes containers to other containers in the same cluster

Submitted by 半世苍凉 on 2020-02-01 09:48:28
Question: This is a Kubespray deployment using Calico. All the defaults were left as-is, except that there is a proxy. Kubespray ran to the end without issues. Then access to Kubernetes services started failing, and after investigation there was no route to host to the coredns service, while accessing a K8S service by IP worked. Everything else seems to be correct, so I am left with a cluster that works, but without DNS. Here is some background information. Starting up a busybox container: #
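The standard in-cluster reproduction for this symptom is a throwaway busybox pod querying cluster DNS by name and then against the DNS service's ClusterIP directly. A sketch of such a check (kubernetes.default resolves in any healthy cluster; 10.233.0.3 is Kubespray's default coredns ClusterIP and is an assumption here; busybox:1.28 is used because nslookup is broken in some later busybox images):

    # Start a disposable busybox shell inside the cluster
    $ kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- sh
    / # nslookup kubernetes.default              # via the pod's /etc/resolv.conf
    / # nslookup kubernetes.default 10.233.0.3   # against the DNS ClusterIP directly

If the second lookup fails with "no route to host", the problem is network reachability to the DNS pods (Calico routing, iptables, or proxy-related NO_PROXY gaps) rather than coredns configuration.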

Kubespray fails with “Found multiple CRI sockets, please use --cri-socket to select one”

Submitted by 。_饼干妹妹 on 2019-12-25 00:57:40
Question: Problem encountered: when deploying a cluster with Kubespray, CRI-O, and Cilium, I get an error about having multiple CRI sockets to choose from.

Full error:

    fatal: [p3kubemaster1]: FAILED! => {"changed": true, "cmd": " mkdir -p /etc/kubernetes/external_kubeconfig && /usr/local/bin/kubeadm init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes/external_kubeconfig --cert-dir /etc/kubernetes/ssl --apiserver-advertise-address 10.10.3.15 --apiserver-bind-port 6443 >/dev/null && cat /etc
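kubeadm raises this error when it autodetects more than one container runtime socket on the host, which is typical when CRI-O is installed alongside a leftover Docker/containerd socket. The error message itself names the escape hatch: pass --cri-socket to pin the runtime explicitly. A sketch of the failing command with the flag added, assuming CRI-O's conventional socket path and that this kubeadm version accepts the flag on this phase subcommand (its own error message suggests it does):

    /usr/local/bin/kubeadm init phase kubeconfig admin \
      --kubeconfig-dir /etc/kubernetes/external_kubeconfig \
      --cert-dir /etc/kubernetes/ssl \
      --apiserver-advertise-address 10.10.3.15 \
      --apiserver-bind-port 6443 \
      --cri-socket /var/run/crio/crio.sock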

Can I reach a container by its hostname from another container running on another node in Kubernetes?

Submitted by 懵懂的女人 on 2019-12-04 05:17:21
Question: I believe my question is pretty straightforward. I'm doing my prerequisites to install a Kubernetes cluster on bare metal. Let's say I have:

- master - hostname of a Docker DB container pinned to the first node
- slave - hostname of a Docker DB container pinned to the second node

Can I communicate with master from any container (app, etc.) in the cluster, regardless of whether it runs on the same node or not? Is this the default behaviour, or does anything additional need to be done? I assume that I need to
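Pod hostnames are not resolvable across nodes by default; cluster-wide discovery in Kubernetes goes through Service DNS instead. A minimal sketch, assuming the master DB pod carries the label app: db-master and listens on 5432 (label and port are illustrative, not from the question):

    # Resolvable as "master" in the same namespace, or as
    # master.<namespace>.svc.cluster.local from anywhere in the cluster.
    apiVersion: v1
    kind: Service
    metadata:
      name: master
    spec:
      selector:
        app: db-master     # assumed pod label
      ports:
        - port: 5432       # assumed DB port
          targetPort: 5432

With this in place, any container in the cluster can reach the database at master:5432 regardless of which node it lands on; for stable per-pod hostnames, a headless Service plus StatefulSet is the idiomatic route.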