gke-networking

Unhealthy nodes for load balancer when using nginx ingress controller on GKE

Submitted by 江枫思渺然 on 2020-06-24 11:44:09
Question: I have set up the nginx ingress controller following this guide. The ingress works well and I am able to visit the defaultbackend service and my own service as well. But when reviewing the objects created in the Google Cloud Console, in particular the load balancer object which was created automatically, I noticed that the health checks for the other nodes are failing. Is this because the ingress controller process is only running on one node, so it's the only one that passes the
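A common explanation (not confirmed by the excerpt above) is that the ingress-nginx controller is exposed through a LoadBalancer Service with externalTrafficPolicy: Local, in which case the load balancer's health check only passes on nodes that actually run a controller pod. A quick way to check, assuming the standard ingress-nginx namespace and Service name (yours may differ depending on the install guide):

    # Namespace/Service names follow the stock ingress-nginx install; adjust to your cluster.
    kubectl -n ingress-nginx get svc ingress-nginx-controller \
      -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
    # "Local" preserves client source IPs but marks nodes without a controller pod
    # as unhealthy; "Cluster" makes every node pass the health check.
    kubectl -n ingress-nginx get pods -o wide   # shows which nodes run the controller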

VPN access to in-house network not working after GKE cluster upgrade to 1.14.6

Submitted by 萝らか妹 on 2020-06-22 10:32:30
Question: We upgraded our existing development cluster from 1.13.6-gke.13 to 1.14.6-gke.13 and our pods can no longer reach our in-house network over our Google Cloud VPN. Our production cluster (still on 1.13) shares the same VPC network and VPN tunnels and is still working fine. The only thing that changed was the upgrade of the admin node and node pool to 1.14 on the development cluster. I have opened a shell into a pod on the development cluster and attempted to ping the IP address of an in-house
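A frequently reported culprit for exactly this upgrade path (a guess, not confirmed by the excerpt) is a change in IP-masquerading behavior around GKE 1.14: traffic from pods to RFC-1918 destinations can leave the node with the pod IP as source, which the in-house side cannot route back over the VPN. A minimal sketch of how to check and, if needed, configure the ip-masq-agent so that traffic to in-house ranges is SNATed to the node IP; all CIDR values below are illustrative, not taken from the question:

    # Is the agent configured, and with which ranges?
    kubectl -n kube-system get configmap ip-masq-agent -o yaml
    # Example config: list only the cluster's own ranges, so anything NOT listed
    # (e.g. the in-house network) is masqueraded to the node IP.
    cat <<'EOF' | kubectl -n kube-system apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    data:
      config: |
        nonMasqueradeCIDRs:
          - 10.48.0.0/14    # pod range (example value)
          - 10.52.0.0/20    # service range (example value)
        resyncInterval: 60s
    EOF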

Understanding --master-ipv4-cidr when provisioning private GKE clusters

Submitted by 北城以北 on 2019-12-04 11:32:16
Question: I am trying to further understand what exactly is happening when I provision a private cluster in Google's Kubernetes Engine. Google provides this example here of provisioning a private cluster where the control plane services (e.g. Kubernetes API) live on the 172.16.0.16/28 subnet. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters gcloud beta container clusters create pr-clust-1 \ --private-cluster \ --master-ipv4-cidr 172.16.0.16/28 \ --enable-ip-alias \ --create-subnetwork "" When I run this command, I see that: I now have a few gke subnets in my VPC belonging to the
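For orientation: the /28 passed to --master-ipv4-cidr is allocated for the Google-managed control plane, which the cluster's VPC reaches over an automatically created VPC peering. A few standard gcloud commands to inspect what the example command produced; the cluster name comes from the example, while the "default" network and the omitted --zone/--region are assumptions you may need to adjust:

    gcloud container clusters describe pr-clust-1 \
      --format='value(privateClusterConfig.masterIpv4CidrBlock, privateClusterConfig.privateEndpoint)'
    gcloud compute networks subnets list --network default     # subnets created in the VPC
    gcloud compute networks peerings list --network default    # gke-... peering to the control plane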