google-kubernetes-engine

How to set a static internal IP on a GKE internal Ingress

Submitted by 一笑奈何 on 2020-08-09 07:16:29
Question: I want to create an internal Ingress for my GKE workloads. Which annotation can I use to assign a static internal IP address/name to the Ingress?

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-https
      namespace: istio-system
      annotations:
        kubernetes.io/ingress.allow-http: "false"
        kubernetes.io/ingress.class: "gce-internal"
        ingress.gcp.kubernetes.io/pre-shared-cert: my-cert
        helm.sh/chart: {{ include "devtools.chart" . }}
        app.kubernetes.io/instance…
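
A hedged sketch of the usual approach: GKE's internal HTTP(S) load balancer can reference a reserved regional internal address by name through the kubernetes.io/ingress.regional-static-ip-name annotation. The address name, subnet, and region below are placeholders, not values from the question:

    # Reserve a regional internal address the Ingress can claim:
    gcloud compute addresses create my-internal-ip \
        --region=us-central1 --subnet=my-subnet \
        --purpose=SHARED_LOADBALANCER_VIP

    # Then reference it from the Ingress metadata:
    metadata:
      annotations:
        kubernetes.io/ingress.class: "gce-internal"
        kubernetes.io/ingress.regional-static-ip-name: "my-internal-ip"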

One node in a GKE cluster cannot pull images from Docker Hub

Submitted by 别等时光非礼了梦想. on 2020-08-08 05:44:28
Question: This is a very weird thing. I created a private GKE cluster with a node pool of 3 nodes, and I have a replica set with 3 pods, some of which get scheduled onto the same node. One of these pods always gets ImagePullBackOff; when I check, the error is:

    Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc =
    Error response from daemon: Get https://registry-1.docker.io/v2/:
    net/http: request canceled while waiting for connection
    (Client.Timeout exceeded while awaiting headers)
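
A common cause, offered as an assumption rather than a confirmed diagnosis: nodes in a private GKE cluster have no public IPs, so pulls from registry-1.docker.io time out unless outbound internet access is provided, for example via Cloud NAT. A minimal sketch (router/NAT names, network, and region are placeholders):

    gcloud compute routers create nat-router \
        --network=my-vpc --region=us-central1
    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

If only some pulls fail while others succeed, NAT port exhaustion is another possibility; raising the per-VM port allocation (e.g. --min-ports-per-vm on the NAT config) is worth checking.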

Using standalone 'gsutil' from within GKE

Submitted by 扶醉桌前 on 2020-08-07 08:40:47
Question: I'm trying to use the standalone gsutil tool from within a container running in a GKE cluster, but I cannot get it to work. I believe the cluster has adequate permissions (see below). However, running ./gsutil ls gs://my-bucket/ yields:

    ServiceException: 401 Anonymous users does not have storage.objects.list access to bucket my-bucket.

Am I missing anything? I don't have a .boto file, as I believe it shouldn't be necessary (or is it?). This is the list of scopes that the cluster and the node pool…
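
Unlike the gsutil bundled with the gcloud SDK, the standalone gsutil does not automatically pick up gcloud credentials. A minimal sketch of one fix, assuming the node pool has a Cloud Storage scope and the pod can reach the GCE metadata server (legacy metadata, not Workload Identity):

    # ~/.boto -- tell boto to fetch credentials from the metadata server
    [GoogleCompute]
    service_account = default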

Monitoring and alerting on pod status or restart with Google Container Engine (GKE) and Stackdriver

Submitted by 隐身守侯 on 2020-08-01 03:10:41
Question: Is there a way to monitor the pod status and restart count of pods running in a GKE cluster with Stackdriver? While I can see CPU, memory, and disk usage metrics for all pods in Stackdriver, there seems to be no way of getting metrics about crashing pods or pods in a replica set being restarted due to crashes. I'm using a Kubernetes replica set to manage the pods, hence they are respawned and created with a new name when they crash. As far as I can tell, the metrics in Stackdriver appear by pod…
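
For quick inspection from the cluster side, restart counts are already part of the pod status; a sketch follows (the containerStatuses[0] index assumes single-container pods). On the Stackdriver side, alerting on a container restart-count metric is the usual route, but the exact metric name depends on the monitoring version in use and is an assumption here:

    # Restart count per pod across all namespaces:
    kubectl get pods --all-namespaces \
        -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount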

Egress traffic from GKE Pod through VPN

Submitted by 三世轮回 on 2020-07-23 07:01:13
Question: I have a VPC network with a subnet in the range 10.100.0.0/16, in which the nodes reside. There is a route, and there are firewall rules, applied to the range 10.180.102.0/23, which route and allow traffic going to/coming from a VPN tunnel. If I deploy a node in the 10.100.0.0/16 range, I can ping my devices in the 10.180.102.0/23 range. However, a pod running inside that node cannot ping the devices in the 10.180.102.0/23 range. I assume it has to do with the fact that the pods live in a different…
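
A hedged sketch of the usual fix: the pod CIDR is typically not advertised over the VPN, so pod traffic to 10.180.102.0/23 has to be masqueraded to the node's IP before it leaves. On GKE this is governed by the ip-masq-agent ConfigMap; listing only the ranges that should keep the pod source IP (and omitting 10.180.102.0/23) causes VPN-bound traffic to SNAT to the node address. The CIDR below is taken from the question; defaults vary by cluster configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    data:
      config: |
        nonMasqueradeCIDRs:
          - 10.100.0.0/16   # cluster-internal traffic keeps the pod IP
        resyncInterval: 60s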

Unable to delete all pods in Kubernetes - Clear/restart Kubernetes

Submitted by 試著忘記壹切 on 2020-07-22 03:18:20
Question: I am trying to delete/remove all the pods running in my environment. When I issue docker ps I get the output below (a sample screenshot); as you can see, they are all Kubernetes containers. I would like to delete/remove all of these pods. I tried all of the approaches below, but the pods keep appearing again and again:

    sudo kubectl delete --all pods --namespace=default/kube-public
    # returns "no resources found" for both the default and kube-public namespaces
    sudo kubectl delete --all pods --namespace=kube…
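
A hedged note on why the pods come back: pods owned by a controller (Deployment, ReplicaSet, DaemonSet, StatefulSet) are recreated as soon as they are deleted, and the kube-system containers visible in docker ps are often static pods managed directly by the kubelet, so deleting the pod objects alone is not enough. A sketch, assuming controller-managed workloads in the default namespace:

    # Delete the owning controllers rather than the pods themselves:
    kubectl delete deployments,replicasets,statefulsets,daemonsets --all -n default

    # Static pods (kubeadm convention; the path is an assumption about the setup)
    # are stopped by removing their manifests on the node:
    sudo rm /etc/kubernetes/manifests/<name>.yaml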