google-kubernetes-engine

Unable to delete all pods in Kubernetes - Clear/restart Kubernetes

Submitted by 自闭症网瘾萝莉.ら on 2020-07-22 03:13:45

Question: I am trying to delete/remove all the pods running in my environment. When I issue `docker ps` I get the output below (this is a sample screenshot); as you can see, they are all K8s containers. I would like to delete/remove all of the pods. I tried all of the approaches below, but they keep reappearing again and again: `sudo kubectl delete --all pods --namespace=default/kube-public` (returns "no resources found" for both the default and kube-public namespaces), `sudo kubectl delete --all pods --namespace=kube`
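Two things are usually at play here. First, `--namespace` takes a single namespace, so `default/kube-public` matches nothing; second, pods owned by a controller (or defined as static-pod manifests, as control-plane pods are) get recreated the moment they are deleted. A minimal sketch of the distinction, assuming a kubeadm-style node; the deployment names are placeholders:

```shell
# --namespace accepts one namespace at a time, so target each separately:
kubectl delete --all pods --namespace=default
kubectl delete --all pods --namespace=kube-public
kubectl delete --all pods --namespace=kube-system   # these will come back

# Pods owned by a Deployment/DaemonSet are recreated immediately.
# To remove them for good, delete the owning controller instead:
kubectl delete deployment --all --namespace=default

# Control-plane pods come from static manifests on the node itself;
# they only stay gone if the manifests are removed or the cluster is
# torn down entirely:
sudo kubeadm reset   # only if the goal is to wipe the whole cluster
```

Note that kube-system pods are expected to exist on a healthy cluster; deleting them merely restarts core components.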

How to run a script as command in Kubernetes yaml file

Submitted by 一笑奈何 on 2020-07-10 07:01:25

Question: I have this YAML file. A Pod will have two containers: one for the main application and the other for logging. I want the logging container to sleep, to help me debug an issue. `apiVersion: apps/v1 kind: Deployment metadata: name: codingjediweb spec: replicas: 2 selector: matchLabels: app: codingjediweb template: metadata: labels: app: codingjediweb spec: volumes: - name: shared-logs emptyDir: {} containers: - name: codingjediweb image: docker.io/manuchadha25/codingjediweb:03072020v2 volumeMounts:`
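The excerpt above cuts off at `volumeMounts:`, so the sidecar definition is missing. A hedged sketch of how the logging container could be made to sleep via `command`/`args` (the sidecar's name, image, and mount path are assumptions, since they are truncated away in the question):

```yaml
      containers:
      - name: codingjediweb
        image: docker.io/manuchadha25/codingjediweb:03072020v2
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app     # mount path assumed
      - name: logging-sidecar         # name assumed; truncated in the question
        image: busybox
        # `command` overrides the image ENTRYPOINT and `args` overrides CMD,
        # so this keeps the container idle for an hour for debugging:
        command: ["sh", "-c"]
        args: ["sleep 3600"]
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
```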

How do I access my Cassandra/Kubernetes cluster from outside the cluster?

Submitted by 时光毁灭记忆、已成空白 on 2020-07-09 12:08:13

Question: I have started using Cass-Operator and the setup worked like a charm! https://github.com/datastax/cass-operator. I have an issue, though. My cluster is up and running on GCP, but how do I access it from my laptop (basically, from outside)? Sorry, I am new to Kubernetes, so I do not know how to reach the cluster from outside. I can see the nodes are up on the GCP dashboard, and I can ping the external IP of the nodes from my laptop, but when I run `cqlsh external_ip 9042` the connection fails. How do I
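The connection fails because in-cluster services are `ClusterIP` by default: a node's external IP answers pings, but nothing is listening on port 9042 there. A sketch of two common ways out, assuming the service name `cluster1-dc1-service` in the `cass-operator` namespace (cass-operator typically creates a `<cluster>-<datacenter>-service`; the actual name may differ):

```shell
# Quick debugging path: tunnel the CQL port to the laptop.
kubectl port-forward -n cass-operator svc/cluster1-dc1-service 9042:9042
# then, in another terminal on the laptop:
cqlsh 127.0.0.1 9042

# More permanent path: expose the service through a LoadBalancer so GCP
# allocates a public IP (mind the security implications of exposing
# Cassandra publicly):
kubectl expose service cluster1-dc1-service -n cass-operator \
  --type=LoadBalancer --name=cassandra-external --port=9042
kubectl get svc cassandra-external -n cass-operator   # wait for EXTERNAL-IP
```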

Trying to set up an ingress with tls and open to some IPs only on GKE

Submitted by 怎甘沉沦 on 2020-07-09 09:23:43

Question: I'm having trouble setting up an ingress that is open only to some specific IPs. I checked the docs and tried a lot of things, but IPs outside the source range can still get access. It is a Zabbix web interface on an Alpine image with nginx. I set up a service on node port 80, then used an ingress to set up a load balancer on GCP. It is all working and the web interface is fine, but how can I make it accessible only to the desired IPs? My firewall rules are OK, and it is only accessible through the load-balancer IP. Also, I have a
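A common cause of this on GKE is that the default GCE ingress class ignores nginx-style whitelist annotations, so the restriction silently does nothing. A hedged sketch of two alternatives (resource names are placeholders; API versions match the 2020-era clusters in these questions):

```yaml
# Option A: use the nginx ingress controller, whose whitelist
# annotation is enforced (the GCE ingress class ignores it).
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: zabbix-ingress               # name assumed
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.7/32"
---
# Option B: skip the ingress and restrict at the service level;
# GCP enforces loadBalancerSourceRanges in its firewall rules.
apiVersion: v1
kind: Service
metadata:
  name: zabbix-web                   # name assumed
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24
  ports:
  - port: 80
    targetPort: 80
```

With the GCE ingress class itself, IP allow-listing is instead done through a Cloud Armor security policy attached via a BackendConfig.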

Login to GKE via service account with token

Submitted by 核能气质少年 on 2020-07-03 03:05:10

Question: I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and an ingress, and I want to be able to update the images of deployments. I would like to use something like this (remotely): `kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true kubectl config set-credentials foo --token="<TOKEN>" kubectl config set-context my-context --cluster=cluster --user=foo --namespace`
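One way to get a working token is an in-cluster ServiceAccount with just enough RBAC to patch deployments. A sketch under those assumptions (the account, role, and deployment names are placeholders; the token-secret extraction below relies on the automatic token secrets that Kubernetes of this era created for every service account):

```shell
# Create the account and grant only what image updates need:
kubectl create serviceaccount deployer -n default
kubectl create role deployer-role -n default \
  --verb=get,list,patch --resource=deployments
kubectl create rolebinding deployer-binding -n default \
  --role=deployer-role --serviceaccount=default:deployer

# Extract the account's token:
SECRET=$(kubectl get serviceaccount deployer -n default \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n default \
  -o jsonpath='{.data.token}' | base64 --decode)

# Use it remotely, as in the question:
kubectl config set-cluster cluster --server="https://<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="$TOKEN"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/<name> <container>=<image>:<tag>
```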

Kubernetes - Pod which encapsulates DB is crashing

Submitted by 半腔热情 on 2020-06-28 08:48:31

Question: I am experiencing issues when I try to deploy my Django application to a Kubernetes cluster; more specifically, when I try to deploy PostgreSQL. Here is what my .yml deployment file looks like: `apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app: postgres-container tier: backend ports: - protocol: TCP port: 5432 targetPort: 5432 type: ClusterIP --- apiVersion: v1 kind: PersistentVolume metadata: name: postgres-pv labels: type: local spec: accessModes: -`
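For readability, here is the flattened excerpt above re-indented as a manifest, as far as it goes (it is truncated at `accessModes:`, so everything after that point remains unknown):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
    tier: backend
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  accessModes:
  # truncated in the question
```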

Unable to initialize helm (tiller) on newly created GKE cluster

Submitted by 不羁的心 on 2020-06-28 07:15:48

Question: I have just created a GKE cluster on Google Cloud Platform, and I have installed Helm in the cloud console: `$ helm version` gives `version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}`. I have also created the necessary ServiceAccount and ClusterRoleBinding objects: `$ cat helm-rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind:`
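The `helm version` output above is the key detail: Helm v3 removed Tiller entirely, so there is no `helm init` to run and no tiller ServiceAccount or ClusterRoleBinding is needed. A sketch of the Tiller-less workflow (the repo URL is the stable charts location of that era, and the chart is only an example):

```shell
# Helm v3 talks to the cluster directly with the caller's kubeconfig
# credentials; charts install without any server-side component:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update
helm install my-release stable/nginx-ingress   # chart name is an example

# The tiller RBAC objects from the question are only required for
# Helm v2; with v3 they can simply be deleted:
kubectl delete serviceaccount tiller -n kube-system
```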

GCP Kubernetes created 6 nodes when num-nodes was set to 2

Submitted by 淺唱寂寞╮ on 2020-06-28 05:32:51

Question: I am following this tutorial to configure Kubernetes on GCP: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#clean-up. I run this command to create a cluster, following the suggestion from here (GKE: Insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES"): `gcloud container clusters create name-cluster --num-nodes=2`. When I list the nodes using `gcloud compute instances list`, I notice that I have got more than 2 nodes!! Why? NAME LOCATION MASTER_VERSION
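The likely explanation is that the cluster was created as a regional cluster, where `--num-nodes` is the node count per zone; a region typically spans three zones, so 2 per zone yields 6 VMs. A sketch of the options (the region and zone names are examples):

```shell
# Regional cluster: --num-nodes is PER ZONE, so this creates 6 VMs
# (2 nodes x 3 zones):
gcloud container clusters create name-cluster \
  --region europe-west1 --num-nodes=2

# For exactly 2 nodes, either create a zonal cluster:
gcloud container clusters create name-cluster \
  --zone europe-west1-b --num-nodes=2

# ...or pin the regional cluster to a single zone:
gcloud container clusters create name-cluster \
  --region europe-west1 --node-locations europe-west1-b --num-nodes=2
```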