google-kubernetes-engine

Create GKE cluster and namespace with Terraform

Submitted by 寵の児 on 2020-12-15 01:55:33
Question: I need to create a GKE cluster, then create a namespace and install a database into that namespace through Helm. Right now I have gke-cluster.tf, which creates the cluster with a node pool, and helm.tf, which has the kubernetes provider and a helm_release resource. The first apply creates the cluster but then tries to install the database while the namespace doesn't exist yet, so I have to run terraform apply again, and then it works. I want to avoid a setup with multiple folders and run terraform apply only once. What's the good practice for this situation?
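A minimal sketch of one way to get this into a single apply, assuming the cluster resource is called google_container_cluster.primary and that the chart, repository, and namespace names are illustrative placeholders (none of these identifiers come from the question):

# Point the kubernetes and helm providers at the cluster's own outputs,
# so they can only connect once the cluster exists.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.primary.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
  }
}

# Create the namespace in Terraform instead of assuming it already exists.
resource "kubernetes_namespace" "db" {
  metadata {
    name = "db"
  }
  depends_on = [google_container_cluster.primary]
}

resource "helm_release" "db" {
  name       = "db"
  repository = "https://charts.bitnami.com/bitnami"   # placeholder repository
  chart      = "postgresql"                            # placeholder chart
  namespace  = kubernetes_namespace.db.metadata[0].name
}

Because helm_release references kubernetes_namespace.db, Terraform orders it after the namespace (and therefore after the cluster), so one terraform apply can create cluster, namespace, and chart in sequence.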

gke cert manager certificate in progress

Submitted by 佐手、 on 2020-12-13 07:54:11
Question: I'm trying to make my Google services more secure by moving from HTTP to HTTPS. I've been following the cert-manager documentation to get it working: https://cert-manager.io/docs/configuration/acme/dns01/google/ I can't install Helm or an NGINX ingress on the cluster, which is why I'm using the dns01 challenge instead of http01. I installed cert-manager with the regular manifests, v0.11.0. After creating a DNS admin service account, I used this YAML to create the issuer: apiVersion: cert-manager.io
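For reference, a dns01 CloudDNS issuer following the linked cert-manager docs looks roughly like the sketch below; the issuer name, email, project ID, and secret name are placeholders, and the exact API version and fields should be checked against the v0.11.0 docs:

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                     # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
    - dns01:
        cloudDNS:
          project: my-gcp-project                # placeholder GCP project ID
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct # secret holding the DNS admin key
            key: key.json

A certificate stuck "in progress" with dns01 commonly means the challenge TXT record has not propagated yet or the service account lacks the needed DNS permissions.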

Kubernetes Cron Job Terminate Pod before creation of next schedule

Submitted by 最后都变了- on 2020-12-13 04:53:47
Question: I have a Kubernetes CronJob that runs a scheduled task every 5 minutes. I want to make sure that when a new pod is created at the next schedule time, the earlier pod has already been terminated; the earlier pod should be terminated before the new one is created. Can Kubernetes terminate the earlier pod before creating the new one? My YAML is: apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-scheduled spec: schedule: "*/5 * * * *" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1
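As a sketch of the relevant knob: concurrencyPolicy: Replace (rather than Forbid) tells the CronJob controller to cancel the still-running Job and replace it with a new one at the next schedule time. The jobTemplate below is a placeholder, not the asker's workload:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-scheduled
spec:
  schedule: "*/5 * * * *"
  # Replace cancels the currently running Job before starting the new run;
  # Forbid would instead skip the new run while the old one is still active.
  concurrencyPolicy: Replace
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox                      # placeholder image
            command: ["sh", "-c", "echo run"]   # placeholder command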

chown: /var/lib/postgresql/data/postgresql.conf: Read-only file system

Submitted by 末鹿安然 on 2020-12-13 03:17:46
Question: I solved a permission issue when mounting /var/lib/postgresql/data by following this answer with initContainers. Now I'm trying to mount postgresql.conf as a volume, and I'm running into a similar permission issue that throws chown: /var/lib/postgresql/data/postgresql.conf: Read-only file system. What could I be missing? I've tried a bunch of different variations with little luck. apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: postgres labels: app: postgres spec: serviceName:
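One likely cause: ConfigMap volumes are mounted read-only, so a chown against the mounted postgresql.conf fails. A sketch of a workaround, assuming the config lives in a ConfigMap named postgres-config (all names below are illustrative, not the asker's manifest), is to mount the ConfigMap at a separate path and copy it into the writable data volume from the init container:

# Fragment of a StatefulSet pod spec
spec:
  initContainers:
  - name: init-config
    image: busybox
    command:
    - sh
    - -c
    # Copy the read-only ConfigMap file into the writable data volume and fix
    # ownership there instead of chown-ing the ConfigMap mount itself
    # (999 as the postgres UID of the official image is an assumption).
    - cp /config/postgresql.conf /var/lib/postgresql/data/ && chown 999:999 /var/lib/postgresql/data/postgresql.conf
    volumeMounts:
    - name: postgres-config
      mountPath: /config
    - name: postgres-data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: postgres-config
    configMap:
      name: postgres-config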

Kubernetes ingress redirects http to https

Submitted by 拟墨画扇 on 2020-12-12 06:14:17
Question: I need some help from the community; I'm pretty new to Kubernetes. I need the URL of my host, defined in the deployment.yaml file, to redirect from HTTP to HTTPS, using whatever technique works. Below is the infrastructure as code that I have. Deployment.yaml: apiVersion: apps/v1 kind: Deployment metadata: name: web namespace: default spec: selector: matchLabels: run: web template: metadata: labels: run: web spec: containers: - image: gcr.io/google-samples/hello-app:1.0
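One option on GKE, assuming the standard external HTTP(S) load balancer Ingress and a GKE version that supports FrontendConfig, is the built-in HTTPS redirect. The resource names below are illustrative, and the TLS and backend details would come from the asker's existing manifests:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: http-to-https
spec:
  redirectToHttps:
    enabled: true          # HTTP requests get redirected to HTTPS at the load balancer
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Attach the FrontendConfig to the Ingress.
    networking.gke.io/v1beta1.FrontendConfig: "http-to-https"
spec:
  backend:
    serviceName: web       # illustrative Service in front of the 'web' Deployment
    servicePort: 8080

If an NGINX ingress controller were used instead, the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation would be the equivalent lever.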

How do managed Kubernetes providers hide the master nodes?

Submitted by 坚强是说给别人听的谎言 on 2020-12-08 07:24:51
Question: If I run kubectl get nodes on GKE, EKS, or DigitalOcean Kubernetes, I only see the worker nodes. How are these systems architected at the network or application level to create this separation between workers and masters? Answer 1: You can run the Kubernetes control plane outside Kubernetes as long as the worker nodes have network access to the control plane. This approach is used on most managed Kubernetes solutions. Answer 2: A Container Engine cluster is a group of Compute Engine instances running
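This separation also shows up in infrastructure as code: on GKE you declare the cluster (whose control plane Google hosts outside your project) and the worker node pools, but never any master instances. A minimal illustrative Terraform sketch, with placeholder names and sizes:

resource "google_container_cluster" "primary" {
  name     = "demo-cluster"
  location = "europe-west1"
  # The control plane behind this cluster's endpoint is fully managed by Google;
  # it never appears in kubectl get nodes or among your Compute Engine VMs.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "workers" {
  name       = "workers"
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 3
  node_config {
    machine_type = "e2-standard-2"   # placeholder machine type
  }
}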
