kubernetes-deployment

How to edit all Kubernetes deployments at once

倖福魔咒の submitted on 2020-03-18 10:07:47
Question: We have hundreds of deployments, and in their configs most have imagePullPolicy set to "IfNotPresent" while a few are set to "Always". I now want to modify every deployment that has IfNotPresent to Always. How can we achieve this at a stroke? Ex: kubectl get deployment -n test -o json | jq '.spec.template.spec.containers[0].imagePullPolicy="IfNotPresent"' | kubectl -n test replace -f - The above command helps to reset it for one particular deployment. Answer 1: Kubernetes doesn't
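One common approach (an assumption on my part, since the answer above is cut off) is to loop over the deployments and apply a JSON patch to each; the namespace and container index here come from the question, and note that imagePullPolicy values are case-sensitive (IfNotPresent, Always):

```shell
# Hedged sketch, assumes kubectl access to the cluster: patch the first
# container of every deployment in the "test" namespace to Always.
for d in $(kubectl get deployments -n test -o name); do
  kubectl patch -n test "$d" --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
done
```

Deployments with more than one container would need a patch entry per container index.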

How to properly set up a hostPath persistent volume on Minikube?

对着背影说爱祢 submitted on 2020-02-05 04:26:04
Question: I'm currently working on a Lumen project where we use Minikube as our dev environment. Our host machine's /Users/development/<project name> is mounted at /var/www/html and is working fine. However, I'm facing a storage issue where file writes in /var/www/html/storage/framework fail because the entire /var/www/html directory has 1001:1001 ownership. This is my deployment spec: apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind:
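Since the spec above is truncated, here is one common workaround sketch (an assumption, not the asker's solution): Kubernetes generally does not apply fsGroup to hostPath volumes, so an init container that chowns the directory to the UID the app runs as is the usual fix. The volume name, path, and 1001 UID below are taken from or assumed for this question:

```yaml
# Hypothetical sketch: fix ownership of the storage path before the
# app container starts. Assumes a volume named "app-code" mounted at
# /var/www/html and an app process running as 1001.
spec:
  initContainers:
  - name: fix-storage-perms
    image: busybox
    command: ["sh", "-c", "chown -R 1001:1001 /var/www/html/storage"]
    volumeMounts:
    - name: app-code
      mountPath: /var/www/html
```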

Kubernetes Persistent Volume Claim mounted with wrong gid

时光毁灭记忆、已成空白 submitted on 2020-02-02 07:05:46
Question: I'm creating a Kubernetes PVC and a Deployment that uses it. The yaml specifies that the uid and gid must be 1000, but when deployed the volume is mounted with different IDs, so I have no write access to it. How can I effectively specify the uid and gid for a PVC? PVC yaml: --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jmdlcbdata annotations: pv.beta.kubernetes.io/gid: "1000" volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000" volume.beta.kubernetes.io/storage-class:
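As a hedged sketch (the excerpt ends before any answer): most provisioners ignore those beta PVC annotations, and the usual way to get a writable mount for uid/gid 1000 is the pod-level securityContext, which makes volumes group-owned by fsGroup:

```yaml
# Illustrative sketch: set ownership via the pod spec instead of PVC
# annotations. fsGroup applies to most volume types at mount time.
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
```

For filesystems that take mount options (e.g. NFS), uid=1000,gid=1000 would instead go in the PersistentVolume's mountOptions.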

How to set different environment variables for Deployment replicas in Kubernetes

旧巷老猫 submitted on 2020-01-04 02:26:26
Question: I have 4 k8s pods, having set the Deployment's replicas to 4. apiVersion: apps/v1 kind: Deployment metadata: ... spec: ... replicas: 4 ... Each pod fetches items from a database and consumes them; the items table has a column class_name. Now I want each pod to get only one class_name's items: for example, pod1 should only get items whose class_name equals class_name_1, and pod2 only items whose class_name equals class_name_2... So I want to pass a different class_name as an environment variable to
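A common pattern for this (an assumption here, since the excerpt is truncated) is to use a StatefulSet instead of a Deployment: its pods get stable ordinal names (myapp-0, myapp-1, ...), so each replica can derive its own class name from its hostname at startup. A minimal sketch of the derivation, with the pod name hard-coded for illustration:

```shell
# Inside the container, the pod name would normally come from
# $(hostname) or the downward API; "myapp-2" is a stand-in here.
POD_NAME="myapp-2"
ORDINAL="${POD_NAME##*-}"                # strip everything up to the last "-"
CLASS_NAME="class_name_$((ORDINAL + 1))" # ordinal 2 -> class_name_3
echo "$CLASS_NAME"                       # -> class_name_3
```

The same derivation can run in the container entrypoint to export CLASS_NAME before starting the app.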

Kubernetes "the server doesn't have a resource type deployments"

好久不见. submitted on 2019-12-25 02:54:05
Question: I'm new to Kubernetes. I couldn't get deployments using kubectl, but I can see all deployments on the Kubernetes dashboard. How can I fix this problem? user@master:~$ kubectl get deployments error: the server doesn't have a resource type "deployments" kubernetes version: 1.12 kubectl version: 1.13 kubectl api-versions: apiregistration.k8s.io/v1 apiregistration.k8s.io/v1beta1 v1 api-resources: user@master:~$ kubectl api-resources NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding
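The api-versions output above lists only the core and aggregation groups, which suggests the apps API group (which serves Deployments) is not being reported to this client. A hedged first diagnostic step, assuming kubectl access, is to compare client/server versions and check whether the apps group is visible at all:

```shell
# Diagnostic sketch: check client/server version skew and whether the
# "apps" API group that serves Deployments is registered.
kubectl version --short
kubectl api-versions | grep apps
kubectl api-resources --api-group=apps
```

If the apps group is missing, the usual suspects are a kubectl/apiserver version mismatch or a broken kubeconfig pointing at the wrong endpoint.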

Nginx proxy on kubernetes

倖福魔咒の submitted on 2019-12-24 06:35:48
Question: I have an nginx deployment in a k8s cluster which proxies my api/ calls like this: server { listen 80; location / { root /usr/share/nginx/html; index index.html index.htm; try_files $uri $uri/ /index.html =404; } location /api { proxy_pass http://backend-dev/api; } } This works most of the time; however, sometimes when the api pods aren't ready, nginx fails with the error: nginx: [emerg] host not found in upstream "backend-dev" in /etc/nginx/conf.d/default.conf:12 After a couple of hours exploring
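A widely used workaround (not confirmed by the truncated excerpt above) is to put the upstream in a variable: nginx then resolves it per request instead of at startup, so a missing Service no longer prevents nginx from booting. The resolver IP below is the typical in-cluster DNS service address and is an assumption about this cluster:

```nginx
location /api {
    # Resolve at request time so nginx starts even when the backend
    # Service does not exist yet. 10.96.0.10 is the usual kube-dns
    # ClusterIP; adjust for your cluster.
    resolver 10.96.0.10 valid=10s;
    set $backend "http://backend-dev";
    # No URI part: the original /api/... path is passed through
    # unchanged, matching the original proxy_pass behavior.
    proxy_pass $backend;
}
```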

Helm V3 - Cannot find the official repo

那年仲夏 submitted on 2019-12-21 03:41:27
Question: I have been trying to install the nginx ingress using Helm version 3: helm install my-ingress stable/nginx-ingress But Helm doesn't seem to be able to find its official stable repo. It gives the message: Error: failed to download "stable/nginx-ingress" (hint: running helm repo update may help) I tried helm repo update, but it doesn't help. I tried listing the repos with helm repo list, but it is empty. I tried to add the stable repo: helm repo add stable https://github.com/helm/charts/tree/master
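Two facts explain this: Helm 3 ships with no repositories preconfigured, and a GitHub tree URL is not a chart repository (it serves HTML, not an index.yaml). The stable charts are published at charts.helm.sh, so the fix is:

```shell
# Add the real stable chart repository, refresh the index, then install.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install my-ingress stable/nginx-ingress
```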

Cannot create a deployment that requests more than 2Gi memory

巧了我就是萌 submitted on 2019-12-12 11:16:51
Question: My deployment's pod was evicted due to memory consumption: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Evicted 1h kubelet, gke-XXX-default-pool-XXX The node was low on resource: memory. Container my-container was using 1700040Ki, which exceeds its request of 0. Normal Killing 1h kubelet, gke-XXX-default-pool-XXX Killing container with id docker://my-container:Need to kill Pod I tried to grant it more memory by adding the following to my deployment yaml: apiVersion: apps
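The eviction message says the container "exceeds its request of 0", i.e. no memory request was set, which makes the pod a prime eviction candidate under node pressure. Since the asker's yaml is cut off, here is a hedged sketch of the resources stanza (values are illustrative, and the container name comes from the event above):

```yaml
# Illustrative sketch: request memory so the scheduler places the pod
# on a node with room, and cap it so usage is bounded.
spec:
  containers:
  - name: my-container
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "4Gi"
```

Note that on GKE the node's allocatable memory must also be large enough to satisfy the request, or the pod will stay unschedulable.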

Relation between preStop hook and terminationGracePeriodSeconds

亡梦爱人 submitted on 2019-12-07 18:55:17
Question: Basically, what I am trying to do is play around with the pod lifecycle and check whether we can do some cleanup/backup, such as copying logs, before the pod terminates. What I need: copy logs/heap dumps from the container to a hostPath/S3 before terminating. What I tried: I used a preStop hook with a bash command to echo a message (just to see if it works!!). Used terminationGracePeriodSeconds with a delay in preStop and toggled them to see how the process works. Ex. keep terminationGracePeriodSeconds: 30 sec (default
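The key relationship being probed here: the preStop hook runs inside the grace period, not in addition to it, so terminationGracePeriodSeconds must exceed the time the hook needs or the container is killed mid-copy. A hedged sketch (paths and names are hypothetical, not from the question):

```yaml
# Illustrative sketch: copy logs in a preStop hook; the 60s grace
# period leaves headroom for the copy plus normal shutdown.
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "cp -r /app/logs /backup/ || true"]
```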