kubernetes-pvc

Kubernetes trouble with StatefulSet and 3 PersistentVolumes

丶灬走出姿态 submitted on 2021-02-08 10:40:34
Question: I'm in the process of creating a StatefulSet based on this yaml that will have 3 replicas. I want each of the 3 pods to connect to a different PersistentVolume. For the persistent volumes I'm using 3 objects that look like this, with only the name changed (pvvolume, pvvolume2, pvvolume3):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind
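
The usual way to give each StatefulSet replica its own volume is a volumeClaimTemplates section, which creates one PVC per pod; below is a minimal sketch, where the StatefulSet name, labels, and image are illustrative and only the standard storage class and 10Gi size come from the question:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                      # hypothetical name, not from the question
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx           # placeholder image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  # One PVC is generated per replica (data-web-0, data-web-1, data-web-2),
  # and each can bind to one of the three pre-created PersistentVolumes.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi

With three PVs of matching size and storage class available, the controller binds each generated claim to a different volume.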

How to avoid override the container directory when using pvc in kubernetes?

孤街醉人 submitted on 2021-02-07 20:57:54
Question: When using a PVC to persist container data, the PVC always seems to override the container's directory, and the original data in that directory is no longer available. What is the reason? Answer 1: This is by design. Kubelet is responsible for preparing the mounts for your container, and they can come from a plethora of different storage backends. At the time of mounting they are empty, and kubelet has no reason to put any content in them. That said, there are ways to achieve what you seem to expect by using
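
One common pattern for this is an init container that copies the image's original directory contents into the (initially empty) volume before the main container starts. A hedged sketch, where the pod name, claim name, image, and paths are all hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: copy-defaults            # hypothetical example, not from the question
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc        # hypothetical claim name
  initContainers:
    # Seeds the empty volume with the files baked into the image, so the
    # main container sees the expected content under its data directory.
    - name: seed
      image: my-app:latest       # same image as the main container
      command: ["sh", "-c", "cp -a /opt/app/data/. /seed/"]
      volumeMounts:
        - name: data
          mountPath: /seed
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: data
          mountPath: /opt/app/data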

How to deploy logstash with persistent volume on kubernetes?

此生再无相见时 submitted on 2021-01-07 06:31:18
Question: I am using GKE to deploy logstash as a StatefulSet with a PVC, and I also need to install an output plugin. When I don't use while true; do sleep 1000; done; in the container's command args, it can't deploy with the PVC successfully and the pod ends up in a CrashLoopBackOff error:

Normal   Created  13s (x2 over 14s)  kubelet  Created container logstash
Normal   Started  13s (x2 over 13s)  kubelet  Started container logstash
Warning  BackOff  11s (x2 over 12s)  kubelet  Back-off restarting failed container

From here I found it can
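
One way around the sleep-loop workaround is to install the plugin and then hand control back to the image's regular entrypoint, so the logstash process itself stays in the foreground and the container does not exit. A hedged sketch of just the container portion of the StatefulSet pod spec; the plugin name, image tag, and entrypoint path (/usr/local/bin/docker-entrypoint in the official image) are assumptions:

      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.10.1
          # Install the output plugin, then exec the image's normal
          # entrypoint so logstash runs as the main process; no sleep
          # loop is needed to keep the container alive.
          command: ["sh", "-c"]
          args: ["bin/logstash-plugin install logstash-output-google_cloud_storage && exec /usr/local/bin/docker-entrypoint"]
          volumeMounts:
            - name: logstash-data        # from the volumeClaimTemplates
              mountPath: /usr/share/logstash/data

Baking the plugin into a custom image ahead of time avoids the install cost and network dependency at every pod start, at the price of maintaining that image.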

Change kubernetes storage class mounted value from another pod

孤者浪人 submitted on 2020-06-29 04:06:45
Question: I want to publish SonarQube with Kubernetes. I did this successfully with the official packages, but I want to use old versions of some plugins as well as some custom plugins. Locally, with docker-compose files, I created a fly-away container that fills the plugins directory (/opt/sonarqube/extensions/plugins) with plugins, and used that volume with the SonarQube container. In conclusion: the SonarQube extensions volume directory is created (or filled) from a different container (which does the job and dies). I want to use the
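
The Kubernetes equivalent of that fly-away container is an init container that shares the extensions volume with SonarQube and exits once the plugins are copied. A hedged sketch, where the plugin image, claim name, and SonarQube tag are illustrative and only the /opt/sonarqube/extensions/plugins path comes from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube                # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      volumes:
        - name: extensions
          persistentVolumeClaim:
            claimName: sonarqube-extensions   # hypothetical PVC
      initContainers:
        # Plays the role of the fly-away container: fills the plugins
        # directory, then exits before SonarQube starts.
        - name: install-plugins
          image: my-registry/sonar-plugins:latest   # hypothetical image holding the plugin jars
          command: ["sh", "-c", "cp /plugins/*.jar /target/"]
          volumeMounts:
            - name: extensions
              mountPath: /target
      containers:
        - name: sonarqube
          image: sonarqube:7.9-community
          volumeMounts:
            - name: extensions
              mountPath: /opt/sonarqube/extensions/plugins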

kubernetes persistent volume and persistent volume claim exceeded storage

放肆的年华 submitted on 2020-05-30 06:42:49
Question: By following the Kubernetes guide I have created a PV, a PVC, and a pod. I have claimed only 10Mi out of the 20Mi PV, yet I have copied 23Mi, which is more than my PV, and my pod is still running. Can anyone explain?

pv-volume.yaml:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

pv-claim.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task
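
For reference, the capacity on a hostPath PV and the request on the PVC are only used to match the claim to a volume at bind time; nothing enforces those numbers when data is written, so exceeding 10Mi (or 20Mi) does not fail the pod. A hedged sketch of a pod using such a claim, with the explanation in comments; the pod and claim names follow the official tutorial this appears to be based on and should be treated as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod              # assumed name, not shown in the question
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim # assumed claim name
  containers:
    - name: task
      image: busybox             # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        # The 10Mi request and 20Mi capacity are matching metadata only;
        # a hostPath volume is just a directory on the node, so writing
        # 23Mi here succeeds as long as the node's disk has free space.
        - name: task-pv-storage
          mountPath: /mnt/data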

glusterfs: failed to get the 'volume file' from server

我是研究僧i submitted on 2020-05-16 08:58:12
Question: I see the below error in the pod logs; the following error information was pulled from the glusterfs log to help diagnose this issue:

[2020-01-10 20:57:47.132637] E [glusterfsd-mgmt.c:1804:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-01-10 20:57:47.132690] E [glusterfsd-mgmt.c:1940:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_32dd7b246275)

I have glusterfs installed on three servers - server 1, 2 and 3. I am using heketi to do dynamic
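
For context, dynamic provisioning through heketi is normally wired up with a StorageClass that points at the heketi REST endpoint; a minimal sketch, where the class name, URL, user, and secret names are placeholders for the values in your own cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage        # illustrative name
provisioner: kubernetes.io/glusterfs
parameters:
  # heketi REST endpoint; replace with the real service address
  resturl: "http://heketi.default.svc.cluster.local:8080"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"    # secret holding the heketi admin key
  volumetype: "replicate:3"      # one replica per gluster server

If the mount fails with "failed to get the 'volume file'", it is worth checking that the gluster volume named in the key (vol_32dd7b246275 above) actually exists on the servers and is reachable from the node doing the mount.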

Kubernetes Persistent Volume Claim mounted with wrong gid

时光毁灭记忆、已成空白 submitted on 2020-02-02 07:05:46
Question: I'm creating a Kubernetes PVC and a Deployment that uses it. The yaml specifies that the uid and gid must be 1000, but when deployed the volume is mounted with different IDs, so I have no write access to it. How can I effectively specify the uid and gid for a PVC? PVC yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jmdlcbdata
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
    volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000"
    volume.beta.kubernetes.io/storage-class:
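
For comparison, the usual way to make a mounted volume writable by a non-root user is the pod-level securityContext rather than PVC annotations: fsGroup asks kubelet to change the group ownership of the volume at mount time (for volume types that support ownership management), and runAsUser/runAsGroup set the process IDs. A minimal sketch, with the Deployment name and image as placeholders and only the claim name (jmdlcbdata) taken from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmdlcb                   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jmdlcb
  template:
    metadata:
      labels:
        app: jmdlcb
    spec:
      # fsGroup makes the mounted volume group-owned by 1000 so the
      # container user can write to it.
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: app
          image: busybox         # placeholder image
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: jmdlcbdata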