persistent-volumes

Cancel or undo deletion of Persistent Volumes in a Kubernetes cluster

Submitted by 为君一笑 on 2020-08-22 04:53:26
Question: I accidentally tried to delete all the PVs in the cluster, but thankfully they still have PVCs bound to them, so all the PVs are stuck in Status: Terminating. How can I get the PVs out of the "Terminating" status and back to a healthy state, where each is "Bound" to its PVC and fully working? The key here is that I don't want to lose any data, and I want to make sure the volumes are functional and not at risk of being terminated if the claim goes away. Here are some details from a kubectl describe on
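A deletion request cannot be unset once issued (the deletionTimestamp stays on the object); it is the kubernetes.io/pv-protection finalizer that holds a bound PV in Terminating. A commonly cited way to make this situation safe, sketched below with <pv-name> as a placeholder, is to first switch the reclaim policy to Retain so the backing disk survives even if the PV object is eventually removed:

    # With Retain, deleting the PV object does not delete the backing storage.
    kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

    # Inspect what is holding the PV in Terminating; a PV pending deletion
    # typically carries the kubernetes.io/pv-protection finalizer.
    kubectl get pv <pv-name> -o jsonpath='{.metadata.finalizers}'

Note that removing the finalizer completes the deletion rather than cancelling it; with Retain in place, the recovery path after an actual deletion is to re-apply the original PV manifest pointing at the same underlying disk.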

How to extract volumeClaimTemplates to a separate PersistentVolumeClaim YAML file?

Submitted by 可紊 on 2020-06-29 04:19:10
Question: Let's say I have a StatefulSet definition:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      ...
      volumeClaimTemplates:
      - metadata:
          name: www
        spec:
          resources:
            requests:
              storage: 1Gi

This will create a PersistentVolumeClaim (PVC) with a PersistentVolume (PV) of 1 GiB for each pod. How can I write something like this

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: www
    spec:
      ...
      resources:
        requests:
          storage: 1Gi

and connect it with the StatefulSet in a way that it still
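A StatefulSet names the claims it creates <template-name>-<statefulset-name>-<ordinal>, so for the definition above pod web-0 would use a PVC named www-web-0. A minimal sketch of pre-creating that claim in its own file so the controller adopts it instead of generating one (accessModes and storageClassName here are assumptions and must match the template):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: www-web-0            # <template>-<statefulset>-<ordinal>
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: standard # assumption; must match the template's class
      resources:
        requests:
          storage: 1Gi

The volumeClaimTemplates section still has to stay in the StatefulSet; the controller only creates a PVC when one with the expected name does not already exist.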

Volume claim on GKE / Multi-Attach error for volume Volume is already exclusively attached

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-06-24 22:24:20
Question: This problem seems to have come up a long time ago, but since the existing answer and comments do not provide real solutions, I would like to get some help from experienced users. The error is the following (when describing the pod, which stays in the ContainerCreating state):

    Multi-Attach error for volume "pvc-xxx" Volume is already exclusively
    attached to one node and can't be attached to another

This all runs on GKE. I had a previous cluster, and the problem never occurred. I have reused the same
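A ReadWriteOnce volume backed by a GCE persistent disk can only be attached to one node at a time, so this error usually means the old node, or a lingering pod on it, has not released the disk yet. A minimal diagnostic sketch, with <old-pod> as a placeholder:

    # See which node still holds the attachment for the PV behind pvc-xxx.
    kubectl get volumeattachment

    # Find any pod, possibly stuck Terminating, that still mounts the claim,
    # and delete it so the attach/detach controller can release the disk.
    kubectl get pods --all-namespaces -o wide
    kubectl delete pod <old-pod> --grace-period=0 --force   # last resort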

Kubernetes TimescaleDB StatefulSet: changes lost on pod recreation

Submitted by 心已入冬 on 2020-06-17 15:12:17
Question: I have a TimescaleDB server running as a StatefulSet in AKS. It appears that when I delete and recreate the timescaledb pod, the changes are lost, even though the pod is associated with the initially associated PV (persistent volume). Any help is appreciated. Below is the PV/PVC config of the StatefulSet, extracted by running kubectl get statefulset timescaledb -o yaml:

    template:
      metadata:
        creationTimestamp: null
        labels:
          app: timescaledb
      spec:
        containers:
        - args:
          - -c
          - config_file=/etc/postgresql/postgresql
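A common cause of this symptom is that the database's data directory does not actually live on the mounted PV, so writes land in the container's ephemeral layer and vanish with the pod. A sketch of what an aligned container spec would look like, assuming a Postgres-based image and a claim template named timescaledb-storage (both names are assumptions here):

    spec:
      containers:
      - name: timescaledb
        env:
        - name: PGDATA                      # data dir must sit under the PV mount
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: timescaledb-storage         # assumed claim template name
          mountPath: /var/lib/postgresql/data

To verify, kubectl exec into the pod and run df -h /var/lib/postgresql/data; it should show the attached disk rather than the node's overlay filesystem.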
