Accidentally tried to delete all PVs in the cluster, but thankfully they still have PVCs bound to them, so all PVs are stuck in Status: Terminating.
How can I get them out of the Terminating state and keep the data?
It is, in fact, possible to save data from a PersistentVolume with Status: Terminating and the reclaim policy (persistentVolumeReclaimPolicy) set to the default (Delete). We did this on GKE; we have not tested AWS or Azure, but I would guess the approach is similar.
We had the same problem, and I will post our solution here in case somebody else runs into this issue.
Your PersistentVolumes will not actually be terminated as long as a pod, a deployment or, to be more specific, a PersistentVolumeClaim is using them.
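You can confirm that this protection is what is holding a volume: the deletionTimestamp is set, the kubernetes.io/pv-protection finalizer is still present, and claimRef shows which PVC still binds it. A minimal check, with a hypothetical PV name:

# Hypothetical PV name; prints the deletion timestamp, the protecting
# finalizers, and the PVC that still binds the volume.
kubectl get pv pvc-0123abcd -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'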
The steps we took to remedy our broken state:
Once you are in a situation like the OP's, the first thing you want to do is create a snapshot of your PersistentVolumes.
In the GKE console, go to Compute Engine -> Disks, find your volume there (use kubectl get pv | grep pvc-name to match the PV to its disk) and create a snapshot of the volume.
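If you prefer the CLI to the console, the same lookup and snapshot can be done from the command line; the PV name below is hypothetical, and name-of-disk / name-of-snapshot are the same placeholders used in the next step:

# Read the GCE disk name backing the PV (the pdName field):
kubectl get pv pvc-0123abcd -o jsonpath='{.spec.gcePersistentDisk.pdName}{"\n"}'
# Snapshot that disk from the CLI instead of the console:
gcloud compute disks snapshot name-of-disk --snapshot-names=name-of-snapshot --zone=your-zone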
Use the snapshot to create a disk: gcloud compute disks create name-of-disk --size=10 --source-snapshot=name-of-snapshot --type=pd-standard --zone=your-zone
At this point, stop the services using the volume and delete the volume and volume claim.
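For example, roughly like this (deployment, claim and volume names are hypothetical placeholders):

# Scale down whatever mounts the volume, then remove the claim and the volume.
kubectl scale deployment my-app --replicas=0 -n my-namespace
kubectl delete pvc my-pvc -n my-namespace
kubectl delete pv name-of-pv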
Recreate the volume manually with the data from the disk:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: name-of-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: name-of-disk
  persistentVolumeReclaimPolicy: Retain
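Save this to a file and apply it (the file name is just an assumption):

kubectl apply -f name-of-pv.yaml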
Now update your volume claim to target that specific volume via volumeName, the last line of the YAML file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
  labels:
    app: my-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: name-of-pv
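Apply the claim and check that it binds to the recreated volume (file name assumed):

kubectl apply -f my-pvc.yaml
kubectl get pv name-of-pv
kubectl get pvc my-pvc -n my-namespace

Both should report STATUS Bound once the claim attaches.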