How to manually recover a PV

误落风尘 2020-12-16 02:45

According to the official docs https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/, with the "Retain" policy a PV can be manually recovered. What does "manually recovered" mean in practice, and how do I do it?

3 Answers
  •  醉梦人生
    2020-12-16 03:39

    As stated in the answer by Tummala Dhanvi, the spec.claimRef section has to be dealt with. Removing the whole spec.claimRef can work if you have only one PV, but it gets very messy if you have multiple PVs to "rescue".

    The first step is to ensure the PV has the Retain reclaim policy before deleting the PVC. You can edit or patch the PV to achieve that:

    • kubectl edit pv pvc-73e9252b-67ed-4350-bed0-7f27c92ce826
      • find the spec.persistentVolumeReclaimPolicy key
      • input Retain for its value
      • save & exit
    • or, in one command kubectl patch pv pvc-73e9252b-67ed-4350-bed0-7f27c92ce826 -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
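
    As a sanity check (a sketch using the same example PV name), you can confirm the policy is now Retain before deleting anything:

    kubectl get pv pvc-73e9252b-67ed-4350-bed0-7f27c92ce826 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
    # should print: Retain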

    Now you can delete the PVC(s) (using Helm or otherwise) and the PV(s) will not be deleted.
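
    For instance (assuming the staging/database claim shown further below), deleting the claim leaves the PV behind in the Released state:

    kubectl delete pvc database -n staging
    kubectl get pv
    # the PV's STATUS will be Released, not deleted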

    To successfully re-mount a PV to the desired pod, you have to edit the PV configuration once again, this time the spec.claimRef section. Do not delete the whole section, though; delete only the resourceVersion and uid keys. The resulting section will then look something like this:

    ...
      capacity:
        storage: 16Gi
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        name: database
        namespace: staging
      nodeAffinity:
    ...
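
    If you prefer not to open an editor, the same cleanup can be done in one command with a JSON patch (a sketch; adjust the PV name to yours):

    kubectl patch pv pvc-73e9252b-67ed-4350-bed0-7f27c92ce826 --type json -p '[{"op":"remove","path":"/spec/claimRef/resourceVersion"},{"op":"remove","path":"/spec/claimRef/uid"}]'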
    

    Repeat this for all of your PVs and their status in the kubectl get pv output will be Available afterwards. By leaving the spec.claimRef.name and spec.claimRef.namespace keys intact, we ensure that a new PVC with the corresponding spec (staging/database in my case) will be bound to exactly the PV it is supposed to bind to.

    Also, make sure your new claim does not request a larger storage capacity than the PV actually has (it seems, though, that the new claim's capacity may be less than the existing PV's). If the new PVC requests more storage, a new PV will be created instead. Best to keep it the same.
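
    For the example above, a minimal matching claim could look like this (a sketch; the access mode is an assumption, and if the PV has a storageClassName set, the claim must specify the same one):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: database
      namespace: staging
    spec:
      accessModes:
        - ReadWriteOnce   # assumption; must be satisfied by the PV's access modes
      resources:
        requests:
          storage: 16Gi   # same as the PV's capacity shown above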

    To digress: if the StorageClass you're using allows volume resizing, you can resize the volume later; how to do that is explained here: https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
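
    For example (a sketch, assuming the StorageClass has allowVolumeExpansion: true and the volume plugin supports expansion; 32Gi is just a placeholder size):

    kubectl patch pvc database -n staging -p '{"spec":{"resources":{"requests":{"storage":"32Gi"}}}}'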

    My experience with this was pretty stressful. I had 6 PVs, thankfully with the Retain policy. For some reason a new deployment rollout got stuck and two pods just would not terminate. In the end I deleted the whole deployment (using Helm), restarted the cluster nodes, and then redeployed anew. This caused 6 new PVs to be created!

    I found this thread and went on to delete the spec.claimRef of all the PVs. Deleting and deploying the installation once again resulted in the PVs being reused, but they were not mounted where they were supposed to be, and the data was not there. After a good amount of digging, I figured out that the database volume was mounted to a RabbitMQ pod, the MongoDB volume to Elasticsearch, and so on.
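
    In hindsight, a quick way to spot such mix-ups before the pods start writing anything is to list which claim each PV currently points at (a sketch):

    kubectl get pv -o custom-columns=NAME:.metadata.name,CLAIM-NS:.spec.claimRef.namespace,CLAIM:.spec.claimRef.name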

    It took me about a dozen attempts to get this right. Luckily, the mixed-up mounting of volumes did not destroy any of the original data; the pods' initialization did not clean out the volumes, it just wrote its own files there.

    Hope this saves someone some serious headaches out there!
