Question
I created a persistent volume using the following YAML
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dq-tools-volume
  labels:
    name: dq-tools-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: volume-class
  nfs:
    server: 192.168.215.83
    path: "/var/nfsshare"
After creating this, I created two PersistentVolumeClaims using the following YAMLs:
PVC1:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-1
  labels:
    name: jenkins-volume-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
PVC2:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-2
  labels:
    name: jenkins-volume-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: volume-class
  selector:
    matchLabels:
      name: dq-tools-volume
But I noticed that both of these PersistentVolumeClaims are writing to the same backend volume.
How can I isolate the data of one PersistentVolumeClaim from another? I am using this for multiple installations of Jenkins, and I want the workspace of each Jenkins to be isolated.
Answer 1:
As @D.T. explained, a persistent volume claim is exclusively bound to a persistent volume.
You cannot bind 2 PVCs to the same PV.
Here you can find another case where it was discussed.
There is a better solution for your scenario, and it involves using nfs-client-provisioner. To achieve that, you first have to install helm in your cluster and then follow these steps that I created for a previous answer on ServerFault.
I've tested it, and with this solution you can isolate one PVC from the other.
1 - Install and configure the NFS Server on my Master Node (Debian Linux; this might change depending on your Linux distribution):
Before installing the NFS Kernel server, we need to update our system’s repository index:
$ sudo apt-get update
Now, run the following command in order to install the NFS Kernel Server on your system:
$ sudo apt install nfs-kernel-server
Create the Export Directory
$ sudo mkdir -p /mnt/nfs_server_files
As we want all clients to access the directory, we will remove restrictive permissions of the export folder through the following commands (this may vary on your set-up according to your security policy):
$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
Assign server access to client(s) through NFS export file
$ sudo nano /etc/exports
Inside this file, add a new line to allow access from other servers to your share.
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.
Export the shared directory and restart the service to make sure all configuration files are correct.
$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
Check all active shares:
$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
2 - Install NFS Client on all my Worker Nodes:
$ sudo apt-get update
$ sudo apt-get install nfs-common
At this point you can make a test to check if you have access to your share from your worker nodes:
$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
Notice that at this point you can use the name of your master node; K8s is taking care of the DNS here. Check that the volume mounted as expected, and create some folders and files to make sure everything is working fine.
$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
Go back to your master node and check if these files are in the /mnt/nfs_server_files folder.
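Before moving on, you can also double-check the export list from the client side and then unmount the test directory on the worker. This is an extra sanity check I'm sketching here, not part of the original steps; it assumes kubemaster resolves to your NFS server:
$ showmount -e kubemaster
Export list for kubemaster:
/mnt/nfs_server_files 10.128.0.0/24
$ sudo umount /mnt/sharedfolder_client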
3 - Install NFS Client Provisioner.
Install the provisioner using helm:
$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
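Note that the command above uses Helm 2 syntax (--name). If your cluster runs Helm 3, that flag no longer exists and the release name is passed as the first argument; a rough, hedged equivalent would be (keep in mind the old stable chart repository has since been deprecated, so you may need another chart source):
$ kubectl create namespace nfs
$ helm install ext stable/nfs-client-provisioner -n nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files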
Notice that I've specified a namespace for it. Check if it's running:
$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
At this point we have a storageclass called nfs-client:
$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
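Optionally (an extra step of mine, not from the original walkthrough), you can mark nfs-client as the default StorageClass so that PVCs which don't specify a class get provisioned through it:
$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'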
We need to create a PersistentVolumeClaim:
$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml
Check the status (Bound is expected):
$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
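As a side note, the volume.beta.kubernetes.io/storage-class annotation used above is the legacy form; on newer clusters you can request the class through spec.storageClassName instead. A hedged equivalent of the same claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi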
4 - Create a simple pod to test if we can read/write our NFS share:
Create a pod using this yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
$ kubectl apply -f pod.yaml
Let's list all mounted volumes on our pod:
$ kubectl exec -ti -n nfs pod0 -- df -h /mnt
Filesystem Size Used Avail Use% Mounted on
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1 99G 11G 84G 11% /mnt
As we can see, we have an NFS volume mounted on /mnt (note the path kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-a2e53b0e-f9bb-4723-ad62-860030fb93b1).
Let's check it:
root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
It's empty. Let's create some files:
$ for i in 1 2; do touch file$i; done;
$ ls -l
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
Now let's see where these files are on our NFS server (master node):
$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
And here are the files we just created inside our pod!
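Applied to your Jenkins scenario, you would simply create one PVC per Jenkins installation against the nfs-client StorageClass, and the provisioner will carve out a separate directory on the NFS export for each claim, keeping the workspaces isolated. A minimal sketch (names and sizes are only examples, not from the original answer):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-1
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-volume-2
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Each claim gets its own subdirectory on the NFS export (like the nfs-test-claim-pvc-... directory above), so the two Jenkins instances never share data.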
Answer 2:
As I understand it, it is not possible to bind two PVCs to the same PV.
Refer to this link > A PVC to PV binding is a one-to-one mapping.
You will possibly need to look into the Dynamic Provisioning option for your setup.
I tested this by creating one PV of 10Gi and two PVCs with 8Gi and 2Gi claim requests; PVC-2 goes into the Pending state.
master $ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv 10Gi RWX Retain Bound default/pv1 7m
master $ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc1 Bound pv 10Gi RWX 3m
pvc2 Pending 8s
The files used for creating the PV and PVCs are below:
master $ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /var/tmp/
master $ cat pvc1.ayml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
master $ cat pvc2.ayml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
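If you prefer to stay with static provisioning instead of dynamic provisioning, the takeaway from the test above is that each PVC needs its own PV. A hedged sketch of a second PV that would let pvc2 bind (the name and hostPath below are just examples):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /var/tmp2/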
Source: https://stackoverflow.com/questions/59630447/how-to-isolate-data-of-one-persistent-volume-claim-from-another