Question
I have an application running in a pod on Kubernetes. I would like to store some output log files on a persistent storage volume.
In order to do that, I created a volume over NFS and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently read-only.
The following is the JSON file I used to create the volume:
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "task-pv-test"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "nfs": {
      "server": <IPAddress>,
      "path": "/export"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  }
}
The following is the pod configuration file:
kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: <ImageName>
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false
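For reference, a quick way to reproduce the failure from inside the running pod (pod name and mount path as above; the test file name is arbitrary):
kubectl exec -it volume-test -- sh -c 'ls -ld /home && touch /home/test.log'
The touch is what fails with the "permission denied" message described above.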
Is there a way to change permissions?
UPDATE
Here are the PVC and NFS configs:
PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-test-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
NFS CONFIG
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nfs-client-provisioner-557b575fbc-hkzfp",
    "generateName": "nfs-client-provisioner-557b575fbc-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
    "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
    "resourceVersion": "27228",
    "creationTimestamp": "2018-04-17T12:26:35Z",
    "labels": {
      "app": "nfs-client-provisioner",
      "pod-template-hash": "1136131967"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "ReplicaSet",
        "name": "nfs-client-provisioner-557b575fbc",
        "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "nfs-client-root",
        "nfs": {
          "server": <IPAddress>,
          "path": "/Kubernetes"
        }
      },
      {
        "name": "nfs-client-provisioner-token-fdd2c",
        "secret": {
          "secretName": "nfs-client-provisioner-token-fdd2c",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "nfs-client-provisioner",
        "image": "quay.io/external_storage/nfs-client-provisioner:latest",
        "env": [
          {
            "name": "PROVISIONER_NAME",
            "value": "<IPAddress>/Kubernetes"
          },
          {
            "name": "NFS_SERVER",
            "value": <IPAddress>
          },
          {
            "name": "NFS_PATH",
            "value": "/Kubernetes"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "nfs-client-root",
            "mountPath": "/persistentvolumes"
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "nfs-client-provisioner",
    "serviceAccount": "nfs-client-provisioner",
    "nodeName": "det-vkube-s02",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Running",
    "hostIP": <IPAddress>,
    "podIP": <IPAddress>,
    "startTime": "2018-04-17T12:26:35Z",
    "qosClass": "BestEffort"
  }
}
I have just removed some status information from the NFS config to make it shorter.
Answer 1:
If you set the proper securityContext for the pod configuration, you can make sure the volume is mounted with the proper permissions.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    fsGroup: 2000
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: demo
      image: example-image
      volumeMounts:
        - name: task-pv-test-storage
          mountPath: /data/demo
In the above example, the storage will be mounted at /data/demo with group ID 2000, which is set by fsGroup. You need to find out the group ID of the user your container runs as: run the container, type id, and look for the gid. To run the container and get the result of id in one step:
docker run --rm -it example-image id
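The output will look something like the following (the user name and IDs are made up for illustration; yours will differ):
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
The number after gid= is the value to use for fsGroup.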
You can read more about pod security context here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Answer 2:
Thanks to 白栋天 for the tip. For instance, if the pod securityContext is set to:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
you would ssh to the NFS host and run
chown 1000:1000 -R /some/nfs/path
If you do not know the user:group, or if many pods will mount it, you can run
chmod 777 -R /some/nfs/path
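To verify, listing the export with numeric IDs should now show ownership matching runAsUser and fsGroup (the path is the placeholder from above):
ls -ln /some/nfs/path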
Answer 3:
A simple way is to get onto the NFS storage and chmod 777 the shared directory, or chown it using the user ID from your volume-test container.
Answer 4:
I'm a little confused about how you're trying to get things done; in any case, if I'm understanding you correctly, try this example:
volumeClaimTemplates:
  - metadata:
      name: data
      namespace: kube-system
      labels:
        k8s-app: something
        monitoring: something
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
And then maybe an init container to do something:
initContainers:
  - name: prometheus-init
    image: /something/bash-alpine:1.5
    command:
      - chown
      - -R
      - 65534:65534
      - /data
    volumeMounts:
      - name: data
        mountPath: /data
or is it the volumeMounts you're missing out on:
volumeMounts:
  - name: config-volume
    mountPath: /etc/config
  - name: data
    mountPath: /data
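Putting the fragments above together, a minimal sketch of a pod that fixes ownership in an init container before the main container starts could look like this (the busybox image and the 65534:65534 owner are assumptions carried over from the snippet above; substitute the uid:gid your application actually runs as):
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  initContainers:
    - name: fix-permissions
      image: busybox            # assumed utility image that provides chown
      command: ["chown", "-R", "65534:65534", "/data"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: volume-test
      image: <ImageName>        # the application image from the question
      volumeMounts:
        - name: data
          mountPath: /data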
My last comment would be to take note of containers: I think you're only allowed to write to /tmp, or was that just for CoreOS? I'd have to look that up.
Answer 5:
Have you checked the permissions of the directory? Make sure read access is available to all.
Source: https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied