PersistentVolumeClaim is not bound: “nfs-pv-provisioning-demo”

Submitted by 北城以北 on 2019-12-21 22:54:47

Question


I am setting up a single-node Kubernetes lab and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Working through the first section (the NFS server part), I executed three commands:

$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml

I ran into a problem where I see the following event:

PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Research done:

https://github.com/kubernetes/kubernetes/issues/43120

https://github.com/kubernetes/examples/pull/30

None of the links above helped me resolve the issue. I have made sure the pod is using image 0.8:

Image:        gcr.io/google_containers/volume-nfs:0.8

Does anyone know what this message means? Any clues or guidance on how to troubleshoot this issue would be very much appreciated. Thank you.

$ docker version

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:49 2017
 OS/Arch:      linux/amd64
 Experimental: false


$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
lab-kube-06   Ready     master    2m        v1.8.3


$ kubectl describe nodes lab-kube-06
Name:               lab-kube-06
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=lab-kube-06
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Thu, 16 Nov 2017 17:30:36 +0000   Thu, 16 Nov 2017 16:51:28 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.6
  Hostname:    lab-kube-06
Capacity:
 cpu:     2
 memory:  8159076Ki
 pods:    110
Allocatable:
 cpu:     2
 memory:  8056676Ki
 pods:    110
System Info:
 Machine ID:                 e198b57826ab4704a6526baea5fa1d06
 System UUID:                05EF54CC-E8C8-874B-A708-BBC7BC140FF2
 Boot ID:                    3d64ad16-5603-42e9-bd34-84f6069ded5f
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   Red Hat Enterprise Linux Server 7.4 (Maipo)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://Unknown
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
ExternalID:                  lab-kube-06
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                   ------------  ----------  ---------------  -------------
  kube-system                etcd-lab-kube-06                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-lab-kube-06             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-lab-kube-06    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-gmdvn              260m (13%)    0 (0%)      110Mi (1%)       170Mi (2%)
  kube-system                kube-proxy-68w8k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-lab-kube-06             100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-7zlbg                        20m (1%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  830m (41%)    0 (0%)      110Mi (1%)       170Mi (2%)
Events:
  Type    Reason                   Age                From                     Message
  ----    ------                   ----               ----                     -------
  Normal  Starting                 39m                kubelet, lab-kube-06     Starting kubelet.
  Normal  NodeAllocatableEnforced  39m                kubelet, lab-kube-06     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    39m (x7 over 39m)  kubelet, lab-kube-06     Node lab-kube-06 status is now: NodeHasNoDiskPressure
  Normal  Starting                 38m                kube-proxy, lab-kube-06  Starting kube-proxy.



$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-provisioning-demo   Pending                                                      14s


$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT     NAME                                        KIND                    SUBOBJECT   TYPE      REASON                    SOURCE                        MESSAGE
18m         18m          1         lab-kube-06.14f79f093119829a                Node                                Normal    Starting                  kubelet, lab-kube-06          Starting kubelet.
18m         18m          8         lab-kube-06.14f79f0931d0eb6e                Node                                Normal    NodeHasSufficientDisk     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientDisk
18m         18m          8         lab-kube-06.14f79f0931d1253e                Node                                Normal    NodeHasSufficientMemory   kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasSufficientMemory
18m         18m          7         lab-kube-06.14f79f0931d131be                Node                                Normal    NodeHasNoDiskPressure     kubelet, lab-kube-06          Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m         18m          1         lab-kube-06.14f79f0932f3f1b0                Node                                Normal    NodeAllocatableEnforced   kubelet, lab-kube-06          Updated Node Allocatable limit across pods
18m         18m          1         lab-kube-06.14f79f122a32282d                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m         17m          1         lab-kube-06.14f79f1cdfc4c3b1                Node                                Normal    Starting                  kube-proxy, lab-kube-06       Starting kube-proxy.
17m         17m          1         lab-kube-06.14f79f1d94ef1c17                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m         14m          1         lab-kube-06.14f79f4b91cf73b3                Node                                Normal    RegisteredNode            controllermanager             Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s         11m          42        nfs-pv-provisioning-demo.14f79f766cf887f2   PersistentVolumeClaim               Normal    FailedBinding             persistentvolume-controller   no persistent volumes available for this claim and no storage class is set
14s         4m           20        nfs-server-kq44h.14f79fd21b9db5f9           Pod                                 Warning   FailedScheduling          default-scheduler             PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m          4m           1         nfs-server.14f79fd21b946027                 ReplicationController               Normal    SuccessfulCreate          replication-controller        Created pod: nfs-server-kq44h

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          16s


$ kubectl get pods

NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          26s


$ kubectl get rc

NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         0         40s


$ kubectl describe pods nfs-server-kq44h

Name:           nfs-server-kq44h
Namespace:      default
Node:           <none>
Labels:         role=nfs-server
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status:         Pending
IP:
Created By:     ReplicationController/nfs-server
Controlled By:  ReplicationController/nfs-server
Containers:
  nfs-server:
    Image:        gcr.io/google_containers/volume-nfs:0.8
    Ports:        2049/TCP, 20048/TCP, 111/TCP
    Environment:  <none>
    Mounts:
      /exports from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  mypvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pv-provisioning-demo
    ReadOnly:   false
  default-token-plgv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-plgv5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  39s (x22 over 5m)  default-scheduler  PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"

Answer 1:


Each Persistent Volume Claim (PVC) needs a Persistent Volume (PV) that it can bind to. In your example, you have only created a PVC, but not the volume itself.
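
You can confirm this yourself: with no PVs defined, the first command below should come back empty, and the second should show the same "no persistent volumes available for this claim and no storage class is set" event that already appears in your kubectl get events output:

$ kubectl get pv
$ kubectl describe pvc nfs-pv-provisioning-demo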

A PV can either be created manually, or automatically by using a StorageClass with a provisioner. Have a look at the docs on static and dynamic provisioning for more information:

There are two ways PVs may be provisioned: statically or dynamically.

Static

A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. [...]

Dynamic

When none of the static PVs the administrator created matches a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a class and the administrator must have created and configured that class in order for dynamic provisioning to occur.
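
To illustrate the dynamic path, here is a minimal sketch of a StorageClass paired with a PVC that requests it. The class name, claim name, and parameters are illustrative; note that the in-tree kubernetes.io/gce-pd provisioner only works on Google Cloud, which is exactly why dynamic provisioning cannot help in a bare-metal lab like yours:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # illustrative name
provisioner: kubernetes.io/gce-pd   # in-tree GCE provisioner; only works on Google Cloud
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-claim                  # illustrative name
spec:
  storageClassName: fast            # requests the class defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi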

In your example, you are creating a storage class provisioner (defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that appears to be tailored for use within Google Cloud, and it will probably not be able to actually create PVs in your lab setup.

You can instead create a persistent volume manually. After the PV is created, the PVC should automatically bind to it and your pods should start. Below is an example of a persistent volume that uses the node's local file system as a volume (which is probably OK for a one-node test setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume          # PV names must be lowercase DNS-1123 names
spec:
  capacity:
    storage: 200Gi           # must be at least as large as the claim's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host      # a directory on the node's local filesystem
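
Save the manifest to a file and create it (the file name here is just an example); once the PV exists, the pending claim should bind within a few seconds, the PVC should show STATUS Bound, and the nfs-server pod should get scheduled:

$ kubectl create -f some-volume.yaml
$ kubectl get pv,pvc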

For a production setup, you'll probably want to choose a different volume type than hostPath, although the volume types available to you will differ greatly depending on the environment you're in (cloud or self-hosted/bare-metal).
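
Since this question is about NFS, a manually created PV backed by an already-running NFS export might look like the sketch below; the server address and export path are placeholders for your environment:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-nfs-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteMany          # NFS supports concurrent mounts from multiple nodes
  nfs:
    server: 10.0.0.100       # placeholder: address of an existing NFS server
    path: /exports           # placeholder: directory exported by that server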



Source: https://stackoverflow.com/questions/47335939/persistentvolumeclaim-is-not-bound-nfs-pv-provisioning-demo
