Question
Dear Kubernetes gurus!
I have spun up a Kubernetes 1.4.1 cluster on manually created AWS hosts using the 'contrib' Ansible playbook (https://github.com/kubernetes/contrib/tree/master/ansible).
My problem is that Kubernetes doesn't attach EBS volumes to the minion hosts. If I define a pod as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka1
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: kafka1
        image: daniilyar/kafka
        ports:
        - containerPort: 9092
          name: clientconnct
          protocol: TCP
        volumeMounts:
        - mountPath: /kafka
          name: storage
      volumes:
      - name: storage
        awsElasticBlockStore:
          volumeID: vol-56676d83
          fsType: ext4
I get the following error in kubelet.log:
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 /var/lib/kubelet/pods/db213783-9477-11e6-8aa9-12f3d1cdf81a/volumes/kubernetes.io~aws-ebs/storage [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 does not exist
The EBS volume stays in the 'Available' state the whole time, so I am sure that Kubernetes doesn't attach the volume to the host at all, and therefore cannot mount it. I am 100% sure that this is an issue in Kubernetes itself and not a permissions issue, because I can attach the same volume to this minion manually just fine:
$ aws ec2 --region us-east-1 attach-volume --volume-id vol-56676d83 --instance-id $(wget -q -O - http://instance-data/latest/meta-data/instance-id) --device /dev/sdc
{
    "AttachTime": "2016-10-18T15:02:41.672Z",
    "InstanceId": "i-603cfb50",
    "VolumeId": "vol-56676d83",
    "State": "attaching",
    "Device": "/dev/sdc"
}
Googling, hacking, and trying older Kubernetes versions didn't help me solve this. Could anyone please point me at what else I could do to understand the problem so I can fix it? Any help is greatly appreciated.
Answer 1:
Nobody at the Kubernetes Slack channels could help me, so after a day of pulling my hair out I found the solution myself:
To get a Kubernetes cluster installed by the 'contrib' Ansible playbook (https://github.com/kubernetes/contrib/tree/master/ansible) to mount EBS volumes properly, you need, in addition to the IAM role setup, to add the --cloud-provider=aws flag across your existing cluster: to all kubelets, the apiserver, and the controller manager.
Without the --cloud-provider=aws flag, Kubernetes gives you the unfriendly 'mount: special device xxx does not exist' error instead of the real cause.
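For reference, here is a minimal sketch of where the flag might go, assuming the /etc/kubernetes sysconfig layout that the contrib playbook lays down (the file names and variable names below are assumptions and may differ in your install):

# /etc/kubernetes/apiserver, on the master
# (file and variable names are assumptions based on the sysconfig layout)
KUBE_API_ARGS="--cloud-provider=aws"

# /etc/kubernetes/controller-manager, on the master
KUBE_CONTROLLER_MANAGER_ARGS="--cloud-provider=aws"

# /etc/kubernetes/kubelet, on every minion
KUBELET_ARGS="--cloud-provider=aws"

# Restart the affected services so the flag takes effect
systemctl restart kube-apiserver kube-controller-manager    # on the master
systemctl restart kubelet                                   # on each minion

The --cloud-provider flag exists on all three binaries in Kubernetes 1.4, so appending it to the existing arguments in each file is enough.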
Answer 2:
With a kubeadm setup, the kubelet configuration is defined in:
/var/lib/kubelet/config.yaml
and
/var/lib/kubelet/kubeadm-flags.env
The issue I had was that the flags were defined in kubeadm-flags.env on the master node, but not on the second node.
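A quick way to spot this kind of drift is to check the flag file on each node; a minimal sketch, assuming the standard kubeadm file location:

# Run on every node: prints the cloud-provider flag if it is set
grep -H 'cloud-provider' /var/lib/kubelet/kubeadm-flags.env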
To resolve this manually, I added the --cloud-provider=aws flag to kubeadm-flags.env on the second node (see the sketch below) and restarted the services, which resolved the issue.
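For illustration, the edited file might look something like this; the pre-existing flags shown are placeholders, and only --cloud-provider=aws is the relevant addition:

# /var/lib/kubelet/kubeadm-flags.env
# (the other flags here are illustrative assumptions; your existing
#  flags will differ and should be kept as-is)
KUBELET_KUBEADM_ARGS="--network-plugin=cni --cloud-provider=aws"

After editing the file, reload systemd and restart the kubelet: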
systemctl daemon-reload && systemctl restart kubelet
Source: https://stackoverflow.com/questions/40109083/kubernetes-mount-special-device-does-not-exist-when-attaching-aws-ebs-volume