I installed CentOS Atomic Host as the operating system for Kubernetes on AWS. Everything works fine, but it seems I missed something: I did not configure a cloud provider.
I can't speak to the ProjectAtomic bits, nor to the `KUBERNETES_PROVIDER` env var, since my experience has been with the CoreOS provisioner. I'll describe my experience and hope it helps you dig a little more into your setup.
Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role, enabling them to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS volumes (or, in the worker's case, attaching an EBS volume to itself). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed.
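As a rough illustration only, here is a minimal sketch of the kind of policy the controller's role needs; treat the exact action list as an assumption and check it against your provisioner:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["ec2:*"], "Resource": ["*"] },
    { "Effect": "Allow", "Action": ["elasticloadbalancing:*"], "Resource": ["*"] }
  ]
}
```

A worker role can be much narrower, e.g. `ec2:Describe*`, `ec2:AttachVolume`, and `ec2:DetachVolume`.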
Fortunately, the answers to that question and to the one I'm about to talk about are both centered on the `apiserver` and the `controller-manager`: their configuration and the logs they output.
Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file which, regrettably, isn't documented anywhere except the source. That `Zone` field is, in my experience, optional (just as the comments say). However, it was seeing `KubernetesClusterTag` that led me to follow that field around in the code to see what it does.
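Concretely, both daemons accept `--cloud-provider` and `--cloud-config` flags; a sketch of what that looks like (the file path is my assumption -- use wherever you keep yours):

```sh
# plus whatever flags you already pass to each daemon
kube-apiserver \
  --cloud-provider=aws \
  --cloud-config=/etc/kubernetes/aws.cfg

kube-controller-manager \
  --cloud-provider=aws \
  --cloud-config=/etc/kubernetes/aws.cfg
```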
If your experience is anything like mine, you'll see in the docker logs of the `controller-manager` a bunch of error messages about how it created the ELB but could not find any subnets to attach to it (that "docker logs" bit presumes, of course, that ProjectAtomic also uses docker to run the Kubernetes daemons).
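Assuming the daemons do run under docker, checking is quick (the container ID below is a placeholder):

```sh
docker ps | grep controller-manager          # find the container ID
docker logs -f <container-id> 2>&1 | grep -iE 'subnet|elb'
```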
Once I attached a Tag named `KubernetesCluster` and set every instance of the Tag to the same string (it can be anything, AFAIK), the `aws_loadbalancer` was able to find the subnet in the VPC, attached the Nodes to the ELB, and everything was cool -- except that, right now, it can only create Internet-facing ELBs. :-(
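If you'd rather tag from the CLI than the console, something like this works (the IDs and cluster string are placeholders; tag the instances and the subnets the ELB should live in):

```sh
aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=my-cluster
```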
Just for clarity: the `aws.cfg` contains a field named `KubernetesClusterTag` that allows you to redefine the Tag name that Kubernetes will look for; without any value in that file, Kubernetes will use the Tag name `KubernetesCluster`.
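For reference, a minimal sketch of that file (the path and values are assumptions; point `--cloud-config` at wherever you keep it):

```ini
# /etc/kubernetes/aws.cfg
[Global]
Zone = us-west-2a
KubernetesClusterTag = my-cluster
```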
I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
- What features does a cloud provider give to Kubernetes?
Some features that I know of: external load balancers and persistent volumes (there's a sketch after this list).
- How do I configure the AWS cloud provider?
There is an environment variable called `KUBERNETES_PROVIDER`, but it seems the env var only matters when people start a k8s cluster. Since you said "everything works fine", I guess you don't need any further configuration to use the features I mentioned above.
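To make the load balancer feature concrete, here's a minimal sketch of a Service that asks the cloud provider for an ELB (all names are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer   # the AWS cloud provider provisions an ELB for this
  selector:
    app: my-web
  ports:
  - port: 80
    targetPort: 8080
```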