kube-controller-manager doesn't start when using “cloud-provider=aws” with kubeadm


I'm trying to use the Kubernetes integration with AWS, but kube-controller-manager doesn't start. (BTW: everything works perfectly without the AWS option.)

Here is what I did:

1 Answer
  • 2021-01-25 19:58

    Check the following points as potential issues:

    • kubelet has the proper cloud provider set: check that /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf contains:

      Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"
      

      If it is missing, add it, then reload systemd and restart the kubelet service, as sketched below.
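
      A minimal sketch of the reload-and-restart step; the final grep just confirms the flag reached the running process:

      # Reload unit files so the new drop-in is picked up
      sudo systemctl daemon-reload
      # Restart kubelet with the new KUBELET_EXTRA_ARGS
      sudo systemctl restart kubelet
      # Verify the flag is present on the running kubelet
      ps aux | grep -e '--cloud-provider=aws'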

    • In /etc/kubernetes/manifests/, check that the following files have the proper configuration:

      • kube-controller-manager.yaml and kube-apiserver.yaml:

        --cloud-provider=aws
        

        If the flag is missing, just add it; the pod will be restarted automatically. A quick check over both manifests is sketched below.
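
        One way to check both files at once (paths assume a default kubeadm layout):

        # -e keeps grep from parsing the pattern as an option
        grep -n -e '--cloud-provider=aws' \
          /etc/kubernetes/manifests/kube-apiserver.yaml \
          /etc/kubernetes/manifests/kube-controller-manager.yaml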

    • Just in case, check that the AWS resources (EC2 instances, etc.) are tagged with the kubernetes cluster tag (taken from your cloud-config.conf) and that the IAM policies are properly set; a quick tag check is sketched below.
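
      A sketch of the tag check, assuming the in-tree AWS provider's usual kubernetes.io/cluster/<cluster-name> tag key and a placeholder instance ID; adjust both to match your cloud-config.conf:

      # See which cluster tag your cloud provider config expects
      cat /etc/kubernetes/cloud-config.conf
      # List kubernetes cluster tags on one of the instances
      aws ec2 describe-tags \
        --filters "Name=resource-id,Values=i-0abc123example" \
                  "Name=key,Values=kubernetes.io/cluster/*"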

    If you could supply the logs requested by Artem in the comments, that could shed more light on the issue.

    Edit

    As requested in a comment, here is a short overview of IAM policy handling:

    • Create a new IAM policy (or edit it appropriately if already created), say k8s-default-policy. The policy below is quite liberal, and you can fine-tune the exact settings to match your security preferences. Pay particular attention to the load balancer section in your case. In the description, put something along the lines of "Allows EC2 instances to call AWS services on your behalf." A CLI sketch for creating the policy follows the JSON.

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
              "arn:aws:s3:::kubernetes-*"
            ]
          },
          {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": "ec2:AttachVolume",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": "ec2:DetachVolume",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": ["ec2:*"],
            "Resource": ["*"]
          },
          {
            "Effect": "Allow",
            "Action": ["elasticloadbalancing:*"],
            "Resource": ["*"]
          }
        ]
      }
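
      A sketch of creating that policy from the CLI, assuming the JSON above was saved as k8s-default-policy.json:

      aws iam create-policy \
        --policy-name k8s-default-policy \
        --policy-document file://k8s-default-policy.json \
        --description "Allows EC2 instances to call AWS services on your behalf."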
      
    • Create a new role (or edit it appropriately if already created) and attach the previous policy to it, say attach k8s-default-policy to k8s-default-role; a CLI sketch follows.
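
      A minimal sketch. First, a standard EC2 trust policy, saved as trust.json (the file name is a placeholder):

      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {"Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole"
        }]
      }

      Then (the account ID in the policy ARN is a placeholder):

      aws iam create-role \
        --role-name k8s-default-role \
        --assume-role-policy-document file://trust.json
      aws iam attach-role-policy \
        --role-name k8s-default-role \
        --policy-arn arn:aws:iam::123456789012:policy/k8s-default-policy
      # EC2 attaches roles through an instance profile
      aws iam create-instance-profile --instance-profile-name k8s-default-role
      aws iam add-role-to-instance-profile \
        --instance-profile-name k8s-default-role \
        --role-name k8s-default-role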

    • Attach the role to the instances that need to handle AWS resources. You can create different roles for the master and for the workers if you need to. In the console: EC2 -> Instances -> (select instance) -> Actions -> Instance Settings -> Attach/Replace IAM Role -> (select the appropriate role). A CLI equivalent is shown below.
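
      The same step from the CLI (the instance ID is a placeholder; the profile name matches the instance profile created above):

      aws ec2 associate-iam-instance-profile \
        --instance-id i-0abc123example \
        --iam-instance-profile Name=k8s-default-role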

    • Also, apart from this, check that all the resources in question are tagged with the kubernetes cluster tag; an example of adding it is sketched below.
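
      A sketch of adding the tag, assuming the kubernetes.io/cluster/<name> convention; the cluster name, value, and resource ID are placeholders:

      aws ec2 create-tags \
        --resources i-0abc123example \
        --tags Key=kubernetes.io/cluster/mycluster,Value=owned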
