DNS problem on AWS EKS when running in private subnets

自闭症患者 2021-02-13 12:50

I have an EKS cluster set up in a VPC. The worker nodes are launched in private subnets. I can successfully deploy pods and services.

However, I'm not able to perform DNS resolution from within the pods.

5 Answers
  •  陌清茗 (OP)
     2021-02-13 13:52

    Re: AWS EKS Kube cluster and internal/private Route53 queries from pods

    Just wanted to post a note on what we needed to do to resolve our issues. Note that YMMV, as everyone has different environments, resolutions, etc.

    Disclaimer: We're using the community Terraform EKS module to deploy/manage the VPCs and the EKS clusters. We didn't need to modify any security groups. We are working with multiple clusters, regions, and VPCs.

    ref: Terraform EKS module

    CoreDNS changes: We have a DNS relay for private internal zones, so we needed to modify the CoreDNS ConfigMap and add in the DNS relay's IP address ...

# forward queries for our internal zones to the internal DNS relay
ec2.internal:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
foo.dev.com:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
foo.stage.com:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
    

    ...
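    If you manage the ConfigMap by hand rather than through Terraform, a minimal sketch of applying the change (assuming the default EKS CoreDNS objects, i.e. the coredns ConfigMap and Deployment in kube-system) is:

kubectl -n kube-system edit configmap coredns
# restart CoreDNS so the pods pick up the new Corefile
kubectl -n kube-system rollout restart deployment coredns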

    VPC DHCP option sets: Update these with the IP of the relay server above, if applicable. This requires creating a new option set, since existing ones cannot be modified in place (see the CLI sketch below).

    Our DHCP options set looks like this:

    ["AmazonProvidedDNS", "10.1.1.245", "169.254.169.253"]
    

    ref: AWS DHCP Option Sets
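    Because option sets are immutable, the swap looks roughly like this with the AWS CLI (a sketch; the dopt-/vpc- IDs are placeholders for your own):

# create a replacement option set that includes the relay IP
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=AmazonProvidedDNS,10.1.1.245,169.254.169.253"
# point the VPC at the new set; instances pick it up on DHCP lease renewal
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0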

    Route 53 updates: Associate every private Route 53 zone with the VPC where the kube cluster resides, i.e. the VPC the pods will make queries from.

    There is also a Terraform resource for that: https://www.terraform.io/docs/providers/aws/r/route53_zone_association.html
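    The same association can be sketched with the AWS CLI (placeholder zone and VPC IDs; one call per zone/VPC pair):

aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0123456789ABCDEF \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0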
