Question
I have two EKS clusters, in two different AWS accounts and, I assume, behind different firewalls (which I don't have access to). The first one (Dev) is all right; however, with the same configuration, pods in the UAT cluster are struggling to resolve DNS. The nodes can resolve names and seem to be fine.
1) ping 8.8.8.8 works
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
2) I can ping the IP of Google (and others), but not the actual DNS names.
Our configuration:
- configured with Terraform.
- The worker node and control plane security groups are the same as the Dev ones. I believe those are fine.
- Added TCP 53 and UDP 53 to the inbound and outbound NACL rules (just to be sure port 53 was really open), and added TCP 53 and UDP 53 outbound from the worker nodes.
- We are using ami-059c6874350e63ca9 with Kubernetes version 1.14.
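For reference, the outbound port-53 rules described above might be expressed in Terraform roughly like this (this is a sketch, not the asker's actual code; the security group name is taken from the answer's snippet below, and the `0.0.0.0/0` destination is an assumption):

```hcl
# Sketch: outbound DNS (UDP) from the worker nodes security group.
# The destination CIDR here is an assumption; restrict it if you can.
resource "aws_security_group_rule" "eks-node-egress-dns-udp" {
  description       = "Allow DNS lookups (UDP) from worker nodes"
  type              = "egress"
  from_port         = 53
  to_port           = 53
  protocol          = "udp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.SG-eks-WorkerNodes.id}"
}

# Same rule for TCP, since resolvers fall back to TCP for large responses.
resource "aws_security_group_rule" "eks-node-egress-dns-tcp" {
  description       = "Allow DNS lookups (TCP) from worker nodes"
  type              = "egress"
  from_port         = 53
  to_port           = 53
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.SG-eks-WorkerNodes.id}"
}
```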
I am unsure whether the problem is a firewall somewhere, CoreDNS, my configuration needing an update, or a "stupid mistake". Any help would be appreciated.
Answer 1:
After days of debugging, here is what the problem was: I had allowed all traffic between the nodes, but that "all traffic" rule covered only TCP, not UDP.
The fix was basically one line in AWS: in the worker nodes security group, add an inbound rule from the worker nodes SG itself on port 53, protocol UDP (DNS).
If you use Terraform, it should look like this:
resource "aws_security_group_rule" "eks-node-ingress-cluster-dns" {
  description              = "Allow pods DNS"
  type                     = "ingress"
  from_port                = 53
  to_port                  = 53
  protocol                 = "udp" # protocol number 17 = UDP
  security_group_id        = "${aws_security_group.SG-eks-WorkerNodes.id}"
  source_security_group_id = "${aws_security_group.SG-eks-WorkerNodes.id}"
}
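One hedged addition: DNS normally runs over UDP, but resolvers fall back to TCP when a response is truncated (large records, DNSSEC, and so on), so a matching TCP rule is usually worth adding. Following the same pattern as the snippet above, it might look like:

```hcl
# Companion rule: allow pod DNS over TCP between worker nodes as well.
resource "aws_security_group_rule" "eks-node-ingress-cluster-dns-tcp" {
  description              = "Allow pods DNS over TCP"
  type                     = "ingress"
  from_port                = 53
  to_port                  = 53
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.SG-eks-WorkerNodes.id}"
  source_security_group_id = "${aws_security_group.SG-eks-WorkerNodes.id}"
}
```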
Source: https://stackoverflow.com/questions/59662585/pods-in-eks-cant-resolve-dns-but-can-ping-ip