Question
Is it possible to use a private DNS server in Kubernetes? For example, an application needs to connect to an external DB by its hostname, and the DNS entry that resolves the hostname to an IP lives on a private DNS server.
My AKS (Azure Kubernetes Service) cluster is running version 1.17, which already uses the newer CoreDNS.
My first try was to use that private DNS server as I would on a VM, by configuring the pods' /etc/resolv.conf via dnsConfig:
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 10.76.xxx.xxx
    - 10.76.xxx.xxx
  searches:
    - az-q.example.com
  options:
    - name: ndots
      value: "2"
Then I tried to adjust the cluster DNS via a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["10.76.xxx.xxx", "10.76.xxx.xxx"]
But the pod runs into an error on every deployment:
$ sudo kubectl logs app-homepage-backend-xxxxx -n ingress-nginx
events.js:174
throw er; // Unhandled 'error' event
^
Error: getaddrinfo ENOTFOUND az-q.example.com az-q.example.com:636
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
What am I missing?
Answer 1:
In order to achieve what you need, I'd go with a dnsPolicy: ClusterFirst definition in the pod manifests, plus a stub zone (private DNS zone) definition in your cluster's DNS subsystem.
To identify the cluster DNS stack, check the pods running in the kube-system namespace (e.g. with kubectl get pods -n kube-system). Most likely you'll find one of two: CoreDNS or kube-dns.
If your cluster DNS runs on CoreDNS, look for this kind of modification in your coredns ConfigMap.
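On AKS specifically, the built-in coredns ConfigMap is reconciled by the platform, so custom server blocks are usually added through a separate coredns-custom ConfigMap instead. A minimal sketch, assuming the private zone is az-q.example.com and the nameservers are the two from the question:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom    # AKS merges *.server keys from this ConfigMap
  namespace: kube-system
data:
  azq.server: |
    # forward the private zone to the private DNS servers
    az-q.example.com:53 {
        errors
        cache 30
        forward . 10.76.xxx.xxx 10.76.xxx.xxx
    }

After applying it, you may need to restart the CoreDNS pods in kube-system so the new server block is loaded.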
If you run on the older kube-dns stack, look for this modification in the kube-dns ConfigMap.
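For kube-dns, the equivalent is the stubDomains field. A minimal sketch under the same assumptions (private zone az-q.example.com, the two nameservers from the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"az-q.example.com": ["10.76.xxx.xxx", "10.76.xxx.xxx"]}

Note the difference from the upstreamNameservers attempt in the question: upstreamNameservers changes where all out-of-cluster names are forwarded, while stubDomains scopes the forwarding to just the private zone.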
It's important to note that if you want this modification to apply to pods running in host network mode (which includes many pods in the kube-system namespace), you need to add a dnsPolicy: ClusterFirstWithHostNet stanza to their manifests.
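For illustration, the relevant part of such a pod spec would look like this (a sketch, not tied to any particular workload; the container is hypothetical):

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # use cluster DNS despite host networking
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]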
Answer 2:
Everything depends on the dnsPolicy you set in your application's deployment configuration file.
When a Pod's dnsPolicy is set to "Default",
it inherits the name resolution configuration from the node that the Pod runs on, so the Pod's DNS resolution behaves the same as the node's.
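For example, a minimal sketch of a pod that inherits the node's DNS configuration (all names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: node-dns-test
spec:
  dnsPolicy: Default    # inherit /etc/resolv.conf from the node
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]

There are a few known issues with this inheritance: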
1. Many Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). systemd-resolved moves and replaces /etc/resolv.conf with a stub file, which can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's --resolv-conf flag to point to the correct resolv.conf (with systemd-resolved, this is /run/systemd/resolve/resolv.conf); see the sketch after this list. kubeadm (>= 1.11) automatically detects systemd-resolved and adjusts the kubelet flags accordingly. Kubernetes installs do not configure the nodes' resolv.conf files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.
2. Linux's libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq, which will provide more nameserver entries but not more search entries. You can also use kubelet's --resolv-conf flag.
3. Make sure you are not using Alpine version 3.3 or earlier as your base image; DNS may not work properly on those versions.
Please take a look here: dns-kubernetes-known-issues.
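As mentioned in point 1, the kubelet can be pointed at the real resolv.conf. A minimal sketch of the equivalent KubeletConfiguration setting, assuming a systemd-resolved node (note that on a managed service like AKS you typically cannot change kubelet flags yourself):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf   # same effect as --resolv-conf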
Answer 3:
What is the dnsPolicy set to in the deployment configuration for the application? According to this doc:
Custom upstream nameservers and stub domains do not affect Pods with a dnsPolicy set to "Default" or "None". If a Pod's dnsPolicy is set to "ClusterFirst", its name resolution is handled differently, depending on whether stub-domain and upstream DNS servers are configured.
See the example in that doc for what happens with each of the custom configurations.
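In other words, for the stub domain from the other answers to take effect, the deployment should use ClusterFirst. A minimal sketch (labels and image are hypothetical; the deployment name is taken from the question's log output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-homepage-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homepage-backend
  template:
    metadata:
      labels:
        app: homepage-backend
    spec:
      dnsPolicy: ClusterFirst   # let cluster DNS (with the stub zone) resolve names
      containers:
        - name: backend
          image: node:12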
Source: https://stackoverflow.com/questions/59991592/kubernetes-use-private-dns