Is there a way to add arbitrary records to kube-dns?

遥遥无期 2020-12-28 21:27

I will explain my problem in a very specific way, but I think being specific is better than explaining it abstractly...

Say, there is a MongoDB replica set running outside of Kubernetes, and I want pods inside the cluster to resolve its members by hostname. Is there a way to add such records to the cluster DNS?

5 Answers
  • 2020-12-28 22:03

    A Service of type ExternalName is required to access hosts or IPs outside of Kubernetes.

    The following worked for me.

    {
        "kind": "Service",
        "apiVersion": "v1",
        "metadata": {
            "name": "tiny-server-5",
            "namespace": "default"
        },
        "spec": {
            "type": "ExternalName",
            "externalName": "192.168.1.15",
            "ports": [{ "port": 80 }]
        }
    }
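
    If it helps, here is one way to sanity-check the record (a sketch; the file name and the throwaway busybox pod are just assumptions, and note that ExternalName creates a CNAME record, which some clients handle better than others when it points at an IP literal):

    kubectl apply -f tiny-server-5.json
    # Resolve the new name from a temporary pod; the answer should point at 192.168.1.15
    kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup tiny-server-5.default.svc.cluster.local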
    
  • 2020-12-28 22:07

    There are two possible solutions for this problem now:

    1. Pod-wise (adding the entries to every pod that needs to resolve these domains)
    2. Cluster-wise (adding the entries to a central place that all pods have access to, which in our case is the DNS)

    Let's begin with the pod-wise solution:

    As of Kubernetes 1.7, it is now possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases.

    For example: to resolve foo.local, bar.local to 127.0.0.1 and foo.remote, bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-pod
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      - ip: "10.1.2.3"
        hostnames:
        - "foo.remote"
        - "bar.remote"
      containers:
      - name: cat-hosts
        image: busybox
        command:
        - cat
        args:
        - "/etc/hosts"
    

    The Cluster-wise solution:

    As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. If your cluster originally used kube-dns, you may still have kube-dns deployed rather than CoreDNS. I'm going to assume that you're using CoreDNS as your K8S DNS.

    In CoreDNS it's possible to add arbitrary entries inside the cluster domain, and that way all pods will resolve these entries directly from the DNS, without the need to change each and every /etc/hosts file in every pod.

    First:

    Let's change the coredns ConfigMap and add the required changes:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health {
              lameduck 5s
            }
            hosts /etc/coredns/customdomains.db example.org {
              fallthrough
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
      customdomains.db: |
        10.10.1.1 mongo-en-1.example.org
        10.10.1.2 mongo-en-2.example.org
        10.10.1.3 mongo-en-3.example.org
        10.10.1.4 mongo-en-4.example.org
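
    One way to apply this (the file name below is just an example; alternatively, edit the live ConfigMap in place):

    kubectl -n kube-system apply -f coredns-configmap.yaml
    # or:
    kubectl -n kube-system edit configmap coredns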
    

    Basically we added two things:

    1. We added the hosts plugin before the kubernetes plugin and used the fallthrough option of the hosts plugin to satisfy our case.

      To shed some more light on the fallthrough option: any given backend is usually the final word for its zone - it either returns a result, or it returns NXDOMAIN for the query. However, occasionally this is not the desired behavior, so some of the plugins support a fallthrough option. When fallthrough is enabled, instead of returning NXDOMAIN when a record is not found, the plugin will pass the request down the chain. A backend further down the chain then has the opportunity to handle the request, and that backend in our case is kubernetes.

    2. We added a new file to the ConfigMap (customdomains.db) and added our custom domains (mongo-en-*.example.org) in there.

    The last thing is to remember to add the customdomains.db file to the config-volume of the CoreDNS pod template:

    kubectl edit -n kube-system deployment coredns
    
    volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
                - key: customdomains.db
                  path: customdomains.db
    

    and finally, signal CoreDNS to gracefully reload (do this for each running CoreDNS pod):

    $ kubectl -n kube-system exec coredns-461002909-7mp96 -- kill -SIGUSR1 1
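
    To verify end to end, try resolving one of the custom names from a throwaway pod (a sketch, using the mongo-en-1.example.org entry from the ConfigMap above; the busybox test pod is just an assumption):

    kubectl run dns-check --rm -it --restart=Never --image=busybox -- nslookup mongo-en-1.example.org
    # Expected: the answer contains 10.10.1.1, served by CoreDNS via the hosts plugin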
    
  • 2020-12-28 22:09

    UPDATE 2017-07-03: Kubernetes 1.7 now supports adding entries to a Pod's /etc/hosts with HostAliases.


    This solution is not about kube-dns, but /etc/hosts. Anyway, the following trick seems to work so far...

    EDIT: Changing /etc/hosts may have a race condition with the Kubernetes system, so let it retry.

    1) Create a ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: db-hosts
    data:
      hosts: |
        10.0.0.1  db1
        10.0.0.2  db2
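
    Assuming the manifest is saved to a file (the file name below is just an example), create it with:

    kubectl apply -f db-hosts-configmap.yaml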
    

    2) Add a script named ensure_hosts.sh.

    #!/bin/sh
    # Periodically re-append the ConfigMap entries if they are missing from /etc/hosts,
    # using one of the hostnames (db1) as a marker.
    while true
    do
        grep -q db1 /etc/hosts || cat /mnt/hosts.append/hosts >> /etc/hosts
        sleep 5
    done
    

    Don't forget chmod a+x ensure_hosts.sh.

    3) Add a wrapper script start.sh to your image.

    #!/bin/sh
    # Keep ensure_hosts.sh running in the background, then replace the shell with the real application
    $(dirname "$(realpath "$0")")/ensure_hosts.sh &
    exec your-app args...
    

    Don't forget chmod a+x start.sh

    4) Use the ConfigMap as a volume and run start.sh

    apiVersion: extensions/v1beta1
    kind: Deployment
    ...
    spec:
      template:
        ...
        spec:
          volumes:
          - name: hosts-volume
            configMap:
              name: db-hosts
          ...
          containers:
          - command:
            - ./start.sh
            ...
            volumeMounts:
            - name: hosts-volume
              mountPath: /mnt/hosts.append
            ...
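
    A quick way to confirm the entries end up in /etc/hosts (a sketch; <pod-name> is whichever pod the Deployment created):

    kubectl get pods
    kubectl exec <pod-name> -- cat /etc/hosts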
    
  • 2020-12-28 22:20

    Using a ConfigMap seems like a better way to set DNS, but it's a little bit heavy when you only need to add a few records (in my opinion). So I add records to /etc/hosts with a shell script executed by the Docker CMD.

    for example:

    Dockerfile

    ...(ignore)
    COPY run.sh /tmp/run.sh
    CMD bash /tmp/run.sh
    

    run.sh

    #!/bin/bash
    # /etc/hosts entries are "IP  hostname"
    echo "192.168.10.100 repl1.mongo.local" >> /etc/hosts
    # some other commands...
    

    Notice: if you run MORE THAN ONE container in a pod, you have to add the script to each container, because Kubernetes starts containers in a random order and /etc/hosts may be overridden by another container (one that starts later).

  • 2020-12-28 22:23

    For the record, here is an alternate solution for those who did not check the referenced GitHub issue.

    You can define an "external" Service in Kubernetes by not specifying any selector or ClusterIP. You also have to define a corresponding Endpoints object pointing to your external IP.

    From the Kubernetes documentation:

    {
        "kind": "Service",
        "apiVersion": "v1",
        "metadata": {
            "name": "my-service"
        },
        "spec": {
            "ports": [
                {
                    "protocol": "TCP",
                    "port": 80,
                    "targetPort": 9376
                }
            ]
        }
    }
    {
        "kind": "Endpoints",
        "apiVersion": "v1",
        "metadata": {
            "name": "my-service"
        },
        "subsets": [
            {
                "addresses": [
                    { "ip": "1.2.3.4" }
                ],
                "ports": [
                    { "port": 9376 }
                ]
            }
        ]
    }
    

    With this, you can point your app inside the containers to my-service:80 (the Service port), and the traffic should be forwarded to 1.2.3.4:9376 (the endpoint port).
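
    A quick sanity check from inside the cluster (a sketch; the file names and the throwaway busybox pod are just assumptions):

    kubectl apply -f my-service.json -f my-service-endpoints.json
    # The name should resolve to the Service's cluster IP; traffic to port 80 is forwarded to 1.2.3.4:9376
    kubectl run svc-test --rm -it --restart=Never --image=busybox -- nslookup my-service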

    Limitations:

    • The DNS name used needs to be only letters, numbers or dashes. You can't use multi-level names (something.like.this). This means you probably have to modify your app to point just to your-service, and not yourservice.domain.tld.
    • You can only point to a specific IP, not a DNS name. For that, you can define a kind of DNS alias with an ExternalName-type Service.
    0 讨论(0)