Question
I used the skydns-rc.yaml.base file (kubernetes-release-1.3/cluster/addons/dns/sky..) to create the k8s DNS service, but the kubedns container always fails to be created.
The following substitutions were made:
- namespace: kube-system replaced by namespace: default
- __PILLAR__DNS__REPLICAS__ replaced by 1
- __PILLAR__DNS__DOMAIN__ replaced by cluster.local
- __PILLAR__FEDERATIONS__DOMAIN__MAP__ deleted
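For reference, edits like these can be scripted instead of made by hand, which avoids missing an occurrence. A minimal sketch (the file names and the sample excerpt are illustrative, not from the original post):

```shell
# Stand-in excerpt for skydns-rc.yaml.base (illustrative).
cat > skydns-rc.yaml.base <<'EOF'
  namespace: kube-system
  replicas: __PILLAR__DNS__REPLICAS__
    - --domain=__PILLAR__DNS__DOMAIN__.
    __PILLAR__FEDERATIONS__DOMAIN__MAP__
    - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null
EOF

# Apply all four substitutions in one pass; note the /g flag on the
# domain rule so every occurrence is replaced, including the one
# inside the healthz -cmd argument.
sed -e 's/namespace: kube-system/namespace: default/' \
    -e 's/__PILLAR__DNS__REPLICAS__/1/' \
    -e 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' \
    -e '/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d' \
    skydns-rc.yaml.base > skydns-rc.yaml

cat skydns-rc.yaml
```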
The edited file is shown in full below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v18
  namespace: default
  labels:
    k8s-app: kube-dns
    version: v18
    kubernetes.io/cluster-service: "true"
spec:
  replicas: __PILLAR__DNS__REPLICAS__
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v18
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.6
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=__PILLAR__DNS__DOMAIN__.
        - --dns-port=10053
        __PILLAR__FEDERATIONS__DOMAIN__MAP__
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
Is there any problem with the above information?
Other info:
$ kubectl describe pod kube-dns-v18-u7jgt
Name: kube-dns-v18-u7jgt
Namespace: default
Node: centos-cjw-minion1/10.139.4.195
Start Time: Mon, 18 Jul 2016 19:31:48 +0800
Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v18
Status: Running
IP: 172.17.0.4
Controllers: ReplicationController/kube-dns-v18
Containers:
  kubedns:
    Container ID:   docker://5f97e1d7185e327ac3cd5415c79b1b51da1987d8946fb243ee1758cdc4d53d29
    Image:          iaasfree/kubedns-amd64:1.5
    Image ID:       docker://sha256:a1490b272781a9921ba216778e741943e9b866114dae7e7e8980daebbc5ba7ed
    Ports:          10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
    QoS Tier:
      memory:       Burstable
      cpu:          Guaranteed
    Limits:
      cpu:          100m
      memory:       200Mi
    Requests:
      cpu:          100m
      memory:       100Mi
    State:          Running
      Started:      Mon, 18 Jul 2016 19:36:02 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 18 Jul 2016 19:34:52 +0800
      Finished:     Mon, 18 Jul 2016 19:35:59 +0800
    Ready:          False
    Restart Count:  3
    Liveness:       http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:      http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment Variables:
  dnsmasq:
    Container ID:   docker://75ef5bc18dfe196438956c42f64a2e2d6fd408329408704f32534ce7b9252663
    Image:          iaasfree/kube-dnsmasq-amd64:1.3
    Image ID:       docker://sha256:8cb0646c9e984cf510ca70704154bee2f2c51cfb2e776f4357c52c1d17c2b741
    Ports:          53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Running
      Started:      Mon, 18 Jul 2016 19:31:55 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables:
  healthz:
    Container ID:   docker://e11626508ecd5b2cfae3e1eaa3284d75dae4160c113d7f28ce97cbd0185f032d
    Image:          iaasfree/exechealthz-amd64:1.0
    Image ID:       docker://sha256:f3b98b5b347af3254c82e3a0090cd324daf703970f3bb62ba8005020ddf5a156
    Port:           8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    QoS Tier:
      cpu:          Guaranteed
      memory:       Guaranteed
    Limits:
      memory:       20Mi
      cpu:          10m
    Requests:
      cpu:          10m
      memory:       20Mi
    State:          Running
      Started:      Mon, 18 Jul 2016 19:32:12 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type    Status
  Ready   False
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned kube-dns-v18-u7jgt to centos-cjw-minion1
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Created Created container with docker id 5814904f6e09
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Pulled Container image "iaasfree/kube-dnsmasq-amd64:1.3" already present on machine
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Started Started container with docker id 5814904f6e09
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Created Created container with docker id 75ef5bc18dfe
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Started Started container with docker id 75ef5bc18dfe
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Pulled Container image "iaasfree/exechealthz-amd64:1.0" already present on machine
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Created Created container with docker id e11626508ecd
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Started Started container with docker id e11626508ecd
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Killing Killing container with docker id 5814904f6e09: pod "kube-dns-v18-u7jgt_default(370b6791-4cdb-11e6-80f0-fa163ebb45ec)" container "kubedns" is unhealthy, it will be killed and re-created.
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Created Created container with docker id 32945bc72e9b
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Started Started container with docker id 32945bc72e9b
2m 2m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Killing Killing container with docker id 32945bc72e9b: pod "kube-dns-v18-u7jgt_default(370b6791-4cdb-11e6-80f0-fa163ebb45ec)" container "kubedns" is unhealthy, it will be killed and re-created.
Answer 1:
The "container \"kubedns\" is unhealthy" message means the pod is failing its specified health check. Did you also change __PILLAR__DNS__DOMAIN__ in the health-check command?
-cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1:10053 >/dev/null
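A quick way to catch a missed substitution like this is to grep the edited manifest for leftover placeholders before creating the controller. A sketch (the file content here is a stand-in reproducing the healthz line from the question, still carrying its placeholder):

```shell
# Recreate the healthz args line as posted in the question, with the
# placeholder still present, to show what the check catches (illustrative).
cat > skydns-rc.yaml <<'EOF'
    - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null
EOF

# Print every line where a __PILLAR__ placeholder survived the manual edits.
if grep -n '__PILLAR__' skydns-rc.yaml; then
    echo "unsubstituted placeholders remain" >&2
fi
```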
Answer 2:
This happens because your DNS containers cannot contact the Kubernetes API server on the master. If you edit the YAML file to include the following extra argument, replacing __KUBE_MASTER_URL__ with the correct value for your cluster (something like http://10.1.2.3:8080), it should work:
args:
# command = "/kube-dns"
- --domain=__PILLAR__DNS__DOMAIN__.
- --dns-port=10053
- --kube-master-url=__KUBE_MASTER_URL__
__PILLAR__FEDERATIONS__DOMAIN__MAP__
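If you would rather patch the argument in with a script than edit by hand, an insertion along these lines works with GNU sed (the file content and the apiserver address http://10.1.2.3:8080 are illustrative stand-ins for your own):

```shell
# Minimal stand-in for the kubedns args block (illustrative).
cat > skydns-rc.yaml <<'EOF'
        args:
        - --domain=cluster.local.
        - --dns-port=10053
EOF

# Append the --kube-master-url flag right after the existing --dns-port
# argument; 10.1.2.3:8080 stands in for your master's real address.
sed -i 's|- --dns-port=10053|&\n        - --kube-master-url=http://10.1.2.3:8080|' skydns-rc.yaml

cat skydns-rc.yaml
```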
Source: https://stackoverflow.com/questions/38448257/kubedns-container-failed-to-be-created-with-the-skydns-rc-yaml-base-file