Question
I set up my cluster with kubeadm. At the last step I ran kubeadm init --config kubeadm.conf --v=5
and got an error about the clusterIP value. Here is the relevant part of the output:
I0220 00:16:27.625920 31630 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0220 00:16:27.947941 31630 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0220 00:16:27.949398 31630 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons]: Migrating CoreDNS Corefile
I0220 00:16:28.447420 31630 dns.go:381] the CoreDNS configuration has been migrated and applied: .:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
.
I0220 00:16:28.447465 31630 dns.go:382] the old migration has been saved in the CoreDNS ConfigMap under the name [Corefile-backup]
I0220 00:16:28.447486 31630 dns.go:383] The changes in the new CoreDNS Configuration are as follows:
Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable
unable to create/update the DNS service
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createDNSService
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:323
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:305
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
And my config file looks like this:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.5.151
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master02
#  taints:
#  - effect: NoSchedule
#    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - "172.16.5.150"
  - "172.16.5.151"
  - "172.16.5.152"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://172.16.5.150:2379"
    - "https://172.16.5.151:2379"
    - "https://172.16.5.152:2379"
    caFile: /etc/k8s/pki/ca.pem
    certFile: /etc/k8s/pki/etcd.pem
    keyFile: /etc/k8s/pki/etcd.key
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16
  podSubnet: 192.168.0.0/16
scheduler: {}
I checked the kube-apiserver.yaml generated by kubeadm. The --service-cluster-ip-range=10.10.0.0/16 setting does contain 10.10.0.10, as you can see below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.5.151
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/k8s/pki/ca.pem
    - --etcd-certfile=/etc/k8s/pki/etcd.pem
    - --etcd-keyfile=/etc/k8s/pki/etcd.key
    - --etcd-servers=https://172.16.5.150:2379,https://172.16.5.151:2379,https://172.16.5.152:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.10.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.16.5.151
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/k8s/pki
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/k8s/pki
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
As you can see above, the service IP range has been set to 10.10.0.0/16 everywhere. Strangely, when I run "kubectl get svc", the kubernetes service's ClusterIP is 10.96.0.1:
[root@master02 manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d3h
which means the default service IP range (10.96.0.0/12) is still in effect, and my modification did not take hold. Does anyone know how to customize the service IP range, and how to solve my problem?
Answer 1:
Posting this answer as community wiki to expand on it and explain the root cause.
When kubeadm is initialized without any flags ($ kubeadm init), it creates the cluster with default values. The Kubernetes docs list the flags that can be specified during initialization and their defaults:
--service-cidr string   Default: "10.96.0.0/12"   Use alternative range of IP address for service VIPs.
That is why the default kubernetes Service used 10.96.0.1 as its ClusterIP.
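For example, the same 10.10.0.0/16 range could be set without a config file by passing the documented flags directly (a minimal sketch; --pod-network-cidr is shown only for completeness):

# Override the default 10.96.0.0/12 service CIDR at init time
kubeadm init --service-cidr=10.10.0.0/16 --pod-network-cidr=192.168.0.0/16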
Here the OP also wanted to use their own config file:

--config string   Path to a kubeadm configuration file.
The whole initialization workflow can be found here.
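A minimal sketch of the config-file route the OP took; networking.serviceSubnet is the field that maps to the API server's --service-cluster-ip-range, and the version and subnets below are simply copied from the question:

# Write a minimal ClusterConfiguration and initialize from it
cat <<EOF > kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16
  podSubnet: 192.168.0.0/16
EOF
kubeadm init --config kubeadm.conf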
As the Kubernetes docs explain, kubeadm reset performs a best-effort revert of changes made by kubeadm init or kubeadm join.
Depending on the configuration, some state may remain on the cluster.
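A sketch of the reset step; the kubeadm docs note that iptables and IPVS state must be cleaned up manually afterwards:

kubeadm reset
# kubeadm reset does not flush iptables or IPVS rules; per the docs:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear   # only needed if kube-proxy ran in IPVS mode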
The issue the OP encountered is mentioned there under "External etcd clean up":
kubeadm reset will not delete any etcd data if external etcd is used. This means that if you run kubeadm init again using the same etcd endpoints, you will see state from previous clusters.
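So after a reset against external etcd, the stale state has to be removed by hand. A hedged sketch using etcdctl v3, reusing the endpoints and certificate paths from the question; note that this deletes every key, so it is only safe when etcd serves nothing but the old test cluster:

# DANGER: wipes ALL keys in the external etcd cluster
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.5.150:2379,https://172.16.5.151:2379,https://172.16.5.152:2379 \
  --cacert=/etc/k8s/pki/ca.pem \
  --cert=/etc/k8s/pki/etcd.pem \
  --key=/etc/k8s/pki/etcd.key \
  del "" --from-key=true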
Regarding the immutable field error, Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable:
in Kubernetes, some fields are protected against changes that might disrupt the operation of the cluster. If a field is immutable but has to be changed, the object must be deleted and recreated.
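Applied to this case, a sketch (assuming the API server is still reachable) would be to delete the stale Service and let kubeadm's addon phase recreate it inside the new serviceSubnet:

kubectl -n kube-system delete service kube-dns
# Recreate the DNS addon; the clusterIP is re-assigned from the new subnet
kubeadm init phase addon coredns --config kubeadm.conf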
Answer 2:
This node had previously joined the cluster as a worker node. Later I reset it with the "kubeadm reset" command, and after the reset I joined it to the cluster in the master role. That is why I got the error in my question above: the ClusterIP range from before the reset was already recorded in the etcd cluster, and "kubeadm reset" does not clean up the data in etcd, so the new ClusterIP definition conflicted with the original one. The solution is to clean up the data in etcd and run the initialization again. (Since the cluster I built is a test cluster, I wiped etcd directly; please be careful in a production environment.)
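In other words, the recovery sequence was roughly the following (test clusters only, since wiping etcd destroys all cluster state):

kubeadm reset
# wipe the stale state from the external etcd, e.g. with the etcdctl sketch in Answer 1
kubeadm init --config kubeadm.conf --v=5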
Source: https://stackoverflow.com/questions/60305724/service-kube-dns-is-invalid-spec-clusterip-invalid-value-10-10-0-10-fiel