Preface:
I have recently been working on collecting container logs with ELK in a Kubernetes cluster. While deploying ELK, I found that Kibana could not connect to Elasticsearch, with errors like the following:
{"type":"log","@timestamp":"2020-06-30T07:09:33Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-06-30T07:09:36Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch-0.elasticsearch.kube-system:9200/"}
{"type":"log","@timestamp":"2020-06-30T07:09:36Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-06-30T07:09:36Z","tags":["license","warning","xpack"],"pid":6,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
{"type":"log","@timestamp":"2020-06-30T07:09:39Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch-0.elasticsearch.kube-system:9200/"}
{"type":"log","@timestamp":"2020-06-30T07:09:39Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2020-06-30T07:09:39Z","tags":["warning","task_manager"],"pid":6,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2020-06-30T07:09:41Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch-0.elasticsearch.kube-system:9200/"}
The errors say Kibana cannot reach Elasticsearch (in Kubernetes the address is built from the Pod name, Service name, and namespace: elasticsearch-0.elasticsearch.kube-system).
I started a busybox pod to run a DNS test and found that it could not resolve any Service at all. The DNS Pod's status, events, and logs showed nothing abnormal, and so the Kubernetes troubleshooting loop began.
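The DNS check described above can be sketched roughly as follows. This is a minimal sketch, not the author's exact commands; the Service names come from this post's cluster, and `KUBECTL` can be overridden (e.g. with `echo`) for a dry run. `busybox:1.28` is pinned because `nslookup` in some newer busybox images is known to be unreliable.

```shell
KUBECTL=${KUBECTL:-kubectl}

# Resolve a name from inside the cluster using a throwaway busybox pod.
# The pod is removed automatically after the lookup (--rm).
dns_test() {
  $KUBECTL run dns-test --rm -i --restart=Never --image=busybox:1.28 -- \
    nslookup "$1"
}

# Usage against a real cluster:
#   dns_test elasticsearch.kube-system                    # the Service
#   dns_test elasticsearch-0.elasticsearch.kube-system    # the StatefulSet pod
```

If the Service name fails to resolve while the CoreDNS pods themselves look healthy, the problem is usually elsewhere in the control plane, which is exactly what the rest of this post walks through.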
I. The apiserver log showed errors, as in the screenshot below:
This error usually points to a certificate problem: the server time/time zone being out of sync, or the certificate's hosts list missing an IP (the certificate files had not been touched since the cluster was first deployed successfully). So I tried regenerating the SSL certificate files and uploading them to the servers. Not only did this fail to solve the problem, it made things worse.
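Before regenerating anything, it is worth checking which hosts/IPs the existing apiserver certificate actually covers. The sketch below generates a throwaway self-signed certificate just to demonstrate the inspection command; against a real cluster you would point `openssl x509` at the apiserver's certificate file (its path is deployment-specific). The SANs used here are illustrative assumptions.

```shell
workdir=$(mktemp -d)

# Generate a throwaway cert with SANs, standing in for the apiserver cert.
# (Requires OpenSSL 1.1.1+ for -addext.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,IP:10.96.0.1,IP:192.168.1.10"

# List the Subject Alternative Names. Every address clients use to reach
# the apiserver must appear here, or TLS connections will fail.
openssl x509 -in "$workdir/cert.pem" -noout -text \
  | grep -A1 "Subject Alternative Name"
```

Also confirm the server clock is correct (e.g. with `date` or `timedatectl`), since certificate validation is time-sensitive.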
kubectl get nodes showed the node status flip from Ready to NotReady (my composure started to slip).
Checking each component's logs, every one of them was failing to connect to kube-apiserver; the kube-apiserver error is shown in the screenshot below:
This was also a certificate problem. I regenerated the certificates once more and uploaded them to the servers. The error above went away, but a new problem appeared.
The apiserver error is shown in the screenshot below:
The kubelet error is shown in the screenshot below:
Solution:
1. Check the kubelet configuration.
2. Deploy a fresh node on a clean machine (configured the same as k8s-node2); kubectl get nodes then showed a k8s-node3 node.
3. On the master, run kubectl delete node k8s-node2, then rejoin the node to the cluster. Problem solved.
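The node-replacement steps above can be sketched as below. This is a rough sketch assuming a binary-deployed cluster with TLS bootstrapping, as in this post; the CSR name is cluster-specific and deliberately left out, and `KUBECTL` can be overridden for a dry run.

```shell
KUBECTL=${KUBECTL:-kubectl}

replace_node() {
  node=$1
  # 1. Remove the broken node object from the cluster (run on the master).
  $KUBECTL delete node "$node"
  # 2. On the node itself, restart kubelet so it re-registers; with TLS
  #    bootstrapping, a new CSR then appears and must be approved:
  $KUBECTL get csr
  # $KUBECTL certificate approve <csr-name>   # cluster-specific, left elided
  # 3. Confirm the node comes back Ready.
  $KUBECTL get nodes
}

# Usage against a real cluster:
#   replace_node k8s-node2
```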
II. The apiserver log then showed a new error, as in the screenshot below:
From documentation found online, this appears to be a problem with the ServiceAccounts of other projects in the cluster communicating with the apiserver.
Solution:
Regenerate the ServiceAccount.
See https://tonybai.com/2017/03/03/access-api-server-from-a-pod-through-serviceaccount/
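One way to "regenerate" a ServiceAccount after the cluster CA has been replaced, based on the approach in the linked post: the existing ServiceAccount token secrets still embed the old CA, so pods using them cannot authenticate to the apiserver. Deleting those token secrets makes the controller-manager mint fresh ones. This is a sketch under that assumption, not the author's exact commands; it targets the pre-1.24 style `default-token-*` secrets.

```shell
KUBECTL=${KUBECTL:-kubectl}

# Delete the default-token secrets in a namespace so the
# controller-manager recreates them with the current CA.
regen_sa_tokens() {
  ns=$1
  for secret in $($KUBECTL -n "$ns" get secrets -o name | grep default-token); do
    $KUBECTL -n "$ns" delete "$secret"
  done
}

# Usage: regen_sa_tokens kube-system
# Pods mounting the old token must be restarted afterwards to pick up
# the regenerated secret.
```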
III. After all of the above was fixed, DNS still could not resolve Services.
The quick and blunt fix:
kubectl delete -f coredns.yaml
Then redeploy the ServiceAccount, ClusterRoleBinding, and ConfigMap from coredns.yaml. Problem solved. (If I had taken this approach from the start, I would probably have avoided most of these pitfalls!)
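The delete-and-redeploy fix above can be sketched as follows. The manifest path `coredns.yaml` and the `k8s-app=kube-dns` label are the conventional ones for CoreDNS deployments, but treat them as assumptions for your cluster; `KUBECTL` is overridable for a dry run.

```shell
KUBECTL=${KUBECTL:-kubectl}

redeploy_coredns() {
  manifest=$1
  # Tear down the existing CoreDNS objects (ServiceAccount,
  # ClusterRoleBinding, ConfigMap, Deployment, Service, ...).
  $KUBECTL delete -f "$manifest" --ignore-not-found
  # Recreate them from the same manifest.
  $KUBECTL apply -f "$manifest"
  # Watch the DNS pods come back up.
  $KUBECTL -n kube-system get pods -l k8s-app=kube-dns
}

# Usage: redeploy_coredns coredns.yaml
```

Redeploying recreates the ServiceAccount token with the current CA, which is likely why this fixed the resolution failure in one step.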
Source: oschina
Link: https://my.oschina.net/u/4350255/blog/4331872