kubelet

kubelet failed with kubelet cgroup driver: “cgroupfs” is different from docker cgroup driver: “systemd”

Submitted by 喜你入骨 on 2020-01-10 19:38:10
Question: The cgroup driver configuration looks correct in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

I also checked the environment from the CLI:

$ systemctl show --property=Environment kubelet | cat
Environment=KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf\x20--require-kubeconfig=true KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests\x20--allow-privileged=true KUBELET_NETWORK_ARGS=--network-plugin=cni\x20--cni-conf-dir=/etc/cni/net.d\x20--cni-bin-dir=/opt/cni/bin KUBELET_DNS_ARGS=--cluster-dns=10.96.0
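The usual resolution for this mismatch is to make Docker use the systemd cgroup driver so both agents agree. A sketch, assuming a stock Docker install whose daemon.json does not yet exist (paths and service names may vary by distro):

```shell
# Tell dockerd to use the systemd cgroup driver instead of its
# default "cgroupfs", matching kubelet's --cgroup-driver=systemd.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo systemctl restart kubelet

# Verify the two now agree:
docker info 2>/dev/null | grep -i 'cgroup driver'
```

Alternatively the kubelet side can be flipped to --cgroup-driver=cgroupfs, but keeping both on systemd is the more common choice on systemd hosts.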

Kubelet Configuration

Submitted by 谁说胖子不能爱 on 2020-01-03 19:29:14
Question: I am running into OOM issues on CentOS on some Kubernetes nodes. I would like to set things up like they have in the demo:

--kube-reserved is set to cpu=1,memory=2Gi,ephemeral-storage=1Gi
--system-reserved is set to cpu=500m,memory=1Gi,ephemeral-storage=1Gi
--eviction-hard is set to memory.available<500Mi,nodefs.available<10%

Where do I add those params? Should I add them to /etc/systemd/system/kubelet.service? In what format? Also, do I just set these on the worker nodes? This is in a live
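On a kubeadm-style install these flags do not go into kubelet.service itself but into a systemd drop-in that the unit already reads. A sketch, assuming the unit picks up KUBELET_EXTRA_ARGS (the drop-in filename is arbitrary; note that a literal % must be escaped as %% inside a unit file):

```shell
# /etc/systemd/system/kubelet.service.d/20-reserved.conf  (hypothetical name)
[Service]
Environment="KUBELET_EXTRA_ARGS=--kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi --eviction-hard=memory.available<500Mi,nodefs.available<10%%"
```

After writing the file, `systemctl daemon-reload && systemctl restart kubelet` applies it. These are per-node settings, so they belong on every node whose resources you want protected, not only workers.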

Logs sent to console using logback configuration in java app, not visible in Kubernetes using kubectl logs

Submitted by 拜拜、爱过 on 2020-01-02 23:04:50
Question: I read somewhere in the Kubernetes docs that Kubernetes reads application logs from stdout and stderr in pods. I created a new application and configured it to send logs to a remote Splunk HEC endpoint (using the splunk-logback jars) and, at the same time, to the console. By default, console logs in logback should go to System.out, which should then be visible via kubectl logs. But that is not happening in my application. My logback file: <?xml version="1.0" encoding="UTF-8"?> <configuration>
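For kubectl logs to show anything, the container's main process must actually write to the container's stdout. A minimal logback sketch with a ConsoleAppender wired into the root logger (the Splunk HEC appender from the question is elided and only hinted at here):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- ConsoleAppender writes to System.out; the container runtime captures
       that stream, which is what `kubectl logs` reads back. -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
    <!-- <appender-ref ref="SPLUNK"/>  the question's Splunk HEC appender -->
  </root>
</configuration>
```

A common gotcha matching this symptom: the appender exists but is not referenced from `<root>` (or only the Splunk appender is), so nothing ever reaches System.out.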

Where are Kubernetes' pods logfiles?

Submitted by 自作多情 on 2019-12-31 22:40:34
Question: When I run $ kubectl logs <container> I get the logs of my pods. But where are the files for those logs? Some sources say /var/log/containers/, others say /var/lib/docker/containers/, but I couldn't find my actual application's or pod's log.

Answer 1: The on-disk filename comes from:

docker inspect $pod_name_or_sha | jq -r '.[0].LogPath'

assuming the docker daemon's configuration is the default {"log-driver": "json-file"}, which is almost guaranteed to be true if kubectl logs behaves correctly.
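Both sources are right: with the json-file log driver, /var/log/containers/ holds symlinks that chain (via /var/log/pods/) to the docker-managed file under /var/lib/docker/containers/. A sketch that rebuilds the chain in a temp directory so it can be followed without a live cluster (all names here are mock values):

```shell
# Real layout:
#   /var/log/containers/<pod>_<ns>_<ctr>-<id>.log
#     -> /var/log/pods/<ns>_<pod>/<n>.log
#       -> /var/lib/docker/containers/<id>/<id>-json.log
tmp=$(mktemp -d)

# The docker-managed log file (json-file driver writes one JSON object per line).
mkdir -p "$tmp/var/lib/docker/containers/abc123"
echo '{"log":"hello\n","stream":"stdout"}' > "$tmp/var/lib/docker/containers/abc123/abc123-json.log"

# The two layers of symlinks kubelet/docker maintain.
mkdir -p "$tmp/var/log/pods/default_mypod" "$tmp/var/log/containers"
ln -s "$tmp/var/lib/docker/containers/abc123/abc123-json.log" "$tmp/var/log/pods/default_mypod/0.log"
ln -s "$tmp/var/log/pods/default_mypod/0.log" "$tmp/var/log/containers/mypod_default_app-abc123.log"

# Following the chain lands on the docker-managed file:
readlink -f "$tmp/var/log/containers/mypod_default_app-abc123.log"
```

On a real node, `readlink -f /var/log/containers/<name>.log` resolves the same way and should agree with the `docker inspect … LogPath` answer above.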

how to change kubelet working dir to somewhere else

Submitted by [亡魂溺海] on 2019-12-31 01:53:14
Question: Kubernetes 1.7.x: kubelet stores some data in /var/lib/kubelet; how can I change that to somewhere else? My /var partition is very small.

Answer 1: OK, I figured it out. On CentOS, in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf you can add

Environment="KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data/k8s/kubelet"

then

systemctl daemon-reload
systemctl restart kubelet

Answer 2: If your /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is loading its environment from /etc/sysconfig/kubelet, as mine does, you can update that file to include your extra args:

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--root-dir=
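The /etc/sysconfig/kubelet route from the second answer, sketched out; the directory value is borrowed from the first answer, and the sysconfig path is a CentOS/RHEL convention that may not apply on other distros:

```shell
# /etc/sysconfig/kubelet  (CentOS/RHEL convention; path is an assumption)
KUBELET_EXTRA_ARGS=--root-dir=/data/k8s/kubelet
```

After editing, `systemctl daemon-reload && systemctl restart kubelet` as in the first answer. Note that existing state under /var/lib/kubelet is not migrated automatically; on a node with running pods, drain it first.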

Should I install Kubelet for OpenShift?

Submitted by 社会主义新天地 on 2019-12-11 18:38:12
Question: I'm trying to download the OpenShift Origin server with this tutorial in order to be able to increase the amount of memory dedicated to the build. Yet while following the tutorial I tried to launch the server and got an error about the kubelet. I have Ubuntu 16.04.

~/openshift-origin-server-v3.9.0-191fece-linux-64bit$ sudo ./openshift start
....
F0622 17:00:24.659470 14610 server.go:173] failed to run Kubelet: failed to create kubelet: failed to get docker version: Cannot connect to the Docker daemon
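The fatal line is not about OpenShift itself: openshift start embeds a kubelet, and the kubelet cannot reach the Docker daemon. A sketch of the usual check (the default socket path and the systemd service name are assumptions for this host):

```shell
# "Cannot connect to the Docker daemon" -> dockerd is not running, or the
# current user cannot reach its socket.
sock=${DOCKER_SOCK:-/var/run/docker.sock}
if [ -S "$sock" ]; then
  echo "daemon socket present at $sock; if it still fails, run as root or join the docker group"
else
  echo "no daemon socket at $sock; start the daemon first, e.g. sudo systemctl start docker"
fi
# then retry: sudo ./openshift start
```

So the answer to the title question is no: a separate kubelet install is not needed, but a running Docker daemon is.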

Occasionally pods will be created with no network which results in the pod failing repeatedly with CrashLoopBackOff

Submitted by 北城以北 on 2019-12-08 04:24:08
Question: Occasionally I see an issue where a pod starts up without network connectivity. Because of this, the pod goes into a CrashLoopBackOff and is unable to recover. The only way I can get the pod running again is with kubectl delete pod and waiting for it to be rescheduled. Here's an example of a liveness probe failing due to this issue:

Liveness probe failed: Get http://172.20.78.9:9411/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while
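A stopgap (it masks the underlying networking race rather than fixing it) is to find pods stuck in CrashLoopBackOff and delete them so they reschedule. A sketch of the filter, run here against a captured snippet standing in for the real kubectl output so the pipeline itself is runnable:

```shell
# Mock output standing in for: kubectl get pods --no-headers
pods='web-1   1/1   Running            0    5d
web-2   0/1   CrashLoopBackOff   12   1h
job-9   0/1   Completed          0    2h'

# Column 3 is STATUS; print only the names of crash-looping pods.
# Live version: kubectl get pods --no-headers | awk '$3 == "CrashLoopBackOff" {print $1}' | xargs -r kubectl delete pod
echo "$pods" | awk '$3 == "CrashLoopBackOff" {print $1}'
```

Deleting is safe only for pods managed by a controller (Deployment, ReplicaSet, etc.) that will recreate them.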

kubelet changes in Kubernetes 1.3.4

Submitted by China☆狼群 on 2019-12-07 17:23:03
The kubelet in Kubernetes 1.3.4 reports errors at startup:

I0805 11:10:26.517174 2057 kubelet.go:2479] skipping pod synchronization - [container runtime is down]
E0805 11:10:26.567819 2057 kubelet.go:2837] Container runtime sanity check failed: container runtime version is older than 1.21

I ran into the same thing when upgrading to Kubernetes 1.2; see http://my.oschina.net/fufangchun/blog/677117. The installed Docker is 1.8.2, whose API version is 1.20; Kubernetes 1.2.0 had no problem with it.

[root@localhost ~]# docker version
Client:
 Version: 1.8.2-el7.centos
 API version: 1.20
 Package Version: docker-1.8.2-10.el7.centos.x86_64
 Go version: go1.4.2
 Git commit: a01dc02/1.8.2
 Built:
 OS/Arch: linux
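The sanity check is a plain version comparison: this kubelet requires Docker API 1.21 or newer (Docker 1.9+), while Docker 1.8.2 only speaks API 1.20, so the fix is to upgrade Docker. The comparison itself can be sketched with sort -V:

```shell
required=1.21
api=1.20   # what docker 1.8.2 reports, per the output above

# sort -V orders version strings numerically; if the smaller of the pair is
# the reported API version, the runtime is too old.
if [ "$(printf '%s\n' "$required" "$api" | sort -V | head -n1)" = "$required" ]; then
  echo "container runtime OK"
else
  echo "container runtime version is older than $required"
fi
```

With api=1.21 or higher the check passes, which is why upgrading Docker (to 1.9+ here) clears the error.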
