I'm running kubernetes on bare-metal Debian (3 masters, 2 workers, PoC for now). I followed k8s-the-hard-way, and I'm running into the following issue.
I had to do a yum update
in addition to this change to make it work. Might be helpful for others trying this workaround.
angeloxx's workaround also works on the AWS default image for kops (k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02 (ami-bd229ec4)):
sudo vim /etc/sysconfig/kubelet
and add the following to the end of the DAEMON_ARGS string:
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
finally:
sudo systemctl restart kubelet
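Putting the steps above together, the edited file ends up looking roughly like this. This is only a sketch: the existing flags in your DAEMON_ARGS will differ (the --pod-manifest-path flag here is just a placeholder for whatever is already there); only the two trailing cgroup flags are the actual change.

```shell
# /etc/sysconfig/kubelet (sketch; keep your existing flags, append the two cgroup flags)
DAEMON_ARGS="--pod-manifest-path=/etc/kubernetes/manifests \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice"
```

Then restart with `sudo systemctl restart kubelet` as above.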
Try starting kubelet with:
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
I'm using this solution on RHEL7 with Kubelet 1.8.0 and Docker 1.12
Thanks angeloxx!
I'm following the kubernetes guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
In the instructions, they have you make a file: /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
with the line:
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
I took your answer and added it to the end of the ExecStart line:
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
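For context, the whole drop-in (for the dedicated etcd machines, per the guide linked above) would then look something like the sketch below. The `[Service]` header, the empty `ExecStart=` reset line, and `Restart=always` come from the linked instructions; treat the exact surrounding lines as an assumption and keep whatever your version of the guide specifies.

```ini
# /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf (sketch)
[Service]
# Empty ExecStart= clears the ExecStart from the base unit so the override replaces it
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
Restart=always
```

After editing a drop-in, run `sudo systemctl daemon-reload` before restarting kubelet so systemd picks up the change.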
I'm writing this in case it helps someone else
@wolmi Thanks for the edit!
One more note: The config I have above is for my etcd cluster, NOT the kubernetes nodes. A file like 20-etcd-service-manager.conf on a node would override all the settings in the "10-kubeadm.conf" file, causing all kinds of misconfigurations. Use the "/var/lib/kubelet/config.yaml" file for nodes and/or /var/lib/kubelet/kubeadm-flags.env.
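On the nodes themselves, the same flags can instead go into the env file that kubeadm's drop-in already sources. A sketch, assuming a kubeadm-managed kubelet; your existing KUBELET_KUBEADM_ARGS will contain other flags that should be kept:

```shell
# /var/lib/kubelet/kubeadm-flags.env (sketch; merge with your existing args)
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice"
```

Alternatively, settings like the cgroup driver can live in /var/lib/kubelet/config.yaml (e.g. `cgroupDriver: systemd`); either way, restart kubelet afterwards.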
For those a little further along, with the kops AMI kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08 as above, I had to add the following to the end of the DAEMON_ARGS string:
--runtime-cgroups=/lib/systemd/system/kubelet.service --kubelet-cgroups=/lib/systemd/system/kubelet.service
and then:
sudo systemctl restart kubelet
but I found I was still getting:
Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
restarting dockerd resolved this error:
sudo systemctl restart docker
Thanks
After a little more digging around, I found a better resolution: adding this via the kops configuration instead, see
https://github.com/kubernetes/kops/issues/4049