I have installed a local instance of Kubernetes via Docker on my Mac.
Following the walkthrough on how to activate autoscaling on a deployment, I ran into an issue.
I finally got it working. Here are the full steps I took:
1. Have Kubernetes running within Docker.
2. Delete any previous instance of metrics-server from your Kubernetes instance with kubectl delete -n kube-system deployments.apps metrics-server
3. Clone metrics-server with git clone https://github.com/kubernetes-incubator/metrics-server.git
4. Edit the file deploy/1.8+/metrics-server-deployment.yaml and override the default command by adding a command section that didn't exist before. The new section tells metrics-server to allow an insecure communication session (i.e. not to verify the certificates involved). Do this only for Docker, never for production deployments of metrics-server:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --kubelet-insecure-tls
5. Add metrics-server to your Kubernetes instance with kubectl create -f deploy/1.8+ (if that errors on the .yaml files, run kubectl apply -f deploy/1.8+ instead).
6. Remove the autoscaler from your deployment and add it again. It should now show the current CPU usage; the sketch below shows one way to do this.
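For reference, here is a minimal way to do step 6, assuming the deployment (and its HPA) is named php-apache and you want a 50% CPU target with 1-5 replicas; substitute your own names and thresholds:

kubectl delete hpa php-apache
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5
kubectl get hpa

After a minute or so the TARGETS column should show a real percentage instead of <unknown>, and kubectl top pods should return numbers as well.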
EDIT July 2020:
Most of the above steps still hold true, except that metrics-server has changed and that file no longer exists.
The repo now recommends installing it like this:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
So we can now download that file,

curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml

add --kubelet-insecure-tls under args (line 88 of components.yaml) in the metrics-server Deployment, and run

kubectl apply -f components.yaml
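To confirm the patched metrics-server is actually serving metrics, these standard kubectl checks work (the k8s-app=metrics-server label is assumed from the default manifest; output will vary per cluster):

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes

The APIService should report Available=True, and kubectl top nodes should print CPU and memory figures instead of an error.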
I had the same issue while using my Kubernetes kubeadm lab; the updated procedure is at https://github.com/kubernetes-sigs/metrics-server
It solved this error: horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
We upgraded to AWS EKS version 1.13.7 and that's when we started having problems with HPA. It turned out that on my deployment I had to specify a value for resources.requests.cpu (200m in my case), and the HPA started working for me.
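For illustration, this is roughly where the request goes in the Deployment manifest (the my-app names, image, and the 200m value are placeholders; use your own):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        resources:
          requests:
            cpu: 200m   # HPA computes CPU utilisation as a percentage of this request

Without a CPU request, the HPA's utilisation percentage has nothing to divide by, which is why the metric never shows up.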
If your nodes use an Internal-IP, this may work for you. Follow @Mr.Turtle's answer above and, at step 4, add one more line to the command:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.3
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
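If you are not sure which address types your nodes actually report, you can inspect them with standard kubectl (output varies per cluster):

kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{.items[*].status.addresses}'

The first shows the INTERNAL-IP and EXTERNAL-IP columns; the second dumps the raw address list (InternalIP, ExternalIP, Hostname) that --kubelet-preferred-address-types chooses from.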