Init container with kubectl get pod
command is used to check the ready status of another pod.
After an Egress NetworkPolicy was turned on, the init container can't access the Kubernetes API server.
We aren't on GCP, but the same should apply.
We query AWS for the CIDR of our master nodes and use that as a value for the Helm charts that create the NetworkPolicy allowing k8s API access.
In our case the masters are part of an auto-scaling group, so we need the CIDR. In your case the IP might be enough.
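As a rough illustration, that lookup could be scripted with the AWS CLI; this is only a sketch, and the auto-scaling group name is hypothetical:

# "my-master-asg" is hypothetical; substitute your masters' group name.
SUBNET_IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=my-master-asg" \
  --query 'Reservations[].Instances[].SubnetId' --output text)
# Print each subnet's CIDR block, to be fed in as a Helm value.
aws ec2 describe-subnets --subnet-ids $SUBNET_IDS \
  --query 'Subnets[].CidrBlock' --output text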
You need to get the real IP of the master using 'kubectl get endpoints --namespace default kubernetes' and create an egress policy that allows it.
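For example, the address can be pulled out with a jsonpath query; this is a sketch that assumes a single endpoint address (with multiple masters there may be several):

kubectl get endpoints kubernetes --namespace default \
  -o jsonpath='{.subsets[0].addresses[0].ip}'

That IP then goes into the cidr field of a policy like the following: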
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32
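Once the placeholder CIDR is filled in, the policy can be applied and inspected as usual (the file name here is illustrative):

kubectl apply -f allow-apiserver.yaml
kubectl describe networkpolicy allow-apiserver --namespace test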
Update: Try Dave McNeill's answer first.
If it does not work for you (it did for me!), the following might be a workaround:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-whitelisted-egress  # illustrative name
  namespace: test
spec:
  podSelector:
    matchLabels:
      white: listed
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
This will allow access to the API server, along with every other IP address on the internet :-/
You can combine this with the "DENY all non-whitelisted traffic from a namespace" rule to deny egress for all other pods, as sketched below.
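For reference, here is a minimal sketch of such a default-deny egress policy (the policy name is illustrative):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-egress
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Egress

Since network policies are additive, pods labeled white: listed keep their allow-all egress from the policy above, while every other pod in the namespace loses egress.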