Question
I have two pods, payroll and mysql, labelled name=payroll and name=mysql respectively. There is another pod named internal with the label name=internal. I am trying to allow egress traffic from internal to the other two pods while allowing all ingress traffic. My NetworkPolicy looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
This does not match the two pods payroll and mysql. What am I doing wrong?
The following works:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
What is the best way to write a NetworkPolicy, and why is the first one incorrect?
I am also wondering why the to field is an array while the podSelector entries inside it form an array as well. Aren't they the same thing? Multiple podSelector entries or multiple to fields; using either one works.
Answer 1:
This does not match the two pods payroll and mysql. What am I doing wrong?
- I've reproduced your scenario with both pod-to-service and pod-to-pod setups, and in both cases both yamls worked well, once the indentation was fixed so that both podSelector entries sit at the same level, as follows:
- to:
  - podSelector:
      matchLabels:
        name: payroll
  - podSelector:
      matchLabels:
        name: mysql
What is the best way to write a NetworkPolicy?
- The best approach depends on the scenario; it's good practice to create one NetworkPolicy per rule. I'd say the first yaml is the better one if you intend to expose ports 8080 and 3306 on BOTH pods; otherwise it would be better to create two rules, one per pod, to avoid leaving unnecessary open ports (for example, as sketched below).
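For instance, a minimal sketch of that stricter variant, assuming payroll only needs TCP 8080 and mysql only needs TCP 3306 (the name internal-policy-strict is just illustrative), splits the egress into two rules, each pairing one pod with its own port:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy-strict
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:                     # rule 1: payroll, only on 8080
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - to:                     # rule 2: mysql, only on 3306
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306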
I am also wondering why the to field is an array while the podSelector entries inside it form an array as well. Aren't they the same thing? Multiple podSelector entries or multiple to fields; using either one works.
From the NetworkPolicySpec v1 networking API reference:
egress (NetworkPolicyEgressRule array): List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod, OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod.
Keep in mind that each egress rule in this list also carries its own ports array.
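In other words, each entry under to is a separate peer, and peers are ORed against the same ports list of that rule, while fields combined inside a single peer are ANDed. A minimal sketch of the difference (the namespace label team: payroll is just an assumption for illustration):
egress:
- to:
  - podSelector:            # peer 1 ...
      matchLabels:
        name: payroll
  - podSelector:            # ... OR peer 2
      matchLabels:
        name: mysql
- to:
  - namespaceSelector:      # a single peer: namespace AND pod label must both match
      matchLabels:
        team: payroll
    podSelector:
      matchLabels:
        name: payroll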
Why is the first one incorrect?
- Both rules are basically the same, only written in different formats. I'd say you should check whether any other rule is in effect for the same labels; the commands sketched right after this list can help with that.
- I'd suggest creating a test cluster and applying the step-by-step example below.
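For example, a quick way to list the policies in the namespace and see which pods each one selects (internal-policy being the name from your second yaml):
$ kubectl get networkpolicy
$ kubectl describe networkpolicy internal-policy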
Reproduction:
- This example is very similar to your case. I'm using nginx images because they are easy to test, and I changed the ports to 80 in the NetworkPolicy. I'm calling your first yaml internal-original.yaml and the second one you posted second-internal.yaml:
$ cat internal-original.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-original
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 80
$ cat second-internal.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 80
- Now we create the pods with the labels and expose the services:
$ kubectl run mysql --generator=run-pod/v1 --labels="name=mysql" --image=nginx
pod/mysql created
$ kubectl run internal --generator=run-pod/v1 --labels="name=internal" --image=nginx
pod/internal created
$ kubectl run payroll --generator=run-pod/v1 --labels="name=payroll" --image=nginx
pod/payroll created
$ kubectl run other --generator=run-pod/v1 --labels="name=other" --image=nginx
pod/other created
$ kubectl expose pod mysql --port=80
service/mysql exposed
$ kubectl expose pod payroll --port=80
service/payroll exposed
$ kubectl expose pod other --port=80
service/other exposed
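Side note: on newer kubectl releases the --generator flag has been removed and kubectl run creates a bare pod by default, so the equivalent commands would presumably be just, for example:
$ kubectl run mysql --labels="name=mysql" --image=nginx
$ kubectl expose pod mysql --port=80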
- Now, before applying the networkpolicy, I'll log into the internal pod to download wget, because after that outside access will be blocked:
$ kubectl exec internal -it -- /bin/bash
root@internal:/# apt update
root@internal:/# apt install wget -y
root@internal:/# exit
- Since your rule is blocking access to DNS, I'll list the IPs and test with them:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
internal 1/1 Running 0 62s 10.244.0.192
mysql 1/1 Running 0 74s 10.244.0.141
other 1/1 Running 0 36s 10.244.0.216
payroll 1/1 Running 0 48s 10.244.0.17
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.101.209.87 <none> 80/TCP 23s
other ClusterIP 10.103.39.7 <none> 80/TCP 9s
payroll ClusterIP 10.109.102.5 <none> 80/TCP 14s
- Now let's test the access with the first yaml:
$ kubectl get networkpolicy
No resources found in default namespace.
$ kubectl apply -f internal-original.yaml
networkpolicy.networking.k8s.io/internal-original created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:17:55-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:04-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:08-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
- Now let's test the access with the second yaml:
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
internal-original name=internal 96s
$ kubectl delete networkpolicy internal-original
networkpolicy.networking.k8s.io "internal-original" deleted
$ kubectl apply -f second-internal.yaml
networkpolicy.networking.k8s.io/internal-policy created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:24-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:30-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:35-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
- As you can see, the connections to the services for the pods with the allowed labels succeeded, while the connection to the service for the pod with the other label failed.
Note: If you wish to allow pods to resolve DNS, you can follow this guide: Allow DNS Egress Traffic
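As a rough sketch of such a rule, assuming the cluster DNS listens on port 53 (the default for CoreDNS/kube-dns), the extra egress entry would look roughly like this:
egress:
- to:
  - namespaceSelector: {}   # any namespace; narrow this down (e.g. to kube-system) if preferred
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53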
If you have any questions, let me know in the comments.
Source: https://stackoverflow.com/questions/62248909/how-does-matchexpressions-work-in-networkpolicy