Question
I want my pods to receive multicast network traffic flowing from outside of my kubernetes cluster to specific ports in my nodes.
I'm considering two solutions:
- Adding the `hostNetwork: true` flag to their YAML files, along with a `hostPort` configuration, in order to receive the traffic directly in the pod.
- Forwarding the traffic locally on the nodes from the `eth0` interface to the `docker0` interface using the `iptables` command.
Method 1 is an official feature in Kubernetes, but it feels like breaking a security wall that Docker originally imposed, and it might cause port collisions with the host's own processes, etc.
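For reference, method 1 amounts to a pod spec along these lines (a minimal sketch; the pod name, image, and port number are placeholders, not from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multicast-receiver        # placeholder name
spec:
  hostNetwork: true               # pod shares the node's network namespace
  containers:
  - name: receiver
    image: my-multicast-app:latest   # placeholder image
    ports:
    - containerPort: 5353         # placeholder port
      hostPort: 5353              # must not collide with ports used by host processes
```

Note that with `hostNetwork: true` the container already binds directly on the node's interfaces, so the `hostPort` entry is mostly declarative; the port-collision concern above applies regardless.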
Method 2 on the other hand transparently forwards the multicast network traffic to the pods.
Despite the fact I can use an automation tool to spread this configuration (ansible/salt etc), anything configured 'out of the scope' of Kubernetes feels a little hacky to me.
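A rough shape of method 2, run on each node (an untested sketch: the multicast group 239.0.0.1 and UDP port 5353 are placeholders, and plain `iptables` rules only permit forwarding; actually routing multicast across interfaces typically also needs a multicast routing daemon such as `smcroute` to enable kernel multicast forwarding):

```shell
#!/bin/sh
# Run as root on each node. Group address and port are placeholders.

# Permit multicast packets arriving on eth0 to be forwarded to the docker0 bridge.
iptables -A FORWARD -i eth0 -o docker0 -d 239.0.0.1 -p udp --dport 5353 -j ACCEPT

# Route the administratively-scoped multicast range towards the bridge.
ip route add 239.0.0.0/8 dev docker0
```

This is exactly the kind of per-node state an automation tool would have to distribute and keep in sync.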
Would like to hear your pros and cons, comments, and maybe other solutions to the problem of multicasting to a kubernetes cluster.
Answer 1:
In the end we picked method 1, as it is the documented way to achieve what we wanted, and I can report that it works fine.
Answer 2:
I played a bit with `hostNetwork` and I understand your reservations. Turning it on gives my pod the same IP as the hosting node, but then it could not communicate with any of the other nodes (maybe I did something wrong?).

Edit: I was definitely missing something. It works once I set both:

```yaml
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
```

I added the `dnsPolicy` line as well.

I am now trying an alternative approach using a CNI plugin. It is still new to me, so I will post an update once I know more.
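For completeness, both fields from the edit above belong at the pod `spec` level, e.g. (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multicast-receiver
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # keeps cluster DNS resolution working with hostNetwork
  containers:
  - name: receiver
    image: my-multicast-app:latest     # placeholder image
```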
Answer 3:
I have heard that Weave Net, from Weaveworks, supports multicast: https://www.weave.works/use-cases/multicast-networking/
Source: https://stackoverflow.com/questions/48304357/multicast-traffic-to-kubernetes