Is there some way to handle SIP, RTP, DIAMETER, M3UA traffic in Kubernetes?

夕颜 2021-02-01 09:23

From a quick read of the Kubernetes docs, I noticed that kube-proxy behaves as a Layer-4 proxy and probably works well for TCP/IP traffic (e.g. typical HTTP traffic).

3 Answers
  • 2021-02-01 09:45

    With regard to SCTP support in k8s: it was recently merged into k8s as an alpha feature. SCTP is supported as a new protocol type in Service, NetworkPolicy and Pod definitions. See the PR here: https://github.com/kubernetes/kubernetes/pull/64973
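    As a sketch (not taken from the PR itself), a Service exposing an SCTP port could look like the following once the alpha feature is enabled; the name, labels and port are illustrative (2905 is the registered M3UA port):

    ```yaml
    # Hypothetical Service exposing an SCTP port (alpha feature,
    # requires the SCTPSupport feature gate to be enabled).
    apiVersion: v1
    kind: Service
    metadata:
      name: m3ua-service      # illustrative name
    spec:
      selector:
        app: sigtran-gw       # illustrative label
      ports:
        - name: m3ua
          protocol: SCTP      # the new protocol type added by the PR
          port: 2905
          targetPort: 2905
    ```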

    Some restrictions exist:

    • the handling of multihomed SCTP associations was not in the scope of the PR. Supporting multihomed SCTP associations in cases where NAT is used is a much broader topic that also affects the current SCTP kernel modules handling NAT for the protocol. See an example here: https://tools.ietf.org/html/draft-ietf-tsvwg-natsupp-12 From the k8s perspective, one would also need a CNI plugin that supports assigning multiple IP addresses (preferably on multiple interfaces) to pods, so a pod can establish a multihomed SCTP association. One would also need an enhanced Service/Endpoint/DNS controller to handle those multiple IP addresses in the right way.
    • the support of SCTP as a protocol for type=LoadBalancer Services is up to the load balancer implementation, which is not a k8s issue
    • in order to use SCTP in a NetworkPolicy, one needs a CNI plugin that supports SCTP in NetworkPolicies
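    To illustrate that last restriction, a sketch of what an SCTP rule in a NetworkPolicy would look like (the name, label and port are hypothetical, and the rule only takes effect with a CNI plugin that enforces SCTP policies):

    ```yaml
    # Hypothetical NetworkPolicy allowing inbound SCTP traffic on port 2905.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-sctp
    spec:
      podSelector:
        matchLabels:
          app: sigtran-gw     # illustrative label
      policyTypes:
        - Ingress
      ingress:
        - ports:
            - protocol: SCTP
              port: 2905
    ```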
  • 2021-02-01 09:52

    It is possible to handle TCP and UDP traffic from clients to your service, though the details depend on where you run Kubernetes.

    Solutions

    A solution that works everywhere

    It is possible to use Ingress not only for HTTP but also for plain TCP and UDP. Some Ingress implementations support proxying these types of traffic.

    Here is an example of such a configuration for the Nginx Ingress controller, for TCP:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-configmap-example
    data:
      9000: "default/example-go:8080"  # here is a "$namespace/$service_name:$port"
    ```

    And for UDP:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: udp-configmap-example
    data:
      53: "kube-system/kube-dns:53"  # here is a "$namespace/$service_name:$port"
    ```
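    One detail worth noting: the Nginx Ingress controller only reads these ConfigMaps if it is started with the `--tcp-services-configmap` and `--udp-services-configmap` flags. A sketch of the relevant fragment of the controller's Deployment, assuming the ConfigMaps live in the default namespace:

    ```yaml
    # Fragment of the ingress-nginx controller Deployment: point the
    # controller at the TCP/UDP ConfigMaps (namespace "default" is assumed).
    containers:
      - name: nginx-ingress-controller
        args:
          - /nginx-ingress-controller
          - --tcp-services-configmap=default/tcp-configmap-example
          - --udp-services-configmap=default/udp-configmap-example
    ```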

    So you can in fact run an application that needs plain UDP and TCP connections, with some limitations (you need to manage load balancing somehow if you have more than one pod, etc.).

    But if you already have an application that does this without Kubernetes, I don't think you will have any problems with it after migrating to Kubernetes.

    A small example of a traffic flow

    For SIP over UDP, for example, you can prepare a configuration like this:

    Client -> Nginx Ingress (UDP) -> OpenSIPS load balancer (UDP) -> SIP servers (UDP)

    So the client sends packets to the Ingress, which forwards them to OpenSIPS; OpenSIPS manages the state of your SIP cluster and sends the client's packets to the proper SIP server.

    A solution only for Clouds

    Also, if you run it in a cloud, you can use ServiceType LoadBalancer for your Service and get TCP and UDP traffic to your application directly through an external load balancer provided by the cloud platform.
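    As a sketch of that approach, a Service of type LoadBalancer for SIP over UDP could look like this (the name and label are illustrative, and whether the provisioned load balancer actually supports UDP depends on the cloud provider):

    ```yaml
    # Hypothetical LoadBalancer Service for SIP over UDP (port 5060 is the
    # standard SIP port; UDP support varies by cloud provider).
    apiVersion: v1
    kind: Service
    metadata:
      name: sip-udp           # illustrative name
    spec:
      type: LoadBalancer
      selector:
        app: opensips         # illustrative label
      ports:
        - name: sip
          protocol: UDP
          port: 5060
          targetPort: 5060
    ```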

    About SCTP

    As for SCTP: unfortunately, it is not supported yet, but you can track progress here.

  • 2021-02-01 10:06

    Is it also true that there is currently no known implementation (open-source or closed-source) of the Ingress API that would allow a Kubernetes cluster to handle the above-listed types of traffic?

    Probably; see this IBM study on IBM Voice Gateway, "Setting up high availability"

    (here with SIP (Session Initiation Protocol) servers, like OpenSIPS)

    Kubernetes deployments

    In Kubernetes terminology, a single voice gateway instance equates to a single pod, which contains both a SIP Orchestrator container and a Media Relay container.
    The voice gateway pods are installed into a Kubernetes cluster that is fronted by an external SIP load balancer.
    Through Kubernetes, a voice gateway pod can be scheduled to run on a cluster of VMs. The framework also monitors pods and can be configured to automatically restart a voice gateway pod if a failure is detected.

    Note: Because auto-scaling and auto-discovery of new pods by a SIP load balancer in Kubernetes are not currently supported, an external SIP load balancer is required.

    And, to illustrate Kubernetes limitations:

    Running IBM Voice Gateway in a Kubernetes environment requires special considerations beyond the deployment of a typical HTTP-based application because of the protocols that the voice gateway uses.

    The voice gateway relies on the SIP protocol for call signaling and the RTP protocol for media, which both require affinity to a specific voice gateway instance. To avoid breaking session affinity, the Kubernetes ingress router must be bypassed for these protocols.

    To work around the limitations of the ingress router, the voice gateway containers must be configured in host network mode.
    In host network mode, when a port is opened in either of the voice gateway containers, those identical ports are also opened and mapped on the base virtual machine or node.
    This configuration also eliminates the need to define media port ranges in the kubectl configuration file, which is not currently supported by Kubernetes. Deploying only one pod per node in host network mode ensures that the SIP and media ports are opened on the host VM and are visible to the SIP load balancer.
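    The host-network setup described above can be sketched as the following pod spec fragment (the container names and images are illustrative, not IBM's actual configuration):

    ```yaml
    # Fragment of a pod spec running both voice gateway containers in host
    # network mode, so their ports open directly on the node.
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working
      containers:
        - name: sip-orchestrator          # illustrative name
          image: example/sip-orchestrator:latest
        - name: media-relay               # illustrative name
          image: example/media-relay:latest
    ```

    With one such pod per node, the SIP and RTP media ports are opened on the host VM itself, which is what makes them visible to the external SIP load balancer.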


    The network configuration put in place for Kubernetes is best illustrated in this answer, which describes the elements involved in pod/node communication:
