From a quick read of the Kubernetes docs, I noticed that kube-proxy behaves as a Layer-4 proxy, and works well for TCP/IP traffic (such as typical HTTP traffic).
With regard to SCTP support in Kubernetes: it has recently been merged into Kubernetes as an alpha feature. SCTP is supported as a new protocol type in Service, NetworkPolicy and Pod definitions. See the PR here: https://github.com/kubernetes/kubernetes/pull/64973
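As a sketch of what that looks like (assuming the alpha SCTPSupport feature gate is enabled on your cluster; the Service name and selector below are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-example          # hypothetical name
spec:
  selector:
    app: sctp-app             # hypothetical label
  ports:
    - protocol: SCTP          # new protocol type, alpha (SCTPSupport feature gate)
      port: 9090
      targetPort: 9090
```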
Some restrictions exist:
It is possible to handle TCP and UDP traffic from clients to your Service, but it depends slightly on where you run Kubernetes.
It is possible to use Ingress for both TCP and UDP protocols, not only for HTTP. Some of the Ingress implementations support proxying these types of traffic.
Here is an example of that kind of configuration for the Nginx Ingress controller, for TCP:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080" # the format is "$namespace/$service_name:$port"
And for UDP:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53" # here is a "$namespace/$service_name:$port"
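For these ConfigMaps to take effect, the Nginx Ingress controller itself has to be started with flags pointing at them. A sketch of the relevant fragment of the controller's container spec (the namespaces here are assumptions; adjust them to wherever you created the ConfigMaps):

```yaml
# Fragment of the ingress-nginx controller Deployment (container args only).
# The ConfigMap references match the example names defined above.
args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=default/tcp-configmap-example
  - --udp-services-configmap=default/udp-configmap-example
```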
So, actually, you can run an application which needs plain UDP and TCP connections, with some limitations (you need to manage load balancing somehow if you have more than one Pod, etc.).
But if you have an application which can do this now, without Kubernetes, I don't think you will have any problems with it after migrating to Kubernetes.
A small example of a traffic flow
For SIP traffic over UDP, for example, you can prepare a configuration like this:
Client -> Nginx Ingress (UDP) -> OpenSIPS Load balancer (UDP) -> Sip Servers (UDP).
So, the client will send packets to the Ingress, which will forward them to OpenSIPS; OpenSIPS will manage the state of your SIP cluster and send the client's packets to the proper SIP server.
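For the Ingress hop in that flow, the UDP ConfigMap entry might look like this (a sketch only: the namespace, the service name `opensips-lb`, and the use of 5060, the default SIP port, are assumptions for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-sip       # hypothetical name
data:
  5060: "default/opensips-lb:5060" # "$namespace/$service_name:$port"; service name is an assumption
```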
Also, if you run it on a cloud, you can use ServiceType LoadBalancer for your Service and get TCP and UDP traffic to your application directly through an external load balancer provided by the cloud platform.
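A sketch of such a Service (the name, selector and port are assumptions; also note that some cloud providers do not allow mixing TCP and UDP ports in a single LoadBalancer Service, in which case you need one Service per protocol):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sip-udp-lb            # hypothetical name
spec:
  type: LoadBalancer          # the cloud platform provisions an external LB for this
  selector:
    app: opensips             # hypothetical label
  ports:
    - name: sip
      protocol: UDP
      port: 5060
      targetPort: 5060
```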
As for SCTP: unfortunately no, it is not supported yet, but you can track the progress here.
Also, is it true that there is currently no known implementation (open-source or closed-source) of the Ingress API that allows a Kubernetes cluster to handle the above-listed types of traffic?
Probably; see this IBM study on IBM Voice Gateway, "Setting up high availability" (here with SIP (Session Initiation Protocol), like OpenSIPS):
Kubernetes deployments
In Kubernetes terminology, a single voice gateway instance equates to a single pod, which contains both a SIP Orchestrator container and a Media Relay container.
The voice gateway pods are installed into a Kubernetes cluster that is fronted by an external SIP load balancer.
Through Kubernetes, a voice gateway pod can be scheduled to run on a cluster of VMs. The framework also monitors pods and can be configured to automatically restart a voice gateway pod if a failure is detected. Note: Because auto-scaling and auto-discovery of new pods by a SIP load balancer in Kubernetes are not currently supported, an external SIP.
And, to illustrate Kubernetes limitations:
Running IBM Voice Gateway in a Kubernetes environment requires special considerations beyond the deployment of a typical HTTP-based application because of the protocols that the voice gateway uses.
The voice gateway relies on the SIP protocol for call signaling and the RTP protocol for media, which both require affinity to a specific voice gateway instance. To avoid breaking session affinity, the Kubernetes ingress router must be bypassed for these protocols.
To work around the limitations of the ingress router, the voice gateway containers must be configured in host network mode.
In host network mode, when a port is opened in either of the voice gateway containers, those identical ports are also opened and mapped on the base virtual machine or node.
This configuration also eliminates the need to define media port ranges in the kubectl configuration file, which is not currently supported by Kubernetes. Deploying only one pod per node in host network mode ensures that the SIP and media ports are opened on the host VM and are visible to the SIP load balancer.
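The host-network setup described above can be sketched as follows in a pod spec (the pod name, container names, image references and SIP port are assumptions for illustration, not taken from IBM's actual charts):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: voice-gateway           # hypothetical name
spec:
  hostNetwork: true             # ports opened by the containers are opened on the node itself
  containers:
    - name: sip-orchestrator    # hypothetical container name/image
      image: example/voice-gateway-so
      ports:
        - containerPort: 5060   # SIP signaling, visible on the host VM
    - name: media-relay         # hypothetical container name/image
      image: example/voice-gateway-mr
```

Combined with scheduling only one such pod per node (e.g. via a DaemonSet or pod anti-affinity), this makes the SIP and media ports visible to the external SIP load balancer.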
The network configuration that Kubernetes puts in place is best illustrated in this answer, which describes the elements involved in pod/node communication: