Load balancing to multiple containers of same app in a pod

Submitted by 三世轮回 on 2021-01-29 10:14:27

Question


I have a scenario where I need to run two instances of an app container within the same pod. I have them set up to listen on different ports. Below is what the Deployment manifest looks like. The Pod launches just fine with the expected number of containers, and I can connect to both ports on the podIP from other pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: app1-service
  name: app1-dep
  namespace: exp
spec:
  selector:
    matchLabels:
      service: app1-service
  template:
    metadata:
      labels:
        service: app1-service
    spec:
      containers:
        - image: app1:1.20
          name: app1
          ports:
          - containerPort: 9000
            protocol: TCP
        - image: app1:1.20
          name: app1-s1
          ports:
          - containerPort: 9001
            protocol: TCP

I can even create two different Services one for each port of the container, and that works great as well. I can individually reach both Services and end up on the respective container within the Pod.

apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  name: app1-s1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

I want both instances of the container behind a single Service that round-robins between the two containers. How can I achieve that? Is it possible within the realm of Services, or would I need to explore Ingress for something like this?


Answer 1:


Kubernetes services have three proxy modes: iptables (the default), userspace, and IPVS.

  • Userspace: the oldest mode; it distributes traffic in round-robin fashion only.
  • Iptables: the default; it selects a pod at random and sticks with it for the connection.
  • IPVS: supports multiple ways to distribute traffic, but you first have to install it on your nodes — for example, on a CentOS node with the command yum install ipvsadm — and then make it available.

As mentioned, a Kubernetes Service does not round-robin by default. To activate IPVS, you have to pass two parameters to kube-proxy:

--proxy-mode=ipvs

--ipvs-scheduler=rr (to select round robin)
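
On clusters where kube-proxy is configured through a KubeProxyConfiguration object (as kubeadm-based clusters typically are, via the kube-proxy ConfigMap in kube-system), the equivalent settings can be sketched roughly as follows — a minimal fragment, assuming the IPVS kernel modules and ipvsadm are already present on the nodes:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # switch the proxier from iptables to IPVS
ipvs:
  scheduler: "rr"   # rr = round robin; other schedulers (lc, sh, ...) exist

After changing the ConfigMap, the kube-proxy pods need to be restarted for the new mode to take effect.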




Answer 2:


One can expose multiple ports using a single Service. In a Kubernetes Service manifest, spec.ports[] is an array, so one can specify multiple ports in it. For example, see below:

apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: http-s1
    port: 81
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

Now the hostname is the same for both backends, and only the port differs; kube-proxy in userspace mode chooses a backend via a round-robin algorithm.




Answer 3:


What I would do is separate the app into two different Deployments, with one container in each. I would set the same labels on both Deployments and attach them both to one single Service.

This way, you don't even have to run them on different ports.

Later on, if you want one of them to receive more traffic, you can simply adjust the number of replicas of each Deployment.
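
A minimal sketch of that approach, reusing the app1:1.20 image, the exp namespace, and the service: app1-service label from the question (the Deployment names app1-a and app1-b are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-a
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
  template:
    metadata:
      labels:
        service: app1-service
    spec:
      containers:
        - image: app1:1.20
          name: app1
          ports:
          - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-b
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
  template:
    metadata:
      labels:
        service: app1-service
    spec:
      containers:
        - image: app1:1.20
          name: app1
          ports:
          - containerPort: 9000

Because both Pod templates carry the same label and listen on the same port, the single app1 Service from the question (selector service: app1-service, targetPort: 9000) would match the Pods of both Deployments and spread traffic across them.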



Source: https://stackoverflow.com/questions/57130296/load-balancing-to-multiple-containers-of-same-app-in-a-pod
