Kubernetes service with clustered PODs in active/standby

Submitted by 前提是你 on 2020-01-05 04:34:07

Question


Apologies for not keeping this short; any attempt to do so would make me miss out on some important details of my problem.

I have a legacy Java application that works in active/standby mode in a clustered environment, exposing certain RESTful web services via a predefined port.

If there are two nodes in my app cluster, at any point in time only one is in Active mode and the other in Passive mode, and requests are always served by the node whose app instance is Active. 'Active' and 'Passive' are just roles; the app itself runs on both nodes. The Active and Passive instances communicate with each other over this same predetermined port.

Suppose I have a two-node cluster with one instance of my application running on each node; one instance will initially be Active and the other Passive. If the active node goes down for some reason, the app instance on the other node detects this via a heartbeat mechanism, takes over control, and becomes the new Active. When the old Active comes back up, it detects that the other instance now holds the Active role, so it goes into Passive mode.

The application manages to provide its RESTful web services on the same endpoint IP, irrespective of which node is running the app in Active mode, by using a cluster IP that piggybacks on the active instance; the cluster IP switches over to whichever node is currently Active.

I am trying to containerize this app and run it in a Kubernetes cluster for scale and ease of deployment. I have been able to containerize it and deploy it as a Pod in a Kubernetes cluster.

In order to bring in the Active/Passive roles here, I am running two instances of this Pod, each pinned to a separate Kubernetes node using node affinity (each node is labeled as either active or passive, and the Pod definitions pin to these labels), and clustering them via my app's own clustering mechanism, so that only one is active and the other passive.

I am exposing the REST service externally using Kubernetes Service semantics, using a NodePort to expose the REST web service on the master node.

Here's my YAML file content:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp-service
spec:
  type: NodePort
  ports:
    - port: 8443
      nodePort: 30403
  selector:
    app: myapp

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - active
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: active-pv-claim
      containers:
      - name: active
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
           privileged: true
        ports:
         - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - passive
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: active-pv-claim
      containers:
      - name: passive
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
           privileged: true
        ports:
         - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage

Everything seems to work fine, except that since both Pods expose the web service via the same port, the Kubernetes Service routes incoming requests to either of these Pods at random. Since my REST web service endpoints work only on the Active node, requests through the Service succeed only when they happen to be routed to the Pod whose app holds the Active role. If at any point the Service routes an incoming request to the Pod whose app is in the Passive role, the service is inaccessible/not served.

How do I make this work in such a way that the Kubernetes Service always routes requests to the Pod whose app is in the Active role? Is this doable in Kubernetes, or am I aiming for too much?

Thank you for your time!


Answer 1:


You can use a readiness probe in conjunction with a leader-election container. The election process always elects exactly one master from the election pool, and if you make sure only that pod is marked as ready, only that pod will receive traffic.
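
A minimal sketch of the readiness-probe half of this, assuming the application can report its own role (the /status/active path below is a hypothetical endpoint that returns 200 only on the Active instance; substitute whatever your app actually exposes):

# Excerpt of the container spec from the Deployments above.
# Assumption: the app serves a hypothetical HTTPS endpoint
# /status/active that returns 200 only while this instance is Active.
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8443
        readinessProbe:
          httpGet:
            path: /status/active   # hypothetical role-reporting endpoint
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 10  # allow time to join the app cluster
          periodSeconds: 5         # re-check the role every 5 seconds
          failureThreshold: 2      # mark unready after 2 failed checks

With this in place, the Service's endpoint list only ever contains the pod whose probe succeeds: when the Passive instance takes over, its probe starts passing, the old Active's probe starts failing, and traffic shifts automatically without any relabeling.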




Answer 2:


One way to achieve this is to add a label to each pod marking it as active or standby, and then select only the active pod in your Service. This will send traffic only to the pod labeled as active.

https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#service-and-replicationcontroller

You can find another example in this document:

https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
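
A minimal sketch of this label-selector approach, assuming a role label that something on your side (a script or sidecar hooked into the app's heartbeat/failover) keeps up to date; Kubernetes will not flip this label for you:

# Assumption: each pod carries a role label (role: active or
# role: standby) that an external mechanism updates on failover.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
    - port: 8443
      nodePort: 30403
  selector:
    app: myapp
    role: active   # only the pod currently labeled active gets traffic

On failover, the relabeling could be done with something like kubectl label pod <new-active-pod> role=active --overwrite (and the reverse for the old active); the Service's endpoints then update automatically.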



Source: https://stackoverflow.com/questions/47291581/kubernetes-service-with-clustered-pods-in-active-standby
