How do I model a PostgreSQL failover cluster with Docker/Kubernetes?

Backend · Unresolved · 4 answers · 599 views
谎友^ 2021-02-12 03:20

I'm still wrapping my head around Kubernetes and how it's supposed to work. Currently, I'm struggling to understand how to model something like a PostgreSQL cluster.

4 Answers
  • 2021-02-12 03:52

    There's an example in OpenShift: https://github.com/openshift/postgresql/tree/master/examples/replica The principle is the same in plain Kubernetes: it doesn't use anything truly OpenShift-specific, and you can run the images in plain Docker.

  • 2021-02-12 03:53

    You can look at one of these open-source PostgreSQL high-availability tools:

    1. Crunchy Data PostgreSQL Operator
    2. Patroni
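
    Patroni, for example, turns each PostgreSQL node into a member of a cluster coordinated through a distributed configuration store such as etcd, and handles leader election and failover for you. A minimal patroni.yml sketch for one node — the hostnames, etcd address, and credentials below are placeholders, not anything from this thread:

    ```yaml
    scope: pg-cluster          # cluster name shared by all members
    name: node1                # unique name of this member

    restapi:
      listen: 0.0.0.0:8008
      connect_address: node1:8008

    etcd:
      hosts: etcd:2379         # placeholder DCS endpoint

    bootstrap:
      dcs:
        ttl: 30                # leader lease; expiry triggers a new election
        loop_wait: 10
        postgresql:
          use_pg_rewind: true  # lets a demoted primary rejoin as a standby

    postgresql:
      listen: 0.0.0.0:5432
      connect_address: node1:5432
      data_dir: /var/lib/postgresql/data
      authentication:
        replication:
          username: replicator # placeholder credentials
          password: secret
        superuser:
          username: postgres
          password: secret
    ```

    Each node runs its own Patroni agent with a config like this; whichever agent holds the leader key in etcd runs the primary, and the rest configure themselves as streaming standbys.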
  • 2021-02-12 03:57

    A Kubernetes StatefulSet is a good base for a stateful service, but you will still need some work to configure the correct membership among the PostgreSQL replicas.

    The Kubernetes blog has an example: http://blog.kubernetes.io/2017/02/postgresql-clusters-kubernetes-statefulsets.html
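
    As a sketch of that base layer: a StatefulSet with a volumeClaimTemplate gives each replica a stable network identity (postgres-0, postgres-1, …, via a headless Service) and its own persistent volume. The names and sizes below are illustrative, and this alone does not set up replication — that is the extra membership work mentioned above:

    ```yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: postgres
    spec:
      serviceName: postgres      # headless Service -> stable DNS: postgres-0.postgres, ...
      replicas: 3
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
          - name: postgres
            image: postgres:13   # placeholder image/version
            ports:
            - containerPort: 5432
            env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:      # one PersistentVolumeClaim per replica
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    ```

    Pods are created and deleted in order, and a restarted pod re-attaches to the same volume — exactly the guarantees a database replica needs before any PostgreSQL-level replication is layered on top.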

  • 2021-02-12 04:14

    You can give PostDock a try, either with docker-compose or Kubernetes. I have tried it in our project with docker-compose, with the topology shown below:

    pgmaster (primary node1)  --|
    |- pgslave1 (node2)       --|
    |  |- pgslave2 (node3)    --|----pgpool (master_slave_mode stream)----client
    |- pgslave3 (node4)       --|
       |- pgslave4 (node5)    --|
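
    The topology above could be sketched in docker-compose roughly as follows. The service names mirror the diagram, but the image names and environment variables are placeholders — the real PostDock images define their own configuration variables, which are documented in the PostDock repository:

    ```yaml
    version: "3"
    services:
      pgmaster:                        # primary (node1)
        image: postgres:11             # placeholder; PostDock ships its own images
        environment:
          POSTGRES_PASSWORD: secret    # placeholder credential

      pgslave1:                        # standby (node2), replicating from pgmaster
        image: postgres:11
        depends_on: [pgmaster]

      pgpool:                          # clients connect here, never to the DB nodes
        image: pgpool/pgpool           # placeholder image name
        ports:
          - "5432:5432"
        depends_on: [pgmaster, pgslave1]
    ```

    The important design point is the last service: because the application only ever talks to pgpool, the primary can move between database containers without the client's connection string changing.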
    

    I have tested the following scenarios, and they all work very well:

    • Replication: changes made on the primary (i.e., master) node are replicated to all standby (i.e., slave) nodes
    • Failover: stop the primary node, and a standby node (e.g., node4) automatically takes over the primary role
    • Prevention of two primaries: when the old primary (node1) is brought back, node4 continues as the primary while node1 catches up and rejoins as a standby
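
    One way to spot-check these scenarios is to ask each node whether it is in recovery: `pg_is_in_recovery()` returns `f` on the primary and `t` on a standby. The container names below just follow the diagram and are assumptions about your setup:

    ```shell
    # On the original primary: expect 'f' (not in recovery)
    docker exec node1 psql -U postgres -tAc "SELECT pg_is_in_recovery();"

    # Simulate a failure of the primary
    docker stop node1

    # After automatic promotion, the former standby should report 'f'
    docker exec node4 psql -U postgres -tAc "SELECT pg_is_in_recovery();"

    # Bring node1 back: it should now report 't', i.e. a standby, not a second primary
    docker start node1
    docker exec node1 psql -U postgres -tAc "SELECT pg_is_in_recovery();"
    ```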

    As far as the client application is concerned, all of these changes are transparent: the client just points at the pgpool node and keeps working in all of the scenarios above.

    Note: if you have trouble getting PostDock up and running, you could try my forked version of PostDock.

    Pgpool-II with Watchdog

    A problem with the architecture above is that pgpool becomes a single point of failure. So I have also tried enabling Watchdog for pgpool-II with a delegated virtual IP, to avoid that single point of failure.

    master (primary node1)  --\
    |- slave1 (node2)       ---\     / pgpool1 (active)  \
    |  |- slave2 (node3)    ----|---|                     |----client
    |- slave3 (node4)       ---/     \ pgpool2 (standby) /
       |- slave4 (node5)    --/
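
    The watchdog setup lives in pgpool.conf on each pgpool node. A fragment for pgpool1 might look like the following — the IPs and hostnames are placeholders, and note that some of these parameter names differ between pgpool-II versions (e.g. pgpool-II 4.2 renamed several of the `other_pgpool_*` parameters):

    ```ini
    use_watchdog = on
    delegate_IP = '192.168.1.100'        # virtual IP that floats to the active pgpool
    wd_hostname = 'pgpool1'              # this node
    wd_port = 9000
    heartbeat_destination0 = 'pgpool2'   # peer to exchange heartbeats with
    heartbeat_destination_port0 = 9694
    other_pgpool_hostname0 = 'pgpool2'   # peer pgpool/watchdog endpoints
    other_pgpool_port0 = 9999
    other_wd_port0 = 9000
    ```

    pgpool2 gets the mirror-image config. When heartbeats from the active node stop, the standby watchdog claims `delegate_IP`, which is why clients can keep a single address through a pgpool failover.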
    

    I have tested the following scenarios, and they all work very well:

    • Normal scenario: both pgpools start up, and the virtual IP is automatically assigned to one of them (in my case, pgpool1)
    • Failover: shut down pgpool1, and the virtual IP is automatically moved to pgpool2, which then becomes the active pgpool
    • Restarting the failed pgpool: start pgpool1 again; the virtual IP stays with pgpool2, and pgpool1 now works as a standby

    As far as the client application is concerned, these changes are also transparent: the client just points at the virtual IP and keeps working in all of the scenarios above.
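
    Since a pgpool failover shows up to the client as, at worst, one dropped connection, a thin retry wrapper around the connect call is usually all the client side needs. A small sketch in Python — the virtual IP and credentials are placeholders, and `with_retry` is a hypothetical helper, not part of any driver:

    ```python
    import time

    def with_retry(fn, attempts=3, delay=0.0, retry_on=(Exception,)):
        """Call fn, retrying on the given exceptions up to `attempts` times."""
        last = None
        for _ in range(attempts):
            try:
                return fn()
            except retry_on as exc:
                last = exc          # remember the failure and try again
                time.sleep(delay)
        raise last                  # all attempts failed: surface the last error

    # With a driver such as psycopg2, the client only ever dials the virtual IP,
    # so a pgpool failover is absorbed as a single retried connect:
    #
    #   conn = with_retry(lambda: psycopg2.connect(
    #       host="192.168.1.100",   # the delegated virtual IP (placeholder)
    #       dbname="app", user="app", password="secret"))
    ```
    
    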

    You can find this project at my GitHub repository on the watchdog branch.
