Redis sentinels on the same servers as master/slave?

Asked 2021-02-06 09:37 by 醉梦人生

I've been doing some reading on how to use Redis Sentinel, and I know it's possible to have 2 or more sentinels, and load balance between them when calling from the client side. Would it be a good idea to have the sentinels on the same servers as the master/slave? I'd rather have the sentinels be on the same server as the master/slave to reduce latency.

4 Answers
  • 2021-02-06 10:06

    It all depends on the level of disaster recovery (DR) you want to achieve. Let's assume you have the following components, independently of where they are hosted:

    • 2 Sentinels
    • 1 Master
    • 1 Slave

    1 Master, 1+ Slaves

    One host scenario

    Host fails: you lose everything; a bad replication scenario for most use cases.

    Two host scenario

    Host 1:

    • (Current elected) Master
    • 1 Sentinel

    Host 2:

    • Slave
    • 1 Sentinel

    It is true that in this scenario the hosts can fail one at a time, which gives you some level of security. Just be clear whether by "different server" you mean physically different hosts. If these are just VMs on the same host, you do not get the same level of DR.

    Regarding your question:

    "I'd rather have the sentinels be on the same server as the master/slave to reduce latency."

    Notice that Sentinels keep track of the current master and slaves, but Redis clients do not connect to the master via the Sentinels; they only ask the Sentinels where the current master is. So in terms of reads and writes you're not looking at any considerable latency gains.

    Configuration provider. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.

    (see: http://redis.io/topics/sentinel)
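
    To illustrate that lookup flow, here is a minimal sketch using the redis-py client. The sentinel addresses and the master group name mymaster are placeholders for your own configuration:

        from redis.sentinel import Sentinel
        import redis

        # Hypothetical Sentinel addresses; replace with your own hosts.
        sentinel = Sentinel([('sentinel-1', 26379), ('sentinel-2', 26379)],
                            socket_timeout=0.5)

        # The client asks Sentinel where the current master for "mymaster" is...
        host, port = sentinel.discover_master('mymaster')

        # ...and then reads/writes go directly to that Redis instance,
        # not through Sentinel, so Sentinel adds no per-command latency.
        master = redis.Redis(host=host, port=port)
        master.set('greeting', 'hello')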

    The way I see it, the only latency gains are in the heartbeats exchanged between the Sentinels and the master/slaves. As long as you are not spreading your servers across the whole world, that should be fine.

    It all depends on the use cases, but it seems you would do best to keep things as separate as possible if all other things are equal (costs, distance to clients, etc).

  • 2021-02-06 10:09

    First, Sentinel is not a load balancer or a proxy for Redis.

    Second, not all failures are the death of a host. Sometimes a server hangs briefly, sometimes a network cable gets unplugged, etc. Because of this, it is not good practice to run Sentinel on the same hosts as your Redis instances. If you're using Sentinel to manage failover, anything less than three sentinels running on nodes other than your Redis master and slave(s) is asking for trouble.

    Sentinel uses a quorum mechanism to vote on a failover and slave promotion. With fewer than three sentinels you run the risk of split-brain, where two or more Redis servers think they are master.

    Imagine the scenario where you run two servers and a sentinel on each: if you lose one, you lose reliable failover capability.

    Clients only connect to Sentinel to learn the current master's connection information. Anytime the client loses connectivity, it repeats this process. Sentinel is not a proxy for Redis: commands for Redis go directly to the Redis server.

    The only reliable reason to run Sentinel with fewer than three sentinels is for service discovery, which means not using it for failover management.

    Consider the two host scenario:

    Host A: redis master + sentinel 1 (Quorum 1)
    Host B: redis slave + sentinel 2  (Quorum 1)
    

    If Host B temporarily loses network connectivity to Host A in this scenario, Host B will promote itself to master. Now you have:

    Host A: redis master + sentinel 1 (Quorum 1)
    Host B: redis master + sentinel 2  (Quorum 1)
    

    Any clients which connect to Sentinel 2 will be told Host B is the master, whereas clients which connect to Sentinel 1 will be told Host A is the master (which, if you have your Sentinels behind a load balancer, means half of your clients).

    Thus, the minimum you need to run for acceptable, reliable failover management is:

    Host A: Redis master
    Host B: Redis Slave
    Host C: Sentinel 1
    Host D: Sentinel 2
    Host E: Sentinel 3
    

    Your clients connect to the sentinels and obtain the current master for the Redis instance (by name), then connect to it. If the master dies, the connection should be dropped by the client, whereupon the client will/should connect to Sentinel again and get the new information.

    How well each client library handles this is dependent on the library.
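
    As one example, with the redis-py library (the Sentinel hosts and the mymaster group name below are placeholders), master_for gives you a connection pool that re-queries the Sentinels whenever the connection drops, which matches the reconnect behaviour described above:

        from redis.sentinel import Sentinel
        from redis.exceptions import ConnectionError

        # Hypothetical Sentinel hosts C, D and E from the layout above.
        sentinel = Sentinel([('host-c', 26379), ('host-d', 26379), ('host-e', 26379)],
                            socket_timeout=0.5)

        # Connections created from this pool ask Sentinel for the current
        # master of the named group before connecting to Redis directly.
        master = sentinel.master_for('mymaster', socket_timeout=0.5)

        try:
            master.set('key', 'value')
        except ConnectionError:
            # The old master is gone; retrying lets the pool re-discover
            # the new master via Sentinel (usually enough for simple cases).
            master.set('key', 'value')

        # Reads can be spread across slaves the same way.
        replica = sentinel.slave_for('mymaster', socket_timeout=0.5)
        print(replica.get('key'))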

    Ideally, Hosts C, D, and E are either the same hosts you connect to Redis from (i.e. the client hosts) or represent a good sampling of them. The main thrust here is to ensure you are checking from wherever you need to connect to Redis from. Failing that, place them in the same DC/rack/region as the clients.

    If you want your clients to talk to a load balancer, try to have your Sentinels on those LB nodes if possible, adding additional non-LB hosts as needed to obtain an odd number of sentinels > 2. An exception is if your client hosts are dynamic, in the sense that their number is inconsistent (they scale up for traffic and down for slow periods, for example). In that scenario you pretty much have to run your Sentinels on non-client, non-Redis-server hosts.

    Note that if you do this, you will then need to write a daemon which monitors the Sentinel PUBSUB channel for the master-switch event and updates the LB, which you must configure to talk only to the current master (never try to talk to both). It is more work, but it makes the use of Sentinel transparent to the client, which only needs to know the LB IP/port. A sketch of such a watcher follows.
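
    A minimal sketch of such a watcher, assuming redis-py and a hypothetical update_load_balancer() hook that reconfigures your LB. Sentinel publishes a +switch-master event whose payload is "<master-name> <old-ip> <old-port> <new-ip> <new-port>":

        import redis

        def update_load_balancer(host, port):
            # Hypothetical hook: repoint the LB at the new master.
            print(f"LB should now forward to {host}:{port}")

        # Connect to one of the Sentinels (they speak the normal Redis protocol).
        sentinel_conn = redis.Redis(host='sentinel-1', port=26379)
        pubsub = sentinel_conn.pubsub()
        pubsub.subscribe('+switch-master')

        for message in pubsub.listen():
            if message['type'] != 'message':
                continue
            # Payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
            name, old_ip, old_port, new_ip, new_port = message['data'].decode().split()
            if name == 'mymaster':
                update_load_balancer(new_ip, int(new_port))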

  • 2021-02-06 10:16

    You can have sentinels on the same machines as the master/slave, but the number of sentinels must be odd (3/5/7). There should be at least three sentinels, and it is a must to have a dedicated machine for at least one of them.

    If you have only two nodes, then in a split-brain (network partition) situation the slave will be promoted to master. Both masters will then accept data from clients. However, when things come back to normal, one of the masters will be demoted to a slave; that master will lose all of its data, since as a slave it now replicates the data from the current master.
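
    One mitigation discussed in the Redis Sentinel docs is to make a master stop accepting writes when it loses contact with its replicas, so fewer writes are silently lost before the demotion. A sketch with redis-py (the host and the values shown are illustrative; these settings would normally live in redis.conf):

        import redis

        # Connect to the Redis master (address is illustrative).
        master = redis.Redis(host='redis-master', port=6379)

        # Refuse writes unless at least 1 replica is connected and lagging
        # by no more than 10 seconds; a partitioned master then errors on
        # writes instead of accepting data that will later be discarded.
        # (Older Redis versions call these min-slaves-to-write / -max-lag.)
        master.config_set('min-replicas-to-write', 1)
        master.config_set('min-replicas-max-lag', 10)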

    Check this for a good explanation of Redis architectural designs and split-brain: http://www.yzuzun.com/2015/04/some-architectural-design-concepts-for-redis/

  • 2021-02-06 10:16

    It's certainly not a recommended approach.

    The Redis Sentinel docs explain the tradeoffs pretty well. Hope this helps. https://redis.io/topics/sentinel#example-sentinel-deployments
