Understanding Local_Quorum

谎友^ 2021-01-27 04:56

We have 3 DCs [US, EU, ASIA] with 3 nodes each, so 9 nodes in total. We are experimenting anyway, so we can add more if we need to.

We are planning to use an RF of 2 per DC.

2 Answers
  • 2021-01-27 05:30

But this calculator states otherwise. Here, if we go for a cluster size of 3 and RF: 2, with WL/RL as QUORUM, it says we can survive the loss of no nodes. Am I missing something related to the quorum size and the total number of machines in a cluster?

    And that calculator is correct.

    Assume that you write data with a key of "A" to your cluster. With 3 nodes in 3 DCs and a RF of 2, your write will look like this:

    US          EU           ASIA
    node1- A    node1- A     node1- A
    node2- A    node2- A     node2- A
    node3-      node3-       node3-
    

You claimed: "we can tolerate a failure of 1 Node per DC."

    Since high-availability is the problem you are trying to solve, let me ask a question:

How can you guarantee that when a node goes down, it will always be node3?

Sometimes it will be. But sometimes you may also lose node1 or node2, which hold the replicas for your data "A." When that happens and you query at LOCAL_QUORUM, the coordinator will require two replicas in the local DC to respond. If one node is down, and that node happens to contain a replica of "A," your query will fail.
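    The counting argument above can be sketched in a few lines. This is not driver code, just the availability check; the node names and the placement of "A" on node1/node2 are assumptions matching the diagram.

    ```python
    # Minimal sketch: one DC of 3 nodes, key "A" replicated on node1 and node2 (RF=2).
    RF = 2
    QUORUM = RF // 2 + 1  # LOCAL_QUORUM needs 2 live replicas when RF=2

    nodes = {"node1", "node2", "node3"}
    replicas_of_a = {"node1", "node2"}  # assumed placement, matching the diagram

    # For each possible single-node failure, can a LOCAL_QUORUM read of "A" still succeed?
    survives = {down: len(replicas_of_a - {down}) >= QUORUM for down in nodes}

    for down in sorted(nodes):
        status = "succeeds" if survives[down] else "FAILS"
        print(f"{down} down -> LOCAL_QUORUM read of 'A' {status}")
    ```

    Only the loss of node3, which holds no replica of "A," is survivable; losing either replica-holding node fails the quorum.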

So, two points here. First, MarcintheCloud is right: if availability is really a concern of yours, then you should increase your RF to 3.

The second is that you should ask yourself whether you really need to query at LOCAL_QUORUM. Netflix did a presentation at the 2013 Cassandra Summit about how they experimented with "eventual consistency" and LOCAL_ONE. It's quite good, and it will make you think carefully about how badly you really need QUORUM consistency.
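    To see the trade-off concretely, here is a hedged sketch of how many live replicas each level needs in the local DC (a simplified model; only the two levels discussed here are covered):

    ```python
    def required_replicas(level: str, rf: int) -> int:
        """Live replicas needed in the local DC for a request to succeed
        (simplified model of Cassandra consistency levels)."""
        if level == "LOCAL_ONE":
            return 1
        if level == "LOCAL_QUORUM":
            return rf // 2 + 1
        raise ValueError(f"unsupported level: {level}")

    for rf in (2, 3):
        for level in ("LOCAL_ONE", "LOCAL_QUORUM"):
            need = required_replicas(level, rf)
            print(f"RF={rf} {level}: needs {need}, tolerates {rf - need} down replica(s)")
    ```

    With RF=2, LOCAL_ONE keeps working with one replica down while LOCAL_QUORUM does not; that is the availability gap the Netflix talk exploits.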

    A Netflix Experiment: Eventual Consistency != Hopeful Consistency

  • 2021-01-27 05:50

Quorum, as you mentioned, is a majority of replicas (RF/2 + 1, using integer division). With RF=2, that majority is 2/2 + 1 = 2. That means you need 2 acknowledgements from your replica nodes for the request to succeed. Thus, if one of your two replicas goes down, you cannot achieve quorum and the request will fail.
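    The arithmetic generalizes: quorum = floor(RF/2) + 1, so a quorum request survives RF − quorum down replicas. A short sketch:

    ```python
    def quorum(rf: int) -> int:
        # Majority of replicas: floor(RF/2) + 1
        return rf // 2 + 1

    for rf in range(2, 6):
        q = quorum(rf)
        print(f"RF={rf}: quorum={q}, survives {rf - q} down replica(s)")
    ```

    RF=2 survives zero down replicas, which is exactly why the calculator reports "No Nodes"; RF=3 is the smallest RF that tolerates one.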

To be able to handle an outage and still achieve quorum, I suggest upping the replication factor to 3.
