Replication, availability, and data safety

First, great product! Second, I have some questions about data safety that are not clearly documented. Can somebody confirm my assertions below?

A) 3 nodes, replication factor 1: if we disconnect one or more nodes, some ranges will be unavailable, but they will become available again once the nodes are restored. We will only lose data if a node is permanently destroyed or its disk is corrupted.

B) 3 nodes, replication factor 2: same behavior as in A, but we can also recover from one permanently downed node (manual intervention required?).

In summary: will CRDB with replication factor 2 work like PostgreSQL with a synchronous slave?


Hey @frelars,

I can see where you’re coming from, but that’s not the case. CRDB is built on the Raft consensus algorithm, and Raft requires a majority of the members in a group to reach consensus. No consensus is possible when only one of two members is available, which is why you’re prohibited from setting num_replicas to 2.
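The majority arithmetic can be sketched in a few lines of plain Python (this is illustrative only, not CRDB code):

```python
# A Raft group of n members needs a strict majority, floor(n/2) + 1, to commit.
def majority(n: int) -> int:
    """Votes needed for a Raft group of n members to commit an entry."""
    return n // 2 + 1

def failures_tolerated(n: int) -> int:
    """How many members can fail while the group still makes progress."""
    return n - majority(n)

for n in (1, 2, 3, 5):
    print(f"replicas={n}: majority={majority(n)}, "
          f"failures tolerated={failures_tolerated(n)}")
# replicas=2 needs both members to vote, so it tolerates zero failures --
# the same as a single replica, but with twice the write cost.
```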

Also, in the first example, it’s not true that only some ranges will be unavailable. The system requires certain ranges (like our liveness range) to be available for the cluster to function at all. With a replication factor of one, if the node that went down contained the system ranges, all data would become unavailable.

So basically, you can either run CRDB on a single node and have no fault tolerance on any ranges, or run three or more nodes with a replication factor of three or more and tolerate at least one node failure without data becoming unavailable.

Hope that helps.

Thanks. For A, I understand the system will be unavailable, and that’s fine.

But for B I’m a bit surprised. If we run with replication factor 2 (assume for now it is allowed; I did not know it was prohibited), then as long as both nodes are available the Raft protocol should be able to make progress, and everything committed should be safe. If one node goes down permanently, Raft will not be able to make progress (it waits for the missing node), but everything already committed should still be safe. So by manually “removing” the downed node, the remaining cluster should be able to continue. We assume in this example that the system tables have a replication factor of 3 or more.
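My reasoning above, sketched as a toy simulation (plain Python, not CRDB code; the `Node`/`commit` names are made up for illustration): with two replicas, losing one node halts new writes because the quorum of two is unreachable, but entries already replicated to both nodes survive on the remaining one.

```python
REPLICATION_FACTOR = 2
QUORUM = REPLICATION_FACTOR // 2 + 1   # majority of 2 is 2

class Node:
    def __init__(self):
        self.log = []      # entries committed and replicated to this node
        self.alive = True

nodes = [Node(), Node()]

def commit(entry):
    live = [n for n in nodes if n.alive]
    if len(live) < QUORUM:
        return False       # quorum lost: no new writes can commit
    for n in live:
        n.log.append(entry)
    return True

assert commit("x=1")            # both nodes up: the write commits
nodes[1].alive = False          # one node is permanently lost
assert not commit("x=2")        # no quorum of 2: progress stops
assert nodes[0].log == ["x=1"]  # but the committed entry is still safe
```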

For our case, the jump in replication factor from 1 (with backup) to 3 is a bit steep. We would instead like to increase data safety by adding a second node, and then increase performance/capacity by adding a third node. Most small/medium-size businesses run one database with some form of slave plus backup, and when the master goes down, manual intervention reroutes traffic to the slave (or the backup is restored on the master).

Is this something CRDB will consider supporting? I think it would make CRDB an even better choice for small/medium-size businesses that plan to grow.


  • I understand that CRDB operates on ranges, not nodes. So when I talk about a node, I mean all ranges hosted by that node.
  • By data safety I mean the ability of the cluster to recover from permanent loss of a node, not availability. Downtime is acceptable.