2TB per node limitation?

Google Spanner has a limit of 2 TB of data per node.

To provide high availability and low latency for accessing a database, Cloud Spanner requires that there is a node for every 2 TB of data in the database.

Does CockroachDB have the same limitation? Or is limiting each node to 2 TB just a best practice?

We don’t impose a specific limit. We have some work-in-progress documentation here which estimates a practical limit of around 3 TB per node with 8 cores and 16 GB of RAM. Nodes with more CPU and memory could likely support more.
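For capacity planning, the difference between a hard 2 TB requirement and an estimated ~3 TB practical limit can be sketched with simple ceiling arithmetic. The `min_nodes` helper below is hypothetical, just to illustrate the calculation:

```python
import math

def min_nodes(data_tb: float, tb_per_node: float) -> int:
    """Minimum node count needed to hold data_tb of data,
    given a per-node data limit of tb_per_node."""
    return max(1, math.ceil(data_tb / tb_per_node))

# Spanner's requirement: one node per 2 TB of data.
print(min_nodes(10, 2))  # 5 nodes for a 10 TB database

# CockroachDB's estimated practical limit (~3 TB per node
# on 8 cores / 16 GB of RAM, per the answer above).
print(min_nodes(10, 3))  # 4 nodes for a 10 TB database
```

The CockroachDB figure is an estimate rather than an enforced limit, so larger nodes could shift the second number further down.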
