Hi,

GKE (on GCP) has a limit of 60TB per node.

Does your managed service on GCP have a similar limit? Ideally we would like to start off small and then add storage when needed without any limits.

Cheers

Hi @batman

thank you for your inquiry!

There is a limit on storage per node (the particular limit depends on the package you choose), but it is configurable, and we'll be happy to accommodate your growing storage needs over time.

Does this answer your question? Is there anything specific you'd like to know?

Thanks for the fast reply.

I always thought data was replicated on every node. So if we had a DB of size 100 TB, I assumed that meant 100 TB per node, or 300 TB of total storage for a 3-node cluster. Is that incorrect?

One of our internal systems has 1 petabyte of storage. I'm not sure how much we will need in the next 5 years, but I need to know: if we have a DB whose storage needs to scale, can we achieve that with a managed solution?

CockroachDB decouples replication from cluster size: increasing the number of nodes and increasing the replication factor are two independent operations.

- the replication factor determines how many node failures you wish to tolerate
- the number of nodes determines overall horizontal scalability (performance)

It is customary to use a replication factor of 3 with many more nodes than that.

So if you have, say, 10 nodes, each able to store 500 GB of data, then with replication factor 3 the total "useful" storage is 10 × 500 / 3 ≈ 1.6 TB. With 20 nodes it would be ≈ 3.3 TB.
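The arithmetic above can be sketched in a few lines of Python (the helper name is purely illustrative, not a CockroachDB API):

```python
def useful_storage_tb(nodes: int, storage_per_node_gb: float,
                      replication_factor: int = 3) -> float:
    """Total 'useful' (logical) cluster storage in TB: each logical byte
    is stored replication_factor times across the cluster."""
    return nodes * storage_per_node_gb / replication_factor / 1000

print(useful_storage_tb(10, 500))  # ~1.67 TB
print(useful_storage_tb(20, 500))  # ~3.33 TB
```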

We commonly advise increasing storage per node if you have a lot of "cold" (rarely accessed) data relative to warm data. As the amount of warm data grows, we recommend increasing the number of nodes instead (at constant storage per node) so that you can serve the warm data more effectively.

Overall, serving 1 PB of data through a CockroachDB cluster should be fine. The remaining question is what cluster size you'd need; for that I recommend contacting one of our sales reps.

Thanks, that was a very useful explanation. I missed that the replication factor and number of nodes were independent.

GKE can support 1,000 nodes per cluster, which means that with a limit of 64 TB/node and the default 3-way replication, the useful storage is 1000 × 64 / 3 ≈ 21,333 TB.


We're not going to recommend running CockroachDB with 1,000 nodes at this point. To achieve 1 PB of useful storage, we'd recommend increasing storage and vCPUs per node instead.

But I'm glad the situation is now clear. I'll be curious to hear what you end up choosing.