Data replicated everywhere?

Hello.

I would like to know if there is a way to configure a database or table to be replicated on all the nodes (without having to increase or decrease the replication factor when adding/removing a node).

In our case this is data that is 99% read-only, but we need to have it as close as possible to the location where it is needed.

Thank you. :slight_smile:

Hey @kedare - have you seen the documentation on replication zones? The example of even replication across DCs might suit your use case.
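
For illustration, something along those lines might look like this (just a sketch: the database name and the `datacenter` locality key/values are placeholders that have to match the `--locality` flags your nodes start with, and the `ALTER ... CONFIGURE ZONE` syntax is only available in recent versions):

```sql
-- Pin one replica in each of three datacenters for everything in this database.
-- Placeholder names: proxy_config and dc1/dc2/dc3 must match your node localities.
ALTER DATABASE proxy_config CONFIGURE ZONE USING
    num_replicas = 3,
    constraints = '{"+datacenter=dc1": 1, "+datacenter=dc2": 1, "+datacenter=dc3": 1}';
```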

Hello Tim.

I took a look, but it looks like it still requires the number of replicas to be configured?
Here the idea would be to just set up “put a replica on every node” without having to care about the number of replicas (basically each app server has a CockroachDB node running, so they all have the data locally).

Thanks

Gotcha, I think I misunderstood. We don’t have a setting to ensure that each node in a cluster of variable size receives a full copy of a table. If you had a stable number of nodes, setting the replication factor to that number would be the way to do it. However, I wouldn’t recommend pinning the replication factor to the number of nodes if you might remove nodes in the future; increases are currently handled better than decreases.
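
For example, if you knew you would always run exactly five nodes, a zone config like this would put a full copy of the table on every node (again just a sketch with placeholder names, assuming a version that supports `ALTER ... CONFIGURE ZONE`):

```sql
-- With a fixed 5-node cluster, a replication factor of 5 means every node
-- ends up holding a replica of each of the table's ranges.
ALTER TABLE proxy_config.rules CONFIGURE ZONE USING num_replicas = 5;

-- Verify the resulting zone config.
SHOW ZONE CONFIGURATION FOR TABLE proxy_config.rules;
```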

Out of curiosity, what problem are you trying to solve by replicating the table across every node? There might be an easier way.

Basically we have a CockroachDB node on each of the reverse proxies, and they hold data used by the reverse proxy, like SSL configuration settings or redirect rules.
We want the data to be as close as possible to each reverse proxy (for now they are all in the same region, but in the future we will spread across multiple regions all over the world) for performance and availability reasons (the data should still be readable even if other nodes in the cluster go down).
In the past we were using MongoDB for this (with the data replicated everywhere by default).

Gotcha. So each node is actually a proxy server and you want to have a full copy of the data without having to hop to other nodes? How many of these servers do you have, and are you anticipating adding more as you scale?

Right now we just have 2 nodes (+ 1 seed node just used as a static node until we can move everything to Kubernetes). In the next few weeks we plan to add other nodes (to extend this deployment to other types of reverse proxies we have that will also need access to that data).
We also plan to add autoscaling (we don’t know yet how to do this, as we need to handle the certificate management in CockroachDB; maybe that could be possible with some scripting).
And in the longer term (for now we have some platform limitations around GKE compatibility with the Google Anycast LB that are blocking this step), a multi-regional deployment on GKE (with data replicated everywhere, basically).