CockroachDB and Docker Swarm

I am interested in deploying CockroachDB in the cloud, and I want to use Docker Swarm to orchestrate the cluster. I would like to define a single service whose replica count equals the cluster size, so that re-scaling or redeploying the database wouldn’t require running a separate service for every node. That would be much faster and less error-prone. It could be done by having the Docker Swarm service definition infer each node’s characteristics, much like Kubernetes does with its templates. Is this possible? Or should I stick with deploying every node individually as its own unique Docker service?

Dear superman,
thank you for your interest in CockroachDB.

Before we can help you further with Docker Swarm, can you clarify what you intend to do exactly?
The following part of your request raises several questions: “so that re-scaling or redeploying the database by executing a service for every node wouldn’t be necessary”.

CockroachDB already does scaling and data rebalancing automatically for you – your question suggests you are under the mistaken assumption that extra work would be needed to achieve this. What do you wish to achieve that CockroachDB doesn’t already provide?

Also, even though we could theoretically help you configure your swarm so that “the number of replicas would be equal to the cluster size”, I would seriously recommend against doing this. The default number of replicas is 3, but replication is organized per “range”, i.e. per chunk of data, not for the entire database – if you have a cluster of more than 3 nodes, the replicas are distributed among the available nodes automatically to balance the data load, while ensuring that no two replicas of the same range land on the same node. This is how CockroachDB already automatically tolerates the failure of any one node without loss of service. You can increase the replication factor to 5 to tolerate 2 simultaneous node failures, 7 to tolerate 3, and so on. However, setting the replication factor to 20 because you have 20 nodes is complete overkill – you are in effect saying that you expect 9 of your nodes to fail simultaneously at any time. Is that realistic?
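For reference, in recent CockroachDB versions the replication factor is adjusted through zone configurations rather than through the orchestrator. A minimal sketch (the target here is the default zone; adjust to your version and zones):

```sql
-- Raise the default replication factor from 3 to 5,
-- which tolerates 2 simultaneous node failures.
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
```

Older releases expose the same setting via the `cockroach zone set` CLI instead of SQL.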

Hi Raphael,

I’m sorry for the confusion. When I said replicas, I meant the number of containers Docker Swarm would dedicate to CockroachDB. Currently, deploying CockroachDB on Docker Swarm requires a new Docker service per node. I was hoping for a Docker service configuration where scaling the cluster up and down would only require changing the Docker replica number. This would be similar to how a webserver is scaled on Docker Swarm. Thanks for the timely response :smile:.

Hi again,

I found an article that solved my problem: I just translated the instructions from RethinkDB to CockroachDB and it worked like a charm. Deployment became much easier.
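For anyone landing here later, a single-service Swarm stack along those lines might look like the sketch below. The service name, network, volume, and `--join` addresses are illustrative, and it assumes Swarm’s hostname templating (`{{.Task.Slot}}`) so that each task gets a stable, predictable address:

```yaml
version: "3.8"
services:
  cockroach:
    image: cockroachdb/cockroach:latest
    # One stable hostname per task, via Swarm's Go-template support.
    hostname: "cockroach-{{.Task.Slot}}"
    command: start --insecure --join=cockroach-1,cockroach-2,cockroach-3
    networks:
      - cockroachnet
    volumes:
      # With the default local volume driver, each Swarm node
      # keeps its own independent copy of this volume.
      - cockroach-data:/cockroach/cockroach-data
    deploy:
      replicas: 3
networks:
  cockroachnet:
volumes:
  cockroach-data:
```

With a stack like this, scaling should reduce to something like `docker service scale <stack>_cockroach=5` instead of defining new services by hand.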

Thanks for the help

You’re welcome! Thanks for the link!

Hi again superman,

My colleague @jesse points out that you should be careful not to run multiple replicas on a single machine (e.g. with multiple services or pods on a single server). This does not buy you any resilience, and the performance benefits will be seriously limited. It is better to run different services on different physical machines. If you want more performance per machine, make sure your Docker services present multiple cores to the CockroachDB process.
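If you do go with a single scaled service, Swarm itself can enforce this one-task-per-machine rule via a placement limit (available in compose file format 3.8 and later); a sketch of the relevant `deploy` fragment:

```yaml
deploy:
  replicas: 3
  placement:
    # Never schedule two CockroachDB tasks on the same Swarm node.
    max_replicas_per_node: 1
```

This way, scaling the service beyond the number of Swarm nodes leaves the extra tasks pending rather than doubling up replicas on one machine.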