This was an excellent article on deploying CockroachDB on Docker Swarm with secrets.
It creates 3 separate Docker services and lets them talk over an overlay network.
But why 3 separate services?
Does each CockroachDB node require its own cert/key?
Can you not reuse the same CA/cert/key across all Cockroach nodes in a cluster?
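For context on why I'm asking: CockroachDB's cert tooling appears to let a single node cert list several addresses it is valid for, so in principle one cert could cover every service name (the names below are hypothetical, not from the article):

```shell
# Mint one node cert whose SANs cover all three service names plus localhost.
# Service names and file paths here are my assumptions.
cockroach cert create-node \
  cockroachdb-1 cockroachdb-2 cockroachdb-3 localhost 127.0.0.1 \
  --certs-dir=certs \
  --ca-key=ca.key
```

If that works, the per-node cert/key secrets in the article might be a convention rather than a hard requirement.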
Also, Docker Swarm 1.13.1 uses TLS encryption between swarm nodes, so any “internal” communication between physical swarm nodes is secure. And since we specified a dedicated network,
cockroachdb, other containers are kept from sniffing the packets.
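For reference, that dedicated overlay network would presumably be created with something like this (the network name comes from the article; the flags are my assumption). One caveat worth noting: Swarm's built-in TLS covers its own management traffic, while application traffic on an overlay network gets encrypted only if you ask for it:

```shell
# Dedicated overlay network for the CockroachDB services.
# --opt encrypted additionally encrypts application (vxlan) traffic between
# swarm nodes, on top of the TLS Swarm already uses for its control plane.
docker network create \
  --driver overlay \
  --opt encrypted \
  cockroachdb
```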
So do we need to run in “secure” mode in Docker Swarm?
The key differences seem to be these lines:
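I don't have the article open in front of me, but presumably each per-node service definition looks roughly like this, with the two lines in question being the `--replicas` and `--mount` flags (service name, image tag, and paths are my guesses, not the article's):

```shell
# Sketch of one per-node service definition.
docker service create \
  --name cockroachdb-1 \
  --network cockroachdb \
  --replicas 1 \
  --mount type=volume,destination=/cockroach/cockroach-data \
  cockroachdb/cockroach:v1.0 start --join=cockroachdb-1
```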
The first one makes the service run only a single instance.
And the second line creates an unnamed local data volume, mounted locally on the machine. OK, I get that. Basically this:
- Allows the creation of an anonymous data container for that one Cockroach node running in that one container, on whatever swarm node it starts on.
- Mounts the data volume on the swarm cluster node's local disk, instead of in a virtual data container, perhaps because it is much faster.
Does Cockroach need fast SSD IOPS?
Can it survive on 300 IOPS poor-speed drives?
If so, can we drop those lines entirely?
If we can, that would:
- Create anonymous data volume containers, linked to the running Cockroach instance. The data should persist between restarts, though I'll have to research how rolling upgrades behave.
- Allow us to define just 1 cockroach service for the entire swarm, and use the
scale=X feature to scale the cockroach Docker Swarm service to X instances, even when there are only Y physical swarm machines (nodes) available, as an example.
That would give us more saturation options for underutilized boxes in the swarm, specifically by allowing us to scale up more instances than there are physical swarm boxes.
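The single-service variant I'm imagining would look roughly like this (again, the names, image tag, and flags are my assumptions, not the article's):

```shell
# One service for the whole cluster, started at 3 replicas, each replica
# getting its own anonymous data volume from the image's VOLUME declaration.
docker service create \
  --name cockroachdb \
  --network cockroachdb \
  --replicas 3 \
  cockroachdb/cockroach:v1.0 start --join=cockroachdb

# Later, scale past the number of physical swarm nodes:
docker service scale cockroachdb=5
```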
This could in theory work. But I am wondering if this is a bad idea.