Best practices for cluster startup/recovery?

So how do people generally manage/configure cockroachdb on production servers? Are there some documented best practices?

Does everyone write their own case-specific shell scripts to invoke/monitor/restart cockroachdb?

I was looking at https://github.com/tomogoma/cockroach-installer/, which has a systemd unit file that will watch/restart the cockroach process.

What I’m puzzled by is that there seems to be no easy way to both start and monitor cockroach with systemd. I can make a unit file with fixed command-line options that will get me certs, a data directory, etc., but whether to start with or without a --join= option will vary from invocation to invocation.

We’re looking at a 3- or 4-server deployment, all at one site (on CentOS 7). I can write a simple script that will manually start a node, or start and join a node… but I can’t see an easy way to automate monitoring/restarting because I won’t know which mode I should start in (e.g. initial node, or join an existing cluster). That pretty much seems to rule systemd monitoring out. I guess I could write a complicated script that tries to detect the situation and then invoke/exec cockroach with different arguments, and have systemd monitor that…

I’d drop a list of potential peers into a variable and use nc to try to connect to each of them. If none allow a connection, I’d start up as the first node in a cluster; otherwise I’d --join any of the ones I found alive. Even then there are race conditions, which I’m not sure how I could easily handle.
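Something like this rough sketch, I suppose (hostnames, the store path, and the certs directory are just placeholders, and the race is still there):

    #!/bin/bash
    # Sketch only: probe potential peers, then decide how to start cockroach.
    PEERS="roach1 roach2 roach3 roach4"          # placeholder hostnames
    ALIVE=""
    for p in $PEERS; do
        [ "$p" = "$(hostname -s)" ] && continue  # skip ourselves
        # 26257 is cockroach's default inter-node port
        if nc -w 2 "$p" 26257 </dev/null >/dev/null 2>&1; then
            ALIVE="$ALIVE,$p:26257"
        fi
    done
    if [ -z "$ALIVE" ]; then
        # nobody answered: assume we're the first node in the cluster
        exec cockroach start --certs-dir=certs --store=/var/lib/cockroach/data
    else
        # join whichever peers responded (strip the leading comma)
        exec cockroach start --certs-dir=certs --store=/var/lib/cockroach/data --join="${ALIVE#,}"
    fi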

Also, I don’t really find any “recover from a totally crashed cluster” documentation. Is it really as simple as ‘start any node, join a second node, join the rest’? All the docs I see assume that you still have a live node somewhere else. What if I lose power, UPS, and generator at my one site with all 4 nodes? There doesn’t seem to be a good way of bootstrapping a cluster.

MariaDB/Galera Cluster has a similar challenge: they use a config file to store potential peers, a ‘standard join-a-cluster’ service file (or systemd unit file), and a special-case manual script to start the first node in the cluster.

Is there a better, standard way?

Hello Fred,

The simplest solution would be to use the init command to initialize your cluster.

With this approach, you start all your nodes with the --join flag set. They will all see that they are not initialized and will wait. Once you issue cockroach init ... against one of the nodes, that node will initialize the cluster and the others will start joining.
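For example, with three nodes (hostnames and the certs directory here are just placeholders):

    # on each of node1, node2, node3 -- they all come up uninitialized and wait:
    cockroach start --certs-dir=certs --join=node1:26257,node2:26257,node3:26257

    # then, once, from any machine that can reach node1:
    cockroach init --certs-dir=certs --host=node1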

As for the “recover from a totally crashed cluster”, there is nothing special to do. Once a node has been initialized, it will happily start. You still need to get enough nodes up and running to get a quorum, but there is no required order.

Similarly, once a node is in a cluster, it persists the list of other nodes to local disk and uses that in addition to the nodes specified in the --join flag. This means that even if the nodes in the --join flag no longer exist, you can still start the cluster. We do recommend keeping the --join flag reasonably fresh, though.

As for tools, we do not currently have any published recommendations. We personally use supervisor in our clusters, but systemd is a perfectly reasonable solution.
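For what it’s worth, a minimal supervisor program section might look roughly like this; the paths and the --join list are placeholders, not our exact config:

    [program:cockroach]
    command=/usr/local/bin/cockroach start --certs-dir=/var/lib/cockroach/certs --store=/var/lib/cockroach/data --join=node1:26257,node2:26257,node3:26257
    directory=/var/lib/cockroach
    user=cockroach
    autostart=true
    autorestart=true
    stopsignal=TERM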

To clarify, do you mean:

  1. on each node, manually start cockroachdb with the init command and --join=otherhost
  2. then I can just add a systemd unit file to start/restart it without a --join option, and it will start/restart/stop as needed because cockroach remembers its hosts? My host list/IP addresses will be very static, but now and then I may want to manually start with a --join option.

Right?

The init command is a client command: it sends an RPC to the specified host and tells it to initialize.

The recommended process to create a cluster would be something along the lines of:

  1. start all nodes with --join=<some set of nodes, if not all, then at least some> (this would be in your systemd config; see the unit file sketch below)
  2. at this point, all nodes are uninitialized and just waiting
  3. issue cockroach init --host=<one of the nodes specified in the --join flag>. This can be done from any machine that can reach the specified host. Again, this is a client command, not a server flag.
  4. the specified node will initialize the cluster and start serving. Other nodes will now join the cluster.

While you could technically clear out the --join flag after the cluster has been created, you should leave at least a handful of active nodes in it.
Obviously, any new nodes created will need to point to one of the existing nodes in the cluster.
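To make step 1 concrete, a unit file along these lines should work; the paths, user, and hostnames are assumptions you would adapt to your own setup:

    [Unit]
    Description=CockroachDB node
    After=network.target

    [Service]
    Type=simple
    User=cockroach
    WorkingDirectory=/var/lib/cockroach
    # fixed --join list; the one-time "cockroach init" is issued separately, not from this unit
    ExecStart=/usr/local/bin/cockroach start --certs-dir=/var/lib/cockroach/certs --store=/var/lib/cockroach/data --join=node1:26257,node2:26257,node3:26257
    Restart=always
    RestartSec=5
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

Once all nodes are running under a unit like that, the single cockroach init against any one of them brings the cluster up.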

Recently I was crashing nodes in my roach cluster due to an issue that is now fixed in 1.1.3. To get a node started again, I just logged into the machine and did:

$ cockroach start --certs-dir=certs --cache=50% --background

No --join at all. The node finds its old friends and away we go.

I presume I could just put that into a systemd service file and it would just work (without the --background flag).
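I imagine the relevant part of the unit would just be something like this (the binary path is a guess):

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --cache=50%
    Restart=always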

That only leaves the issue of supplying the --join flags at initial installation time.

We use Ansible to manage our clusters (install/upgrade/add nodes, etc.). I’ve built a couple of modules to help with the initial cluster config and a couple of modules to manage databases/users/privs.

The playbooks generate systemd configs based on whatever config is used as input. We also leave out the --join directive in systemd, and so far we’ve had no issues.
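Roughly the idea, as a simplified sketch (not our actual template, and the variable names are made up):

    # templates/cockroach.service.j2
    [Service]
    Type=simple
    User={{ cockroach_user }}
    ExecStart={{ cockroach_binary }} start --certs-dir={{ cockroach_certs_dir }} --store={{ cockroach_store_dir }} --cache={{ cockroach_cache }}
    Restart=always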