Migrate from "single-node" to "cluster" mode?

I’ve created a new CRDB instance (with start-single-node), imported a PostgreSQL backup into it, and run some queries against it.
Now I would like to migrate from “single-node” to a clustered instance with 3 CRDB nodes, keeping the data already imported.
Could you point me to documentation or the detailed steps necessary for that “migration”?

Thanks,

@ecerichter In our docs we have an example of how Replication and Rebalancing works from one node to three nodes. Does that help?

Thanks, after re-reading the docs I think I’m starting to understand. Some concepts are not totally clear, though.
For instance, if I stop my “single-node” instance and start a cluster node with the same data directory, will the data be replicated to the other instances when they join? Is my assumption correct?

@ecerichter Yes, that is correct. The data will be replicated to new nodes as they join the cluster.

There are actually a few other steps involved in scaling from single-node to multi-node. Take a look at this example and let me know if there are open questions: cockroach start-single-node | CockroachDB Docs
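In rough outline, the conversion looks something like the sketch below (this is illustrative, not the exact commands from the docs page — hostnames, ports, certs dir, and store paths are placeholders you’d replace with your own): stop the single node, restart it with `cockroach start` plus a `--join` list, then start the new nodes with empty stores pointing at the same join list. No `cockroach init` is needed, since the cluster was already initialized by `start-single-node`.

```shell
# Sketch only -- addresses, ports, and directories are placeholders.

# 1. Stop the single node, then restart it in cluster mode with a join list:
cockroach start \
  --certs-dir=/var/lib/cockroach/certs \
  --store=/var/lib/cockroach/node1 \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --join=localhost:26257,localhost:26258,localhost:26259

# 2. Start the two new nodes with empty stores and the same join list:
cockroach start \
  --certs-dir=/var/lib/cockroach/certs \
  --store=/var/lib/cockroach/node2 \
  --listen-addr=localhost:26258 \
  --http-addr=localhost:8081 \
  --join=localhost:26257,localhost:26258,localhost:26259

cockroach start \
  --certs-dir=/var/lib/cockroach/certs \
  --store=/var/lib/cockroach/node3 \
  --listen-addr=localhost:26259 \
  --http-addr=localhost:8082 \
  --join=localhost:26257,localhost:26258,localhost:26259
```

Note that `start-single-node` lowers the replication factor for some zone configurations, so depending on your version you may also need to set `num_replicas` back to 3 — the doc page linked above covers that step.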

Ok, I’ve followed all the instructions to convert my single node to a cluster with 3 replicas, and created a systemd script to start each node:

Node1 => 26257 / 8080
Node2 => 26258 / 8081
Node3 => 26259 / 8082
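The unit file for each node looks roughly like this (a sketch of node 1’s unit — paths and the binary location are from my setup, and the other two nodes differ only in ports and store directory):

```ini
# /etc/systemd/system/cockroach-node1.service (sketch; paths are placeholders)
[Unit]
Description=CockroachDB node 1
After=network.target

[Service]
Type=notify
User=cockroach
ExecStart=/usr/local/bin/cockroach start \
  --certs-dir=/var/lib/cockroach/certs \
  --store=/var/lib/cockroach/node1 \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --join=localhost:26257,localhost:26258,localhost:26259
Restart=on-failure

[Install]
WantedBy=multi-user.target
```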

Looking at the DB Console (port 8080), all seems right, with all 53 ranges replicated, 0 under-replicated ranges, and 0 unavailable ranges.

After that, I connected using “cockroach sql” on port 26257 (which corresponds to the first node) and executed the following commands:

$ cockroach sql --certs-dir=/var/lib/cockroach/certs --host :26257
root@:26257/zips> use zipdb;
root@:26257/zips> select count(*) from locality;
  count
---------
  10734

In a second terminal, I stopped node 1 and tried to repeat the query in the first terminal, which caused this error:

root@:26257/zips> select count(*) from locality;
invalid syntax: statement ignored: unexpected error: driver: bad connection
warning: connection lost!
opening new connection: all session settings will be lost
warning: error retrieving the transaction status: dial tcp [::1]:26257: connect: connection refused
warning: connection lost!
opening new connection: all session settings will be lost
warning: error retrieving the database name: dial tcp [::1]:26257: connect: connection refused

What am I doing wrong? I suspect the problem is that I need a load balancer in front of my nodes, but I cannot find any reference to one in the “starting a cluster” doc page.

I’d appreciate your help.

When you run cockroach sql --certs-dir=/var/lib/cockroach/certs --host :26257, you explicitly ask the SQL shell to connect to node 1. If you shut down node 1, that connection is left dangling. You can open a connection to one of the live nodes or, as you say, set up a load balancer in front of the cluster.
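One convenient option: cockroach can generate an HAProxy configuration for the cluster with cockroach gen haproxy. A sketch (assuming HAProxy is installed; the certs dir and node address are from your setup above, and the load balancer port is a placeholder):

```shell
# Generate haproxy.cfg by asking any live node for the cluster topology:
cockroach gen haproxy \
  --certs-dir=/var/lib/cockroach/certs \
  --host=localhost:26257

# Start HAProxy with the generated config:
haproxy -f haproxy.cfg

# Then point the SQL shell at the load balancer's port instead of a
# specific node, so queries keep working when any one node goes down:
cockroach sql --certs-dir=/var/lib/cockroach/certs --host :<lb-port>
```

If HAProxy runs on the same host as one of the nodes, you may need to edit the bind port in the generated haproxy.cfg so it doesn’t collide with a node’s SQL port.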

See if this or this helps.