Secure Multi-Region Kubernetes Deployment Recommendations


(Steven Anderson) #1

Hi there,

I’m having trouble setting up a secure multi-region deployment, even after reading the post on the forum and on gitlab.

We’d like to go to production on GKE with the following setup:

1 node - us-west1
1 node - us-central1
1 node - us-east1
Each node communicates with the others through an A-record subdomain that routes to its reserved GCP IP address, which Google then routes into our GKE cluster, then into the service, and finally into the pod.
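For reference, reserving the regional IPs and sanity-checking the DNS looks roughly like this (the address name is a placeholder, and DOMAIN-1 is one of our subdomains):

# reserve a regional static IP for the us-west1 node and read it back
gcloud compute addresses create crdb-us-west1 --region=us-west1
gcloud compute addresses describe crdb-us-west1 --region=us-west1 --format='value(address)'
# confirm the subdomain's A record resolves to that IP
dig +short DOMAIN-1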

I believe we got pretty close, because the health endpoints on the domains were responding and the logs stopped showing errors about nodes being unable to reach each other. The closest I’ve gotten to a secure deployment was with the following setup:

  1. Reserve a static IP address for each node
  2. Set subdomains on our DNS provider to point to each node’s IP address
  3. Set up the GKE clusters in each region with the CockroachDB Kubernetes guide
  4. Create a load balancer for each pod that routes from the static IP to the node’s pod using the StatefulSet selector “statefulset.kubernetes.io/pod-name” (rough sketch after the start command below)
  5. Start each pod with the command:

exec /cockroach/cockroach start \
  --logtostderr \
  --certs-dir /cockroach/cockroach-certs \
  --advertise-host DOMAIN-1 \
  --host $(hostname -f) \
  --http-host 0.0.0.0 \
  --join DOMAIN-1,DOMAIN-2,DOMAIN-3 \
  --cache 25% --max-sql-memory 25%
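The per-pod load balancer from step 4 is roughly a Service like the following, applied once per pod (the name, IP, and pod label value are placeholders for our actual ones):

# rough sketch of the per-pod load balancer Service (one per pod)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-public-0
spec:
  type: LoadBalancer
  loadBalancerIP: <reserved static IP for node 0>
  selector:
    statefulset.kubernetes.io/pod-name: cockroachdb-0
  ports:
  - name: grpc
    port: 26257
    targetPort: 26257
  - name: http
    port: 8080
    targetPort: 8080
EOF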

The issue is that we cannot get the “cluster-init-secure” job to run; it fails to communicate with any of the nodes. We’ve tried running it both via GKE and via the cockroach binary on a local machine, with the CA and the root key and crt files, and the command just hangs. We can’t even connect to the DB from a local machine. The command in the yaml:

- "/cockroach/cockroach"
- "init"
- "--certs-dir=/cockroach-certs"
- "--host=DOMAIN_1"

And the errors:

E180513 11:15:32.699942 1 cli/error.go:109 unable to connect or connection lost.

Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).

initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
Error: unable to connect or connection lost.

Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).

initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
Failed running "init"

Questions we have:

  1. Given that the cluster spans three regions and each node runs in a different GKE region from the others, what are the appropriate “cockroach start” settings for “--advertise-host” and “--host”? This is the most confusing bit for us, as it seems that we need to set --host to “$(hostname -f)” or the nodes can’t communicate with the outside.
  2. What is the correct command for the “cockroach init” in our setup?
  3. If our setup is not the right approach, how should we go about creating a three-region GKE deployment?

Thanks in advance. I’ve spent days’ worth of cycles trying to get this to work, and any help would be appreciated.


(Alex Robinson) #2

Hi @Steve_Maestro,

Very nice work so far!

Your use of --advertise-host with the external domains is correct. Using the reserved static IPs would also work. The point of --advertise-host is to tell the other nodes which address they should use to reach this node, which in this case needs to be the externally-reachable IP/domain.

I would think you shouldn’t be setting --host at all. It’s rarely needed in conjunction with --advertise-host. At best, it’s having no effect. At worst, it’s causing incoming connections to fail because they arrive on an address other than the one the cockroach process is listening on. Why do you say that it needs to be set to $(hostname -f)?
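For example, a start command along these lines (your flags, minus --host, so the process listens on all interfaces) should be enough:

exec /cockroach/cockroach start \
  --logtostderr \
  --certs-dir /cockroach/cockroach-certs \
  --advertise-host DOMAIN-1 \
  --http-host 0.0.0.0 \
  --join DOMAIN-1,DOMAIN-2,DOMAIN-3 \
  --cache 25% --max-sql-memory 25%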

This is probably either a certificate issue or an issue caused by setting the --host flag. The cockroach init command you included in your post looks fine.

Your setup is a perfectly reasonable way to do it for someone who’s ok with doing some manual work and who doesn’t need the absolute highest performance (going through load balancers to each node will have a negative effect on that). And given that we don’t currently have an easier approach to offer instead, I wouldn’t suggest changing much.


My main question is what you’ve done for certificates. Did you create the certs locally and manually create secrets containing them in each kubernetes cluster? Did you set up the kubernetes clusters to all use the same CA cert/key?

I believe the default on GKE is for each kubernetes cluster to have a different CA cert/key. And because cockroach’s default config (using the cockroachdb/cockroach-k8s-request-cert container) asks the kubernetes master to sign each node’s certificate, setting up secure cockroach in 3 different kubernetes clusters means each cockroach node has its cert signed by a different CA, so the nodes won’t trust each other’s certs.
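One way around that is to generate a single CA locally and distribute certs signed by it to each cluster yourself. A rough sketch (context and secret names are placeholders):

# create one CA and a root client cert locally
mkdir -p certs my-safe-directory
cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
# create node certs covering each node's external domain, then load the certs
# directory into the matching kubernetes cluster as a secret, e.g.:
kubectl --context=gke-us-west1 create secret generic cockroachdb.node --from-file=certs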

This (getting the certs set up) is the toughest / most manual part of getting a secure multi-region cluster running right now. Our goal is for this to all be easier (or at least better documented) by around the end of June, most likely using something like Istio for cross-kubernetes-cluster naming and a manual process for generating/distributing certificates.


(Steven Anderson) #3

@a-robinson we decided to go the GCE route. I was incorrect in my first post: even though I got the nodes up, because they were in different regions their certs were signed by different CAs, so they couldn’t communicate with each other. By using GCE, we were able to leverage the cert commands to create a multi-region deployment. We found it much easier to maintain a cluster running directly on GCE.

Some notes:

  • There’s a bug we discovered with cockroachdb when we forcefully restarted a GCE instance. If the persistent disk doesn’t get mounted on restart, cockroach will initialize a fresh store in the location where the disk is supposed to be mounted. This broke our imports (I think the error we got was along the lines of “node 4 and 5 have the same name”; we’ve been looking for this error). The fix for this was, unfortunately, to create a fresh cluster and ensure that each node has a startup script that mounts the persistent disks on boot.

  • To use GCP’s TCP proxy load balancing we use port 5222, because cockroach’s default port (26257) isn’t one of the ports the TCP proxy supports, which we need for cross-region deployments (rough gcloud sketch below).
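The TCP proxy pieces are roughly the following gcloud commands (all names are placeholders, and this assumes the nodes are already in an instance group per region, e.g. crdb-us-west1-ig):

# health check + global backend service on port 5222
gcloud compute health-checks create tcp crdb-hc --port=5222
gcloud compute backend-services create crdb-bes --protocol=TCP --health-checks=crdb-hc --global
gcloud compute backend-services add-backend crdb-bes --instance-group=crdb-us-west1-ig \
  --instance-group-zone=us-west1-b --global
# TCP proxy + forwarding rule listening on 5222
gcloud compute target-tcp-proxies create crdb-proxy --backend-service=crdb-bes
gcloud compute forwarding-rules create crdb-fr --global --target-tcp-proxy=crdb-proxy --ports=5222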

@a-robinson here are our node startup steps; using them we can add a node to the cluster in less than 10 minutes.

  1. create a new instance from our cockroachdb instance template; the instance name format is <region>-<node-index>, e.g. us-central1-0 (we don’t care which zone the node is in)

  2. in the GCE instance creation window, add a blank SSD disk with the format <country-acronym>-<region>-<node-index>-ssh

  3. go into VPC Network > External IP Addresses, find the IP that was created for the instance and set it to static, our format is cockroachdb-node-us-<datacenter>-<node-index>

  4. SSH into the instance using an alias we add to our .bashrc: alias cockroach-west1-0='gcloud compute ssh cockroachdb-node-us-west1-0 --project our-project-name --zone us-west1-b -- -L 8080:localhost:8080' (this also lets us reach the admin UI when the server is up)

  5. on the instance, format and mount the disk (this is required for first-time setup):

# get disk name
sudo lsblk # outputs sdb

# format disk
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

# create ssd01 dir
sudo mkdir -p /mnt/disks/ssd01

# mount it to ssd01
sudo mount -o discard,defaults /dev/sdb /mnt/disks/ssd01
# set permissions
sudo chmod a+w /mnt/disks/ssd01;

# double check work
sudo lsblk
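A way to guard against the restart issue mentioned above is to also add the disk to /etc/fstab so it is remounted automatically on boot (sketch; assumes the disk is still /dev/sdb):

# remount the data disk automatically on reboot (nofail keeps boot from hanging if it's absent)
UUID=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=${UUID} /mnt/disks/ssd01 ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab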
  6. download and extract the cockroach binary:
wget -qO- https://binaries.cockroachdb.com/cockroach-v2.0.1.linux-amd64.tgz | tar xvz
  7. copy it onto the PATH
sudo cp -i cockroach-v2.0.1.linux-amd64/cockroach /usr/local/bin
  8. exit the server back to your local machine and go into the directory where the root key and CA live (./key for us)
mkdir west0
cp ./key/ca.crt west0/
cockroach cert create-node \
<internal gce ip> \
<external gce ip> \
<internal group instance ip> \
<external group instance ip> \
<hostname to instance set on our cloudflare> \
localhost \
127.0.0.1 \
<load balancer ip> \
<load balancer domain> \
--certs-dir=<location of certs, in this case ./west0> \
--ca-key=<location of key, ./key/ for us>
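A quick sanity check that the node cert covers all of the addresses above:

cockroach cert list --certs-dir=west0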
  9. upload the certs dir
ssh <username>@<machine_ip>  "mkdir certs"
  10. upload the keys
scp west0/ca.crt \
west0/node.crt \
west0/node.key \
<username>@<machine_ip>:~/certs
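cockroach checks the permissions on key files, so it’s worth tightening the node’s certs dir after the copy, e.g.:

ssh <username>@<machine_ip> "chmod 700 certs && chmod 600 certs/*.key"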
  11. SSH back into the server and create the cockroach service
sudo vim /etc/systemd/system/cockroach.service
# contents of cockroach.service
[Unit]
Description=Cockroach Database cluster node
Requires=network.target

[Service]
Type=simple
LimitNOFILE=35000
WorkingDirectory=/home/<username>
ExecStartPre=/bin/sleep 30
ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-host=us-west-0.<our domain> --locality=continent=NA,country=USA,region=us-west,datacenter=us-west1-b --store=path=/mnt/disks/ssd01 --cache=25%% --port=5222 --max-sql-memory=25%% --join=us-west-0.<our domain>:5222,us-central-0.<our domain>:5222,us-east-0.<our domain>:5222
ExecStop=/usr/local/bin/cockroach quit --certs-dir=certs --host=us-west-0.<our domain>
Restart=always
RestartSec=10
RestartPreventExitStatus=0
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroach
User=SJAnderson

[Install]
WantedBy=default.target
  12. start the service
sudo systemctl enable cockroach
sudo systemctl start cockroach
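To confirm the node actually came up (and to watch it join the cluster):

systemctl status cockroach
sudo journalctl -u cockroach -f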
  13. if this is the first time the nodes are being joined together, exit to your local machine where the creds are and init the cluster
cockroach init --certs-dir=root/ --host=us-east-0.<our domain> --port=5222
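Once init has run, a quick check from the local machine that all nodes joined (same certs and port as above):

cockroach node status --certs-dir=root/ --host=us-east-0.<our domain> --port=5222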

Other notes:

  1. We needed to route data from Apache Beam via Google Dataflow into cockroachdb, and we were able to use JdbcIO. For anyone looking for instructions on how to connect to your cockroachdb cluster: convert your key to a .pk8 and your crt to a .der (conversion sketch after the pom snippet below) and drop the files in your classpath.
# import this into your pom
		<dependency>
			<groupId>org.postgresql</groupId>
			<artifactId>postgresql</artifactId>
			<version>42.2.2</version>
		</dependency>
		<dependency>
			<groupId>org.apache.beam</groupId>
			<artifactId>beam-sdks-java-io-jdbc</artifactId>
			<version>2.4.0</version>
		</dependency>
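The conversion mentioned above can be done with openssl along these lines (filenames match the connection config below but are otherwise placeholders):

openssl pkcs8 -topk8 -inform PEM -outform DER -in client.username.key -out client.username.pk8 -nocrypt
openssl x509 -in client.username.crt -out client.username.der -outform DER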

Our connection config that worked. Again, remember to place your files in your classpath. We added Files/ to our .gitignore for security (I know this isn’t a great setup, but Google KMS didn’t work for us in Dataflow).

private static final JdbcIO.DataSourceConfiguration connectionConfig = JdbcIO.DataSourceConfiguration.create(
    "org.postgresql.Driver",
    "jdbc:postgresql://<domain>:5222/<database>?"
    + "sslmode=require&ssl=true"
    + "&sslcert=/Files/client.username.der"
    + "&sslkey=/Files/client.username.pk8"
  ).withUsername("<username>");
  2. We had to export 48 million rows from BigQuery into cockroachdb (we had to move PII data into cockroachdb to meet our client’s GDPR requirements). We realized that if we do the work of building our CSV file on a GCE instance, then we can leverage the power of our instances (n1-standard-4) plus the very fast network between GCE and GCS to quickly produce a clean CSV file for import.

NOTE: Make sure one of your cockroach instances has FULL google cloud storage privileges.

a. In bigquery, create your table and save it to a dataset.
b. Click the table, then click “Export Table” and put it in your google storage bucket. Most likely it will create a bunch of files.
c. SSH into your cockroach instance
cd /mnt/disks/ssd01/
mkdir exports
d. Copy the exported files from your bucket into your directory

gsutil -m cp gs://<bucket>/<pattern>.csv .

e. create a bash script to combine all your files

vim ./combine.sh
#!/bin/bash
# this script combines all the exports into one csv that cockroachdb can read
# (we skip the first line of each file: CSVs for cockroach do not need header lines)
OutFileName="output.csv"                      # fixed output name
i=0                                           # reset a counter
for filename in ./<your file pattern>*.csv; do
  if [ "$filename" != "$OutFileName" ]; then  # avoid appending the output file to itself
    echo "appending $filename"
    tail -n +2 "$filename" >> "$OutFileName"  # append from the 2nd line of each file
    i=$(( i + 1 ))                            # increment the counter
    echo "finished appending $filename"
  fi
done

f. run the script

bash ./combine.sh

g. export the file back to GCS

sudo gsutil -m cp ./output.csv gs://<bucket>/output.csv

h. exit to your local machine and upload your CREATE TABLE <tablename> () statement file to GCS

-- example file
CREATE TABLE my_table (
  event_id STRING NOT NULL,
  geo_city STRING,
  time TIMESTAMP,
  session_id STRING,
  site_id STRING NOT NULL,
  user_id STRING,
  PRIMARY KEY (site_id, session_id, event_id),
  INDEX user_id_idx (site_id, user_id),
  INDEX user_uid_idx (site_id, user_uid)
);

i. on your local machine, create ./import-command.sh (e.g. vim ./import-command.sh):

cockroach sql --execute=" \
  IMPORT TABLE <table_name> \
  CREATE USING 'gs://<bucket>/<path_to_step_h_file>.sql' \
  CSV DATA ('gs://<bucket>/output.csv') \
  WITH nullif = '';" \
--certs-dir=<certs_dir> --database=<db_name> --host=<db_host> --port=5222

j. run the import script

bash ./import-command.sh

k. use the alias from the earlier step to create an SSH tunnel, then click on Jobs in the admin UI to confirm your import job is running.