Accessing Admin UI on Docker Swarm

I have followed the steps here and now have a functioning Swarm:
https://www.cockroachlabs.com/docs/orchestrate-cockroachdb-with-docker-swarm.html
I exposed port 8080 on the swarm but the Admin UI is not functional. The browser logs are filled with this sort of error:

connection error: desc = "transport: x509: certificate is valid for cockroachdb-1, localhost, not 8615fec8cdc4"

In the swarm, ‘cockroachdb-1’ is the host name that returns the virtual IP for the service. ‘8615fec8cdc4’ is the hostname for the container where the one instance of the service is running.
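For what it's worth, you can check exactly which names the node cert was issued for with something along these lines (assuming openssl is available and the cert lives at certs/node.crt; adjust the path for your setup):

# list the Subject Alternative Names the node certificate covers
openssl x509 -in certs/node.crt -noout -text | grep -A1 "Subject Alternative Name"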

Thanks for the report, @smcdowell! Could you share a screenshot of what you mean by “not functional” and give a little more detail on what endpoint you’re accessing the UI at?

I access it at https://a-swarm-node:8080
Here is the UI – you can see it renders but none of the Ajax calls in the background succeed.

Services running in Swarm will never have predictable hostnames.
Maybe CockroachDB needs to pay attention to the --advertise-host parameter passed on startup instead? (In this case that's the service name, cockroachdb-1.)

Thanks for the extra detail, and I’m sorry for the delay over the weekend!

This is something that I think we’re going to have to fix internally – it’s an issue with cockroachdb code grabbing the hostname and using it for some internal requests.

In the meantime, the recommendation from that issue (putting every name a node might be accessed under into the cert) is tough to follow without dynamic certificate signing. However, you could work around it by pinning the hostname of each container. Are you able to use the --hostname flag documented here to fix the UI?
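Roughly, that would look something like this when creating each node's service (an illustrative sketch adapted from the tutorial, not exact commands, and it assumes a Docker version new enough to support --hostname on services):

# give the container a fixed hostname that matches the name in its node cert
docker service create \
  --name cockroachdb-1 \
  --hostname cockroachdb-1 \
  --network cockroachdb \
  cockroachdb/cockroach start \
  --certs-dir=/cockroach/certs \
  --advertise-host=cockroachdb-1 \
  --join=cockroachdb-0:26257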

Yes, using the hostname option on the service works! I wasn't aware of this option. From what I can tell, each replica of the service calls itself "[hostname]", but to the outside world that name is still round-robin load balanced. Further, you can still reference a container by its ID via a network alias that Docker configures.
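A quick way to see that, roughly (the container ID is a placeholder):

# hostname each replica sees for itself
docker inspect --format '{{ .Config.Hostname }}' <container-id>
# hostname configured on the service spec
docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Hostname }}' cockroachdb-1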

By the way, based on your orchestration tutorial I've put together a script that generates all of the secrets needed to spin up a cluster, plus a .yml file that creates the cluster using docker stack …
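The relevant part of the stack file looks roughly like this (heavily trimmed; names and paths are illustrative rather than the exact file, with one block like this per node):

version: "3.1"
services:
  cockroachdb-1:
    image: cockroachdb/cockroach
    # matches the name the node certificate was issued for
    hostname: cockroachdb-1
    command: start --certs-dir=/cockroach/certs --advertise-host=cockroachdb-1 --join=cockroachdb-0:26257
    networks:
      - cockroachdb
networks:
  cockroachdb: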


Thanks for testing that out, @smcdowell! We’ll get our docs updated soon to include that option.

If you’re up for sharing your script and yaml file, I’m sure others could get some use out of them. Would you be up for sending a PR to add them to a new docker directory under https://github.com/cockroachdb/cockroach/tree/master/cloud?

I'm having the same issue with the Kubernetes secure setup. I'm not sure what the right approach is, or what the equivalent of --hostname would be in the k8s yaml config.

Any help on this would be greatly appreciated!

Oof, it looks like I broke this last week. I’ll change it back in the secure config, but if you want to fix it sooner you can replace the --advertise-host flag in the configuration with the --host flag.
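Roughly, the change in the start command inside the StatefulSet config is (an illustrative snippet, not the full command):

# before
cockroach start --certs-dir=/cockroach/cockroach-certs --advertise-host=$(hostname -f) --join=...
# after
cockroach start --certs-dir=/cockroach/cockroach-certs --host=$(hostname -f) --join=...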

This’ll go back to working once our 2.0 release is out in March due to the fix to https://github.com/cockroachdb/cockroach/issues/10374, but it’s not in the current release.

Thank you for reporting your problem, @varick!

I’ve updated the main repo’s secure-mode config file: https://github.com/cockroachdb/cockroach/pull/21375.

Thank you for the quick reply! Hmmm, I made that change, but now I'm getting the error below. Any thoughts?

kubectl port-forward cockroachdb-0 8080

Unable to listen on port 8080: All listeners failed to create with the following errors: Unable to create listener: Error listen tcp4 127.0.0.1:8080: bind: address already in use, Unable to create listener: Error listen tcp6: address [[::1]]:8080: missing port in address

error: Unable to listen on any of the requested ports: [{8080 8080}]

That just means there’s some other process running on your machine already listening on port 8080. You need to kill it before trying to port-forward onto your local port 8080.
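Something like this should track it down (assuming lsof is available; the PID is whatever it reports):

# find what's already listening on 8080, then stop it and retry the port-forward
lsof -iTCP:8080 -sTCP:LISTEN
kill <pid>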

Geeze, I can't believe I didn't catch that; I kept looking on the k8s side.
Thank you for your help, everything is working now!