I’m also trying this approach, but I’ve run into problems that I haven’t been able to resolve…
- 3 gossip-based Kubernetes clusters in different regions, created via kops
- Port 26257 is open in the k8s nodes’ security group
- I don’t have Federated Kubernetes set up, because the v1 image is gone and v2 is not ready
- VPC peering is set up so that node-to-node connections over internal IPs work
- Services are accessible via external IPs (AWS ELB)
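As a quick sanity check for the peering and security-group setup above, you can probe the CockroachDB port from a node in one cluster against a node in another (the internal IP here is a placeholder, not from my setup):

```shell
# Sketch: from a node in cluster A, check that a node in cluster B
# accepts TCP on 26257 over the peered VPC.
# 10.0.1.5 is a placeholder internal node IP.
# -z: scan only, don't send data; -w 5: five-second timeout.
nc -z -w 5 10.0.1.5 26257 && echo reachable || echo blocked
```

If this prints `blocked`, the problem is in the VPC peering routes or the security group rather than in CockroachDB itself.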
After some tweaking of setup.py, I still get
“Readiness probe failed” errors, and inside the log I found:
“Secure node-node and SQL connections are likely to fail.”
What I tweaked:
First of all, I set the annotation
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
in dns-lb.yaml so that static IPs are returned.
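For reference, the tweak looks roughly like this sketch; the service name, namespace, and selector are assumptions based on the standard multi-region dns-lb.yaml, not copied from my files:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-lb
  namespace: kube-system
  annotations:
    # NLB gives stable per-AZ IPs; a classic ELB's IPs can change over time.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - name: dns
      port: 53
      protocol: UDP
  selector:
    k8s-app: kube-dns
```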
external_ip is defined at line 120. I first get the external IP of the kube-dns load balancer, which is an AWS ELB domain name; I use dig to wait until it returns IPs, then use those IPs to prepare the kube-dns ConfigMap entries for the other clusters…
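The dig wait step might look like this sketch; `LB_HOSTNAME` is a placeholder for the load balancer’s DNS name, not something from the actual setup.py:

```shell
#!/bin/sh
# Sketch: wait until the kube-dns load balancer hostname resolves to
# concrete IPv4 addresses, so they can be put into the other clusters'
# kube-dns ConfigMaps.
resolve_ips() {
  # dig +short prints CNAMEs and A records; keep only IPv4 addresses.
  dig +short "$1" | grep -E '^[0-9]+(\.[0-9]+){3}$'
}
```

Usage would be something like: `until ips=$(resolve_ips "$LB_HOSTNAME"); do sleep 5; done` — the loop exits once grep finds at least one A record.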
It seems fine… However, I’m still not able to connect across clusters to pod IPs (100.x.x.x). Is there a way to do that? Do I need Federated Kubernetes to achieve it?
Also… instead of a 3-replica stateful service, is it possible to create 3 separate services, each with its own service IP, and not run
cockroach start until everything is up and we have all the external service addresses to put in “join”? (Or just capture the main one and have the others join that address.)
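If I went that route, the delayed start might look like this sketch; the `EXTERNAL_ADDR_*` variables are placeholders for the three services’ external addresses, not names from any real manifest:

```shell
# Sketch: hold off on cockroach start until all three external service
# addresses are known, then pass every address to --join.
# EXTERNAL_ADDR_1..3 are hypothetical placeholders.
cockroach start \
  --certs-dir=certs \
  --advertise-addr="$EXTERNAL_ADDR_1" \
  --join="$EXTERNAL_ADDR_1,$EXTERNAL_ADDR_2,$EXTERNAL_ADDR_3"
```

Listing all three addresses in --join, rather than having everyone join only the “main” one, avoids making that first node a single point of failure during bootstrap.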
I think in theory this should address my issue… but I’m curious whether there is a better way to achieve it…