Pod name resolution across multiple Kubernetes clusters on AWS

aws

(Andrei) #1

I’m trying to install CockroachDB across two Kubernetes clusters on AWS. The clusters are connected using VPC Peering, so pod-to-pod connectivity is guaranteed.
I’m facing a problem with exposing the DNS server to enable pod name resolution between clusters, as described in https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/multiregion#exposing-dns-servers-to-the-internet
The load balancer definition provided in the GitHub project (https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/multiregion/dns-lb.yaml) defines a UDP LoadBalancer, but AWS does not support UDP load balancers, so configuring the stubDomains is not possible.
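For reference, the service in that file is essentially a LoadBalancer in front of kube-dns, roughly like this (trimmed to the relevant fields; see the repo for the authoritative version):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP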

Are there alternative mechanisms for enabling cross-cluster pod name resolution on AWS?
Thanks!


(Tim O'Brien) #2

Hi @ak-icc,

You should be able to add the following to dns-lb.yaml, or use it to replace the UDP configuration:

  - name: dns
    port: 53
    protocol: TCP
    targetPort: 53

The docs on kube-dns are a bit thin, but as far as I can see that should be all that’s necessary to switch the protocol from UDP to TCP (or to add a TCP port alongside the existing UDP one).
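Applied to dns-lb.yaml, the whole service would look something like this (a minimal sketch; everything other than the protocol and the port name stays as in the original manifest):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: TCP
    targetPort: 53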

Let me know if that doesn’t work for you.


(Andrei) #3

Hi @tim-o,

I’ve tried it with TCP; the created LoadBalancer resource looks like this:

kind: Service
....
spec:
....
  ports:
  - name: dns
    nodePort: 31166
    port: 53
    protocol: TCP
    targetPort: 53
...
status:
  loadBalancer:
    ingress:
    - hostname: internal-ad2a0449824aa11e9b54f02f5a217943-1440569433.eu-central-1.elb.amazonaws.com

The setup.py script expects the LoadBalancer to have an IP address:

external_ip = check_output(['kubectl', 'get', 'svc', 'kube-dns-lb', '--namespace', 'kube-system', '--context', context, '--template', '{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}'])

In my case AWS provides only the hostname, so the script hangs in the wait loop.
I will try to extend the setup script to resolve the IP for the stubDomains from the hostname.
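Something along these lines might work (a rough sketch; switching the Go template from .ip to .hostname and resolving it with socket.gethostbyname are my assumptions, and context comes from the surrounding script):

import socket
from subprocess import check_output

# AWS exposes the load balancer by hostname, so ask the template
# for .hostname instead of .ip.
hostname = check_output(
    ['kubectl', 'get', 'svc', 'kube-dns-lb',
     '--namespace', 'kube-system', '--context', context,
     '--template',
     '{{range .status.loadBalancer.ingress}}{{.hostname}}{{end}}'])

# Resolve the hostname to one of its current IPs for the stubDomains config.
external_ip = socket.gethostbyname(hostname.decode().strip())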


(Andrei) #4

Hi @tim-o,

after reading the AWS documentation, I’m not sure the approach of cross-connecting the DNS servers in both clusters via the load balancers’ IPs will work reliably. Elastic Load Balancers on AWS can change their IPs (which is why kubectl outputs a hostname like internal-ad2a0449824aa11e9b54f02f5a217943-1440569433.eu-central-1.elb.amazonaws.com for the LoadBalancer rather than an IP).
So the IPs we configure as stubDomains during deployment can change after some time, and pod name resolution would stop working. Or am I wrong?


(Jesse) #5

Hi @ak-icc,

Thanks for continuing to investigate and dig in here. You’re right that the current configuration and docs require a stable public IP address for load balancing. That’s the approach we took for the documentation and our testing, which focused on GKE. Unfortunately, we just don’t have precise insight into getting this working on AWS at the moment. I’ve put this in our backlog to investigate and document: https://github.com/cockroachdb/docs/issues/4314.

In the meantime, I’d suggest looking into ways to update the configuration to use a load balancer hostname that is resolvable and routable from all the clusters.
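Since kube-dns stubDomains entries take nameserver IPs rather than hostnames, one possible (untested) direction is a small job that periodically re-resolves each cluster’s ELB hostname and patches the kube-dns ConfigMap with the fresh IPs. A rough sketch, where the zone name and the hostname are placeholders:

import json
import socket
from subprocess import check_call

# Placeholder mapping from each remote cluster's DNS zone to its ELB hostname.
STUB_ZONES = {
    'us-east-1.svc.cluster.local':
        'internal-example-1440569433.eu-central-1.elb.amazonaws.com',
}

# Re-resolve each hostname and rebuild the stubDomains JSON.
stub_domains = {zone: [socket.gethostbyname(host)]
                for zone, host in STUB_ZONES.items()}

# Patch the kube-dns ConfigMap with the current IPs.
check_call(['kubectl', 'patch', 'configmap', 'kube-dns',
            '--namespace', 'kube-system',
            '--patch', json.dumps({'data': {'stubDomains': json.dumps(stub_domains)}})])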

Best,
Jesse