All Query Requests Go to a Single Node


I am new to CockroachDB and have created a three-node local cluster on different ports. I am using a Java application to insert into and read from the database, with Hibernate set up as per the tutorial.

Hibernate configuration:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration SYSTEM
		"http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
	<session-factory>
		<property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
		<property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
		<property name="hibernate.connection.url"><![CDATA[jdbc:postgresql://localhost:26257/testing?sslmode=disable]]></property>
		<property name="hibernate.connection.username">root</property>
		<property name="hibernate.connection.password"></property>
		<property name="show_sql">true</property>
		<mapping class="com.ksh.hbr.Person"/>
		<mapping class="com.ksh.hbr.Address"/>
		<mapping class="com.ksh.hbr.PersonAddress"/>
	</session-factory>
</hibernate-configuration>

Now my question: when I check the Overview dashboard, all query requests go to a single node, and if that node goes down, the whole cluster is down for my application. Whatever IP address I use in the Hibernate connection URL, all requests go to that node; the other nodes have data but are not servicing any requests. I am not understanding the meaning of "cluster" here. In Cassandra you connect to the cluster, not to a particular IP address, so the three nodes work equally. Here in CockroachDB it feels like only one node is working and the other two are just storage. Is CockroachDB designed like this, or is it a known issue? My application has 20 threads running simultaneously, reading from and inserting into the database.

Please reply as soon as possible.

Hi @kshitij_23,

To make use of all nodes, the two most commonly missed ingredients are:

  1. Set up load balancing for your clients. If each client only talks to a single node, the client can't operate when that node goes down. Some client drivers accept multiple SQL endpoints in the connection URL, or you can put a TCP load balancer in front of the nodes.
  2. For small tables, there may be only a single range, so load does not distribute evenly across the cluster. You can address this by splitting the range manually (see `ALTER TABLE ... SPLIT AT` in the docs). Note that if this is the only problem and a node goes down, the other two nodes should still service traffic (after ~10s of failover) as long as your clients are set up correctly.
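
For point 1, one option with the PostgreSQL JDBC driver (which Hibernate is using here) is to list every node in the connection URL and enable client-side load balancing via pgjdbc's `loadBalanceHosts` parameter. A sketch of the changed Hibernate property, assuming the three local nodes listen on ports 26257, 26258, and 26259 (adjust to your cluster):

```xml
<!-- Assumed ports for the three local nodes; loadBalanceHosts=true asks
     pgjdbc to pick a host at random for each new connection, and to skip
     hosts that are unreachable. -->
<property name="hibernate.connection.url"><![CDATA[jdbc:postgresql://localhost:26257,localhost:26258,localhost:26259/testing?sslmode=disable&loadBalanceHosts=true]]></property>
```

With this in place, new connections spread across all three nodes, and if one node is down the driver moves on to the next host in the list.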
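
Alternatively, a TCP load balancer in front of the nodes gives the same effect without changing the application; CockroachDB can even generate a starting config for you with `cockroach gen haproxy`. A minimal HAProxy sketch, again assuming local nodes on ports 26257-26259:

```
# Sketch: balance SQL connections round-robin across three local nodes.
# Clients (the Hibernate URL) connect to port 26000 instead of a node.
listen psql
    bind :26000
    mode tcp
    balance roundrobin
    server cockroach1 localhost:26257 check
    server cockroach2 localhost:26258 check
    server cockroach3 localhost:26259 check
```

The Hibernate connection URL would then point at `localhost:26000`, and HAProxy's health checks stop routing to a node that goes down.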