@ashwinmurthy 64 nodes is not a limit on the cluster size; it’s simply the largest cluster size we’re committing to test and keep running smoothly. Larger clusters should work, and as @dianasaur323 mentioned, we’ve tested a 100 node cluster before. It is possible there will be a few implementation hiccups as the cluster size increases, but I don’t foresee any significant problems until we get to cluster sizes closer to 1000 nodes.
Performance of small KV operations (values smaller than 100 bytes) is dominated by the CPU. We need to do more extensive testing and characterization here, but some recent testing showed 10k inserts/sec on a single 32-CPU machine.
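To make that kind of "small KV operation" workload concrete, here's a minimal sketch of an insert benchmark in Go. This is illustrative only, not the setup we actually ran: the `kv` table, the `bench` database, the connection string, and the worker/iteration counts are all assumptions.

```go
package main

import (
	"crypto/rand"
	"database/sql"
	"fmt"
	"log"
	"sync"
	"time"

	_ "github.com/lib/pq" // CockroachDB speaks the PostgreSQL wire protocol
)

const (
	workers   = 32   // roughly one writer per CPU
	perWorker = 1000 // inserts issued by each writer
)

func main() {
	// Assumes a local insecure cluster with a `bench` database already created.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bench?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (
		k UUID DEFAULT gen_random_uuid() PRIMARY KEY,
		v BYTES
	)`); err != nil {
		log.Fatal(err)
	}

	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// ~100-byte values: the "small KV operation" case discussed above.
			val := make([]byte, 100)
			if _, err := rand.Read(val); err != nil {
				log.Println(err)
				return
			}
			for i := 0; i < perWorker; i++ {
				if _, err := db.Exec(`INSERT INTO kv (v) VALUES ($1)`, val); err != nil {
					log.Println(err)
				}
			}
		}()
	}
	wg.Wait()

	total := workers * perWorker
	elapsed := time.Since(start)
	fmt.Printf("%d inserts in %s (%.0f inserts/sec)\n", total, elapsed, float64(total)/elapsed.Seconds())
}
```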
The choice between distributed transactions and single-row updates does not affect scalability. Single-row updates will be faster than distributed transactions, but in either case, assuming your data is sufficiently sharded at the application level, CockroachDB will scale.
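As a rough illustration of the distinction (not our benchmark code), the sketch below runs a single-row update and then a multi-row transaction against a hypothetical `accounts` table in a hypothetical `bank` database. Both scale; the single-row form is just cheaper per operation.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Single-row update: the cheaper of the two, since it touches one key.
	if _, err := db.Exec(`UPDATE accounts SET balance = balance - 10 WHERE id = $1`, 1); err != nil {
		log.Fatal(err)
	}

	// Multi-row transaction: may span ranges (and therefore nodes), so it is a
	// distributed transaction and costs more per operation, but it still scales
	// as long as concurrent transactions mostly touch disjoint rows.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(`UPDATE accounts SET balance = balance - 10 WHERE id = $1`, 1); err != nil {
		tx.Rollback()
		log.Fatal(err)
	}
	if _, err := tx.Exec(`UPDATE accounts SET balance = balance + 10 WHERE id = $1`, 2); err != nil {
		tx.Rollback()
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```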
Performance and scalability are ongoing efforts. I’m not sure what I would characterize as the main bottleneck today; perhaps the scalability of the application workload. Specifically, CockroachDB currently works best if the application workload has low contention and some natural partitioning of the data (e.g., by user, customer, or account). Improving high-contention scenarios is on our roadmap.
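To show what I mean by "natural partitioning", here's a hedged sketch of a schema keyed by customer; the `orders` table, the `app` database, and the column names are hypothetical. Because rows are stored in primary key order, each customer's transactions stay on a small, mostly disjoint span of keys, which keeps contention low.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Keying the table by customer_id means a given customer's rows sit on a
	// contiguous, mostly dedicated span of the keyspace.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS orders (
		customer_id INT,
		order_id    UUID DEFAULT gen_random_uuid(),
		total       DECIMAL,
		PRIMARY KEY (customer_id, order_id)
	)`); err != nil {
		log.Fatal(err)
	}

	// Each statement touches only one customer's keys, so concurrent
	// transactions for different customers rarely contend with each other.
	if _, err := db.Exec(
		`INSERT INTO orders (customer_id, total) VALUES ($1, $2)`, 42, 19.99); err != nil {
		log.Fatal(err)
	}
}
```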