Scaling: use one huge DB table, or several?

Scaling:
CockroachDB nodes: 9, possibly more (anywhere from 7 to 19).

Emphasis on: insert performance.

Q: Use one huge DB table, split the data across several tables, or use a partitioned table?

Table:
Hundreds of millions of rows.
Each row has an ID field and a small JSON document in one column.

Some tests are being run.

Are there other things to take into account?
Would reads be OK in both cases, i.e. with one huge table or with X tables?

Hey @roachman,

Could you tell us a bit more about your workload? Perhaps you could share a sample DDL?

What are your reads doing? Do you have a UUID or INT as a PK?

We would definitely need to know more about your overall workload before we can make any recommendations.

Thanks,

Ron

I am trying to reach the best possible insert throughput: inserts as fast as possible, at the maximum possible rate.

A bigint is used as the ID, and it is the PK.
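For reference, a minimal sketch of what such a DDL could look like; the table and column names here are placeholders I picked for illustration, not the actual schema:

```sql
-- Hypothetical schema: bigint PK plus one small JSON document per row.
-- The table name "events" and column name "data" are assumptions.
CREATE TABLE IF NOT EXISTS events (
    id   BIGINT PRIMARY KEY,
    data JSONB
);
```

For the multi-table variant, the same schema would simply be repeated as events_0, events_1, and so on.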

I have seen a throughput difference with my setup: roughly a third faster (or even more) with a few tables compared to a single table. The tables are totally independent.

Each JSON entry holds 14-17 key-value pairs, so it is quite small.

The emphasis is on scaling: do insertions scale better with two or more tables than with one huge table?

Multi-row batch inserts are used (in Go). A sketch of that approach is below.
Language: Go or Java (either one).
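To make the setup concrete, here is a minimal sketch combining multi-row batch inserts with a round-robin split over several independent tables. It assumes identical copies of the hypothetical events schema above (events_0 through events_3), a local connection string, and the lib/pq driver; none of this reflects the actual test harness.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"strings"

	_ "github.com/lib/pq" // CockroachDB speaks the PostgreSQL wire protocol
)

const (
	numTables = 4   // assumed number of independent tables (events_0..events_3)
	batchSize = 128 // assumed rows per multi-row INSERT
)

// insertBatch issues one multi-row INSERT into the given table,
// building "INSERT ... VALUES ($1,$2), ($3,$4), ..." dynamically.
func insertBatch(db *sql.DB, table string, ids []int64, docs []string) error {
	var sb strings.Builder
	fmt.Fprintf(&sb, "INSERT INTO %s (id, data) VALUES ", table)
	args := make([]interface{}, 0, 2*len(ids))
	for i := range ids {
		if i > 0 {
			sb.WriteString(", ")
		}
		fmt.Fprintf(&sb, "($%d, $%d)", 2*i+1, 2*i+2)
		args = append(args, ids[i], docs[i])
	}
	_, err := db.Exec(sb.String(), args...)
	return err
}

func main() {
	// Assumed local connection string; adjust for the real cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Round-robin whole batches across the independent tables.
	for batch := 0; batch < 8; batch++ {
		table := fmt.Sprintf("events_%d", batch%numTables)
		ids := make([]int64, batchSize)
		docs := make([]string, batchSize)
		for i := range ids {
			ids[i] = int64(batch*batchSize + i)
			docs[i] = fmt.Sprintf(`{"seq": %d}`, i) // stand-in for the real JSON payload
		}
		if err := insertBatch(db, table, ids, docs); err != nil {
			log.Fatal(err)
		}
	}
}
```

One hedged observation on the design: each table in CockroachDB occupies its own key ranges, so with sequentially increasing bigint IDs each table has its own "hot tail" range, and fully independent tables can spread that write load across more ranges. That is one plausible explanation for the speedup described above, though the real answer depends on the actual schema and range distribution.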