CRDB 1.0 performance of batch-inserting one million records

Hello,

We are trying to insert one million records into a table on a single CRDB node, using the Java PostgreSQL JDBC driver, committing after every 20 rows.
Example table: CREATE TABLE user(id SERIAL, name STRING(255), email STRING(255), url STRING(255), photo STRING(255))
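
For reference, a minimal sketch of what such a load loop might look like with the PostgreSQL JDBC driver, assuming single-row INSERTs and a commit after every 20 rows; the connection URL, credentials, database name, and column values are placeholders, since the actual code isn't shown above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SingleRowInsertLoop {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; 26257 is CockroachDB's default SQL port.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:26257/test?sslmode=disable", "root", "")) {
            conn.setAutoCommit(false);
            // "user" is quoted in case it is treated as a reserved word.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO \"user\" (name, email, url, photo) VALUES (?, ?, ?, ?)")) {
                for (int i = 1; i <= 1_000_000; i++) {
                    ps.setString(1, "name" + i);
                    ps.setString(2, "user" + i + "@example.com");
                    ps.setString(3, "http://example.com/u/" + i);
                    ps.setString(4, "photo" + i + ".jpg");
                    ps.executeUpdate();        // one round trip per row
                    if (i % 20 == 0) {
                        conn.commit();         // commit after every 20 rows
                    }
                }
                conn.commit();                 // commit any remaining rows
            }
        }
    }
}
```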

My Linux node:
CentOS 7.2.2 Linux VM
Intel Xeon E5-2680
8GB RAM
HDD (not SSD), XFS

Inserting started at ~100 QPS with a P99 service latency of ~500 ms.
After the first 250 thousand rows were loaded, a range split occurred and performance degraded to
10-20 QPS with a P99 service latency of ~7 seconds.

Part of the log:

What are we missing? How can we tune this performance?

Do you have indexes in place? If possible, remove them during the insert. If this is a one-time migration effort, that should be no problem and will normally improve insert performance a lot.
Also: try array inserts in batches of about 500, for example as in the sketch below. That is a lot quicker than row-by-row inserts.
These tips work in any RDBMS.
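
One way to approximate this from JDBC is statement batching combined with the PostgreSQL driver's reWriteBatchedInserts option, which rewrites a batch of single-row INSERTs into multi-row statements on the wire. A sketch under those assumptions; the connection URL, credentials, and batch size of 500 are placeholders, and whether CockroachDB 1.0 benefits from this option is something to verify:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInsertLoop {
    private static final int BATCH_SIZE = 500;

    public static void main(String[] args) throws Exception {
        // reWriteBatchedInserts asks the PostgreSQL JDBC driver to rewrite batched
        // single-row INSERTs into multi-row INSERT statements.
        String url = "jdbc:postgresql://localhost:26257/test"
                + "?sslmode=disable&reWriteBatchedInserts=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO \"user\" (name, email, url, photo) VALUES (?, ?, ?, ?)")) {
                for (int i = 1; i <= 1_000_000; i++) {
                    ps.setString(1, "name" + i);
                    ps.setString(2, "user" + i + "@example.com");
                    ps.setString(3, "http://example.com/u/" + i);
                    ps.setString(4, "photo" + i + ".jpg");
                    ps.addBatch();
                    if (i % BATCH_SIZE == 0) {
                        ps.executeBatch();     // flush ~500 rows at once
                        conn.commit();
                    }
                }
                ps.executeBatch();             // flush any final partial batch
                conn.commit();
            }
        }
    }
}
```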

How are you performing the inserts? As 20 separate INSERT statements, one INSERT of 20 rows, or as a JDBC batch of insert statements? A single insert of 20 (or more) rows will be the fastest; 20 separate inserts will be much slower.
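
To make the multi-row option concrete, here is a sketch that builds one INSERT with 20 placeholder groups and executes it per batch, so all 20 rows travel in a single statement; the table name, connection details, and generated values are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MultiRowInsert {
    private static final int ROWS_PER_INSERT = 20;   // could also be larger, e.g. 100-500

    public static void main(String[] args) throws Exception {
        // Build "INSERT ... VALUES (?,?,?,?), (?,?,?,?), ..." with one placeholder
        // group per row.
        StringBuilder sql = new StringBuilder(
                "INSERT INTO \"user\" (name, email, url, photo) VALUES ");
        for (int r = 0; r < ROWS_PER_INSERT; r++) {
            sql.append(r == 0 ? "(?, ?, ?, ?)" : ", (?, ?, ?, ?)");
        }

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:26257/test?sslmode=disable", "root", "");
             PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            conn.setAutoCommit(false);
            for (int start = 0; start < 1_000_000; start += ROWS_PER_INSERT) {
                int p = 1;
                for (int r = 0; r < ROWS_PER_INSERT; r++) {
                    int rowNum = start + r;
                    ps.setString(p++, "name" + rowNum);
                    ps.setString(p++, "user" + rowNum + "@example.com");
                    ps.setString(p++, "http://example.com/u/" + rowNum);
                    ps.setString(p++, "photo" + rowNum + ".jpg");
                }
                ps.executeUpdate();            // one statement inserts all 20 rows
                conn.commit();
            }
        }
    }
}
```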

I don’t think it’ll make much difference for your insert performance, but your table definition doesn’t include a primary key. You almost always want to have one, in this case the id column (and in CockroachDB you can only set the primary key when the table is created; you can’t change it later).
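
For completeness, a sketch of creating the same table with an explicit primary key on id, issued through JDBC; the connection details are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUserTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:26257/test?sslmode=disable", "root", "");
             Statement stmt = conn.createStatement()) {
            // Same columns as the original example, plus an explicit primary key on id.
            stmt.execute(
                "CREATE TABLE \"user\" ("
                + " id SERIAL PRIMARY KEY,"
                + " name STRING(255),"
                + " email STRING(255),"
                + " url STRING(255),"
                + " photo STRING(255)"
                + ")");
        }
    }
}
```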