Slow on insert (v1.0.1)

I have a script that creates some time series data.

The insert performance is horribly slow (1-3 inserts per second).
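
For illustration, a minimal sketch of a single-row time-series insert loop of this kind; Python with psycopg2 is assumed, and the ts database, the ts_data table, and its columns are placeholders rather than the original script:

import time
import psycopg2  # assumption: a Python client over the PostgreSQL wire protocol

# Connection string taken from the node's startup banner below.
conn = psycopg2.connect("postgresql://root@dev.zopyx.com:26257?sslmode=disable")
conn.autocommit = True  # every INSERT commits on its own
cur = conn.cursor()

cur.execute("CREATE DATABASE IF NOT EXISTS ts")
cur.execute("CREATE TABLE IF NOT EXISTS ts.ts_data (ts TIMESTAMP, value FLOAT)")

# One row per statement, one implicit transaction per row.
start = time.time()
for i in range(1000):
    cur.execute("INSERT INTO ts.ts_data (ts, value) VALUES (now(), %s)", (float(i),))
elapsed = time.time() - start
print("%.1f inserts/sec" % (1000 / elapsed))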

Running 1.0.1 on a decent Linux box (hard drive, no SSD):

CockroachDB node starting at 2017-05-28 03:48:29.67187993 +0200 CEST
build:      CCL v1.0.1 @ 2017/05/25 15:17:49 (go1.8.1)
admin:      http://dev.zopyx.com:10000
sql:        postgresql://root@dev.zopyx.com:26257?sslmode=disable
logs:       /home/ajung/sandboxes/cockroach-latest.linux-amd64/cockroach-data/logs
store[0]:   path=/home/ajung/sandboxes/cockroach-latest.linux-amd64/cockroach-data
status:     initialized new cluster
clusterID:  44912f3f-ef4f-4483-9515-587922b68f1c
nodeID:     1

My box has 8 CPUs and 64 GB RAM; I see very little CPU and RAM utilization.

Running the same script on my MBP is much faster (500-600 inserts per second).

Am I missing something significant here?

Andreas

That’s much slower than expected (both 1-3/sec on the HDD and 500-600 on an MBP). The bulk of our testing has been on SSDs, but I’ve seen 500+ inserts/sec on HDD (and thousands on an MBP’s SSD). The biggest difference between our tests and yours is parallelism: most of our performance tests run in multiple threads so they can take advantage of group commit on the server. If you do the same, you should see more throughput, although it’s very surprising in any case that a single thread can’t do more than 3 inserts/sec.
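
A sketch of what that client-side parallelism could look like, under the same Python/psycopg2 assumption as above; the thread count and row counts are arbitrary, with one connection per thread so the server can batch commits across sessions:

import time
from concurrent.futures import ThreadPoolExecutor
import psycopg2

NUM_THREADS = 8          # arbitrary; roughly one per CPU
ROWS_PER_THREAD = 1000   # arbitrary

def insert_rows(thread_id):
    # One connection (and thus one session) per thread, so concurrent writes
    # from different sessions can be group-committed on the server.
    conn = psycopg2.connect("postgresql://root@dev.zopyx.com:26257?sslmode=disable")
    conn.autocommit = True
    cur = conn.cursor()
    for i in range(ROWS_PER_THREAD):
        cur.execute("INSERT INTO ts.ts_data (ts, value) VALUES (now(), %s)",
                    (float(thread_id * ROWS_PER_THREAD + i),))
    conn.close()

start = time.time()
with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
    list(pool.map(insert_rows, range(NUM_THREADS)))  # wait for all threads
elapsed = time.time() - start
print("%.1f inserts/sec" % (NUM_THREADS * ROWS_PER_THREAD / elapsed))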

I would expect the speed for an HDD or SSD on a decent box with a single-node server to be a few hundred inserts per second. In particular, I would expect a somewhat constant insertion time per row, but on my box there is a huge variance with my HDD. Is there perhaps a dependency on the underlying filesystem? In my case it is BTRFS. I have seen similarly weird variance in I/O speed with Docker on different systems. In some cases the issue was related to the Linux kernel version and/or the underlying FS (in particular Docker's device-mapper approach).
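
One way to narrow this down might be to measure raw fsync latency on the same BTRFS volume, independently of CockroachDB. A rough sketch; the file path and iteration count are arbitrary placeholders:

import os
import time

PATH = "/home/ajung/fsync-test.dat"  # arbitrary path on the BTRFS volume
N = 200                              # arbitrary number of small synced writes

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
latencies = []
for i in range(N):
    os.write(fd, b"x" * 512)
    start = time.time()
    os.fsync(fd)  # per-fsync latency puts a ceiling on synced single-row writes
    latencies.append(time.time() - start)
os.close(fd)
os.remove(PATH)

latencies.sort()
print("median fsync: %.1f ms" % (latencies[len(latencies) // 2] * 1000))
print("p99 fsync:    %.1f ms" % (latencies[int(len(latencies) * 0.99)] * 1000))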

Andreas

I don’t believe we’ve done any performance testing in Docker or on copy-on-write filesystems in general, so it is certainly possible that we’re doing something suboptimal for them.

Perhaps most notably, we sync writes to disk in the critical path for data safety by default, which might cause problems on certain filesystems. It could certainly be something else, but I’d try disabling raft log syncing by running SET CLUSTER SETTING kv.raft_log.synchronize = false; from a cockroach sql shell to see if that has a drastic effect on latency.
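
The same statement can also be issued over an ordinary client connection rather than the interactive shell, e.g. under the psycopg2 assumption used earlier in this thread:

import psycopg2

# Connection string from the startup banner above.
conn = psycopg2.connect("postgresql://root@dev.zopyx.com:26257?sslmode=disable")
conn.autocommit = True  # cluster settings cannot be changed inside a transaction
cur = conn.cursor()

# For diagnosis only: raft log writes are no longer synced before acknowledging.
cur.execute("SET CLUSTER SETTING kv.raft_log.synchronize = false")

# ... re-run the insert benchmark here, then restore the default:
cur.execute("SET CLUSTER SETTING kv.raft_log.synchronize = true")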

I had insert rates similar to the OP's using a similar script. Disabling raft log syncing increased QPS to about 48.

In my benchmark (3 nodes with SSDs, parallel insert workload), after I ran SET CLUSTER SETTING kv.raft_log.synchronize = false, throughput was higher than before, but p50 and p99 latency were also higher than before. Why?