I tested CockroachDB insert performance with sysbench on a 1-node cluster, a 3-node cluster (3 replicas), and a 5-node cluster (3 replicas). Every node has one store on an SSD and runs on a separate machine. I ran sysbench on every machine at the same time.
Here are my results:
1-node cluster: 4500 tps, p95 latency 11 ms
3-node cluster: 5200 tps (~1700 per node), p95 latency 28 ms
5-node cluster: 7700 tps (~1500 per node), p95 latency 34 ms
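For clarity, the per-node figures above are just the total tps divided by the node count; a quick sketch of that arithmetic, using the single-node 4500 tps as the baseline for scaling efficiency:

```python
# Per-node throughput and scaling efficiency, computed from the totals above.
baseline = 4500  # single-node tps

results = {1: 4500, 3: 5200, 5: 7700}  # nodes -> total cluster tps
for nodes, tps in results.items():
    per_node = tps / nodes
    efficiency = tps / (nodes * baseline)  # vs. perfectly linear scaling
    print(f"{nodes} node(s): {tps} tps, {per_node:.0f} per node, "
          f"{efficiency:.0%} of linear")
```

So the 5-node cluster delivers only about a third of the throughput that linear scaling from one node would predict.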
Hardware of my cluster:
CPU: Intel Xeon E5-2630 v4 @ 2.20GHz (36 cores)
RAM: 64 GB
Disk: 1 TB HDD + 800 GB SSD (data stored on the SSD)
Schema of my insert test:
CREATE TABLE sbtest1 (
    id UUID DEFAULT gen_random_uuid(),
    k INTEGER DEFAULT 0 NOT NULL,
    c CHAR(120) DEFAULT '' NOT NULL,
    pad CHAR(60) DEFAULT '' NOT NULL,
    PRIMARY KEY (id)
);
Here is my sysbench command:
./sysbench --db-driver=pgsql --pgsql-host= --pgsql-port=26257 --pgsql-user=root --pgsql-password= --time=120 --report-interval=1 --threads=32 /home/fee/sysbench/share/sysbench/oltp_insert.lua --auto-inc=off --tables=1 --table-size=10000000 prepare
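The throughput numbers come from the run phase; I haven't reproduced that command verbatim above, but assuming it reused the same options as the prepare step, it looked roughly like this (host and password elided as in the original):

```shell
# Hypothetical run invocation mirroring the prepare options above;
# --pgsql-host and --pgsql-password are intentionally left blank here.
./sysbench --db-driver=pgsql --pgsql-host= --pgsql-port=26257 \
    --pgsql-user=root --pgsql-password= --time=120 --report-interval=1 \
    --threads=32 /home/fee/sysbench/share/sysbench/oltp_insert.lua \
    --auto-inc=off --tables=1 --table-size=10000000 run
```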
So the test runs against a single table with 10,000,000 rows prepared, and the insert SQL (with the placeholders filled in by the Lua script) is simple:
INSERT INTO sbtest1 (k, c, pad) VALUES (%d, '%s', '%s')
I want to know:
- Is this performance normal for CockroachDB? When I tested TiDB, its 1-node performance was about the same as CockroachDB's, but its 5-node insert performance was about 15000+ qps.
- Also, I tried 2 stores (on different disks) on one node, but one of the two disks doesn't seem to be used at all. Can CockroachDB scale by adding stores (on different disks) on a single node?
- If I use an auto-increment primary key instead of a random one, tps drops from 4500 to 1300 (on a 1-node cluster). Is this normal too?
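For reference, the sequential-key variant I mean is what sysbench creates with --auto-inc=on; a sketch of that schema, assuming sysbench's default pgsql layout:

```sql
-- Sketch of the auto-increment variant (assumes sysbench's default
-- schema with --auto-inc=on). In CockroachDB, SERIAL defaults to
-- unique_rowid(), which generates roughly increasing values, so
-- sequential inserts concentrate on the table's last range.
CREATE TABLE sbtest1 (
    id SERIAL PRIMARY KEY,
    k INTEGER DEFAULT 0 NOT NULL,
    c CHAR(120) DEFAULT '' NOT NULL,
    pad CHAR(60) DEFAULT '' NOT NULL
);
```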
Thanks for your help!