TPC-C poor performance

I used loadgen to run the TPC-C test on 3 nodes with CockroachDB v1.1.0, but got poor performance, as shown below:

32 concurrency

./tpcc -concurrency=32 -warehouses=100 --duration=5m  postgresql://root@a1:26257/tpcc?sslmode=disable

_elapsed___newOrders___newOrder(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
  300.0s       27539            91.8    215.1    209.7    335.5    419.4   1208.0

TPCC       27539      10893781.8 ns/op

64 concurrency

./tpcc -concurrency=64 -warehouses=100 --duration=5m  postgresql://root@a1:26257/tpcc?sslmode=disable

_elapsed___newOrders___newOrder(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
  300.0s       25615            85.4    471.9    453.0    771.8   1140.9   6710.9

TPCC       25615      11712288.7 ns/op

When running with 128 concurrency, I got many errors:

./tpcc -concurrency=128 -warehouses=100 --duration=5m  postgresql://root@a1:26257/tpcc?sslmode=disable

2017/10/16 12:42:29 error in payment: driver: bad connection
...
2017/10/16 12:36:42 error in payment: select by last name fail: pq: restart transaction: HandledRetryableTxnError: TransactionAbortedError: txn aborted "sql txn" id=351d7126 key=/Table/111/1/50/0 rw=true pri=0.01763613 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1508128600.475877835,0 orig=1508128600.475877835,0 max=1508128600.975877835,0 wto=false rop=false seq=7
...
_elapsed___newOrders___newOrder(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
  300.0s        7564            25.2   1887.9   1208.0   6710.9  10200.5  10200.5

TPCC        7564      39665007.6 ns/op

Hi @louis,

This is pretty much expected - like I was saying on Gitter, since we haven’t implemented think time or wait time yet in tpcc, the workload that’s currently simulated is a worst-case extremely high contention workload, which is why performance suffers with higher concurrency. It’s not representative of a normal tpcc benchmark.

In a correctly implemented tpcc benchmark (which we’re working on and should have ready this or next week), per-warehouse concurrency is capped, and each concurrent connection spends a lot of its time being idle. The benchmark is not designed to measure extremely high contention - instead it’s supposed to measure a highly concurrent but low contention workload.

As for the errors you’re seeing with concurrency=128, in CockroachDB sometimes the client must get involved to retry a transaction. The load generator doesn’t currently implement retries correctly, hence the output errors.

Thanks,
Jordan

Can you let me know when the tpcc benchmark is ready for testing?

email: hdchild@163.com

thx

@jordan, is the tpcc benchmark tool loadgen ready for testing?