It would be nice to hear comments regarding “duplicate key value” cases.
In use: CockroachDB, three nodes, load balancer.
Delete one of the three cockroach pods, wait until the pod comes back up, wait until the node is available and live again, then sleep.
Repeat for the other pods, one pod at a time, in sequence.
First tests have been done with the default grace period of kubectl delete pod; a longer grace period is still to be tested.
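The restart procedure above can be sketched roughly as follows. This is a minimal sketch, not the actual test harness: the pod names, the Ready-condition wait, and the settle time are assumptions for illustration, and the kubectl runner is injectable so the loop can be exercised without a cluster.

```python
import subprocess
import time

def kubectl(args, runner=subprocess.run):
    # `runner` is injectable so the loop can be tested without a real cluster.
    return runner(["kubectl"] + args, check=True)

def restart_pods_in_sequence(pods, settle_seconds=60, runner=subprocess.run):
    for pod in pods:
        # Default grace period; add e.g. "--grace-period=300" to test a longer one.
        kubectl(["delete", "pod", pod], runner=runner)
        # Wait until the recreated pod reports Ready again.
        kubectl(["wait", "--for=condition=Ready", "pod/" + pod,
                 "--timeout=600s"], runner=runner)
        # Extra settle time so the node is available and live in the cluster
        # before the next pod is deleted.
        time.sleep(settle_seconds)

# restart_pods_in_sequence(["cockroachdb-0", "cockroachdb-1", "cockroachdb-2"])
```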
Data is generated by a testing application, inserting at the same time in another process:
Establish DB connections either through the load balancer or directly to one specific node.
For example without threads, inserting in batches or one row at a time.
Either create a new DB connection for each commit, or try to keep the same connection open as long as possible.
Currently, the testing application retries the insert on any exception. Errors seen so far:
- ERROR: context canceled
- ERROR: restart transaction: TransactionRetryWithProtoRefreshError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY):
“sql txn” id=fea8b1ac key=/Table/81/1/20981/0 rw=true pri=0.00463594 stat=PENDING epo=0 ts=1557838074.239846360,0 orig=1557838074.239846360,0 max=1557838074.247671045,0 wto=false seq=5
- ERROR: server is not accepting clients
- ERROR: duplicate key value. The unique index / primary key prevents the insertion.
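The current retry-on-any-exception behaviour can be sketched like this (function and parameter names are made up; the real application may differ):

```python
import time

def insert_with_retry(insert_fn, max_attempts=10, backoff_seconds=1.0):
    # `insert_fn` performs one INSERT (a batch or a single row) and commits.
    for attempt in range(1, max_attempts + 1):
        try:
            return insert_fn()
        except Exception:
            # Every error is retried, including "duplicate key value" -- which
            # can appear when an earlier attempt actually committed but its
            # acknowledgement was lost on the way back to the client.
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds)
```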
From the (testing) application's point of view, insertions should be retried in most cases; that seems to be the generic "rule".
In the duplicate key value case: should the error simply be ignored?
Is that the correct approach, and the generic way applications should be implemented?
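One common pattern is to classify errors by their SQLSTATE code instead of retrying everything: 40001 is the standard code CockroachDB uses for retryable transaction errors, and 23505 is the standard unique-violation code; the code for "server is not accepting clients" is an assumption to verify against your version. A sketch (psycopg2, for instance, exposes the code as `exc.pgcode`):

```python
RETRYABLE = {
    "40001",  # restart transaction / TransactionRetryWithProtoRefreshError
    "57P01",  # assumed code for "server is not accepting clients" (node draining)
}
DUPLICATE_KEY = "23505"  # unique index / primary key violation

def classify(sqlstate):
    # Decide what a failed INSERT should do next.
    if sqlstate == DUPLICATE_KEY:
        # The row is already there -- e.g. a previous attempt committed but the
        # acknowledgement was lost. Ignoring is safe only if the retried
        # statement inserts exactly the same row.
        return "ignore"
    if sqlstate is None or sqlstate in RETRYABLE:
        # None: the connection dropped ("context canceled") before the server
        # could report a code.
        return "retry"
    return "fail"
```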
Other comments / thoughts regarding the topic?
Could there be alternative functionality to handle such problems in the future?
And can such duplicate entry errors be prevented somehow with the current design?
Or with a new design / new functionality: a protocol between the client application and the CockroachDB server (JDBC, client libraries, etc.)?
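With the current design, one way to sidestep the duplicate-key error on retry is to make the statement itself idempotent: CockroachDB supports the Postgres-style ON CONFLICT clause as well as its own UPSERT statement. Table and column names here are made up for illustration:

```sql
-- Hypothetical table; the point is the conflict clause, not the schema.
INSERT INTO test_rows (id, payload)
VALUES (20981, 'retried row')
ON CONFLICT (id) DO NOTHING;

-- Or, to overwrite the existing row on a primary-key conflict:
UPSERT INTO test_rows (id, payload) VALUES (20981, 'retried row');
```

Note that DO NOTHING is only appropriate when the retried row is identical to the one already inserted.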