I’m getting serialization errors while running transactions in parallel (pretty big ones). Is there a way to get some insight into which ranges are seeing contention?
We will be adding visualization for this down the road, but for now we have a script you can run; it can be found here.
You’ll want to do the following:
- Start your test.
- Pull the `/_status/raft` endpoint and save the output to a file.
- Run that file through the `hottest_ranges.py` script.
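The steps above can be sketched in a small helper. The host and port are assumptions (defaults for a local single-node CRDB with the admin UI on 8080); adjust for your deployment:

```python
# Sketch: save the raft status endpoint to a file, then feed that file to
# hottest_ranges.py. Host/port are assumed defaults for a local node.
import urllib.request


def status_url(host="localhost", port=8080):
    # CockroachDB serves its debug/status endpoints from the admin UI port.
    return "http://%s:%d/_status/raft" % (host, port)


def dump_raft_status(path="raft_status.json", host="localhost", port=8080):
    """Fetch the raft status page and write it to `path`."""
    with urllib.request.urlopen(status_url(host, port)) as resp:
        data = resp.read()
    with open(path, "wb") as f:
        f.write(data)
    return path


if __name__ == "__main__":
    path = dump_raft_status()
    # Then: python hottest_ranges.py raft_status.json
    print("saved to", path)
```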
I see the “hot” range, which spans /Table/55 to /Table/56. I don’t know how to map that table ID to a table name, but I think I know which table it is (we only write to 3 tables).
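One way to do that mapping, assuming you’ve dumped ID/name pairs from the cluster (e.g. `SELECT id, name FROM system.namespace;`) — the dictionary below is illustrative data, not real output:

```python
# Sketch: map a pretty-printed range boundary like "/Table/55" back to a
# table name, given an {id: name} mapping pulled from system.namespace.
import re


def table_id(range_key):
    """Extract the table ID from a key like '/Table/55'; None if no match."""
    m = re.match(r"/Table/(\d+)", range_key)
    return int(m.group(1)) if m else None


def name_for(range_key, namespace):
    tid = table_id(range_key)
    return namespace.get(tid, "<unknown id %s>" % tid)


namespace = {55: "orders"}  # hypothetical mapping from system.namespace
print(name_for("/Table/55", namespace))  # -> orders
```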
I should have clarified a bit more. Here is the problem I’m observing:
- I run a load script that submits some amount of data in parallel. As far as I can tell, this data should not create contention (it writes to different key ranges).
- However, I’m seeing some serialization errors.
- I would be interested to know which key or key range caused these errors.
- We run a lot of statements per transaction, so these transactions are somewhat heavy.
So it’s not a high-load scenario, but rather a low-load scenario with unexpected conflicts and large transactions.
This is CRDB running locally, one node, not a real setup. Even logging every conflict or something like that would be fine. Is that possible?
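(For reference, the application side can at least log every serialization error it retries, since CockroachDB’s retry errors carry SQLSTATE 40001 and the error message often names the conflicting key. A minimal sketch — `TxnError` and the driver function are hypothetical stand-ins for whatever your client library raises:)

```python
# Sketch: run a transaction function with retries, logging every
# serialization conflict (SQLSTATE 40001). TxnError is a hypothetical
# stand-in for your DB driver's exception type.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("conflicts")


class TxnError(Exception):
    def __init__(self, sqlstate, msg):
        super().__init__(msg)
        self.sqlstate = sqlstate


def run_with_retry(txn_fn, max_attempts=5):
    """Run txn_fn, retrying on 40001 and logging each conflict message."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except TxnError as e:
            if e.sqlstate != "40001":
                raise  # not a serialization conflict; don't retry
            log.info("conflict on attempt %d: %s", attempt, e)
    raise RuntimeError("transaction gave up after %d attempts" % max_attempts)


# Usage: simulate a txn that conflicts twice, then succeeds.
state = {"n": 0}

def flaky_txn():
    state["n"] += 1
    if state["n"] < 3:
        raise TxnError("40001", "restart transaction: conflict near /Table/55")
    return "committed"

print(run_with_retry(flaky_txn))  # -> committed
```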
Actually, forget about that. User error. Transactions were rolled back for another reason.
P.S. I would still be interested to know if there is a way to pinpoint conflicts.