tl;dr: Does a query inside a transaction grab pessimistic read locks on all of the required ranges of secondary indexes, for the duration of the transaction?
I have a question about a variant of the phantom-read anomaly.
Let’s say we’re designing a system to detect collisions between objects on a line, where each object occupies some interval [x, y] on the line. Within each transaction we want to query a range [a, b] (likely larger than the object itself), calculate a new [x, y] within it that overlaps no existing object, and commit this new object to the database.
The line is very large, and we want to support high throughput over the entire line, but we expect the throughput within any given sub-range to be relatively low. So we would like to perform the following transaction:
Pseudocode:

tx = BeginTransaction()
objects = tx.Query("SELECT * FROM objects WHERE objects.x >= a AND objects.y <= b")
// calculate where we can place the object, let's call these bounds i and j
tx.Exec("INSERT INTO objects (x, y) VALUES (i, j)")
tx.Commit()
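For concreteness, the "calculate where we can place the object" step between the query and the insert might look like the sketch below. The function name, the width parameter w, and the choice to treat touching endpoints as non-overlapping are all my own assumptions, not part of the system described above:

```python
def find_free_slot(objects, a, b, w):
    """Find the leftmost interval [i, j] of width w within [a, b]
    that does not overlap any existing (x, y) object.
    Returns None if there is no room."""
    cursor = a
    for x, y in sorted(objects):
        if x - cursor >= w:          # the gap before this object fits
            return (cursor, cursor + w)
        cursor = max(cursor, y)      # skip past this object
    if b - cursor >= w:              # room after the last object
        return (cursor, cursor + w)
    return None
```

For example, `find_free_slot([(2, 4), (5, 9)], 0, 10, 2)` returns `(0, 2)`. The correctness of the whole scheme, of course, still hinges on the locking question above: this calculation is only valid if no other transaction can insert into [a, b] before we commit.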
We want a guarantee that between the query and the commit, no new objects are placed on the line between a and b (i.e., a read lock on the queried range[s] of the secondary index). By grabbing it only on that range we should maintain throughput on the rest of the line (or most of it at least; it's OK if any pessimistic lock covers a slightly larger range than we queried for).
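Whatever the answer turns out to be (pessimistic range locks held for the transaction, or an optimistic conflict detected at commit time), I assume the client side would wrap the read-compute-write transaction in a retry loop. A minimal sketch, where `SerializationError` is a hypothetical stand-in for whatever retryable-conflict error the driver raises:

```python
class SerializationError(Exception):
    """Stand-in for the driver's retryable-conflict error (hypothetical)."""

def run_with_retries(txn_body, max_attempts=5):
    # Re-run the whole read-compute-write transaction whenever the
    # database reports a conflict (e.g. a phantom insert into [a, b]).
    for attempt in range(max_attempts):
        try:
            return txn_body()
        except SerializationError:
            continue  # start the transaction over from the query
    raise RuntimeError("transaction kept conflicting; giving up")
```

With pessimistic range locks the retries should be rare (writers queue on the lock); with optimistic detection they would be the normal way conflicts surface, which is part of why I'm asking which behavior applies here.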