Transactions + row locking

Hi,
is there a way to “lock” the rows read within a running transaction? I have a use case in which transaction T1 needs to read some rows, compute some operation, update the rows it read, and commit. While T1 is running, all reads/updates on the rows being processed by T1 should be blocked.
Alternatively, T1 may fail if other transactions read the same rows while T1 was running.

thanks, martin

Hi @martinfridrich,

You should not run into this problem with CRDB.

Read more about our transaction model here.

Here is a blog post that explains serializable, lockless and distributed transactions.

Let me know if you have any other questions.

Thanks
Matt

As Matt says, CRDB gives you serializable transactions, so from a data consistency perspective transactions are completely isolated from each other. Transactions may be forced to retry, though, in order to guarantee this isolation.
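For concreteness, here's a minimal sketch of such a client-side retry loop in Go, assuming the lib/pq driver; the `runTxn` helper name and error handling here are illustrative, not a prescribed API:

```go
package main

import (
	"database/sql"

	"github.com/lib/pq"
)

// runTxn runs fn inside a transaction and retries it whenever the server
// reports a retryable serialization failure (SQLSTATE 40001).
func runTxn(db *sql.DB, fn func(*sql.Tx) error) error {
	for {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		if err = fn(tx); err == nil {
			err = tx.Commit()
		}
		if err == nil {
			return nil
		}
		_ = tx.Rollback()
		// 40001 means the txn was aborted to preserve serializability;
		// it is safe (and expected) to simply run the whole txn again.
		if pqErr, ok := err.(*pq.Error); ok && pqErr.Code == "40001" {
			continue
		}
		return err
	}
}
```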
If you explicitly want to queue things up, which can be a legitimate thing to do for performance reasons, you can achieve that by introducing a dummy key that all of these overlapping transactions write to as the first thing they do. This achieves queuing: each write blocks until the previous transaction writing to that key commits.
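As an illustration of the dummy-key pattern, here's a hypothetical sketch that reuses the `runTxn` helper above; the `locks` table, its schema, and the key name are all made up for the example:

```go
// processRows demonstrates the dummy-key queuing pattern. Every
// transaction that should serialize against the others writes the same
// row first, so overlapping transactions queue up behind each other.
func processRows(db *sql.DB) error {
	return runTxn(db, func(tx *sql.Tx) error {
		// Assumed schema: CREATE TABLE locks (name STRING PRIMARY KEY, n INT).
		// This write blocks until any earlier txn writing this key commits.
		if _, err := tx.Exec(
			`UPSERT INTO locks (name, n) VALUES ('martin-rows', 1)`,
		); err != nil {
			return err
		}
		// ... then read the rows, compute, and UPDATE them as usual ...
		return nil
	})
}
```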

There’s periodic talk of supporting SELECT FOR UPDATE clauses for these purposes, and/or the Postgres advisory lock functions (pg_try_advisory_lock() and friends). So far, neither of these is supported.

thank you both, I’ve slightly changed my algorithm and it should work, thanks

No problem at all.

Matt