SELECT FOR UPDATE more restrictive than PostgreSQL

I just had a deadlock problem with the Liquibase database creation in Keycloak and managed to find the source of the problem.
It boils down to the fact that a “select for update” statement blocks not only a plain select of the affected row (I could accept that, although PostgreSQL does not do it) but also a “select count(*)” on the affected table.
That is way more restrictive than PostgreSQL. I tested this against a local PostgreSQL 10 instance: the only statement PostgreSQL will block is another “select for update” on the same row.
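For reference, here is a sketch of the PostgreSQL behavior I observed (same databasechangeloglock table, two psql sessions):

```sql
-- Session 1 (PostgreSQL 10): take the row lock and keep the transaction open
BEGIN;
SELECT id FROM databasechangeloglock WHERE id = 1 FOR UPDATE;

-- Session 2: plain reads return immediately...
SELECT id FROM databasechangeloglock WHERE id = 1;
SELECT count(*) FROM databasechangeloglock;
-- ...but another FOR UPDATE on the same row waits until session 1 commits:
SELECT id FROM databasechangeloglock WHERE id = 1 FOR UPDATE;
```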
The chain of events is this:

  1. In one session open a transaction and select a row for update:
    select id from databasechangeloglock where id=1 for update;

  2. In another session postgresql will execute the statements
    select id from databasechangeloglock where id=1;
    select count(*) from databasechangeloglock;
    without blocking. CockroachDB will block on these statements until you “commit;” in the first session.

Keycloak uses the “select for update” in an open transaction to hold the lock, and Liquibase later uses the “select count(*)” to check whether its databasechangeloglock table already exists, so this effectively deadlocks the application on startup.

Would it be possible to change the behaviour of cockroach here, so that it becomes more compatible with PostgreSQL?

Hi Jens, what version of cockroach did you test this on?

Oops, I didn’t say, sorry. It’s 20.1.3.

According to the release notes, the “select for update” feature was introduced in version 20.1.0.

Hi Jens, thank you for the thoughtful question.

What you’re observing is a known deviation from Postgres’ concurrency control behavior, and it relates to more than just SELECT … FOR UPDATE. In Postgres, row-level locks never block data querying; they only block writers or lockers of those rows. This is discussed in

In CockroachDB, things are different. Reads wait on both writes (e.g. a read will wait on an UPDATE) and on locks (e.g. a read will wait on a SELECT … FOR UPDATE, as you noticed).
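As a sketch of the write case (assuming a simple kv table with columns k and v):

```sql
-- Session 1: an ordinary mutation leaves an exclusive lock on the row
BEGIN;
UPDATE kv SET v = 2 WHERE k = 1;

-- Session 2: in CockroachDB this read waits until session 1 commits or
-- rolls back; in Postgres it would return the old row version immediately
SELECT v FROM kv WHERE k = 1;
```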

There are two reasons I can give to justify this difference in behavior. The first is that CockroachDB is a distributed system, which means that our approach to concurrency control has additional challenges. The most relevant of these is that commit information for transactions is not centralized in cockroach, like it is in postgres. So even with a policy of never blocking reads on writes, reads would still need to perform remote RPCs when they find an exclusive lock to determine whether the lock and the provisional value it is protecting should be visible. So we’d lose a lot of the benefit of such a policy as this RPC would impose latency and extra load on the system. As it turns out, waiting is actually a better policy for total system throughput, all things considered.

The other reason for this difference is that CockroachDB was designed with serializable isolation as its focus, whereas Postgres only gained support for real serializable isolation in version 9.1 (2011, 15 years after the initial release). This history is important when it comes to the behavior we’re discussing here. Having reads never block on writes is a big win at lower isolation levels like snapshot and repeatable read. At higher isolation levels, however, it can actually hurt performance, because it leads to scenarios that require transaction retries. Here’s an example that demonstrates this:

-- session 1                                 |   -- session 2
nathan=# show transaction_isolation;         |
 transaction_isolation                       |
-----------------------                      |
 serializable                                |
(1 row)                                      |
nathan=# begin;                              |
BEGIN                                        |
                                             |   nathan=# begin;
                                             |   BEGIN
nathan=# insert into kv values (100, 100);   |
INSERT 0 1                                   |
                                             |   nathan=# select * from kv;
                                             |    k | v
                                             |   ---+---
                                             |   (0 rows)
nathan=# select * from kv;                   |
  k  |  v                                    |
-----+-----                                  |
 100 | 100                                   |
(1 row)                                      |
                                             |   nathan=# insert into kv values (200, 200);
                                             |   INSERT 0 1
nathan=# commit;                             |
COMMIT                                       |
                                             |   nathan=# commit;
                                             |   ERROR:  could not serialize access due to read/write dependencies among transactions
                                             |   DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
                                             |   HINT:  The transaction might succeed if retried.

Notice that because the read in the second transaction did not block on the first transaction’s write, it returned a result that could not be serialized with the rest of its effects, so the transaction had to throw an error. These retry errors are disruptive to users and require retry loops in applications. They also hurt system throughput, as they can lead to thrashing. CockroachDB’s designers were aware of this, and decided that since the database was primarily focused on serializable isolation, it would be better to avoid these situations by blocking reads on writes. Transaction retry loops are still needed, but in fewer situations.
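For completeness, the SQL-level shape of such a retry loop in CockroachDB uses the special cockroach_restart savepoint (the client-side retry protocol); the statement placeholder is illustrative:

```sql
BEGIN;
SAVEPOINT cockroach_restart;
-- ... the transaction's statements go here ...
RELEASE SAVEPOINT cockroach_restart;  -- retry errors (SQLSTATE 40001) surface here
COMMIT;

-- On a 40001 error, the client rolls back to the savepoint
-- and re-issues the statements:
-- ROLLBACK TO SAVEPOINT cockroach_restart;
```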

That all said, these two arguments mostly apply to exclusive locks acquired by mutations, not to SELECT … FOR UPDATE, which itself isn’t actually protecting any provisional changes. For that reason, we are exploring ways to make it so that reads do not block on this form of lock. This is being tracked in

Cockroach also supports overriding this behavior using transaction priorities. A high-priority transaction will not block on a lower-priority transaction’s locks, which can get close to the effect you are looking for. See for more.
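As a sketch, the count could be issued in a high-priority transaction so that it does not wait on the lower-priority row lock (note this would require the application, not just the DBA, to emit these statements):

```sql
-- The high-priority transaction does not wait on the lower-priority lock;
-- instead it may force the lower-priority transaction to retry.
BEGIN TRANSACTION PRIORITY HIGH;
SELECT count(*) FROM databasechangeloglock;
COMMIT;
```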


Hey @jhollmann, did you manage to find a solution? I have the same issue and am not sure what I should do to get Keycloak to work. @nathan Would I need to set extra args in Standalone.xml?

Hello @mrahman, at the moment we don’t have a solution. We are still using Cockroach 19 for Keycloak (and have rewritten the Liquibase XML files for that, since Cockroach cannot add a primary key to a table after creation in that version), and we are postponing a Cockroach update because of this new problem.
I’m hoping that the issue mentioned in nathan’s post will be resolved and will perhaps solve the problem.

Hi Jens,

I am trying to set up Keycloak with CockroachDB and am facing a similar issue. Can you guide me on how you solved this problem with Cockroach 19?


Hi Neeraj,

That was (unfortunately) quite a bit of work. Roughly, we patched the sources (for example LiquibaseJpaUpdaterProvider) so that we can change the name of the Liquibase changelog-master file when we set a property (something like -Dcockroachdb=true).
Then we created a new changelog-master-cockroach file and rewrote all the changeset files so that they set the primary key of a table in the create-table command and not afterwards as a separate command, and referenced them in the new changelog-master-cockroach file.
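For illustration, the kind of changeset rewrite described above looks roughly like this (table and column names are made up):

```xml
<!-- Before: primary key added afterwards, which Cockroach 19 rejects -->
<createTable tableName="example">
    <column name="id" type="BIGINT"/>
</createTable>
<addPrimaryKey tableName="example" columnNames="id"/>

<!-- After: primary key declared inline in the create-table command -->
<createTable tableName="example">
    <column name="id" type="BIGINT">
        <constraints primaryKey="true" nullable="false"/>
    </column>
</createTable>
```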