Strong Consistency?

Hi,

I understand that CRDB provides serializable isolation, and that due to its implementation it even provides sequential consistency for all transactions on any node.

Jepsen’s Comments test showed that transactions from the same client C on two different nodes can appear reordered to an observer. However, it hinted that threading a causality token through the transactions could prevent this (or reduce the chance of it).

Is this mechanism already implemented in 2.0?
If so, I suppose this mechanism would still leave a gap if the client lost the causality token between transactions (restart/crash, etc.).

Is there any updated documentation (it need not be very user friendly) that explains the current sequential consistency guarantees (not the isolation level), under what conditions they apply, and, most importantly, gives examples of conditions under which they can fail?

This is just a hypothetical exercise at present and I do not have an exact problem that requires it.


thanks,
gaurav

Hi Gaurav,

Thank you for your interest in CockroachDB!
As you’ve probably read from Aphyr’s explanation and our own blog, serializable isolation means that transactions appear to commit in some order decided by the database, but that order may differ from the order (in real time) observed by the clients.

What this means in practice: a client can commit transaction A, then use some data it observed in A to open an unrelated transaction B, and be surprised that B appears to run logically before A committed (e.g. some data from A appears uncommitted in B) even though B was opened later in real time.
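
To make the surprise concrete, here is a rough sketch in Go. The `events` table, the DSNs, and the two-node setup are made up for illustration; the point is only the ordering of the two transactions as seen by the client.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres-wire driver, works with CockroachDB
)

func main() {
	// Two connections, each using a different node as its gateway (assumed DSNs).
	connA, err := sql.Open("postgres", "postgresql://root@node1:26257/test?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	connB, err := sql.Open("postgres", "postgresql://root@node2:26257/test?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// Transaction A: write a row through node 1 and commit.
	if _, err := connA.Exec(`INSERT INTO events (id, note) VALUES (1, 'written by A')`); err != nil {
		log.Fatal(err)
	}

	// Transaction B: opened later in real time, through node 2. As described
	// above, serializable isolation alone does not force B to be ordered
	// after A logically, so A's row may not be visible here.
	var n int
	if err := connB.QueryRow(`SELECT count(*) FROM events WHERE id = 1`).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows from A visible to B:", n) // may print 0 in the scenario above
}
```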

A causality token is a piece of data that the client can extract from transaction A and use to guarantee that a transaction B opened after A commits in time is also opened after A commits logically.

How this currently works in CockroachDB: a client can run SELECT cluster_logical_timestamp() in transaction A and observe some value X before it commits A. Then, when it opens transaction B, it runs SELECT cluster_logical_timestamp() again and observes some value Y. If Y <= X + maxoffset, then transaction B may be running logically before A. To ensure that B can observe A’s commit, the client simply retries opening transaction B until Y > X + maxoffset. Then B is guaranteed to open after A commits, both in time and logically.
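
A minimal sketch of that retry pattern in Go follows. The driver, the DSN, and helper names like `causalityToken` and `beginAfter` are my own for illustration; `500e6` nanoseconds corresponds to the default 500ms max clock offset, so adjust it to your cluster’s `--max-offset` setting.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres-wire driver, works with CockroachDB
)

// causalityToken reads the cluster logical timestamp (nanoseconds since epoch,
// with a logical counter in the fractional part) inside the given transaction.
// The cast to FLOAT loses some nanosecond precision, which is negligible
// relative to the 500ms max offset used below.
func causalityToken(tx *sql.Tx) (float64, error) {
	var ts float64
	err := tx.QueryRow(`SELECT cluster_logical_timestamp()::FLOAT`).Scan(&ts)
	return ts, err
}

// beginAfter keeps opening transactions until the new transaction's timestamp
// is beyond the previous token plus the max clock offset, so the new
// transaction is guaranteed to be ordered after the previous one logically.
func beginAfter(db *sql.DB, token, maxOffsetNanos float64) (*sql.Tx, error) {
	for {
		tx, err := db.Begin()
		if err != nil {
			return nil, err
		}
		y, err := causalityToken(tx)
		if err != nil {
			tx.Rollback()
			return nil, err
		}
		if y > token+maxOffsetNanos {
			return tx, nil // logically after the previous transaction
		}
		tx.Rollback() // too early; retry with a fresh transaction
	}
}

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/test?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// Transaction A: do some work, capture the token, then commit.
	txA, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	x, err := causalityToken(txA)
	if err != nil {
		log.Fatal(err)
	}
	if err := txA.Commit(); err != nil {
		log.Fatal(err)
	}

	// Transaction B: guaranteed to be ordered after A, in time and logically.
	const maxOffsetNanos = 500e6 // 500ms, the default --max-offset
	txB, err := beginAfter(db, x, maxOffsetNanos)
	if err != nil {
		log.Fatal(err)
	}
	defer txB.Rollback()
	// ... reads in txB will observe A's writes ...
}
```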

In other words, the cluster logical timestamp, plus the maxoffset delta, is the causality token currently offered by CockroachDB. Sadly, this is not yet documented, so this short explanation on the forum is the best you’ll get for now. We hope to improve this interface over time to make life easier for client app developers.

Let me (us) know if you have any questions or comments.


Thanks Raphael - this is a great explanation! I will get back on this forum if I have more related questions.