Suppose as part of a transaction I want to keep a record of changes made by that transaction (such as an audit log):
begin isolation level serializable;

-- Do the main work of my transaction.
...
update data set n = 3 * n + 1 where key = 'foo';
...

-- Now record changes I've made.
insert into queue (id, key, v)
  values (default, 'foo', (select n from data where key = 'foo'));

commit;
(I want the insert into the queue to be part of the transaction so that the insert can't be lost in a crash.)
If the queue id is strictly increasing (it doesn't need to be sequential, it just needs to be greater than every previously inserted id), it's easy to retrieve new rows from the queue. If, say, the last id read was 100, all I need is:
select * from queue where id > 100;
Could SERIAL be used for the queue id? The docs describe it as follows:
Returns a unique ID used by CockroachDB to generate unique row IDs if a Primary Key isn’t defined for the table. The value is a combination of the insert timestamp and the ID of the node executing the statement, which guarantees this combination is globally unique.
This sounds like SERIAL would be at least mostly increasing, but if, for example, there were some clock drift between nodes, it might not be guaranteed to always be strictly increasing.
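For reference, the SERIAL-based table I have in mind would be something like this (a sketch; the column names match my example above, and the comment reflects the docs quote about unique_rowid()):

create table queue (
  id  serial primary key,  -- defaults to unique_rowid(): insert timestamp + node id
  key string,
  v   int
);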
SQL Sequences (when they become available) would do the trick… but they might be stronger (and less efficient) than I need, since I don’t actually need “once and only once” delivery.
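For comparison, the sequence-backed alternative would presumably look something like this (hypothetical, since sequences aren't available yet; the sequence name is my own):

create sequence queue_seq;

create table queue (
  id  int primary key default nextval('queue_seq'),
  key string,
  v   int
);

Every nextval() call would have to coordinate on the sequence's counter, which is the "stronger (and less efficient) than I need" part.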
I’d like the insert into the queue to be as efficient as possible, so that I don’t slow down my main transactions.
Maybe there’s some way of implementing a queue besides using a strictly increasing row id?
I don't mind if retrieving new items from the queue sometimes returns duplicates (items I've retrieved before)… though of course I'd never want to miss items.
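Since duplicates are acceptable, one idea I've been toying with (my own sketch, assuming ids are only approximately increasing, out of order by at most some bound): re-scan from a little before the last id seen, rather than from exactly the last id seen.

-- 100 is the last id read; 'slack' is a constant chosen to cover the
-- maximum expected reordering in id-space (e.g. from clock drift
-- between nodes). Rows in the overlap window are returned again,
-- which is fine since duplicates are tolerated.
select * from queue where id > 100 - slack order by id;

Whether a safe value of the slack can actually be bounded for unique_rowid() is exactly what I'm unsure about.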