Write and update limits for a single transaction

It is a known limitation of CRDB that only 100,000 write operations can be packed into one transaction. Are there any plans to lift this limitation in the near future?

This is being tracked in this issue. It’s currently marked for the 1.2 release (6 months away), although since we don’t yet have a clear design it may get pushed out again.

I am trying to work around the transaction limit by running this in a shell script in a loop:

./cockroach sql --certs-dir=./certs/ --execute="set timezone='America/New_York';insert into Filu.T_LogMsg (ID, Msg, Date) select ID, Msg, Date from Test.T_LogMsg Order by id limit 500 on conflict(ID, Msg, Date) do nothing;"
./cockroach sql --certs-dir=./certs/ --execute="set timezone='America/New_York';delete from Test.T_LogMsg Order by id limit 500;"

The first few hours ran perfectly, and each insert and delete took around 500 ms. However, over the weekend, the run time climbed to 20 seconds per statement.

I’ve done a test on the select statement alone, and it took 20 seconds or more.

select ID, Msg, Date from Test.T_LogMsg Order by id limit 500

The table structure is very simple:

CREATE TABLE T_LogMsg (
    id STRING NOT NULL,
    Msg STRING NOT NULL,
    "Date" DATE NOT NULL,
    CONSTRAINT "primary" PRIMARY KEY (id ASC),
    FAMILY "primary" (id, Msg, "Date")
)

Am I doing something wrong?

You’re seeing the effects of KV tombstones as described in this issue. If you want to delete the entire contents of the table and nothing else is using it, TRUNCATE TABLE is the most efficient way to do so. Otherwise (if something is using the table concurrently or you are only deleting part of the table), the best thing to do currently is to use a RETURNING clause to get the rows affected by each deletion and use that as a cursor as you move forward:

DELETE FROM Test.T_LogMsg WHERE id > $1 ORDER BY id LIMIT 500 RETURNING id

After each query, get the last id returned and use it as $1 in the next query (you’ll probably want to use something other than a shell script for this).
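For reference, here is a rough sketch of that cursor loop in Python with psycopg2 (CockroachDB speaks the PostgreSQL wire protocol); the connection string, batch size, and starting cursor value are placeholders you’d adjust for your cluster:

    import psycopg2

    # Placeholder connection string; adjust host, port, database, and certs for your cluster.
    conn = psycopg2.connect("postgresql://root@localhost:26257/Test?sslmode=require")
    conn.autocommit = True  # each DELETE batch runs as its own implicit transaction

    last_id = ""  # assumes ids are non-empty STRINGs, so every id sorts after ""
    with conn.cursor() as cur:
        while True:
            cur.execute(
                "DELETE FROM Test.T_LogMsg WHERE id > %s ORDER BY id LIMIT 500 RETURNING id",
                (last_id,),
            )
            rows = cur.fetchall()
            if not rows:
                break  # nothing left to delete
            # Use the largest deleted id as the cursor for the next batch.
            last_id = max(r[0] for r in rows)

Taking the max of the returned ids means the loop does not depend on the order in which the RETURNING rows come back.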

Can I get past this limit using the following approach?

  1. I create a sequence that is incremented by a large number (let’s say 1 billion at a time).
  2. I create a table with an INT primary key (not SERIAL).
  3. I write application logic where each app server selects from the sequence and keeps that large block of IDs for its processing. When it creates a record, it simply increments its internal counter by one. When it gets close to running out of its block of numbers, it does another select from the sequence to get the next block (a rough sketch follows this list).
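
To make the idea concrete, here is a rough Python sketch of what I have in mind (using psycopg2; the connection string and sequence name are placeholders, and I’m assuming a SQL sequence created with an INCREMENT of 1 billion):

    import psycopg2

    BLOCK_SIZE = 1000000000  # matches the sequence's INCREMENT of 1 billion

    # Placeholder connection string and sequence name; the sequence would be created with
    # something like: CREATE SEQUENCE id_blocks INCREMENT 1000000000;
    conn = psycopg2.connect("postgresql://root@localhost:26257/mydb?sslmode=require")
    conn.autocommit = True

    class IdAllocator:
        """Hands out ids from a locally cached block, fetching a new block when the current one is exhausted."""

        def __init__(self, conn):
            self.conn = conn
            self.next_id = 0
            self.block_end = 0  # exclusive upper bound of the current block

        def next(self):
            if self.next_id >= self.block_end:
                with self.conn.cursor() as cur:
                    cur.execute("SELECT nextval('id_blocks')")
                    start = cur.fetchone()[0]
                self.next_id = start
                self.block_end = start + BLOCK_SIZE
            value = self.next_id
            self.next_id += 1
            return value

    allocator = IdAllocator(conn)
    new_row_id = allocator.next()  # use as the INT primary key for the next inserted row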

In this case can I create more than 100K rows in that table in a single transaction?

Thanks! 🙂

PS

I’m planning on setting up all of my keys and indexes that way, so I’m assuming that if I change the configuration to allow more than 100K writes per transaction, I would still be able to insert more than 10K records even if my table has 10 indexes.

Hi russoturisto,

The 100k writes limit is going away in the 2.0 release of CockroachDB, so there should be no need for a workaround in the future.

In 1.1 the limit is hardcoded and cannot be circumvented; you cannot write more than this in a single transaction.

That said, I believe the algorithm in your suggestion will succeed, because you seem to be breaking the workload up into multiple transactions and using a different strategy to keep the IDs consistent. By reserving large blocks of IDs with consistent transactions on your sequence number record, your application will be able to write the records into the database across multiple transactions without worrying about conflicting IDs. However, you still have the restriction that each individual transaction must fit under the 100k writes limit.

Note that the limit is on writes, not rows; a table with indexes or multiple column families will require multiple writes per inserted row. For example, a table with a single column family and two secondary indexes needs three KV writes per row, so it can fit at most roughly 33,000 rows in one transaction.

Thanks,
Matt T

Matt,

Thank you for your quick reply!

To clarify, does the following statement apply only to <= 1.1 and not 2.0?

“However, you still have the restriction that each individual transaction must fit under the 100k writes limit.”

Thanks again!

PS

I am hoping that in 2.0 I will be able to generate more than 100K writes per transaction, given that I gather large quantities of data (over a period of time) and then write it (in a single transaction) to a number of tables, many of which have multiple indexes. I could write code that counts the number of writes and splits the batch insert up into multiple transactions, but I would much rather not do that.
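
If I do end up writing that splitting code, I imagine it would look roughly like the sketch below (Python with psycopg2; the estimate of one KV write for the primary index plus one per secondary index is my own assumption, and the names are placeholders):

    import psycopg2

    MAX_WRITES_PER_TXN = 100000  # the per-transaction write limit discussed above

    conn = psycopg2.connect("postgresql://root@localhost:26257/mydb?sslmode=require")

    def insert_in_chunks(conn, table, columns, rows, num_indexes):
        """Split a large batch insert so each transaction stays under the write limit.

        Assumes roughly (1 + num_indexes) KV writes per inserted row, where
        num_indexes counts secondary indexes; table and column names are
        trusted identifiers, not user input.
        """
        writes_per_row = 1 + num_indexes
        rows_per_txn = MAX_WRITES_PER_TXN // writes_per_row
        placeholder = "(" + ", ".join(["%s"] * len(columns)) + ")"
        stmt = "INSERT INTO " + table + " (" + ", ".join(columns) + ") VALUES "
        for i in range(0, len(rows), rows_per_txn):
            chunk = rows[i:i + rows_per_txn]
            with conn:  # psycopg2 commits the transaction when this block exits
                with conn.cursor() as cur:
                    values = ", ".join(cur.mogrify(placeholder, r).decode() for r in chunk)
                    cur.execute(stmt + values)

    # Hypothetical usage:
    # insert_in_chunks(conn, "mydb.events", ["id", "payload"], rows, num_indexes=2)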

You are correct, that only applies to <= 1.1.