Sudden increase in Upsert execution time

Hello,

I wrote a tool that runs bulk UPSERT statements. I'm running 8 instances of the program, and I'm seeing the following pattern in execution time:
1s, 2s, 2s, … the tool works through roughly 1000 statements, each finishing in under 10 seconds, and then suddenly one query takes 5 to 10 minutes or more.
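
For context, each statement is a multi-row UPSERT roughly along these lines; the table and column names here are only placeholders, not our real schema:

```sql
-- Placeholder table and columns; the real statements batch many more rows.
UPSERT INTO events (tenant_id, device_id, ts, value)
VALUES
  (1, 'dev-001', '2021-01-01 00:00:00+00', 21.5),
  (1, 'dev-002', '2021-01-01 00:00:00+00', 22.1),
  (1, 'dev-003', '2021-01-01 00:00:00+00', 20.9);
```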

This brings down the overall performance of the tool.
What is causing this behavior, and how can I fix it?

You are talking about statements, not transactions. Are all of the statements in a session running in the same transaction?

Hi @may98ank, welcome to the forum! Since SQL is such a flexible language, there are many factors that can influence performance, so in order to understand the issue we would need more details. Can you provide a simple and easy way to reproduce the performance problems that you're seeing?


Hi @kernfeld-cockroach, my application needs to run update queries on a table whose composite primary key has 5 columns. To make the updates faster, we converted the UPDATE statements into UPSERTs. Initially we saw a throughput of about 6k upserts per second, but as the number of rows grew, the upserts got slower and slower. We need the update performance to hold up even when the table has 100 million rows.
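
To make the shape concrete, here is roughly what the table and the converted statement look like; the real column names and types differ, so treat this only as a sketch:

```sql
-- Sketch only: the real table uses different names and types,
-- but the primary key is a composite of five columns.
CREATE TABLE IF NOT EXISTS metrics (
  region    STRING NOT NULL,
  tenant_id INT NOT NULL,
  device_id STRING NOT NULL,
  metric    STRING NOT NULL,
  ts        TIMESTAMPTZ NOT NULL,
  value     FLOAT,
  PRIMARY KEY (region, tenant_id, device_id, metric, ts)
);

-- Original form: one UPDATE per row, keyed on the full primary key.
-- UPDATE metrics SET value = 21.5
--   WHERE region = 'us' AND tenant_id = 1 AND device_id = 'dev-001'
--     AND metric = 'temp' AND ts = '2021-01-01 00:00:00+00';

-- Converted form: an UPSERT on the same primary key.
UPSERT INTO metrics (region, tenant_id, device_id, metric, ts, value)
VALUES ('us', 1, 'dev-001', 'temp', '2021-01-01 00:00:00+00', 21.5);
```
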
Can you help?
Thanks

@may98ank the good news is that CockroachDB can definitely scale up to hundreds of millions of rows. If you haven't already, I'd recommend applying the advice from our Optimize Statement Performance docs as well as our Bulk-insert best practices. If you're generating something like sequential IDs, you'll also want to apply the advice from How do I auto-generate unique row IDs in CockroachDB?. You could also try running EXPLAIN ANALYZE on one of the slow statements.
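
For example, you could wrap one of the slow statements in EXPLAIN ANALYZE to see where the time is going; the UPSERT below is just a stand-in for one of yours:

```sql
-- Substitute one of your real upserts; the table and values here are made up.
EXPLAIN ANALYZE
  UPSERT INTO metrics (region, tenant_id, device_id, metric, ts, value)
  VALUES ('us', 1, 'dev-001', 'temp', '2021-01-01 00:00:00+00', 21.5);
```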

If none of that succeeds, can you provide some more detail about your schema and queries?