How to handle purely temporary data?

Hi
A certain type of SQL statement executed by one of my programs does a lot of lookups for a relatively small set of values (from e.g. 1 up to ~50K values) within a master table that can contain hundreds of millions of rows (e.g. 300M).

Executing a…
select col1, count(*) from table X where col2 in (<50k_single_explicit_values>) group by col1 order by count(*) desc limit 1
…is in general bad practice, so previously, with MariaDB+MyRocks, I used to first insert those values into a temporary table and then execute…
select col1, count(*) from table X where col2 in (select col2b from temp_table_subprcs_X) group by col1 order by count(*) desc limit 1

I have already performed a few simple manual tests, and the execution of such an SQL statement in CRDB was great (for data present in the block cache: ~2.5 seconds in MariaDB+MyRocks vs. ~1.7 seconds in CRDB), but I’m not sure what to do about the temporary table.

So far I have partially rewritten my program (Python, using multiple sub-processes to handle the data to be processed in parallel) to misuse “normal” tables (as temporary lookup storage) which are dedicated to each subprocess (still rewriting, I haven’t tested all this yet :stuck_out_tongue: ); a rough SQL sketch of the flow follows the list below:

  1. program inserts into [normal_table_having_subProcessID_in_its_name].
  2. program runs the above SQL and retrieves the result.
  3. program performs a “rollback” (to avoid any useless writes to disk for data of step #1).
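
A minimal sketch of that flow in SQL - the table name, column names and types here are made up for illustration only and are not my actual schema:

-- One “normal” lookup table per subprocess; all names/types are illustrative.
CREATE TABLE IF NOT EXISTS lookup_subproc_7 (col2b INT PRIMARY KEY);

BEGIN;

-- Step 1: load the 1..~50K values to search for.
INSERT INTO lookup_subproc_7 (col2b) VALUES (101), (102), (103);

-- Step 2: find the best-matching col1 in the master table.
SELECT col1, count(*)
  FROM x
 WHERE col2 IN (SELECT col2b FROM lookup_subproc_7)
 GROUP BY col1
 ORDER BY count(*) DESC
 LIMIT 1;

-- Step 3: throw the lookup rows away instead of committing them.
ROLLBACK;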

I did read about the experimental temporary tables (here), but the documentation doesn’t mention many details - especially “why” I should use them vs. “normal” tables.
In general, using such a temporary table just seems to add complexity (table disappears after disconnecting + gc activity).

Any recommendations about how to handle temporary data?
When dealing with temp data I primarily want to avoid all kinds of disk writes/reads.

Is it “kind of ok” to do it the way I’m about to (meaning to perform an “insert” and then roll back)? E.g. I’m not sure whether CRDB has a WAL that would be put under pressure by those two operations - I wasn’t able to find such info, and I couldn’t make useful observations because the host I got is remote & quite big, so such “writes” might have gone unnoticed during my initial tests…

Or would it be better to use such an experimental temporary table?

Or can a “normal” table be created so that its definition is stable but it always keeps its data in RAM and does not ever write any [data, logs, anything] to disk?
E.g. in MariaDB I used to do that by executing “create table blah engine=memory”.

Thank you!

Hello Stefano,

Thanks for the question. For your use case, I think a materialized view or a CTE could potentially be used.

A materialized view makes sense if you don’t want to query the main table directly; however, it does write the query results to disk. I think it makes a lot of sense if you want to repeatedly query the same subset of the table.
https://www.cockroachlabs.com/docs/stable/views.html#materialized-views
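
In case it helps, a minimal sketch of that - the view name and the filter are just placeholders, not taken from your schema:

CREATE MATERIALIZED VIEW x_subset AS
  SELECT col1, col2 FROM x WHERE col1 = 'some_file_id';  -- placeholder predicate

-- The stored results do not update automatically; refresh them when the underlying data changes.
REFRESH MATERIALIZED VIEW x_subset;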

A CTE is another option: Common Table Expressions | CockroachDB Docs

If you know all the queries you want to run upfront, you could batch them together with a CTE; this would avoid a write to disk.
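
As a rough sketch of that idea (reusing the table/column names from your example queries; the literal values are just placeholders), the lookup values could travel inline with the query via a CTE over a VALUES clause instead of via a table:

-- The lookup values are part of the query itself, so nothing is written to a table first.
WITH lookup (col2b) AS (VALUES (101), (102), (103))
SELECT col1, count(*)
  FROM x
 WHERE col2 IN (SELECT col2b FROM lookup)
 GROUP BY col1
 ORDER BY count(*) DESC
 LIMIT 1;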

I’m curious, though: what is the use case of the master table? Are concurrent operations happening while you’re doing the read? And is there a particular reason why you’re trying to avoid writes to disk?

Thank you Richard - materialized views and CTEs are interesting. I don’t think they fit my use case, but I’ll try not to forget about them; they might be useful for something else in the future.

I guess that from a certain perspective my master table is similar to something like a deduplicated filesystem cache.
It contains chunks of bytes of files (one row per chunk) which either

  1. were identified as being very similar to chunks of bytes belonging to other files (meaning the percentage of common chunks between the file being processed and the file whose data is already present in the master table is high), or
  2. belong to files that were just processed but which were identified as not being very similar to the other files whose data is already present in the master table.

The master table can reach a maximum total number of rows → when that limit is reached, the master table is partially purged by deleting the chunks belonging to the files that were the least successful at finding other files with similar contents (both the historical number of hits & the age play a role in deciding what to delete from the master table).

Therefore, what I do is:

  1. transfer the chunks of bytes that I want to search for from my program to the database.
  2. search the master table to find the already-known file that has the highest number of chunks in common with what I’m searching for.
  3. if the number of common chunks between what I’m processing and the best match is higher than a certain threshold, I record a relationship between the two somewhere else, get rid of the data transferred in step #1, and exit this portion of the program.
    On the other hand, if that number is lower than the threshold, I add all chunks of the file that I am processing into the master table → the chunks of this file become a new candidate for what will be processed next.

Yes, multiple files are being processed in parallel, so the master table gets concurrent reads & writes.

In MariaDB, in step #1 I can insert all the chunks that I would like to search for into a temporary table hosted in RAM → in step #2 I do the search → in step #3, if the threshold of common chunks is not reached, I can add all the chunks directly into the master table by reading them from that RAM-hosted temporary table, and this is the only place where a write to disk occurs.
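
To make the comparison concrete, this is roughly what that looks like in MariaDB - the names and types here are illustrative, not my real schema:

-- Step 1: a temporary table kept entirely in RAM.
CREATE TEMPORARY TABLE chunks_tmp (col2b BIGINT) ENGINE=MEMORY;
INSERT INTO chunks_tmp (col2b) VALUES (101), (102), (103);

-- Step 2: search the master table for the best match.
SELECT col1, count(*) FROM x
 WHERE col2 IN (SELECT col2b FROM chunks_tmp)
 GROUP BY col1 ORDER BY count(*) DESC LIMIT 1;

-- Step 3, only if the similarity threshold is not reached: the single write to disk.
INSERT INTO x (col1, col2) SELECT 'new_file_id', col2b FROM chunks_tmp;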

I’m trying to avoid writes to disk just because it’s nice in general to have those resources available for something more useful :slight_smile: .
Having to use something that performs writes to disk for steps #1/#2 would mean performing a useless “write” to disk for each single search, and then potentially an additional write of the same data later in step #3 if not enough common chunks are found.

Concerning the experimental temporary tables that I mentioned in my original question:
do they generate writes to disk or not? (maybe they are hosted in RAM and use the memory budget set by the “--max-sql-memory” parameter?)

Thank you

I believe using temp tables will still perform writes.

I don’t think CockroachDB has the concept of in-memory tables right now.

Thanks a lot Richard.
I guess that I’ll try to use temporary tables + rolling back the “insert”
or the same with normal tables
or to use materialized views (maybe the overhead of creating and then immediately dropping them is lower than inserting & rolling back the transaction, who knows)
or maybe CTEs (materialized).

Cheers!

FYI to everybody that reads this:
I have tried all the options I was aware of, but the performance (in relation to handling temporary data) was always at most ~50% of the one I had/have with MariaDB+MyRocks.

For whoever reads this post because of a related problem: a key parameter was “kv.transaction.max_intents_bytes”.
I’m absolutely not sure what that parameter actually relates to (please also read this ).

In my case, when CRDB went to 100% CPU consumption, it always printed this in its logs:

a transaction has hit the intent tracking limit (kv.transaction.max_intents_bytes); is it a bulk operation? Intent cleanup will be slower.

In my case I had to increase that setting to 16 MiB with…

SET CLUSTER SETTING kv.transaction.max_intents_bytes = 16777216;

…to completely avoid that message, but completely avoiding it might not be the perfect solution - if I remember correctly I didn’t see much performance improvement beyond ~4 MiB, but I might be wrong.
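
To check what the cluster currently uses before (and after) changing it, you can run:

SHOW CLUSTER SETTING kv.transaction.max_intents_bytes;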

Cheers