"In-Memory" CockroachDB?


Is it possible to store (or cache) an entire table in memory in CockroachDB to achieve minimal response latency and maximum throughput for range-read operations (e.g. analytics queries)?

Assume you have ~2 TB of RAM total across a 20-node cluster (100 GB per node) and want to keep a 100 GB table with a billion time-series events "in memory". The table is continuously written (events arrive every minute) and continuously read by users (a typical select scans a range of about 1 million rows).

Can CockroachDB be used efficiently in this case? I know that CockroachDB is OLTP, but a huge disadvantage of OLAP systems is the lack of transactions or a serializable isolation level. I also know about CDC, but it is not suitable for me.

Maybe having enough RAM would allow me to run analytical queries quickly enough on CockroachDB.



You can set up a larger RocksDB cache with the --cache flag (see https://www.cockroachlabs.com/docs/stable/start-a-node.html). We don’t have any higher-level table caches at this time.
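For example, a node start command might look like the following; the store path, join address, and the 50% cache size are illustrative values, not recommendations:

```shell
# Give RocksDB a larger block cache (accepts bytes or a percentage
# of system RAM). On a node dedicated to CockroachDB, a large cache
# keeps hot table data in memory after the first read.
cockroach start \
  --cache=50% \
  --store=/mnt/data \
  --join=node1:26257 \
  --insecure
```

Note that the cache is populated on demand, so reads are served from memory only after the relevant blocks have been read from disk once.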

You can also set up CockroachDB with an in-memory store (see the --store flag). Of course, you lose data durability by doing that (so we don’t recommend it for production).
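A sketch of an in-memory store configuration (again, the join address and the 90% size limit are just example values):

```shell
# type=mem keeps all data for this store in RAM; size caps how much
# of system memory the store may use. Data on this node is lost on
# restart, so this setup is not recommended for production.
cockroach start \
  --store=type=mem,size=90% \
  --join=node1:26257 \
  --insecure
```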