Is it possible to store (or cache) a whole table in memory in CockroachDB to achieve minimal response latency and maximum throughput for range-read operations (e.g., analytics queries)?
Assume a 20-node cluster with ~2 TB of RAM in total (100 GB per node), and that you want to keep a 100 GB table with a billion time-series events "in memory". The table is continuously written to (events arrive every minute) and continuously read by users (a typical SELECT covers a range of about 1 million rows).
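For concreteness, a hypothetical schema and range read matching this workload might look like the following (the table and column names are illustrative assumptions, not part of the original question):

```sql
-- Hypothetical time-series events table (names are illustrative).
CREATE TABLE events (
    ts      TIMESTAMPTZ NOT NULL,
    series  INT NOT NULL,
    value   FLOAT NOT NULL,
    PRIMARY KEY (series, ts)
);

-- Typical range read: on the order of a million rows
-- for one series over a time window.
SELECT ts, value
FROM events
WHERE series = 42
  AND ts BETWEEN '2023-01-01' AND '2023-01-08'
ORDER BY ts;
```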
Can CockroachDB be used efficiently in this case? I know that CockroachDB is an OLTP system, but a huge disadvantage of OLAP systems is the lack of transactions or a serializable isolation level. I also know about CDC, but it is not suitable for me.
Maybe having enough RAM would allow me to run analytical queries quickly enough on CockroachDB.
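For context, CockroachDB does support purely in-memory stores via the `--store=type=mem` flag on `cockroach start`. A sketch of starting one node this way (the addresses and `--insecure` flag are placeholder local-cluster values, and a mem store does not survive a node restart):

```shell
# Start a node whose store lives entirely in RAM (here 90% of available memory).
# Data in a type=mem store is lost when the node stops.
cockroach start \
  --insecure \
  --store=type=mem,size=90% \
  --listen-addr=localhost:26257 \
  --join=localhost:26257,localhost:26258,localhost:26259
```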