A Few Questions I Can't Name

We have a device that creates about 1,500 records per day on average.

Records are saved in a table like this:

Table name: Record
id - bigint - primary key
deviceid - int (indexed)
createddate - timestamp with time zone (indexed)
…other non-indexed columns…
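
In CockroachDB DDL the table would look roughly like this (a minimal sketch; only the names and types above come from our schema, the default on id and everything else is an assumption):

```sql
-- Minimal sketch of the table described above.
CREATE TABLE record (
    id          BIGINT PRIMARY KEY DEFAULT unique_rowid(),  -- assumed default
    deviceid    INT NOT NULL,
    createddate TIMESTAMPTZ NOT NULL,
    -- ...other non-indexed columns...
    INDEX (deviceid),
    INDEX (createddate)
);
```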

This table can grow to billions of rows.

Inserted rows are never updated; the data is only ever read.

Reads usually hit records created within the past 3-4 months; records older than that are rarely accessed.

The typical query is: fetch the records of the device with id x, created between dates y and z.
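
In SQL, something like this (the placeholders are mine):

```sql
-- Typical read: all records for one device inside a date window.
SELECT *
FROM record
WHERE deviceid = $1
  AND createddate BETWEEN $2 AND $3
ORDER BY createddate;
```

I assume a composite index on (deviceid, createddate) would serve this query better than the two single-column indexes above, but correct me if I am wrong.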

The node specification we are considering (node count will be increased with database size; minimum 3 nodes):

  • 16-core CPU (32 vCPUs) (AMD Ryzen™ 9 5950X)
  • 6× 4 TB NVMe
  • 128 GB DDR4 ECC RAM

Not all of these machines will be dedicated to CockroachDB, but more than 80% of each machine's capacity will go to CockroachDB.

After this preliminary information, here are my questions:

  1. If we assume 500 thousand devices, that works out to roughly 9-10 thousand inserts per second (500,000 devices × 1,500 records/day ≈ 8,700 inserts per second on average, more at peak). Does this create any problems? Can the database handle this load? (See the first sketch after the questions.)

  2. Can we move rarely used data to cheaper nodes (HDD or SAS rather than NVMe)? Since the database will grow rapidly, moving records that are rarely or never accessed (those older than 3-4 months) to cheaper nodes would significantly reduce cost. (See the partitioning sketch after the questions.)

  3. In addition, I have one more table, named Device. This table is updated constantly, but usually only one row at a time. Is there a way to lock a single row inside a transaction? (See the locking sketch after the questions.)
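
To make the questions concrete, here are rough sketches of what I mean. None of this is tested, and any table or column name not in the schema above is a placeholder.

For question 1, the write shape would be multi-row inserts batched per flush, rather than 9-10 thousand single-row statements:

```sql
-- Hypothetical ingest batch; the ids come from unique_rowid(),
-- the device ids and timestamps are placeholders.
INSERT INTO record (id, deviceid, createddate)
VALUES
    (unique_rowid(), 1001, now()),
    (unique_rowid(), 1002, now()),
    (unique_rowid(), 1003, now());
```

For question 2, what I am imagining is something like CockroachDB's archival-partitioning pattern: partition the table by createddate and pin old partitions to stores tagged as HDD via zone configurations. As far as I can tell this requires an Enterprise license, the partition column must be a prefix of the primary key (so the schema above would need adjusting), and the fixed date boundary would have to be re-partitioned periodically:

```sql
-- Nodes would be started with tagged stores, e.g.:
--   cockroach start --store=path=/mnt/nvme,attrs=nvme ...
--   cockroach start --store=path=/mnt/hdd,attrs=hdd ...
ALTER TABLE record PARTITION BY RANGE (createddate) (
    PARTITION archived VALUES FROM (MINVALUE) TO ('2021-01-01'),
    PARTITION recent   VALUES FROM ('2021-01-01') TO (MAXVALUE)
);

-- Pin old data to the cheap disks, recent data to NVMe.
ALTER PARTITION archived OF TABLE record
    CONFIGURE ZONE USING constraints = '[+hdd]';
ALTER PARTITION recent OF TABLE record
    CONFIGURE ZONE USING constraints = '[+nvme]';
```

Is this the right approach, or is there a better way?

For question 3, I believe CockroachDB (v20.1+) supports SELECT ... FOR UPDATE, which is what I have in mind (the lastseen column is made up):

```sql
BEGIN;
-- Lock just this one device row for the rest of the transaction.
SELECT * FROM device WHERE id = $1 FOR UPDATE;
UPDATE device SET lastseen = now() WHERE id = $1;
COMMIT;
```

Is that the recommended way to lock a single row?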