Slow performance degradation over time

We are close to using CRDB for a project and have set up a 5-node cluster running the latest version, 21.1.5. Our use case is very simple, but the workload is both insert- and update-heavy: we want to be able to insert 5000 rows/s and update 5000 rows/s. We reached this level after some effort (e.g. using bulk INSERT and UPSERT statements and improving the SQL queries), but we see the processing rate degrade over time.

There is only one table and its schema is given by:

CREATE TABLE IF NOT EXISTS db.schema.input (
    f1 STRING(16),
    f2 STRING(16),
    f3 INET,
    f4 INET,
    f5 INET,
    f6 INT4,
    f7 INT4,
    f8 INT,
    f9 INT DEFAULT 0,
    f10 INT2,
    f11 STRING,
    f12 STRING,
    f13 STRING,
    f14 "primary" PRIMARY KEY (f1, f3, f6, f8),
    INDEX index_1 (f3, f6, f7, f8),
    INDEX index_2 (f8, f9) USING HASH WITH BUCKET_COUNT = 8
);

There are 8 Cockroach Loader Programs (CLPs) pushing data to the 5-node cluster. Each operates on batches of 200 JSON records read from a Kafka cluster, so one batch usually consists of 100 inserts and 100 updates. The CLP code (written in Go) converts the 100 inserts into a single bulk INSERT and, similarly, converts the updates into a single bulk UPSERT (a sketch is given below).
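
For concreteness, here is a minimal sketch of how one such batch can be collapsed into a single multi-row UPSERT using database/sql and lib/pq. This is not our production code: the Record type, the buildUpsert and flush helpers, and the connection string are illustrative assumptions.

package main

import (
	"database/sql"
	"fmt"
	"log"
	"strings"

	_ "github.com/lib/pq" // CockroachDB speaks the PostgreSQL wire protocol
)

// Record holds one decoded JSON message from Kafka (hypothetical shape,
// mirroring the f1..f13 columns of the schema above).
type Record struct {
	F1, F2        string
	F3, F4, F5    string // INET values passed as strings
	F6, F7        int32
	F8, F9        int64
	F10           int16
	F11, F12, F13 string
}

// buildUpsert renders one multi-row statement with numbered placeholders,
// e.g. UPSERT INTO ... VALUES ($1,...,$13), ($14,...,$26), ...
func buildUpsert(n int) string {
	const cols = 13
	rows := make([]string, 0, n)
	for i := 0; i < n; i++ {
		ph := make([]string, cols)
		for j := 0; j < cols; j++ {
			ph[j] = fmt.Sprintf("$%d", i*cols+j+1)
		}
		rows = append(rows, "("+strings.Join(ph, ",")+")")
	}
	return "UPSERT INTO db.schema.input (f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13) VALUES " +
		strings.Join(rows, ",")
}

// flush sends a whole batch to the cluster as one statement.
func flush(db *sql.DB, batch []Record) error {
	args := make([]interface{}, 0, len(batch)*13)
	for _, r := range batch {
		args = append(args,
			r.F1, r.F2, r.F3, r.F4, r.F5,
			r.F6, r.F7, r.F8, r.F9, r.F10,
			r.F11, r.F12, r.F13)
	}
	_, err := db.Exec(buildUpsert(len(batch)), args...)
	return err
}

func main() {
	// Hypothetical connection string; a real deployment would use TLS certs.
	db, err := sql.Open("postgres", "postgresql://clp@localhost:26257/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	batch := []Record{{F1: "a", F3: "10.0.0.1", F6: 1, F8: 1}}
	if err := flush(db, batch); err != nil {
		log.Fatal(err)
	}
}

The bulk INSERT path is identical apart from the keyword; collapsing 100 rows into one statement avoids a per-row network round trip.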

Here is the queries-per-second graph for a 36-hour period:

When we started we were processing at nearly 10 KFPS (kilo flows, i.e. records, per second), but the rate has since come down to 8.31 KFPS. Any advice would be highly appreciated.

We can probably close this topic as a duplicate. I created a new topic after not being able to find this one.