I’ve encountered strange disk usage twice while running the load generators from https://github.com/cockroachdb/loadgen against a 3-node cluster of CockroachDB 1.1.2 on Cloudstack instances with 10GB disks. Hosts have spinning hard drives.
Running the Yahoo! Cloud Serving Benchmark (YCSB) with 95% reads and 5% writes against node1, it eventually filled that node's disk and the node shut down. The other nodes still had plenty of disk space and kept operating.
In the admin UI the ycsb database appeared to have a very modest size, tens of megabytes.
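To cross-check the admin UI number against actual on-disk usage, something like the following works (the store path here is illustrative, not the exact one used):

du -sh /var/lib/cockroach   # on-disk size of the node's store directory
df -h /var/lib/cockroach    # remaining space on the volume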
I manually deleted all files in the data directory of node1 and restarted it. It came back as a new node and the data was replicated back to it, filling its disk to about 90%. Now the admin UI shows the size of the ycsb database as 8 gigabytes, spread evenly across all nodes. I deleted the database and the disk space was reclaimed by compaction after about 25 hours.
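The wipe and restart were roughly the following; the paths, hostnames and --join list are illustrative rather than the exact values used:

rm -rf /var/lib/cockroach/*     # wipe the node's store
cockroach start --insecure --host=roach1 --store=/var/lib/cockroach --join=roach2:26257,roach3:26257 --background

The database was dropped with:

cockroach sql --insecure --host=roach2 -e 'DROP DATABASE ycsb;'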
Decommissioned the old node1.
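For reference, the decommission step was something like this, with the node ID and host being approximate:

cockroach node decommission 1 --insecure --host=roach2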
Command: ./tpcc -drop=true -load=true postgres://root@roach1:26257/tpcc?sslmode=disable
Why did the disks not fill up evenly?
Why did the admin UI show a smaller size than the actual on-disk usage?
Why did the disk not fill to 100% when node1 was rebuilt and the data replicated back? Might data have been lost here?
Running TPC-H, same setup, doing 100% analytical reads this time. After all the test data has been inserted and the benchmark reads start, the disk of node1 fills up pretty quickly: in about 4 hours it grows from 1.2 GB to 8.2 GB, at which point that node shuts down. The admin UI shows the size of the database as 2.8 GB. Node2 has 2.4 GB of data and node3 has 8.2 GB.
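The per-node numbers above are on-disk store sizes; they can be tracked with a simple poll like this (path again illustrative):

while true; do du -sh /var/lib/cockroach; sleep 300; done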
I did the same thing: deleted all data on disk and restarted the node. Data was replicated back to the new node, but this time it ended up with 2.3 GB of data. The admin UI still shows the size of the database as 2.8 GB.
How could the reads increase disk usage?
Is there a way to delete the data for one node but keep its identity, for example by deleting all SSTs?
Command: ./tpch -drop=true -load=true postgres://root@roach1:26257/tpch?sslmode=disable