How to handle 'ranges_unavailable' and 'ranges_underreplicated'?

I’m trying to upgrade CockroachDB from v1.0.6 to v1.1.2 (to use the node-removal functionality introduced in v1.1).
I’m following these instructions (, but my cluster’s `ranges_unavailable` and `ranges_underreplicated` values are not 0. How can I handle this? Nodes 4 and 5 are test nodes that will be removed.

I know node 4 holds 6 of the replica leaders, but I already deleted its data folder from the filesystem. These nodes are all for testing, so I don’t mind losing them; I just want to know what’s going on.

(I’m using CockroachDB on Windows 10 Pro with the binary distribution.)
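For what it’s worth, one way to watch those two values outside the admin UI is to scrape a node’s Prometheus metrics endpoint. This is just a sketch assuming the default HTTP port 8080 on a local node; adjust the host and port for your setup:

```shell
# Query the node's metrics endpoint and filter for the two range-health
# gauges; both should settle at 0 on a healthy, fully replicated cluster.
curl -s http://localhost:8080/_status/vars \
  | grep -E '^ranges_(unavailable|underreplicated)'
```

As you noticed, these gauges can fluctuate briefly while rebalancing is in progress, so a transient nonzero value right after a node joins or leaves isn’t necessarily a problem on its own.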

Thank you :slight_smile:

Hmm, it’s weird: the values keep changing. Sometimes they’re 0 and sometimes they’re not.

I went ahead and updated the binary anyway, but it failed :disappointed_relieved:
Before the upgrade I had 2 databases of my own and a single table with 20k rows of data. Afterwards, only the databases and the table schema remained; all of the row data is gone.

I also removed the nodes following the instructions (, but the admin UI keeps showing a version warning (though that’s not a big problem).


Any ideas?

@jykeith How exactly did you delete the data from node 4? Was it running when you did that? It also seems like node 5’s data may be missing. With a replication factor of 3, only 1 failure can be tolerated, so if 2 nodes are missing data, then there is a problem.
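To spell out the arithmetic behind that: each range needs a majority (quorum) of its replicas alive to stay available, so the number of failures a cluster tolerates is the replication factor minus the quorum size. A quick shell illustration:

```shell
# With replication factor R, a range needs a quorum of R/2 + 1 replicas
# (integer division) to stay available, so it tolerates R - quorum failures.
R=3
quorum=$(( R / 2 + 1 ))
tolerated=$(( R - quorum ))
echo "replication factor $R: quorum $quorum, tolerates $tolerated failure(s)"
# -> replication factor 3: quorum 2, tolerates 1 failure(s)
```

With R=3 that is a quorum of 2 and a tolerance of exactly 1 failed node, which is why losing both node 4 and node 5 can make ranges unavailable.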

Hi @dan
I added nodes 4 and 5 to the cluster and then terminated both processes less than 10 minutes after they started; the two were never running at the same time. It was a test of adding a node. I expected that only data on nodes 4 and 5 would be lost (though I didn’t think there would be any such data), but instead all of the data was gone.

@jykeith, yeah, that sounds like the problem. The default replication factor is 3, which allows for one failure, but you’ve removed 2 nodes. The correct way to do this is to fully decommission the first node (as described on and, after that has finished, then decommission the second node.
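Concretely, the one-at-a-time sequence looks something like this. This is only a sketch: it assumes an insecure cluster reachable on localhost and that your test nodes have node IDs 4 and 5 (check the admin UI or `cockroach node status` for the real IDs); `cockroach node decommission` is the subcommand added in v1.1:

```shell
# Decommission node 4; by default this waits until all of its replicas
# have been transferred to other nodes before returning.
cockroach node decommission 4 --insecure --host=localhost

# Verify replica counts and decommissioning status before moving on.
cockroach node status --decommission --insecure --host=localhost

# Only once node 4 is fully drained, decommission node 5 the same way.
cockroach node decommission 5 --insecure --host=localhost
```

Decommissioning one node at a time keeps every range at or above quorum while its replicas are moved, which is what deleting the data folders out from under a running cluster skips.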