Scaling down issue and removing deleted nodes from the console

I've run some traffic using this command:

kubectl run workload-run -it --image=cockroachdb/cockroach:v19.1.2 --rm --restart=Never --namespace=thesis-crdb -- workload run bank --duration=5m 'postgresql://root@k8crdb-cockroachdb-public:26257?sslmode=disable'

I watched the traffic in the console, and then decided to scale from 3 to 5 nodes:
kubectl scale statefulset k8crdb-cockroachdb --replicas=5 --namespace=thesis-crdb

This works fine: the new nodes get auto-balanced and the metrics are working.

However, when I scale down again:

kubectl scale statefulset k8crdb-cockroachdb --replicas=3 --namespace=thesis-crdb

The traffic simulation ends with:

Error: pq: server is not accepting clients
pod "workload-run" deleted
pod thesis-crdb/workload-run terminated (Error)

Is this normal? Or shouldn't it be possible to scale down while the workload is running?
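
While reading the workload docs I noticed a --tolerate-errors flag, so maybe the runner is simply expected to die on the first connection error unless that flag is passed? Something like this (I haven't tested this variant myself):

kubectl run workload-run -it --image=cockroachdb/cockroach:v19.1.2 --rm --restart=Never --namespace=thesis-crdb -- workload run bank --duration=5m --tolerate-errors 'postgresql://root@k8crdb-cockroachdb-public:26257?sslmode=disable'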

Another issue I found: after scaling down, with the 2 dead nodes from my case above, the metrics page is bugged and doesn't show a graph, even though transactions are still taking place.

And how can I remove the dead nodes after I have scaled down? The pods are removed, but the nodes are still visible in the console, which is what seems to break the metrics.
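
From the CockroachDB docs I'm guessing the extra nodes have to be explicitly decommissioned rather than just killed by the scale-down, along these lines (crdb-client is just a throwaway pod name, the node IDs 4 and 5 are an assumption based on my scale-up to 5 nodes, and I haven't verified that this actually clears them from the console):

# find the IDs of the dead nodes (insecure mode, since my cluster runs with sslmode=disable)
kubectl run crdb-client -it --image=cockroachdb/cockroach:v19.1.2 --rm --restart=Never --namespace=thesis-crdb -- node status --insecure --host=k8crdb-cockroachdb-public

# decommission them (4 and 5 are assumed IDs)
kubectl run crdb-client -it --image=cockroachdb/cockroach:v19.1.2 --rm --restart=Never --namespace=thesis-crdb -- node decommission 4 5 --insecure --host=k8crdb-cockroachdb-public

If that's the right approach, should the decommission happen before the kubectl scale, while the pods are still alive?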