Thanks for the reply Ron. I was able to reach the Admin UI in the second region once that region's nodes came back up, but that was all: most Admin UI pages were not working, and the overview page showed all the nodes from the primary region as dead.
I was not able to see the logs, and since this was a test environment and we needed to resume testing quickly, I deleted the cluster and restarted from scratch. I am going to re-test the upgrade process following the link you provided, but I don’t see anything different from what I did in the steps below, apart from not setting cluster.preserve_downgrade_option as suggested on that page:
1- Update the StatefulSet in the first region, wait for the changes to be applied and the pods to come back online.
2- Move to the second region and repeat the same steps.
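For reference, the metadata change I'm applying per region is roughly the sketch below (the label name and StatefulSet name are placeholders, not my actual values):

```yaml
# Hypothetical excerpt of the StatefulSet manifest — only pod template
# metadata is changing, not the CockroachDB image or its arguments.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb          # placeholder name
spec:
  template:
    metadata:
      labels:
        example.com/zone: region-1   # hypothetical new label
```

I then apply it and wait for the rollout with something like `kubectl apply -f statefulset.yaml && kubectl rollout status statefulset/cockroachdb` before moving to the second region.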
Is there any other documentation on additional steps required? Is the cluster setting change required if I am only adding more metadata to the Kubernetes pods? I will retry the same steps and update this question if I get more details.
Again, thanks for your help and support.