Rolling restart slowness with Kubernetes and Akka sharded cluster

We run multiple roles representing different types of Akka sharded clusters in Kubernetes. In our latest change we added several new sharded clusters, and we noticed that during a rolling restart, communication between pods across the sharded clusters becomes slow. Our theory is that the shard regions in each pod are taking longer to resolve the locations of the sharded actors. The observed latency is around 10 seconds. Does anybody have an idea what could be causing the slowness?
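
For reference, a minimal sketch of the kind of per-role sharding configuration involved (the role name is illustrative; the sharding settings shown are standard Akka Cluster Sharding keys, listed here at their defaults since we have not overridden them):

```
akka.cluster {
  roles = ["orders"]    # illustrative role name, one per sharded cluster type

  sharding {
    role = "orders"            # host entities only on nodes carrying this role
    retry-interval = 2s        # default: how often a shard region retries coordinator requests
    rebalance-interval = 10s   # default: how often shards are checked for rebalancing
  }
}
```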