I ran into some distribution issues with `ShardedDaemonProcess`. For example, let's say you need to build 3 projections and you want 3 workers for each projection. For each projection we make a call to `shardedDaemonProcess.init("projection-name", 3, _ => projectionBuildingBehavior())`, so in total we start 9 daemon processes. The problem is that if you scale the cluster to 9 nodes, 6 of them will sit completely idle: the entity ID for each process is just a number, and process #1 of each `.init` call will end up on the same node, because all of them do `"1".hashCode % 3` to determine the shard.
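To make the collision concrete, here is a minimal plain-Java sketch (no Akka dependency) of the hash-based shard computation described above, assuming shard = `abs(entityId.hashCode % numberOfProcesses)` with the process number as the entity ID:

```java
public class ShardAssignment {
    // Hypothetical reproduction of the shard computation described above:
    // the entity ID is just the process number rendered as a string.
    static int shardFor(String entityId, int numberOfProcesses) {
        return Math.abs(entityId.hashCode() % numberOfProcesses);
    }

    public static void main(String[] args) {
        // Every .init call uses the same IDs "0".."2", so every call
        // produces the identical ID -> shard mapping, and process #N of
        // each call lands on the same node.
        for (int i = 0; i < 3; i++) {
            System.out.println("process " + i + " -> shard "
                + shardFor(String.valueOf(i), 3));
        }
    }
}
```

Since the mapping is a pure function of the ID string and the process count, all three `.init` calls compute the same assignment independently.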
One workaround would be to have a single `.init` call for all projections and then, based on the process number, determine which projection worker to spawn. But having separate `.init` calls is much more intuitive.
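That workaround could look roughly like this. The sketch below (plain Java, no Akka dependency; the projection names and constants are made up) shows just the process-number decoding you would do inside the single `.init` call's behavior factory, with 9 processes covering 3 projections x 3 workers:

```java
import java.util.List;

public class ProjectionDispatch {
    // Hypothetical projection names; in a real app these come from your
    // projection definitions.
    static final List<String> PROJECTIONS = List.of("proj-a", "proj-b", "proj-c");
    static final int WORKERS_PER_PROJECTION = 3;

    // Decode a daemon process number into (projection, worker index):
    // processes 0..2 serve proj-a, 3..5 serve proj-b, 6..8 serve proj-c.
    static String projectionFor(int processNumber) {
        return PROJECTIONS.get(processNumber / WORKERS_PER_PROJECTION);
    }

    static int workerFor(int processNumber) {
        return processNumber % WORKERS_PER_PROJECTION;
    }

    public static void main(String[] args) {
        int total = PROJECTIONS.size() * WORKERS_PER_PROJECTION;
        for (int n = 0; n < total; n++) {
            System.out.println("process " + n + " -> "
                + projectionFor(n) + ", worker " + workerFor(n));
        }
    }
}
```

The single `.init` would then be started with 9 processes, and the behavior factory would spawn the worker for `projectionFor(n)` with index `workerFor(n)`. Because all 9 entity IDs are distinct, they hash to distinct shards instead of colliding.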
I wonder: should `ShardedDaemonProcess` be modified to have better overall distribution, or is that not really possible? In the latter case it would still be good to update the docs to describe this behavior.