It is documented that:

> This cluster tool is intended for small numbers of consumers and will not scale well to a large set. In large clusters it is recommended to limit the nodes the sharded daemon process will run on using a role.
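For context, restricting the daemon processes to a role looks roughly like this. This is a sketch assuming a typed `ActorSystem`; the `TagProcessor` behavior, the tag list, and the `"worker"` role name are hypothetical placeholders:

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.cluster.sharding.typed.ShardedDaemonProcessSettings
import akka.cluster.sharding.typed.scaladsl.ShardedDaemonProcess

// Hypothetical worker behavior and tag list.
object TagProcessor {
  def apply(tag: String): Behavior[Nothing] = ???
}
val system: ActorSystem[_] = ???
val tags = Vector("tag-0", "tag-1", "tag-2")

// Run the daemon processes only on nodes that have the "worker" role,
// per the documentation's recommendation for larger clusters.
ShardedDaemonProcess(system).init[Nothing](
  name = "tag-processor",
  numberOfInstances = tags.size,
  behaviorFactory = (i: Int) => TagProcessor(tags(i)),
  settings = ShardedDaemonProcessSettings(system).withRole("worker"),
  stopMessage = None)
```

The role must of course also be assigned to the intended nodes, e.g. via `akka.cluster.roles = ["worker"]` in their configuration.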
But how large is too large for Sharded Daemon Process? And what could go wrong if it were used in a large cluster?
I can’t see that this should be a big problem. Maybe the caveat refers to the keep-alive messages that are sent to each daemon process periodically. Nowadays those are sent from a Cluster Singleton; originally they were sent from each node, so perhaps that was the concern.
There is also a small amount of state in Distributed Data to support coordinated scaling of the daemon processes, but I don’t think that should be a problem.
@johanandren Do you recall anything else? Shall we remove the comment from the documentation?
It was originally about every cluster node pinging every sharded daemon process, yes.
I think it might still not be a good idea for large numbers of daemon processes, say tens of thousands and upwards, but cluster size should not matter at all anymore. I’ll update the docs.
That’s great! Thank you guys.