For our open-source project, Eclipse Ditto, we make heavy use of Akka Cluster tools and Distributed Publish Subscribe.
In our “IoT” scenario, each “Thing” (a.k.a. device) is represented as a PersistenceActor (sharded via Cluster Sharding). Every event emitted by these Thing PersistenceActors is published to the distributed pub/sub topic “thing.events”.
The events can then be consumed via WebSocket or SSE at a “gateway” service, which subscribes to the above topic via pub/sub.
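To make the setup concrete, here is a minimal sketch of the publish and subscribe sides using Akka's `DistributedPubSubMediator`. The message type `ThingEvent` and the actor names are illustrative assumptions, not Ditto's actual identifiers:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{Publish, Subscribe, SubscribeAck}

// Hypothetical event type; Ditto's real events look different.
final case class ThingEvent(thingId: String, payload: String)

// "things" service side: each Thing PersistenceActor publishes its events.
class ThingPersistenceActor(thingId: String) extends Actor {
  private val mediator = DistributedPubSub(context.system).mediator

  def receive: Receive = {
    case payload: String =>
      // Emit every persisted event to the cluster-wide topic.
      mediator ! Publish("thing.events", ThingEvent(thingId, payload))
  }
}

// "gateway" service side: a subscriber per WebSocket/SSE connection.
class EventStreamActor extends Actor {
  DistributedPubSub(context.system).mediator ! Subscribe("thing.events", self)

  def receive: Receive = {
    case SubscribeAck(_)      => // subscription to "thing.events" is active
    case ThingEvent(id, data) => // forward the event to the WebSocket/SSE client
  }
}
```

With 1,000,000 publishers and one topic, every published event is delivered over remoting to every node that has at least one subscriber on “thing.events”, which is where our memory pressure shows up.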
Under medium load (in a test we used 1,000,000 Thing PersistenceActors emitting a total of 1,000 events/second into the “thing.events” topic), the “gateway” service instances (each also receiving 1,000 msgs/s) run into memory problems.
From a heap dump I found that the mailboxes of multiple akka.remote.transport.ProtocolStateActor instances dominate the consumed memory, each containing many queued messages.
I did not find any documented limit on how many messages an Akka cluster can handle via pub/sub, nor on the number of subscribers, publishers, or topics overall.
Could you help me out? Is this intended to work this way?