Event Recovery during Rolling deployment/upgrade of persistent actor pods

Hi,

Are events for an actor stored in memory on a node (maybe in the ShardRegion), such that if the actor restarts on the same node it does not need to query the persistence DB to recover its events?

How about if our Kubernetes cluster runs one pod per node and we do a rolling deployment? Are the events of an actor that is going to be moved to another node handed over to that node from its original node? Or is this the case where the events need to be queried from the persistence DB?

If the state of an actor needs to survive a restart of the actor, be it a local restart or a move between cluster nodes, it needs to store its state somewhere external to the actor itself. For it to survive a full cluster shutdown it needs to be persisted, for example in a DB using Akka Persistence Durable State or Event Sourcing.
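For illustration, a minimal Event Sourcing sketch with Akka Persistence Typed might look like this (Scala; the Counter entity, its commands and events are made up for the example):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

// Hypothetical example entity: a simple event sourced counter.
object Counter {
  sealed trait Command
  case object Increment extends Command

  sealed trait Event
  case object Incremented extends Event

  final case class State(count: Int)

  def apply(entityId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      // Events are written to the journal under this id and replayed on restart.
      persistenceId = PersistenceId("Counter", entityId),
      emptyState = State(0),
      commandHandler = (_, cmd) =>
        cmd match {
          case Increment => Effect.persist(Incremented)
        },
      eventHandler = (state, evt) =>
        evt match {
          case Incremented => state.copy(count = state.count + 1)
        }
    )
}
```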

The messages enqueued in an actor's mailbox are not persisted and are sent to dead letters if the actor is stopped before they were processed, regardless of whether persistence is used.

Yes, I get that. So I'm talking about persisted journal events. I am just investigating when an actor that has a gazillion persisted events to replay is expected to take a long recovery time and when it is not. From your answer, it sounds like if the Kubernetes cluster is still alive, a query to the Akka Persistence journal DB is not always required when an actor restarts. Is that right?

So if an actor restarts on the same node (same ShardRegion), I assume that its journal events are kept somewhere in memory and need not be queried from the persistence journal DB. Is that correct?

However, in the case that an actor is transferred to another node, is there a handover between the node it is removed from and the node it is transferred to? Or is there none, and all of the actor's journal events need to be queried from the persistence journal table? This is assuming, of course, that the actor has no snapshot.

No, the event sourced actor will always replay from the journal on restart, regardless of whether the restart is on the same node or due to a move between nodes.

There is no separate in-memory store of all the events on a node; such an extra in-memory replica of the journal would be extremely expensive. Once an event has been replayed to the entity it is garbage collected and the heap is available for other things.

Snapshots commonly help avoid having to replay the entire event log for the entities when they start, in case of gazillions of events.
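For example (continuing the hypothetical Counter sketch above, with made-up numbers), a snapshot interval and retention can be declared on the behavior:

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior, RetentionCriteria}

// Same hypothetical Counter entity, now with snapshotting: a snapshot is taken
// every 1000 events and only the 2 most recent snapshots are kept, so recovery
// replays at most the events persisted after the latest snapshot.
def counterWithSnapshots(entityId: String): Behavior[Counter.Command] = {
  import Counter._
  EventSourcedBehavior[Command, Event, State](
    persistenceId = PersistenceId("Counter", entityId),
    emptyState = State(0),
    commandHandler = (_, cmd) => cmd match { case Increment => Effect.persist(Incremented) },
    eventHandler = (state, evt) => evt match { case Incremented => State(state.count + 1) }
  ).withRetention(RetentionCriteria.snapshotEvery(numberOfEvents = 1000, keepNSnapshots = 2))
}
```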

For sharding specifically there is an internal handoff mechanism, since sharding is there to make sure that there is only one instance for a specific id alive in the cluster at any time (important since persistent actors require a single writer). However, the handoff is only about that, not about the internal state of the entity running inside sharding.

Note that by default sharding doesn't start the entity anew on rebalance or passivation; the entity stays stopped until a new message is sent to it through sharding.
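As a rough sketch of that wiring (Scala, reusing the hypothetical Counter entity from above; the type key name is made up):

```scala
import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

// Hypothetical wiring of the Counter entity into Cluster Sharding.
object CounterSharding {
  val TypeKey: EntityTypeKey[Counter.Command] = EntityTypeKey[Counter.Command]("Counter")

  def init(system: ActorSystem[_]): Unit = {
    // Sharding guarantees at most one running instance per entity id in the
    // cluster (the single-writer requirement for persistent actors). After a
    // rebalance or passivation the entity stays stopped until the next message
    // for its id arrives; it is then started on whichever node now hosts the
    // shard and recovers by replaying from the journal (and latest snapshot).
    ClusterSharding(system).init(
      Entity(TypeKey)(entityContext => Counter(entityContext.entityId))
    )
  }
}

// Sending a message through sharding:
// ClusterSharding(system).entityRefFor(CounterSharding.TypeKey, "counter-1") ! Counter.Increment
```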

Thank you Johan for clarifying this.

I’d reiterate what @johanandren said about snapshots. That is the mechanism to prevent long recovery times. It’s very tunable; you’ll need to experiment with the tradeoffs of how often and when to take snapshots. Snapshots exist so that you never need to replay a significant number of events.

I think at the time we encountered slow DB queries when our very busy actors hit the snapshot interval. More than writing the snapshot entry, I think it was the deletion of previous snapshots that was taking up time. We ended up making the snapshot interval big. We had another way of doing snapshots that was occasionally triggered on passivation, by persisting an event. But for some reason we ended up with actors that had no snapshots, or a very large number of events to replay.
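For anyone reading later, the "snapshot on passivation by persisting an event" approach can be expressed with snapshotWhen; a rough sketch with made-up names (the retention criteria shown earlier still controls how many old snapshots are kept and deleted):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

// Hypothetical entity that persists a marker event when it is about to be
// passivated and takes a snapshot whenever that marker is written, in addition
// to (or instead of) a fixed event-count snapshot interval.
object Ledger {
  sealed trait Command
  final case class Add(amount: Long) extends Command
  case object AboutToPassivate extends Command // e.g. sent by the entity itself before stopping

  sealed trait Event
  final case class Added(amount: Long) extends Event
  case object PassivationMarker extends Event

  final case class State(total: Long)

  def apply(entityId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId("Ledger", entityId),
      emptyState = State(0),
      commandHandler = (_, cmd) =>
        cmd match {
          case Add(amount)      => Effect.persist(Added(amount))
          case AboutToPassivate => Effect.persist(PassivationMarker)
        },
      eventHandler = (state, evt) =>
        evt match {
          case Added(amount)     => State(state.total + amount)
          case PassivationMarker => state
        }
    )
      // Take a snapshot whenever the passivation marker event is persisted.
      .snapshotWhen((_, event, _) => event == PassivationMarker)
}
```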

In hindsight, we might have set the snapshot interval too big.