Is it common to stop child actors in event sourcing?

I am building an event-sourced system based on the shopping cart example in the Lightbend docs. There is a persistent actor for each aggregate root (a shopping cart in the example), which makes me wonder about memory footprint. I know actors are really lightweight, but if the number of actors only ever increases, at some point you’ll have a problem.

Since persistent actors store their data and can recover it when starting (presumably that’s slower, though), I am wondering whether it’s common to stop actors to save on space, e.g., when no command has arrived within a certain time span, or after a certain event indicates that the aggregate’s life is complete (e.g., a CheckedOut event in the case of the shopping cart). If the aggregate has to be cold-restarted, the parent should be able to do so.

When using it in combination with Cluster Sharding (as explained in the guide), the entities will be passivated after 2 minutes (by default).

We have recently released a new version of Akka that includes different passivation strategies. Check out this section in our docs: Cluster Sharding • Akka Documentation

You can also ask the actor to stop after processing a given message by calling thenStop on the Effect.
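For example, a minimal sketch with Akka Persistence Typed in Scala (the protocol below is simplified and the names are illustrative, not the exact ones from the guide):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object ShoppingCart {
  sealed trait Command
  case object Checkout extends Command

  sealed trait Event
  case object CheckedOut extends Event

  final case class State(checkedOut: Boolean)

  def apply(cartId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(cartId),
      emptyState = State(checkedOut = false),
      commandHandler = (_, cmd) =>
        cmd match {
          case Checkout =>
            // Persist the end-of-life event, then stop the actor.
            Effect.persist(CheckedOut).thenStop()
        },
      eventHandler = (state, evt) =>
        evt match {
          case CheckedOut => state.copy(checkedOut = true)
        }
    )
}
```

When the actor is started again later, it recovers its state from the journal as usual.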

Using child actors is not recommended with event sourcing because you can’t have a transaction across two actors, and therefore you don’t get the transaction boundary and consistency that an aggregate requires.


Thanks, Renato!

We’ll probably passivate actors manually for now, both when a certain end-of-life message such as “Checkout” arrives and when sufficient time has passed. We’d rather not use Cluster Sharding at the moment, since as a startup we don’t have the scale that requires it. I think right now we’ll just have a registry actor that holds all the aggregates as children. We’d frankly rather write and maintain the extra code than introduce and configure a dependency we don’t explicitly need; we know how to write and test code but are less familiar with the configuration of any given library.
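For reference, a minimal sketch of the “stop after sufficient idle time” part using a receive timeout (not our actual code; the protocol and names are illustrative):

```scala
import scala.concurrent.duration._
import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object IdleStoppingCart {
  sealed trait Command
  case object AddSomething extends Command
  private case object Idle extends Command // sent when the receive timeout fires

  sealed trait Event
  case object SomethingAdded extends Event

  final case class State(items: Int)

  def apply(cartId: String): Behavior[Command] =
    Behaviors.setup { context =>
      // If no command arrives for 2 minutes, the actor receives Idle and stops.
      context.setReceiveTimeout(2.minutes, Idle)
      EventSourcedBehavior[Command, Event, State](
        persistenceId = PersistenceId.ofUniqueId(cartId),
        emptyState = State(0),
        commandHandler = (_, cmd) =>
          cmd match {
            case AddSomething => Effect.persist(SomethingAdded)
            case Idle         => Effect.stop() // passivate; state is recovered on the next start
          },
        eventHandler = (state, _) => state.copy(items = state.items + 1)
      )
    }
}
```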

I get your point about actors and transaction boundaries on aggregate roots; it’s quite helpful! Do you have a resource/thoughts about what to do if you have sub-aggregates?

For example, let’s modify the shopping cart example so that your e-commerce website now sells to businesses (maybe it sells electronics supplies). Each business has a number of accounts, one for each authorized buyer, but both the number of accounts and the total cash value of transactions are capped, i.e., someone may not check out a cart that would cause the entire year’s purchases to exceed the company budget (across all the carts). You can add further complications by introducing department-level invariants as well.

I can think of two solutions:

  1. “Compress” the state by lifting the individual shopping cart state into a larger “CompanyShoppingCarts” aggregate that holds all the shopping carts in the company. This has the benefit of being a single aggregate, but the state starts to get complex and no fun to code against, and the event stream can get too big. Each user of your service just wants their own cart.

  2. Have a projection of all the carts in a company account, but I am not sure what consistency guarantee you actually get there.

Note that you implicitly have this problem in the classical shopping cart example: how do you address the invariant “must have sufficient quantity of the item”? I guess that one is better dealt with in its own microservice/context on the item page, so that AddItemToCart cannot be materialized unless the item service agrees it’s legal.

It’s probably a decent idea (if using Typed) to code to RecipientRef (the common supertype of ActorRef and sharding’s EntityRef, and of the testkit’s TestProbe for that matter) rather than ActorRef wherever possible (e.g. in message protocols), so that if you get to a point where cluster sharding is desirable, it’s a pretty easy migration.
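For instance (a made-up protocol, just to illustrate the shape):

```scala
import akka.actor.typed.RecipientRef

object CartProtocol {
  sealed trait Command
  final case class AddItem(itemId: String, quantity: Int) extends Command
  case object Checkout extends Command
}

// Depends only on RecipientRef, so it works unchanged whether `cart` is a
// local ActorRef[CartProtocol.Command] today or a sharded
// EntityRef[CartProtocol.Command] after a migration to Cluster Sharding.
final class CartClient(cart: RecipientRef[CartProtocol.Command]) {
  def addItem(itemId: String, quantity: Int): Unit =
    cart.tell(CartProtocol.AddItem(itemId, quantity))

  def checkout(): Unit =
    cart.tell(CartProtocol.Checkout)
}
```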

There is a technique I’ve used when the state of an aggregate is undesirably large (a common example is when the aggregate is the accumulated history of something with an arbitrarily long life, more like IoT than a shopping cart): make the top-level aggregate a non-persistent actor which delegates persistent state to children that can be loaded and individually passivated. The actor serving as the aggregate root has to take some care (e.g. with a lot of stashing) to present the illusion to the outside world that this is all one actor.

There’s not really such a thing as a sub-aggregate in DDD terms (at least as I understand it).

Absolute consistency requires, in general, synchronization and coordination. Any solution which has an invariant that “no cart can be checked out which exceeds a company budget” has to at some point run every checkout for that company through a process which tracks how much of the budget is left before allowing the checkout to proceed: to do otherwise implies a window of time between when the checkout proceeds and when the budget tracking happens.

There are basically three readily available options:

  • have all carts for a company be in one aggregate: you get consistency and can prevent carts from exceeding the budget
  • you can make checking out a process which attempts to reserve the amount of the cart against the budget and then, if successful, completes the checkout (a rough sketch of this follows below)
  • you can defer the check until later and build a process for allowing an over-budget checkout to be undone/canceled/unchecked-out/escalated as appropriate

The first effectively limits how much activity a given company’s carts can have, because all changes to those carts go through a single mutex. The second avoids that, but there are situations where a cart that could eventually check out will be unable to (i.e. sales that end up being OK to make are delayed). The third will allow the budget to potentially be exceeded.

The choice among those three is a question of which trade-off is more acceptable for the business: none of them is more or less “technically right” than the others.
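To make the second option more concrete, here’s a rough, framework-free sketch of the decision logic a per-company budget aggregate might run for each reservation attempt (all names, commands, and events are made up for illustration):

```scala
// Per-company budget aggregate: every reservation for the company is decided
// here, so the "don't exceed the budget" invariant has a single consistency
// boundary. A checkout would first Reserve, then Confirm on success or
// Release if it fails or is abandoned.
object CompanyBudget {
  final case class State(budget: BigDecimal, reserved: BigDecimal, spent: BigDecimal) {
    def available: BigDecimal = budget - reserved - spent
  }

  sealed trait Event
  final case class Reserved(cartId: String, amount: BigDecimal) extends Event
  final case class ReservationRejected(cartId: String, amount: BigDecimal) extends Event
  final case class Confirmed(cartId: String, amount: BigDecimal) extends Event
  final case class Released(cartId: String, amount: BigDecimal) extends Event

  // Decide: would be called from the aggregate's command handler.
  def reserve(state: State, cartId: String, amount: BigDecimal): Event =
    if (amount <= state.available) Reserved(cartId, amount)
    else ReservationRejected(cartId, amount)

  // Evolve: how each event updates the state.
  def applyEvent(state: State, event: Event): State = event match {
    case Reserved(_, amount)    => state.copy(reserved = state.reserved + amount)
    case Confirmed(_, amount)   => state.copy(reserved = state.reserved - amount, spent = state.spent + amount)
    case Released(_, amount)    => state.copy(reserved = state.reserved - amount)
    case _: ReservationRejected => state
  }
}
```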

Similarly with the question of “sufficient quantity”, though that one is made a little easier by the fact that, for physical items, whatever computer process is tracking inventory is intrinsically only eventually consistent with what’s on the shelf in the warehouse (consider whether people in the warehouse pocketing items will record their theft in the inventory system as it happens, or whether the robot several aisles over that crashes into the shelves and topples them like dominoes will update the inventory right away). So wherever you perform the check “do we have enough of this in stock?”, there’s a situation where it turns out you don’t have the inventory and have to deal with a cart you can’t fulfill. In that situation, you’re still going to need a process (maybe in software, maybe ad hoc and carried out by people) for handling a “checked out but can’t fulfill” cart.

Can you elaborate on what you mean by your technique? Using the IoT example, do you mean something like:

Outer Aggregate for device “A” → [Device events for 1/1/2022, Device events for 1/2/2022, …]

I’m looking to keep state over time, but I’m worried about the same problem (memory footprint).

@slyons Yeah, that’s broadly the approach. The outer sharded actor tracks its children (e.g. in a Map[LocalDate, ActorRef]) and spawns/watches them as needed (and they can passivate after a receive timeout).
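Roughly like this (everything here is illustrative; DeviceDay stands in for the real persistent child, which would be an EventSourcedBehavior that passivates itself on a receive timeout):

```scala
import java.time.LocalDate
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.{ActorContext, Behaviors}

// Stub standing in for the per-day persistent child.
object DeviceDay {
  sealed trait Command
  final case class Record(value: Double) extends Command
  def apply(deviceId: String, day: LocalDate): Behavior[Command] =
    Behaviors.receiveMessage { case Record(_) => Behaviors.same } // real impl: EventSourcedBehavior
}

// Outer (possibly sharded) device actor, keyed by device id, delegating each
// command to the per-day child and tracking the live children in a Map.
object Device {
  sealed trait Command
  final case class Record(day: LocalDate, value: Double) extends Command
  private final case class DayTerminated(day: LocalDate) extends Command

  def apply(deviceId: String): Behavior[Command] =
    Behaviors.setup(context => running(context, deviceId, Map.empty))

  private def running(
      context: ActorContext[Command],
      deviceId: String,
      children: Map[LocalDate, ActorRef[DeviceDay.Command]]): Behavior[Command] =
    Behaviors.receiveMessage {
      case Record(day, value) =>
        children.get(day) match {
          case Some(child) =>
            child ! DeviceDay.Record(value)
            Behaviors.same
          case None =>
            val child = context.spawn(DeviceDay(deviceId, day), s"day-$day")
            context.watchWith(child, DayTerminated(day)) // learn when the child passivates
            child ! DeviceDay.Record(value)
            running(context, deviceId, children + (day -> child))
        }
      case DayTerminated(day) =>
        running(context, deviceId, children - day)
    }
}
```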