Aggregate root communication patterns

My question is regarding communication between aggregate roots in event sourcing using Akka persistent actors. Specifically, if I want an aggregate root to respond to an event that happened in another aggregate root, how do I do that?

Using the shopping cart example from Implementing Microservices with Akka :: Akka Platform Guide, suppose the Item aggregate needs to know when a cart has been checked out so it can update its count in the warehouse. What’s the most idiomatic way to do this? I can think of two options:

  1. Have the Item actor representing the AR also understand the ShoppingCart.Event type (Behavior[Item.Command with ShoppingCart.Event]) and have it subscribe to a stream of events from the shopping cart service, or
  2. Have a coordinator actor subscribe to a stream of events (requiring the producing aggregates to publish to it) and send commands to the correct aggregate roots

If you’re event sourcing via Akka Persistence, the typical way to accomplish that is the second. Akka Persistence simplifies this by letting you tag events and letting other interested parties subscribe to the stream of events with a specific tag.

In this case, it’s probably most natural to give the “checkout event” a special tag. The item-management service can then subscribe to a stream of checkout events (Akka Projections is especially useful here, as it provides the machinery to resume the subscription where it left off). That stream can then request the items which are in the cart and dispatch to the appropriate Items for inventory updates. (Especially if the cart is being passivated based on inactivity or LRU, I would strongly consider querying the cart aggregate directly for strong consistency, though there is a diversity of opinion about how prescriptive CQRS is…) An alternative approach is to publish complete checked-out carts to Kafka (again, Akka Projections facilitates this publishing) and have the item service subscribe to those.
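For what it’s worth, the tagging half of this might look like the following sketch. The `ShoppingCart` event names and the `"cart-checkout"` tag string are assumptions for illustration, not from the guide; the function would be attached to the `EventSourcedBehavior` via `.withTagger`:

```scala
// Sketch of an event tagger; event names and the tag string are illustrative.
object ShoppingCart {
  sealed trait Event
  final case class ItemAdded(cartId: String, itemId: String, quantity: Int) extends Event
  final case class CheckedOut(cartId: String) extends Event

  // Passed as EventSourcedBehavior(...).withTagger(tagCheckout), so Akka
  // Persistence stores the "cart-checkout" tag alongside checkout events,
  // making them visible to eventsByTag queries and projections.
  def tagCheckout(event: Event): Set[String] = event match {
    case _: CheckedOut => Set("cart-checkout")
    case _             => Set.empty
  }
}
```

Only the checkout event carries the tag, so subscribers see exactly the events they care about rather than the cart's whole journal.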

It’s perhaps worth having the item-management/inventory service maintain its own representation of a cart aggregate, which consists of the items (viz. their IDs) in the cart as well as the progress made in updating the counts.


Awesome, thanks Levi.

Do you have a demo of how this works between two aggregates modeled as actors without Kafka? That’s what I am working with right now. My hack is to introduce a “propagator” actor that receives events from one aggregate and has the relevant consumer aggregates as references in its state; it then has logic that translates events from the producer aggregate to commands for the consumer aggregate. Wondering if there’s a more officially supported/idiomatic way to do this.

I guess this is some documentation for how it works: Processing with Actor • Akka Projection

Yes, that’s pretty much the way to go: a projection that sends the events to an actor that transforms them into commands for the entities (in your case, not a sharded entity, but a bare-bones persistent actor).
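The translation step that actor performs might be sketched like this (the `Item` command and cart event names here are hypothetical); in Akka Projection terms, a function like this would sit inside the handler that processes each envelope:

```scala
// Sketch: translating a producer-side event into a consumer-side command.
// Item.RemoveStock and CartEvents.ItemCheckedOut are hypothetical names.
object Item {
  sealed trait Command
  final case class RemoveStock(itemId: String, quantity: Int) extends Command
}

object CartEvents {
  final case class ItemCheckedOut(cartId: String, itemId: String, quantity: Int)
}

object Propagator {
  // An akka.projection.scaladsl.ActorHandler would run a function like this
  // on each envelope and send the result to the target entity's ActorRef.
  def toCommand(event: CartEvents.ItemCheckedOut): Item.Command =
    Item.RemoveStock(event.itemId, event.quantity)
}
```

Keeping the translation a pure function like this makes the propagator logic easy to test independently of any actor machinery.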

And indeed, you had better use Akka Projection to track the offsets and deliver the events to the propagator actor.

Why is it better to use Projection than just sending messages to a propagator actor, assuming no Cluster Sharding? I am asking because we’d prefer the simplest solution possible.

Because Akka Projections will take care of a few things for you.

First, it will track the offsets: after a crash or shutdown it will resume from where it stopped. Note that this is at-least-once delivery, but that would also be the case if you did it yourself.
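Because delivery is at-least-once either way, the consuming side should tolerate redeliveries. One simple approach (sketched here as an illustration; this is something the handler does itself, not something Projections provides) is to skip events whose sequence number has already been seen for a given persistence id:

```scala
// Sketch: per-persistence-id sequence-number tracking so that redelivered
// events (possible under at-least-once delivery) are processed only once.
final class Deduplicator {
  private var seen = Map.empty[String, Long]

  // Returns true if this (persistenceId, seqNr) has not been processed yet;
  // records it as processed as a side effect.
  def firstTime(persistenceId: String, seqNr: Long): Boolean =
    if (seen.get(persistenceId).exists(_ >= seqNr)) false
    else {
      seen += persistenceId -> seqNr
      true
    }
}
```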

Also, it will make sure the projection is started when your system starts, and in case of some DB failure it will restart the projection.

You can do it yourself by using Akka Persistence Query directly, but then you are on your own to deal with restarts, resumes, etc.

You don’t need to use sharding to use Projection. You can start one without using the ShardedDaemonProcess.
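A wiring sketch of that non-sharded start, assuming the Cassandra flavor of Akka Projection is on the classpath; the tag, names, and the no-op handler body are illustrative, and this fragment needs a configured journal to actually run:

```scala
import scala.concurrent.Future
import akka.Done
import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import akka.projection.{ ProjectionBehavior, ProjectionId }
import akka.projection.cassandra.scaladsl.CassandraProjection
import akka.projection.eventsourced.EventEnvelope
import akka.projection.eventsourced.scaladsl.EventSourcedProvider
import akka.projection.scaladsl.Handler

object Guardian {
  sealed trait Event // stand-in for the producer's event type

  def apply(): Behavior[Nothing] = Behaviors.setup[Nothing] { context =>
    val sourceProvider = EventSourcedProvider.eventsByTag[Event](
      context.system,
      readJournalPluginId = "akka.persistence.cassandra.query",
      tag = "cart-checkout")

    val projection = CassandraProjection.atLeastOnce(
      projectionId = ProjectionId("cart-checkout", "single"),
      sourceProvider = sourceProvider,
      handler = () =>
        new Handler[EventEnvelope[Event]] {
          override def process(envelope: EventEnvelope[Event]): Future[Done] = {
            // translate the event to commands and send them onward here
            Future.successful(Done)
          }
        })

    // A plain spawn is enough: no ShardedDaemonProcess involved. Offset
    // resume and restart-on-failure come with ProjectionBehavior.
    context.spawn(ProjectionBehavior(projection), "cart-checkout-projection")
    Behaviors.empty
  }
}
```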


I am implementing this now, creating a projection that can be processed using Akka Streams. It looks like the only way to subscribe is to have the upstream entity know its subscribers (unless using Kafka), or is there a way for the subscriber entities to subscribe to events with a certain tag?