Akka Persistence: CQRS
Bridging the gap between read and write models
Hi all! I have a question regarding the handling of CQRS with Akka Persistence. Let me first rephrase what I think I have understood so far, so that wrong assumptions on my side do not lead to misconceptions.
I am sorry for the long post; hopefully it is helpful to understand the background of my question though.
If you want to jump right to my question, skip ahead to the Conclusion - A Misconception? part.
CQRS and Akka Persistence
Doing CQRS, we decouple the write model from the read model. As we usually want to read what was written previously (for some definition of ‘previously’ - see eventual consistency), we need to bridge between the read and the write model (at least if both do not run on the same database in the same data structure).
Akka Persistence takes care of the write model. Akka Persistence Query can be used to query it, e.g. to build a bridge from the write model to the read model.
Akka Persistence Query uses read journals to query the data from the write model. There are multiple read journals for various databases. All expose a somewhat key/value-like storage interface (a journal partitioned into persistence IDs).
Read Journals - Status Quo
The documentation says the following about read journals:

Most journals will have to revert to polling in order to achieve this [watch for additional incoming events], which can typically be configured with a refresh-interval configuration property.
The only plugin I found which does something push-related is the MongoDB one - though on closer look it has some issues (#155) and room for improvement (#163).
Apart from this one, I found no read journal that does not use polling to fetch new events.
Basically (ignoring tags) a read journal provides two major sources:

- a `Source` of persistence IDs
- a `Source` of events (`EventEnvelope`s, to be precise)
In order to subscribe to all events, I need to iterate over all persistence IDs with the first source and subscribe to all events for each persistence ID with the second.
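Wiring those two sources together might look roughly like this - a sketch against the Scala API of Akka Persistence Query, using the LevelDB read journal as an example (the `flatMapMerge` breadth of 1000 is an arbitrary illustration, not a recommendation):

```scala
import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("cqrs")
implicit val mat    = ActorMaterializer()

val journal = PersistenceQuery(system)
  .readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

// Live source of all persistence IDs (also emits IDs created later) ...
journal.persistenceIds()
  // ... fanned out into one live event stream per persistence ID.
  .flatMapMerge(breadth = 1000, id =>
    journal.eventsByPersistenceId(id, 0L, Long.MaxValue))
  .runForeach(env => println(s"${env.persistenceId}: ${env.event}"))
```

Each inner `eventsByPersistenceId` stream is exactly one of the per-ID subscriptions described above - which is where the polling cost comes from.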
So for a polling read journal, this means that I register a database polling action for every persistence ID. That is acceptable if there are only a few persistence IDs with a lot of events.
It is, however, a horrible idea if there are a lot of persistence IDs with only a few events each: a lot of polling actions which mostly fetch no new data at all, but still keep the database busy.
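A back-of-envelope calculation shows how badly this scales with the number of persistence IDs (the function name and numbers here are illustrative, not from any Akka API):

```scala
// With one polling query per persistence ID, the database sees
// roughly numPersistenceIds / refreshInterval queries per second,
// regardless of whether any new events exist.
def pollsPerSecond(numPersistenceIds: Int, refreshIntervalSeconds: Double): Double =
  numPersistenceIds / refreshIntervalSeconds

// Few IDs, many events: easy on the database.
val fewIds  = pollsPerSecond(numPersistenceIds = 10, refreshIntervalSeconds = 3.0)      // ≈ 3.3 queries/s

// Many IDs, few events each: most polls return nothing, yet the load explodes.
val manyIds = pollsPerSecond(numPersistenceIds = 100000, refreshIntervalSeconds = 3.0)  // ≈ 33333 queries/s
```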
Alternatively I can increase the refresh-interval, which eases the load on the database but potentially increases the time between when an update reaches the write model and when it becomes visible in the read model.
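For the LevelDB read journal the knob looks like this (other plugins use their own config paths, e.g. cassandra-query-journal.refresh-interval - check your plugin's reference.conf):

```
akka.persistence.query.journal.leveldb {
  # How often to poll the journal for new events (the default is 3s).
  refresh-interval = 10s
}
```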
Conclusion - A Misconception?
Given the current status quo, mostly polling is used to subscribe to new events. Akka Persistence Query is therefore a bad fit for a lot of persistence IDs with few events each.
Is this a misconception? If so, where did my reasoning branch off from the right path? If not, I see the following options:
Implement your own journal which uses DB push
(which limits the journal to databases supporting this feature)
- You still need to handle failures on the duplication to the read side in order not to miss updates…
Use the MongoDB journal and its server pushes for subscribing to new events
(there is even an
`allEvents` function, which lets you subscribe to all events at once)
Don’t use Akka Persistence at all and run your own event sourcing (dropping CQRS altogether and writing the read models atomically together with the original write side)
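For the MongoDB option above, the subscription might look roughly like this - a sketch only, with class names and the journal identifier from my memory of the scullxbones akka-persistence-mongo plugin, which may differ between plugin versions:

```scala
import akka.actor.ActorSystem
import akka.contrib.persistence.mongodb.{ MongoReadJournal, ScalaDslMongoReadJournal }
import akka.persistence.query.PersistenceQuery
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("cqrs")
implicit val mat    = ActorMaterializer()

val readJournal = PersistenceQuery(system)
  .readJournalFor[ScalaDslMongoReadJournal](MongoReadJournal.Identifier)

// One push-backed stream of every event in the journal -
// no per-persistence-ID polling loop.
readJournal.allEvents()
  .runForeach(env => println(s"${env.persistenceId}: ${env.event}"))
```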