Streaming events to UI Model and using PersistentReadSide

I have a UI component that needs to be as up to date as possible.

The system is a trading order management system with a shared blotter. The blotter is used by a group of traders to see orders flowing from fund managers. The traders then trade in the market (outside the system) and update the order statuses.

So I have a service for FundOrder and another for the Blotter. FundManagers create orders and Traders update them. The Blotter essentially represents the overall state of Orders broken down by various categories. The blotter has value data representing the categories and the assignment of Traders.

So for Blotter I have a Service, a PersistentEntity, and a PersistentReadSide. I also want a UI that is super responsive.

My thought is:

  1. Trader logs in
  2. Trader connects to BlotterService and reads the latest version of the Blotter from the PersistentReadSide
  3. Trader connects to BlotterService and obtains a connection to the stream of events produced by the BlotterPersistentEntity, via a Kafka topic
  4. The UI model takes the model from the read side, applies events from the topic to produce an up-to-date model, and continues to update the model as time progresses, only using the PersistentReadSide again at start-up

So this needs versioning, but it occurs to me that the offset is the version.

If the PersistentReadSide writes the last processed offset into the readModel that is sent to UI then the UI can subscribe to events after that offset from the topic bringing itself quickly up to date without needing to deal with any events not already dealt with by the PersistentReadSide.
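That catch-up step can be sketched in plain Java. All names here are illustrative, not from any real API: the snapshot carries the read side's last processed offset, and the client drops any streamed event at or below that offset before applying the rest in order.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: the offset embedded in the read-side snapshot acts
// as the snapshot's "version".
class BlotterCatchUp {
    record Event(long offset, String orderId, String status) {}
    record Snapshot(long lastOffset, Map<String, String> orders) {}

    // Drop events the read side has already folded into the snapshot,
    // then apply the remainder in offset order.
    static Snapshot catchUp(Snapshot snapshot, List<Event> events) {
        Map<String, String> orders = new HashMap<>(snapshot.orders());
        long last = snapshot.lastOffset();
        List<Event> pending = events.stream()
                .filter(e -> e.offset() > snapshot.lastOffset())
                .sorted(Comparator.comparingLong(Event::offset))
                .collect(Collectors.toList());
        for (Event e : pending) {
            orders.put(e.orderId(), e.status());
            last = e.offset();
        }
        return new Snapshot(last, orders);
    }
}
```

With this shape, an event replayed from before the snapshot's offset is a no-op, so the UI never double-applies anything the PersistentReadSide already handled.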

So Questions:

  • Does this make sense in general?
  • Any Examples or experiences?
  • Are there potential issues with using the offset as a version, gotchas, etc.?
  • Thoughts on using broker vs pub-sub for the stream directed to client?

I know that it looks like I’m building two read sides here, but they are both contributing: one is persistent and does the heavy lifting of keeping the blotter up to date over years; the other simply brings the UI up to date over the last few seconds.

Hi @JonMitchell,

From my experience it is always better to start with the least complex solution possible and then add complexity if required. I would suggest starting with this:
A read side represents an eventually consistent projection of the data. You need to query the read side, subscribe to future events that update it, and update the UI accordingly. It is important that you do not miss any of the events that update the read side after you have queried it.
For this I suggest using a websocket and PubSub.
Publishing to PubSub should be done in the ReadSideProcessor after the data has been stored in the read side, ensuring that the read-side store has accepted the required change.
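As a rough sketch of that ordering, the two callbacks below stand in for the real read-side database write and the PubSub publish (they are not Lagom API, just placeholders for where those calls would go):

```java
import java.util.function.Consumer;

// Hypothetical sketch of "store first, then publish" inside a read-side
// event handler.
class ReadSideHandler {
    private final Consumer<String> store;    // e.g. the read-side DB write
    private final Consumer<String> publish;  // e.g. a PubSub publish

    ReadSideHandler(Consumer<String> store, Consumer<String> publish) {
        this.store = store;
        this.publish = publish;
    }

    void handle(String event) {
        store.accept(event);    // persist to the read side first...
        publish.accept(event);  // ...then notify subscribers, so a client
                                // that queries the read side after seeing
                                // this event cannot observe a state older
                                // than the event itself
    }
}
```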

WebSocket flow on connect:

  1. subscribe to PubSub - on each PubSub event, push the event to the websocket downstream
  2. query the read side and push the result to the websocket downstream (one time per connection*)

The UI should handle the situation where new events are received (#1) before the read-side representation returned by the query (#2). This ensures that no events are missed; you can use the event offset for that.
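One way to sketch that client-side reconciliation (hypothetical names, not tied to any UI framework): buffer events until the snapshot arrives, then replay the buffer, using offsets to drop anything the snapshot already covers.

```java
import java.util.*;

// Hypothetical sketch: events arriving on the websocket before the
// read-side query returns are buffered, then replayed against the snapshot.
class UiModel {
    record Event(long offset, String payload) {}

    private final List<Event> buffer = new ArrayList<>(); // pre-snapshot events
    private long lastOffset = -1;
    private List<String> log = null;  // null until the snapshot arrives

    void onEvent(Event e) {
        if (log == null) {
            buffer.add(e);                    // snapshot not here yet: buffer
        } else if (e.offset() > lastOffset) { // not yet covered by snapshot
            log.add(e.payload());
            lastOffset = e.offset();
        }                                     // else: duplicate, drop it
    }

    void onSnapshot(long snapshotOffset, List<String> snapshotLog) {
        lastOffset = snapshotOffset;
        log = new ArrayList<>(snapshotLog);
        List<Event> pending = new ArrayList<>(buffer);
        buffer.clear();
        pending.sort(Comparator.comparingLong(Event::offset));
        pending.forEach(this::onEvent);       // replay buffered events
    }

    long lastOffset() { return lastOffset; }
    List<String> log() { return log; }
}
```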

You should be aware that PubSub has at-most-once semantics, meaning events can be lost, especially between nodes (the ReadSideProcessor running on one node and the websocket connected to another).
*This can be mitigated by periodically repeating #2 to re-sync with the read side.

You could replace PubSub with Kafka to get at-least-once semantics, but this adds complexity, especially at the Kafka consuming level, and would require an implementation outside the scope of the Lagom Broker API consumer.

Hope this helps.


Thanks for getting back to me.
I’m interested that, firstly, PubSub is the simpler option - fair enough; I suspect it meets my needs.

I’m not sure, though, why you suggest that the PersistentReadSide should do the publishing of events. Once the command is handled by the write side, the system has validated all constraints and is consistent. Why wouldn’t the write side just emit the domain event via PubSub at that point? The PersistentReadSide is just applying the event to its own model, not validating it.

I have in my mind that, in general, aggregates should accept commands and emit events, and that read sides should accept events and possibly issue commands.

Generally speaking it is the same.
Publishing in the read-side processor gives you higher (eventual) consistency than publishing from the entity: by the time the event reaches the client, the read side has already stored it, so a fresh query of the read side can never return a state older than the events the client has seen.