Eventual consistency, while being consistent


#1

Hey,

I have the following scenario:

  • Kick off Lagom Process 1, which eventually saves something to the database via a read-side processor.
  • As soon as that happens, I want to execute Lagom Process 2, which queries the database; the data from Process 1 needs to be there, since Process 2 depends on it.

Now, if I understand correctly, there is no guarantee as to when the data from Process 1 will be written, so I cannot hardcode a constant delay between the two processes.

What is the best way to deal with this?


(Lutz Huehnken) #2

You could have Process 1 publish an event to a topic on the message bus, and have Process 2 subscribe to that topic (see https://www.lagomframework.com/documentation/1.4.x/scala/MessageBroker.html).
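For illustration, a rough sketch of what the publishing side might look like with Lagom's Scala Message Broker API (the names `PortfolioValueUpdated`, `PortfolioEvent`, `ValueCalculated` and `ProcessOneService` are placeholders for this sketch, not anything from the thread):

```scala
import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.api.{Descriptor, Service}
import com.lightbend.lagom.scaladsl.broker.TopicProducer
import com.lightbend.lagom.scaladsl.persistence.{AggregateEvent, AggregateEventTag, PersistentEntityRegistry}
import play.api.libs.json.{Format, Json}

// Public message that Process 1 publishes to the broker (placeholder shape).
case class PortfolioValueUpdated(portfolioId: String, value: BigDecimal)
object PortfolioValueUpdated {
  implicit val format: Format[PortfolioValueUpdated] = Json.format
}

// Internal persistence event of Process 1 (placeholder).
sealed trait PortfolioEvent extends AggregateEvent[PortfolioEvent] {
  override def aggregateTag: AggregateEventTag[PortfolioEvent] = PortfolioEvent.Tag
}
object PortfolioEvent {
  val Tag: AggregateEventTag[PortfolioEvent] = AggregateEventTag[PortfolioEvent]
}
case class ValueCalculated(portfolioId: String, value: BigDecimal) extends PortfolioEvent

trait ProcessOneService extends Service {
  // The topic Process 2 subscribes to.
  def portfolioValues: Topic[PortfolioValueUpdated]

  override def descriptor: Descriptor = {
    import Service._
    named("process-one").withTopics(
      topic("portfolio-values", portfolioValues)
    )
  }
}

class ProcessOneServiceImpl(registry: PersistentEntityRegistry) extends ProcessOneService {
  // Stream the tagged persistence events to the topic together with their
  // offsets, so the producer resumes where it left off after a restart and
  // delivery to the broker is at-least-once.
  override def portfolioValues: Topic[PortfolioValueUpdated] =
    TopicProducer.singleStreamWithOffset { fromOffset =>
      registry.eventStream(PortfolioEvent.Tag, fromOffset).map { elem =>
        elem.event match {
          case ValueCalculated(id, value) =>
            (PortfolioValueUpdated(id, value), elem.offset)
        }
      }
    }
}
```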

The event could either carry the needed information (Event-Carried State Transfer, see https://martinfowler.com/articles/201701-event-driven.html), or Process 2 could just be notified (Event Notification) and then call an API on Process 1 to get the needed information.

The data should be transferred either in an event or through an API call. Process 2 should never access the database of Process 1 directly (see, for example, the first principle of https://isa-principles.org).
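And a correspondingly rough sketch of the subscribing side, using the Event-Carried State Transfer variant (`ProcessTwoSubscriber` and `startProcessTwo` are again made-up names):

```scala
import akka.Done
import akka.stream.scaladsl.Flow

class ProcessTwoSubscriber(processOne: ProcessOneService) {

  // Wire up the subscription once, at service start-up. Lagom delivers each
  // message at least once, so Process 2 only runs after Process 1's event
  // actually exists -- no sleeps, no polling of Process 1's database.
  processOne.portfolioValues.subscribe.atLeastOnce(
    Flow[PortfolioValueUpdated].map { msg =>
      // The message already carries the value (event-carried state transfer),
      // so there is no need to read Process 1's read-side tables.
      startProcessTwo(msg.portfolioId, msg.value)
      Done
    }
  )

  // Placeholder for whatever actually kicks off Process 2.
  private def startProcessTwo(portfolioId: String, value: BigDecimal): Unit = ()
}
```

If you prefer plain Event Notification instead, the Flow would only receive an identifier and then call Process 1's service API to fetch the data; either way, Process 2 never touches Process 1's database.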


(Joo) #3

I had a similar requirement in the past.

I calculated the value of a portfolio and saved it to the read-side database; then I needed some other process to read that value from the database and create some other data from it.

I tried many tricks, including putting the thread to sleep for 20 seconds before the query gets fired.

Long story short, I figured that I should get away from this kind of linear, sequential thinking when designing my processes, and instead think more from an event-centric perspective.

I solved that problem by publishing the calculated portfolio value to a Kafka topic and letting the other service start its process once it receives the new portfolio value. I think what Lutz explained above is exactly this.