Multiple persistent entities within one service - anti-pattern?

Hi all! I've been going through Lagom's documentation for Scala recently, as well as analyzing some examples here and there (like lagom/online-auction-scala and others), and I'm wondering… how unusual would it be for you to have more than one PersistentEntity within one Lagom service - or in other words, within one LagomApplication? @james notes here that it's quite unusual to have more than one persistent entity/aggregate root per service, as it may suggest we're building a service with more than one responsibility (and thus a service that won't be autonomous), missing the key principles described in Sizing individual microservices.

I'm asking because I'm trying to decompose my domain into Lagom's concepts properly, and I'm finding it not that simple - possibly because most of the examples I've seen so far are rather simplistic when it comes to the domain they're modelling. In my case, I want to model something like a job offer. It has a few parameters that specify it and it obviously ends up being a single PersistentEntity. Now, we would also like to have an applicant represented in our system/domain, and JobOfferEntity shouldn't accept an application if the applicant's skills don't match the job offer's requirements.

Assuming we stick to the rule that we should have only one PersistentEntity in our service for handling job offer logic, we end up keeping a list of all applicants (all users) within every single JobOfferEntity (our aggregate root for a given job offer). For sure, that's no good.

Accessing the read-side from the write-side (to get the list of skills for a given applicant in our job offer aggregate/entity) is even more of an anti-pattern, so I don't even consider it. For a second I've been thinking about splitting applicant skills handling (ApplicantEntity) and job offers (JobOfferEntity) into separate services, but frankly… I think that might be a bit over the top, don't you think? If one's set of skills only matters in the context of handling job offers, there's probably no reason to reach ApplicantEntity via an HTTP request (or some message broker) every time we want to issue ApplyForAJob on JobOfferEntity (or any other command/event that requires some applicant's data). These two entities seem to be coupled strongly enough to keep them as close as possible.
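Just to make the shape of the problem concrete, here's a rough sketch of how I picture JobOfferEntity - regardless of where ApplicantEntity ends up living, the ApplyForAJob command needs the applicant's skills from somewhere. All names besides JobOfferEntity/ApplyForAJob are hypothetical placeholders, and serializers, tagging and the creation command are omitted:

import com.lightbend.lagom.scaladsl.persistence.PersistentEntity
import com.lightbend.lagom.scaladsl.persistence.PersistentEntity.ReplyType

sealed trait JobOfferCommand
// The applicant's data travels inside the command, so the entity only validates against its own state.
final case class ApplyForAJob(applicantId: String, applicantSkills: Set[String])
  extends JobOfferCommand with ReplyType[ApplicationResult]

sealed trait ApplicationResult
case object ApplicationAccepted extends ApplicationResult
case object SkillsMismatch extends ApplicationResult

sealed trait JobOfferEvent
final case class ApplicationReceived(applicantId: String) extends JobOfferEvent

final case class JobOfferState(requiredSkills: Set[String], applicants: Set[String])

class JobOfferEntity extends PersistentEntity {
  override type Command = JobOfferCommand
  override type Event   = JobOfferEvent
  override type State   = JobOfferState

  // Creation command omitted for brevity; requiredSkills would be set there.
  override def initialState: JobOfferState = JobOfferState(Set.empty, Set.empty)

  override def behavior: Behavior =
    Actions()
      .onCommand[ApplyForAJob, ApplicationResult] {
        case (ApplyForAJob(applicantId, skills), ctx, state) =>
          if (state.requiredSkills.subsetOf(skills))
            ctx.thenPersist(ApplicationReceived(applicantId))(_ => ctx.reply(ApplicationAccepted))
          else {
            ctx.reply(SkillsMismatch)
            ctx.done
          }
      }
      .onEvent {
        case (ApplicationReceived(applicantId), state) =>
          state.copy(applicants = state.applicants + applicantId)
      }
}

The open question is where the caller of ApplyForAJob gets those skills from: an ApplicantEntity in the same service, or a separate service.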

Thanks for any comments!

Hi!

In my use case I created an authorization service which manages two PersistentEntities: Roles and Groups.
I've got the following PersistentEntities:
1.) Group - Fields are:

  • persistent id
  • a name (string)
  • a list of user-uuids
  • a list of role-uuids.

2.) Role - Fields are:

  • persistent id
  • a name (string)
  • a list of permissions

The permission object is not an entity - it's just an object with the fields entityGroup, action and resource (all strings). (This layout is from Apache Shiro - an example of a permission is device:read:12345, saying that this role permits the read action on device 12345.)
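Roughly, the shapes look like this (just an illustrative sketch - names are simplified and it's not a literal copy of my code):

import java.util.UUID

// Permission is a plain value object, not a PersistentEntity.
final case class Permission(entityGroup: String, action: String, resource: String) {
  // Shiro-style string form, e.g. "device:read:12345"
  def shiroString: String = s"$entityGroup:$action:$resource"
}

// States of the two PersistentEntities described above.
final case class RoleState(name: String, permissions: Set[Permission])
final case class GroupState(name: String, userIds: Set[UUID], roleIds: Set[UUID])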

I've got one read side which subscribes to both event streams and updates relational database tables on events. Every processor processes its own tables. For queries I can then do a very simple join and get all permissions of a given user (a user is in n groups, a group has n roles, a role has n permissions).

E.g.:

select rp.entityGroup || ':' || rp."access" || ':' || rp.entityId
from authz_group_user gu
inner join authz_group_role gr on gu.groupid = gr.groupid
inner join authz_role_perm rp on gr.roleid = rp.roleid
where userid = '61067a85-44ce-491f-a9ab-b1942d2df15b'
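The processors themselves are the usual Lagom JDBC read-side processors - something like the sketch below for the Group event stream (event, handler and offset names are simplified placeholders; the Role processor looks the same but fills authz_role_perm):

import java.sql.Connection
import java.util.UUID
import com.lightbend.lagom.scaladsl.persistence.jdbc.JdbcReadSide
import com.lightbend.lagom.scaladsl.persistence.{AggregateEvent, AggregateEventTag, EventStreamElement, ReadSideProcessor}

sealed trait GroupEvent extends AggregateEvent[GroupEvent] {
  override def aggregateTag: AggregateEventTag[GroupEvent] = GroupEvent.Tag
}
object GroupEvent {
  val Tag: AggregateEventTag[GroupEvent] = AggregateEventTag[GroupEvent]
}
final case class UserAddedToGroup(groupId: UUID, userId: UUID) extends GroupEvent

class GroupEventProcessor(readSide: JdbcReadSide) extends ReadSideProcessor[GroupEvent] {

  override def aggregateTags: Set[AggregateEventTag[GroupEvent]] = Set(GroupEvent.Tag)

  override def buildHandler(): ReadSideProcessor.ReadSideHandler[GroupEvent] =
    readSide
      .builder[GroupEvent]("groupEventOffset")
      .setEventHandler[UserAddedToGroup](processUserAdded)
      .build()

  // Each processor only touches its own tables.
  private def processUserAdded(connection: Connection, element: EventStreamElement[UserAddedToGroup]): Unit = {
    val stmt = connection.prepareStatement(
      "insert into authz_group_user (groupid, userid) values (?, ?)")
    stmt.setString(1, element.event.groupId.toString)
    stmt.setString(2, element.event.userId.toString)
    stmt.executeUpdate()
    stmt.close()
  }
}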

The service itself provides distinct calls to each persistent entity. On the read side I get the advantage of the join possibility.

I'm quite sure I could do this with two distinct services and a remote read side as well.
But this separation comes at the cost of added complexity (three services instead of one) for this rather small service, and I'm not sure I wouldn't run into data integrity problems if I split everything up.

One possible problem with splitting up: a user removes a permission from a group on the write side, but the remote read side is down and the event is not processed. This could lead to an inconsistency, allowing another user to access resources he is not allowed to (according to the read side). If the read side is IN this service, this situation is not only very unlikely, I could also handle it easily in the service itself, e.g. by returning from the data modification request only after read-side processing has completed.
If that were not a problem, splitting still seems to me like cracking a walnut with a sledgehammer.

I do not know whether this is an anti-pattern or not. I think, as @ignasi35 said, it depends. For my use case, I think my solution is "appropriate". No other service of mine has two PersistentEntities, but I would be very glad to hear some critique from people with more experience than myself.

greetings,
Michael

Hi,

I think your question comes down to Sizing individual microservices - "Does this service do only one thing?"
I would also say that it depends, case by case.

"One thing", for me, means exposing "one thing" publicly (using service calls and/or public events).
If there are "multiple things" in one service and some/all of them need to be exposed publicly, then I would consider splitting it into multiple services.
On the other hand, having multiple services comes with a cost - additional resources (CPU, RAM, disk, …).
The microservice concept as such requires far more resources than, for example, a monolith (at least in my case), and this is something you have to get used to (especially from a financial point of view).
Also consider that the service's functionality will grow over time, and it is a question whether you will still be able to split it then.

One case where I use multiple entities in one service:
When I require an atomic number increment I use a "helper" entity with a GetAndIncrement command that increments the current number, persists it and returns it. This "helper" entity is used by the "aggregate root" entity and is not exposed publicly - it is only used locally.
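A minimal sketch of what I mean (names are placeholders; serializers are omitted):

import com.lightbend.lagom.scaladsl.persistence.PersistentEntity
import com.lightbend.lagom.scaladsl.persistence.PersistentEntity.ReplyType

sealed trait CounterCommand
// Replies with the new value after incrementing and persisting.
case object GetAndIncrement extends CounterCommand with ReplyType[Long]

sealed trait CounterEvent
final case class Incremented(newValue: Long) extends CounterEvent

// Internal "helper" entity: not exposed through the service API,
// only invoked locally on behalf of the aggregate root.
class CounterEntity extends PersistentEntity {
  override type Command = CounterCommand
  override type Event   = CounterEvent
  override type State   = Long

  override def initialState: Long = 0L

  override def behavior: Behavior =
    Actions()
      .onCommand[GetAndIncrement.type, Long] {
        case (GetAndIncrement, ctx, state) =>
          ctx.thenPersist(Incremented(state + 1))(evt => ctx.reply(evt.newValue))
      }
      .onEvent {
        case (Incremented(newValue), _) => newValue
      }
}

The service implementation then just calls something like persistentEntityRegistry.refFor[CounterEntity](counterId).ask(GetAndIncrement) locally whenever the aggregate root needs the next number.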

I hope this helps.

Br,
Alan