Service Design - API referencing other API services

Hello all,
I’d like to get your opinions on how to design API services in Lagom. There is something I want to try.

Let’s say in my Lagom project, I have the following services:

  • admin-api
  • admin-impl
  • accounting-api
  • accounting-impl

The admin service acts as an “internal gateway” service: all interactions between internal company staff and the system will go through it.

The service will be responsible for authenticating internal users, and also for invoking other services, like the accounting service, on behalf of the caller.

The reason for doing this is that the business domain requires very strict auditing of all user activities within the system.

So let’s say an internal staff member wants to check the account balance of the system. From the GUI, the user invokes the endpoint exposed by the Admin Service. The Admin Service authenticates the user and checks whether she has the right permission to make that call; if so, it makes a synchronous call to the Accounting Service using “.invoke()” and returns the result back to the user.

Also, if the internal staff member wants to create a new record in the system, she makes a POST call to an Admin Service endpoint, and so on.

The plan is that in Kubernetes, only the Admin Service will be exposed to the load balancer, and the Admin GUI will interact only with the Admin Service APIs, nothing else.
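To make the idea concrete, here is a rough sketch of the flow I have in mind. All the names and stubs below are made up for illustration (this is not real Lagom code; in the real thing the downstream call would be the accounting service client’s “.invoke()”):

```java
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Illustrative sketch (hypothetical names): admin-service authenticates the
// caller, checks permissions, records an audit entry, and only then delegates
// to accounting-service on behalf of the caller.
public class AdminGatewaySketch {
    // Stand-in for the user/permission store behind admin-service.
    static final Map<String, Set<String>> PERMISSIONS =
        Map.of("joo", Set.of("accounting-read"));
    // Stand-in for the strict audit trail the business domain requires.
    static final List<String> AUDIT_LOG = new ArrayList<>();

    // Stand-in for accountingService.checkBalance().invoke()
    static CompletionStage<String> accountingCheckBalance() {
        return CompletableFuture.completedFuture("balance=1000");
    }

    static CompletionStage<String> checkBalance(String userId) {
        Set<String> perms = PERMISSIONS.getOrDefault(userId, Set.of());
        if (!perms.contains("accounting-read")) {
            // In a real service this would become a 403 response.
            return CompletableFuture.failedFuture(new SecurityException("forbidden"));
        }
        AUDIT_LOG.add(userId + " invoked check-balance"); // audit every user action
        return accountingCheckBalance();                  // delegate downstream
    }

    public static void main(String[] args) throws Exception {
        System.out.println(checkBalance("joo").toCompletableFuture().get()); // balance=1000
        System.out.println(AUDIT_LOG);
    }
}
```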

What do you think about this kind of architecture?

Also, is it okay for an API service to have dependencies on other API services in Lagom? This would avoid duplicating the data structures at the API level when invoking other services.



Hi Joo,

I think this approach introduces code and ops complexity[1] and runtime latency[2]. It does provide consistency and auditability (gooood!) but it introduces coupling, and admin-service becomes a SPOF and a piece of your system that will need an upgrade on every downstream change (not so gooood).

[1] I refer to your last paragraph where you already need to depend on xyz-api from admin-api.
[2] you introduce a network hop on every call

There’s a different approach: make admin a library of filters and use that library in every service implementation. A successful session login would generate a token (a random UUID, a signed string with roles, …) that would be included in every request.

  • UUID: every service uses the admin filter to transform the UUID into a Principal object (using a local cache or a round trip to the user-service). This is decoupled but still relies on admin-service, which would be a SPOF.
  • signed token: Each service, using the admin filters, would then be able to authenticate the caller and authorise the request because all required data would be on a signed header. This approach is eventually consistent but could be enough in some cases. Also, it’s more resilient.
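For the signed-token flavour, a minimal sketch (my own illustration, not a Lagom or JWT API) could be an HMAC over a payload carrying the user id and roles, with the signing key shared between the user-service and the other services:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Minimal sketch of a signed token of the form "payload.signature":
// the payload carries the user id and roles, and the signature is an
// HMAC-SHA256 computed with a key shared between the services.
public class SignedToken {
    private static final String ALGO = "HmacSHA256";

    public static String sign(String payload, byte[] key) throws Exception {
        Mac mac = Mac.getInstance(ALGO);
        mac.init(new SecretKeySpec(key, ALGO));
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Returns the payload if the signature checks out, otherwise null.
    // (A production version should use a constant-time comparison.)
    public static String verify(String token, byte[] key) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return null;
        String payload = token.substring(0, dot);
        return sign(payload, key).equals(token) ? payload : null;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String token = sign("user=joo;roles=accounting-read", key);
        System.out.println(verify(token, key));       // the original payload
        System.out.println(verify(token + "x", key)); // null: tampered token
    }
}
```

Because all the data a service needs (identity and roles) travels inside the token, no call back to a central service is required on each request; the trade-off is that revocation is not immediate, hence the eventual consistency.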

There’s no one-size-fits-all solution; I just wanted to introduce alternatives.

You could base your solution on a hybrid implementation. When I log into my online bank, I provide a weak PIN (a few digits) that grants me access to many read-only operations. Then, for more advanced operations, I need to pass a stronger authentication challenge (SMS, coordinates, phone call, …).

My point is that maybe your solution is the strong authentication mechanism and should only be used for a very specific set of operations, while you use a weaker (more performant) authentication mechanism for not-so-critical actions.


PS: by authentication mechanism I’m referring not only to the password scheme and/or use of 2FA but also the validation processes (cached, non-cached, proxied via admin-service, …)


Thanks so much, Ignasi.

Yes, in fact that was the main concern I had on this design. Admin-svc becoming a SPOF.

I like your suggestion; the only problem is that I don’t have any good examples to follow for implementing it. I saw this being used in online-auction, but I think I’ve also seen a big warning sign somewhere saying “don’t use this in production”.

Let me check with you if I understood this correctly:

When you say “session login success”, you are talking about a successful login attempt from the client to the “user-service”, am I right?

So let’s say, for example, we have an endpoint “/user/login” in “user-service” that returns a token to the caller, and the user uses it to authenticate himself when making calls to the other services…

The point I am a bit unclear on is how we can “use the admin filter to transform the UUID into a Principal object”, and how each service can use the “admin filters to authenticate the caller and authorise the request”.

Perhaps there are some technical details I don’t quite know yet, like authorisation, the concept of a Principal, etc…

Also, would you avoid using my original suggestion, knowing that this internal service will only be used by at most 30 users? (As it gives good auditability.)

Thanks again!

Also @ignasi35, wouldn’t this approach make testing a bit more complex?

i.e. I write system integration tests in Python that call the endpoints to make sure they return something. To do this with the “header” approach you mentioned, the integration tests would first have to authenticate themselves and then attach that token to every call they make to the services.

The same goes for when I do ad-hoc testing of the endpoints using Postman.

Lastly, would the approach you suggested be considered a JWT approach?


Hi @lejoow,

OK, there are many open questions. I’ll try to comment on all:

  • if you have a single service exposed to the outside, that service will handle all requests and reroute them to downstream services. In online-auction, that service is a Play application which handles sessions, renders server-side HTML, handles downstream service failures with fallback calls, etc… This is similar to your original idea. In online-auction I think the search API does not go through the Play gateway and is exposed so that HTML requests hit search-service-impl directly (I’m not sure we completed that implementation, though).
  • if you implemented a JS/HTML single-page application (aka SPA) and exposed all your services’ APIs so that browsers can hit the services directly, you’d decrease latency. One such service would have the api/users/login endpoint, which would produce a token (more on that later) that the SPA would have to include in every request.
  • whether you use a single gateway or an SPA, you may want all your services to receive some information about who the principal is and what roles she has. A first choice is to use a random token that FooService can use to send a request to user-service to obtain the principal. The second choice is to use a string that’s signed by the user-service, so that when FooService gets a request, all the information is self-contained. This second choice is very similar to JWT (but I’m no JWT expert).
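To make the first (random token) choice concrete, here is a minimal sketch (all names are mine, not a Lagom API) of a shared filter that resolves a session UUID from a request header into a Principal, first from a local cache and otherwise via a round trip to user-service:

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical sketch: the "admin filter library" every service would apply.
// It turns an opaque session UUID into a Principal, then authorises the
// request by checking the Principal's roles.
public class AuthFilterSketch {
    public static final class Principal {
        public final String userId;
        public final Set<String> roles;
        public Principal(String userId, Set<String> roles) {
            this.userId = userId;
            this.roles = roles;
        }
    }

    private final Map<String, Principal> cache = new HashMap<>();
    private final Function<String, Optional<Principal>> userServiceLookup;

    // userServiceLookup stands in for the network call to user-service.
    public AuthFilterSketch(Function<String, Optional<Principal>> userServiceLookup) {
        this.userServiceLookup = userServiceLookup;
    }

    // Authentication: token -> Principal (empty means the service replies 401).
    public Optional<Principal> authenticate(String sessionUuid) {
        Principal cached = cache.get(sessionUuid);
        if (cached != null) return Optional.of(cached);
        Optional<Principal> fresh = userServiceLookup.apply(sessionUuid); // round trip
        fresh.ifPresent(p -> cache.put(sessionUuid, p));
        return fresh;
    }

    // Authorisation: does the resolved Principal carry the required role?
    public boolean authorise(Principal p, String requiredRole) {
        return p.roles.contains(requiredRole);
    }

    public static void main(String[] args) {
        AuthFilterSketch filter = new AuthFilterSketch(
            uuid -> "valid-uuid".equals(uuid)
                ? Optional.of(new Principal("joo", Set.of("accounting-read")))
                : Optional.empty());
        System.out.println(filter.authenticate("valid-uuid").isPresent()); // true
        System.out.println(filter.authenticate("expired").isPresent());    // false
    }
}
```

A real cache would need an expiry policy; otherwise a logout or role change at user-service would never be picked up by the cached entries.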

In any case, the moment you enable AUTH filters on your services, your tests will need to either:

  • set up services without AUTH (inject a noop AUTH filter?)
  • depend on a `test-user-service`
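As a sketch of the noop-filter option (names are made up): if services depend on an AuthCheck interface, production wiring injects the real check while test wiring injects a pass-through, so integration tests don’t have to log in first:

```java
// Hypothetical sketch: swap the AUTH filter for a noop in tests via
// dependency injection.
public class NoopAuthDemo {
    interface AuthCheck {
        boolean allowed(String token);
    }

    // Stand-in for the real check (e.g. verifying a signed token).
    static final AuthCheck REAL = token -> token != null && token.startsWith("signed:");
    // Test-only pass-through: every request is allowed.
    static final AuthCheck NOOP = token -> true;

    static String handle(AuthCheck auth, String token) {
        return auth.allowed(token) ? "200 OK" : "401 Unauthorized";
    }

    public static void main(String[] args) {
        System.out.println(handle(REAL, null));         // 401 Unauthorized
        System.out.println(handle(REAL, "signed:abc")); // 200 OK
        System.out.println(handle(NOOP, null));         // 200 OK (tests skip auth)
    }
}
```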