I have 6 modules, each with its own implementation similar to HelloService, and each is deployed as a microservice inside Kubernetes. Suppose I want to scale out one of the modules; how would I do that? I would create a new replica inside Kubernetes, and have a singleton that manages the work sent to the worker nodes inside the replica. Also, how would role-based scaling work in this case?
You can use akka-management, which has out-of-the-box support for node discovery using the Kubernetes API.
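As a rough sketch, cluster bootstrap with Kubernetes API discovery is driven by configuration like the following (the service name and contact-point count here are hypothetical placeholders; adapt them to your deployment):

```hocon
akka.management {
  cluster.bootstrap {
    contact-point-discovery {
      # Discover peer pods via the Kubernetes API
      discovery-method = kubernetes-api
      # Hypothetical service name; must match your pod labels/selector
      service-name = "hello-service"
      # Minimum number of contact points before forming a cluster
      required-contact-point-nr = 2
    }
  }
}
```

You will also need the RBAC permissions for the pods to query the Kubernetes API; the akka-management docs cover that part.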
Lagom already shards the entities, using Akka Cluster Sharding under the hood. In combination with akka-management, scaling the nodes that host the persistent entities becomes just a matter of application configuration. You can read more about node roles in general here, and the Lagom configuration describes how you can specify that entities run only on cluster nodes with a specific role.
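To make the role-based scaling concrete, a minimal configuration sketch might look like this (the role name is hypothetical; check the Lagom reference configuration for the exact setting in your version):

```hocon
# Roles this node advertises to the rest of the cluster
akka.cluster.roles = ["hello-workers"]

# Lagom: run persistent entities only on nodes that carry this role,
# so you can scale the entity-hosting deployment independently
lagom.persistence.run-entities-on-role = "hello-workers"
```

With this in place, scaling out the module is just increasing the replica count of the Kubernetes Deployment whose pods carry that role; sharding rebalances the entities across the new nodes.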
Regarding the singleton, have a look at Akka’s Cluster Singleton documentation.
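For the work-managing singleton, a sketch using the Akka Typed Cluster Singleton API could look like the following. Everything here (`WorkCoordinator`, `Dispatch`, the singleton name) is a hypothetical example, not an API Lagom provides for you:

```scala
import akka.actor.typed.{ActorSystem, Behavior, SupervisorStrategy}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.typed.{ClusterSingleton, SingletonActor}

// Hypothetical coordinator that hands out work to workers
object WorkCoordinator {
  sealed trait Command
  final case class Dispatch(job: String) extends Command

  def apply(): Behavior[Command] =
    Behaviors.receiveMessage { case Dispatch(job) =>
      // forward the job to a worker node here
      Behaviors.same
    }
}

// Register the singleton once per node; the cluster guarantees a single
// running instance, and returns a proxy that routes to wherever it lives
def initCoordinator(system: ActorSystem[_]) =
  ClusterSingleton(system).init(
    SingletonActor(
      Behaviors
        .supervise(WorkCoordinator())
        .onFailure(SupervisorStrategy.restart),
      "work-coordinator"))
```

Note that a singleton is a potential bottleneck and a single point of contention, so only use it where you genuinely need one-at-a-time coordination; for per-entity work, Cluster Sharding already gives you distribution without it.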
You should also consider sharding the entity events, where that makes sense. The Event Tags section in the Read Side documentation covers that.
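For illustration, sharded event tags in the Lagom Scala API look roughly like this (`HelloEvent` and `GreetingChanged` are hypothetical names echoing the HelloService example):

```scala
import com.lightbend.lagom.scaladsl.persistence.{
  AggregateEvent, AggregateEventShards, AggregateEventTag, AggregateEventTagger
}

object HelloEvent {
  // Keep this number stable once chosen: tags are written
  // alongside the persisted events
  val NumShards = 10
  val Tag: AggregateEventShards[HelloEvent] =
    AggregateEventTag.sharded[HelloEvent](NumShards)
}

sealed trait HelloEvent extends AggregateEvent[HelloEvent] {
  override def aggregateTag: AggregateEventTagger[HelloEvent] = HelloEvent.Tag
}

final case class GreetingChanged(message: String) extends HelloEvent
```

Sharding the tags lets the read-side processors for one event type run in parallel across nodes instead of being a single serialized stream.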
Finally, you should ensure your cluster can handle network partitions properly, so it can avoid a split-brain. I recommend you start by reading the Split Brain Resolver documentation, since you’ll be using Akka tools for your clustering solution.
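Assuming you are on Akka 2.6+ (where the Split Brain Resolver is part of open-source Akka), enabling it is a small configuration change; the strategy shown is just one option:

```hocon
akka.cluster {
  # Use the Split Brain Resolver as the downing provider
  downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  split-brain-resolver {
    # keep-majority downs the minority side of a partition;
    # other strategies (static-quorum, keep-oldest, ...) exist
    active-strategy = keep-majority
  }
}
```

Without a downing provider, a partitioned cluster can end up with two halves each running their own singletons and shards, which is exactly the split-brain you want to avoid.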
Hope this helps.