We run a pretty simple remote actor in an Akka cluster. There are 4 actors that receive a request, make an HTTP call using Akka HTTP, do a very basic map on the response, and return it.
The system receives 20k TPS, and we need twenty 2-CPU containers to run it, which seems like too much.
- A thread dump shows two default-blocking-io-dispatcher threads (-141/-142). Where could they be coming from? The blocking-io-dispatcher is part of Akka Streams, but we don’t use it explicitly anywhere, nor do we make any blocking calls.
- How would you recommend tuning the thread pools? Right now I am using the default dispatcher for the actors and ExecutionContext.Implicits.global for the Future operations (simple maps), and I see 24 .default-dispatcher threads.
- What’s the best way to avoid context switching on the map operations? I was thinking of using FastFuture, a trampoline execution context, or https://github.com/kamon-io/Kamon/blob/master/kamon-core/src/main/scala/kamon/util/CallingThreadExecutionContext.scala
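For context on the second point, this is roughly what I was considering instead of ExecutionContext.Implicits.global: a small dedicated dispatcher for the Future maps, looked up from the actor system. The dispatcher name and pool sizes below are placeholders, not our actual config:

```
# application.conf — sketch of a dedicated dispatcher for the Future maps
# ("map-dispatcher" and the pool sizes are placeholders)
map-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 1.0
    parallelism-max = 4
  }
  # higher throughput = fewer thread handoffs for cheap tasks
  throughput = 100
}
```

It would then be used as `implicit val ec: ExecutionContext = system.dispatchers.lookup("map-dispatcher")` instead of the global one.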
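On the third point, the trampoline idea I had in mind looks something like the sketch below: an ExecutionContext that runs each task directly on the submitting thread, so a map on an already-completed Future pays no context switch. (Scala 2.13+ ships ExecutionContext.parasitic, which is the production version of this idea and also guards against stack overflow, which this naive sketch does not.)

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Naive calling-thread ExecutionContext: execute() runs the task inline on
// whatever thread submits it, avoiding a handoff to a pool thread.
object CallingThreadEC extends ExecutionContext {
  def execute(runnable: Runnable): Unit = runnable.run()
  def reportFailure(t: Throwable): Unit = t.printStackTrace()
}

implicit val ec: ExecutionContext = CallingThreadEC

val caller = Thread.currentThread.getName
var mapThread = ""
// Future is already completed, so the map runs synchronously, right here.
val doubled = Future.successful(21).map { n =>
  mapThread = Thread.currentThread.getName
  n * 2
}
```

With this in place, `mapThread` ends up equal to `caller`, confirming the map never left the calling thread.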