Default-blocking-io-dispatcher in Akka HTTP

Hi,

We run a pretty simple remote actor in an Akka cluster. There are 4 actors that get a request, make an HTTP call using Akka HTTP, do a very basic map on the response, and return it.
The system receives 20k TPS, and we need 20 containers with 2 CPUs each to run it, which seems excessive.

  1. A thread dump shows two default-blocking-io-dispatcher threads (141 and 142). Where could they be coming from? The blocking-io-dispatcher is part of Akka Streams, but we don’t use it explicitly anywhere, nor do we make any blocking calls.
  2. How would you recommend tuning the thread pools? Right now I am using the default dispatcher for the actors and ExecutionContext.Implicits.global for the Future operations (simple maps), and I see 24 default-dispatcher threads.
  3. What’s the best way to avoid context switching on the map operations? I was thinking of using FastFuture, a trampolining execution context, or https://github.com/kamon-io/Kamon/blob/master/kamon-core/src/main/scala/kamon/util/CallingThreadExecutionContext.scala
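For context on option 3, here is a minimal sketch of a calling-thread ExecutionContext, similar in spirit to the linked Kamon class (names and caveats here are my own, not from Kamon):

```scala
import scala.concurrent.{ExecutionContext, Future}

// A minimal calling-thread ExecutionContext: it runs each task
// synchronously on the submitting thread, so a cheap map on an
// already-completed Future incurs no context switch.
// Caveat: long transformation chains deepen the stack, and a slow
// task will block the completing thread (often an I/O thread), so
// this is only suitable for trivial, non-blocking transformations.
object CallingThreadExecutionContext extends ExecutionContext {
  override def execute(runnable: Runnable): Unit = runnable.run()
  override def reportFailure(cause: Throwable): Unit = cause.printStackTrace()
}

// Example: the map runs on whichever thread completes the future;
// for an already-completed future, that is the calling thread.
// val result: Future[Int] =
//   Future.successful(41).map(_ + 1)(CallingThreadExecutionContext)
```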

Hi @eugenemiretsky,

I would do some profiling to find out what’s going on. The number of threads definitely looks too high if each container only gets 2 CPUs. You should probably use the default dispatcher for the Future use cases as well, and also reduce the number of threads on the default dispatcher. The ForkJoinPool backing Implicits.global and Akka’s default dispatcher is known to have some overhead from spinning while waiting for new tasks. In a CPU-constrained scenario you should be fine with the number of threads equal to the number of cores available.
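As a sketch of the second suggestion (assuming 2-CPU containers; the exact values should come out of your profiling), the default dispatcher's pool can be capped at the core count in application.conf:

```hocon
# Hypothetical application.conf fragment: pin the default
# dispatcher's ForkJoinPool to roughly the 2 cores the container
# actually has, instead of a much larger machine-derived default.
akka.actor.default-dispatcher {
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 1.0
    parallelism-max = 2
  }
}
```

The Future maps can then share the same pool by using the actor system's dispatcher as the implicit ExecutionContext (e.g. `implicit val ec: ExecutionContext = system.dispatcher`) instead of Implicits.global.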

Johannes