Default-blocking-io-dispatcher in Akka HTTP


We run a pretty simple remote actor setup in an Akka cluster. There are 4 actors that receive a request, make an HTTP call using Akka HTTP, perform a very basic map on the response, and return it.
The system receives 20k TPS, and we need 20 2-CPU containers to run it, which seems like too much.

  1. A thread dump shows 2 default-blocking-io-dispatcher threads (141/142). Where could they be coming from? The blocking-io-dispatcher is part of Akka Streams, but we don’t use it anywhere explicitly, nor do we make any blocking calls.
  2. How would you recommend tuning the thread pools? Right now I am using the default dispatcher for the actors and for the Future operations (simple maps), and I see 24 default-dispatcher threads.
  3. What’s the best way to avoid context switching on the map operations? I was thinking of using FastFuture, a trampoline execution context, or …
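On the context-switching question, a trampoline/same-thread execution context runs cheap continuations directly on the thread that completed the Future, so the map never hops to another dispatcher thread. A minimal sketch using the standard library's `ExecutionContext.parasitic` (Scala 2.13+; on earlier versions Akka's `akka.dispatch.ExecutionContexts` offers a similar same-thread context, and Akka HTTP's `FastFuture` additionally skips scheduling entirely for already-completed futures):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// The expensive work runs on the global pool...
val response: Future[Int] = Future(21)(ExecutionContext.global)

// ...but the cheap map runs on whichever thread completed the
// future, avoiding a second dispatch / context switch.
val mapped: Future[Int] = response.map(_ * 2)(ExecutionContext.parasitic)

val result = Await.result(mapped, 1.second)
```

`parasitic` is only safe for short, non-blocking continuations like these simple maps; anything heavier would stall the completing thread.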

Hi @eugenemiretsky,

I would do some profiling to find out what’s going on. The number of threads definitely looks too high if each container only gets 2 CPUs. You should probably use the default dispatcher for the Future use cases as well, and also strip down the number of threads on the default dispatcher. The ForkJoinPool backing Akka’s default dispatcher is known to have some overhead while spinning to wait for new tasks. In a CPU-constrained scenario you should be fine with the number of threads being equal to the number of cores available.
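As a sketch of that sizing advice in `application.conf`, capping the default dispatcher at the core count (the values here are illustrative for a 2-CPU container, not a general recommendation):

```hocon
akka.actor.default-dispatcher {
  executor = "fork-join-executor"
  fork-join-executor {
    # parallelism = min(max(cores * factor, min), max)
    # Pin it to the 2 cores the container actually gets.
    parallelism-min = 2
    parallelism-factor = 1.0
    parallelism-max = 2
  }
}
```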


Would you recommend using the Affinity dispatcher or the Pinned dispatcher for each actor? We have an actor per CPU core and, as I mentioned, no blocking operations.

No, usually the fork-join-executor will give you the best latency and throughput.

Having a somewhat similar issue here - I don’t seem to be able to tune the client to get good performance.