Blocking IO dispatcher without a BatchingExecutor

Hi,

I’m trying to set up a dispatcher that will be used exclusively for blocking IO, and I seem to be running into trouble with that.

As far as I can tell, every dispatcher that I fetch via the Dispatchers.lookup method implements MessageDispatcher, which in turn extends BatchingExecutor. The documentation of BatchingExecutor says:

A batching executor can create deadlocks if code does not use scala.concurrent.blocking when it should, because tasks created within other tasks will block on the outer task completing.

Since the dispatcher I’m trying to create is intended for blocking IO, batching is not a behavior I want. If I’m running with the BatchingExecutor, it seems I would have to wrap every single call made through it with blocking, which is something I would like to avoid.
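To illustrate the hazard from the docs, here is a minimal sketch of the pattern as I understand it (the executor passed in as `ec` is assumed to be a batching one; on such an executor the inner task can be queued in the same batch behind the outer task that waits on it):

```scala
import scala.concurrent.{Await, ExecutionContext, Future, blocking}
import scala.concurrent.duration._

// A task created within another task: the documented deadlock shape.
// Wrapping the wait in `blocking` signals the executor so it can
// compensate instead of deadlocking on its own batch.
def outer()(implicit ec: ExecutionContext): Int = {
  val inner = Future(21 * 2)
  blocking {
    Await.result(inner, 5.seconds)
  }
}
```

This per-call wrapping is exactly what I’d like to avoid having to do everywhere.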

Is there a way to have a dispatcher that doesn’t use batching? Some configuration value that can be tweaked? Or am I missing something, and batching shouldn’t be an obstacle even for a blocking IO dispatcher?

Thanks

Calling blocking IO from an actor blocks on something external, so that is fine and will not cause the executor to deadlock.

Furthermore, the batching will only happen for Future callbacks, so the deadlock scenario does not apply to actors processing messages.
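For completeness, a dedicated blocking-IO dispatcher is typically defined in application.conf with a thread-pool executor (the name and pool size below are illustrative, tune them for your workload):

```hocon
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
```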

Thanks for the answer.

The thing is that I’m using the same ExecutionContext for my Futures as well, and I did actually stumble upon a deadlock that was resolved by wrapping a call with blocking.

In the case of using Futures with that execution context, is there any other way to bypass the batching (or disable it altogether)? Or would you recommend just avoiding the Akka execution context and initializing one of my own?

I think there is a risk that you end up with the same deadlocks with the Scala 2.13 execution context, since that also does batching internally for performance reasons (it’s not a public API, which is one of the reasons we have a duplicate in Akka).

If I initialize my own ExecutionContext I can just implement the trait directly, and bypass whatever batching is implemented by default in the execution contexts provided by the standard library.

If there isn’t a way to reuse the ExecutionContext from Akka without batching, I guess I’ll have to provide my own.
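Something like this is what I have in mind, a plain ExecutionContext with no batching at all, backed by a pool dedicated to blocking IO (the cached pool is just an assumption on my part; sizing would need tuning):

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// No BatchingExecutor involved: tasks go straight to the underlying pool.
val blockingIoEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newCachedThreadPool())
```

(The pool would of course need to be shut down when the application stops.)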

Ultimately, there’s no easy way to avoid starvation (and deadlocks) when you have blocking calls that you don’t flag as such. A BatchingExecutor thread might be easier to starve, because a single unflagged blocking call can block all tasks already batched behind the currently blocking task. But even without BatchingExecutor, you can still starve the whole thread pool if you run enough of those blocking calls in parallel. And even when calls are flagged, there’s no guarantee that the thread pool can provide additional threads to resolve the issue, so with co-depending blocking tasks you might still end up in a deadlock.
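The starvation case can be demonstrated without any batching at all; a sketch on a deliberately tiny pool (size 1 is an assumption to make the effect immediate, and a pool built with `fromExecutor` cannot grow to compensate, flagged or not):

```scala
import java.util.concurrent.{ExecutorService, Executors}
import scala.concurrent.{Await, ExecutionContext, Future, TimeoutException}
import scala.concurrent.duration._

val pool: ExecutorService = Executors.newFixedThreadPool(1)
implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

val outer = Future {
  val inner = Future(42)           // can never run: the only thread is busy below
  Await.result(inner, 1.second)    // unflagged blocking wait -> starvation
}

val starved =
  try { Await.result(outer, 3.seconds); false }
  catch { case _: TimeoutException => true }

pool.shutdownNow()
```

Here `starved` ends up true: the outer task holds the pool’s only thread while waiting for an inner task that is stuck behind it in the queue.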

So, the usual suggestions apply:

  • avoid blocking
  • move blocking code to a dedicated blocking dispatcher
  • flag blocking code with scala.concurrent.blocking
  • avoid co-depending blocking tasks (easier said than done, because you might not even know about the dependencies)
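The flagging suggestion looks like this in practice (Thread.sleep stands in for a real blocking IO call):

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// The global EC is BlockContext-aware: wrapping the sleep in `blocking`
// lets its ForkJoinPool spin up extra threads while these calls block,
// instead of the sleeps eating the pool's normal parallelism.
val fs = (1 to 8).map { i =>
  Future {
    blocking {
      Thread.sleep(200)  // stand-in for blocking IO
      i
    }
  }
}
val total = Await.result(Future.sequence(fs), 10.seconds).sum
```

Note that this only helps on pools that actually honor the flag; on a plain `fromExecutor` pool, `blocking` is a no-op.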