RestartFlow eventually fails the stream?

The following statement in the scaladoc seems unclear to me. It does not give a clear answer as to whether the stream will fail once the number of restarts reaches the maxRestarts value:

This [[Flow]] will not emit any failure. The failures by the wrapped [[Flow]] will be handled by restarting the wrapping [[Flow]] as long as maxRestarts is not reached.

Could someone confirm that the stream will fail? Thanks.

Hi @pawelkaczor,

the Flow will fail iff maxRestarts is greater than or equal to zero and the number of consecutive restarts exceeds that value.


Looking at the code a bit more, I don’t see how this could ever trigger a failure:

  protected final def maxRestartsReached(): Boolean = {
    // Check if the last start attempt was more than the minimum backoff
    if (resetDeadline.isOverdue()) {
      log.debug("Last restart attempt was more than {} ago, resetting restart count", minBackoff)
      restartCount = 0
    }
    restartCount == maxRestarts
  }

  // Set a timer to restart after the calculated delay
  protected final def scheduleRestartTimer(): Unit = {
    val restartDelay = BackoffSupervisor.calculateDelay(restartCount, minBackoff, maxBackoff, randomFactor)
    log.debug("Restarting graph in {}", restartDelay)
    scheduleOnce("RestartTimer", restartDelay)
    restartCount += 1
    // And while we wait, we go into backoff mode
  }

  // Invoked when the backoff timer ticks
  override protected def onTimer(timerKey: Any) = {
    resetDeadline = minBackoff.fromNow
  }

The reset deadline keeps being set to minBackoff.fromNow after each backoff, and the backoff itself takes at least minBackoff. That means that by the time maxRestartsReached runs, resetDeadline.isOverdue() is always true, so restartCount is always reset to 0 before the comparison.

Where’s the flaw in my logic? This seems consistent with what I’m observing, i.e. the flow is never interrupted if maxRestarts > 0.

Never mind, I found the Restart sources documentation, where it is explained that maxRestarts only applies when the flow fails within minBackoff of being started. In my opinion, that makes the attribute very badly named. This merits a documentation change, IMO.
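To convince myself, I sketched a stand-alone model of the counter logic quoted above, in plain Scala with no Akka dependency. This is only an approximation: a hand-advanced virtual clock stands in for Deadline, and runTime is a made-up knob for how long the wrapped flow survives after each (re)start. It reproduces both behaviors: a flow that fails immediately does hit the maxRestarts cap, while a flow that outlives minBackoff before failing has its counter reset every cycle and is restarted forever.

```scala
object RestartCounterModel {
  // Mirrors the interplay of maxRestartsReached() / scheduleRestartTimer() /
  // onTimer(), with a virtual clock (`now`) instead of Deadline.
  // Returns true when the simulated stream would be failed rather than restarted.
  def streamFails(minBackoff: Long, maxRestarts: Int, runTime: Long, cycles: Int): Boolean = {
    var now = 0L
    var restartCount = 0
    var resetDeadline = minBackoff              // minBackoff.fromNow at first start
    var cycle = 0
    while (cycle < cycles) {
      now += runTime                            // the wrapped flow runs, then fails
      if (now > resetDeadline) restartCount = 0 // isOverdue(): reset the counter
      if (restartCount == maxRestarts) return true // maxRestartsReached(): fail the stream
      restartCount += 1                         // scheduleRestartTimer()
      now += minBackoff                         // wait out the backoff delay
      resetDeadline = now + minBackoff          // onTimer(): resetDeadline = minBackoff.fromNow
      cycle += 1
    }
    false
  }
}

object Demo extends App {
  // A flow that dies immediately is failed once restartCount reaches maxRestarts.
  println(RestartCounterModel.streamFails(minBackoff = 10, maxRestarts = 3, runTime = 0, cycles = 100))
  // A flow that outlives minBackoff before each failure is restarted indefinitely.
  println(RestartCounterModel.streamFails(minBackoff = 10, maxRestarts = 3, runTime = 15, cycles = 100))
}
```

The key point is that resetDeadline is set at restart time, so isOverdue() is only false when the failure arrives within minBackoff of the (re)start, which is exactly when the counter is allowed to accumulate.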
