Remote actor system will not terminate when run in a servlet container

I have created a bare-bones servlet web app that uses Akka remoting. I defined my config as follows:

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = DEBUG
  debug {
    lifecycle = on
    receive = on
  }
  actor {
    provider = "cluster"
  }
  remote {
    artery {
      transport = tcp
      canonical.hostname = "localhost"
      canonical.port = 25563
    }
  }
} 
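
For reference, nothing hands this config to the actor system explicitly; as far as I understand, ActorSystem("test") in the servlet below picks up application.conf from the classpath via ConfigFactory.load(). The explicit equivalent would be something like:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Equivalent to ActorSystem("test"): load application.conf from the classpath
// and hand it to the system explicitly.
val config = ConfigFactory.load()
val system = ActorSystem("test", config)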

And a simple servlet:

import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}
import org.slf4j.LoggerFactory
import akka.actor.ActorSystem

class TestServlet extends HttpServlet {

    private val log = LoggerFactory.getLogger(classOf[TestServlet])

    val system = ActorSystem("test")

    override def init(): Unit = {
        log.debug("Created system: " + system.name)
    }

    override def doGet(req: HttpServletRequest, res: HttpServletResponse): Unit = {
        res.setContentType("text/html")
        val pw = res.getWriter()
        pw.println("<html><body>")
        pw.println("Test Servlet")
        pw.println("</body></html>")
        pw.close()
    }

    override def destroy(): Unit = {
        super.destroy()
        log.debug("Terminating")
        implicit val ec = system.dispatcher
        system.terminate().foreach { term =>
            log.debug("Terminated: " + term)
        }
    }
}

When I run my app in a Tomcat (8.5) container and then unload it, the actor system will not terminate.

I am using Akka version 2.6.8.

Full logging attached: logging.txt

All the code can be found here on GitHub.

Note that you cannot run the foreach callback of the termination future on the system.dispatcher execution context.

By the time the system has stopped and the future completes, that thread pool has been shut down, so the function doing the debug log will never be executed.
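
For example, a rough sketch of what I mean, reusing the system and log from your servlet, with the callback running on the Scala global execution context instead:

import scala.concurrent.ExecutionContext.Implicits.global

override def destroy(): Unit = {
    super.destroy()
    log.debug("Terminating")
    // The global context outlives the actor system, so this callback can still
    // run after the system's own dispatcher has been shut down.
    system.terminate().foreach { term =>
        log.debug("Terminated: " + term)
    }
}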

OK, but that is not what is preventing the system from terminating. I changed to the global ExecutionContext and the problem still persists.

Yes, sorry, I should have marked that as a “by the way”.

I saw in the log you shared that there is a coordinated-shutdown phase timing out after 10 s, so something is not stopping. That could be an actor doing something CPU-intensive or blocking. I’d recommend taking a thread dump to see if any threads are still busy with something after the termination has timed out.
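
If running jstack against the Tomcat process is awkward, a rough sketch of a programmatic alternative (plain JDK ThreadMXBean, reusing your servlet’s log; the helper name is just an example):

import java.lang.management.ManagementFactory

// Dump the stack of every live thread to the log, so you can see what is
// still busy after the termination future has timed out.
def logThreadDump(): Unit = {
  val bean = ManagementFactory.getThreadMXBean
  bean.dumpAllThreads(true, true).foreach { info =>
    log.debug(info.toString)
  }
}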

Will give that a try then. Thanks, Johan.

Well, that made things interesting. I defined the following implicit class to add a timeout to a future:

	import java.util.concurrent.TimeoutException
	import scala.concurrent.Future
	import scala.concurrent.duration.FiniteDuration
	import scala.util.Try
	import akka.actor.ActorSystem
	import akka.pattern.after

	implicit class FutureEnricher[T](future: Future[T]) {

		def withTimeout[U](duration: FiniteDuration)(f: Try[T] => U): Unit = {
			// A dedicated system whose scheduler stays alive while the "real"
			// system is shutting down.
			val system = ActorSystem("timeout")
			implicit val ec = system.dispatcher
			lazy val to = after(duration = duration, using = system.scheduler)(Future.failed(
				new TimeoutException("Future timed out after: " + duration)))
			Future.firstCompletedOf(Seq(future, to)).onComplete { result =>
				system.terminate()
				// Invoke the caller's callback with the outcome.
				f(result)
			}
		}
	}

Note that I define a new actor system just for its scheduler. My servlet’s destroy now looks like:

	import java.io.File
	import scala.concurrent.duration._
	import scala.util.{Failure, Success}

	override def destroy(): Unit = {
		super.destroy()
		log.debug("Terminating actor system")
		// Give termination 10 seconds before falling back to diagnostics.
		system.terminate().withTimeout(10.seconds) {
			case Success(term) =>
				log.debug("System terminated: " + term)
			case Failure(ex) =>
				log.debug("Failed to terminate", ex)
				val file = File.createTempFile("dump", "bin")
				HeapDump.dump(file, true)
				log.debug("Created heap dump: " + file.getPath)
		}
	}

And now suddenly everything terminates properly!