Memory leak after trying to stop actors

Hi, I am creating actors periodically every 10 minutes and stopping them after 5 minutes (exception-topic actors).

Analyzing with a profiler (YourKit), I see that the actors are not being garbage collected and still hold strong reference paths to a GC root, so the heap size keeps growing.

After creating a new actor, my code schedules the following messages:

```scala
override def receive: Receive = {
  case Start =>
    log.info(s"$actorName started")

    val (killSwitch, result) = consumer
      .map(message => message.value())
      .groupedWithin(20, 5.seconds)
      // ... (rest of the stream and its materialization elided)

    consumerBase = result
    consumerKillSwitch = killSwitch

  case Stop =>
    log.info(s"$actorName stopped")

    self ! Kill
}
```

What are the best practices to kill/terminate actors so that they can be collected by GC?

Versions used:
akka-actor = "2.6.13"
akka-stream = "2.6.13"
alpakka-kafka = "2.0.7"


Hi @hakangs. By default there is a delay when shutting down the Kafka consumer. The delay gives the stream time to commit any offsets still in flight, because the consumer has to perform the commit. The default timeout is 30 seconds, so that may be what shows up in your profiler as a leak. If you use the DrainingControl to shut down your stream (instead of the kill switch), it is safe to set this timeout to 0; you can also set it to 0 if you are not concerned about losing some commits at shutdown.
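A minimal sketch of what wiring a committable source to a `DrainingControl` with `stop-timeout` set to 0 could look like (the broker address, group id, and topic name are placeholders, not from the original post):

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Keep
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher

// stop-timeout can safely be 0 because DrainingControl drains the stream
// (and lets in-flight commits finish) before the consumer is closed
val settings =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092") // placeholder broker
    .withGroupId("example-group")           // placeholder group id
    .withStopTimeout(Duration.Zero)

val control: DrainingControl[_] =
  Consumer
    .committableSource(settings, Subscriptions.topics("example-topic")) // placeholder topic
    .mapAsync(1) { msg =>
      // process msg.record.value() here, then pass the offset on for committing
      Future.successful(msg.committableOffset)
    }
    .toMat(Committer.sink(CommitterSettings(system)))(Keep.both)
    .mapMaterializedValue(DrainingControl.apply)
    .run()

// at shutdown: control.drainAndShutdown() returns a Future that completes
// once the stream has drained and the consumer is closed
```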

Hope that helps!

thanks @seglo , here is the latest version and we are not creating new actors (using existing actors, with restarting them) , what do you think about it :slight_smile:

Are you setting the stop-timeout to 0 when you create the Alpakka Kafka consumer source?
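If you prefer configuration over code, the same timeout can be set in `application.conf` (this assumes the default `akka.kafka.consumer` config path):

```
akka.kafka.consumer {
  # how long to wait for pending commits before the consumer is closed;
  # safe to set to 0 when shutting down via DrainingControl
  stop-timeout = 0s
}
```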

Ideally you wouldn’t stop the actor until draining is complete. drainAndShutdown returns a Future. I would recommend using the pipeTo pattern to send a message to self (or some other caller) when the draining is complete.
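A sketch of that pipeTo pattern; the `Stop` and `DrainingDone` messages and the actor name are hypothetical, for illustration only:

```scala
import akka.actor.{Actor, ActorLogging}
import akka.kafka.scaladsl.Consumer.DrainingControl
import akka.pattern.pipe

// hypothetical messages for illustration
case object Stop
case object DrainingDone

class TopicConsumerActor(control: DrainingControl[_]) extends Actor with ActorLogging {
  import context.dispatcher // ExecutionContext for drainAndShutdown and the mapped Future

  override def receive: Receive = {
    case Stop =>
      // complete the source, let in-flight commits finish, close the consumer,
      // then notify self once everything is done
      control.drainAndShutdown().map(_ => DrainingDone).pipeTo(self)

    case DrainingDone =>
      log.info("stream drained, stopping actor")
      context.stop(self) // only now is it safe to stop the actor
  }
}
```

Stopping the actor only on `DrainingDone` ensures no stream resources outlive the actor, which is what you want for the references to become collectible.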