Alpakka Kafka and Manual Offset Management

Hi everybody,

I am trying to understand Alpakka Kafka's manual offset management, but I have some problems understanding the concepts…

When I read the existing API, I have the feeling that the manual offset methods in Source.scala are mainly designed for external offset management, not for letting the business logic decide when to commit an offset under Kafka-managed offset management.

In many projects (without Akka and Alpakka Kafka) I have used Kafka's manual offset management to good advantage in long-running processes: when the business logic signals success, the Kafka offset is committed, marking the message as successfully processed.
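To make the pattern concrete, here is a hypothetical sketch with the plain Kafka client (not my actual code): auto-commit is disabled, and `commitSync` is called only after the business logic reports success. The `process` hook and all names are illustrative.

```scala
import java.time.Duration
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.{Consumer, OffsetAndMetadata}
import org.apache.kafka.common.TopicPartition

// Kafka commits the *next* offset to read, so acknowledging a processed
// record at offset n means committing n + 1.
def nextOffset(processedOffset: Long): Long = processedOffset + 1

// Poll-loop sketch; requires enable.auto.commit=false on the consumer.
// `process` is a hypothetical business-logic hook returning true on success.
def pollLoop(consumer: Consumer[String, String], process: String => Boolean): Unit =
  while (true) {
    consumer.poll(Duration.ofMillis(500)).asScala.foreach { rec =>
      if (process(rec.value())) {
        // business logic succeeded: mark the record as processed
        consumer.commitSync(
          Map(new TopicPartition(rec.topic(), rec.partition()) ->
            new OffsetAndMetadata(nextOffset(rec.offset()))).asJava)
      }
      // on failure: no commit, so the record is re-read after a restart/rebalance
    }
  }
```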

I could implement the same kind of logic with an Akka/Kafka combination without Alpakka (writing a Kafka consumer, sending the message to Akka with an ask, delivering the offset as payload and returning the offset in the response payload when the business logic succeeds), but my main motivation for using Alpakka is to take advantage of its backpressure mechanisms.
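The ask-based variant could carry the offset in a small message protocol like this (hypothetical names, just to make the idea concrete):

```scala
// Hypothetical ask protocol: the offset travels with the request and comes
// back in the response, so the consumer knows exactly what to commit.
final case class Update(payload: String, offset: Long)

sealed trait UpdateResponse { def offset: Long }
final case class ProcessComplete(offset: Long) extends UpdateResponse
final case class ProcessFailed(offset: Long, reason: String) extends UpdateResponse

// The consumer side commits only when the business logic answered with success.
def shouldCommit(response: UpdateResponse): Boolean = response match {
  case _: ProcessComplete => true
  case _                  => false
}
```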

But when I look at the methods in Source.scala, 'plainPartitionedManualOffsetSource' and 'committablePartitionedManualOffsetSource', they give me the impression that they exist for external offset management, not really for committing the offset depending on the result of the business case.
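For reference, the signatures as I read them (paraphrased from Alpakka Kafka 2.x; defaults and details may differ between versions):

```scala
// paraphrased, not the exact source
def plainPartitionedManualOffsetSource[K, V](
    settings: ConsumerSettings[K, V],
    subscription: AutoSubscription,
    getOffsetsOnAssign: Set[TopicPartition] => Future[Map[TopicPartition, Long]],
    onRevoke: Set[TopicPartition] => Unit
): Source[(TopicPartition, Source[ConsumerRecord[K, V], NotUsed]), Control]

def committablePartitionedManualOffsetSource[K, V](
    settings: ConsumerSettings[K, V],
    subscription: AutoSubscription,
    getOffsetsOnAssign: Set[TopicPartition] => Future[Map[TopicPartition, Long]],
    onRevoke: Set[TopicPartition] => Unit
): Source[(TopicPartition, Source[CommittableMessage[K, V], NotUsed]), Control]
```

Both take a `getOffsetsOnAssign` callback for looking up start offsets and an `onRevoke` callback for partition revocation, which is what suggests an external-offset-store use case to me.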

To be more concrete, this is an Alpakka stream configuration that works for me at the moment:

   val control: Consumer.DrainingControl[Done] =
     Consumer
       .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("myTopic"))
       .mapAsync(streamConfigProperties.getAkkaStreamParallelism) { consumerRecord =>
         val myAvro: MyAvro = consumerRecord.value().asInstanceOf[MyAvro]
         // ... business logic ...
       }
       // assuming a CommitterSettings instance is in scope
       .toMat(Committer.sinkWithOffsetContext(committerSettings))(Consumer.DrainingControl.apply)
       .run()
This works, but as I mentioned, I am trying to convert it to:

   val control: Consumer.DrainingControl[Done] =
     Consumer
       .committablePartitionedManualOffsetSource(
         consumerSettings,
         Subscriptions.topics("myTopic"),
         partitions => getOffsetsOnAssign(partitions, consumerSettings),
         partitions => Set[TopicPartition]() // onRevoke: currently a no-op
       )
       .map { source =>
         source._2.mapAsyncUnordered(streamConfigProperties.getAkkaStreamParallelism) { message =>
           val myAvro: MyAvro = message.record.value().asInstanceOf[MyAvro]
           askUpdate(myAvro, message.committableOffset)
             .map {
               case i1: MyActor.ProcessCompleteResponse =>
                 // business logic succeeded: commit the offset
                 message.committableOffset.commitScaladsl()
               case unh =>
                 throw new IllegalStateException(
                   "Business Case says we can't commit")
             }
         }
       }
       // ... running the per-partition sources and materialization elided ...
   def getOffsetsOnAssign(
       partitions: Set[TopicPartition],
       consumerSettings: ConsumerSettings[String, SpecificRecord]): Future[Map[TopicPartition, Long]] =
     Future {
       val kafkaConsumer: org.apache.kafka.clients.consumer.Consumer[String, SpecificRecord] =
         consumerSettings.createKafkaConsumer()
       try {
         // ask the broker which offsets this group has already committed
         val mapOffsets: util.Map[TopicPartition, OffsetAndMetadata] =
           kafkaConsumer.committed(partitions.asJava)

         var finalMap: Map[TopicPartition, Long] = Map[TopicPartition, Long]()
         mapOffsets.forEach { (key, value) =>
           if (value != null) {
             finalMap += (key -> value.offset())
           } else {
             // no committed offset yet: start from the beginning
             finalMap += (key -> 0L)
           }
         }
         finalMap
       } finally kafkaConsumer.close()
     }

According to my tests this works too, but I am not sure it is the correct way to do it, and perhaps more compact code could be written for it.
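For instance, a more compact variant of the lookup might look like this. This is only a sketch, assuming `ConsumerSettings.createKafkaConsumer()` and the Kafka `Consumer.committed(Set)` call behave as in the snippet above, with an `ExecutionContext` in scope:

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.jdk.CollectionConverters._
import org.apache.avro.specific.SpecificRecord
import org.apache.kafka.common.TopicPartition
import akka.kafka.ConsumerSettings

// Pure core: a missing committed offset means "start from 0".
def startingOffset(committed: Option[Long]): Long = committed.getOrElse(0L)

// Hypothetical compact rewrite of getOffsetsOnAssign using Scala converters
// instead of a mutable map.
def getOffsetsOnAssignCompact(
    partitions: Set[TopicPartition],
    consumerSettings: ConsumerSettings[String, SpecificRecord]
)(implicit ec: ExecutionContext): Future[Map[TopicPartition, Long]] =
  Future {
    val consumer = consumerSettings.createKafkaConsumer()
    try consumer
      .committed(partitions.asJava)
      .asScala
      .toMap
      .map { case (tp, meta) => tp -> startingOffset(Option(meta).map(_.offset())) }
    finally consumer.close() // always release the throwaway consumer
  }
```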

And actually, I am not sure what is expected from us when 'committablePartitionedManualOffsetSource''s 'onRevoke' occurs.

Any comments or suggestions?