We are using the akka-stream-kafka library to write Kafka subscribers, and we have a low-latency requirement, i.e. below 10 ms.
Our Kafka consumer subscribe API looks like this:
```scala
private def subscribe(subscription: Subscription): Source[Event, scaladsl.Consumer.Control] =
  scaladsl.Consumer
    .plainSource(consumerSettings, subscription)
    .map(record ⇒ Event.fromPb(PbEvent.parseFrom(record.value())))
```
We are subscribing to around 200 topics from a single JVM, and producers publish events to these topics at varying rates, e.g. from 1 msg/sec to 100 msgs/sec.
Looking at the akka-stream-kafka code, I can see that for every subscription it creates a KafkaConsumerActor, which keeps polling its KafkaConsumer at the configured poll interval. In our case this means we are creating 200 streams, which in turn create 200 actors, each continuously polling a KafkaConsumer.
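For reference, the poll interval mentioned above comes from Alpakka Kafka's consumer settings. Assuming the library defaults (which is what we use), the relevant part of `reference.conf` looks roughly like this, so each of the 200 actors schedules a poll every 50 ms:

```
akka.kafka.consumer {
  poll-interval = 50ms   # how often each KafkaConsumerActor schedules a poll
  poll-timeout = 50ms    # timeout passed to KafkaConsumer.poll()
}
```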
Does this add overhead and cost us latency? We are seeing a 99th-percentile latency of around 200 ms.
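As a rough back-of-envelope check (a sketch only; the 50 ms figure assumes the default `akka.kafka.consumer.poll-interval`), 200 independently polling actors generate a substantial number of poll cycles per second:

```scala
object PollMath extends App {
  val streams        = 200 // one KafkaConsumerActor per subscription
  val pollIntervalMs = 50  // assumed default akka.kafka.consumer.poll-interval

  // Each actor schedules a poll every pollIntervalMs, so across all actors:
  val pollsPerSecond = streams * (1000 / pollIntervalMs)
  println(s"$pollsPerSecond polls/sec across all consumer actors") // 4000 polls/sec
}
```

Even before any broker round-trip time, a 50 ms poll interval alone already exceeds our 10 ms latency budget for a message that arrives just after a poll completes.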