Kafka in place of Akka Streams

Akka Streams is mainly used for backpressure: a faster publisher should not overload a slower subscriber.

Suppose I am receiving a stream of data and I want to publish it to another system for processing. In this case, I can use backpressure at the consumer end so that the consumer only receives as much data as it can process.
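To make the question concrete, here is a minimal sketch of the backpressure idea in plain Java (not Akka itself; `BackpressureSketch` and its buffer size are illustrative choices): a bounded buffer sits between producer and consumer, and `put()` blocks once the buffer is full, slowing the fast producer down to the slow consumer's pace.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Backpressure via a bounded buffer: the producer cannot run ahead
// of the consumer by more than the buffer's capacity.
public class BackpressureSketch {
    static final int TOTAL = 100;

    public static int run() throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10); // bounded = backpressure
        int[] processed = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < TOTAL; i++) buffer.put(i); // blocks when the buffer is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < TOTAL; i++) {
                    buffer.take();   // consumer pulls at its own pace
                    Thread.sleep(1); // simulate slow processing
                    processed[0]++;
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return processed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // all 100 elements arrive, just not faster than the consumer
    }
}
```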

What if I instead publish the data to Kafka and consume the messages one by one? Since the Kafka consumer is a pull model, the end system will not get overloaded, right?
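The pull model I mean can be sketched without any Kafka dependency (a rough analogy, not the real `KafkaConsumer` API; `PullModelSketch` is a made-up name): records accumulate in a log regardless of consumer speed, and the consumer advances its own offset, fetching a batch only when it is ready.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java analogy of Kafka's pull model: producers append to a log,
// and the consumer fetches bounded batches at a time of its choosing,
// so it can never receive more data than it asked for.
public class PullModelSketch {
    private final List<String> log = new ArrayList<>(); // stands in for a partition
    private int offset = 0;                             // consumer-owned position

    public void publish(String record) {
        log.add(record); // producers append regardless of consumer speed
    }

    public List<String> poll(int maxRecords) {
        int end = Math.min(offset + maxRecords, log.size());
        List<String> batch = new ArrayList<>(log.subList(offset, end));
        offset = end;
        return batch;
    }

    public static void main(String[] args) {
        PullModelSketch topic = new PullModelSketch();
        for (int i = 0; i < 10; i++) topic.publish("msg-" + i);
        System.out.println(topic.poll(3)); // first batch of at most 3 records
        System.out.println(topic.poll(3)); // next batch, from where we left off
    }
}
```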

In what way does Akka Streams differ from the flow above?

@hemnath

Akka Streams backpressure is a pull model under the hood, because it uses a combination of pull and push signals to make backpressure happen. Using Kafka to handle the overflow of data can solve your problem, but you will need to pay attention to the data lifetime in Kafka to make sure you are not losing data. Using only Akka Streams can make your processing flow more reactive in that sense. If your consumer can't handle a lot of messages, you can start many of them, or adjust the consumer configuration to make them do their job faster.
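That "pull under the hood" handshake is the Reactive Streams protocol Akka Streams implements: the subscriber signals demand with `request(n)`, and the publisher may only push that many elements. The JDK's `java.util.concurrent.Flow` API follows the same protocol, so a dependency-free sketch of the idea (names like `DemandSketch` are my own, and this is not Akka's actual internals):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Demand-driven push: the subscriber pulls by calling request(1),
// processes the element, then pulls the next. The publisher's
// submit() blocks when demand lags, which is the backpressure.
public class DemandSketch {
    public static int consume(int total) throws InterruptedException {
        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        CountDownLatch done = new CountDownLatch(1);
        int[] received = {0};

        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // pull: ask for exactly one element
            }
            @Override public void onNext(Integer item) {
                received[0]++;           // process at our own pace...
                subscription.request(1); // ...then pull the next one
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        for (int i = 0; i < total; i++) publisher.submit(i); // blocks if no demand
        publisher.close();
        done.await();
        return received[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consume(100)); // every element is delivered, demand-paced
    }
}
```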

pay attention to the data lifetime in Kafka to make sure you are not losing data

Are you referring to the topic retention period?

@hemnath
Well, in your approach you want to use Kafka to hold the incoming data. Supposing you have many producers and one slow consumer, your data can stay in Kafka for a long time before it is consumed, so I think you must pay attention to this:
“The Kafka cluster durably persists all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka’s performance is effectively constant with respect to data size so storing data for a long time is not a problem.”

You can find this at https://kafka.apache.org/documentation/
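For reference, retention is configurable both as a broker-wide default and per topic (the 48-hour values below are just an example matching the two-day scenario from the quote):

```properties
# Broker-wide default, in server.properties: keep records for 48 hours
log.retention.hours=48

# Per-topic override (topic-level config, in milliseconds): 48 h = 172800000 ms
retention.ms=172800000
```

The per-topic value can be changed at runtime with the `kafka-configs.sh` tool; as long as the retention window comfortably exceeds your worst-case consumer lag, the slow consumer will not lose data.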