Design Principles behind Akka Streams

At the Design Principles behind Akka Streams page I can read:

From this follows that the principles implemented by Akka Streams are:
* all features are explicit in the API, no magic
* supreme compositionality: combined pieces retain the function of each part

Then I see:

The most common building blocks are
* Source: something with exactly one output stream
* Sink: something with exactly one input stream
* Flow: something with exactly one input and one output stream

I can expect, according to the design principles, that a developer is explicitly able to:

  • take a base class for a building block, extend it if necessary, and instantiate it
  • take base classes for input and/or output streams, extend them if necessary, and instantiate them
  • add those inputs and outputs to the instance of the building block, in any number
  • define a handling method which takes elements from the inputs, creates output elements, and puts them into the outputs.

In fact, I could not see in the documentation how to do any of those steps. I only see a lot of magic in the form of method call chains, which create some internal structure; then that internal structure is processed with a magic tool called a materializer and produces… God only knows what it produces and how it is mapped to the chain of method calls.
I can accept that you address only smart developers who can master your magic tools and create the desired graph of objects without the new and extends keywords, but please do not claim that "all features are explicit in the API, no magic". Say honestly that ordinary developers can pass by if they do not want to waste their time.

Hi Alexei,
Hmmm, I think we’re having a mismatch between expectations shaped by “being used to something” and the wording here, perhaps. As with all feedback, I’m sure there is some underlying core concern in this post that we could address; however, from this writeup I am having trouble finding out what it would be.

Defining the term “magic”: the term “magic” is used here in the sense of “there is no reflection, or other behind-the-scenes mechanism, used to inject behavior or other things from the side” (think registering handlers via annotations, or AOP-like things). In Akka Streams all such things are expressed by composing stages, using the various via / to methods etc. In that sense, I do believe we hold up to that definition of “no magic”.

The second part of the argument seems to focus on the de-emphasis of “extend things” in favor of “compose things”. I would not say that “only smart developers” can deal with this - after all, developers combine values all day long, and extending is actually the more complex case, since there will be many methods to implement and interactions to worry about…


  • take a base class for a building block, extend it if necessary, and instantiate it

Taking this request as an example, I’m not sure making users extend things would be simpler than: .map(el -> transform(el)).

If you want, you can of course extend things and get the more powerful semantics - read about GraphStages then. Do note that Akka Streams protects you from all the complexity that Reactive Streams has: you do not need to guard against most of the data races and other hazards that a “just extend a Publisher” implementation would have to handle (see the 41 rather difficult and concurrency-heavy rules of the Reactive Streams specification).
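To make concrete what “just extend a Publisher” entails, here is an illustrative sketch (not Akka code) using the JDK’s own copy of the Reactive Streams interfaces, java.util.concurrent.Flow, available since Java 9. Even this trivial consumer must manage its Subscription and signal demand by hand - exactly the bookkeeping the specification’s rules govern, and which .map(el -> transform(el)) hides from you:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class ManualSubscriber {
    // Consume all elements from a publisher by hand: manage the
    // Subscription, request demand one element at a time, and wait
    // for the completion signal.
    static List<Integer> consumeAll(List<Integer> input) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription sub;
                public void onSubscribe(Flow.Subscription s) {
                    sub = s;
                    s.request(1);           // signal initial demand (back pressure)
                }
                public void onNext(Integer item) {
                    received.add(item);
                    sub.request(1);         // ask for the next element
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()         { done.countDown(); }
            });
            input.forEach(pub::submit);
        }                                   // close() signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeAll(List.of(1, 2, 3)));  // prints [1, 2, 3]
    }
}
```

And this subscriber is the easy half; a hand-written Publisher additionally has to serialize its signals and honor every demand rule itself, which is what the composed stages spare you from.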

I would welcome feedback that could be more specific, and less dismissive of the design, which is there for a reason – to protect users from concurrency issues.

I wonder if you would enjoy the ability to express many of the stages as a simplified GraphStage that would work only as a Flow, basically reducing the current abstraction to the specific 1-input, 1-output case? We’ve been avoiding it in the past in order not to duplicate too many APIs, but it could perhaps be a nice addition, if you indeed think the ‘extends’ part is important for newcomers?

– Konrad

Yes, there is such an underlying core thing. This thing is the deep similarity/analogy/parallelism (not sure which English word fits best here) between synchronous and asynchronous programming. This similarity stems from the observation that any concurrent execution graph can be represented as a Petri net. Given a concrete Petri net schema, we can implement it using either threads or asynchronous tasks. The immediate consequence is that each construct in one world has its counterpart in the other. For example, a blocking queue in the synchronous world corresponds to a reactive stream (with back pressure) in the async world (and vice versa).
Then, how does a programmer develop parallel programs in the sync world? He is given the main building blocks - Threads and Runnables - and a bunch of ready-made communication facilities - Semaphore, CountDownLatch, blocking queues, CompletableFuture, etc. He can extend each of these constructs and combine them arbitrarily with the others. And he is carefully taught how to build custom communication facilities from scratch using the synchronized/wait/notify toolkit. Naturally, a programmer wants to work the same way in the async world, too. Moreover, he wants to know how to reimplement a part of his project from sync to async style and back, without losing integrity.
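To illustrate the “from scratch” toolkit mentioned above: here is a minimal bounded blocking queue built only with synchronized/wait/notifyAll - the sync-world counterpart of a back-pressured stream. A sketch for the argument, not production code (java.util.concurrent.ArrayBlockingQueue is the ready-made equivalent):

```java
import java.util.ArrayDeque;

public class BoundedQueue<T> {
    private final ArrayDeque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedQueue(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) wait();  // producer blocks: back pressure
        items.addLast(item);
        notifyAll();                              // wake any waiting consumer
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) wait();           // consumer blocks until data arrives
        T item = items.removeFirst();
        notifyAll();                              // wake any waiting producer
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedQueue<Integer> q = new BoundedQueue<>(2);
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) q.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        for (int i = 0; i < 5; i++) System.out.println(q.take());  // 0 1 2 3 4, in order
        producer.join();
    }
}
```

The producer is throttled to a buffer of two elements, yet the consumer sees every element in order - back pressure expressed with nothing but the monitor toolkit.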

The approach of reactive stream implementors is radically different. Instead of providing the programmer with toolkits and the freedom to extend and combine, they (including you) worry about how “to protect users from concurrency issues” - which in fact means depriving the programmer of the ability to understand what happens under the hood. I suspect this is because the implementors themselves do not understand it either. How many rules does the blocking queue construct have? Exactly three: a) the consumer is blocked until the producer puts an element into the queue, b) the producer is blocked until the consumer makes room for new messages, and c) messages preserve their order. Since a reactive stream is the counterpart of a blocking queue, it must likewise have a small number of rules. 41 rules (as you said) are beyond comment.
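For the record, the three rules can be observed directly on the JDK’s ready-made ArrayBlockingQueue; the non-blocking offer probe is used below so the demonstration of rule b) does not actually hang:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueRules {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(2);
        q.put(1);
        q.put(2);
        // Rule b): with the queue full, a blocking put() would wait;
        // the non-blocking probe makes this visible without blocking.
        System.out.println(q.offer(3));   // false: no room, producer is throttled
        // Rule a) is the mirror image: take() on an empty queue would block;
        // here there is data, so it returns immediately.
        // Rule c): elements come out in the order they went in.
        System.out.println(q.take());     // 1
        System.out.println(q.take());     // 2
    }
}
```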

Lol, I love these kinds of topics :smiley:

If you buy a Mercedes, most of the time you don’t want to fix it yourself. I think knowing how a blocking queue works is not the same as knowing how this much more complex “reactive world” works. You are not building a simple blocking queue with 3 rules here. You have graphs and graph components. If you have one input and one output, maybe those 3 rules are enough; but if you have N inputs and M outputs, will they still hold?

I agree with you that the documentation is not so good for newcomers. The problem with it is that if you have no idea what the terms/definitions you are reading mean, you will have no idea how it works. And once you know how it works, you just can’t see where it is bad, because you already understand the terms.

From your original post:

You can do that.
BTW, I think this is one of the cleanest parts of the whole documentation. You get state graphs and explicit rules for how you can construct your custom stage. You don’t need to see all the implementation behind it. If you do want to, Scala and Java are not your languages - try out C++ or Go.

If you think this lib is messy and “for smart devs only”, I recommend you look at some Rx implementations.

Yes, this is exactly what I mean.
Imagine you develop an interpreter for Petri nets. Will the complexity of implementing a transition depend on the number of input and output places? You just handle them in a loop, no matter how many there are.

I insist that an execution graph is a kind of Petri net, and so the interactions between its components are simple and uniform. There is no place for 41 rules.
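That claim can be sketched in a few lines of Java (a toy token-marking interpreter with hypothetical names, not a real Petri net library): the firing code is identical whether a transition has one input place or ten, because inputs and outputs are just handled in loops.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Transition {
    // tokens: place name -> token count (the net's current marking)
    static boolean fire(Map<String, Integer> tokens,
                        List<String> inputs, List<String> outputs) {
        // enabled only if every input place holds at least one token
        for (String p : inputs)
            if (tokens.getOrDefault(p, 0) == 0) return false;
        // consume one token from each input place
        for (String p : inputs) tokens.merge(p, -1, Integer::sum);
        // produce one token in each output place
        for (String p : outputs) tokens.merge(p, 1, Integer::sum);
        return true;
    }

    public static void main(String[] args) {
        Map<String, Integer> tokens = new HashMap<>(Map.of("a", 1, "b", 1));
        // fires: both inputs have tokens -> marking becomes a=0, b=0, c=1
        System.out.println(fire(tokens, List.of("a", "b"), List.of("c")));  // true
        // does not fire: "a" is now empty
        System.out.println(fire(tokens, List.of("a"), List.of("d")));       // false
    }
}
```

Whether this uniformity survives once stages run concurrently and failures must propagate is, of course, exactly what the thread is arguing about.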