DistributedPubSub Send should first replicate registered actors across the cluster

akka-cluster
akka
(Vikas Saran) #1

I observed a problem with DistributedPubSub peer-to-peer Send messages.

Let me explain my scenario.

I created an actor (A1) on Node1 and registered it with the local mediator. Within milliseconds I sent a message to A1 from Node2, but the local mediator on Node1 had not yet replicated its registry across the cluster, so the local mediator on Node2 could not find actor A1.

So I have a few questions:

How long does it take to replicate the registry across the cluster?

Is there any way to sync the registry before sending a message to actor A1?

Is there any mechanism to retry after some time if no matching actor is found?

Thanks Akka Team.

(Johan Andrén) #2

The subscriptions are eventually consistent and gossiped between nodes, by default every 1s, but this can be tuned with the akka.cluster.pub-sub.gossip-interval setting. Every gossip interval a node is chosen randomly and gossip is sent to it; when the next gossip interval has passed, both nodes do the same, and so on, so dissemination time depends on the size of the cluster.
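For reference, a minimal sketch of what that tuning looks like in `application.conf` (the value shown is only an example; 1s is the default):

```hocon
akka.cluster.pub-sub {
  # How often the mediator gossips its registry to a randomly chosen peer.
  # Default is 1s; lowering it speeds up dissemination at the cost of more
  # gossip traffic.
  gossip-interval = 500ms
}
```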

Tuning it down to milliseconds does not sound practical, though; that would be a lot of gossip traffic.

PubSub gives no delivery guarantees, so if you want to make sure that at least one subscriber saw the published message you will have to introduce some form of ack from the subscriber to the publisher (which comes with its own set of problems when there are no subscribers, or a huge number of them).
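The ack-and-retry idea can be sketched outside of Akka. The following is a generic illustration only; every name in it (`AckingPublisher`, `publishWithRetry`, and so on) is made up for this sketch and is not part of the Akka API. The publisher keeps re-sending until an ack check succeeds or the attempts run out:

```java
import java.util.concurrent.*;

// Generic sketch of publish-until-acked: re-run `send` on a timer until
// `ackReceived` reports true, giving up after maxAttempts. All names here
// are illustrative; nothing in this class is Akka API.
class AckingPublisher {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Completes with true once an ack is observed, false if attempts run out.
    CompletableFuture<Boolean> publishWithRetry(Runnable send,
                                                Callable<Boolean> ackReceived,
                                                int maxAttempts,
                                                long retryMillis) {
        CompletableFuture<Boolean> done = new CompletableFuture<>();
        attempt(send, ackReceived, maxAttempts, retryMillis, done);
        return done;
    }

    private void attempt(Runnable send, Callable<Boolean> ackReceived,
                         int attemptsLeft, long retryMillis,
                         CompletableFuture<Boolean> done) {
        if (attemptsLeft == 0) {
            done.complete(false); // gave up without an ack
            return;
        }
        send.run(); // (re-)publish the message
        scheduler.schedule(() -> {
            try {
                if (ackReceived.call()) done.complete(true);
                else attempt(send, ackReceived, attemptsLeft - 1, retryMillis, done);
            } catch (Exception e) {
                done.completeExceptionally(e);
            }
        }, retryMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() { scheduler.shutdown(); }
}
```

In a real system the ack would be a reply message from the subscriber rather than a polled flag, but the retry-until-acked control flow is the same.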

If those are your requirements then it may be better to go for some other solution (or write your own pub-sub which fulfills them).

Note though that even if you were to implement some form of sync, you could still have an actor subscribe just after the sync but before the message is published. Such is the world of distributed systems.

(Vikas Saran) #3

In my scenario, a subscriber will receive only one message and then kill itself. I am creating an actor (the subscriber) for every HTTP request; this actor then receives a response message from a remote actor in the cluster. After receiving the message it returns the HTTP response and kills itself.

So is there any way to gossip to all cluster nodes as soon as a new subscriber (actor) is created?

Thanks, Akka team, for the quick response.

(Patrik Nordwall) #4

It doesn’t sound like Distributed PubSub is the right tool for this task if the subscriber is only used for one single message. Could you explain what you would like to achieve, without describing a solution, and we can see if we have some guidance on what to use.

(Vikas Saran) #5

@patriknw

OK, let me explain my use case.

We are integrating with a third-party client that has an asynchronous architecture. When we invoke any of the client’s REST APIs, it only returns an acknowledgement; for the actual response, the client later invokes our API. But our frontend expects the response in the same HTTP call.

So we designed the following solution.

Suppose we get request R1 from the frontend on node N1. We create an actor (A1) for request R1, register it with the local mediator, and invoke the third-party API; the actor then waits for the response.

Suppose the client invokes our API with the response and it arrives on node N2. We then send the response message to actor A1 from node N2 via Distributed PubSub, and node N1 returns the response to the frontend.

Please suggest a solution to this problem.

Thanks, Akka team, for the quick response.

(Vikas Saran) #6

@patriknw Could you please give an update on this?

(Patrik Nordwall) #7

Ok, then I understand better what you would like to do.
I guess you have some kind of correlation id from R1/A1 that is passed along with the request to the third-party API and then back in the response that may arrive at N2. Let’s call this a request-id. Do you use that as the topic id, i.e. create a new topic for each request-id?

Topics should be long lived, because there is quite a lot of overhead in replicating and managing them, and as you have noticed it takes a while for other nodes to see that they exist after they have been created.

Unless you have very many nodes in the cluster a simple solution would be to broadcast the response to all nodes and filter on the destination side (in A1).

With Distributed PubSub you can use one single topic and one shared subscriber actor for that topic on each node. That subscriber actor will then have to delegate to interested local A1 actors.

Even simpler would be to use a cluster-aware group router and send messages with the akka.routing.Broadcast wrapper. With this too, you will have one actor on each node with a fixed (known) actor path that other nodes can use as the broadcast destination. That actor will then have to delegate to interested local A1 actors.
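The "delegate to interested local A1 actors" part boils down to a per-node registry keyed by request-id. The sketch below illustrates that idea in plain Java; the class and method names (`LocalResponseDelegate`, `register`, `deliver`) are invented for illustration and are not Akka API — in Akka the handlers would be actor refs and `deliver` would be the broadcast receiver's message handler:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Per-node delegate sketch: the single broadcast receiver on each node keeps
// a registry of waiting local request handlers keyed by request-id, and
// ignores broadcast responses whose request-id is registered on another node.
class LocalResponseDelegate {
    private final Map<String, Consumer<String>> waiting = new ConcurrentHashMap<>();

    // A1 registers itself (here modeled as a callback) under its request-id
    // before the third-party call is made.
    void register(String requestId, Consumer<String> handler) {
        waiting.put(requestId, handler);
    }

    // Called for every broadcast response; returns true only on the node
    // that holds the matching request-id. Each id is consumed at most once.
    boolean deliver(String requestId, String response) {
        Consumer<String> handler = waiting.remove(requestId);
        if (handler == null) return false; // response belongs to another node
        handler.accept(response);
        return true;
    }
}
```

Since every node receives every broadcast but at most one node finds a match, the filtering cost grows with cluster size, which is why Patrik qualifies this with "unless you have very many nodes".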

An alternative design would be to use Cluster Sharding, with one entity actor for each request. Those can be short-lived without problems. The entity actor corresponding to A1 would then have to know how to respond to the frontend even though it might not be running on N1. I don’t know if that could be a problem; it depends on what you use for the frontend communication from A1.

If you would like to stick to your original design anyway, although I wouldn’t recommend it, it is possible to ask Distributed PubSub for known topics by sending DistributedPubSubMediator.GetTopics to the mediator, which will reply with DistributedPubSubMediator.CurrentTopics. You would then have to retry that on N2 until the topic for the request shows up. Very inefficient, but your choice.

(Vikas Saran) #8

Thanks @patriknw. Yes, we have implemented exactly that: we create a fixed-name actor (the subscriber) on each node and broadcast the response; the subscriber then forwards it to the local actor A1/R1.

We just pass the message from the subscriber to A1/R1; there is no processing in the subscriber.

But I am curious: is there any chance of the subscriber’s mailbox overflowing?

Can we create a set of actors instead of a single actor, so we can distribute messages among them?