After the Akka Cluster goes from unavailable back to available, member nodes cannot connect to the seed node again

My example is modified from the official akka-sample-cluster-java sample.
The configuration:

akka {
  loglevel = debug
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@127.0.0.1:25260",
      "akka://ClusterSystem@127.0.0.1:25261"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}
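
The canonical port is 0 in the config and is overridden by the port passed on the command line; roughly like this (a minimal sketch along the lines of the official sample; RootBehavior here is just a placeholder for my actual root behavior):

import java.util.Collections;

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

import akka.actor.typed.ActorSystem;

public class DcServer {
  public static void main(String[] args) {
    // e.g. "25260" from -Dexec.args; 0 means a random port
    int port = args.length > 0 ? Integer.parseInt(args[0]) : 0;

    // Override the Artery canonical port so this node listens on the seed-node port
    Config config = ConfigFactory
        .parseMap(Collections.singletonMap("akka.remote.artery.canonical.port", port))
        .withFallback(ConfigFactory.load());

    // RootBehavior.create() is a placeholder for the sample's root behavior
    ActorSystem.create(RootBehavior.create(), "ClusterSystem", config);
  }
}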

Server (node-seed):

mvn exec:java -Dexec.mainClass="sample.cluster.stats.DcServer" -Dexec.args="25260"
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ akka-sample-cluster-java ---
SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay
[2021-02-03 10:49:17,518] [INFO] [akka.event.slf4j.Slf4jLogger] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Slf4jLogger started
[2021-02-03 10:49:17,731] [INFO] [akka.remote.artery.tcp.ArteryTcpTransport] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Remoting started with transport [Artery tcp]; listening on address [akka://ClusterSystem@127.0.0.1:25260] with UID [-8821946585502473497]
[2021-02-03 10:49:17,745] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - Starting up, Akka version [2.6.10] ...
[2021-02-03 10:49:17,825] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - Registered cluster JMX MBean [akka:type=Cluster]
[2021-02-03 10:49:17,825] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - Started up successfully
[2021-02-03 10:49:17,853] [INFO] [akka.cluster.sbr.SplitBrainResolver] [] [ClusterSystem-akka.actor.default-dispatcher-5] - SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://ClusterSystem@127.0.0.1:25260#-8821946585502473497], selfDc [default].
[2021-02-03 10:49:18,238] [WARN] [akka.stream.Materializer] [] [ClusterSystem-akka.actor.default-dispatcher-18] - [outbound connection to [akka://ClusterSystem@127.0.0.1:25261], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(127.0.0.1/<unresolved>:25261,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
[2021-02-03 10:49:18,238] [WARN] [akka.stream.Materializer] [] [ClusterSystem-akka.actor.default-dispatcher-18] - [outbound connection to [akka://ClusterSystem@127.0.0.1:25261], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(127.0.0.1/<unresolved>:25261,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
[2021-02-03 10:49:20,209] [INFO] [akka.actor.ActorCell] [] [ClusterSystem-akka.actor.default-dispatcher-18] - Sending process request - DcIpccTaskHandle
[2021-02-03 10:49:20,213] [INFO] [akka.actor.LocalActorRef] [akkaDeadLetter] [ClusterSystem-akka.actor.default-dispatcher-3] - Message [sample.cluster.stats.dc.TaskMessage] to Actor[akka://ClusterSystem/user/DcIpccTaskHandle#-904996812] was dropped. No routees in group router for [ServiceKey[sample.cluster.stats.dc.TaskMessage](DcIpccTaskHandle)]. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[2021-02-03 10:49:22,217] [INFO] [akka.actor.ActorCell] [] [ClusterSystem-akka.actor.default-dispatcher-18] - Sending process request - DcIpccTaskHandle
[2021-02-03 10:49:22,218] [INFO] [akka.actor.LocalActorRef] [akkaDeadLetter] [ClusterSystem-akka.actor.default-dispatcher-18] - Message [sample.cluster.stats.dc.TaskMessage] to Actor[akka://ClusterSystem/user/DcIpccTaskHandle#-904996812] was dropped. No routees in group router for [ServiceKey[sample.cluster.stats.dc.TaskMessage](DcIpccTaskHandle)]. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[2021-02-03 10:49:22,953] [INFO] [akka.cluster.Cluster] [akkaMemberChanged] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - Node [akka://ClusterSystem@127.0.0.1:25260] is JOINING itself (with roles [DcServer, dc-default], version [0.0.0]) and forming new cluster
[2021-02-03 10:49:22,954] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - is the new leader among reachable nodes (more leaders may exist)
[2021-02-03 10:49:22,958] [INFO] [akka.cluster.Cluster] [akkaMemberChanged] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:25260] - Leader is moving node [akka://ClusterSystem@127.0.0.1:25260] to [Up]
[2021-02-03 10:49:22,964] [INFO] [akka.cluster.sbr.SplitBrainResolver] [] [ClusterSystem-akka.actor.default-dispatcher-18] - This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).

Client (node-member):

mvn exec:java -Dexec.mainClass="sample.cluster.stats.DcWorkerPrv3"                
[INFO] --- exec-maven-plugin:3.0.0:java (default-cli) @ akka-sample-cluster-java ---
SLF4J: A number (4) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay
[2021-02-03 10:49:34,416] [INFO] [akka.event.slf4j.Slf4jLogger] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Slf4jLogger started
[2021-02-03 10:49:34,632] [INFO] [akka.remote.artery.tcp.ArteryTcpTransport] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Remoting started with transport [Artery tcp]; listening on address [akka://ClusterSystem@127.0.0.1:51639] with UID [-1111624712757022496]
[2021-02-03 10:49:34,647] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Starting up, Akka version [2.6.10] ...
[2021-02-03 10:49:34,731] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Registered cluster JMX MBean [akka:type=Cluster]
[2021-02-03 10:49:34,731] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Started up successfully
[2021-02-03 10:49:34,758] [INFO] [akka.cluster.sbr.SplitBrainResolver] [] [ClusterSystem-akka.actor.default-dispatcher-5] - SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://ClusterSystem@127.0.0.1:51639#-1111624712757022496], selfDc [default].
[2021-02-03 10:49:35,163] [WARN] [akka.stream.Materializer] [] [ClusterSystem-akka.actor.default-dispatcher-3] - [outbound connection to [akka://ClusterSystem@127.0.0.1:25261], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(127.0.0.1/<unresolved>:25261,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
[2021-02-03 10:49:35,163] [WARN] [akka.stream.Materializer] [] [ClusterSystem-akka.actor.default-dispatcher-3] - [outbound connection to [akka://ClusterSystem@127.0.0.1:25261], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(127.0.0.1/<unresolved>:25261,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
[2021-02-03 10:49:35,279] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-5] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Received InitJoinAck message from [Actor[akka://ClusterSystem@127.0.0.1:25260/system/cluster/core/daemon#1349184795]] to [akka://ClusterSystem@127.0.0.1:51639]
[2021-02-03 10:49:35,335] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-5] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Welcome from [akka://ClusterSystem@127.0.0.1:25260]

When the node-seed is restarted, the node-member logs the following and never manages to reconnect to the cluster:

[2021-02-03 10:49:46,390] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-3] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Exiting confirmed [akka://ClusterSystem@127.0.0.1:25260]
[2021-02-03 10:49:46,390] [INFO] [akka.cluster.sbr.SplitBrainResolver] [] [ClusterSystem-akka.actor.default-dispatcher-5] - This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
[2021-02-03 10:49:46,978] [INFO] [akka.cluster.Cluster] [] [ClusterSystem-akka.actor.default-dispatcher-5] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - is the new leader among reachable nodes (more leaders may exist)
[2021-02-03 10:49:46,988] [INFO] [akka.cluster.Cluster] [akkaMemberChanged] [ClusterSystem-akka.actor.default-dispatcher-5] - Cluster Node [akka://ClusterSystem@127.0.0.1:51639] - Leader is removing confirmed Exiting node [akka://ClusterSystem@127.0.0.1:25260]
[2021-02-03 10:49:47,420] [INFO] [akka.remote.artery.Association] [] [ClusterSystem-akka.actor.default-dispatcher-5] - Association to [akka://ClusterSystem@127.0.0.1:25260] having UID [-8821946585502473497] has been stopped. All messages to this UID will be delivered to dead letters. Reason: ActorSystem terminated

What can I do to make the cluster repair itself?

Does anyone have the same problem?

We have the same problem. Have you found the reason?

@patriknw could you please help with this part?

In the original question it looks like only 25260 is started, not 25261. If you restart 25260, it can’t join 25261 because 25261 isn’t running.

Start both and don’t restart both at the same time.
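
If you also want a node that has been removed to recover on its own, one option (just a sketch, not something the sample does) is to listen for the member's own removal and terminate the ActorSystem, so that whatever supervises the process restarts it and it joins again through the seed nodes:

import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;
import akka.cluster.ClusterEvent.SelfRemoved;
import akka.cluster.typed.Cluster;
import akka.cluster.typed.Subscribe;

// Hypothetical helper behavior: terminates the ActorSystem once this member
// has been removed from the cluster, so an external supervisor can restart it.
public class ClusterSelfHealing {
  public static Behavior<SelfRemoved> create() {
    return Behaviors.setup(context -> {
      Cluster cluster = Cluster.get(context.getSystem());
      // Subscribe to the event fired when this node itself is removed
      cluster.subscriptions().tell(Subscribe.create(context.getSelf(), SelfRemoved.class));
      return Behaviors.receive(SelfRemoved.class)
          .onMessage(SelfRemoved.class, removed -> {
            context.getLog().warn("Removed from cluster, terminating ActorSystem so it can be restarted");
            context.getSystem().terminate();
            return Behaviors.stopped();
          })
          .build();
    });
  }
}

A removed member cannot join the cluster again with the same ActorSystem instance, which is why the whole system is terminated here; something like this would be spawned from the guardian behavior.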


Thanks a lot @patriknw for the reply.

Our configuration for akka-cluster and akka-remote looks like this:

akka.actor.serializers {
  kryo = "com.twitter.chill.akka.AkkaSerializer"
}

akka.actor.serialization-bindings {
  "java.lang.Class" = kryo
  "scala.Serializable" = kryo
  "java.io.Serializable" = none
}

akka.remote {
  log-frame-size-exceeding = 5megabyte
  artery {
    enabled = on
    transport = tcp
    canonical {
      hostname = "<getHostAddress>" # external hostname, if is empty, will be resolved using AWS SDK (see AkkaConfig)
      port = 2551 # external port
    }
    bind {
      hostname = "0.0.0.0" # hostname inside a docker container
      port = 2551          # port inside a docker container
    }
    advanced {
      maximum-frame-size = 30 MiB
      maximum-large-frame-size = 30 MiB
    }
    tcp {
      connection-timeout = 15 seconds
    }
  }
}

akka.management {
  http {
    port = 19999
    bind-hostname = "0.0.0.0"
  }
  cluster.bootstrap {
    contact-point-discovery {
      required-contact-point-nr = 2
      discovery-method = kubernetes-api
      stable-margin = 20s
    }
    contact-point {
      fallback-port = ${akka.management.http.port}
    }
  }
}
akka.discovery {
  kubernetes-api {
    pod-label-selector = "app=service-name"
  }
}
akka.cluster {
  configuration-compatibility-check.enforce-on-join = off # Enable it again if needed
  allow-weakly-up-members = off

  seed-nodes = [
    "akka://service-name@127.0.0.1:2551"
  ]
  seed-node-timeout = 60s
  shutdown-after-unsuccessful-join-seed-nodes = 90s

  min-nr-of-members = 1
  role {
    cluster-quorum.min-nr-of-members = 1
  }

  roles = [cluster-quorum]

  failure-detector {
    threshold = 12.0
    min-std-deviation = 200 ms
    acceptable-heartbeat-pause = 6 s
    expected-response-after = 15 s
  }
  downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  split-brain-resolver {
    active-strategy = lease-majority
    down-members = on
    stable-after = 15s
    keep-majority {
      role = cluster-quorum
    }
    lease-majority {
      lease-implementation = "akka.coordination.lease.kubernetes"
      lease-name = xxx
      acquire-lease-delay-for-minority = 3s
      release-after = 40s
      role = ""
    }
  }
  down-removal-margin = 15s
}

akka.coordination.lease.kubernetes {
  lease-class = "akka.coordination.lease.kubernetes.KubernetesLease"
  api-service-host = "localhost"
  api-service-port = 8080
  namespace-path = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
  namespace = "<namespace>"
  heartbeat-interval = ""
  heartbeat-timeout = 120s
  api-server-request-timeout = ""
  secure-api-server = true
  lease-operation-timeout = 5s
}

akka.cluster.sharding {
  remember-entities = on
}

The cluster works fine when we turn Artery TCP off. With Artery TCP enabled the cluster forms correctly after deployment, but all members end up being restarted when we try to restart only some of the pods/members.

It seems the new (restarted) members stay in WeaklyUp and never transition to Up properly. After a while all members of the cluster move to WeaklyUp and are then removed from the cluster.
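
A rough diagnostic that can help here (a sketch using the classic cluster event API, not code from our service) is to log every membership transition, which makes it easy to spot members that reach WeaklyUp but never Up:

import akka.actor.AbstractActor;
import akka.actor.Props;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;
import akka.cluster.ClusterEvent.MemberEvent;
import akka.cluster.ClusterEvent.UnreachableMember;

// Diagnostic-only actor: logs every member event and unreachability change.
public class MembershipLogger extends AbstractActor {
  private final Cluster cluster = Cluster.get(getContext().getSystem());

  public static Props props() {
    return Props.create(MembershipLogger.class, MembershipLogger::new);
  }

  @Override
  public void preStart() {
    // Replay current state as events, then keep receiving membership changes
    cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
        MemberEvent.class, UnreachableMember.class);
  }

  @Override
  public void postStop() {
    cluster.unsubscribe(getSelf());
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(MemberEvent.class,
            e -> getContext().getSystem().log().info("Member event: {}", e))
        .match(UnreachableMember.class,
            e -> getContext().getSystem().log().warning("Unreachable: {}", e.member()))
        .build();
  }
}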

Some logs after restarting:

[outbound connection to [akka://service-name@10.53.XXX.XXX:2551], * stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.XXX.XXX:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host

[outbound connection to [akka://service-name@10.53.XXX.XXX:2551], message stream] Upstream failed, cause: Association$OutboundStreamStopQuarantinedSignal$:

Outbound * stream to [akka://service-name@10.53.XXX.XXX:2551] failed. Restarting it. akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://service-name@10.53.XXX.XXX:2551] did not complete within 20000 ms

Coordinated shutdown phase [cluster-exiting] timed out after 10000 milliseconds

[outbound connection to [akka://service-name@10.53.XXX.XXX:2551], * stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.XXX.XXX:2551,None,List(),Some(5000 milliseconds),true)] failed because of akka.io.TcpOutgoingConnection$$anon$2: Connect timeout of Some(5000 milliseconds) expired

Cluster Node [akka://service-name@10.53.XXX.XXX:2551] - Marking node as UNREACHABLE [Member(akka://service-name@10.53.YYY.YYY:2551, Leaving)].

shardEntityClassName: Graceful shutdown of shard region timed out, region will be stopped. Remaining shards [[0-98],[11-99],[6-10],[5-7]], remaining buffered messages [0].

Probing [http://*:19999/bootstrap/seed-nodes] failed due to: Tcp command [Connect(*.default.pod.cluster.local:19999,None,List(),Some(10 seconds),true)] failed because of java.net.ConnectException: Connection refused

As @hnourhani mentioned, we ran into this problem while trying to switch from netty.tcp to artery.tcp. We recently figured out that it happens when we perform a rollout restart of some (not all) of the cluster nodes. The main symptom is that the cluster does not invoke .registerOnMemberUp for 4-5 minutes after a node is spawned in k8s (a sketch of this hook follows the logs below). After that the SplitBrainResolver triggers an all-cluster-down event and restarts all nodes in the cluster; after that restart the cluster goes back to normal. While the problematic node was struggling to connect I observed these logs:

2021-10-13T12:59:55.041Z,Outbound control stream to [akka://service-name@10.53.66.190:2551] failed. Restarting it. akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://service-name@10.53.66.190:2551] did not complete within 20000 ms
2021-10-13T12:59:54.833Z,Outbound control stream to [akka://service-name@10.53.28.60:2551] failed. Restarting it. akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://service-name@10.53.28.60:2551] did not complete within 20000 ms
2021-10-13T12:59:54.833Z,Outbound message stream to [akka://service-name@10.53.28.60:2551] failed. Restarting it. akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://service-name@10.53.28.60:2551] did not complete within 20000 ms
2021-10-13T12:59:54.826Z,Message [akka.cluster.GossipEnvelope] from Actor[akka://service-name/system/cluster/core/daemon#-977753005] to Actor[akka://service-name/deadLetters] was not delivered. [20] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:54.825Z,Message [akka.cluster.GossipEnvelope] from Actor[akka://service-name/system/cluster/core/daemon#-977753005] to Actor[akka://service-name/deadLetters] was not delivered. [19] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:54.085Z,"[outbound connection to [akka://service-name@10.53.88.245:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.88.245:2551,None,List(),Some(5000 milliseconds),true)] failed because of akka.io.TcpOutgoingConnection$$anon$2: Connect timeout of Some(5000 milliseconds) expired"
2021-10-13T12:59:42.877Z,"[outbound connection to [akka://service-name@10.53.38.193:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.38.193:2551,None,List(),Some(5000 milliseconds),true)] failed because of akka.io.TcpOutgoingConnection$$anon$2: Connect timeout of Some(5000 milliseconds) expired"
2021-10-13T12:59:42.877Z,"[outbound connection to [akka://service-name@10.53.38.193:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.38.193:2551,None,List(),Some(5000 milliseconds),true)] failed because of akka.io.TcpOutgoingConnection$$anon$2: Connect timeout of Some(5000 milliseconds) expired"
2021-10-13T12:59:38.246Z,"[outbound connection to [akka://service-name@10.53.30.134:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.30.134:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:38.245Z,"[outbound connection to [akka://service-name@10.53.30.134:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.30.134:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:38.232Z,"[outbound connection to [akka://service-name@10.53.5.174:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.5.174:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:38.232Z,"[outbound connection to [akka://service-name@10.53.5.174:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.5.174:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:37.721Z,"[outbound connection to [akka://service-name@10.53.39.136:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.39.136:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:37.072Z,"[outbound connection to [akka://service-name@10.53.28.60:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.28.60:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:37.071Z,"[outbound connection to [akka://service-name@10.53.28.60:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.28.60:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:37.064Z,"[outbound connection to [akka://service-name@10.53.83.168:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.83.168:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:37.064Z,"[outbound connection to [akka://service-name@10.53.83.168:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.83.168:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:35.292Z,"[outbound connection to [akka://service-name@10.53.66.190:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.66.190:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:35.291Z,"[outbound connection to [akka://service-name@10.53.41.63:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.41.63:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:35.195Z,"[outbound connection to [akka://service-name@10.53.41.63:2551], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.41.63:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:35.195Z,"[outbound connection to [akka://service-name@10.53.66.190:2551], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(10.53.66.190:2551,None,List(),Some(5000 milliseconds),true)] failed because of java.net.NoRouteToHostException: No route to host"
2021-10-13T12:59:33.965Z,Cluster Node [akka://service-name@10.53.49.204:2551] - Welcome from [akka://service-name@10.53.16.153:2551]
2021-10-13T12:59:33.804Z,Cluster Node [akka://service-name@10.53.49.204:2551] - Received InitJoinAck message from [Actor[akka://service-name@10.53.16.153:2551/system/cluster/core/daemon#-1396646191]] to [akka://service-name@10.53.49.204:2551]
2021-10-13T12:59:33.779Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-49-204.default.pod.cluster.local-19999#-1256883258] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-49-204.default.pod.cluster.local-19999#-1256883258] was not delivered. [18] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-49-204.default.pod.cluster.local-19999#-1256883258] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:33.474Z,Bootstrap request from 10.53.32.221:58288: Contact Point returning 0 seed-nodes []
2021-10-13T12:59:33.464Z,Bootstrap request from 10.53.49.204:41412: Contact Point returning 0 seed-nodes []
2021-10-13T12:59:32.892Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-32-221.default.pod.cluster.local-19999#1720485968] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-32-221.default.pod.cluster.local-19999#1720485968] was not delivered. [17] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-32-221.default.pod.cluster.local-19999#1720485968] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.774Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-75-204.default.pod.cluster.local-19999#1652826630] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-75-204.default.pod.cluster.local-19999#1652826630] was not delivered. [16] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-75-204.default.pod.cluster.local-19999#1652826630] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.689Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-74-87.default.pod.cluster.local-19999#2112158556] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-74-87.default.pod.cluster.local-19999#2112158556] was not delivered. [15] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-74-87.default.pod.cluster.local-19999#2112158556] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.680Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-7-89.default.pod.cluster.local-19999#-1006710795] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-7-89.default.pod.cluster.local-19999#-1006710795] was not delivered. [14] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-7-89.default.pod.cluster.local-19999#-1006710795] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.674Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-21-209.default.pod.cluster.local-19999#1638973622] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-21-209.default.pod.cluster.local-19999#1638973622] was not delivered. [13] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-21-209.default.pod.cluster.local-19999#1638973622] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.602Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-26-33.default.pod.cluster.local-19999#-1070449504] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-26-33.default.pod.cluster.local-19999#-1070449504] was not delivered. [12] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-26-33.default.pod.cluster.local-19999#-1070449504] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.592Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-9-244.default.pod.cluster.local-19999#80769743] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-9-244.default.pod.cluster.local-19999#80769743] was not delivered. [11] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-9-244.default.pod.cluster.local-19999#80769743] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.587Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-93-15.default.pod.cluster.local-19999#167246373] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-93-15.default.pod.cluster.local-19999#167246373] was not delivered. [10] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-93-15.default.pod.cluster.local-19999#167246373] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.499Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-153.default.pod.cluster.local-19999#-265628774] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-153.default.pod.cluster.local-19999#-265628774] was not delivered. [9] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-153.default.pod.cluster.local-19999#-265628774] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.465Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-85-45.default.pod.cluster.local-19999#914280228] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-85-45.default.pod.cluster.local-19999#914280228] was not delivered. [8] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-85-45.default.pod.cluster.local-19999#914280228] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.390Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-229.default.pod.cluster.local-19999#1042119243] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-229.default.pod.cluster.local-19999#1042119243] was not delivered. [7] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-16-229.default.pod.cluster.local-19999#1042119243] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.380Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-18-142.default.pod.cluster.local-19999#-324866982] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-18-142.default.pod.cluster.local-19999#-324866982] was not delivered. [6] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-18-142.default.pod.cluster.local-19999#-324866982] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.367Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-8-164.default.pod.cluster.local-19999#-2128734455] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-8-164.default.pod.cluster.local-19999#-2128734455] was not delivered. [5] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-8-164.default.pod.cluster.local-19999#-2128734455] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.366Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-78-27.default.pod.cluster.local-19999#1492036664] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-78-27.default.pod.cluster.local-19999#1492036664] was not delivered. [4] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-78-27.default.pod.cluster.local-19999#1492036664] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.366Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-51-75.default.pod.cluster.local-19999#852236807] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-51-75.default.pod.cluster.local-19999#852236807] was not delivered. [3] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-51-75.default.pod.cluster.local-19999#852236807] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.366Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-14-125.default.pod.cluster.local-19999#-2017607255] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-14-125.default.pod.cluster.local-19999#-2017607255] was not delivered. [2] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-14-125.default.pod.cluster.local-19999#-2017607255] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.365Z,Message [akka.management.cluster.bootstrap.contactpoint.HttpBootstrapJsonProtocol$SeedNodes] from Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-69-62.default.pod.cluster.local-19999#866261469] to Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-69-62.default.pod.cluster.local-19999#866261469] was not delivered. [1] dead letters encountered. If this is not an expected behavior then Actor[akka://service-name/system/bootstrapCoordinator/contactPointProbe-10-53-69-62.default.pod.cluster.local-19999#866261469] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2021-10-13T12:59:32.190Z,"Joining [akka://service-name@10.53.49.204:2551] to existing cluster [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.16.229:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:32.189Z,"Contact point [akka://service-name@10.53.14.35:2551] returned [5] seed-nodes [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.16.229:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:32.188Z,"Contact point [akka://service-name@10.53.17.253:2551] returned [5] seed-nodes [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.16.229:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:32.188Z,"Contact point [akka://service-name@10.53.14.95:2551] returned [5] seed-nodes [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.16.229:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:32.185Z,"Contact point [akka://service-name@10.53.14.113:2551] returned [5] seed-nodes [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.14.113:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:32.168Z,"Contact point [akka://service-name@10.53.73.86:2551] returned [5] seed-nodes [akka://service-name@10.53.14.35:2551, akka://service-name@10.53.14.125:2551, akka://service-name@10.53.14.95:2551, akka://service-name@10.53.16.229:2551, akka://service-name@10.53.16.153:2551]"
2021-10-13T12:59:31.977Z,"Located service members based on: [Lookup(service-name,None,Some(tcp))]: [ResolvedTarget(10-53-85-45.default.pod.cluster.local,None,Some(/10.53.85.45)), ResolvedTarget(10-53-49-204.default.pod.cluster.local,None,Some(/10.53.49.204)), ResolvedTarget(10-53-17-253.default.pod.cluster.local,None,Some(/10.53.17.253)), ResolvedTarget(10-53-74-87.default.pod.cluster.local,None,Some(/10.53.74.87)), ResolvedTarget(10-53-14-113.default.pod.cluster.local,None,Some(/10.53.14.113)), ResolvedTarget(10-53-14-125.default.pod.cluster.local,None,Some(/10.53.14.125)), ResolvedTarget(10-53-78-27.default.pod.cluster.local,None,Some(/10.53.78.27)), ResolvedTarget(10-53-93-15.default.pod.cluster.local,None,Some(/10.53.93.15)), ResolvedTarget(10-53-51-75.default.pod.cluster.local,None,Some(/10.53.51.75)), ResolvedTarget(10-53-16-229.default.pod.cluster.local,None,Some(/10.53.16.229)), ResolvedTarget(10-53-21-209.default.pod.cluster.local,None,Some(/10.53.21.209)), ResolvedTarget(10-53-7-89.default.pod.cluster.local,None,Some(/10.53.7.89)), ResolvedTarget(10-53-9-244.default.pod.cluster.local,None,Some(/10.53.9.244)), ResolvedTarget(10-53-18-142.default.pod.cluster.local,None,Some(/10.53.18.142)), ResolvedTarget(10-53-14-95.default.pod.cluster.local,None,Some(/10.53.14.95)), ResolvedTarget(10-53-26-33.default.pod.cluster.local,None,Some(/10.53.26.33)), ResolvedTarget(10-53-69-62.default.pod.cluster.local,None,Some(/10.53.69.62)), ResolvedTarget(10-53-16-153.default.pod.cluster.local,None,Some(/10.53.16.153)), ResolvedTarget(10-53-32-221.default.pod.cluster.local,None,Some(/10.53.32.221)), ResolvedTarget(10-53-75-204.default.pod.cluster.local,None,Some(/10.53.75.204)), ResolvedTarget(10-53-14-35.default.pod.cluster.local,None,Some(/10.53.14.35)), ResolvedTarget(10-53-73-86.default.pod.cluster.local,None,Some(/10.53.73.86)), ResolvedTarget(10-53-8-164.default.pod.cluster.local,None,Some(/10.53.8.164))], filtered to [10-53-85-45.default.pod.cluster.local:0, 10-53-78-27.default.pod.cluster.local:0, 10-53-32-221.default.pod.cluster.local:0, 10-53-14-35.default.pod.cluster.local:0, 10-53-49-204.default.pod.cluster.local:0, 10-53-18-142.default.pod.cluster.local:0, 10-53-69-62.default.pod.cluster.local:0, 10-53-51-75.default.pod.cluster.local:0, 10-53-75-204.default.pod.cluster.local:0, 10-53-14-125.default.pod.cluster.local:0, 10-53-26-33.default.pod.cluster.local:0, 10-53-73-86.default.pod.cluster.local:0, 10-53-17-253.default.pod.cluster.local:0, 10-53-16-229.default.pod.cluster.local:0, 10-53-7-89.default.pod.cluster.local:0, 10-53-93-15.default.pod.cluster.local:0, 10-53-14-95.default.pod.cluster.local:0, 10-53-14-113.default.pod.cluster.local:0, 10-53-8-164.default.pod.cluster.local:0, 10-53-21-209.default.pod.cluster.local:0, 10-53-74-87.default.pod.cluster.local:0, 10-53-9-244.default.pod.cluster.local:0, 10-53-16-153.default.pod.cluster.local:0]"
2021-10-13T12:59:31.819Z,"Located service members based on: [Lookup(service-name,None,Some(tcp))]: [ResolvedTarget(10-53-85-45.default.pod.cluster.local,None,Some(/10.53.85.45)), ResolvedTarget(10-53-49-204.default.pod.cluster.local,None,Some(/10.53.49.204)), ResolvedTarget(10-53-17-253.default.pod.cluster.local,None,Some(/10.53.17.253)), ResolvedTarget(10-53-74-87.default.pod.cluster.local,None,Some(/10.53.74.87)), ResolvedTarget(10-53-14-113.default.pod.cluster.local,None,Some(/10.53.14.113)), ResolvedTarget(10-53-14-125.default.pod.cluster.local,None,Some(/10.53.14.125)), ResolvedTarget(10-53-78-27.default.pod.cluster.local,None,Some(/10.53.78.27)), ResolvedTarget(10-53-93-15.default.pod.cluster.local,None,Some(/10.53.93.15)), ResolvedTarget(10-53-51-75.default.pod.cluster.local,None,Some(/10.53.51.75)), ResolvedTarget(10-53-16-229.default.pod.cluster.local,None,Some(/10.53.16.229)), ResolvedTarget(10-53-21-209.default.pod.cluster.local,None,Some(/10.53.21.209)), ResolvedTarget(10-53-7-89.default.pod.cluster.local,None,Some(/10.53.7.89)), ResolvedTarget(10-53-9-244.default.pod.cluster.local,None,Some(/10.53.9.244)), ResolvedTarget(10-53-18-142.default.pod.cluster.local,None,Some(/10.53.18.142)), ResolvedTarget(10-53-14-95.default.pod.cluster.local,None,Some(/10.53.14.95)), ResolvedTarget(10-53-26-33.default.pod.cluster.local,None,Some(/10.53.26.33)), ResolvedTarget(10-53-69-62.default.pod.cluster.local,None,Some(/10.53.69.62)), ResolvedTarget(10-53-16-153.default.pod.cluster.local,None,Some(/10.53.16.153)), ResolvedTarget(10-53-32-221.default.pod.cluster.local,None,Some(/10.53.32.221)), ResolvedTarget(10-53-75-204.default.pod.cluster.local,None,Some(/10.53.75.204)), ResolvedTarget(10-53-14-35.default.pod.cluster.local,None,Some(/10.53.14.35)), ResolvedTarget(10-53-73-86.default.pod.cluster.local,None,Some(/10.53.73.86)), ResolvedTarget(10-53-8-164.default.pod.cluster.local,None,Some(/10.53.8.164))], filtered to [10-53-85-45.default.pod.cluster.local:0, 10-53-78-27.default.pod.cluster.local:0, 10-53-32-221.default.pod.cluster.local:0, 10-53-14-35.default.pod.cluster.local:0, 10-53-49-204.default.pod.cluster.local:0, 10-53-18-142.default.pod.cluster.local:0, 10-53-69-62.default.pod.cluster.local:0, 10-53-51-75.default.pod.cluster.local:0, 10-53-75-204.default.pod.cluster.local:0, 10-53-14-125.default.pod.cluster.local:0, 10-53-26-33.default.pod.cluster.local:0, 10-53-73-86.default.pod.cluster.local:0, 10-53-17-253.default.pod.cluster.local:0, 10-53-16-229.default.pod.cluster.local:0, 10-53-7-89.default.pod.cluster.local:0, 10-53-93-15.default.pod.cluster.local:0, 10-53-14-95.default.pod.cluster.local:0, 10-53-14-113.default.pod.cluster.local:0, 10-53-8-164.default.pod.cluster.local:0, 10-53-21-209.default.pod.cluster.local:0, 10-53-74-87.default.pod.cluster.local:0, 10-53-9-244.default.pod.cluster.local:0, 10-53-16-153.default.pod.cluster.local:0]"
2021-10-13T12:59:30.465Z,"Querying for pods with label selector: [app=service-name,country=routing]. Namespace: [default]. Port: [None]"
2021-10-13T12:59:30.464Z,"Looking up [Lookup(service-name,None,Some(tcp))]"
2021-10-13T12:59:29.365Z,"Querying for pods with label selector: [app=service-name,country=routing]. Namespace: [default]. Port: [None]"
2021-10-13T12:59:29.365Z,"Looking up [Lookup(service-name,None,Some(tcp))]"
2021-10-13T12:59:29.313Z,"Locating service members. Using discovery [akka.discovery.kubernetes.KubernetesApiServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider], scheme [http]"
2021-10-13T12:59:29.292Z,Bound Akka Management (HTTP) endpoint to: 0.0.0.0:19999
2021-10-13T12:59:29.241Z,Bootstrap using `akka.discovery` method: kubernetes-api
2021-10-13T12:59:29.239Z,Initiating bootstrap procedure using kubernetes-api method...
2021-10-13T12:59:28.640Z,Including HTTP management routes for HealthCheckRoutes
2021-10-13T12:59:28.581Z,Using self contact point address: http://10.53.49.204:19999
2021-10-13T12:59:28.571Z,Including HTTP management routes for ClusterBootstrap
2021-10-13T12:59:28.296Z,"SBR started. Config: strategy [LeaseMajority], stable-after [2 minutes], down-all-when-unstable [2 minutes], selfUniqueAddress [akka://service-name@10.53.49.204:2551#9094264407656853622], selfDc [default]."
2021-10-13T12:59:28.292Z,Binding Akka Management (HTTP) endpoint to: 0.0.0.0:19999
2021-10-13T12:59:28.277Z,"Cluster Node [akka://service-name@10.53.49.204:2551] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining"
2021-10-13T12:59:28.101Z,Loading liveness checks []
2021-10-13T12:59:28.100Z,"Loading readiness checks [(sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]"
2021-10-13T12:59:28.080Z,Cluster Node [akka://service-name@10.53.49.204:2551] - Metrics collection has started successfully
2021-10-13T12:59:26.519Z,Cluster Node [akka://service-name@10.53.49.204:2551] - Started up successfully
2021-10-13T12:59:26.519Z,Cluster Node [akka://service-name@10.53.49.204:2551] - Registered cluster JMX MBean [akka:type=Cluster]
2021-10-13T12:59:26.388Z,"Cluster Node [akka://service-name@10.53.49.204:2551] - Starting up, Akka version [2.6.14] ..."
2021-10-13T12:59:26.328Z,Remoting started with transport [Artery tcp]; listening on address [akka://service-name@10.53.49.204:2551] and bound to [akka://service-name@0.0.0.0:2551] with UID [9094264407656853622]
2021-10-13T12:59:25.243Z,Slf4jLogger started
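
For reference, the .registerOnMemberUp mentioned above is the classic akka.cluster.Cluster API; a minimal sketch of how such a hook is typically wired (class and method names are placeholders, not our actual code). As far as I understand, the callback only fires once the self member reaches Up, so a node stuck in WeaklyUp or still joining never gets it:

import akka.actor.ActorSystem;
import akka.cluster.Cluster;

// Placeholder class/method names; only the registerOnMemberUp call reflects
// what is described above.
public class ServiceStartup {
  public static void startWhenUp(ActorSystem system, Runnable startApplication) {
    Cluster cluster = Cluster.get(system);
    // Invoked once this node's member status becomes Up; a member that stays
    // in WeaklyUp (or never finishes joining) will not reach this callback.
    cluster.registerOnMemberUp(startApplication);
  }
}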

I compared the config with the netty.tcp version line by line and they look pretty similar (hostnames, ports, discovery strategies, etc.). @patriknw, could you please help?

Is there an update on this? :slight_smile: