Seeing akka.remote.OversizedPayloadException while using Replicated Data

Hi folks, need some help: I am using Akka 2.5.23 with Replicated Data. I noticed that after adding around 53-60 elements I hit the size limit and start seeing this error:

```
akka.remote.OversizedPayloadException: Discarding oversized payload sent to Some(Actor[akka://AssetRepairOrchestrator@asset-repair-02301.node.ad2.r2:56745/system/ddataReplicator/$wi#-289779066]): max allowed size 262144 bytes. Message type [akka.cluster.ddata.Replicator$Internal$ReadResult].
```

After that, I see the same error for gossip:

```
akka.remote.OversizedPayloadException: Discarding oversized payload sent to Some(Actor[akka://AssetRepairOrchestrator@asset-repair-02303.node.ad2.r2:56745/system/ddataReplicator#1559929375]): max allowed size 262144 bytes. Message type [akka.cluster.ddata.Replicator$Internal$Gossip].
```

I updated the conf file and edited max-delta-elements, but I still see the issue. The full conf file looks as shown below:

```
akka {

  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension"]

  actor {
    provider = "cluster"

    enable-additional-serialization-bindings = on
    allow-java-serialization = off

    serializers {
      jackson = "com.oracle.pic.smgl.aro.messages.JacksonSerializer"
    }
    serialization-bindings {
      "com.oracle.pic.smgl.aro.messages.AROMessage" = jackson
    }
  }

  cluster {
    roles = ["aro-service"]
    distributed-data {
      max-delta-elements = 30
    }
  }

  remote {

artery {
  enabled = on
  transport = tls-tcp
  canonical.port = ${akkaRemotePort}
  canonical.hostname = "<getHostName>"
  bind.hostname = "<getHostName>"
  bind.port = ${akkaRemotePort}

  ssl {
    ssl-engine-provider = com.oracle.pic.commons.akka.ArterySSLEngineProviderImpl
    config-ssl-engine {

      server {
       <omited>
      }

      client {
        <omited>
      }
    }
  }

}

  }

  discovery {
    method = odo
    odo {
      applications = [
        {
          name = {odoAppName}
          akkaManagementPort = {akkaManagementPort}
        }
      ]
    }
  }

  management {
    http {
      port = ${akkaManagementPort}
      bind-hostname = "0.0.0.0"
    }

    cluster.bootstrap = {
      effective-name = "my service"
    }
  }

}
```

Any updates?

The delta-CRDT support is only an optimization. Sometimes the full state must be transferred, for example when a new node joins the cluster. The Gossip message carries the full state, and it may contain several top-level entries (several different ORMaps, if you have that). The number of such entries included in a single Gossip message can be configured with akka.cluster.distributed-data.max-delta-elements.

You have tried to reduce that to 30. I don’t know if you have more than 30 top-level entries, or if it is the size of one specific ORMap that is too large. It sounds like the latter, since you mention 53-60 elements. That is not limited by the max-delta-elements configuration. You have to use fewer elements in the map, or reduce the size of each element.

One way is to split up the ORMap into several top level ORMaps.
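One way to do that split is to shard the entries deterministically over a fixed number of top-level keys. The sketch below is illustrative, not part of the Akka API: the object and method names (`DdataSharding`, `topLevelKeyId`, `NumberOfMaps`, the `asset-entries-` prefix) are all made up for this example; each returned id would then back its own `ORMapKey` and `Replicator.Update`.

```scala
// Sketch: shard one large ORMap's entries across several smaller
// top-level entries, so that no single Gossip/Write payload carries
// the whole map. All names here are hypothetical.
object DdataSharding {
  val NumberOfMaps = 8

  // Deterministically pick a top-level key id for a given entry id,
  // so reads and writes for the same entry always hit the same ORMap.
  // Math.floorMod avoids a negative index when hashCode is negative.
  def topLevelKeyId(entryId: String): String =
    s"asset-entries-${Math.floorMod(entryId.hashCode, NumberOfMaps)}"
}
```

The key point is that the mapping from entry id to top-level key must be stable across nodes, so every replica updates and reads the same sub-map for a given entry.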

I don’t know about that JacksonSerializer you are using, but JSON can be rather verbose, so you might be interested in the JacksonSerializer in Akka 2.6.0, which has support for compression and also CBOR.
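If you do move to Akka 2.6, binding your messages to the built-in CBOR serializer might look roughly like this (a sketch based on the Akka 2.6 serialization-jackson module; verify the exact setting paths against its reference.conf before relying on them):

```hocon
akka.actor {
  serialization-bindings {
    # CBOR is a binary JSON-like encoding, smaller on the wire than plain JSON
    "com.oracle.pic.smgl.aro.messages.AROMessage" = jackson-cbor
  }
}

# Optional gzip compression of large payloads for the JSON variant
akka.serialization.jackson.jackson-json.compression {
  algorithm = gzip
  compress-larger-than = 32 KiB
}
```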

Thanks Patrik. I changed my code to store all the entities as top-level entities. Does the remote.artery.advanced.maximum-frame-size setting denote the total permissible size of all the data in all the top-level entities? I ask because I have started to see the following error with akka.cluster.ddata.Replicator$Internal$Write:

```
ERROR 2019-11-18 00:10:27,881 [AssetRepairOrchestrator-akka.actor.default-dispatcher-37] akka.remote.artery.Encoder: Failed to serialize oversized message [ActorSelectionMessage(akka.cluster.ddata.Replicator$Internal$Write)].
akka.remote.OversizedPayloadException: Discarding oversized payload sent to Some(Actor[akka://AssetRepairOrchestrator@asset-repair-02303.node.ad2.r2:56745/]): max allowed size 262144 bytes. Message type [ActorSelectionMessage(akka.cluster.ddata.Replicator$Internal$Write)].
```

The Write is used for the direct replication of an Update, i.e. a single top-level entry. That means your entry is still too large.

You could increase maximum-frame-size, but you would probably run into performance problems caused by such large messages.
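For reference, raising that limit is a one-line config change (the 512 KiB value below is just an example; the Artery default is 256 KiB, which matches the 262144 bytes in your error):

```hocon
# Raise the Artery frame size limit. Larger frames delay other messages
# on the same connection, so treat this as a last resort, not a fix.
akka.remote.artery.advanced.maximum-frame-size = 512 KiB
```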

Have you looked into a more compact serialization format, or compression of the JSON?

Thanks for all the help. I had a second ORMap that was causing the issue. This is solved now. I will look into compression of the JSON.