Consumer interrupted with WakeupException after timeout. Message: null


#21

Any ideas?


(Alan Klikic) #22

This Kafka log looks OK.
What problem do you have now?


#23

Still the same issue. :frowning:

Consumer interrupted with WakeupException after timeout. Message: null.

I can see that the producer works fine, but the consumer does not.


(Alan Klikic) #24

How did you conclude that the producer is working?
Can you try consuming messages from the topic using the Kafka console consumer?
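
For example, something along these lines from inside a broker pod (the address and topic name are placeholders; adjust them to your setup):

# read everything the topic currently holds; Ctrl-C to stop
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic <your-topic> --from-beginning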


#25

I think the producer is working because when I changed the id I got errors, and reverting it solved that. But when I start the client I get this error. The logs of my Kafka make me think that everything works fine. Sorry, this is my first experience with Kafka.


#26

I'm now logged in to kafka-0. I will try to list all topics. I'm looking for an example at the moment. If you have any help for me, I would be thankful.
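
For reference, listing topics typically looks like this (assuming ZooKeeper is reachable at the given address; newer Kafka CLI versions take --bootstrap-server instead of --zookeeper):

# list every topic the cluster knows about
kafka-topics.sh --zookeeper <zookeeper-host>:2181 --list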


(Alan Klikic) #27

Check your Cassandra offset table again. If the timeuuidoffset column is populated with an offset, producing should work fine.
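
As a sketch, assuming the Lagom default offset table offsetstore in your service's keyspace (both names may differ in your setup):

# inspect the Lagom read-side offset store; the keyspace name is a placeholder
cqlsh -e "SELECT eventprocessorid, tag, timeuuidoffset FROM <your_keyspace>.offsetstore;"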

Also check connectivity to the Kafka IP and port by checking the Kafka endpoints.
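
On Kubernetes that could look like this (assuming the Kafka service is named kafka in the current namespace):

# does the service resolve, and does it actually list broker addresses?
kubectl get svc kafka
kubectl get endpoints kafka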

I deploy Kafka on dedicated hosts (a headless service without a selector is used), so I'm not aware of potential challenges when it is deployed on Kubernetes.


#28

timeuuidoffset is still null :frowning:

If you mean the connectivity test you described above, it gives the same error.

I have no name!@kafka-0:/$ kafka-console-producer.sh --topic test --broker-list localhost:9092
>test1
[2019-03-13 20:53:23,061] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test=UNKNOWN_SERVER_ERROR} (org.apache.kafka.clients.NetworkClient)
[2019-03-13 20:53:23,162] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {test=UNKNOWN_SERVER_ERROR} (org.apache.kafka.clients.NetworkClient)

I executed this inside kafka-0

Oh wait! I have found something!

I have no name!@kafka-0:/$ kafka-topics.sh --zookeeper 10.***.21.50:2181 --list
item-issued-topic
topic-item-created

#29

There should be a few messages from this morning.

I have no name!@kafka-0:/$ kafka-console-consumer.sh --bootstrap-server kafka-0:9092 --topic topic-item-created --from-beginning
^CProcessed a total of 0 messages

But there are no messages, even though the topics were created. What could be the reason that the messages were not submitted to Kafka?


(Alan Klikic) #30

This has to be related to the Kafka deployment and setup, and not directly to Lagom.


#31

Okay, I will ask on Stack Overflow for Kafka help.

But why do I get the null-message exception all the time? Is that also a problem with Kafka?

Thank you, Alan, for your time.


(Alan Klikic) #32

This is a generic exception that, in most cases, indicates a connection problem. I agree that it is confusing and does not point to the right cause. I believe this has been resolved in the newest version of Alpakka Kafka (not yet used in Lagom).


#33

Hi Alan,
good to know. Now I have found out that the data being sent is too big.

[2019-03-12 13:43:12,316] WARN [SocketServer brokerId=1001] Unexpected error from /10.***.97.165; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295616 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
        at kafka.network.Processor.poll(SocketServer.scala:689)
        at kafka.network.Processor.run(SocketServer.scala:594)
        at java.lang.Thread.run(Thread.java:748)
[2019-03-12 13:43:22,229] WARN [SocketServer brokerId=1001] Unexpected error from /10.***.97.165; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
        at kafka.network.Processor.poll(SocketServer.scala:689)
        at kafka.network.Processor.run(SocketServer.scala:594)
        at java.lang.Thread.run(Thread.java:748)
[2019-03-12 13:50:24,530] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-03-12 13:52:22,738] WARN [SocketServer brokerId=1001] Unexpected error from /10.***.97.165; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
        at kafka.network.Processor.poll(SocketServer.scala:689)
        at kafka.network.Processor.run(SocketServer.scala:594)
        at java.lang.Thread.run(Thread.java:748)

Why are the messages so big? Could the reason be that a large number of messages arrives from the producer at once?
Is it possible to change that behavior, or is it a settings problem in Kafka?
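
For reference, the 104857600 in the log is the broker's default socket.request.max.bytes (100 MB). If the requests really were that large, the caps would be raised in server.properties; a minimal sketch, with the Kafka defaults shown:

# server.properties (broker side)
socket.request.max.bytes=104857600   # largest single request the socket server accepts
message.max.bytes=1000012            # largest single record batch a topic accepts

(As the next reply points out, though, the real cause here is more likely a connection problem than genuinely oversized messages.)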

Thank you


(Alan Klikic) #34

This is around 352 MB. I do not believe you are publishing a message with a payload of this size.
This is for sure a Kafka connection issue. Check this
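
A hint at what is actually hitting the port: the broker reads the first four bytes of every connection as a length prefix, so foreign traffic shows up as an absurd "size". A quick sketch to decode the two values from your log:

# interpret the reported sizes as the raw bytes that arrived first
printf '%08x\n' 1195725856   # 47455420 = ASCII "GET " -> an HTTP request
printf '%08x\n' 369295616    # 16030100 = a TLS handshake record header

So something is speaking HTTP (and TLS) to the plaintext Kafka port, which fits a connection misconfiguration rather than oversized messages.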