Error running the reference hello Lagom application for Scala

kafka
scala

(Jim) #1

Hi,

I have followed the instructions on getting started with Lagom and have run the reference implementation. However, when I go to the URL http://localhost:9000/api/hello/Alice, I get an HTTP 500 error. The only thing I can see in the logs is:

2018-03-15T20:12:22.524Z [error] akka.actor.ActorSystemImpl [sourceThread=application-akka.actor.default-dispatcher-7, akkaSource=akka.actor.ActorSystemImpl(application), sourceActorSystem=application, akkaTimestamp=20:12:22.524UTC] - Internal server error, sending 500 response
akka.stream.StreamTcpException: Tcp command [Connect(0.0.0.0:57797,None,List(),Some(10 seconds),true)] failed because of Connect timeout of Some(10 seconds) expired

Has anyone else had this error?

Thanks

Jim


(Renato) #2

This is new to me. Can you describe which getting-started guide you followed? Scala or Java, sbt or Maven? That will help us narrow down the possibilities.

Did you also see any other errors during bootstrapping?


(Jim) #3

Hi Renato,

I followed the steps here: https://www.lagomframework.com/documentation/1.4.x/scala/IntroGetStarted.html

I was at a Scala dojo and my colleagues tried this as well; it was also failing for one other person, while for the rest (about six) it was working.

Regards

Jim


(Jim) #4

Also, the logs now continually show this:

08:42:36.149 [warn] org.apache.kafka.clients.NetworkClient [] - Connection to node -1 could not be established. Broker may not be available.
08:42:36.203 [warn] org.apache.kafka.clients.NetworkClient [] - Connection to node -1 could not be established. Broker may not be available.


(Ignasi Marimon-Clos) #5

Hi @Jim,

this sounds like something specific to the setup of the two machines where the error happened. There are several possible environmental differences to check to try to isolate the actual cause:

  • available memory
  • JVM version (major and minor)
  • sbt launcher version (I don’t think this is really an issue)

Make sure you clean your target/ folders completely between retries. ZooKeeper is known to sometimes corrupt its storage if it is shut down abruptly.
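
If you want to script that cleanup, here is a minimal sketch as an sbt task for build.sbt (the task name is made up here, and the paths assume Lagom's default dev-mode layout, so verify them against your project):

val cleanDevState = taskKey[Unit]("Wipe the embedded Kafka/ZooKeeper and Cassandra state.")

cleanDevState := {
  // Lagom's dev mode keeps the embedded services' runtime state under target/;
  // deleting these directories forces a fresh start on the next runAll
  IO.delete(baseDirectory.value / "target" / "lagom-dynamic-projects")
  IO.delete(baseDirectory.value / "target" / "embedded-cassandra")
}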

Cheers


(Jim) #6

Hi Ignasi,

I did a clean and deleted the target folder, but I'm still having the same problem. This is my Java version:

java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)

It was running on a MacBook Pro with 16GB of memory. I have no problem running Play/Akka apps.

Thanks

Jim


(Tim Moore) #7

@Jim is your machine configured to use a proxy or VPN? You might need to configure your system or your build tool to exclude all of your local IP addresses from proxying.
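
For reference, a rough sketch of what that exclusion can look like on the JVM side. http.nonProxyHosts is the standard JVM proxy-bypass property, not a Lagom-specific setting; the host list below is an assumption to adapt to your machine, and whether build-level javaOptions reach your dev-mode processes depends on your setup:

javaOptions in ThisBuild += "-Dhttp.nonProxyHosts=localhost|127.0.0.1|0.0.0.0"

The same -D property can also be passed to the JVM when you launch sbt itself.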


(Jim) #8

I have Tunnelblick installed but I have shut it down. I also shut Docker down in case that was causing any problems.


(Pascal H.) #9

I had the same problem:

[warn] org.apache.kafka.clients.NetworkClient [] - Connection to node -1 could not be established. Broker may not be available.

In my case, deleting the target directory helps a lot.

Unfortunately, that does not make for a stable development environment.
@TimMoore, @renato:

  • Do you have any recommendations for preventing ZooKeeper corruption?
  • Do you use the embedded Kafka broker (which is very convenient for rapid testing), or do you use an external installation?

(Renato) #10

@hpwxf, we mostly use the embedded one, and this is the first time we've seen this issue.

It would be interesting to track down how this is happening; it seems to be environment-related.

Can you tell us which OS and which Java version you are using?


#11

Not a fix, but to get moving again faster I added these tasks to sbt:

val cleanCas = taskKey[Unit]("Delete embedded Cassandra data.")
val fixKafka = taskKey[Unit]("When Kafka won't start, run this to delete ZooKeeper data.")

then add the tasks to the root project:

lazy val `hello` = (project in file("."))
  .aggregate(
    // your services here
  )
  .settings(
    // remove the embedded ZooKeeper's storage so the Kafka broker can register cleanly again
    fixKafka := IO.delete(baseDirectory.value / "target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/target/zookeeper_data"),
    // remove the embedded Cassandra's on-disk data
    cleanCas := IO.delete(baseDirectory.value / "target/embedded-cassandra/data")
  )
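
After stopping runAll, you can then run fixKafka or cleanCas from the sbt shell before starting again.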

(Pascal H.) #12

My current configuration is:

  • Java 1.8.0.144 (I also have another config with 1.8.0.161; I could try Java 9 if you think it's a good idea)
  • Lagom 1.4.0 (I can upgrade to 1.4.1 if that helps)
  • macOS High Sierra 10.13.3

I built my code as a derivation of the online-auction-lagom example.

In my environment, this problem occurs at least once a day (after a crash of components in my code).
The problem already occurred when I was studying the online-auction-lagom example.


(Ignasi Marimon-Clos) #13

After some review, I think

Connection to node -1 could not be established. Broker may not be available.

means the embedded Kafka is not running. When using sbt runAll, there's a forked JVM running KafkaLauncher. When you see that error, what is actually happening is that the sbt process where your services are running started a KafkaLauncher that crashed mid-start.

When that happens, you can use jps (or ps aux) to confirm that the KafkaLauncher process is gone.

I don't know of any way to reproduce the bug and prevent it, so any information you can gather when it happens would be a great help in narrowing down the causes and eventually providing a fix. Feel free to move the discussion to an issue if you prefer.

Cheers,

PS: When you experience the crash, you can have a look at the Kafka logs in target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/target/log4j


(Pascal H.) #14

Thank you for your help.

You’re right. The embedded Kafka crashed.

[info] Starting Kafka
[info] Starting Cassandra
..........
[info] Cassandra server running at 127.0.0.1:4000
Kafka Server closed unexpectedly.

If I inspect the logs in the directory you suggested, I can see this in server.log:

[2018-03-26 14:07:07,953] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x24 zxid:0xbe txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x25 zxid:0xbf txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x26 zxid:0xc0 txntype:-1 reqpath:n/a Error Path:/brokers/ids/0 Error:KeeperErrorCode = NodeExists for /brokers/ids/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,960] INFO Result of znode creation is: NODEEXISTS (kafka.utils.ZKCheckedEphemeral)
[2018-03-26 14:07:07,961] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/0. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
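
(I suppose that also explains why deleting the zookeeper_data directory gets things going again: it removes the stale broker registration.)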

Can you see something useful here? (I can upload the entire file).

If not, I will create an issue with all the data I have.


(Renato) #15

I think we can move it to a GitHub issue and continue discussing and solving it there.


(Pascal H.) #16

OK.
New GitHub issue: https://github.com/lagom/lagom/issues/1285