JAR using play-json works differently on machines using spark-submit

Play JSON Version (2.5.x / etc)

2.7.3 (see build.sbt below)

API (Scala / Java / Neither / Both)

Scala
Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10)


Linux chs-gqc-379-mn003.us-south.ae.appdomain.cloud 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

JDK (Oracle 1.8.0_72, OpenJDK 1.8.x, Azul Zing)

openjdk version "1.8.0_212"
Scala 2.11.8 (inside Spark 2.3.2)


Library Dependencies

build.sbt contents:

name := "CMDW-Security"
version := "19.11.25"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
    "org.scalaj" %% "scalaj-http" % "2.4.2",
    "org.scalatest" %% "scalatest" % "3.0.5" % "test",
    "com.typesafe.play" %% "play-json" % "2.7.3",
    "org.passay" % "passay" % "1.5.0"
)

Expected Behavior


I build a JAR file and execute it with spark-submit. I use the scalaj-http library to fetch a JSON document from the internet and the play-json library to parse it.

The code reads a field from the parsed JSON like this: response("token")
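For context, here is a minimal sketch of that lookup, assuming play-json 2.7.3 on the classpath; the payload and object name are made up for illustration. On a JsValue, response("token") resolves to the JsLookup apply extension method, the same apply$extension call named in the stack trace below, so if a different play-json version shadows 2.7.3 at runtime, that exact binary signature may be absent:

```scala
import play.api.libs.json.{JsValue, Json}

object TokenLookup {
  def main(args: Array[String]): Unit = {
    // Hypothetical payload, standing in for the response fetched via scalaj-http.
    val raw = """{"token": "abc123", "expires_in": 3600}"""
    val response: JsValue = Json.parse(raw)

    // Compiles against JsLookup's apply extension method; throws
    // NoSuchMethodError at runtime if the play-json on the classpath
    // does not expose that exact binary signature.
    val token: JsValue = response("token")

    // Alternative spelling via `\` and asOpt. Note it compiles against a
    // different extension method, so it may or may not avoid the mismatch.
    val tokenOpt: Option[String] = (response \ "token").asOpt[String]

    println(s"$token / $tokenOpt")
  }
}
```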

I execute my code with spark-submit on two different machines: my local Mac (Spark 2.3.2 / Scala 2.11.12) and Analytics Engine on IBM Cloud (Linux chs-gqc-379-mn003.us-south… [as mentioned above] / Scala 2.11.8 / Spark 2.3.2).

Actual Behavior

While reading the JSON works on my local Mac machine using spark-submit, it unfortunately does not work on Analytics Engine, which throws the following error:

Exception in thread "main" java.lang.NoSuchMethodError: play.api.libs.json.JsLookup$.apply$extension1(Lplay/api/libs/json/JsLookupResult;Ljava/lang/String;)Lplay/api/libs/json/JsValue;
	at com.ibm.cmdwldap.restapi.Authorization.getAuthToken(Authorization.scala:42)
	at com.ibm.cmdwldap.executable.Test$.main(Test.scala:21)
	at com.ibm.cmdwldap.executable.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:904)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Authorization.scala:42 refers to the JSON read shown above. Given that both machines run the same version of Spark and nearly identical versions of Scala, I'm confused as to why my program crashes on Analytics Engine.

PS: When I run my JAR file directly using java -jar, it works on both machines, but my scenario requires spark-submit on Analytics Engine, so I'm trying to explore all possible options.
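The difference between the two launch modes is who assembles the runtime classpath. A hedged sketch of the invocations (the JAR filename is illustrative, the class name is taken from the stack trace, and spark.driver.userClassPathFirst is an experimental Spark setting I have not verified on Analytics Engine):

```shell
# Works on both machines: only the JAR's own bundled classpath is used.
java -jar cmdw-security.jar

# Fails on Analytics Engine: spark-submit prepends Spark's jars/ directory,
# which can contain an older play-json that shadows the 2.7.3 the JAR was
# compiled against, surfacing as NoSuchMethodError at the lookup site.
spark-submit --class com.ibm.cmdwldap.executable.Test cmdw-security.jar

# Possible (experimental, unverified) mitigation: prefer classes from the
# user JAR over Spark's own copies.
spark-submit --class com.ibm.cmdwldap.executable.Test \
  --conf spark.driver.userClassPathFirst=true \
  cmdw-security.jar
```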

Reproducible Test Case

  1. Create any Scala program that reads a JSON document using play-json.
  2. Compile it into a JAR.
  3. Run it with spark-submit on an instance of Analytics Engine on IBM Cloud; it will crash.
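The steps above can be sketched as a single-file reproducer (the object name is arbitrary; the only dependency is the play-json 2.7.3 from the build.sbt shown earlier):

```scala
import play.api.libs.json.Json

object Repro {
  def main(args: Array[String]): Unit = {
    val json = Json.parse("""{"token": "abc"}""")
    // The JsLookup apply extension call; per the report, this is the line
    // that raises NoSuchMethodError when an older play-json is picked up
    // at runtime under spark-submit.
    println(json("token"))
  }
}
```

Packaged with sbt package and launched via spark-submit --class Repro, this crashes on Analytics Engine at the json("token") call, while java -jar runs it cleanly.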