CodecNotFoundException

scala
akka

(dheeraj) #1

Hi All,

I am trying to create a simple repository for CRUD operations in Lagom using Akka Persistence for Cassandra.

I created a table as follows:

CREATE TABLE IF NOT EXISTS test (id bigint, amount decimal, PRIMARY KEY (id));

Now when I try to insert a record using com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession like this:

session.executeWrite("INSERT INTO test (id, amount) VALUES (?, ?)", BigInt(1), BigDecimal(1231.123))

I get the following two exceptions:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [decimal <-> scala.math.BigDecimal]

com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [bigint <-> scala.math.BigInt]

And the record doesn't get inserted.

I completely understand that the Cassandra driver doesn't know about scala.math.BigInt and scala.math.BigDecimal, so I tried converting these values to Java data types like this:

session.executeWrite("INSERT INTO test (id, amount) VALUES (?, ?)", java.lang.Long.valueOf(1), java.math.BigDecimal.valueOf(1111.2121))

And the record gets inserted.
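In case it helps others hitting the same exception: the conversion can be factored out into small helpers so the call sites stay readable. This is just a sketch; the JavaNumeric object below is my own helper, not part of Lagom or the Datastax driver.

```scala
// Sketch: convert Scala numeric types to the Java equivalents the
// Datastax driver ships codecs for, before binding them as statement
// parameters. JavaNumeric is an illustrative helper, not a library API.
object JavaNumeric {
  def toJava(id: BigInt): java.lang.Long =
    java.lang.Long.valueOf(id.longValue)        // bigint column expects java.lang.Long

  def toJava(amount: BigDecimal): java.math.BigDecimal =
    amount.bigDecimal                           // decimal column expects java.math.BigDecimal
}

// Usage with a (hypothetical) CassandraSession:
// session.executeWrite(
//   "INSERT INTO test (id, amount) VALUES (?, ?)",
//   JavaNumeric.toJava(BigInt(1)),
//   JavaNumeric.toJava(BigDecimal("1231.123")))
```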

Maybe I am completely on the wrong path, so my question is: should I use Akka Persistence for this kind of purpose, or is Akka Persistence for Cassandra not designed for such use?
If I can use it this way, why are Scala data types not supported?
Is there any documentation for Cassandra data types mapping with Scala data types? Such as
https://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/spark/sparkSupportedTypes.html
If there are any examples available please share with me.

Regards,
Dheeraj


(Renato) #2

Hi @dheeraj,

I think you are mixing things up. Akka Persistence Cassandra is not a general-purpose Scala/Cassandra integration library. It's built on Scala, that's true, but it's not about providing codecs or data type mappers to store Scala types in Cassandra.

Akka Persistence Cassandra is a plugin for akka-persistence and, as such, it focuses on an append-only, event-sourcing style of persistence. It is not about CRUD persistence, nor about codecs.
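To make the contrast with CRUD concrete, here is a toy illustration of the event-sourcing idea: state is never updated in place; events are appended to a journal and the current state is derived by replaying them. This is a conceptual sketch only, not the akka-persistence API.

```scala
// Append-only, event-sourced persistence in miniature.
sealed trait Event
final case class Deposited(amount: BigDecimal) extends Event

final case class Account(balance: BigDecimal) {
  // Deriving new state from an event, instead of mutating a row.
  def applyEvent(e: Event): Account = e match {
    case Deposited(a) => copy(balance = balance + a)
  }
}

// The "journal": events are only ever appended, never rewritten.
val journal = List(Deposited(BigDecimal(10)), Deposited(BigDecimal(5)))

// Current state is the fold (replay) of all events over the empty state.
val state = journal.foldLeft(Account(BigDecimal(0)))(_.applyEvent(_))
```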

When you persist an event using the plugin, your event must first be serialized to some binary representation (e.g. protobuf, Avro, etc.). The serialization mechanism has nothing to do with data type mapping.

Because you mentioned Lagom, some extra information may be helpful. Your events will be serialized first to JSON and then to a byte array. What you get in your table is the string representation of the JSON payload, stored as a byte array.
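Roughly, the pipeline looks like this. The AmountAdded event and the hand-rolled toJson below are assumptions for illustration; in real Lagom the JSON serialization is done by play-json serializers registered in a JsonSerializerRegistry.

```scala
// Sketch of what happens to an event on its way into Cassandra:
// event -> JSON string -> byte array stored in the journal table.
final case class AmountAdded(id: Long, amount: scala.math.BigDecimal)

// Toy serializer for illustration; Lagom uses play-json instead.
def toJson(e: AmountAdded): String =
  s"""{"id":${e.id},"amount":${e.amount}}"""

val event = AmountAdded(1L, BigDecimal("1231.123"))
val json  = toJson(event)          // JSON string representation of the event
val bytes = json.getBytes("UTF-8") // what actually ends up in the table
```

So the Scala types inside your event never need driver codecs at all; by the time the data reaches Cassandra it is just bytes.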