Alpakka S3: difficulty migrating to redesigned API?

I inherited a codebase that was developed against Alpakka 0.20 and need to migrate to 1.x because of binary interface changes within Akka.

It looks like the Alpakka S3 connector API changed significantly. I’m able to squeeze the new connector into the existing code by doing something like this (still untested because of the trouble described below):

    override def provide(key: String): Source[ByteString, NotUsed] = {
      // S3.download now emits a single Option[(Source[ByteString, _], ObjectMetadata)];
      // unwrap it, falling back to an empty source when the object does not exist
      S3.download(bucket.getName, key)
        .flatMapMerge(1, { _.fold(Source.empty[ByteString])(_._1) })
    }
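
For context, the snippet assumes the usual 1.x imports:

    import akka.NotUsed
    import akka.stream.alpakka.s3.scaladsl.S3
    import akka.stream.scaladsl.Source
    import akka.util.ByteString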

However, I ran into trouble upgrading the automated tests, which run against https://github.com/findify/s3mock; the S3Mock server is started on a free port chosen at runtime.

Previously it was simple to inject a custom configuration (which can differ per test) into S3Client and away you go; now one appears to be limited to:

  1. application.conf
  2. adding a custom .withAttributes on every API call?

Neither of these ways is convenient at all; there appears to be no simple way to create & inject settings programmatically once per test?
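
For reference, option 2 would look roughly like this, as far as I can tell from the 1.x docs (just a sketch; `system` and `s3MockPort` stand in for whatever the test provides, and I haven’t verified withEndpointUrl/withPathStyleAccess against my exact version):

    import akka.stream.alpakka.s3.{S3Attributes, S3Ext, S3Settings}
    import akka.stream.alpakka.s3.scaladsl.S3

    // build settings once per test, pointing at the port S3Mock picked at runtime
    val customSettings: S3Settings = S3Ext(system).settings
      .withEndpointUrl(s"http://localhost:$s3MockPort") // placeholder for the test's port
      .withPathStyleAccess(true)                        // S3Mock wants path-style URLs

    // ...but the attributes still have to be attached to every single stream
    S3.download(bucket.getName, key)
      .withAttributes(S3Attributes.settings(customSettings))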

I suppose this was driven by the design decision to expose the API as object methods rather than instance methods. What about adding an implicit parameter for the settings to the public API to make this a bit less painful? Am I missing something huge?

Thanks
Peter

I guess another way may be to recreate the ActorSystem for every test and inject a Config.
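
Something like this, perhaps (untested; the alpakka.s3 keys are what I’d expect from the 1.x reference.conf, and `s3MockPort` is whatever free port the test picked):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // overlay the per-test S3 endpoint on top of the regular application.conf
    val testConfig = ConfigFactory
      .parseString(
        s"""
           |alpakka.s3 {
           |  endpoint-url = "http://localhost:$s3MockPort"
           |  path-style-access = true
           |}
           |""".stripMargin)
      .withFallback(ConfigFactory.load())

    implicit val system: ActorSystem = ActorSystem("s3-test", testConfig)
    // ... run the test against S3Mock, then system.terminate() in cleanup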

Yes, passing a different config to the ActorSystem is a good approach.

That’s what we do in our S3 tests

Cheers,
Enno.