Alpakka S3 connector sending incorrect date in x-amz-date header

We have a Scala-Play service deployed in Kubernetes that uses the Alpakka S3 connector to upload files to S3. We’re on version 0.15, which is fairly old but had been working just fine until two days ago.

On the morning of 30-12 (our time), we started getting “RequestTimeTooSkewed” errors from Amazon on every single request. Amazon checked their logs and told us that the date we were passing in the x-amz-date header was 30-12-2019 instead of 30-12-2018.

The problem persisted through 31-12 and stopped this morning, 01-01-2019 our time (not surprisingly, since the year now really is 2019).

Does anyone have any idea what happened, so we can avoid it in the future? Is there a known issue with the x-amz-date header that could cause this behavior?
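For reference, AWS Signature V4 expects x-amz-date to carry an ISO-8601 “basic” timestamp in UTC (e.g. 20181230T101500Z). A quick sanity check of that format with java.time (the timestamp here is just illustrative):

```scala
import java.time.{Instant, ZoneOffset}
import java.time.format.DateTimeFormatter

// SigV4 timestamp format: lowercase "yyyy" is the calendar year.
val amzDateFormat: DateTimeFormatter =
  DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'").withZone(ZoneOffset.UTC)

println(amzDateFormat.format(Instant.parse("2018-12-30T10:15:00Z"))) // 20181230T101500Z
```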

Just to stress: nothing changed in the code interacting with Alpakka, nor in its configuration. An existing pod that had been running fine in Kubernetes was deleted, and once it restarted, the uploads stopped working.

Thanks for any bit of information you can provide.

Below are the specifics of our implementation.


**Deployment:** Debian on Kubernetes pods, base docker image openjdk:8-jdk

**The dependency (in sbt):**

  "com.lightbend.akka" %% "akka-stream-alpakka-s3" % "0.15"

**Our Scala code:**

  implicit val materializer = ActorMaterializer(ActorMaterializerSettings(actorSystem).withSupervisionStrategy(decider))

  val awsCredentialsProvider = new DefaultAWSCredentialsProviderChain()
  val settings = new S3Settings(bufferType = MemoryBufferType, proxy = None,
    credentialsProvider = awsCredentialsProvider, s3Region = "eu-west-1", pathStyleAccess = false)
  val s3Client = new S3Client(settings)(actorSystem, materializer)

  def s3Sink(key: String): Sink[ByteString, Future[MultipartUploadResult]] = {
    s3Client.multipartUpload(config.googleOcrS3Bucket, key, ContentTypes.`application/json`)
  }

  def uploadToS3Flow(localFile: String, s3Key: String) = {
    // ... (method body elided)
  }

**The configuration in application.conf:**

  {
    executor = "thread-pool-executor"
    thread-pool-executor {
      keep-alive-time = 60s
      fixed-pool-size = off
      core-pool-size-min = 8
      core-pool-size-factor = 3.0
      core-pool-size-max = 32
      max-pool-size-min = 8
      max-pool-size-factor = 3.0
      max-pool-size-max = 32
      task-queue-size = -1
      task-queue-type = "linked"
      allow-core-timeout = on
    }

    buffer = "memory"

    disk-buffer-path = ""

    proxy {
      host = ""
      port = 8000
      secure = true
    }

    aws {
      credentials {
        provider = anon
      }
      default-region = "eu-west-1"
    }
  }

That was a known bug in 0.15: the request date was formatted with the week-based-year pattern (`YYYY`) instead of the calendar year (`yyyy`), which renders the next year for the last days of December. It was fixed in 0.15.1, so bumping the dependency to `"akka-stream-alpakka-s3" % "0.15.1"` resolves it.
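The pitfall is easy to reproduce with java.time alone. Under US week rules, the week starting Sunday 2018-12-30 already contains 2019-01-01, so the week-based year for that date is 2019 even though the calendar year is still 2018 (a minimal sketch; the locale is pinned to `Locale.US` because week-based-year results are locale-dependent):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.util.Locale

val dec30 = LocalDate.of(2018, 12, 30)

// "yyyy" is the calendar year: always 2018 for this date.
val calendarYear = DateTimeFormatter.ofPattern("yyyyMMdd", Locale.US)

// "YYYY" is the week-based year: the US-locale week starting Sunday
// 2018-12-30 contains 2019-01-01, so it renders as 2019.
val weekYear = DateTimeFormatter.ofPattern("YYYYMMdd", Locale.US)

println(calendarYear.format(dec30)) // 20181230
println(weekYear.format(dec30))     // 20191230 <- the skewed date S3 rejected
```

This also explains why the problem fixed itself on 01-01: once the calendar year caught up with the week-based year, both patterns produced the same string again.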



Thank you!!