Alpakka S3: retry multipartUpload parts on failure?

Hi,

We are encountering problems with large multipart uploads using Alpakka. It seems this is due to the expected background rate of S3 5xx errors described at https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ (“Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service.”).

For downloads and non-multipart uploads, this issue was mitigated by “Alpakka S3: Retrying internal errors” (https://github.com/akka/alpakka/pull/1303). However, that fix does not apply to multipart uploads, which use superPool instead of singleRequest.

Is there an easy way to enable this for multipart uploads? The problem is worse for them because they are harder to retry from the outside: the already-consumed source bytes may no longer be available, and a single late chunk failing is enough to lose a huge upload.
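
For reference, we wire the upload up roughly like this (a simplified sketch assuming the Alpakka S3 scaladsl API with Akka 2.6-style materialization from the implicit system; bucket, key and path are placeholders):

```scala
import java.nio.file.Paths
import scala.concurrent.Future

import akka.actor.ActorSystem
import akka.stream.alpakka.s3.MultipartUploadResult
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.FileIO

object UploadExample extends App {
  implicit val system: ActorSystem = ActorSystem()

  // A single transient 5xx on any part fails the whole stream; by then the
  // already-consumed source bytes are gone, so the upload cannot simply resume.
  val result: Future[MultipartUploadResult] =
    FileIO.fromPath(Paths.get("/data/big-file.bin"))
      .runWith(S3.multipartUpload("my-bucket", "my-key"))
}
```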

Thanks

Hi Urs,

I had a quick look at S3Stream; you’re right, the multipart upload works differently and doesn’t use the retry mechanism.
It might be possible to add something like the RetryFlow from akka-stream-contrib around the per-part request flow there; a rough sketch of the idea is below.
We’ve seen this need for retrying arise in a couple of connectors and plan to look into a recommended way of implementing it.
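
To illustrate the idea only (this is a sketch, not how S3Stream is written today): `PartRequest`, `PartResponse` and `uploadPartFlow` are made-up stand-ins for the internal per-part types, and I’m using the `RetryFlow.withBackoff` shape as it later appeared in akka.stream.scaladsl; the contrib version differs slightly. Note that the part payload has to stay buffered so it can be re-sent.

```scala
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}

import akka.NotUsed
import akka.stream.scaladsl.{Flow, RetryFlow}
import akka.util.ByteString

object PartRetrySketch {
  // Hypothetical stand-ins for the internal per-part request/response types
  final case class PartRequest(partNumber: Int, payload: ByteString)
  final case class PartResponse(partNumber: Int, etag: String)

  // Stand-in for the HTTP flow each part is run through (via superPool)
  def uploadPartFlow: Flow[PartRequest, Try[PartResponse], NotUsed] = ???

  // Wrap the per-part flow so transient failures are retried with backoff.
  // The PartRequest keeps its payload in memory, so the part can be re-sent.
  val retryingPartFlow: Flow[PartRequest, Try[PartResponse], NotUsed] =
    RetryFlow.withBackoff(
      minBackoff = 200.millis,
      maxBackoff = 10.seconds,
      randomFactor = 0.2,
      maxRetries = 3,
      flow = uploadPartFlow
    ) {
      case (request, Failure(_)) => Some(request) // retry the same part
      case (_, Success(_))       => None          // part succeeded, emit downstream
    }
}
```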

I don’t see an obvious way to add retrying “from the outside” to the S3 multipart upload.

Enno.