Alpakka S3: retry multipartUpload parts on failure?

Hi,

we are encountering problems with large multipartUploads using Alpakka. It seems this is due to https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ (“Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service.”).

For downloads and non-multipart uploads, this issue was mitigated by "Alpakka S3: Retrying internal errors" (https://github.com/akka/alpakka/pull/1303). However, that fix does not apply to multipart uploads, which use superPool instead of singleRequest.

Is there an easy way to enable this for multipart uploads? The problem is worse for these because they are harder to retry: the source bytes may no longer be available, or a single late chunk may fail deep into a huge upload.

Thanks

Hi Urs,

I had a quick look at S3Stream; you're right that the multipart upload works differently and doesn't use the retry mechanism.
It might be possible to add something like the RetryFlow from akka-stream-contrib here.
We’ve seen this need for retrying arise in a couple of connectors and plan to look into a recommended way of implementing it.

I don’t see an obvious way to add retrying “from the outside” to the S3 multipart upload.

Enno.

Hi Enno
Do you know whether there are plans or a ticket to implement the retry mechanism for multipart uploads?
Is there any workaround so I can implement my own retry?

Thanks
Fabio Pinheiro

Hi @FabioPinheiro, and thank you for asking.
We are currently working on a general way to retry things within an Akka Stream, which might become available soon. We see the need to retry external calls quite regularly in Alpakka and hope this will give us a good way to handle those scenarios.
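In the meantime, one coarse-grained interim workaround (my own sketch, not an Alpakka API) is to re-run the entire upload when it fails, provided the byte source can be recreated for each attempt (e.g. it reads from a file rather than a one-shot stream). A minimal Future-based retry helper in plain Scala; `retryUpload` and the `FileIO`/`S3.multipartUpload` call in the comment are illustrative, not something the connector ships:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Re-run `op` up to `maxRetries` additional times when it fails.
// `op` must build a fresh attempt on every call, e.g. (hypothetical usage):
//   () => FileIO.fromPath(path).runWith(S3.multipartUpload(bucket, key))
def retryUpload[T](maxRetries: Int)(op: () => Future[T])(implicit ec: ExecutionContext): Future[T] =
  op().recoverWith {
    case _ if maxRetries > 0 => retryUpload(maxRetries - 1)(op)
  }
```

Note this retries the whole multipart upload rather than individual parts, so it only helps when re-reading the source is cheap; per-part retries would have to live inside S3Stream itself.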

There is no ticket for Alpakka S3 for this right now. Please feel free to add it.

Cheers,
Enno.