SSLEngine closed already thrown on nginx restarts

The following example project and nginx configuration reproduce the problem:
nginx.conf (using nginx 1.15.5):

It is a simple Scala client using Play 2.5 that sends 300 asynchronous requests on a scheduler to nginx (the configuration proxies to a server running on port 9100, but the problem occurs even without that server running).

The reason I’m using Play 2.5 is that we currently have to support it for our tenants.

If you clone and start the project with “sbt run” and trigger several reloads of nginx with something like:
for i in {1..50000}; do sleep 5;ps -ef | grep -c nginx && sudo nginx -s reload && ps -ef | grep -c nginx; done

You will start seeing “SSLEngine closed already” exceptions in the application log.

The problem occurs with a sufficiently large number of concurrent requests (on the order of hundreds) combined with nginx reloads.

One of the nginx core developers suggested it could be a problem in nginx, the client sending requests on an already-closed connection, and/or the client mishandling asynchronous close events.

Would anyone be able to shed some light on whether the problem could be in AsyncHttpClient (or Netty)?

nginx mailing list thread where the issue is discussed:,281786,281862#msg-281862

This can be the cause. By default play-ws keeps connections alive, but if nginx restarts, the pooled connections won’t be valid anymore. See the documentation for possible configuration tweaks you can do.
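For reference, this is roughly what the tweak looks like in application.conf (a sketch assuming the AsyncHttpClient-backed play-ws configuration keys; verify the exact key names against your play-ws version):

```hocon
# application.conf
# Stop play-ws (AHC) from reusing pooled connections, at the cost of a new
# TCP connection and TLS handshake per request.
play.ws.ahc.keepAlive = false
```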


Hi Marcos,

Disabling keepAlive on the client seems to solve the problem; however, it has a few drawbacks, such as more connections and more TLS handshakes, and I don’t think HTTP pipelining is possible without keep-alive (correct me if I am wrong).
Supposing we opt not to disable keepAlive, is it then recommended that clients handle this exception?

Yes. I don’t know whether Async-Http-Client has a way to revalidate connections, though.

I think you are right. It is a matter of choosing between persistent connections with better performance but no automatic failure handling, versus no persistent connections with no failures when the server closes a connection but worse performance.
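For what it’s worth, one way to keep persistent connections and still tolerate stale ones is a retry wrapper around each request. This is only a sketch using the Scala standard library (the `Retry` object and its classification of failures are my own assumptions, not play-ws API); `SSLException` covers “SSLEngine closed already”, and `IOException` covers plain connection resets:

```scala
import java.io.IOException
import javax.net.ssl.SSLException
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical helper: re-run a request-producing function when the
// failure looks like a stale/closed pooled connection.
object Retry {
  def retryOnStaleConnection[A](attempts: Int)(op: () => Future[A])
                               (implicit ec: ExecutionContext): Future[A] =
    op().recoverWith {
      // Retry only transport-level failures, and only while attempts remain.
      case _: SSLException | _: IOException if attempts > 1 =>
        retryOnStaleConnection(attempts - 1)(op)
    }
}
```

Usage would look like `Retry.retryOnStaleConnection(3)(() => ws.url(...).get())`, so a request that hits a connection nginx closed during a reload is simply re-issued on a fresh connection. Note the retried operation must be idempotent.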

I think this is a good approach.


Hi Marcos, thank you for your help.

I am wondering whether it is in fact a good idea to have the client handle an exception coming from a low-level library like Netty.
By client I mean something that depends on play-ws.

Shouldn’t play-ws somehow wrap this exception and return a meaningful value signalling that the connection was abruptly closed by the server?
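To illustrate the shape such an API could take (purely hypothetical, not play-ws’s actual API), the low-level exceptions could be classified into a domain-level value instead of leaking out of the library:

```scala
import java.io.IOException
import javax.net.ssl.SSLException
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical error type a wrapper around play-ws could surface
// instead of raw Netty/SSL exceptions.
sealed trait WsFailure
case object ConnectionClosedByServer extends WsFailure
final case class OtherFailure(cause: Throwable) extends WsFailure

object SafeWs {
  // Turn transport-level failures into a meaningful Left value.
  def classify[A](f: Future[A])(implicit ec: ExecutionContext): Future[Either[WsFailure, A]] =
    f.map(Right(_): Either[WsFailure, A]).recover {
      case _: SSLException | _: IOException => Left(ConnectionClosedByServer)
      case other                            => Left(OtherFailure(other))
    }
}
```

A caller would then pattern-match on `Left(ConnectionClosedByServer)` and decide whether to retry, rather than catching an `SSLException` thrown from deep inside Netty.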