Production deployment experience sharing


Those of you using Play! 1.x in production, could you share your experience so everyone benefits?

On-premises/self-hosted? VPS? Dedicated server? Cloud?

Play! built-in server? Server via modules (e.g. Netty)? Application server (e.g. WildFly) with a WAR file?

play run/Ctrl-C? play start/stop? The application server's own start/stop mechanism? System service (e.g. systemd)?

Front-end/reverse proxy server
Apache, Lighttpd, Nginx?

Web application firewall (WAF)
ModSecurity, Naxsi?

Other aspects you may find relevant: issues you faced (e.g. crashes), lessons you learned, security tips, or any other suggestions you’d like to share.

Thanks in advance.


We used to self-host on Ubuntu VPS servers.
Built-in Play server, using a Linux system service for start/stop control.
We fronted with Nginx.
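
For anyone curious what the system service part looked like: roughly a unit like the sketch below. The paths, user, and app name here are placeholders, not our exact setup (`play start` daemonizes and writes `server.pid`, hence `Type=forking`; the doubled `%%` escapes systemd’s specifier syntax).

```
# /etc/systemd/system/myapp.service — minimal sketch; paths/user/app name are assumptions
[Unit]
Description=Play 1.x application
After=network.target

[Service]
Type=forking
User=play
WorkingDirectory=/opt/myapp
ExecStart=/opt/play-1.x/play start --%%prod
ExecStop=/opt/play-1.x/play stop
PIDFile=/opt/myapp/server.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then the usual `systemctl enable myapp` / `systemctl start myapp`.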

This was all pretty easy and straightforward… then we discovered CleverCloud … moved everything over there and haven’t looked back since.

CleverCloud provides out-of-the-box, first-class runtime support for Play1 (and Play2, of course)… you literally push your code via Git… or you can even hook it up to GitHub for auto-deployment (via branch watching) if you want (similar to Heroku, but less expensive and with much better support).

CleverCloud takes care of all the plumbing: blue/green deployments, reverse proxying, auto-renewed Let’s Encrypt SSL certs (or custom ones), auto-scaling, etc.

The ONLY change we had to make to our applications to get them working on the stack was to swap Memcached for Redis. That was a painless affair, since there’s a Play1 Redis module (called Play-Redis, based on Jedis) that is a drop-in replacement for Memcached; we just update its dependencies occasionally.
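
Pulling in a Play1 module like that is a one-liner in `conf/dependencies.yml` — the sketch below is illustrative, and the module name and version number are assumptions you should check against the module’s own docs:

```yaml
# conf/dependencies.yml — module name/version are assumptions, verify before use
require:
    - play
    - play -> redis 0.4
```

After `play dependencies`, the standard `play.cache.Cache` API keeps working unchanged, which is what makes it a drop-in replacement.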

Best move ever for us to switch to CleverCloud!

We’ve been using Linux with Play’s integrated Netty server, on Ubuntu servers behind Apache2. In all cases we use two production nodes with the Apache load balancer, which lets us push to production manually without interrupting the service, keeping just one node up.

The relevant apache2 config is:

Timeout 5400
ProxyTimeout 5400

ProxyPreserveHost On

&lt;Location /balancer-manager&gt;
        SetHandler balancer-manager
        Order Deny,Allow
        Deny from all
        # allowed address elided in the original post
        Allow from
&lt;/Location&gt;

&lt;Proxy balancer://mycluster&gt;
        BalancerMember http://localhost:7001
        BalancerMember http://localhost:7002
&lt;/Proxy&gt;

&lt;Proxy *&gt;
        Order Allow,Deny
        Allow from all
&lt;/Proxy&gt;

ProxyPass /myapp balancer://mycluster/myapp
ProxyPass /balancer-manager !

ProxyPassReverse /myapp balancer://mycluster/myapp

For server startup we’ve used normal rc scripts.
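
An rc script for this kind of setup can be as small as the sketch below — the paths, app name, and framework id are placeholders, not our exact script:

```
#!/bin/sh
# /etc/init.d/myapp — minimal rc-script sketch; paths and app name are assumptions
APP_HOME=/opt/myapp
PLAY=/opt/play-1.x/play

case "$1" in
  start)
    cd "$APP_HOME" && "$PLAY" start --%prod
    ;;
  stop)
    cd "$APP_HOME" && "$PLAY" stop
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```

With two nodes, running one such script per node (ports 7001/7002 as in the balancer config above) lets you restart them one at a time while Apache keeps serving from the other.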