Java Developer

Java and NGINX Microcaching - An Example of Speeding Up Your Tomcat Deployments

We often do very complicated things just to make our applications handle more transactions. For example:
  • Tuning database design and queries to lighten the database's work
  • Using table indexes and partitions for efficient database operations
  • Clustering the database to increase the throughput of the database server
  • Caching database results at the Java application level, to reduce both communication and database work
  • Clustering the application server
And there are many others.
I have read from this post that, depending on your application, it may be possible to throw away all the techniques above (to some extent) in favor of simply leveraging NGINX microcaching to speed up your Java applications.
  • Microcaching - caching responses for a very short amount of time (for example, 1 or 2 seconds).
The idea behind caching is to avoid recomputing the whole page. Instead, the server simply returns the last result that was stored on disk or in memory. A short period is used to invalidate the cache, so that users keep the illusion that the content is still dynamic and never notice the cache.
Caching in general is effective at speeding up Java applications because most systems are more read-heavy than write-heavy. That is, most usage of a system involves fetching information rather than creating or updating content. Consider a blog or forum: more often than not, visitors are far more likely to read than to post comments or replies.
I have tried this approach with PHP applications before, but for the purpose of this post I am trying it with Tomcat.
In my previous post, I created a simple tutorial on how to set up NGINX and Tomcat. To review, we assume you have installed your application (with the ROOT context) on a Tomcat server running on port 8080, so that the application is accessible at the URL http://yourdomain.com:8080

Install the NGINX server, then stop the service:

apt-get install nginx
service nginx stop
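As a quick sanity check (assuming a Debian/Ubuntu system, as in the commands above), you can confirm the package installed and the service is down before editing any configuration:

```shell
# Confirm the binary is installed and see which version we got
nginx -v
# Confirm the service is no longer running
service nginx status
```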

Create a file /etc/nginx/sites-enabled/yourdomain.conf with the following contents:

server {
    server_name  yourdomain.com;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }
}
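Before relying on the new file, it may help to validate the configuration and start NGINX again (yourdomain.com here is the same placeholder used in the config above):

```shell
# Check the configuration syntax without starting the server
nginx -t
# Start NGINX and confirm the proxied response comes back through port 80
service nginx start
curl -I http://yourdomain.com/
```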

Now when you access your server via http://yourdomain.com, the traffic will go to your NGINX server. NGINX will then communicate with Tomcat on port 8080 to have the response rendered to your browser. However, using this setup alone will only incur overhead, since NGINX is introduced as a middleman with no performance benefit.

Now, to use the microcaching feature of NGINX to speed up your Tomcat-based application, just replace the contents of /etc/nginx/sites-enabled/yourdomain.conf with the following:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=TOMCAT:50m max_size=100m;
server {
    server_name yourdomain.com;
    location / {
        set $no_cache "";
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        proxy_pass http://localhost:8080;
        proxy_cache TOMCAT;
        proxy_cache_key $scheme$host$request_method$request_uri;
        proxy_cache_valid 200 302 1s;
        proxy_cache_valid 301 1s;
        proxy_cache_valid any 1s;
        proxy_cache_use_stale updating;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 1M;
    } 
}
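As an aside on how this cache is laid out on disk: NGINX names each cached file after the MD5 hash of the cache key, and levels=1:2 turns the trailing characters of that hash into a two-level directory tree. Here is a small sketch of the path that a GET of / would map to, assuming the proxy_cache_key from the config above:

```shell
# Cache key per the config: $scheme$host$request_method$request_uri
key='httpyourdomain.comGET/'
# NGINX hashes the key with MD5 to get the cache file name
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
# levels=1:2 -> first-level dir is the last hex character,
# second-level dir is the two characters before it
l1=$(printf '%s' "$hash" | cut -c32)
l2=$(printf '%s' "$hash" | cut -c30-31)
echo "/var/cache/nginx/$l1/$l2/$hash"
```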

The beauty of NGINX is in the simplicity of its configuration. Even without much prior knowledge, you can guess what the configuration is doing. If you are using JBoss, GlassFish, or any other application server, the configuration stays the same; the only thing you may need to change is the port number of your app server.
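To actually see the cache working, one option (my own addition, not part of the config above) is to expose NGINX's cache status as a response header by adding add_header X-Cache-Status $upstream_cache_status; inside the location block, then reloading and requesting the page twice in quick succession:

```shell
# Reload NGINX after adding the X-Cache-Status header to the location block
nginx -t && service nginx reload
# Two requests within the 1-second window: the first is typically
# a MISS (or EXPIRED), the second should report HIT
curl -sI http://yourdomain.com/ | grep X-Cache-Status
curl -sI http://yourdomain.com/ | grep X-Cache-Status
```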
This simple and short change has a huge effect in speeding up your Java applications. I did a quick test and was able to achieve at least 100 times the performance. I will publish the benchmarks in my next post.
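If you want a rough before-and-after number of your own in the meantime, one option (assuming the apache2-utils package, which provides the ab tool) is to load-test Tomcat directly and then through NGINX, and compare the "Requests per second" lines:

```shell
apt-get install apache2-utils
# 1000 requests, 50 concurrent, straight at Tomcat
ab -n 1000 -c 50 http://yourdomain.com:8080/
# The same load through the NGINX microcache
ab -n 1000 -c 50 http://yourdomain.com/
```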

Tags: application server, debian, linux, load testing, microcaching, nginx, performance, reverse proxy, tomcat, ubuntu, web server