Utilizing the caching features of NGINX can boost your Java application's performance. The previous post showed how to configure microcaching with NGINX in front of Tomcat or another Java application server. This post presents benchmark results showing how much can potentially be gained.

The sample application

I wrote a simple application that represents a typical Java application. It is written in Grails for faster development, which still reflects a typical Java stack since Grails is built on Spring and Hibernate.

It has only a single table, represented by the Blog domain class:

package bench
class Blog {
    String title
    String category
    String body, remarks
    static constraints = {
        body (maxSize: 65535)
        remarks (maxSize: 65535)
    }
    static mapping = {
        body type: 'text'
        remarks type: 'text'
    }
}

It has only one controller, with a single action that can be invoked:

package bench
class ReadController {
    def index() {
        // both rand() clauses defeat any index, forcing a full table scan on every request
        def count = Blog.executeQuery("select count(*) from Blog b where rand() < 0.7")[0]
        def list = Blog.executeQuery("from Blog b order by rand()", [max: 5])
        [list: list, count: count]
    }
}

As you can see, the order by rand() and where rand() < 0.7 clauses force a full table scan. For this experiment, I filled the table with 8,000 rows, which is a reasonable balance: many enterprise applications have millions of rows, but they use indexes and partitions to reduce I/O.

The view simply renders the list of Blog articles as HTML:

<!DOCTYPE html>
<html>
	<head>
		<meta name="layout" content="main"/>
		<title>Welcome to Grails</title>
	</head>
	<body>
        <g:each in="${list}" var="item">
            <h2>${item.title}</h2>
            <p><strong>Category: ${item.category}</strong></p>
            <p><strong>Count: ${count}</strong></p>
            <p>${item.body}</p>
        </g:each>
	</body>
</html>

Using the configuration from the previous post, we can access the Tomcat application server directly through port 8080. To access the page above, the URL is http://yourdomain.com:8080/read

To access it through NGINX with a 1-second microcache applied, the URL is http://yourdomain.com/read, which uses the default port 80 where NGINX is listening.
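For reference, here is a minimal sketch of the kind of microcaching configuration covered in the previous post. The cache path, zone name, and server name are assumptions; use the values from your own setup.

proxy_cache_path /var/cache/nginx/microcache keys_zone=microcache:10m max_size=100m;

server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache microcache;
        # microcache: keep successful responses for only 1 second
        proxy_cache_valid 200 1s;
        # one cache entry per URI
        proxy_cache_key $scheme$proxy_host$request_uri;
        # serve a stale copy while a single request refreshes the cache
        proxy_cache_use_stale updating;
    }
}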

Load Testing with siege

For load testing, siege is used. It is a simple yet very effective command-line tool.

siege -c100 -d10 -t60s http://yourdomain.com:8080/read

The above command simulates 100 concurrent users accessing the given URL for 60 seconds, with each user pausing for a random delay of up to 10 seconds between successive requests.

Benchmark – Single URL

For this test, we skew the conditions to favor caching by having all simulated users access the same URL.

Testing Tomcat first: after trying several combinations of parameters, it seems Tomcat can only handle around 30 concurrent users.

$ siege -c30 -d10 -t60s http://yourdomain.com:8080/read
** SIEGE 2.70
** Preparing 30 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
Transactions:		         314 hits
Availability:		      100.00 %
Elapsed time:		       59.66 secs
Data transferred:	        8.49 MB
Response time:		        0.13 secs
Transaction rate:	        5.26 trans/sec
Throughput:		        0.14 MB/sec
Concurrency:		        0.67
Successful transactions:         314
Failed transactions:	           0
Longest transaction:	        0.22
Shortest transaction:	        0.10

When I increase this to 35 users, siege reports a lot of errors and the page becomes extremely slow when accessed through a browser. As expected, most database-driven apps are I/O bound rather than CPU bound under heavy or full load.
[Figure: nginx_tomcat_bench1]

Next is the result with NGINX microcaching. Right away, I tested a hundredfold increase in users, and the system handled it easily; accessing the page via a browser was still very fast. I believe the configuration can handle much more, but there is no point in pushing further: it is already established that in this case the gain is through the roof.

$ siege -c3000 -d10 -t60s http://yourdomain.com/read
** SIEGE 2.70
** Preparing 3000 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
Transactions:		       33528 hits
Availability:		      100.00 %
Elapsed time:		       59.62 secs
Data transferred:	      291.25 MB
Response time:		        0.18 secs
Transaction rate:	      562.36 trans/sec
Throughput:		        4.89 MB/sec
Concurrency:		      102.83
Successful transactions:       33528
Failed transactions:	           0
Longest transaction:	        3.26
Shortest transaction:	        0.00

Benchmark – Multiple URLs

A more realistic test is to have multiple users accessing different URLs. This can be achieved by preparing a text file that is passed to siege. Create sites.txt with the following content (a short shell loop for generating it is shown after the listing):

yourdomain.com:8080/read/index/1
yourdomain.com:8080/read/index/2
yourdomain.com:8080/read/index/3
...
yourdomain.com:8080/read/index/49
yourdomain.com:8080/read/index/50
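
For convenience, a file like this can be generated with a simple shell loop (a sketch; substitute your own hostname):

for i in $(seq 1 50); do echo "yourdomain.com:8080/read/index/$i"; done > sites.txt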

We don’t need to modify the sample application, since NGINX caches responses per URI. Because the URLs in the list have different URIs, the cache will hold more than a single object.

Testing against Tomcat yields the same result as accessing a single URL: the sweet spot is 30 users, and increasing to 35 results in very slow performance.

$ siege -c30 -d10 -t60s -f sites.txt
** SIEGE 2.70
** Preparing 30 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
Transactions:		         344 hits
Availability:		      100.00 %
Elapsed time:		       59.07 secs
Data transferred:	        9.29 MB
Response time:		        0.13 secs
Transaction rate:	        5.82 trans/sec
Throughput:		        0.16 MB/sec
Concurrency:		        0.76
Successful transactions:         344
Failed transactions:	           0
Longest transaction:	        0.91
Shortest transaction:	        0.10

To test NGINX caching, sites.txt needs to be edited so the URLs go through port 80:

yourdomain.com/read/index/1
yourdomain.com/read/index/2
yourdomain.com/read/index/3
...
yourdomain.com/read/index/49
yourdomain.com/read/index/50
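
A quick way to strip the port from the existing file is a sed one-liner (this assumes GNU sed, which edits the file in place with -i):

sed -i 's/:8080//' sites.txt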

The result is interesting. The maximum number of users I could squeeze in without errors is 250, a huge step down from the 3,000 users handled earlier with a single URL.

$ siege -c250 -d10 -t60s -f sites.txt
** SIEGE 2.70
** Preparing 250 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.
Transactions:		        2965 hits
Availability:		      100.00 %
Elapsed time:		       59.81 secs
Data transferred:	       25.74 MB
Response time:		        0.02 secs
Transaction rate:	       49.57 trans/sec
Throughput:		        0.43 MB/sec
Concurrency:		        1.03
Successful transactions:        2965
Failed transactions:	           0
Longest transaction:	        2.55
Shortest transaction:	        0.00

It seems performance will continue to drop as we increase the number of unique URIs being accessed.

Interpretation and conclusion

Without a cache, my Tomcat setup and Java application can handle around 5 requests per second. This is not because my machine has inferior hardware, but because the application I prepared requires a lot of I/O to serve the content. The good thing about this setup is consistency: regardless of whether your app has few or many pieces of popular content, the throughput stays the same at around 5 requests per second.

NGINX microcaching, on the other hand, behaves differently. If the number of popular pages is very small, page requests will most likely hit the cache and the results are rendered quickly, hence the good results in the first experiment above. However, as we simulate more popular content, performance drops because of cache misses. If we increase the amount of popular content far enough, we can easily imagine that most requests will miss the cache and the microcache will have no effect on performance.

It seems microcaching is not a universal approach to enhancing performance. It is mostly suitable if your application faces a Reddit-like traffic pattern, where most content receives low traffic except for a few articles that get a sudden burst from a Reddit submission.

A more effective approach is to cache your application content for a longer period (for example, minutes rather than seconds) and implement eviction logic so that updated content is not served stale from the cache. Think of a blog application where posts are cached for a few minutes: when nothing is happening, a blog post is served from the cache, but when a user submits a comment or the author updates the post, the application should have a way to bypass the cache and refresh the results.
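As one possible sketch (not from the previous post; the header name and cache duration are assumptions), NGINX can cache for minutes while the application forces a refresh by re-requesting a page with a special header:

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache microcache;
    # cache successful responses for 10 minutes instead of 1 second
    proxy_cache_valid 200 10m;
    # when a comment is posted or a post is edited, the application re-requests
    # the page with "X-Cache-Refresh: 1"; proxy_cache_bypass skips the cached
    # copy and the fresh response replaces it in the cache
    proxy_cache_bypass $http_x_cache_refresh;
}

After an update, the application (or a small script) issues a single request with that header so subsequent visitors get the new content from the cache.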

Microcaching is good, but only in specific cases, and the usual clustering approach remains an indispensable technique for improving your architecture.
