Categories
caching configuration microcaching nginx SSI module web-server

SSI module: Date-wise caching in Nginx

Hi everyone, today we are going to see how to implement date-wise caching in Nginx using the SSI module. There are times when we want the cache to be refreshed as soon as the date changes: that is, our content is static, but only for a given date, and it changes when a new day begins.

Date-wise caching doesn't actually need a lot of work. You just need to change the fastcgi_cache_key and append the date to it. For the date, we can use the $date_local variable in Nginx. But there is a problem: by default $date_local contains the time as well, so the key would change on every request, and we can't rewrite the variable with ordinary Nginx directives.

The SSI module comes to the rescue here. It is a great piece of Nginx which processes SSI (Server Side Include) commands in the response to be sent. Using it, we can reformat the $date_local variable into a readable form and even restrict it to just the date.

Then we can append $date_local to the cache key. The configuration below shows how to achieve it.

```nginx
server {

    ssi on;

    location ~ \.php$ {
        fastcgi_cache_key $your_key|$date_local;
    }

}
```

Now, somewhere in the HTML of your view (the response to be sent), include the SSI command below:

```html
<html>
<!--# config timefmt="%A, %d-%b-%Y" -->
</html>
```

The command above is executed by Nginx before the response is sent to the client. It changes the default format of $date_local to the desired one, and so your cache key is generated date-wise. As soon as a new date arrives, a new cache entry is generated; for the same date, the response is served from the cache.
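Putting the pieces together, a minimal configuration could look like the sketch below. This is only an illustration, not a drop-in config: the MYCACHE zone name, the cache path, and the upstream socket path are all assumptions.

```nginx
http {
    # Cache storage; MYCACHE is an assumed zone name.
    fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m;

    server {
        ssi on;  # enable SSI processing so the config timefmt command takes effect

        location ~ \.php$ {
            fastcgi_cache MYCACHE;
            # $date_local is provided by the SSI module; its format is set by the
            # <!--# config timefmt="..." --> command placed in the response body.
            fastcgi_cache_key $server_name|$request_uri|$date_local;
            fastcgi_cache_valid 200 24h;              # entries for a past date simply stop being requested
            fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed upstream
            include fastcgi_params;
        }
    }
}
```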

So, with the help of the SSI module of Nginx, date-wise caching can be done easily.
Read my article here on how to install Nginx from source, which you will need if your Nginx build was compiled without the SSI module (it is included in standard builds but can be disabled at build time).

Categories
bypass caching configuration fastcgi full-page-caching mastering-nginx microcaching nginx performance static upstream

Microcaching: How to do caching in the Nginx web server?

Caching is a technique to speed up the response for your website's static content: content that does not change with time. Microcaching is a type of caching with a short cache expiry time. This article is about how to set up microcaching in the Nginx web server.

If your website contains webpages that do not change, then you are in the right place: you can improve your website's performance many fold.

How does caching improve performance?

Whenever a request is made to your website, it first reaches your Nginx web server (a reverse proxy). Nginx then forwards it to an upstream server to run the PHP code. But if the content is static, calling the upstream servers is just overhead. You always want to keep your upstream servers as free as possible, because they are the slow part of the stack. Instead, unnecessary or repeated requests can be served by Nginx itself (which is extremely fast), relieving the upstream servers. This is where caching comes in: when a static response is produced, Nginx caches it, and further requests for the same data are served by Nginx from its cache.

Nginx does full-page caching, i.e., it caches the response in its final HTML form. The cached data might be encoded, depending on the upstream servers' first response.

How is Caching done?

We will look at a simple caching configuration for Nginx. It can be made more efficient with further study.

First we need to define a cache region, in the http context of your Nginx configuration:

```nginx
fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m;
```
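For reference, `levels=1:2` splits the cached files into a two-level directory hierarchy (so one directory doesn't hold every file), and `keys_zone=MYCACHE:10m` allocates 10 MB of shared memory for the cache keys. Optional parameters such as `max_size` and `inactive` can bound the cache on disk; the values below are assumptions for illustration:

```nginx
fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m
                   max_size=1g    # evict least-recently-used entries past 1 GB (assumed value)
                   inactive=60m;  # drop entries not accessed for 60 minutes (assumed value)
```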

Then we use this cache region (the following should go in the appropriate context, typically the server context):

```nginx
fastcgi_cache MYCACHE;
```

Define the cache key:

```nginx
fastcgi_cache_key $server_name|$request_uri;
```

You can also define conditions to bypass the cache or to avoid storing a response, using the following directives:

```nginx
fastcgi_cache_bypass $bypass;   # when $bypass is "1", the cache is bypassed for this request
fastcgi_no_cache $no_cache;     # when $no_cache is "1", the response is not stored in the cache
```
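These variables are usually set with a map block in the http context. As a sketch (the cookie name `nocache` here is an assumption), you could skip the cache whenever that cookie is present:

```nginx
# Set $no_cache to "1" when the (assumed) "nocache" cookie has any value.
map $cookie_nocache $no_cache {
    default 0;
    ~.+     1;
}
```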

That is it; this much should do. You are now set up for microcaching of your website's static content on your Nginx web server.
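One directive worth adding is fastcgi_cache_valid, which sets the cache expiry; a very short value (a few seconds) is what makes this microcaching. A complete sketch, with assumed paths and an assumed upstream socket, might look like:

```nginx
http {
    fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m;

    server {
        location ~ \.php$ {
            fastcgi_cache MYCACHE;
            fastcgi_cache_key $server_name|$request_uri;
            fastcgi_cache_valid 200 10s;              # microcaching: 10-second expiry (assumed value)
            fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed upstream
            include fastcgi_params;
        }
    }
}
```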

Have a rocking Deepawali with caching!

Do contact me in case of any query.

Categories
ab-testing caching configuration hhvm loader.io mastering-nginx microcaching nginx optimization php5-fpm

Super Nginx Configuration: Feeling the 100,000 r/s capability of Nginx

In this post, I will share my experience of how I managed to reduce the number of servers for our web service ten fold, while handling three times more traffic. In effect, I increased our system's efficiency about 30 times. For people who have struggled to scale, I tell you, this much improvement is tremendous. We are able to save a lot of money, and there are still more improvements to be made. We have all seen plenty of posts and articles on tweaking your Nginx configuration to serve a lot of requests; I have now actually felt it.

Earlier, for 8,000 users/sec, we used to run 23-25 servers. Now we have served more than 25,000 users/sec with only four machines (we even ran on a single server for around 12,000 users/sec, though CPU utilization was somewhat high). We have also served around 45,000 users/sec with just 6 servers. Moreover, the machines used earlier had higher specifications than the ones we use now.

How did we modify our Nginx configuration?

Earlier, we had a normal configuration with most parameters at default or recommended values. My team told me it used to handle a lot of requests, but something had happened recently that gave all of us nightmares. We tried hard to find the problem in every possible area: we improved our code, and we tweaked Nginx and database parameter values. We were still running on 22 machines for not much load.

Honestly, I no longer cared much about the old configuration, and wanted to write a whole new one. I started with the book Mastering Nginx by Dimitri Aivaliotis. This is a great book; I read mainly the 2nd, 4th and 5th chapters. Then I also read these articles:

http://tweaked.io/guide/nginx/

http://www.aosabook.org/en/nginx.html

https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/

http://highscalability.com/blog/2014/4/30/10-tips-for-optimizing-nginx-and-php-fpm-for-high-traffic-si.html

I started writing my own Nginx configuration, using loader.io and the ab testing tool all along the way to find the right values for the parameters. I increased timeout values from 15s to 300s and altered other values as well. I implemented caching (microcaching) in Nginx, along with caching of the file descriptors used by the cache, and I moved the cache storage to RAM. I also switched the upstream connections from Unix sockets to TCP. I observed drastic improvements in the loader.io and ab tests. Then we went live with the changed configuration, and we really had improved a lot: surprisingly, we could handle the same load with just four machines (we even ran with one machine, at higher CPU usage).
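As a rough sketch of the kinds of tweaks described above (all values, paths, and the upstream address here are assumptions for illustration, not our production settings):

```nginx
http {
    # Cache on a RAM-backed filesystem (assumes /dev/shm is tmpfs on this machine).
    fastcgi_cache_path /dev/shm/nginx-cache levels=1:2 keys_zone=MICRO:10m;

    # Cache open file descriptors so hot cache files are not re-opened on every request.
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;

    server {
        location ~ \.php$ {
            fastcgi_cache MICRO;
            fastcgi_cache_key $server_name|$request_uri;
            fastcgi_cache_valid 200 5s;    # microcaching: short expiry (assumed value)
            fastcgi_read_timeout 300s;     # raised timeout, as described in the post
            fastcgi_pass 127.0.0.1:9000;   # TCP upstream instead of a Unix socket (assumed address)
            include fastcgi_params;
        }
    }
}
```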

We were so happy, we were shouting like anything.

Further, I replaced php5-fpm with HHVM, which brought the number of machines down to just one.

Now we all are flying. We see yet more improvement to be done.

Thank You everyone. 🙂