Categories: ab-testing, caching, configuration, hhvm, loader.io, mastering-nginx, microcaching, nginx, optimization, php5-fpm

Super Nginx Configuration: Feeling the 100,000 r/s capability of Nginx.

In this post, I will share how I managed to cut the number of servers for our web service by a factor of ten while handling three times more traffic. In effect, we increased our system's efficiency about 30 times. For anyone who has struggled to scale, an improvement of this size is tremendous, and it has saved us a lot of money. There is still more that can be improved. We have all seen plenty of posts and articles about tweaking your Nginx configuration to serve a huge number of requests; I have now actually felt it.

Earlier, we used to run 23-25 servers to handle 8,000 users/sec. Now we have served more than 25,000 users/sec with only four machines (we even tried a single server for around 12,000 users/sec, though CPU utilization ran a little high). We have also served around 45,000 users/sec with just six servers. On top of that, the machines we used earlier had higher specifications than the ones we use now.

How did we modify our Nginx configuration?

Earlier, we had a fairly standard configuration, with most parameters left at their default or recommended values. My team told me it used to handle a lot of requests, but something changed recently and gave all of us nightmares. We tried hard to find the problem in every possible area: we improved our code and tweaked both Nginx and database parameter values. We were still running 22 machines for not much load.

Honestly, I didn't care much about the old configuration anymore and wanted to write a whole new one. I started with the book Mastering Nginx by Dimitri Aivaliotis, which is a great book; I mainly read the 2nd, 4th and 5th chapters. Then I also read these threads:

http://tweaked.io/guide/nginx/
http://www.aosabook.org/en/nginx.html
https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
http://highscalability.com/blog/2014/4/30/10-tips-for-optimizing-nginx-and-php-fpm-for-high-traffic-si.html

I started writing my own Nginx configuration, using loader.io and ab all along the way to settle on correct values for the parameters. I increased the timeout values from 15s to 300s and altered several other values. I then implemented caching (microcaching) in Nginx, enabled caching of file descriptors, and moved the cache storage to RAM. I also switched the connections to the upstream servers from Unix sockets to TCP. I observed a drastic improvement in the loader.io and ab tests. When we went live with the changed configuration, the improvement held up: surprisingly, we could handle the same load with just four machines (we even ran on one machine, at the cost of higher CPU usage). A sketch of this kind of configuration is shown below.
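For reference, here is a minimal sketch of the kinds of changes described above. The upstream address, cache path, and zone names are illustrative, not our production values; it assumes php5-fpm listening on 127.0.0.1:9000 and a RAM-backed (tmpfs) directory for the cache.

# Upstream connection over TCP instead of a Unix socket (address is illustrative).
upstream php_backend {
    server 127.0.0.1:9000;
}

# Microcache kept on a RAM-backed path (e.g. a tmpfs mount); goes in the http block.
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=microcache:10m max_size=100m inactive=1m;

# Cache open file descriptors so frequently served files are not re-opened every time.
open_file_cache          max=10000 inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # Generous timeouts (we raised ours from 15s to 300s).
        fastcgi_send_timeout 300s;
        fastcgi_read_timeout 300s;

        # Microcaching: identical requests within 1 second are served from cache.
        fastcgi_cache       microcache;
        fastcgi_cache_key   $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 301 1s;
        fastcgi_cache_use_stale updating error timeout;

        fastcgi_pass php_backend;
    }
}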

We were so happy we were practically shouting.

Further, I replaced php5-fpm with HHVM, which brought the number of machines down to just one.
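HHVM speaks the same FastCGI protocol as php5-fpm, so on the Nginx side the swap is mostly a matter of pointing fastcgi_pass at HHVM's listener. A minimal sketch, assuming HHVM runs in FastCGI server mode on port 9000 (port and paths are illustrative):

# Assuming HHVM was started as a FastCGI server, e.g.:
#   hhvm --mode server -vServer.Type=fastcgi -vServer.Port=9000
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;   # previously pointed at php5-fpm
}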

Now we are all flying. And we can see there is still more improvement to be made.

Thank You everyone. 🙂

Categories: aws, bad configuration, nginx, s3

The Dark Knight: How to configure Nginx to retrieve static pages of your website from Amazon S3 storage during a 502 Bad Gateway error?

Nginx is widely used as a front-end proxy for a PHP server: when a client makes a request, Nginx receives it first and then passes it to the PHP (back-end) server for processing. Nginx itself is able to handle on the order of 10,000 requests at any point in time.
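As a rough illustration of that front-end/back-end split (the socket path is illustrative and assumes php5-fpm as the back end):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Nginx accepts the client connection; the PHP back end does the real work.
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}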

Now, let's talk about the topic.
There can be situations where you receive server errors (502 Bad Gateway, 503 Service Unavailable, or the like). They are serious nightmares if you have ever faced them, and they create a bad impression of you among your users. I have faced it, so I can tell you it is really embarrassing. You could lose a lot of your users, and other disasters might follow.

The cause of such errors can range from bad code to high load (high load is a good thing, but being unable to handle it is not).

This is one of the solutions: you will learn how to configure your system to save yourself from that embarrassment. I named it the Dark Knight because you shouldn't need it, but it still guards your website.

Solution:

Suppose the URL to back up has the following form: www.example.com/a/b.

Suppose the static page for www.example.com/a/b is stored in S3 as bucket/a/b/a_b.html. The URL for it would be https://s3.amazonaws.com/bucket/a/b/a_b.html.

Now, the following changes can be added to your Nginx configuration:

location @static {
    # Restore the original request URI (we arrive here with $uri = /index.php).
    rewrite ^ $request_uri;
    # Map /a/b to the static copy in the bucket: /bucket/a/b/a_b.html.
    # The break flag keeps processing in this location so proxy_pass serves it.
    rewrite /(.*)/(.*) /bucket/$1/$2/$1_$2.html break;
    proxy_pass https://s3.amazonaws.com;
}

location /index.php {
    # On a 502 from PHP, serve the fallback from @static and return 200 to the client.
    error_page 502 =200 @static;
    fastcgi_intercept_errors on;

    #  body
}

 

Explanation:

The usual cause of a 502 error is PHP failing to handle any more requests.

The directive fastcgi_intercept_errors on makes Nginx intercept error responses coming back from PHP instead of passing them straight to the client, and error_page then redirects to a named location for the particular error code (502 here), changing the response code sent to the client (to 200 in this case).

Now, in the @static location:

The current value of $uri is /index.php and $request_uri is /a/b, so we rewrite it into the form of our static page's URL in the S3 bucket.

The first rewrite changes $uri from /index.php -> /a/b.

The second rewrite changes $uri from /a/b -> /bucket/a/b/a_b.html.

The proxy_pass directive then fetches the content from the resulting URL (the static page's URL) without changing the URL shown to the client.
Note that we can't append a URI after the host in proxy_pass inside a named location (Nginx will refuse to start).
Also make sure the S3 bucket grants proper read permission.
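To make that proxy_pass restriction concrete, here is what Nginx rejects versus accepts inside a named location (the bucket name is only for illustration):

location @static {
    # Not allowed here: a URI part after the host; nginx -t fails with an error
    # along the lines of '"proxy_pass" cannot have URI part ... inside named location'.
    # proxy_pass https://s3.amazonaws.com/bucket/;

    # Allowed: host only, so the (rewritten) request URI is passed through as-is.
    proxy_pass https://s3.amazonaws.com;
}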

Cheers! You have now learnt how to configure Nginx to handle 502 errors. Similar things can be done for other server errors.

There are a few related questions on Stack Overflow as well.