
Optimization – Store Static Files on AWS S3 with Git Hooks and the AWS CLI

In this tutorial, we will learn how to store your static files, like JS, CSS, JSON, images, etc., on AWS S3. This can be a drastic improvement for your web servers: every page load triggers tons of follow-up requests for such static files. Although these requests need no processing (which is exactly why they can be kept on S3, a non-processing storage), they still hit your web servers significantly. Keeping them on S3 saves your web servers a lot of work. We are going to use Git Hooks and the AWS CLI tool to achieve our goal.

The challenge is the repeated upload of your static files to S3, which, if done manually, is tedious. Here we can make Git Hooks and the AWS CLI work together to automate the syncing of your static files.

Git Hooks

Git Hooks, in simple terms, are scripts that run at intermediate steps of a Git command. For example, the pre-push hook executes on every git push command, just before the code is actually pushed.

AWS CLI

The AWS CLI, in simple terms, is a command line interface tool (that is literally what the name stands for) with various commands to communicate with AWS services like S3, EC2, etc.
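If the CLI is not set up yet, you can create a named credential profile with aws configure; the profile name below is only an assumption, reused later in the sync command:

aws configure --profile aws_credential_profile
# prompts for your AWS Access Key ID, Secret Access Key, default region and output format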

Here, we can place an AWS CLI command inside the pre-push hook.

The process is this: you define a constant ASSETS_URL for the base URL of your static files. For example, in your test environment it would be http://localhost/project/, and in production it would be your S3 address (or a CloudFront URL, such as https://cdn.example.com/, in front of S3). The static file URLs would then look like ASSETS_URL.'assets/img/a.png', ASSETS_URL.'assets/css/a.css', ASSETS_URL.'assets/js/a.js', and so on.
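As a minimal sketch of how such a constant could be defined (the APP_ENV check and the example URLs are assumptions; adapt them to your project):

<?php
// config.php - define the base URL for static assets per environment (illustrative sketch)
if (getenv('APP_ENV') === 'production') {
    // In production, point at the S3 bucket (or a CloudFront URL in front of it)
    define('ASSETS_URL', 'https://cdn.example.com/');
} else {
    // Locally, the same files are served straight from your project folder
    define('ASSETS_URL', 'http://localhost/project/');
}

// Usage in a template:
echo '<link rel="stylesheet" href="' . ASSETS_URL . 'assets/css/a.css">';
echo '<img src="' . ASSETS_URL . 'assets/img/a.png">';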

Now the testing and development process remains the same, since all the file copies stay on your local server. On the production environment, however, pages will look for these files at the CDN address provided. So, before sending your code to the production servers, you need to upload all new or modified static files to S3.

At this step, Git hooks come into the picture. One of these hooks is the pre-push hook, which you can create or edit at .git/hooks/pre-push (see https://stackoverflow.com/a/14672883/2560576 for a sample pre-push hook).

In the pre-push hook, you can add an AWS CLI command that syncs your local assets folder to the assets folder in your S3 bucket. For example: aws s3 sync assets/ s3://bucket/assets/ --profile aws_credential_profile --acl public-read
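A minimal sketch of the hook file, reusing the bucket and profile names from the command above (placeholders to adjust); remember to make it executable with chmod +x .git/hooks/pre-push:

#!/bin/sh
# .git/hooks/pre-push - sync local static assets to S3 before every push
# Bucket name, profile and folder are placeholders; adjust to your project.
echo "Syncing assets/ to S3 ..."
if ! aws s3 sync assets/ s3://bucket/assets/ --profile aws_credential_profile --acl public-read; then
    echo "Asset sync failed; aborting push." >&2
    exit 1
fi
exit 0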

So now, when development is complete, you execute git push to push your code to the remote repository as usual. With the help of the Git pre-push hook, all the static files are synced to your S3 bucket's assets folder just before the actual push.

Now only requests that need processing reach your web server, while all static file requests are routed to S3.

Hope this helps someone. Please give your feedback on anything to improve or add.

More: Automatic PWA Converter Platform

Thanks!


Microcaching: How to do caching in the Nginx web server?

Caching is a technique to speed up the response for your website's static content, i.e. content which does not change over time. Microcaching is a type of caching with a very short cache expiry time. This article is about how to set up microcaching in the Nginx web server.

If you have a website and it contains webpages which do not change, then you are at the right place. Yes, you can improve your website's performance many times over.

How does caching improve performance?

Whenever a request is made to your website, it first reaches your Nginx web server (acting as a reverse proxy). From there it is forwarded to an upstream server, for example to run PHP code. For static content, however, calling the upstream servers is pure overhead. You always want to keep your upstream servers as free as possible, because they are the slow part of the stack. What can be done is to have unnecessary or repeated requests served by Nginx itself (which is extremely fast), relieving the upstream servers. This is where caching comes in: the first response for static content is cached by Nginx, and further requests for the same data are served by Nginx straight from its cache.

Nginx does full-page caching, i.e. it caches the response in its final HTML form. The cached data might be encoded (for example compressed), depending on the upstream server's first response.

How is Caching done?

We will see a simple configuration for caching with Nginx. It can be made more efficient with further, deeper study.

First we need to define a cache region in the http context of your Nginx configuration. In the directive below, levels=1:2 sets the on-disk directory structure of the cache, and keys_zone=MYCACHE:10m creates a 10 MB shared memory zone named MYCACHE for the cache keys:

fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m;

Then we have to use this cache region (the following directives should go in the appropriate context, typically the server or location context):

fastcgi_cache MYCACHE;

Next, define the cache key:

fastcgi_cache_key $server_name|$request_uri;

You can also define conditions to bypass the cache, or to avoid storing a response in it, using the following directives:

fastcgi_cache_bypass $bypass; [the response is not taken from the cache when $bypass is "1"]

fastcgi_no_cache $no_cache; [the response is not saved to the cache when $no_cache is "1"]
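Putting it all together, here is a minimal sketch of a microcaching setup; the paths, zone name, PHP-FPM address, cookie condition and the 1-second validity are assumptions to adapt, and fastcgi_cache_valid is what sets the short expiry that makes this "micro" caching:

events { }

http {
    # Cache storage on disk plus a 10 MB shared memory zone (MYCACHE) for the keys
    fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=MYCACHE:10m;

    server {
        listen 80;
        server_name example.com;
        root /var/www/project;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;     # your PHP-FPM upstream

            fastcgi_cache MYCACHE;
            fastcgi_cache_key $server_name|$request_uri;
            fastcgi_cache_valid 200 1s;      # the "micro" part: cached entries expire after 1 second

            # One variable playing the role of $bypass / $no_cache above:
            # skip the cache entirely for logged-in users (illustrative condition)
            set $skip_cache 0;
            if ($http_cookie ~* "logged_in") {
                set $skip_cache 1;
            }
            fastcgi_cache_bypass $skip_cache;
            fastcgi_no_cache $skip_cache;
        }
    }
}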

This is it; this much should do it. You are now set up for microcaching of your website's static content on your Nginx web server.

Have a rocking Deepawali with caching!

Do contact me in case of any queries.