
How To Make Your Own Analytics Tool?

An analytics tool is a tool that you can use to get information about how your website is performing: how users are using it, how many users there are at any time (live users), live users for different URLs, and other info. One well-known example is Google Analytics.

[Image: analytics tool dashboard]

An analytics tool gives a lot of information about your website, with nice-looking graphs and all. In this blog, we will discuss how to make such an analytics tool, that is, how to dig out the data ourselves.

Question 1: Why should we make such a thing if it already exists?

Answer : One reason is that it is not free forever. The other reason is that the process of making it is very sexy, and you learn technologies that you can use to build your own real-time apps, like a chat app. Now that is the more valid reason.

Question 2: What are we talking about?

Answer : This is an example of a real-time application. You see, you are tracking live users' data. The data can be anything: the total number of users at any time, the number of users per URL, per referrer, per user agent, per visiting IP, or anything else. Another example of a real-time application is a chat application where a lot of users are communicating at the same time.

Question 3: What technologies are used for such an application?

Answer : We need a continuous stream of responses from the server, as only the server knows what is hitting it. That can only be achieved with some kind of persistent connection between client and server, so that the server can continuously push data to the client. By persistent connection I mean either a literally persistent connection (WebSockets), a timed persistent connection (long polling), or a chunked persistent connection (polling). By timed persistent connection, I mean a connection that is persistent only for a certain time interval, after which it is closed and opened again. And by chunked persistent connection, I mean lots of requests with no persistence at all, but so frequent that altogether they seem persistent (my bad).
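To make the styles concrete, here is a minimal client-side sketch of short polling versus a WebSocket connection (the /stats endpoint and the host name are made up for illustration):

// Short polling: a fresh request every two seconds, change or no change.
setInterval(function(){
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/stats');
    xhr.onload = function(){ console.log(xhr.responseText); };
    xhr.send();
}, 2000);

// WebSocket: one connection; the server pushes whenever it has something.
var ws = new WebSocket('ws://app.domain.com/stats');
ws.onmessage = function(event){ console.log(event.data); };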

Among them all, I prefer WebSockets, as I can't logically convince myself why to use the others; more to the point, they are outdated styles that only existed because sockets were not there yet.

So in this tutorial we will be using the socket.io library with node.js to create web sockets. Below are the equivalent libraries for other languages, which I found from here.

[Image: WebSocket libraries for other languages]

And yes, we are going to use Nginx as a reverse proxy for the web sockets. A very good tutorial on this can be found here. You can skip Nginx if you wish and test directly against your node.js server.

For the Nginx configuration, as mentioned in that tutorial,

upstream socket_nodes {
    server 127.0.0.1:8001;  # our node server from below; add more entries to load-balance
}

server {
    server_name app.domain.com;
    location / {
        # these two headers are what let the WebSocket upgrade pass through the proxy
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
    }
}

A proper explanation of how this works for sockets is given in that tutorial. Now we set up our Node server (server.js); a very nice tutorial can be found here.

var data = {}; // shared in-memory store for everything we track
var http = require('http');
var url = require('url');
var fs = require('fs');
var io = require('socket.io');
var path;

var server = http.createServer(function(request, response){
    console.log('Request Accepted');
    path = url.parse(request.url).pathname;
    switch(path){
        case '/':
            response.writeHead(200, {'Content-Type': 'text/html'});
            response.write('hello world');
            response.end();
            break;
        case '/live':
            fs.readFile(__dirname + '/socket_io.html', function(error, contents){
                if (error){
                    response.writeHead(404);
                    response.write("oops this doesn't exist - 404");
                    response.end();
                }
                else{
                    response.writeHead(200, {'Content-Type': 'text/html'});
                    response.write(contents, 'utf8');
                    response.end();
                }
            });
            break;
        default:
            response.writeHead(404);
            response.write("oops this doesn't exist - 404 - " + path);
            response.end();
            break;
    }
});
server.listen(8001);

var listener = io.listen(server);
listener.sockets.on('connection', function(socket){
    console.log('Socket Made.');
    // push the current data to this client every second
    var timer = setInterval(function(){
        socket.emit('showdata', {'data': data});
    }, 1000);
    socket.on('disconnect', function(){
        clearInterval(timer); // stop pushing once the socket is gone
        console.log('Socket Closed.');
    });
});

Load socket.io.js on the client side in your socket_io.html; it will be used to set up the events that create the persistent connection. (Please don't bang your head over how it finds socket.io.js: the socket.io server serves that file itself at this path.)

<script src="/socket.io/socket.io.js"></script>

     var socket = io.connect(); // your initialization code here
     socket.on('showdata', function(payload){
          // payload is the {'data': data} object emitted by the server
          console.log(payload);
     });

Now you can see all the data in your browser's console. You can put this data in the right place in your analytics HTML, and from there on you can easily stream continuous data from the node server to your client.
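For example, to put the data into the page instead of the console, the handler above could write into an element (the stats id is just an assumption for illustration):

     socket.on('showdata', function(payload){
          // render whatever the server is tracking into the page
          document.getElementById('stats').textContent = JSON.stringify(payload.data);
     });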

Are you still wondering how we gather the data? Suppose we want to track the number of live users. For that we will use a key "live" in the data variable, initialized to 0. We increment it as soon as a socket is made and decrement it as soon as the socket is closed (make sure io.connect() is called once and only once per page load; see the guard sketch after the server code below). So data['live'] will give the number of live users.

In server.js,

data['live'] = 0; // initialize the counter, since data starts out as an empty object
listener.sockets.on('connection', function(socket){
    console.log('Socket Made.');
    data['live'] += 1;
    var timer = setInterval(function(){
        socket.emit('showdata', {'data': data});
    }, 1000);
    socket.on('disconnect', function(){
        data['live'] -= 1;
        clearInterval(timer); // stop pushing to a closed socket
        console.log('Socket Closed.');
    });
});
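And on the client side, a tiny guard is enough to honour the "once and only once" rule above (a minimal sketch; the window.__socket name is just a hypothetical stash of my choosing):

// reuse the existing connection if this script ever runs twice on a page
var socket = window.__socket = window.__socket || io.connect();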

As simple as that. It is just a matter of writing the logic in node, and you can gather any data. For temporary data (by temporary I mean data that lives only until the machine hosting the node server is rebooted), we don't need any extra storage.

If we want durable data, or want to store some information permanently, we can choose a database of our choice and send data from node to the database.
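For example, here is a minimal sketch of persisting the live counter in Redis instead of a plain object (assuming the node redis client is installed; Redis is just my pick here, any store works the same way):

var redis = require('redis');
var client = redis.createClient(); // assumes a local redis instance

listener.sockets.on('connection', function(socket){
    client.incr('live'); // the counter now survives a node restart
    socket.on('disconnect', function(){
        client.decr('live');
    });
});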

We are also using Nginx as a reverse proxy, which helps with load handling and better scaling. We can extract other information like the user's IP, upstream responses (php), and other valuable details, and send them to node.
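For instance, socket.io exposes some of this on the handshake object (a sketch against the older socket.io 0.9 API that the io.listen() style above belongs to; check your version's docs):

listener.sockets.on('connection', function(socket){
    var address = socket.handshake.address; // e.g. { address: '1.2.3.4', port: 54321 }
    var agent = socket.handshake.headers['user-agent'];
    console.log('connection from ' + address.address + ' via ' + agent);
});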

So yes, it really is interesting to make such a tool, I told you so. You are surely going to get stuck somewhere, or make it better, or have a suggestion or a confusion; we can always discuss.

Great to see you people with your own analytics tool. 🙂


Long Polling Vs Socket

There is no match: in the battle of Long Polling vs Socket, Socket clearly wins (as long as I'm the referee, for now).

Long Polling and Sockets are two different, well-known ways to implement real-time applications, like live users or chatting, where you have to broadcast changes instantaneously. This post compares the two technologies. You might feel that this post is biased, as I can't make myself like Long Polling over Sockets, but I will try to justify that bias, or more importantly the logical reasons why one should not prefer polling (short or long) over sockets.

Polling is a method where the client requests the server with ajax; the difference is that the requests are made at a timed interval, which means every few seconds it asks the server for a response. It is also known as short polling. Long polling is another dirty trick built on the same concept.

Long Polling is a method when the client makes ajax request to server but the server do not response instantaneously but instead it holds the response until there is some change in the database or else the request times out. (WTF, then what does it do with the persistent connection all the time before it times out). The trick here is that with long polling we are saving the number of unnecessary requests to the server which would been made for no change in data.
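In client-side code the long-polling loop looks roughly like this (a sketch, assuming an /updates endpoint that holds the request open server-side):

function poll(){
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/updates');
    xhr.timeout = 30000;               // give up after 30s of no change
    xhr.onload = function(){
        console.log(xhr.responseText); // the server answered: something changed
        poll();                        // immediately open the next held request
    };
    xhr.ontimeout = poll;              // nothing changed in time: just re-ask
    xhr.send();
}
poll();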

A WebSocket starts with the client sending a WebSocket handshake request, to which the server returns a WebSocket handshake response. The handshake resembles HTTP so that servers can handle HTTP connections as well as WebSocket connections on the same port. Once the connection is established, communication switches to a bidirectional binary protocol that no longer conforms to HTTP, and the client and server can send WebSocket data or text frames back and forth in full-duplex mode. The data is minimally framed, with a small header followed by the payload.
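For reference, the handshake looks roughly like this on the wire (header names and sample values per RFC 6455):

GET /chat HTTP/1.1
Host: app.domain.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=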

Let's start.

First of all, Long Polling is an outdated method and Sockets are the future. Long Polling was used as a polling trick back when sockets were not there.

Polling makes your client's browser slow, or might even hang it, as an ajax request is made at every interval. It also looks suspicious to the client (what is being requested from my browser? why does my browser consume so much memory?).

It also increases server load, as there are requests at continuous intervals. Long polling is like a timed persistent connection: the connection is kept alive until the server generates a response or times out, and after the response is received, another request for a persistent connection is made. So why not create it once and for all? Each request carries the overhead of a TCP connection handshake, which sockets reduce to a single handshake; and because there are multiple requests, the overhead of headers on every payload is there as well.

Long polling maintains a loop that constantly checks for changes in the database, which is what decides whether to respond to the client. In Node.js, you can share the same memory across different socket connections, so they can access shared variables. That way you don't need to use the database as an exchange point in the middle (as with AJAX or Long Polling): you can keep data in RAM, or even republish between sockets straight away.
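A sketch of that republishing, in the same socket.io style as the previous post (the message event name is just an assumption):

listener.sockets.on('connection', function(socket){
    socket.on('message', function(msg){
        // no database in the middle: fan the message out to every connected socket
        listener.sockets.emit('message', msg);
    });
});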

With AJAX (long polling) there is a higher chance of a man-in-the-middle attack, as each request is a new TCP connection traversing the internet infrastructure. With WebSockets, once the connection is made it is far more challenging to intercept, with frame masking additionally enforced when data is streamed from client to server, plus optional compression, all of which takes more effort to probe.

With long polling there is higher latency than with sockets. HTTP is a stateless protocol, which means it forgets the connection after the work is done, so there will be a delay in the response while an entirely new request is made. A socket, on the other hand, reflects a change as soon as it detects one.

Some nice links can be found here, here and here.

That's it for today. You are most welcome to break the monotony of my unfair preference for sockets over long polling.

Thanks Guys.


How to make ng-click work with a template inside ng-bind-html?

I have been working with AngularJS recently, and I came across a problem where ng-click does not work when an HTML string is rendered using the ng-bind-html directive.

I looked over the internet and found a couple of questions and some other articles (here, here) which tackled a similar problem.

The solution given there was, roughly, to use $compile to compile the HTML. But that was not working for my requirement (or I didn't understand it well enough), as I didn't want to compile it every time.

Then I found this really nice and easy hack. The hack is to replace ng-click with plain JavaScript onClick, which does work. Then, inside the onClick function, we find the scope of the HTML and explicitly call the required function on the corresponding controller. Below is how it's done. 🙂

So earlier we had this in ng-bind-html,

<a ng-click="myControllerFunction()" href="something">

Now either put an id attribute on the anchor tag itself, or create an empty div with that id, as below. Either:

<a id="myControllerScope" onClick="tempFunction()" href="something">

Or,

<a onClick="tempFunction()" href="something">
<div id="myControllerScope"></div>

Now, at the end, or wherever you have written your other JavaScript code, put the code below. It should be outside the angular blocks (controller, run, or anything else), as it is just plain JavaScript.


function tempFunction() {
    // grab the angular scope attached to (or enclosing) the element
    var scope = angular.element(document.getElementById('myControllerScope')).scope();
    scope.$apply(function() {
        // run inside $apply so angular notices any resulting model changes
        scope.myControllerFunction();
    });
}

Explanation :

In the JavaScript code, we find the current scope of the HTML drawn by ng-bind-html, store it in the scope variable, and then call the function that ng-click should have called.

Oh Yes, it works like a charm.

Ok Bye. 🙂


ELK : Configure Elasticsearch, Logstash, Kibana to visualize and analyze the logs.

This is about how to configure Elasticsearch, Logstash and Kibana together to visualize and analyze logs. There are situations when you want to see what's happening on your server, so you switch on your access log. Of course you can tail -f it or grep through it, but I tell you, it is really cumbersome to analyze logs through a raw log file.

What if I told you there is a super cool tool, Kibana, which lets you analyze your logs and gives you all kinds of information in just a few clicks? Kibana is a GUI tool designed for analyzing large log files, and when used properly with Logstash and Elasticsearch (that is, with Elasticsearch, Logstash and Kibana configured together) it can be a boon for developers.

Now, Logstash is a tool used to move logs from one file or storage (s3) to another. Elasticsearch is a search server which stores the logs in an indexed format.

Here is the picture in short: Logstash will bring logs from s3, format them, and send them to Elasticsearch. Kibana will fetch logs from Elasticsearch and show them to you. With that, let's see the actual thing which matters, the code. I assume you have everything installed, the full ELK stack, with s3 as the source of our logs (why?).

So first we configure Logstash. There are three main blocks in a Logstash config: input, filter, output. In the input block we specify the source of the log file; in the filter block we format the logs the way we want them stored in Elasticsearch; in the output block we specify the destination for the output.

Code :

Open up a terminal:

nano /etc/logstash/conf.d/logstash.conf

and edit it to contain the following:

input {
    s3 {
        bucket => "bucket_name_containing_logs"
        credentials => ["ACCESS_KEY", "SECRET_KEY"]
        region_endpoint => "whichever_probably_us-east-1"
        codec => json {
            charset => "ISO-8859-1"
        }
    }
}

filter {
    grok {
        match => { "message" => "grok_pattern" }
    }
}

output {
    #stdout {
    #    codec => json
    #}
    elasticsearch_http {
        host => "localhost"
        port => "9200"
        codec => json {
            charset => "ISO-8859-1"
        }
    }
}

Explanation :

In the input block, we specify that our logs come from s3. We provide the necessary credentials to access the s3 bucket, and we specify the charset for the input.

In the filter block, we use grok to create custom fields in Kibana by writing a proper pattern for the input logs. You can use grokdebugger to debug your pattern against a given input; an example pattern is shown below.
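For example, if the bucket holds standard combined-format access logs (an assumption about your log format), the stock COMBINEDAPACHELOG pattern already splits out client IP, verb, request, status and so on:

filter {
    grok {
        # parses e.g. 1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /x HTTP/1.0" 200 2326 ...
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}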

In the output block, we specify Elasticsearch as the destination for the output, along with its address. We also specify the charset for the output.

You can uncomment the stdout block inside the output block to also print data to the console.

Elasticsearch

We don't need to change anything in the Elasticsearch configuration for now, though if you're curious you can find it at /etc/elasticsearch/elasticsearch.yml. One thing to keep in mind is that we need a high-spec machine for this ELK system; otherwise, you might encounter various errors when Elasticsearch gets full. One workaround is that whenever your Elasticsearch is full, you clear it out.

The following command will remove all indices and bring Elasticsearch back to its initial state.

curl -XDELETE 'http://localhost:9200/_all/'

You can read here how to optimize Elasticsearch for memory.

Kibana

You don't have to do much in its configuration file, just make sure it's listening on the correct port. You can find the configuration file at /opt/kibana/config/kibana.yml

Now go ahead and enter into your browser the IP of the machine where Kibana is set up, along with the port number, or whatever URL you specified in kibana.yml.

Now you can see something like this:

[Screenshot: the Kibana dashboard]

You can now explore a lot of things from here: visualize logs, compare fields. Go ahead and check out the different settings in Kibana.

That’s it for now.

You're welcome, and do let me know when you configure the Elasticsearch, Logstash, Kibana combo pack yourself.

Cheerio 🙂