
AWS Resource Migration Across Accounts – Part 1 – EC2

There are cases when one needs to migrate their AWS account. One very obvious reason: you're using AWS free tier services, and after a year, when the free tier expires, you may want to create a new AWS account.

But when you create a new AWS account, you need to recreate all of your AWS resources, like EC2, ELB, S3, RDS, Elasticsearch, etc.

Along with recreating all the resources, you would need to retain all the data, tools, and settings of your current AWS resources. For example, suppose you were running an EC2 machine for a website and had installed various tools on it, like phpMyAdmin, Node.js, nginx, Python, and gunicorn.

Now, creating a new account and the corresponding resources is fine, but resetting everything to its current state is not very straightforward.

But let me tell you, this resetting is not actually very difficult.

This article of the AWS account migration series focuses on migrating EC2 resources.

Terminology:

  • naws: new AWS account
  • oaws: old AWS account

Steps to migrate an EC2 instance from oaws to naws (a rough AWS CLI equivalent follows the list):

  1. Create an AMI of the EC2 instance in oaws.
  2. Edit the permissions of the AMI created above.
  3. Add the naws account ID in step 2. This shares the AMI with naws.
  4. Wait a few minutes; sharing the AMI can take 5–15 minutes.
  5. In naws, go to the AMI list and select Shared AMIs. You should see the AMI created above. In case you don't see it yet, please wait a while.
  6. Once the new AMI appears in naws, select it and launch a new EC2 instance.
  7. Create a new security group in naws, copying the rules from oaws.
  8. Attach this security group to the new EC2 instance in naws.
  9. You should now have a working EC2 instance, a clone of the one in oaws. You might have to SSH into it and start your servers, in case they are not covered by an upstart script, since the instance has only just been launched.
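
For reference, here is a minimal sketch of the same flow using the AWS CLI. The instance ID, AMI ID, account ID, key name, and security group below are placeholders; adjust them (and the instance type) to your setup:

# In oaws: create an AMI from the running instance
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "migration-ami" --no-reboot

# In oaws: share the AMI with the naws account ID
aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=111122223333}]"

# In naws: launch a new instance from the shared AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro --key-name my-key \
    --security-group-ids sg-0123456789abcdef0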

SMS Services: AWS SNS Implementation in PHP5 on Ubuntu 14.04

Hi everyone, today we are going to implement one of the AWS services, SNS (Simple Notification Service), for SMS in PHP.

SMS services are usually used for OTP (one-time password) features on various platforms, and also for marketing. There are various providers of bulk SMS services, like Twilio, Plivo, Nexmo, etc. Among them, I found AWS SNS to be the better option for its offerings.

Install PHP SDK for AWS

1- Install Composer
curl -sS https://getcomposer.org/installer | php

2- Create a composer.json file in your project.
sudo touch composer.json

3- Edit composer.json with below content.
{
    "require": {
        "aws/aws-sdk-php": "3.*"
    }
}

4- Run the composer command to install the AWS SDK.
php composer.phar install

This installs the aws-sdk into your project. You should now have a vendor folder in your project, which contains the aws-sdk along with an autoload.php file that is used to load the SDK in your project.
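
As a side note, Composer can also write the composer.json entry for you; this one-liner (assuming the same composer.phar from step 1) is roughly equivalent to steps 2–4:

php composer.phar require "aws/aws-sdk-php:3.*"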

Using the PHP SDK

1- Include autoload.php in the file where you want to implement SNS, and declare the SnsClient class to be used.

include 'vendor/autoload.php';
// or, for CodeIgniter: include FCPATH.'vendor/autoload.php';
use Aws\Sns\SnsClient;

2- Create an SnsClient object so you can use its functions.

$client = new SnsClient([
    'version' => 'latest',
    'region' => 'us-west-2',
    'credentials' => [
        'key' => 'KEY',
        'secret' => 'SECRET KEY',
    ],
]);

In the above, replace KEY and SECRET KEY with your actual AWS credentials.

Now you are ready to send messages using this client object.

3- Create the settings (publish options) for your message.

$options = array(
    'MessageAttributes' => array(
        'AWS.SNS.SMS.SenderID' => array(
            'DataType' => 'String',
            'StringValue' => 'SENDERID'
        ),
        'AWS.SNS.SMS.SMSType' => array(
            'DataType' => 'String',
            'StringValue' => 'SMSType'
        )
    ),
    'Message' => $message,
    'PhoneNumber' => $phone
);

In the above, replace SENDERID with your desired sender ID, SMSType with the desired value (Transactional or Promotional), and set an appropriate message and phone number (in E.164 format, e.g. +919876543210).

4- Send the message by calling the publish function on the client object.
$result = $client->publish($options);

$result contains the response returned by AWS SNS; on success it includes a MessageId.

At this point, you should be able to send a message using AWS SNS.
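
If you want to sanity-check your credentials and SMS settings outside PHP, the AWS CLI offers an equivalent one-liner (assuming the CLI is installed and configured with the same credentials; the phone number is a placeholder):

aws sns publish --region us-west-2 --phone-number "+919876543210" --message "Test message from SNS"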


Amazon Web Services : Is Internet Down?

You might be visiting different websites and getting nothing but one error or another. Let me tell you that it is not your internet's issue but the websites' issue.
The big news is that Amazon Web Services is facing issues which are leading to such errors. This has caused downtime for many websites like Quora, Slack, Reddit, and other big web giants.

Wait, what? So how are these things related?

Amazon Web Services (AWS) is a pool of cloud services offered by Amazon which these websites use to operate. When these services face issues, these websites do as well. It is not yet clear whether it was an internal bug or a hack.

So how bad is it? How long is it going to stay this way?

Well, it looks pretty bad, as a lot of websites have been affected due to their complete dependency on these services. But it is also true that such outages are very rare.
How long recovery takes depends entirely on the Amazon engineers, but you can check the current status at status.aws.amazon.com.

So what can I do if I am affected by this outage?

There is a dirty hack for AWS users to recover from this issue. From the AWS status page, it can be observed that only the N. Virginia (us-east-1) region is affected, not the others. So if nothing else can be done, try re-hosting your services in a region other than N. Virginia. This should solve your issue until the AWS engineers fix theirs.
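
As an illustrative sketch of that re-hosting (the AMI IDs and regions here are placeholders, and this assumes the affected region's API is still reachable), you can copy an AMI of your instance into a healthy region with the AWS CLI and launch from it there:

# copy an AMI out of the affected region into a healthy one
aws ec2 copy-image --source-region us-east-1 --region us-west-2 \
    --source-image-id ami-0123456789abcdef0 --name "outage-failover"

# then launch instances from the copied AMI in the new region
aws ec2 run-instances --region us-west-2 --image-id ami-0fedcba9876543210 \
    --instance-type t2.micro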

Dark Internet.


ELK : Configure Elasticsearch, Logstash, Kibana to visualize and analyze the logs.

This is about how to configure Elasticsearch, Logstash, and Kibana together to visualize and analyze logs. There are situations when you want to see what's happening on your server, so you switch on your access log. Of course you can tail -f or grep it, but let me tell you, it is really cumbersome to analyze logs through a raw log file.

What if I told you there is a super cool tool, Kibana, which lets you analyze your logs and gives you all kinds of information in just a few clicks? Kibana is a GUI tool designed for analyzing large log files which, when used properly with Logstash and Elasticsearch, can be a boon for developers.

Logstash is a tool used to move logs from one file or storage (e.g. S3) to another. Elasticsearch is a search server used to store and index the logs.

In short, here is the picture: Logstash brings logs from S3, formats them, and sends them to Elasticsearch. Kibana fetches logs from Elasticsearch and shows them to you. With that, let's see the thing which actually matters, the code. I assume you have everything installed (the ELK stack), with S3 as the source of our logs.

First we configure Logstash. There are mainly three blocks in a Logstash config: input, filter, and output. In the input block we specify the source of the logs, in the filter block we format the logs the way we want them stored in Elasticsearch, and in the output block we specify the destination for the output.

Code:

Open up a terminal:

nano /etc/logstash/conf.d/logstash.conf

and edit it to contain the following:

input {
    s3 {
        bucket => "bucket_name_containing_logs"
        credentials => ["ACCESS_KEY", "SECRET_KEY"]
        region_endpoint => "whichever_probably_us-east-1"
        codec => json {
            charset => "ISO-8859-1"
        }
    }
}

filter {
    grok {
        match => { "message" => "grok_pattern" }
    }
}

output {
    # stdout {
    #     codec => json
    # }
    elasticsearch_http {
        host => "localhost"
        port => "9200"
        codec => json {
            charset => "ISO-8859-1"
        }
    }
}

Explanation:

In the input block, we specify that our logs come from S3, provide the necessary credentials to access the S3 bucket, and specify the charset for the input.

In the filter block, we use grok to create custom fields in Kibana by writing a proper pattern for the input logs; for standard nginx/apache access logs, the built-in %{COMBINEDAPACHELOG} pattern is a good starting point. You can use the Grok Debugger to test your pattern against a sample input.

In the output block, we specify Elasticsearch as the destination, along with its address, and the charset for the output.

You can uncomment the stdout block in the output section to print data to the console for debugging.
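
After saving the file, restart Logstash so the new config takes effect. On a package-based install the commands look something like this (paths, flags, and service names vary across Logstash versions):

# check the config file for syntax errors first
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf

# restart the service so the new config is picked up
sudo service logstash restart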

Elasticsearch

We don't need to change anything in the Elasticsearch configuration for now, though if you're curious you can find it at /etc/elasticsearch/elasticsearch.yml. One thing to keep in mind is that this ELK system needs a machine with a reasonably high configuration; otherwise you might encounter various errors when Elasticsearch fills up. One workaround: whenever your Elasticsearch is full, clear it out.

The following command removes all indices and brings Elasticsearch back to its initial state.

curl -XDELETE 'http://localhost:9200/_all/'

You can also read up on optimizing Elasticsearch's memory usage.
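
If you would rather not wipe everything, you can delete individual indices instead. Logstash names its indices logstash-YYYY.MM.DD by default, so deleting a single day's logs (the date here is just an example) looks like:

curl -XDELETE 'http://localhost:9200/logstash-2017.02.01'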

Kibana

You don't have to do much in Kibana's configuration file; just make sure it is listening on the correct port (5601 by default). You can find the configuration file at /opt/kibana/config/kibana.yml.

Now go ahead and enter into your browser the IP of the machine where Kibana is set up, along with the port number (or whatever URL you specified in kibana.yml).

Now you should see something like this: [screenshot of the Kibana dashboard]

You can now explore a lot from here: visualize logs, compare fields, and more. Go ahead and check out the different settings in Kibana.

That’s it for now.

Do let me know how it goes when you configure the Elasticsearch, Logstash, Kibana combo pack.

Cheerio 🙂