
Thursday, 19 December 2013

Docker LXC Implementation

Installing Docker

Linux kernel 3.8
Due to a bug in LXC, docker works best on the 3.8 kernel. Precise comes with a 3.2 kernel, so we need to upgrade it. The kernel you’ll install when following these steps comes with AUFS built in. We also include the generic headers to enable packages that depend on them, like ZFS and the VirtualBox guest additions.

Installing kernel 3.8 on Ubuntu Precise 12.04 (LTS) (64-bit)


# apt-get update
# apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
# reboot

Ubuntu Raring 13.04 (64-bit) already ships kernel 3.8; install the extras package for AUFS support

# apt-get update
# apt-get install linux-image-extra-`uname -r`

Installing the docker binary

# mkdir /var/src
# cd /var/src
# wget --output-document=docker https://get.docker.io/builds/Linux/x86_64/docker-latest
# chmod +x docker
# cp -ap docker /usr/bin/docker
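
To confirm the binary is in place before wiring up the init script, you can print the client version; this should work even before the daemon is started:
# docker -v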

Docker init script for Ubuntu.

/etc/init.d/docker

#!/bin/sh

### BEGIN INIT INFO
# Provides:         docker
# Required-Start:    $local_fs $remote_fs $network $syslog $named
# Required-Stop:     $local_fs $remote_fs $network $syslog $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the docker daemon
# Description:       starts docker using start-stop-daemon
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/docker
NAME=docker
DESC=docker
PID=/var/run/docker.pid
DAEMON_OPTS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d"

test -x $DAEMON || exit 0

set -e

. /lib/lsb/init-functions

start() {
       start-stop-daemon --background --start --quiet --pidfile $PID \
           --retry 5 --exec $DAEMON --oknodo -- $DAEMON_OPTS
}

stop() {
       start-stop-daemon --stop --quiet --pidfile $PID \
           --retry 5 --oknodo --exec $DAEMON
}

case "$1" in
   start)
       log_daemon_msg "Starting $DESC" "$NAME"
       start
       log_end_msg $?
       ;;

   stop)
       log_daemon_msg "Stopping $DESC" "$NAME"
       stop
       log_end_msg $?
       ;;

   restart|force-reload)
       log_daemon_msg "Restarting $DESC" "$NAME"
       stop
       sleep 1
       start
       log_end_msg $?
       ;;
   status)
       status_of_proc -p $PID "$DAEMON" docker
       ;;

   *)
       echo "Usage: $NAME {start|stop|restart|status}" >&2
       exit 1
       ;;
esac

exit 0

Save the file and make it executable
# chmod +x /etc/init.d/docker

Add docker to startup
# update-rc.d -f docker defaults

Start docker
# /etc/init.d/docker start

Check the docker process
# ps ax | grep docker
Output
xxxx ?        Rl     0:01 /usr/bin/docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d
# netstat -tnlp | grep docker
Output
tcp6       0      0 :::4243                 :::*                    LISTEN      xxxx/docker     
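
Since the daemon is also listening on TCP port 4243, you can optionally query the remote API as a quick check (a sketch; /version was an endpoint of the remote API in this era, adjust if your build differs):
# curl http://127.0.0.1:4243/version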

Try starting a test container
# docker run -i -t ubuntu /bin/bash
This will download the ubuntu image and start a new container.

Listing all containers
# docker ps -a

Stopping a container
# docker stop <container-id>

Removing a container
# docker rm <container-id>

Docker for implementing multiple applications

# docker run -i -t ubuntu /bin/bash

After getting a console in the docker container, follow the steps below

# echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
# apt-get update
# apt-get install -y python-software-properties python g++ make
# add-apt-repository -y ppa:chris-lea/node.js
# apt-get update
# apt-get install -y nodejs=0.10.22-1chl1~precise
# node -v
# npm -v
# apt-get install redis-server
# redis-cli --version
# mkdir /opt/script
# vim /opt/script/start.sh
   #!/bin/sh
   /etc/init.d/redis-server start
   node /path/to/app/index.js

Once the above steps are done, open another terminal on the server where the container is running.
Now commit an image from the container while it is still running.

Get the container ID
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6972293657e2        ubuntu:latest       /bin/bash           5 minutes ago       Up 5 minutes                            yellow_squirrel

Use the container id to create a new image

# docker commit <container-id> <image-name>
For example:
# docker commit 6972293657e2 nodejs-redis

Now use the above image to create multiple docker containers

Start the container in detached (daemon) mode.
# docker run -d nodejs-redis sh /opt/script/start.sh
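
To confirm the detached container came up and the start script is running, check its logs using the container ID printed by docker run:
# docker logs <container-id>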

Inspect the current docker container
# docker inspect <container-id>

Mapping ports to the container.
# docker run -d -P nodejs-redis sh /opt/script/start.sh
Finding the ports mapped to the container.
# docker port <container-id> <portno>
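
-P picks the host ports automatically; to choose them yourself, map them explicitly with -p. A sketch, assuming (hypothetically) that the node app listens on port 3000 inside the container and you want it on host port 8080:
# docker run -d -p 8080:3000 nodejs-redis sh /opt/script/start.sh
# docker port <container-id> 3000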

Tuesday, 10 September 2013

Rabbitmq Clustering with SSL

Installing Rabbitmq with clustering and SSL

------------------------------------------------------
>>>>>>>>>>>>>>> Rabbitmq Installation <<<<<<<<<<<<<<<<
------------------------------------------------------

# Installing Rabbitmq

yum install rabbitmq-server
or
apt-get install rabbitmq-server

# The above command will install rabbitmq-server on your machine.

# The below commands will be available after installing rabbitmq-server

rabbitmq-server
# and
rabbitmqctl

# The rabbitmq-server is ready now; you can use the various rabbitmqctl options to get details of users, ACLs, queues, bindings and the cluster status

rabbitmqctl list_users
rabbitmqctl list_queues
rabbitmqctl list_bindings
rabbitmqctl list_vhosts

# Search for sample code for sending a message to and receiving a message from the rabbitmq-server

Here is one for you :)

http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/send.py

http://github.com/rabbitmq/rabbitmq-tutorials/blob/master/python/receive.py

------------------------------------------------------
>>>>>>>>>>>>>>> Rabbitmq Clustering <<<<<<<<<<<<<<<<<<
------------------------------------------------------

# Follow the above installation process on the second node (rabbit2-server)

# Both nodes must share the same Erlang cookie, so copy the .erlang.cookie from rabbit1-server

# On rabbit2-server, delete the existing .erlang.cookie

rm -vf ~rabbitmq/.erlang.cookie

# From rabbit1-server, copy the cookie over

rsync -avzP ~rabbitmq/.erlang.cookie root@rabbit2-server:~rabbitmq/
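
# The cookie must be owned by the rabbitmq user and readable only by it, so fix
# ownership and permissions on rabbit2-server after copying, then restart rabbitmq
# (a small sketch, assuming the cookie landed in the rabbitmq home directory)

chown rabbitmq:rabbitmq ~rabbitmq/.erlang.cookie
chmod 400 ~rabbitmq/.erlang.cookie
/etc/init.d/rabbitmq-server restart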

# Follow the steps below on rabbit2-server to join it to the cluster.

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@rabbit2-server rabbit@rabbit1-server

# If the above command gives an error, check whether rabbit2-server is able to resolve the hostname rabbit1-server.
# If not, add the entries to /etc/hosts.

# If everything goes "ok" continue with below steps.

rabbitmqctl start_app

# Check the cluster status from either node

rabbitmqctl cluster_status

Cluster status of node 'rabbit@rabbit1-server' ...
[{nodes,[{disc,['rabbit@rabbit2-server',
                'rabbit@rabbit1-server']}]},
 {running_nodes,['rabbit@rabbit2-server',
                 'rabbit@rabbit1-server']}]
...done.

# If you see output like the above, you have successfully set up rabbitmq clustering

# You can test it by sending a message on any one server and listing the queues from the other server

rabbitmqctl list_queues

# You will find the queues visible from both nodes
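
# For example (a sketch, assuming python-pika is installed and send.py was taken
# from the tutorial linked above): publish from rabbit1-server, then list the
# queues on rabbit2-server; the queue declared by the script should be visible there too

python send.py          # on rabbit1-server
rabbitmqctl list_queues # on rabbit2-server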

------------------------------------------------------
>>>>>>>>>>>>>>>>>>> Rabbitmq ACL <<<<<<<<<<<<<<<<<<<<<
------------------------------------------------------

# ACLs can be used to restrict a user's configure, write and read permissions on a vhost

# Vhosts are similar to virtual hosts in Apache; we can create our own vhosts and use them.
# The default vhost is "/"

# To view all the created vhost
rabbitmqctl list_vhosts

# Create a new user
rabbitmqctl add_user username password

# Set permissions for the user on a vhost
rabbitmqctl set_permissions -p vhostpath username ".*" ".*" ".*"

# You can set permissions as per your requirement
rabbitmqctl set_permissions [-p <vhostpath>] <user> <conf> <write> <read>

# Check the given permissions using the below command
rabbitmqctl list_user_permissions username

# Now we can use the created user to connect to rabbitmq-server with the password specified earlier
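
# A minimal end-to-end sketch, using a hypothetical vhost /myapp and user appuser

rabbitmqctl add_vhost /myapp
rabbitmqctl add_user appuser secretpassword
rabbitmqctl set_permissions -p /myapp appuser ".*" ".*" ".*"
rabbitmqctl list_user_permissions appuser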

------------------------------------------------------
>>>>>>>>>>>>>>>>>>> Rabbitmq SSL <<<<<<<<<<<<<<<<<<<<<
------------------------------------------------------

# Copy the certificates to the client node.
# Create the key-cert.pem file used by stunnel

cat certificate.key ca-cert.pem > key-cert.pem

# Use the above key-cert.pem in stunnel configuration
# Install stunnel on all the clients (the stunnel4 paths below follow the Debian/Ubuntu packaging)

yum install stunnel
or
apt-get install stunnel4

# Edit /etc/default/stunnel4 and change

ENABLED=0
to
ENABLED=1

# Leave everything else as it is.

# Copy a sample stunnel configuration in /etc/stunnel directory

cp /usr/share/doc/stunnel4/examples/stunnel.conf-sample /etc/stunnel/stunnel.conf

# Edit the /etc/stunnel/stunnel.conf

Comment out the lines below using ';'

;[pop3s]
;accept  = 995
;connect = 110

;[imaps]
;accept  = 993
;connect = 143

;[ssmtp]
;accept  = 465
;connect = 25

Uncomment the lines below by removing ';'

debug = 7
output = /var/log/stunnel4/stunnel.log

Then edit and add the lines below

cert = /path/to/key-cert.pem

[amqp]

client = yes
accept = 5673
connect = ipaddress:5671

# Restart stunnel

/etc/init.d/stunnel4 restart

# Check that stunnel is listening on the new local port 5673

netstat -tnlp | grep 5673

# On the Rabbitmq server
# Copy the certificates onto the server.

# Edit /etc/rabbitmq/rabbitmq.config

# Add the lines below

[
  {rabbit, [
     {ssl_listeners, [5671]},
     {ssl_options, [{cacertfile,"/path/to/cacert.crt"},
                    {certfile,"/path/to/certfile.pem"},
                    {keyfile,"/path/to/keyfile.key"},
                    {verify,verify_peer},
                    {fail_if_no_peer_cert,false}]}
   ]}
].


# Restart Rabbitmq server

/etc/init.d/rabbitmq-server restart

# Verify the SSL listener has started

netstat -tnlp | grep 5671
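
# You can also make a test TLS handshake against the listener with openssl from
# the client node, to confirm the certificates are being served (ipaddress as in
# the stunnel config above)

openssl s_client -connect ipaddress:5671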

Monday, 19 August 2013

Simple Keepalived Configuration (Failover / High Availability)

The basic setup of failover or high availability requires two servers.

Consider the below setup for example

Master1 - 192.168.0.101
Slave1 - 192.168.0.103

Floating (virtual) IP address: 192.168.0.105

Install keepalived on both the servers

# apt-get install keepalived
or
# yum install keepalived

Edit keepalived.conf on both the servers

# vim /etc/keepalived/keepalived.conf

######################################

Add the text below on the Master1 server

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 10
    priority 200
    virtual_ipaddress {
        192.168.0.105
    }
}

######################################

Add the text below on the Slave1 server

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 10
    priority 100
    virtual_ipaddress {
        192.168.0.105
    }
}

######################################

Start keepalived daemon on both the servers

# /etc/init.d/keepalived start
or
# service keepalived start
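
Optionally, watch the VRRP advertisements on the wire as a quick sanity check (this assumes interface eth0 as in the configs above; IP protocol 112 is VRRP)

# tcpdump -ni eth0 ip proto 112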

######################################

Check /var/log/syslog or /var/log/messages on both servers; you should find lines like the ones below

On Master Server

Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE

On Slave Server

Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE

######################################

Time to test

Shut down the Master server and check /var/log/syslog on the Slave server; you will see the lines below, which indicate that the virtual (floating) IP has been taken over by the Slave server.

Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE

Check the ip using ifconfig or ip addr command

# ifconfig
or
# ip addr
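
To check specifically for the floating IP, you can filter the output (assuming eth0 as above):

# ip addr show eth0 | grep 192.168.0.105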

######################################

Now start the Master server again and check that the IP 192.168.0.105 has been assigned back to it.

Check the logs on both master and slave; you should find the lines below

On Master

Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE

On Slave

Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE

######################################

We have successfully implemented a failover setup with Keepalived

Thursday, 13 June 2013

Fail2ban to avoid DoS attacks on a webserver

Install Fail2ban

apt-get install fail2ban
or
yum install fail2ban

Edit /etc/fail2ban/jail.conf

[http-get-dos]
enabled = true
port = http
filter = http-get-dos
logpath = /var/log/apache2/access.log
# ban a client that makes more than 10 matching requests (maxretry)
# within 5 seconds (findtime)
maxretry = 10
findtime = 5
action = iptables[name=HTTP, port=http, protocol=tcp]
# ban duration in seconds
bantime = 10


Edit /etc/fail2ban/filter.d/http-get-dos.conf

[Definition]
failregex = ^<HOST>.*"GET
ignoreregex =
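
Before restarting, you can test the regex against your actual access log with fail2ban-regex, which reports how many log lines match:

fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/http-get-dos.conf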

Restart Fail2ban

/etc/init.d/fail2ban restart

Check iptables; you will see a new chain

iptables -nvL

OUTPUT:-

Chain fail2ban-HTTP (1 references)
 pkts bytes target     prot opt in     out     source               destination        
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
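
You can also ask fail2ban itself for the state of the jail, including any currently banned IPs:

fail2ban-client status http-get-dos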

Thursday, 17 January 2013

How To Add Rows with same values in LINUX

Consider that the output of your command is as below

a 10
b 20
c 23
a 85
c 73
b 111
d 69
d 88
b 94
c 33
a 61

I want to add up all the a, b, c and d values

# cat list | awk '{ a[$1]+= $2 }END { for (i in a) print i,a[i]}'

OUTPUT

a 156
b 225
c 129
d 157
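
If you also want a count of rows per key, or sorted output, a small variation of the same awk idea works (a sketch against the same file, list):

# cat list | awk '{ sum[$1] += $2; cnt[$1]++ } END { for (i in sum) print i, sum[i], cnt[i] }' | sort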