Decoding Docker - Part 3: Dockerfiles

Hope you have gone through Part 1 and Part 2 of my blog series on Docker. In this post, I am exploring the creation of Docker images.

There are multiple ready-made Docker images available on Docker Hub, which you can simply pull and use. But what if you need a different combination of software versions than what is readily available? Simple: you create an image of your own with all the required software installed.

There are two options to do this. The easiest is to pull a base image, install everything you want, and commit the container as a new image. But what if you would like to make some changes down the line? You may have to redo the whole thing. That is where building the image from a Dockerfile helps: you simply write a Dockerfile that installs and configures the required software, and if you wish to make changes at a later point, you edit the Dockerfile and build a new image.
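
For reference, the first option looks roughly like this (a minimal sketch; the container name and image tag here are just placeholders):

#docker run -it --name mybase centos /bin/bash
 ... install and configure whatever you need inside the container, then exit ...
#docker commit mybase custom/manual-base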

In this example, I will explain the process of creating a Docker image with Apache installed and the relevant ports opened. Let's start by creating a Dockerfile. It is a simple text file with the name "Dockerfile".

#vi Dockerfile

We will be using the CentOS base image, so the first line of the Dockerfile specifies which image to use.

FROM centos

Now let's install all the required software using the RUN instruction.

RUN yum -y update
RUN yum -y install python-setuptools
RUN easy_install supervisor
RUN mkdir -p /var/log/supervisor
RUN yum -y install which
RUN yum -y install git

Now build the Dockerfile into an image.

docker build -t custom/base .

Notice the "." at the end: you should run the command from the directory where the "Dockerfile" exists. Now you have created your base image. Let's install Apache next. Edit the Dockerfile so that it now contains the following content.

FROM custom/base
RUN yum -y install httpd
ADD supervisord.conf /etc/supervisord.conf
EXPOSE 22 80 
CMD ["/usr/bin/supervisord"]

We have installed supervisord in the base image to manage the processes within the container, in this case the Apache service. Now, let's write a supervisord config file to start the service on container startup.

vi supervisord.conf

Add the following content

command=/bin/bash -c "exec /usr/sbin/apachectl -k start"
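
Keep in mind that supervisord expects this command line to sit inside a program section, with the daemon settings at the top of the file. A minimal sketch of the full file (the program name "httpd" and the nodaemon setting are my additions, not from the original post):

[supervisord]
nodaemon=true

[program:httpd]
command=/bin/bash -c "exec /usr/sbin/apachectl -k start"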

Run docker build to create the image.

docker build -t custom/httpd .

Now let's spin up a container from the image.

sudo docker run -p 80:80 -v /root/htdocs:/var/www/html -t -i custom/httpd

Note: You can create a folder named /root/htdocs on the host and use the -v switch to mount it at /var/www/html inside the container, so that the storage is persistent.
The -p switch maps port 80 of the container to port 80 of the host.
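
To quickly check that Apache is serving content from the mounted folder, something along these lines should work from the host (assuming the container is running):

#echo "hello from the container" > /root/htdocs/index.html
#curl http://localhost/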

Decoding Docker - Part 2: Docker Remote Registry

Continuing the blog series on my trysts with Docker, in this installment we will look into the details of how to set up a Docker remote registry. Hopefully you now have an idea of how to get Docker up and running; if not, go ahead and read the first part of my blog series here.

Now that we have the Docker engine up and running, and a few containers spun up in it, we might very well think about a centralized Docker image repository. Of course there is Docker Hub, and you could save your images there. But what if you want a bit more privacy and would like to keep all your hard work in house? That is where the Docker remote registry comes in handy.

A Docker remote registry can be set up on a local machine for centralized storage of Docker images. You can pull and push images just like you do with Docker Hub, which allows centralized collaboration between people working on Docker containers in your firm. For example, a developer working on a project can save the current state of his container as an image and push it to the remote registry; a fellow team mate can then download the image, spin up a container, and continue the work. This is just one of the use cases; the functionality is somewhat similar to an SVN repository. However, one major drawback I noticed was the lack of a search/list functionality.
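
To make the workflow concrete, pushing to and pulling from a local registry looks roughly like this (a sketch; the registry hostname and port are placeholders for your own setup):

#docker tag custom/httpd registry.example.local:5000/custom/httpd
#docker push registry.example.local:5000/custom/httpd
#docker pull registry.example.local:5000/custom/httpd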

Here is how you can set it up:

Server side configuration:

To start with, you will need a certificate for connecting to the remote registry. Let's create one using OpenSSL on the machine where you plan to set up your Docker remote registry:
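
For example, a self-signed certificate can be generated along these lines (a sketch; the key/certificate file names and the validity period are placeholders):

#openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt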

Decoding Docker - Part 1

Having worked with multiple virtualization platforms, I recently got an interesting opportunity to work with their younger sibling, containerization. The platform of choice was obviously Docker. Getting Docker up and running on an OS of your preference is a simple task; you can straight away get it done using the instructions here. The interesting part is getting to play around with it.

Getting it up and running:

Docker can be started as a service or as a daemon listening on a TCP port. Starting it as a service is pretty straightforward:

#service docker start

However, the interesting bit is when you want to run it as a daemon listening on a specific port. This is useful in scenarios where you want to manage the Docker engine remotely, say using a Windows Docker client or one of the open source GUIs available for Docker, like Shipyard.

The command to run Docker as a daemon listening on a port is:

# /usr/bin/docker -d -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock &

Here Docker will listen on all IPs of the machine at port 4243. If you want to connect to this Docker engine from a remote Docker client, the following command can be used:

#docker -H tcp://<docker engine host>:4243 <commands>

For example: #docker -H tcp://<docker engine host>:4243 ps

One downside of this method is that there is no inherent authentication mechanism for remote access.

Spin up your containers:

Let's start by pulling an image from Docker Hub, which is a public repository of Docker images:
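
For example, pulling the CentOS base image used earlier in this series:

#docker pull centos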

Cloud security - CSA domains

This is the second post in the blog series on Cloud security. You can see the first blog post here

The Cloud Security Alliance (CSA) provides actionable best practices for businesses to transition to cloud services while mitigating the risks involved in doing so. As per the latest version of the CSA guidance, the critical areas of focus in cloud computing are divided into fourteen domains.

Cloud Security - Risk factors

Cloud security is a major consideration for enterprise-wide cloud adoption, especially of public cloud. This is part 1 of a series of blog posts where I am planning to pen down the different dimensions of cloud security, starting with the risk factors of cloud adoption.

The various attributes of security risks  involved in the process can be summed up as follows:

ENISA* recommends the following risk areas to be taken into account while embarking on a cloud adoption journey:

OpenStack Icehouse installation error: nova-api service getting stopped

While trying to install OpenStack Icehouse, I faced an issue with the nova-api service: it was not getting started. The following error was coming up in the nova-api log:

Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Exit code: 1

 nova Stdout: ''
2014-10-17 07:21:08.058 27270 TRACE nova Stderr: 'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 6, in <module>\n    from oslo.rootwrap.cmd import main\nImportError: No module named rootwrap.cmd\n'

The problem was with the oslo.rootwrap module; it was broken.

The solution is to upgrade the module using pip:

 #pip install oslo.rootwrap --upgrade
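
After the upgrade, restart the service and confirm that it stays up (the service name is openstack-nova-api on RPM-based installs and typically nova-api on Ubuntu, so adjust accordingly):

 #service openstack-nova-api restart
 #service openstack-nova-api status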

OpenStack: Restrict instance deletion

In OpenStack, by default, users who are members of a tenant can delete any instance in that tenant, even those spun up by other users. If you want to restrict that, you need to tweak the nova policy file, i.e. /etc/nova/policy.json.

Add the following lines in the file:

    "admin_or_user":"is_admin:True or user_id:%(user_id)s",

Make the same changes in the /etc/openstack-dashboard/nova_policy.json file also

Now restart the openstack-nova-api service
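
For example (assuming an RPM-based install; on Ubuntu the service is typically named nova-api):

#service openstack-nova-api restart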

Now users will be able to delete only those instances spun up by them. Admin users will be able to delete all instances.

OpenStack: Assign a floating IP using a Heat template

Creating YAML templates that assign floating IPs to the instances being spawned can be a bit tricky. Let us look at a scenario where we need to spin up a VM, assign a floating IP from a pool, and make a reference to this floating IP in the userdata as well. We will make use of the network IDs of the internal and external networks, as well as the subnet ID of the internal network.

The logical workflow is as follows:

  • Create a port resource using the internal network and internal subnet IDs
  • Create a floating IP resource, referring to the external network ID
  • Associate the floating IP with the port
  • In the server resource being created, associate the port resource

Now we will see how this can be implemented using both HOT and AWS template formats; a rough HOT version is sketched below.
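
Here is a rough HOT-format sketch of those resources (the parameter names, resource names, image, and flavor values are placeholders, not from the original post):

heat_template_version: 2013-05-23

parameters:
  internal_net_id:
    type: string
  internal_subnet_id:
    type: string
  external_net_id:
    type: string

resources:
  server_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: internal_net_id }
      fixed_ips:
        - subnet_id: { get_param: internal_subnet_id }

  server_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: { get_param: external_net_id }

  floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: server_floating_ip }
      port_id: { get_resource: server_port }

  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      networks:
        - port: { get_resource: server_port }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            echo "My floating IP is $FLOATING_IP"
          params:
            $FLOATING_IP: { get_attr: [ server_floating_ip, floating_ip_address ] }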

OpenStack monitoring: Zabbix Ceilometer proxy installation

Recently a Ceilometer proxy for Zabbix was released by OneSource. This proxy will pull all the instance information from OpenStack and populate it in Zabbix

The source code can be downloaded from here:

The basic prerequisites for the server where the proxy runs are Python and the Pika library. There should also be network connectivity from the proxy machine to your OpenStack installation.
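
The Pika dependency can be installed with pip (assuming pip is already available on the proxy machine):

#pip install pika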

Agentless OpenStack monitoring using Zabbix

Zabbix can be a tough cookie to crack! And if you are planning to monitor OpenStack using Zabbix, there is a lot of additional work to be done, more so if you want to go the agentless way, i.e. using SNMP.

So, here we go. I am using Ubuntu 12.04, both for my Zabbix server and for the OpenStack nodes.

  • First, you need to install the following packages using apt-get on the machine being monitored, i.e. the OpenStack node (a typical minimum is sketched below)
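
For SNMP-based monitoring, the SNMP daemon and client tools are the usual minimum; this particular package list is my assumption, so adjust it to your setup:

#apt-get install snmpd snmp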
