
Tech Basics Series: Containers, Microservices & Kubernetes - Part 1

I am starting a new set of blog series to help those who are new to cloud technology - junior engineers, tech aspirants, and students. I will try to explain the basics in simple terms and help you build a good foundation in the latest and greatest cloud technologies. If you are a seasoned cloud expert, this series will act as a good refresher course!

We will kick off with a series on containers, microservices & Kubernetes. After covering the basics, we will move on to more advanced topics on how you can build and deploy containerized applications on various cloud platforms.


Part 1 - Containers



What are containers?

Containers bundle the application code, its dependencies, and the configuration required to run the application into a single unit. There are different container technologies available - Docker, containerd, rkt, and LXD - with Docker being the most popular. Containers are a form of operating system virtualization, where multiple applications run on the same host but are isolated from each other. Each application running in a container has access to its own network resources, mount points, file system, etc. A Docker container image consists of a base image, customized by adding the application code, its dependent libraries, and configuration files.
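As a sketch of how those layers come together, here is a minimal hypothetical Dockerfile for a small Python application (the file names and base image are illustrative, not from any specific project):

```dockerfile
# Base image layer
FROM python:3.12-slim

WORKDIR /app

# Add the dependent libraries
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and its configuration
COPY app.py config.yaml ./

# Command to run when the container starts
CMD ["python", "app.py"]
```

Each instruction adds a layer on top of the base image, which is how the "base image plus customizations" model described above is realized.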

Containers come packaged with everything they need to run and can be spun up in a matter of seconds from a container image. Container images can be stored in a centralized repository called a container registry. There are several managed container registries - Docker Hub, Google Container Registry, Azure Container Registry, etc. These registries can either be public - accessible to all - or private, restricted to people in an organization or group. Containers are hugely popular as they can run on any platform that supports container technology, i.e., Linux, Windows, or macOS. Since they have a very small footprint, are ephemeral, and use less CPU and memory, you can create multiple replicas of a container as per your scaling requirements.
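The build-and-publish workflow typically looks like the following command sketch (the image and repository names are placeholders, and these commands assume a running Docker daemon and a Docker Hub account):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag it for the target registry (here, a hypothetical Docker Hub org "myorg")
docker tag myapp:1.0 myorg/myapp:1.0

# Authenticate and push; anyone with access to the repository can now pull it
docker login
docker push myorg/myapp:1.0
```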

Why do you need containers?

When there are multiple applications running on the same operating system, there is always a need to ensure compatibility between all the libraries and the underlying operating system. The same process has to be repeated whenever any of the related components is upgraded or changed. Different environments - dev, test, and production - could also be running different versions of the software. Managing all of this at scale can be a challenge, delaying application development and deployment timelines.

With containers, each of these applications can run in its own isolated environment. By creating a Docker configuration specific to each environment, it becomes easy to build and deploy environments at scale and manage their dependencies independently. Once packaged into an image, the application will continue to work the same way irrespective of where it is deployed.
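One common pattern, sketched below with illustrative names, is to run the exact same image in every environment and inject only the environment-specific configuration at run time (again assuming a Docker daemon and a hypothetical `myorg/myapp` image):

```shell
# Same image everywhere; only the injected configuration differs
docker run -d -e APP_ENV=dev  -p 8080:80 myorg/myapp:1.0
docker run -d -e APP_ENV=test -p 8081:80 myorg/myapp:1.0
docker run -d -e APP_ENV=prod -p 8082:80 myorg/myapp:1.0
```

Because the image is immutable, "it works in dev but not in prod" problems are reduced to differences in the injected configuration.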

How do containers work?

Docker originally used LXC as its backend and now ships its own runtime (containerd/runc), abstracting away the low-level details and making it easy to deploy and manage containers. An operating system consists of an OS kernel and software sitting on top of it. Docker offers OS-level virtualization, where the OS kernel is shared between different applications. Each container runs in its own independent namespace with access to its own file system, processes, libraries, and other files. If the host OS has a Linux kernel, Docker can run different flavors of Linux, e.g., Ubuntu, SUSE, CentOS, etc. However, it cannot run containers on the same host that need a different kernel, e.g., Windows.
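You can see the shared kernel for yourself with a quick experiment on a Linux host running Docker (a sketch; the exact kernel version string will vary):

```shell
# Kernel version on the host
uname -r

# Containers based on different Linux distributions report
# the same kernel version - only the userspace differs
docker run --rm ubuntu uname -r
docker run --rm alpine uname -r
```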

How are containers different from virtual machines?

Virtual machines use hypervisor software to virtualize the underlying hardware. Each virtual machine has its own set of virtualized hardware - CPU, memory, storage, and NICs. You can run different operating systems on the same virtualization host, e.g., Windows and Linux, as there is no OS sharing between two VMs on a virtualization platform. Containers, on the other hand, do not provide isolation as strong as virtual machines; with containers, it is the processes, file system, and networking that are isolated. VMs are heavier, i.e., they need a full operating system kernel, device drivers, and everything else required to run a machine, whereas containers need only the resources required to run their applications. Because of this, the start-up time of containers is much faster than that of virtual machines.
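A rough way to feel this difference, assuming a machine with Docker installed, is to time a container's full start-and-exit cycle (a sketch; absolute numbers depend on the host):

```shell
# Start a container, run a trivial command, and tear it down;
# this typically completes in well under a second, while a VM
# would spend that long just beginning to boot its kernel
time docker run --rm alpine true
```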

