
Tech basics series: Containers, Microservices & Kubernetes - Part 2

Part 2: Container orchestration using Kubernetes

In this second part of our tech basics series on Containers, Microservices & Kubernetes, we will talk about container orchestration and Kubernetes. If you are new here, do check out Part 1 of this blog series first!

Why do you need container orchestration?


Running a single application in a container might look simple and straightforward. However, in the real world, the application will have to talk to multiple other services, and it should scale based on capacity requirements. There should be a control plane capable of orchestrating these connectivity requirements as well as the scaling, scheduling and lifecycle management of containers. That is where container orchestration comes into the picture. Container orchestration solutions like Docker Swarm, Mesos and Kubernetes offer a centralized tool for managing, scheduling and scaling containers. Of the many container orchestration platforms, Kubernetes is the most popular and widely used one. All leading cloud platforms offer Kubernetes as a managed service, which helps speed up the onboarding process on the platform.


Understanding Kubernetes architecture

Kubernetes, also known as K8s, consists of worker machines called nodes and a control plane. A node can be a physical or virtual machine with Kubernetes installed. A Kubernetes cluster should have multiple nodes to ensure high availability and load sharing. The control plane usually consists of one or more master nodes that have Kubernetes installed and take care of container orchestration on the member nodes. The different components of the cluster are as shown below:

K8s components

kube-apiserver: The front end of the K8s control plane, which exposes the Kubernetes API. All users and management services connect to this component, and it runs on the master nodes.
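For example, every object you create in the cluster is submitted to the kube-apiserver as a declarative definition, typically with kubectl apply. A minimal sketch of a Pod definition is shown below (the names and the nginx image are just placeholders for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # name of the Pod object stored by the API server
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25   # placeholder image, replace with your own
      ports:
        - containerPort: 80

Running kubectl apply -f pod.yaml against this file results in an API call to the kube-apiserver, which validates the request and records the object in the cluster.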

etcd: The distributed, highly available key-value store of K8s, which holds information about the cluster, including its masters and nodes. The data is replicated across the etcd members in a distributed manner, which ensures consistency and avoids conflicts, especially when there are multiple masters.

kubelet: An agent that ensures that containers run as expected on each server. It runs on all worker nodes in the cluster.

Container runtime: The software required for running containers on the nodes. It can be Docker or any other container runtime like rkt or CRI-O.

controller: This component ensures that the cluster adheres to the desired state and manages the orchestration process. For example, when nodes or endpoints go down, the controller ensures the affected containers are redeployed on a different node.
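As a rough illustration of "desired state", the Deployment sketch below declares that three replicas of a container should always be running; if a pod or node fails, the controller brings the pod count back to three (the names, labels and image are assumed placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                # desired state: keep 3 pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image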

kube-scheduler: This component on the master nodes decides on which node in the cluster a container should be deployed. Nodes are selected based on pre-defined policies, constraints, affinity or anti-affinity rules, etc.
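One simple way to influence the scheduler's placement decision is through node labels and affinity rules. The sketch below asks the scheduler to place a pod only on nodes labelled disktype=ssd (the label, pod name and image are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard constraint for the scheduler
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
  containers:
    - name: app
      image: nginx:1.25    # placeholder image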

kube-proxy: This component handles the networking of the cluster. It is a network proxy that maintains the rules governing the networking patterns of the cluster. Any ingress or egress traffic of the pods is managed by kube-proxy, and the service runs on the worker nodes.
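Pod traffic is usually reached through a Service; kube-proxy programs the forwarding rules on each worker node so that traffic sent to the Service is routed to the backing pods. A minimal sketch, reusing the placeholder app=web label from the Deployment example above:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # traffic is forwarded to pods carrying this label
  ports:
    - port: 80        # port exposed by the Service
      targetPort: 80  # port on the pods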






