
Tech basics series: Containers, Microservices & Kubernetes - Part 3

In the third part of our tech basics series on Containers, Microservices & Kubernetes, we will talk about Pods, ReplicaSets, and Replication Controllers. If you are new here, do check out Part 1 and Part 2 of this blog series first!


What are Pods?

Pods are the smallest objects you can create in Kubernetes, and they encapsulate containers. Imagine a single-node K8s cluster running a single pod. When the application needs to scale, you create additional pods of the same application. The pods can also be distributed across multiple nodes in a cluster. Usually the relationship between pods and containers is 1:1, but it is not mandatory. There is also the case of a sidecar container, which helps the main application and is included in the same pod. Every time a new pod of the application is created, the main container and the sidecar container are created together. They share the same network and storage and can reach each other on localhost.
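As an illustration of that idea, here is a minimal sketch of a pod with a main container and a sidecar sharing a volume. The container names, images, and the shared volume below are placeholders chosen for this example, not taken from the sample files linked later in this post.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  volumes:
    - name: shared-logs          # emptyDir volume shared by both containers in the pod
      emptyDir: {}
  containers:
    - name: myapp                # main application container
      image: nginx               # placeholder image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-sidecar          # helper container in the same pod; could also reach the main container on localhost
      image: busybox             # placeholder image
      command: ["sh", "-c", "touch /var/log/nginx/access.log; tail -f /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx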

Basic commands to manage pods

1. Command to run a container in K8s:

kubectl run <pod-name> --image=<image-name>

The image can be fetched from any container registry, e.g. Docker Hub. The image name has to match the name in the registry.

2. To view status of running pods:

kubectl get pods

3. To get additional details of the pods:

kubectl describe pod <name of the pod>

4. Command to view details of the node where the pod is deployed, its IP address, etc.:

kubectl get pods -o wide

5. Sample yaml file for a pod: https://github.com/cloudyknots/Kubernetessample/blob/main/poddefinition.yaml

You can use the command below to create the pod:

kubectl create -f pod-definition.yml
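For reference, a minimal pod definition looks roughly like the sketch below. The container name and image here are placeholders; the sample file linked above may differ in names and labels.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp                # label used later by the ReplicaSet selector
spec:
  containers:
    - name: nginx-container   # placeholder container name
      image: nginx            # placeholder image from Docker Hub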


ReplicaSets & Replicas

If there is only one instance of the application running in a pod, the application becomes unavailable if the pod fails. To prevent users from losing access, you can run more than one instance of the application in multiple pods. This is enabled by the replication controller. Even in the case of a single pod, the replication controller can bring the pod back up if it fails. The replication controller spans multiple nodes in a cluster, so when demand for the application increases, it can deploy pods across multiple nodes to scale the application. The ReplicaSet is the newer version of the replication controller and is used in current versions of Kubernetes.

The pod to be replicated is defined in the spec section of the ReplicaSet yaml file by adding a section called template. Please see the sample yaml file here to create a ReplicaSet; you can see that we have additionally specified the number of replicas and a selector in the yaml file.
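As a rough sketch of that structure (names and image are placeholders; the linked sample may differ), a ReplicaSet definition wraps a pod template and adds replicas and a selector:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: myapp               # pods carrying this label are managed by the ReplicaSet
  template:                    # pod template: the metadata and spec of a pod definition
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx         # placeholder image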

kubectl create -f replicaset.yaml

kubectl get replicaset

The ReplicaSet will monitor the pods and recreate them if they fail. For the ReplicaSet to do that, we need to add labels to the pods when creating them. We can then use the "matchLabels" filter to identify which pods should be monitored by the ReplicaSet. Even if you have created additional pods that are not part of the ReplicaSet's yaml definition, it can monitor them based on their labels and ensure that the specified number of replicas (3, in this example) is always running.

If you want to scale your application, you can update the number of replicas in the definition file and run the following command

kubectl replace -f replicaset.yaml

Or you can increase the replicas on the fly using the kubectl scale command:

kubectl scale --replicas=6 -f replicaset.yaml
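Note that scaling on the fly does not update the replicas value stored in the definition file. You can also scale the ReplicaSet by name; assuming, for illustration, that it is called myapp-replicaset:

kubectl scale replicaset myapp-replicaset --replicas=6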




