Saturday, February 18, 2023

Tech basics series: Containers, Microservices & Kubernetes - Part 2

Part 2: Container orchestration using Kubernetes

In the second part of our tech basics series on Containers, Microservices & Kubernetes, we will talk about container orchestration and Kubernetes. If you are new here, do check out Part 1 of this blog series first!

Why do you need container orchestration?

Running a single application in a container might look simple and straightforward. However, in the real world there will be multiple other services that the application needs to talk to, and the application should scale based on capacity requirements. There should be a control plane that is capable of orchestrating these connectivity requirements, along with the scaling, scheduling and lifecycle management of containers. That is where container orchestration comes into the picture. Container orchestration solutions like Docker Swarm, Mesos and Kubernetes offer a centralized tool for managing, scheduling and scaling containers. Of the many container orchestration platforms, Kubernetes is the most popular and widely used one. All leading cloud platforms offer Kubernetes as a managed service, which helps speed up the onboarding process on the platform.

Understanding Kubernetes architecture

Kubernetes, also known as K8s, consists of worker machines called nodes and a control plane. A node can be a physical or virtual machine with Kubernetes installed. A Kubernetes cluster should have multiple nodes to ensure high availability and load sharing. The control plane usually consists of one or more master nodes that run the Kubernetes control plane components and take care of container orchestration on the member nodes. The different components of the cluster are described below:

K8s components

kube-apiserver: The front end of the K8s control plane, which exposes the Kubernetes API. All users and management services connect to this component, and it runs on the master nodes.

etcd: The key-value store of K8s, which stores information about the cluster, including its masters and nodes. It is distributed and highly available; the data is replicated across the etcd members in a way that ensures there are no conflicts, especially when there are multiple masters.

kubelet: An agent that ensures containers run as expected on its node. It runs on all worker nodes in the cluster.

Container Runtime: The runtime required for deploying containers. It can be Docker or any other container runtime like rkt or CRI-O.

controller: This component ensures that the cluster adheres to the desired state and manages the orchestration process. For example, when nodes or endpoints go down, the controller redeploys the affected containers on a different node.

kube-scheduler: This component on the master nodes decides on which node in the cluster a container should be deployed. Nodes are selected based on predefined policies, constraints, affinity or anti-affinity rules, etc.

kube-proxy: This component handles the networking of the cluster. It is a network proxy that maintains the rules governing the networking patterns of the cluster. Ingress and egress traffic of the pods is managed by kube-proxy, and the service runs on the worker nodes.
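To see how these components fit together, here is a minimal (hypothetical) Deployment manifest. The kube-apiserver accepts it, the kube-scheduler picks nodes for the replicas, the kubelet on each chosen node starts the containers, and the controller keeps the replica count at three if a pod or node fails. The names used (web, the nginx image) are placeholders for illustration only.

```yaml
# A minimal Deployment: the desired state the control plane maintains
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # hypothetical name, for illustration
spec:
  replicas: 3            # the controller keeps 3 pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image works here
        ports:
        - containerPort: 80
```

You would submit this with `kubectl apply -f deployment.yaml`; the scheduler and kubelets then converge the cluster to the declared state.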


Thursday, February 16, 2023

Tech basics series: Containers, Microservices & Kubernetes - Part 1

I am starting a new blog series to help those who are new to cloud technology - junior engineers, tech aspirants, students, and so on. I will try to explain the basics in simple terms to help you develop a good foundation in the latest and greatest cloud technologies. If you are a seasoned cloud expert, this series will act as a good refresher course!

We will kick off with a series on containers, microservices & Kubernetes. After covering the basics, we will move on to more advanced topics on how you can build and deploy containerized applications on various cloud platforms.

Part 1 - Containers

What are containers?

Containers bundle the application code, its dependencies and the configurations required to run the application into a single unit. There are different container technologies available - Docker, containerd, rkt and LXD - of which Docker is the most popular. Containers are a form of operating system virtualization, where multiple applications run on the same host but isolated from each other. Each application running in a container has access to its own network resources, mount points, file system, etc. A Docker container image consists of a base image, customized by adding the application code, its dependent libraries and configuration files.
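As a sketch, a Dockerfile for a hypothetical Python web app is built in exactly this layered way: start from a base image, add the dependent libraries, then the application code (the file names here are placeholders, not from any real project):

```dockerfile
# Base image: a minimal OS layer plus the Python runtime
FROM python:3.12-slim
WORKDIR /app
# Dependent libraries
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code and configuration files
COPY . .
# Command the container runs on start
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` turns this into an image, and `docker run myapp` starts a container from it.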

Containers come packaged with everything they need to run and can be spun up from a container image in a matter of seconds. Container images can be stored in a centralized repository called a container registry. There are several managed container registries - Docker Hub, Google Container Registry, Azure Container Registry, etc. These registries can either be public - accessible to all - or private, restricting access to people in an organization or group. Containers are hugely popular as they can run on any platform that supports container technology, i.e., Linux, Windows or macOS. Since they are ephemeral, have a very small footprint and use less CPU and memory, you can create multiple replicas of a container as per your scaling requirements.

Why do you need containers?

When there are multiple applications running on the same operating system, there is always a requirement to ensure compatibility between all the libraries and the underlying operating system. The same process has to be followed whenever any of the related components is upgraded or changed. Different environments - dev, test and production - could also use different versions of the software. Managing all this at scale can be a challenge, delaying application development and deployment timelines.

With containers, each of these applications can run in its own environment (container). By creating a Docker configuration specific to each environment, it becomes easy to build and deploy different environments at scale and manage their dependencies independently. Once packaged into an image, the application will continue to work the same way irrespective of where it is deployed.

How do containers work?

Docker originally used LXC under the hood, abstracting it and making it easy to deploy and manage containers. An operating system consists of an OS kernel and software sitting on top of it. Docker offers OS virtualization, where the OS kernel is shared between different applications. Each container runs in its own independent namespace with access to its own file system, processes, libraries and other files. If the host OS has a Linux kernel, Docker can support different flavors of Linux, e.g., Ubuntu, SUSE, CentOS, etc. However, it cannot support containers on the same host that need a different kernel, e.g., Windows.
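On a Linux host you can see these namespaces directly in procfs. The small Python sketch below lists the namespace types the current process belongs to; run inside a container, the same code would show the container's own, separate namespace entries. (This is just a peek at the mechanism, not part of any Docker API.)

```python
import os

def list_namespaces(pid="self"):
    """Return the kernel namespace types for a process (Linux only)."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):   # procfs namespaces are Linux-specific
        return []
    return sorted(os.listdir(ns_dir))

if __name__ == "__main__":
    # On Linux this typically shows entries such as
    # 'cgroup', 'ipc', 'mnt', 'net', 'pid', 'user', 'uts'
    print(list_namespaces())
```

Each entry corresponds to one kind of isolation: `mnt` for file systems, `net` for networking, `pid` for processes, and so on.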

How are containers different from virtual machines?

Virtual machines use a hypervisor to virtualize the underlying hardware. Each virtual machine has its own set of virtualized hardware - CPU, memory, storage and NICs. You can run different operating systems on the same virtualization host, e.g., Windows and Linux, as there is no OS sharing between two VMs. Containers, on the other hand, do not provide isolation as strong as virtual machines: with containers, it is the processes, file system and networking that are isolated. VMs are heavier, i.e., they need a full operating system kernel, device drivers and everything else required to run the machine, while containers just need the resources required to run their applications. Because of this, the startup time of containers is faster compared to virtual machines.


Sunday, February 12, 2023

Google Professional Cloud Security Engineer Exam Prep notes - Part 3

Integrating an existing identity management solution with Google Cloud Platform

Given below are the steps to integrate a third-party identity management platform:

  • You should have a domain that is enabled for email. You cannot proceed with a domain already registered with Google or a non-existent domain
  • You should have permissions to verify domain ownership by creating a TXT or CNAME entry
  • Implement SAML SSO if the existing identity management system is to be used for authentication to the GCP console
  • Create the first Cloud Identity administration account and accounts for the admins who will manage users in GCP
  • Configure billing accounts - this can either be an online account or an offline invoiced account linked to a purchase order. To apply for an invoiced billing account you need to meet certain criteria, i.e., be a registered business for one year and have a minimum billing of $2500/month for 3 months
  • Create additional admin accounts like network admins or organization admins
  • Use directory sync to sync identities. Passwords are not synced by default unless you choose to do so. Syncing passwords is not a best practice; it is recommended to use SSO instead
  • You can also use third-party IdP connectors, such as those from Ping or Okta, or the Azure AD G Suite connector

Recommended usage of service accounts

Service accounts are used in access management scenarios where human users are not involved, e.g., when applications need to access a database, storage or similar resources in Google Cloud. When using service accounts to authenticate to other Google services and APIs, it is recommended to use a user-managed service account and not the default service accounts (default service accounts are created when you enable a Google Cloud service). After attaching a user-managed service account to resources, you should use Application Default Credentials (ADC) for authentication. ADC should be configured in the application environment and is automatically used by client libraries for authentication.

Envelope encryption

Envelope encryption is the process of implementing multiple layers of keys, where one key is encrypted with another. The encryption can be done either at the application layer or at the storage layer. The default encryption offered by Google is also envelope encryption, but the central keystore is Google's internal key management system. You can choose to use Cloud KMS instead of the internal key management system.

Data Encryption Key (DEK) - used for data encryption; should be generated locally and stored encrypted at rest

Key Encryption Key (KEK) - used for encrypting the DEK. Recommended to store it centrally (in KMS) and rotate it regularly

The process happens as follows:

DEK generated locally -> Data encrypted with DEK -> DEK wrapped with KEK -> Encrypted data and wrapped DEK stored in the storage system -> KEK stored in KMS
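The flow above can be sketched in a few lines of Python. Note that the cipher here is a toy XOR stream built from SHA-256, used only so the example runs with the standard library alone; in practice the DEK would be an AES key and the KEK would be held in Cloud KMS, never on the local machine.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (SHA-256 keystream). NOT for production use -
    it stands in for AES so the envelope flow is runnable as-is."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def envelope_encrypt(plaintext: bytes, kek: bytes):
    dek = secrets.token_bytes(32)            # 1. DEK generated locally
    ciphertext = xor_stream(dek, plaintext)  # 2. data encrypted with DEK
    wrapped_dek = xor_stream(kek, dek)       # 3. DEK wrapped with KEK
    # 4. ciphertext + wrapped DEK go to storage; 5. the KEK stays in KMS
    return ciphertext, wrapped_dek

def envelope_decrypt(ciphertext: bytes, wrapped_dek: bytes, kek: bytes) -> bytes:
    dek = xor_stream(kek, wrapped_dek)       # unwrap the DEK with the KEK
    return xor_stream(dek, ciphertext)       # XOR stream is its own inverse

if __name__ == "__main__":
    kek = secrets.token_bytes(32)            # in reality: held in Cloud KMS
    ct, wrapped = envelope_encrypt(b"sensitive data", kek)
    assert envelope_decrypt(ct, wrapped, kek) == b"sensitive data"
```

The point of the pattern: the bulky data is encrypted locally with a cheap, disposable DEK, while only the tiny wrapped DEK ever needs the centrally managed, regularly rotated KEK.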


Thursday, February 2, 2023

How cloud and AI are changing the future of our world - Tech talk

On January 28, 2023, I was given the opportunity to give a presentation at my alma mater, College of Engineering Poonjar, as a prelude to the IHRD tech fest Tarang23. My manager at Google, Mr. Sundar Pelapur, a veteran with two decades of experience in the IT industry, was my co-presenter. It was a great experience interacting with the next generation of engineering talent and sharing our perspective on the topic "Disrupting the status quo: How Cloud and AI are shaping the future of our world".

Sharing below a summary of the tech talk and some useful reference materials, which I think will be helpful for young IT professionals and students who want to make a career in cloud computing:

We started off with a brief history of cloud computing - how the world has moved on from mainframes in the 1970s to client/server computing models and to the constructs of public clouds today.

Let's define in simple terms what cloud computing is: it is nothing but compute power and storage available on demand.

Now we have different flavors of cloud computing available - private, public and hybrid cloud.

The billion-dollar question is: why cloud computing? Let's take a look at some of the benefits offered by cloud computing in terms of scalability, flexibility, cost efficiency and security.

Cloud is disrupting almost every industry as we know it...

And it is part of our day-to-day life more than we realize...

Now moving on to AI. Let's start with a glossary of the most commonly used AI terms (of course, it goes without saying that the list is not comprehensive and new AI capabilities are being built every day).

Let's check out some of the common use cases of AI.

AI is great, but getting an AI project off the ground is not always simple or straightforward

The synergy between cloud and AI becomes relevant here...

Hugely popular services like eBay and Spotify are leaning on cloud and AI to innovate and improve user experience...

For students and tech enthusiasts who want to get started in the cloud, I would recommend the following courses. 

You can also start building credentials by taking a cloud foundation certification from any of the hyperscalers. There are a lot of free training videos available on platforms like YouTube and Coursera that you can leverage for this purpose.

Do check out the session uploaded to YouTube: Part 1, Part 2, Part 3, Part 4

Happy learning!!

PS: If you are a student or a professional looking for guidance on a career in cloud, I am happy to connect for a free 1:1 mentoring session. Please feel free to drop your coordinates in a comment and I will be in touch!

About Me

Cloud solutions expert with 17+ years of experience in the IT industry, with expertise in multi-cloud technologies and a solid background in datacentre management & virtualization. Versatile technocrat with experience in cloud technical presales, advisory, innovation, evangelisation and project delivery. Currently working with Google as an infra modernization specialist, enabling customers on their digital transformation journey. I enjoy sharing my experiences in my blog, but the opinions expressed here are my own and do not represent those of any people, institutions or organizations that I may be associated with in a professional or personal capacity, unless explicitly stated.
