Thursday, February 2, 2023

How cloud and AI are changing the future of our world - Tech talk

On January 28, 2023, I was given the opportunity to give a presentation at my alma mater, College of Engineering Poonjar, as a prelude to the IHRD tech fest Tarang23. My manager at Google, Mr Sundar Pelapur, a veteran with two decades of experience in the IT industry, was my co-presenter. It was a great experience interacting with the next generation of engineering talent and sharing with them our perspective on the topic "Disrupting the status quo: How Cloud and AI are shaping the future of our world".

Sharing below a summary of the tech talk, along with some useful reference materials, which I think will be helpful for young IT professionals and students who want to build a career in cloud computing:


We started off with a brief history of cloud computing and how the world has moved on from mainframes in the 1970s to client/server computing models and to the constructs of public clouds today.

Let's define in simple terms what cloud computing is: it's nothing but compute power and storage available on demand.

Now we have different flavors of cloud computing available - Private, Public and Hybrid cloud.

The billion-dollar question is: why cloud computing? Let's take a look at some of the benefits offered by cloud computing in terms of scalability, flexibility, cost efficiency and security.

Cloud is disrupting almost every industry as we know it.

And it is part of our day-to-day life more than we realize.


Now moving on to AI. Let's start with a glossary of the most commonly used AI terms (of course, it goes without saying that the list is not comprehensive and new AI capabilities are being built every day).

Let's check out some of the common use cases of AI.



AI is great, but getting an AI project off the ground is not always simple or straightforward


This is where the synergy between Cloud and AI becomes relevant.




Hugely popular services like eBay and Spotify are leaning on cloud and AI to innovate and improve user experience.




For students and tech enthusiasts who want to get started in the cloud, I would recommend the following courses. 


You can also start building credentials by taking one of the cloud foundation certifications from any of the hyperscalers. There are a lot of free training videos available on platforms like YouTube and Coursera that you can leverage for this purpose.

Do check out the session uploaded to YouTube: Part 1, Part 2, Part 3, Part 4

Happy learning!!

PS: If you are a student or a professional looking for guidance on a career in Cloud, I am happy to connect for a free 1:1 mentoring session. Please feel free to drop your coordinates in a comment and I will be in touch!!

Monday, January 30, 2023

Google Professional Cloud Security Engineer Exam Prep notes - Part 2

This blog covers review notes for logging, DNS security & the Google Cloud Web Security Scanner service.


1. Aggregated sinks
Aggregated sinks are created at the Cloud organization or folder level with the "includeChildren" parameter set to True. Logs from all the child folders, projects and billing accounts under that organization or folder are then routed to the sink's destination.
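As a quick illustration, here is a minimal sketch of creating an organization-level aggregated sink with gcloud; the sink name, organization ID, destination bucket and filter below are placeholders I made up for this example:

# Route logs from the whole organization (and all child folders/projects) to a central bucket
gcloud logging sinks create org-central-sink \
    storage.googleapis.com/central-audit-log-bucket \
    --organization=123456789012 \
    --include-children \
    --log-filter='severity>=WARNING'

Remember that after the sink is created, its writer identity (a service account) has to be granted write access on the destination.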
2. DNS Security Extensions (DNSSEC)
DNS Security Extensions (DNSSEC) is a suite of DNS protocol extensions that enables authentication of DNS data. It adds an additional degree of security by allowing domain owners to digitally sign their DNS records, making it much harder for attackers to tamper with DNS responses. Customers can enable DNSSEC on Google Cloud's Cloud DNS service to safeguard their domains from unauthorized alterations.
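For an existing Cloud DNS managed zone, DNSSEC can be switched on with a single gcloud command (the zone name below is a placeholder):

# Enable DNSSEC signing for a Cloud DNS managed zone
gcloud dns managed-zones update my-public-zone --dnssec-state on

The generated DS record then has to be published at the domain registrar for the chain of trust to be complete.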

3. Google Cloud Web Security Scanner service
To find common vulnerabilities in web applications, such as those listed in the OWASP Top 10, customers can use the Google Cloud Web Security Scanner service. It can scan App Engine-based applications as well as those hosted on other systems like Compute Engine or Kubernetes Engine. It helps identify vulnerabilities like cross-site scripting (XSS), SQL injection and missing security headers. Though not a replacement for a security review or penetration testing, it can be used in conjunction with such measures to check for new vulnerabilities.

Google Professional Cloud Security Engineer Exam Prep notes - Part 1

Key points to review before the exam about firewalls, container best practices and DDoS protection


1. Firewall default rules:
The following implied rules are created with the lowest priority (65535) and apply unless overridden by a higher priority rule (a sample override rule is shown after the list):

  • All outbound traffic is allowed by default (refer to the following document for exceptions: https://cloud.google.com/vpc/docs/firewalls#blockedtraffic)
  • All ingress traffic is blocked
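For instance, here is a hedged sketch of a higher priority rule that overrides the implied deny-ingress rule to allow SSH from a specific range; the rule name, network and source range are placeholders:

# Allow SSH only from a trusted range, at a priority higher than the implied rules
gcloud compute firewall-rules create allow-ssh-from-office \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --priority=1000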
2. Disable the public IP and Private Google Access if you want to ensure that a Compute Engine instance has no access to the Internet or to Google APIs and services
3. Container best practices:

  • Package a single app or piece of software per container. An application with a unique parent process but multiple possible child processes qualifies for this
  • Run the application as PID 1 and register signal handlers
  • Enable process namespace sharing in Kubernetes
  • Use a specialized init system
  • Optimize for Docker build cache
  • Remove unnecessary tools
  • Build the smallest image possible using the smallest base image, creating images with common layers and reducing clutter
  • Enable image scanning for vulnerabilities (see the sample commands after this list)
  • Tag images using options like semantic versioning and the Git commit hash
  • Avoid public images if you have stringent security requirements
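To make the last two points concrete, here is a minimal sketch of tagging an image with both a semantic version and the Git commit hash, and enabling automatic vulnerability scanning; the project and image names are placeholders:

# Build once, then tag with a semantic version and with the current Git commit hash
docker build -t gcr.io/my-project/my-app:1.4.2 .
docker tag gcr.io/my-project/my-app:1.4.2 gcr.io/my-project/my-app:$(git rev-parse --short HEAD)
docker push gcr.io/my-project/my-app:1.4.2
docker push gcr.io/my-project/my-app:$(git rev-parse --short HEAD)

# Enable the Container Scanning API so images pushed to the registry are scanned automatically
gcloud services enable containerscanning.googleapis.com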

4. SYN flood protection
As part of its DDoS protection services, Google Cloud Armor provides protection against SYN floods. It lets you define custom security policies that specify how to handle incoming traffic depending on factors like source IP address or geography. You can also configure rate limiting rules with Cloud Armor to guard against floods of incoming traffic; a sample rate limiting rule is sketched below.
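Here is a minimal sketch of such a rate limiting rule added to an existing Cloud Armor security policy; the policy name, priority and thresholds are values I picked for illustration, so treat them as assumptions:

# Throttle any single client IP to 100 requests per minute, returning HTTP 429 beyond that
gcloud compute security-policies rules create 1000 \
    --security-policy=my-edge-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP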
5. Cloud Identity-Aware Proxy usage
Google Cloud Identity-Aware Proxy (IAP) enables you to protect access to apps running on Google Cloud Platform (GCP) by using Identity and Access Management (IAM) to identify and authorize users. IAP works by intercepting requests coming into your application and verifying the user's identity. IAP permits the request to proceed if the user has successfully authenticated and been granted access to the application. If the request is not authorized, IAP returns a 403 (Forbidden) response.
Any application that is accessible via a public or private load balancer, such as Compute Engine instances, Kubernetes Engine clusters, and App Engine applications, can be secured using IAP. You can also protect applications hosted in other clouds or on-premises with the service. IAP also offers TCP forwarding, which can protect SSH and RDP access to your VMs.
IAP can intercept incoming requests to your application and verify the identity of the user by checking the JWT, in cases where a JWT assertion is used to authenticate the user and carries the information and claims that the user wants to transmit.
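As an example of the TCP forwarding capability, IAP can tunnel SSH and RDP to VMs that have no public IP; the instance names and zone below are placeholders:

# SSH to a VM with no external IP by tunnelling through IAP
gcloud compute ssh my-private-vm --zone=us-central1-a --tunnel-through-iap

# Forward RDP (port 3389) of a Windows VM to localhost through an IAP tunnel
gcloud compute start-iap-tunnel my-windows-vm 3389 \
    --local-host-port=localhost:3389 \
    --zone=us-central1-a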

Blogs in Medium.com - 2022

Do check out some of my blogs that I published on Medium.com in 2022 in the Google Cloud Community publication:



This is a blog series on Google Cloud DevOps and how DevOps is done the Google way. I authored Part 2 of the series, which talks about compute options for Kubernetes.



This is a blog series on Google Cloud Anthos and how it can help scale your applications across geographic and cloud boundaries. I authored Part 6 of the series, which explains how Multi-Cluster Ingress can be enabled for Anthos.



This is a blog series that focuses on the constructs of hosting SAP workloads on Google Cloud. I authored Part 1 of the series, which covers the fundamentals of SAP on Google Cloud.





Saturday, April 17, 2021

The Cloud Migration Gotchas..

All leading cloud providers have a well defined Cloud Adoption Framework that will help you shape your cloud migration strategy. Customers would eventually end up with one of the 5 'R's of rationalization - Rehost (lift & shift), Refactor, Rearchitect, Rebuild or Replace. Once you have identified the approach, the next steps would be planning and execution. However, the best plans laid out by a professional services team can be driven off track by customer-specific environment challenges. If you are helping customers with cloud migration, here are a few things that you might want to think through and prepare for before you go all in.

1. Start with stakeholder buy-in

The first step called out in the Azure Cloud Adoption Framework is Strategy, or rather the motivation of the organization to move to cloud. Though this would usually be done in the presales phase and might have the buy-in of the C-suite, it is very important that this acceptance trickles down to the stakeholders of the respective applications. There could be resistance to adopting new technology, i.e. fear of the unknown. Most often this can be traced back to a lack of skilling-up efforts, so ensure that you factor in skill development during the plan phase. Remember, you might be an expert in the cloud, but for the customer it could all be very new and scary. It is important to give customer stakeholders confidence that you will not just help them cross the bridge to the cloud, but also help them survive there. It could be through extended support after migration, trainings, or engagement with a support team for ongoing support.

2. DevOps is not just for software development

Be agile in your cloud migration plan, learn from your mistakes and continuously optimize. A waterfall approach of completing the planning of the entire suite of applications before migration will impact your migration timelines. Integrate a DevOps culture and agile methodologies into your migration process. For example, leverage IaC for idempotent provisioning of the base infrastructure, and identify and automate recurring migration patterns as much as possible (a small illustration follows below). The success of a migration project depends on the cohesiveness of the teams involved, be it the migration team, application team, infra team or the stakeholders. The culture shift to DevOps helps here, where responsibilities are equally owned by everyone involved.
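As a small, hedged illustration of the IaC point on Azure, the same declarative template can be deployed repeatedly and only the drift gets corrected; the resource group, template file and parameter below are placeholders:

# Re-running the same deployment is idempotent: unchanged resources are left as-is
az deployment group create \
    --resource-group landing-zone-rg \
    --template-file base-infra.bicep \
    --parameters environment=prod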

3. Assumptions can be dangerous

While working as a service provider helping customers with cloud migration, it is important to reach a common understanding on the scope of migration. To be more direct - don't assume that the scope of work is crystal clear to everyone involved just because there is a signed-off document covering it. It is prudent to have a scope discussion with stakeholders during the initial phase of migration so that everyone is clear on roles and responsibilities. If there are any add-ons in your agreement with the customer, for example enabling monitoring, backup, DR etc., ensure that there are no grey areas around them. For instance, enabling DR once the application is migrated to the cloud can become a project in itself. The activities that will be done post migration for DR have to be clearly defined and agreed with the customer to avoid scope creep during execution. Be customer centric, but keep the expectations realistic and get buy-in from stakeholders.

4. When in doubt, do a POC

In case of complex system migrations, factor in time and effort to do a Proof of Concept (POC) before touching the production systems. This could be a separate environment in itself or one of the non-prod environments of the application. Especially when you are integrating new cloud native services into your architecture, doing a POC is essential, irrespective of whether you have tested the individual components independently. This could delay your migration timelines, but it is worth the wait compared to resorting to firefighting post migration.

5. Take Legacy systems with a pinch of salt

Often customers prefer a lift & shift of legacy workloads to the cloud. There could be multiple factors contributing to this - unknown dependencies, the effort required to refactor, an application sunset being planned in the long term, etc. You can use tools like Azure Service Map to detect dependencies to an extent. However, it is always better to err on the side of caution and factor in buffer time to mitigate any blockers that could crop up due to legacy components. As discussed in the previous point, this could be one of those scenarios where a POC might be required before the migration, if feasible.

Read more about the Microsoft Cloud Adoption Framework, which is designed to provide end-to-end guidance on the adoption strategies best suited for your business scenarios, here: https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/


Sunday, January 24, 2021

Azure Arc integrated Kubernetes cluster


Multi and hybrid cloud deployments have become more of a norm than an exception, and how seamlessly you can manage resources deployed across multiple environments will determine the success of your digital transformation. Azure Arc enables this by providing consistent management of workloads across environments. It helps onboard resources from heterogeneous deployments and manage them using the familiar constructs of Azure Resource Manager. Azure Arc currently supports VMs, Kubernetes clusters (preview) and databases (preview), and you can monitor and manage them from Azure irrespective of where they are deployed.

Azure Arc can be used for centralized monitoring and management of k8s clusters deployed across different cloud environments or on-premises. This service is currently in preview. As part of my weekend tinkering, I explored Azure Arc enabled Kubernetes clusters. The process for setting it up in a lab is pretty straightforward, and you will get most of this information from publicly available documents. I have made a few tweaks to suit the k8s cluster that I created.

To start with, you need the kubeconfig file of the cluster that should be integrated with Azure Arc. For testing the integration I created a k8s cluster through kubeadm. That was an interesting experiment in itself, as the deployment was done in an Azure VM. The steps I followed are based on the following article: https://www.mirantis.com/blog/how-install-kubernetes-kubeadm/ . However, to make the cluster accessible over a public DNS name, some additional configuration was required. For instance, the kubeadm deployment exposes the API server over port 6443, so inbound connections to this port have to be enabled in the NSG of the VM.

My tweaks to get the kubeadm-based cluster deployment working in Azure, in addition to the steps mentioned in the article, are as follows:

1. Deploy an Ubuntu 18 machine from the marketplace
2. Create a DNS entry for the VM and map it to the public IP.
3. Create an NSG rule that allows inbound connections on port 6443 from the internet, in addition to the default SSH port (a sample az CLI command for this is included after the list)
4. Use the DNS name of the VM in the kubeadm init command while creating the cluster. Otherwise the certificate will not be bound to the DNS name and you will not be able to access the cluster externally and add it to Azure Arc. The sample command I used is given below
        
  kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint kubeadmclstr.eastus2.cloudapp.azure.com
5. Calico installation should be done using the following steps
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
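For step 3, here is a minimal sketch of the NSG rule using the az CLI; the resource group, NSG name and priority reflect my lab setup and are placeholders, so adjust them to your environment:

# Allow inbound connections to the Kubernetes API server (port 6443) from the internet
az network nsg rule create \
    --resource-group kubeadm-lab-rg \
    --nsg-name kubeadmclstr-nsg \
    --name allow-k8s-apiserver \
    --priority 1001 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443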

You can check out the video below for the full installation process.


Now our k8s cluster is created using kubeadm. Copy/upload the kubeconfig file to the environment from where you are configuring the Azure Arc integration. I configured the Azure Arc integration from Cloud Shell, hence I uploaded the kubeconfig file to the Azure Cloud Shell session. Follow this document to enable the integration with Azure Arc: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster
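The document linked above walks through the full process; at its core the onboarding boils down to the connectedk8s CLI extension, roughly like this (the cluster and resource group names are placeholders, and this assumes your kubeconfig context points at the kubeadm cluster):

# Install the Azure Arc enabled Kubernetes CLI extension and onboard the cluster
az extension add --name connectedk8s
az connectedk8s connect --name kubeadm-arc-cluster --resource-group arc-demo-rg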

I have recorded a video of the integration process; you can refer to it below.

There, my k8s cluster is listed in Azure Arc!! Now if you want to get additional insights into your cluster's performance, enable monitoring of the cluster using the steps in the MS document. It's easy-peasy really, you can simply follow the document.

I tried it for one of my Azure Arc enabled clusters; you can refer to the video below to view the process.

Note: I enabled monitoring through Bash and integrated my k8s cluster with an existing Log Analytics workspace. For automated CI/CD deployments, you can also use service principals as described in the doc.

Voila! Now I can view my k8s cluster and the associated metrics & logs directly from the Azure portal. Of course, in the real world these would be your production k8s clusters. As the service is currently in preview, use it for test and dev purposes and not in production. Hope this blog + videos will help you get started with that. Happy learning!!






Sunday, November 15, 2020

AKS-managed Azure AD : How to integrate your AKS cluster with Azure AD

AKS is evolving at a dizzying pace and there have been quite a number of changes since I wrote about AKS namespace isolation and AAD integration. The major update is in terms of creating an Azure AD integrated AKS cluster. You no longer need to create and manage the server and client applications; that is handled by the AKS resource provider.

There are a few limitations with this approach to be aware of before you get started:
  - You cannot disable the AKS-managed Azure AD integration once it is enabled
  - The process is supported only for RBAC enabled clusters
  - The Azure AD tenant, once integrated, cannot be switched to a different one

Let's start with creating an Azure AD group. You can also use an existing one if you want to. Note that creating an Azure AD group needs Global Administrator rights.

I am executing these steps from Azure Cloud Shell, where all the required tools like the Azure CLI and kubectl are preinstalled.

1. Create the Azure AD group for your cluster administrators. Note down the object id of the group as it is required during cluster provisioning
~$ az ad group create --display-name AKSdemoadminGroup --mail-nickname AKSdemoadmingroup

Note: Once the AD group is created, add the users who will have cluster admin rights to this group
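Group members can also be added from the CLI; the user object ID below is a placeholder:

# Add a user to the cluster admin AD group by object ID
az ad group member add --group AKSdemoadminGroup --member-id <user object ID>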



2. Note down the tenant ID of your Azure AD. You can get this from the Azure portal -> Azure Active Directory -> Overview -> Tenant information.


3. Create resource group for the AKS cluster
$ az group create --name demoaksgroup --location EastUS


4. Create the AKS cluster; the object ID of the AD group that we created in step 1 and the AD tenant ID that was copied in step 2 will be used here
az aks create -g demoaksgroup -n demoaks1 --enable-aad --aad-admin-group-object-ids <AD group object ID> --aad-tenant-id <Azure AD tenant ID> --generate-ssh-keys


5. Log in to the cluster using a user account that is part of the cluster admin AD group that was used for the integration. When prompted, log in using your Azure AD credentials.
az aks get-credentials --resource-group demoaksgroup --name demoaks1


In the above example I am executing the "kubectl get nodes" and "kubectl get namespaces" commands after authenticating 

Now you can go ahead and follow the steps in my earlier blog to set up RBAC for namespaces using Azure AD credentials.
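For reference, here is a minimal sketch of what such a namespace-scoped binding could look like with the AKS-managed integration, where the Azure AD group is referenced by its object ID; the namespace and binding name are placeholders:

# Bind an Azure AD group (by object ID) to the built-in "edit" ClusterRole, scoped to one namespace
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: dev
subjects:
- kind: Group
  name: "<Azure AD group object ID>"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
EOF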





About Me

Cloud Solutions expert with 17+ years of experience in the IT industry, with expertise in multi-cloud technologies and a solid background in datacentre management & virtualization. Versatile technocrat with experience in cloud technical presales, advisory, innovation, evangelisation and project delivery. Currently working with Google as an Infra Modernization Specialist, enabling customers on their digital transformation journey. I enjoy sharing my experiences in my blog, but the opinions expressed in this blog are my own and do not represent those of the people, institutions or organizations that I may be associated with in a professional or personal capacity, unless explicitly stated.
