Saturday, March 18, 2023

Google Professional Cloud Security Engineer Exam Prep notes - Part 4

Google API Private Access

Private Google Access is configured at the subnet level and allows resources in a subnet to access Google services privately. Resources without an external IP can still reach services such as Cloud Storage or YouTube. It offers better security because exposure to outside networks is reduced, minimizing the possibilities of data interception and attacks.
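As a quick sketch (the subnet and region names below are hypothetical), Private Google Access can be enabled on an existing subnet with gcloud:

```shell
# Enable Private Google Access on an existing subnet
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```

VMs in that subnet can then reach Google APIs over internal routes even without external IP addresses.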

Google cloud service accounts

These accounts are used for service-to-service authentication, e.g., an application running on Compute Engine can use a service account to access a Cloud Storage bucket.

Two types of service accounts - Google-managed service accounts & user-managed service accounts

With Google-managed service accounts, the private and public keys are managed by Google. Each key can be used for a maximum of two weeks. The private keys of Google-managed keys are never directly accessible, and the platform itself manages the key rotation process

With user-managed keys, only the public keys are stored by Google. Users must manage the private keys themselves, keeping them secure and handling key rotation. To support key rotation, you can create up to 10 user-managed keys per service account

IAM policies and conditions

An IAM policy can be considered a statement of access permissions attached to a resource. The components of a policy are a set of roles and role members. Resources inherit policies from the parent resource, so the effective policy for a resource is a combination of the parent policy and the policies assigned to that resource. It is important to note that a less restrictive parent policy will override a more restrictive resource-specific policy.

IAM policies contain role bindings that bind an IAM principal to a specific role. IAM conditions can be used to specify attribute-based access, i.e., allow or deny access based on specific attributes when the configured conditions are met. These conditions can be either resource- or request-specific, e.g., allow access only to Cloud SQL instances with a specific name prefix
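For illustration (the project, service account, and instance prefix are hypothetical), a conditional role binding along those lines can be added with gcloud:

```shell
# Grant the Cloud SQL Client role, but only for instances whose name starts with "prod-"
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client" \
    --condition='title=prod-instances-only,expression=resource.name.startsWith("projects/my-project/instances/prod-")'
```

The condition expression uses Common Expression Language (CEL) and is evaluated at access time.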

Organization policies 

Organization policies provide centralized control over all projects in an organization. They can be set on organizations, folders, and projects. You can configure constraints to implement restrictions on Google services. These restrictions apply to the resource on which they are set and all its descendants. There are two types of constraints - list constraints and boolean constraints

A sample usage of a list constraint is restricting a list of VMs from having external IPs. Enabling and disabling features such as nested virtualization, serial port access, and service account creation are boolean constraints. You can also configure, at each node of the resource hierarchy, whether to inherit the policies of the parent node.
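As a hedged example (the project ID is hypothetical), the external-IP list constraint can be expressed as a policy file and applied with gcloud:

```yaml
# policy.yaml - deny external IPs for all VMs in the project
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
```

The file could then be applied with `gcloud resource-manager org-policies set-policy policy.yaml --project=my-project`; the same file works at the folder or organization level with the corresponding flag.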

Difference between organization policies and IAM policies

Organization policies define the "what", i.e., what restrictions you want to implement on your resources

IAM policies are focused on the "who", i.e., who is authorized to take specific actions on resources based on assigned permissions




Sunday, March 12, 2023

Tech basics series : Containers , Microservices & Kubernetes - Part 3

In the third part of our tech basics series on Containers, Microservices & Kubernetes, we will talk about Pods, ReplicaSets, and replication controllers. If you are new here, do check out Part 1 and Part 2 of this blog series first!!


What are Pods?

Pods are the smallest objects you can create in Kubernetes, and they encapsulate containers. Imagine a single-node K8s cluster running a single pod. When the application needs to scale, you create additional pods of the same application. The pods can also be distributed across multiple nodes in a cluster. Usually, the relationship between pods and containers is 1:1, but it is not mandatory. There is also the case of a sidecar container, which helps the main application and is included in the same pod. Every time a new pod of the application is created, both the main container and the sidecar container are created together. They share the same network and storage and can connect to each other as localhost.

Basic commands to manage pods

1. Command to run a container in K8s :

kubectl run <pod-name> --image=<image-name>

The image can be fetched from any container registry, e.g., Docker Hub. The image name has to match the name in the registry

2. To view status of running pods:

kubectl get pods

3. To get additional details of the pods:

kubectl describe pod <pod-name>

4. Command to view details of the node where the pod is deployed, its IP address, etc.:

kubectl get pods -o wide

5. Sample yaml file for a pod: https://github.com/cloudyknots/Kubernetessample/blob/main/poddefinition.yaml

You can use the below command to create the pod

kubectl create -f pod-definition.yml
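A minimal pod-definition.yml along the lines of the linked sample (the names and image here are illustrative) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp        # labels are what selectors match on later
spec:
  containers:
    - name: nginx-container
      image: nginx    # pulled from the default registry (Docker Hub)
```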


ReplicaSets & Replicas

If there is only one instance of the application running in a pod, the application will become unavailable if the pod fails. To prevent users from losing access, you can have more than one application instance running in multiple pods. This is enabled by the replication controller. Even in the case of a single pod, the replication controller can bring the pod back if it fails. The replication controller spans multiple nodes in a cluster, so when demand for the application increases, it can deploy pods across multiple nodes to scale the application. The ReplicaSet is the newer version of the replication controller and is used in current versions of Kubernetes

The details of the replicas are defined in the spec section of the ReplicaSet yaml file by adding a section called template - please see the sample yaml file here to create a ReplicaSet. You can see that we have additionally specified the number of replicas and a selector in the yaml file.
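A sketch of such a ReplicaSet definition (names and image are illustrative) with 3 replicas and a matchLabels selector:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
spec:
  replicas: 3              # desired number of pods
  selector:
    matchLabels:
      app: myapp           # must match the labels in the pod template below
  template:                # pod definition used to create replicas
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx
```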

kubectl create -f replicaset.yaml

kubectl get replicaset

The ReplicaSet will monitor the pods and recreate them if they fail. For the ReplicaSet to do that, we need to assign labels to the pods while creating them. We can then use the "matchLabels" filter to identify which pods should be monitored by the ReplicaSet. Even if you have created additional pods not included in the yaml definition of the ReplicaSet, it can monitor them based on their labels and ensure that the desired number of replicas (3, in this example) is always running.

If you want to scale your application, you can update the number of replicas in the definition file and run the following command

kubectl replace -f replicaset.yaml

Or you can use the kubectl scale command to increase the replicas on the fly:

kubectl scale --replicas=6 -f replicaset.yaml

or, referencing the ReplicaSet by name:

kubectl scale --replicas=6 replicaset <replicaset-name>

Note that scaling this way does not update the replica count in the definition file.


Saturday, February 18, 2023

Tech basics series : Containers , Microservices & Kubernetes - Part 2

Part 2 : Container orchestration using Kubernetes

In the second part of our tech basics series on Containers, Microservices & Kubernetes , we will talk about container orchestration and Kubernetes. If you are new here, do check out Part 1 of this blog series first!!

Why do you need container orchestration?


Running a single application in a container might look simple and straightforward. However, in the real world there will be multiple other services that the application needs to talk to, and the application should scale based on capacity requirements. There should be a control plane capable of orchestrating these connectivity requirements along with the scaling, scheduling, and lifecycle management of containers. That is where container orchestration comes into the picture. Container orchestration solutions like Docker Swarm, Mesos, and Kubernetes offer a centralized tool for managing, scheduling, and scaling containers. Of the many container orchestration platforms, Kubernetes is the most popular and widely used. All leading cloud platforms offer Kubernetes as a managed service, which helps speed up the onboarding process on the platform


Understanding Kubernetes architecture

Kubernetes, also known as K8s, consists of worker machines called nodes and a control plane. A node can be a physical or virtual machine with Kubernetes installed. A Kubernetes cluster should have multiple nodes to ensure high availability and load sharing. The control plane usually consists of one or more master nodes that have Kubernetes installed and take care of container orchestration on the member nodes. The different components of the container cluster are as shown below:

K8s components:

kube-apiserver: The front end of the K8s control plane, which exposes the Kubernetes API. All users and management services connect to this component, and it runs on the master nodes

etcd: The key-value store of K8s, which stores information about the cluster. It is distributed and highly available and stores information about the masters and nodes in a cluster. This information is stored across the nodes of the cluster in a distributed manner, which ensures that there are no conflicts, especially when there are multiple masters

kubelet: An agent that ensures containers run as expected on all servers. It runs on all worker nodes in the cluster

Container runtime: The runtime required for deploying containers. It can be Docker or any other container runtime like rkt or CRI-O

controller: This component ensures that the cluster adheres to the desired state and manages the orchestration process. For example, when nodes or endpoints go down, the controller redeploys the affected containers on a different node

kube-scheduler: This component on the master nodes decides on which node in the cluster a container should be deployed. Nodes are selected based on pre-defined policies, constraints, affinity or anti-affinity rules, etc.

kube-proxy: This component handles the networking of the cluster. It is a network proxy that maintains the rules governing the networking patterns of the cluster. Any ingress or egress traffic of the pods is managed by kube-proxy, and the service runs on the worker nodes
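On a running cluster, you can see most of these control plane components deployed as pods in the kube-system namespace, and inspect the nodes they run on:

```shell
# List control plane components (apiserver, etcd, scheduler, kube-proxy, etc.)
kubectl get pods -n kube-system

# Show cluster nodes along with their IPs and roles
kubectl get nodes -o wide
```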


Thursday, February 16, 2023

Tech basics series : Containers , Microservices & Kubernetes - Part 1

I am starting a set of new blog series to help those who are new to cloud technology - junior engineers, tech aspirants, students, etc. I will try to explain the basics in simple terms to help you develop a good foundation in the latest and greatest cloud technologies. If you are a seasoned cloud expert, this series will act as a good refresher course!

We will kick off with a series on containers, Microservices & Kubernetes. After covering the basics, we will move on to more advanced topics on how you can build and deploy containerized applications on various cloud platforms


Part 1 - Containers



What are containers?

Containers bundle the application code, its dependencies, and the configurations required to run the application in a single unit. There are different container technologies available - Docker, containerd, rkt, and LXD - of which Docker is the most popular. Containers are a form of operating system virtualization, where multiple applications can run on the same host while remaining isolated from each other. Each application running in a container has access to its own network resources, mountpoints, file system, etc. A Docker container image consists of a base image, customized by adding the application code, its dependent libraries, and configuration files
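As an illustrative sketch (the base image, file names, and app are hypothetical), a Dockerfile shows exactly this layering of code and dependencies on a base image:

```dockerfile
# Start from a base image
FROM python:3.11-slim

# Add the dependent libraries
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code and its configuration
COPY app.py config.yaml ./

# Command to run when a container starts from this image
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces an image that can be pushed to a registry and run anywhere.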

Containers come packaged with everything they need to run and can be spun up in a matter of seconds using a container image. Container images can be stored in a centralized repository called a container registry. There are several managed container registries - Docker Hub, Google Container Registry, Azure Container Registry, etc. These registries can either be public - accessible to all - or private, restricted to people in an organization or group. Containers are hugely popular as they can run on any platform that supports container technology, i.e., Linux, Windows, or macOS. Because they have a very small footprint, are ephemeral, and use less CPU and memory, you can create multiple replicas of a container as per your scaling requirements

Why do you need containers?

When there are multiple applications running on the same operating system, there is always a requirement to ensure compatibility between all the libraries and the underlying operating system. The same process has to be followed whenever any of the related components are upgraded or changed. Different environments like dev, test, and production could also use different versions of the software. Managing all this at scale can be a challenge, delaying application development and deployment timelines

With containers, all these applications can be run in separate environments (containers). By creating a Docker configuration specific to each environment, it becomes easy to build and deploy different environments at scale and manage their dependencies independently. Once packaged into an image, the application will continue to work the same way irrespective of where it is deployed

How do containers work?

Docker uses LXC containers in the backend, abstracting them and making it easy to deploy and manage containers. An operating system consists of the OS kernel and software sitting on top of it. Docker offers OS virtualization where the OS kernel is shared between different applications. Each container runs in its own independent namespace with access to its own filesystem, processes, libraries, and other files. If the host OS has a Linux kernel, Docker can support different flavors of Linux, e.g., Ubuntu, SUSE, CentOS, etc. However, it cannot support containers on the same host that need a different kernel, e.g., Windows

How are containers different from virtual machines?

Virtual machines use hypervisor software to virtualize the underlying hardware. Each virtual machine has its own set of virtualized hardware - CPU, memory, storage, and NIC cards. You can run different operating systems on the same virtualization host, i.e., Windows and Linux, as there is no OS sharing between two VMs on a virtualization platform. Containers, on the other hand, do not provide isolation as strong as virtual machines; with containers, it is the processes, file system, and networking that are isolated. However, VMs are heavier, i.e., they need a full operating system kernel, device drivers, and everything else required to run the machine, while containers just need the resources required to run their applications. Because of this, the start-up time of containers is faster compared to virtual machines



Sunday, February 12, 2023

Google Professional Cloud Security Engineer Exam Prep notes - Part 3

 Integrating existing identity management solution with Google Cloud Platform

Given below are the steps to integrate a third-party identity management platform

  • You should have a domain that is enabled for email. You cannot proceed with a domain already registered with Google or a non-existent domain
  • You should have permissions to verify domain ownership by creating a TXT or CNAME entry
  • Implement SAML SSO if the existing identity management system is to be used for authentication to the GCP console
  • Create the first Cloud Identity administration account and an account for the admin who will manage users in GCP
  • Configure billing accounts - this can either be an online account or an offline invoiced account linked to a purchase order. To apply for an invoiced billing account you need to meet certain criteria, i.e., be a registered business for one year and have minimum billing of $2500/month for 3 months
  • Create additional admin accounts like network admins or organization admins
  • Use directory sync to sync identities. Passwords are not synced by default unless you choose to do so. Syncing passwords is not a best practice; it is recommended to use SSO instead
  • You can also use third-party IdP connectors such as connectors from Ping, Okta, or the Azure AD G Suite connector

Recommended usage of service accounts

Service accounts are used in access management scenarios where human users are not involved, e.g., when applications want to access a DB, storage, or similar resources in Google Cloud. While using service accounts to authenticate to other Google services and APIs, it is recommended to use a user-managed service account and not the default service accounts (default service accounts are created when you enable a Google Cloud service). After attaching a user-managed service account to a resource, you should use Application Default Credentials (ADC) for authentication. ADC should be configured in the application environment and is automatically used by client libraries for authentication

Envelope encryption

Envelope encryption is the process of implementing multiple layers of keys, where one key is encrypted with another. The encryption can be done at either the application or the storage layer. The default encryption offered by Google is also envelope encryption, but the central keystore is Google's internal key management system. You can choose to use Cloud KMS instead of the internal key management system

Data Encryption Key (DEK) - used for data encryption; should be generated locally and stored with encryption at rest

Key Encryption Key (KEK) - used for encrypting the DEK; recommended to store centrally (in KMS) and rotate regularly

The process happens as follows

DEK generated locally -> Data encrypted with DEK -> DEK wrapped with KEK -> Encrypted data and wrapped DEK stored in the storage system -> KEK stored in KMS
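A hedged sketch of the wrapping step using Cloud KMS (the key ring, key, and file names are hypothetical): the locally generated DEK is encrypted with a KEK held in KMS, and only the wrapped DEK is stored alongside the data.

```shell
# Wrap (encrypt) the locally generated DEK with a KEK stored in Cloud KMS
gcloud kms encrypt \
    --location=global \
    --keyring=my-keyring \
    --key=my-kek \
    --plaintext-file=dek.bin \
    --ciphertext-file=wrapped-dek.bin
```

The matching `gcloud kms decrypt` call unwraps the DEK at read time; the KEK itself never leaves KMS.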



Thursday, February 2, 2023

How cloud and AI are changing the future of our world - Tech talk

On January 28, 2023, I was given the opportunity to give a presentation at my alma mater, College of Engineering Poonjar, as a prelude to the IHRD tech fest Tarang23. My manager at Google, Mr Sundar Pelapur, a veteran with two decades of experience in the IT industry, was my co-presenter. It was a great experience interacting with the next generation of engineering talent and sharing our perspective on the topic "Disrupting the status quo: How Cloud and AI are shaping the future of our world".

Sharing below a summary of the tech talk and some useful reference materials, which I think will be helpful for young IT professionals and students who want to make a career in cloud computing:


We started off with a brief history of cloud computing - how the world has moved on from mainframes in the 1970s to server/client computing models and to the constructs of public clouds today

Let's define in simple terms what cloud computing is: it's nothing but compute power and storage available on demand

Now we have different flavors of cloud computing available - private, public, and hybrid cloud

The billion-dollar question is: why cloud computing? Let's take a look at some of the benefits offered by cloud computing in terms of scalability, flexibility, cost efficiency, and security

Cloud is disrupting almost every industry as we know it...

And it is part of our day-to-day life more than we realize...


Now moving on to AI... Let's start with a glossary of the most commonly used AI terms (of course, it goes without saying that the list is not comprehensive, and new AI capabilities are being built every day)

Let's check out some of the common use cases of  AI

AI is great, but getting an AI project off the ground is not always simple or straightforward


The synergy between cloud and AI becomes relevant here...

Hugely popular services like eBay and Spotify are leaning on cloud and AI to innovate and improve user experience...

For students and tech enthusiasts who want to get started in the cloud, I would recommend the following courses. 


You can also start building credentials by taking one of the cloud foundation certifications from any of the hyperscalers. There are a lot of free training videos available on platforms like YouTube and Coursera that you can leverage for this purpose...

Do check out the session uploaded to YouTube: Part 1, Part 2, Part 3, Part 4

Happy learning!!

PS: If you are a student or a professional looking for guidance on a career in cloud, I am happy to connect for a free 1:1 mentoring session. Please feel free to drop your coordinates in a comment and I will be in touch!!

Monday, January 30, 2023

Google Professional Cloud Security Engineer Exam Prep notes - Part 2

This blog covers review notes for logging, DNS security & the Google Cloud Web Security Scanner service


1. Aggregated sinks
Sinks can be created with the "includeChildren" parameter set to "True" for Cloud organizations or folders. The logs from these organizations, folders, projects, or billing accounts can then be routed to these sinks.
2. DNS security extension
DNS Security Extensions (DNSSEC) is the security protocol that enables authentication of DNS data. It is a DNS protocol extension that adds an additional degree of security by enabling users to digitally sign their DNS records, making it more challenging for attackers to tamper with DNS data. Customers can enable DNSSEC on Google Cloud's Cloud DNS service to safeguard their domains from unauthorized alterations.

3. Google Cloud web Security Scanner Service
To find common vulnerabilities in web applications, such as those listed in the OWASP Top 10, customers can use the Google Cloud Web Security Scanner service. It can scan App Engine-based applications as well as those hosted on other systems like Compute Engine or Kubernetes Engine. It can help identify vulnerabilities like cross-site scripting (XSS), SQL injection, and missing security headers. Though not a replacement for a security review or penetration testing, it can be used in conjunction with such measures to check for new vulnerabilities
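An aggregated sink as described in point 1 can be sketched as follows (the organization ID, sink name, and bucket are hypothetical):

```shell
# Create an organization-level sink that also captures logs from all child
# folders and projects (the --include-children flag sets includeChildren=true)
gcloud logging sinks create my-org-sink \
    storage.googleapis.com/my-central-log-bucket \
    --organization=123456789012 \
    --include-children \
    --log-filter='severity>=ERROR'
```

After creation, the sink's service account must be granted write access on the destination bucket for routing to work.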

About Me

Cloud solutions expert with 17+ years of experience in the IT industry, with expertise in multi-cloud technologies and a solid background in datacentre management & virtualization. Versatile technocrat with experience in cloud technical presales, advisory, innovation, evangelisation, and project delivery. Currently working with Google as an Infra Modernization Specialist, enabling customers on their digital transformation journey. I enjoy sharing my experiences in my blog, but the opinions expressed here are my own and do not represent those of people, institutions, or organizations that I may be associated with in a professional or personal capacity, unless explicitly stated.
