This blog covers review notes for logging, DNS security & the Google Cloud Web Security Scanner service
Monday, January 30, 2023
Google Professional Cloud Security Engineer Exam Prep notes - Part 2
Google Professional Cloud Security Engineer Exam Prep notes - Part 1
Key points to review before the exam
- All default outbound traffic is allowed (refer to the following document for exceptions: https://cloud.google.com/vpc/docs/firewalls#blockedtraffic)
- All ingress traffic is blocked by default
- Package a single app or piece of software per container. An application with a single parent process that can spawn multiple child processes qualifies for this
- Run the application as PID 1 and register signal handlers
- Enable process namespace sharing in Kubernetes
- Use a specialized init system
- Optimize for Docker build cache
- Remove unnecessary tools
- Build the smallest image possible using the smallest base image, creating images with common layers and reducing clutter
- Enable image scanning for vulnerabilities
- Tag images using options like semantic versioning and the Git commit hash (see the sample commands after this list)
- Avoid public images if you have stringent security requirements
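As a quick illustration of the scanning and tagging points above, the commands below show one possible flow - building an image with both a semantic version tag and a Git commit hash tag, pushing it, and enabling the Container Scanning API on the project. The repository path, image name and project ID are placeholders.
# Placeholder Artifact Registry path; tag with a semantic version and the short commit hash
docker build -t us-docker.pkg.dev/my-project/my-repo/myapp:1.4.2 \
             -t us-docker.pkg.dev/my-project/my-repo/myapp:$(git rev-parse --short HEAD) .
# Push all tags of the image (--all-tags requires Docker 20.10+)
docker push --all-tags us-docker.pkg.dev/my-project/my-repo/myapp
# Enable automatic vulnerability scanning of pushed images on the project
gcloud services enable containerscanning.googleapis.com --project my-project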
Blogs in Medium.com - 2022
Saturday, April 17, 2021
The Cloud Migration Gotchas..
All leading cloud providers have a well-defined Cloud Adoption Framework that will help you shape your cloud migration strategy. Customers would eventually end up with one of the 5 'R's of rationalization - Rehost (lift & shift), Refactor, Rearchitect, Rebuild or Replace. Once you have identified the approach, the next steps would be planning and execution. However, the best plans laid out by a professional services team can be driven off track by customer-specific environment challenges. If you are helping customers with cloud migration, here are a few things that you might want to think through again and prepare for before you go all in.
1. Start with stakeholder buy-in
The first step called out in the Azure Cloud Adoption Framework is Strategy, or rather the motivation of the organization to move to the cloud. Though this would usually be done in the presales phase and might have the buy-in of the C-suite, it is very important that this acceptance trickles down to the stakeholders of the respective applications. There could be resistance to adopting new technology, i.e. fear of the unknown. Most often this can be traced back to a lack of skilling-up efforts. Ensure that you factor in skill development efforts during the planning phase. Remember, you might be an expert in the cloud, but for the customer it could all be very new and scary. It is important to give customer stakeholders confidence that you will not just help them cross the bridge to the cloud, but also help them survive there. It could be through extended support after migration, training, or engagement with the support team for ongoing support.
2. DevOps is not just for software development
Be agile in your cloud migration plan, learn from your mistakes and continuously optimize. The waterfall approach of completing the planning of the entire suite of applications before migration will impact your migration timelines. Integrate DevOps culture and agile methodologies into your migration process. For example, leverage IaC for idempotency of the base infrastructure. Identify and automate migration patterns as much as possible. The success of a migration project depends on the cohesiveness of the teams involved, be it the migration team, application team, infra team or the stakeholders. The culture shift to DevOps helps here, where responsibilities are equally owned by everyone involved.
3. Assumptions can be dangerous
While working as a service provider helping customers with cloud migration, it is important to reach a common understanding on the scope of migration. To be more direct - don't assume that the scope of work is crystal clear for everyone involved just because there is a signed-off document on the same. It is prudent to have a scope discussion with stakeholders during the initial phase of migration so that everyone is clear on roles and responsibilities. If there are any add-ons in your agreement with the customer, for example enabling monitoring, backup, DR etc., ensure that there are no grey areas on the same. For example, enabling DR once the application is migrated to the cloud can become a project in itself. The activities that will be done post migration for DR have to be clearly defined and agreed with the customer to avoid scope creep during execution. Be customer centric, but keep the expectations very realistic and get buy-in from stakeholders.
4. When in doubt, do a POC
In the case of complex system migrations, factor in time and effort to do a Proof of Concept (POC) before touching the production systems. This could be a separate environment in itself or one of the non-prod environments of the application. Especially when you are integrating new cloud-native services in your architecture, doing a POC is inevitable, irrespective of whether you had already tested individual components independently. This could delay your migration timelines, but it is worth the wait compared to resorting to firefighting post migration.
5. Take Legacy systems with a pinch of salt
Often customers prefer lift & shift of legacy workloads to the cloud. There could be multiple factors contributing to this - unknown dependencies, the effort required to refactor, an application sunset being planned in the long term, etc. You can use tools like Azure Service Map to detect dependencies to an extent. However, it is always better to err on the side of caution and factor in buffer time to mitigate any blockers that could crop up due to legacy components. As discussed in the previous point, this could be one of those scenarios where a POC might be required before the migration, if feasible.
Read more about the Microsoft Cloud Adoption Framework, which is designed to provide end-to-end guidance on the adoption strategies best suited for your business scenarios, here: https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/
Sunday, January 24, 2021
Azure Arc integrated Kubernetes cluster
Sunday, November 15, 2020
AKS-managed Azure AD: How to integrate your AKS cluster with Azure AD
5. Log in to the cluster using a user account that is part of the cluster admin AD group that was used for the integration. When prompted, sign in with the Azure AD credentials.
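For reference, a typical login flow looks like the following; the resource group and cluster names are placeholders. Fetching the user (non-admin) credentials and then running any kubectl command triggers the Azure AD device-code sign-in.
# Placeholder resource group and cluster names
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
# The first kubectl command prompts for the Azure AD device-code sign-in
kubectl get nodes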
Friday, December 7, 2018
Kubernetes best practices in Azure: AKS name space isolation and AAD integration
Once you have decided to run your workloads in the AKS service in Azure, there are certain best practices to be followed during design and implementation. In this blog we will discuss two of these recommended practices and the practical aspects of their implementation - Azure AD integration and namespace isolation.
While AAD helps to authenticate users to your AKS cluster using the existing users and groups in your Azure AD, namespace isolation provides logical isolation of the resources used by them. It is useful in multi-tenant scenarios where the same cluster is being used by different teams/departments to run their workloads. It is also useful for running, say, dev, test and QA environments for an organization in the same cluster. Combining AAD integration with namespaces allows users to log in to their namespace using their Azure AD credentials.
AAD integration with AKS:
The following Microsoft document will get you started with AAD integration of an AKS cluster: https://docs.microsoft.com/en-us/azure/aks/aad-integration
Please note that you cannot convert a non-RBAC-enabled cluster to an RBAC-enabled one; it has to be done during cluster creation. Before following the steps in the document, you have to make sure that you have Azure AD tenant administrator rights to grant permissions to the server and client applications.
The 'az aks create' command sample in the reference document should help with the cluster creation. It creates the cluster with three nodes, but if you want to tweak it a little, especially if you are playing around with the service for learning purposes and don't want to burn through your subscription credits, you can use the "--node-count 1" argument to limit the number of nodes to 1. Additional options can be used with the 'az aks create' command for further customization, for example if you want to change the VM SKU. The full reference for the options can be found here: https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create
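A minimal sketch of such a command, assuming the server and client applications from the reference document have already been registered, could look like the following; all values below are placeholders.
# Placeholder names, SKU and application IDs - substitute your own values
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --node-vm-size Standard_B2s \
  --generate-ssh-keys \
  --aad-server-app-id <server-app-id> \
  --aad-server-app-secret <server-app-secret> \
  --aad-client-app-id <client-app-id> \
  --aad-tenant-id <tenant-id>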
AKS namespaces and RBAC authentication:
Kubernetes has three initial namespaces - default, kube-system and kube-public. You can create a new namespace using the following sample namespace.yaml file (Ref: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ )
apiVersion: v1
kind: Namespace
metadata:
name: testnamespace
Create the namespace using the kubectl create command:
kubectl create -f namespace.yaml
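Alternatively, the same namespace can be created directly without a manifest file:
kubectl create namespace testnamespace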
The next step is to create a role and rolebinding. In the reference document for enabling RBAC for AKS, a role and rolebinding are created, but for a cluster-admin role. However, we need to create a role and rolebinding that give a user access to resources within a namespace. The following K8S reference document has some sample files for a role and rolebinding. You might want to tweak them a bit to change the namespace reference to the namespace you created earlier: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Sample file for creating a role that has access to read pods within the namespace:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: testnamespace
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
While using AKS with RBAC, it is beneficial to give Azure AD groups access to a given namespace by providing the Azure AD group ID reference in the rolebinding yaml, as shown in the sample below.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: testnamespace
subjects:
- kind: Group
name: <Azure AD Group ID>
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
After applying the configurations, you can log in to the AKS cluster using the credentials of a user added to the AD group (Ref: https://docs.microsoft.com/en-us/azure/aks/aad-integration#access-cluster-with-azure-ad )
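Assuming the role and rolebinding above were saved as role.yaml and rolebinding.yaml (the file names are arbitrary) and applied with cluster-admin credentials, a user in the Azure AD group should then be able to read pods in that namespace:
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
# Run as a member of the Azure AD group, after the Azure AD sign-in
kubectl get pods --namespace testnamespace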
Tip: Users can change the context to the namespace to which they have access before running kubectl commands. Otherwise they would have to use the --namespace switch with each command they want to run in the cluster. Refer to this k8s document for instructions on switching the namespace context: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
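One way to switch the default namespace for the current kubeconfig context (supported in recent kubectl versions):
kubectl config set-context --current --namespace=testnamespace
# Subsequent commands now run against testnamespace without the --namespace switch
kubectl get pods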