
Azure Well-Architected Framework - An Introduction

When your workloads are in the cloud, the constructs of deployment, configuration, and operations are strikingly different from what you would have used on-premises. Adopting the right architecture is, without doubt, the key to hosting an application successfully in the cloud. Azure helps you with every step of this process through the Azure Well-Architected Framework. Consider it a blueprint for excellence in the Azure cloud. It consists of five main pillars: Cost Optimization, Operational Excellence, Performance Efficiency, Reliability, and Security.

Cost Optimization: The basic principle is to start small and scale as you go. Instead of making a huge investment upfront, it is recommended to follow the "Build-Measure-Learn" approach, aligned with the Azure Cloud Adoption Framework (CAF). It focuses on building a minimum viable product (MVP), measuring feedback, and then using a fail-fast approach to optimize your cost. The Azure pricing calculator can help you estimate the initial cost, and you can then use services like Azure Cost Management to review ongoing operational costs.
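To see why starting small and scaling with demand is cheaper than provisioning for peak upfront, here is a minimal sketch. The hourly rate and demand numbers are hypothetical placeholders, not real Azure prices:

```python
# Illustrative cost comparison: provisioning peak capacity 24x7 vs.
# scaling instance count with measured demand. Rates are hypothetical.

PEAK_INSTANCES = 10       # capacity needed only at peak
HOURLY_RATE = 0.10        # hypothetical cost per instance-hour

def upfront_cost(hours: int) -> float:
    """Cost of running peak capacity around the clock."""
    return PEAK_INSTANCES * HOURLY_RATE * hours

def scaled_cost(demand_per_hour: list) -> float:
    """Cost when instance count follows measured demand each hour."""
    return sum(d * HOURLY_RATE for d in demand_per_hour)

# A day where peak demand (10 instances) lasts only 4 hours.
demand = [2] * 12 + [5] * 8 + [10] * 4
print(upfront_cost(24))     # 24.0
print(round(scaled_cost(demand), 2))  # 10.4
```

Scaling with demand costs less than half of the always-at-peak approach in this toy scenario, which is exactly the kind of comparison the measure step of Build-Measure-Learn surfaces.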

Operational Excellence: The success of a cloud deployment depends on how well-oiled your operations engine is. From automated deployments to monitoring, logging, and diagnostics, the more granular your visibility is, the better placed you are to keep the lights on for your production environment. The monitoring and logging approach has to be consistent across cloud resources to achieve this goal. Raw monitoring data stored in central storage can be analyzed by tools like Log Analytics to get to the root of operational issues, enabling faster resolution. Visualization tools can be used to spot trends such as resource utilization and unusual traffic, and to alert the operations team.
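The point about consistency is that every resource should emit logs with the same schema, so a central analytics tool can query them uniformly. A minimal sketch of such a structured log record (the field names here are illustrative, not a Log Analytics schema):

```python
# Minimal sketch of consistent, structured log records that a central
# log-analytics tool can query. Field names are illustrative only.
import json
import time

def log_event(resource: str, level: str, message: str) -> str:
    """Emit one JSON log line with a uniform schema across resources."""
    record = {
        "timestamp": time.time(),
        "resource": resource,
        "level": level,
        "message": message,
    }
    return json.dumps(record)

line = log_event("vm-frontend-01", "ERROR", "disk latency above threshold")
print(line)
```

Because every record carries the same fields regardless of which resource produced it, queries like "all ERROR events in the last hour, grouped by resource" become trivial downstream.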

Performance Efficiency: One of the main perks of cloud adoption is scaling on demand, which can be vertical or horizontal. Vertical scaling increases the compute power and capacity of your resources on demand, for example by increasing the number of CPU cores when your workloads hit peak demand. Horizontal scaling, on the other hand, adds more instances of a resource, automatically where possible. Horizontal scaling embodies the true power of cloud-scale deployments and is often cheaper than increasing the capacity of a single instance.

Reliability: No matter how airtight the architecture is, the possibility of downtime or failure is never completely eliminated. Hence it is important to design your systems to be reliable, i.e., able to recover from failures with minimal damage. Reliability is a combined function of resiliency and availability. Due to the distributed nature of deployments in the cloud, the failure of one component can impact multiple other components. The rule of thumb is to leverage the built-in resiliency features of the native cloud services, be it your VMs, databases, or storage services. Not just your infrastructure layer but also your application logic should be built on this principle for the solution to be completely reliable.
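At the application-logic level, the most common resiliency pattern is retrying transient failures with exponential backoff instead of failing on the first error. A minimal sketch (the flaky operation is a stand-in for any remote call):

```python
# Sketch of application-level resiliency: retry a transient failure
# with exponential backoff instead of failing immediately.
import time

def call_with_retry(operation, attempts: int = 4, base_delay: float = 0.01):
    """Retry `operation` on exception, doubling the delay each attempt."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a remote call that succeeds on its third invocation.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

print(call_with_retry(flaky))  # ok
```

In real code you would retry only exceptions known to be transient and add jitter to the delay so that many clients do not retry in lockstep; both refinements are deliberately omitted to keep the sketch short.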

Security: Security in the cloud is multilayered. You need to consider infrastructure and network security, application-layer security, and data security at rest and in transit. As identity is considered the new security perimeter, selecting and implementing the right identity management solution is the first step in securing your applications. Security of both the management plane and the data plane should be taken into account here. RBAC leveraging Azure AD takes care of the management plane by helping you implement fine-grained access control to Azure resources. For the data plane, there are multiple options depending on the Azure service being used, for example data encryption and Transparent Data Encryption (TDE). In addition to following application development security best practices, leverage services like Application Gateway, which can provide layer 7 protection against common attack vectors.
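The core idea behind RBAC is simple: a role is a named set of allowed actions, and an access check reduces to set membership. A toy sketch of that model (the role and action names are illustrative, not actual Azure RBAC role definitions):

```python
# Toy sketch of role-based access control on the management plane:
# each role maps to a set of allowed actions, and authorization is a
# membership check. Role/action names are illustrative, not Azure's.

ROLE_ACTIONS = {
    "Reader":      {"read"},
    "Contributor": {"read", "write"},
    "Owner":       {"read", "write", "assign-roles"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role permits the requested action."""
    return action in ROLE_ACTIONS.get(role, set())

print(is_allowed("Reader", "write"))        # False
print(is_allowed("Contributor", "write"))   # True
```

Real Azure RBAC layers scopes (management group, subscription, resource group, resource) on top of this, so a role assignment at a broad scope is inherited by everything beneath it.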

There are many nuances to adopting the Well-Architected Framework; the starting point would be to evaluate the current state of your deployment. You can leverage the Azure Well-Architected Review to get started.

