
Azure Arc integrated Kubernetes cluster


Multi-cloud and hybrid cloud deployments have become the norm rather than the exception, and how seamlessly you can manage resources deployed across multiple environments often determines the success of your digital transformation. Azure Arc addresses this by providing consistent management of workloads across environments. It helps onboard resources from heterogeneous deployments and manage them using the familiar constructs of Azure Resource Manager. Azure Arc currently supports VMs, Kubernetes clusters (preview), and databases (preview), and you can monitor and manage them from Azure irrespective of where they are deployed.

Azure Arc can be used for centralized monitoring and management of k8s clusters deployed across different cloud environments or on-premises. This service is currently in preview. As part of my weekend tinkering, I explored Azure Arc enabled Kubernetes clusters. The process for setting it up in a lab is pretty straightforward, and you will find most of this information in publicly available documents. I have made a few tweaks to suit the k8s clusters that I created.

To start with, you need the kubeconfig file of the cluster that should be integrated with Azure Arc. For testing the integration, I created a k8s cluster through kubeadm. That was an interesting experiment in itself, as the deployment was done in an Azure VM. The steps I followed are based on this article: https://www.mirantis.com/blog/how-install-kubernetes-kubeadm/ . However, to make the cluster accessible over a public DNS name, some additional configuration was required. For instance, kubeadm exposes the API server over port 6443, so inbound connections to this port have to be allowed in the NSG of the VM.
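For reference, that inbound rule can be added with the Azure CLI; a minimal sketch, where the resource group, NSG and rule names (kubeadm-rg, kubeadm-nsg, allow-k8s-api) are placeholders from my lab setup:

  # Allow inbound TCP 6443 (Kubernetes API server) from the internet
  az network nsg rule create \
    --resource-group kubeadm-rg \
    --nsg-name kubeadm-nsg \
    --name allow-k8s-api \
    --priority 1000 \
    --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 6443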

My tweaks to get the kubeadm based cluster deployment working in Azure, in addition to the steps mentioned in the article, are as follows:

1. Deploy an Ubuntu 18.04 VM from the Azure Marketplace.
2. Create a DNS entry for the VM and map it to its public IP.
3. Create an NSG rule that allows inbound connections on port 6443 from the internet, in addition to the default SSH port (see the CLI sketch above).
4. Use the DNS name of the VM in the kubeadm init command while creating the cluster. Otherwise the API server certificate will not be bound to the DNS name, and you will not be able to access the cluster externally or add it to Azure Arc (a quick check for this is sketched after the list). The sample command I used is given below:
        
  kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint kubeadmclstr.eastus2.cloudapp.azure.com
5. Install Calico as the pod network add-on using the following steps:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
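
Once kubeadm init and the Calico installation complete, a quick sanity check confirms the two points above: that the API server certificate carries the public DNS name in its SANs, and that the nodes come up Ready. A minimal sketch, using my lab's DNS name (replace it with yours):

  # Confirm the API server certificate includes the public DNS name
  echo | openssl s_client -connect kubeadmclstr.eastus2.cloudapp.azure.com:6443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

  # Confirm the nodes are Ready and the Calico pods are running
  kubectl get nodes -o wide
  kubectl get pods -n kube-system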

You can check out the video below for the full installation process.


Now our k8s cluster is created using kubeadm. Copy/upload the kubeconfig file to the environment from where you are configuring the Azure Arc integration. I configured the Azure Arc integration from Cloud Shell, hence uploaded the kubeconfig file to my Azure Cloud Shell session. Follow this document to enable the integration with Azure Arc: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster
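In essence, the onboarding from Cloud Shell boils down to a handful of CLI commands from that document; a rough sketch of the flow (the resource group arc-demo-rg and cluster name kubeadmclstr are my lab values, and the details may change while the service is in preview):

  # One-time setup: CLI extension and resource providers for Arc enabled Kubernetes
  az extension add --name connectedk8s
  az provider register --namespace Microsoft.Kubernetes
  az provider register --namespace Microsoft.KubernetesConfiguration

  # Onboard the cluster; this picks up the kubeconfig uploaded to the Cloud Shell session
  az group create --name arc-demo-rg --location eastus2
  az connectedk8s connect --name kubeadmclstr --resource-group arc-demo-rg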

I have recorded a video of the integration process; you can refer to it below.

There, my k8s cluster is listed in Azure Arc!! Now if you want additional insights into your cluster's performance, enable monitoring of the cluster using the steps in the MS document. It's easy-peasy really; you can simply follow through the document.

I tried it for one of my Azure Arc enabled clusters; you can refer to the video below to view the process.

Note: I enabled monitoring through Bash and integrated my k8s cluster with an existing Log Analytics workspace. For automated CI/CD deployments, you can also use service principals as described in the doc.
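For context, the monitoring doc drives this through a downloadable Bash script; below is roughly what my flow looked like. The aka.ms link and the script's flag names are as given in the preview doc at the time of writing, so treat them as assumptions and verify against the current doc (the workspace name my-law is a placeholder):

  # Resource IDs of the Arc enabled cluster and the existing Log Analytics workspace
  ARC_CLUSTER_ID=$(az connectedk8s show --name kubeadmclstr --resource-group arc-demo-rg --query id -o tsv)
  WORKSPACE_ID=$(az monitor log-analytics workspace show --resource-group arc-demo-rg --workspace-name my-law --query id -o tsv)

  # Download and run the onboarding script referenced in the monitoring doc
  curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script
  bash enable-monitoring.sh --resource-id "$ARC_CLUSTER_ID" --workspace-id "$WORKSPACE_ID"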

Voila, now I can view my k8s cluster and the associated metrics & logs directly from the Azure portal. Of course, in the real world these would be your production k8s clusters; but as the service is still in preview, you should use it for test and dev purposes and not in production. Hope this blog + videos will help you get started. Happy learning!!




