
Kubernetes best practices in Azure: AKS namespace isolation and AAD integration



Once you have decided to run your workloads in the Azure Kubernetes Service (AKS), there are certain best practices to follow during design and implementation. In this blog we will discuss two of these recommended practices and the practical aspects of their implementation: Azure AD integration and namespace isolation.



While AAD helps authenticate users to your AKS cluster using the existing users and groups in your Azure AD, namespace isolation provides logical isolation of the resources they use. It is useful in multi-tenant scenarios where the same cluster is shared by different teams or departments to run their workloads. It is also useful for running, say, dev, test, and QA environments for an organization in the same cluster. Combining AAD integration with namespaces allows users to log in to their namespace using their Azure AD credentials.

AAD integration with AKS:

The following Microsoft document will get you started with AAD integration of an AKS cluster: https://docs.microsoft.com/en-us/azure/aks/aad-integration


Please note that you cannot convert a non-RBAC-enabled cluster to an RBAC-enabled one; RBAC has to be enabled during cluster creation. Before following the steps in the document, make sure you have Azure tenant administrator rights to grant permissions to the server and client applications.


The 'az aks create' command sample in the reference document should help with the cluster creation. It creates the cluster with three nodes, but if you want to tweak it a bit, especially if you are playing around with the service for learning purposes and don't want to burn through your subscription credits, you can use the "--node-count 1" argument to limit the number of nodes to one. Additional options can be used with the 'az aks create' command for further customization, for example if you want to change the VM SKU. The full reference for the options can be found here: https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create
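As a sketch, the single-node variant might look like the following. The resource group and cluster names are placeholders, and the AAD server/client application IDs and secret are the values you create by following the reference document, not values from this post:

# Create a single-node, AAD-integrated AKS cluster.
# All bracketed values and names below are placeholders - substitute
# your own resource group, cluster name, and AAD application details.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --generate-ssh-keys \
  --aad-server-app-id <server-app-id> \
  --aad-server-app-secret <server-app-secret> \
  --aad-client-app-id <client-app-id> \
  --aad-tenant-id <tenant-id>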


AKS namespaces and RBAC authentication:


Kubernetes has three initial namespaces: default, kube-system, and kube-public. You can create a new namespace using the following sample namespace.yaml file (Ref: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ )


apiVersion: v1
kind: Namespace
metadata:
  name: testnamespace



Create the namespace using the kubectl create command:


kubectl create -f namespace.yaml
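You can then confirm that the new namespace appears alongside the initial ones:

kubectl get namespaces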



The next step is to create a role and a role binding. In the reference document for enabling RBAC on AKS, a role binding is created, but it binds to the cluster-admin role. However, we need a role and role binding that give users access to resources within a specific namespace. The following Kubernetes reference document has some sample files for roles and role bindings; you might want to tweak them a bit to change the namespace reference to the namespace you created earlier: https://kubernetes.io/docs/reference/access-authn-authz/rbac/


Sample file for creating a role that has access to read pods in the namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: testnamespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
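Assuming you saved the manifest above as role.yaml (the file name is just an assumption for illustration), you can apply it and verify that the role exists in the namespace:

kubectl apply -f role.yaml
kubectl get roles --namespace testnamespace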




While using AKS with RBAC, it is beneficial to give Azure AD groups access to a given namespace by providing the Azure AD group's object ID in the role binding YAML, as shown in the sample below.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: testnamespace
subjects:
- kind: Group
  name: <Azure AD Group ID>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader 
  apiGroup: rbac.authorization.k8s.io
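The group ID in the subjects section is the object ID of the Azure AD group. A sketch of looking it up and applying the binding, assuming the manifest above is saved as rolebinding.yaml and your group is named "AKSDevTeam" (both names are hypothetical placeholders):

# Fetch the object ID of the Azure AD group (group name is a placeholder)
az ad group show --group "AKSDevTeam" --query objectId --output tsv

# Apply the role binding after substituting the object ID into the YAML
kubectl apply -f rolebinding.yaml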


After applying the configurations, you can log in to the AKS cluster using the credentials of a user added to the AD group (Ref: https://docs.microsoft.com/en-us/azure/aks/aad-integration#access-cluster-with-azure-ad )
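For example, a user in the group would fetch the user (non-admin) cluster credentials and then be prompted for an Azure AD device login on the first kubectl command. The resource group and cluster names below are the same placeholders used earlier:

# Get the user (non-admin) credentials for the cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# The first kubectl command triggers the Azure AD device login prompt;
# per the role binding above, this user can only read pods in testnamespace
kubectl get pods --namespace testnamespace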
Tip: Users can change context to the namespace to which they have access before running kubectl commands; otherwise they would have to use the --namespace switch with each command they run in the cluster. Refer to this Kubernetes document for instructions on switching namespace context: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
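For the sample namespace created above, the context switch would look like this (assuming a reasonably recent kubectl that supports the --current flag):

# Make testnamespace the default namespace for the current context
kubectl config set-context --current --namespace=testnamespace

# Subsequent commands now run against testnamespace by default
kubectl get pods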