
Azure VM migration using PowerShell


Microsoft recommends ARM for all new deployments in Azure, and all new developments, features and services will be available in ARM going forward. However, there are still quite a few services that are yet to be migrated to ARM. What if one of the services you want to use is not currently available in ARM, and you have already set up the rest of your environment in ARM? In such a scenario, you can always set up a site-to-site VPN between the classic (V1) VNet and the ARM VNet, a process that is already well documented.

 
That being the case, what if you want to test the interoperability of services and need to move a few VMs that are already set up in ARM over to classic? I know it is not a very common scenario, and it is not a recommended approach for production deployments either; ARM is definitely the way to go. However, to enable that test run you might badly want to do before taking the plunge, we will look at the process of creating a new VM in the classic portal from the hard disk of an ARM VM, using Azure PowerShell.

First, you will have to log in to your Azure account

 Login-AzureRmAccount
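
If your account has access to more than one subscription, you may also want to point the ARM cmdlets at the subscription that holds the source VM. A minimal sketch, where the subscription name is a placeholder:

# Select the ARM subscription containing the source VM (subscription name is a placeholder)
Select-AzureRmSubscription -SubscriptionName "<ARM-subscription-name>"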

Enter the source blob URI, i.e. the location of the ARM VM's VHD

$sourceBlobUri = "https://<Source-storagename>.blob.core.windows.net/vhds/<vhdname>.vhd"
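
If you are not sure of the exact URI, it can be read from the VM object itself, provided the VM uses an unmanaged (storage account based) OS disk. A minimal sketch, where the resource group and VM names are placeholders:

# Read the OS disk VHD URI from the ARM VM (unmanaged disks only; names are placeholders)
$vm = Get-AzureRmVM -ResourceGroupName "<Resource-group-name>" -Name "<VM-name>"
$sourceBlobUri = $vm.StorageProfile.OsDisk.Vhd.Uri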

Set the Source context

$sourceContext = New-AzureStorageContext -StorageAccountName <Source-Storagename> -StorageAccountKey <Storage access key>
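
Instead of copying the key from the portal, it can also be pulled with the ARM storage cmdlets. A minimal sketch, with placeholder resource group and account names; note that the output shape of Get-AzureRmStorageAccountKey varies slightly between AzureRM module versions:

# Fetch the source storage account key via ARM (newer module versions return an array of key objects)
$srcKey = (Get-AzureRmStorageAccountKey -ResourceGroupName "<Resource-group-name>" -Name "<Source-storagename>")[0].Value
$sourceContext = New-AzureStorageContext -StorageAccountName "<Source-storagename>" -StorageAccountKey $srcKey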

In the destination context, give the name of your classic storage and its key

$destinationContext = New-AzureStorageContext -StorageAccountName <dest-storagename> -StorageAccountKey <Storage access key>
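
The classic storage key can be copied from the classic portal, or, once the service management module is authenticated (the publish settings import shown later in this post), retrieved with Get-AzureStorageKey. A minimal sketch:

# Retrieve the classic storage account's primary key (requires classic/ASM authentication)
$destKey = (Get-AzureStorageKey -StorageAccountName "<dest-storagename>").Primary
$destinationContext = New-AzureStorageContext -StorageAccountName "<dest-storagename>" -StorageAccountKey $destKey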

Copy the vhd to the destination storage

Start-AzureStorageBlobCopy -srcUri $sourceBlobUri -SrcContext $sourceContext -DestContainer "vhds" -DestBlob "rds1201647182929.vhd" -DestContext $destinationContext

This command copies the VHD from the source storage to the container named 'vhds' in the destination storage. Ensure that your VM is in a Stopped (Deallocated) status during this procedure. The copy took only a few minutes in my experience.
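
If the VM is still running, you can deallocate it before kicking off the copy, and since Start-AzureStorageBlobCopy is asynchronous you can poll the copy until it finishes. A minimal sketch, with placeholder resource group and VM names and the same destination blob name used above:

# Deallocate the source VM (names are placeholders)
Stop-AzureRmVM -ResourceGroupName "<Resource-group-name>" -Name "<VM-name>" -Force

# Block until the asynchronous blob copy completes and report its status
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "rds1201647182929.vhd" -Context $destinationContext -WaitForComplete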

Now we need to add this VHD as an OS disk in the gallery. Start by importing the publish settings file of the subscription

 Get-AzurePublishSettingsFile

Download the publish settings file and import it

 Import-AzurePublishSettingsFile  '<Publish settings file name>'

Set the current subscription

Set-AzureSubscription -SubscriptionName "<Subscription name>"

Now add the copied VHD, which is now in the destination (classic) storage account, as the OS disk

Add-AzureDisk -DiskName "OSDisk" -MediaLocation "https://<dest-storagename>.blob.core.windows.net/vhds/<vhdname>.vhd" -Label "My OS Disk" -OS "Windows"

Refresh the Azure classic portal

The OS disk will now be listed in the gallery, and you can go ahead and create a new VM from the disk!
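
If you prefer to stay in PowerShell rather than use the classic portal, the same disk can be attached to a new classic VM with the service management cmdlets. A minimal sketch, where the VM name, instance size, cloud service name and region are placeholders; no provisioning configuration is needed because the disk already contains the OS:

# Build a classic VM configuration from the registered OS disk (no image/provisioning config needed)
$vmConfig = New-AzureVMConfig -Name "<VM-name>" -InstanceSize "Small" -DiskName "OSDisk"

# Create the VM in a new cloud service (omit -Location to deploy into an existing service)
New-AzureVM -ServiceName "<Cloud-service-name>" -Location "<Azure-region>" -VMs $vmConfig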

Note: The reverse of what is explained here is also possible, that is, migration from classic to ARM. There are well-documented tools available for the same: https://github.com/fullscale180/asm2arm


Ref: https://azure.microsoft.com/en-in/documentation/articles/storage-migration-to-premium-storage/
