Tuesday, December 10, 2013

Windows Azure: Powershell script to update instance type in .csdef file

Here is a simple PowerShell script to change the Azure instance type in your .csdef files. Run it from the root of your code repository and enter the instance type, e.g. "Small", "ExtraSmall" or "Medium", when prompted.

$allCsDefFiles = Get-ChildItem -Recurse -Filter *.csdef | ForEach-Object { $_.FullName }
$newvmsize = Read-Host 'Enter the instance type'

foreach ($thisCsDefFile in $allCsDefFiles)
{
    [xml]$thisCsDefXml = Get-Content $thisCsDefFile
    $root = $thisCsDefXml.get_DocumentElement()

    if (!$root.WebRole.vmsize) {
        Write-Host "No web role found in $($root.name)"
    } else {
        $root.WebRole.vmsize = $newvmsize
        Write-Host "Web role size of $($root.name) changed to $($root.WebRole.vmsize)"
    }

    if (!$root.WorkerRole.vmsize) {
        Write-Host "No worker role found in $($root.name)"
    } else {
        $root.WorkerRole.vmsize = $newvmsize
        Write-Host "Worker role size of $($root.name) changed to $($root.WorkerRole.vmsize)"
    }

    # Persist the change back to the .csdef file
    $thisCsDefXml.Save($thisCsDefFile)
}



Monday, December 2, 2013

Virtual fibre channel in Hyper V

The virtual fibre channel option in Hyper-V allows connections to pass through from a physical fibre channel HBA to a virtual fibre channel HBA, while still retaining flexibility such as live migration. The requirements are:


  • VM should be running Windows Server 2008, 2008 R2 or Windows Server 2012
  • Supported physical HBA with N_Port Virtualization(NPIV) enabled in the HBA. This can be enabled using any management utility provided by the SAN manufacturer.
  • If you need to enable live migration, each host should have two physical HBAs, and each HBA should have two World Wide Names (WWNs). A WWN is used to establish connectivity to FC storage. When you perform a migration, the second node can use the second WWN to connect to the storage, and then the first node can release its connection. Thereby storage connectivity is maintained during live migration
Configuring virtual fibre channel is a two-step process.

Step 1: Create a Virtual SAN in the Hyper-V host

First, click on the Virtual SAN Manager available in Hyper-V Manager.

Select the option to create a new virtual fibre channel SAN, give it a name and select your available physical SAN HBAs.

Thus the physical SANs are made available to the virtual machines, but you will still need to add those SANs to your VMs when required. A single host can be connected to multiple SAN volumes.

Step 2: Add the Virtual fibre channel adapter to VM and connect to your SAN

Up to four virtual fibre channel adapters are possible on a VM; however, you cannot add a virtual fibre channel adapter while the VM is switched on.

Right-click on your VM -> Settings -> Add Hardware, select Fibre Channel Adapter and click OK

Select the Virtual SAN that we created in Step 1 and click OK

Now start the VM and use the virtual HBA to connect to the physical SAN storage
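For reference, the same two steps can be scripted with the Hyper-V PowerShell module on Server 2012. This is only a sketch; the SAN name and VM name are illustrative, and you may need to filter Get-InitiatorPort down to your fibre channel ports:

```powershell
# Step 1 (sketch): create a virtual SAN from the host's FC host bus adapters.
# "VSAN1" and "VM01" are example names, not anything from this setup.
New-VMSan -Name "VSAN1" -HostBusAdapter (Get-InitiatorPort)

# Step 2: add a virtual fibre channel adapter to a (switched off) VM
# and connect it to the virtual SAN created above
Add-VMFibreChannelHba -VMName "VM01" -SanName "VSAN1"
```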

What happens during Live migration?

Each Virtual HBA will have two sets of addresses to facilitate live migration.

Let's find out what exactly happens during a live migration.

Initially, when the SAN is connected to the VM, it uses one of the World Wide Name sets, set A, to connect to the SAN.

When we initiate the live migration, the destination starts using the second set, set B.

Thus FC connectivity is maintained, and once migration is completed the connection is flipped over to the second set.

This ensures availability during live migration.

Image courtesy / Ref: http://www.microsoftvirtualacademy.com/ & Techtarget.com

Tuesday, November 26, 2013

Hyper V Server 2012 remote management from Windows 8

Now that we have had a look at the installation and initial configuration of Hyper-V Server 2012 in my previous blog post, let's start on the management part.

In this blog, I will explain how to manage your Hyper V installation from a Windows 8 machine

Remote management:

Since Hyper-V Server 2012 is a server core machine, you may want to manage it remotely using the familiar GUIs and MMC consoles. You can do so, but first you need to set the firewall rules to allow it. In the command prompt window of the server, get a PowerShell prompt by typing "powershell". Now you can execute the following PowerShell command

Enable-NetFirewallRule -DisplayGroup *

Note: I used this command since it is my test network; you may want to lock down the firewall rules a bit in a production network
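If you prefer not to open everything, a tighter sketch could enable only the rule groups needed for remote management. The group names below are the English display names and are given as examples; verify the exact names on your build with Get-NetFirewallRule:

```powershell
# Enable only a few management-related rule groups instead of all of them
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"
Enable-NetFirewallRule -DisplayGroup "Remote Event Log Management"
Enable-NetFirewallRule -DisplayGroup "Remote Volume Management"
```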

In order to connect to the Hyper-V server using MMC from my PC, I had to run the following command in the PC's command prompt

cmdkey /add:<ServerName> /user:<UserName> /pass:<password>

Servername - I used the IP of the Hyper-V server
Username, Password - The credentials of the Hyper-V server
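For example, with illustrative values filled in (the IP, user and password below are placeholders, not real credentials):

```powershell
# Store credentials for the Hyper-V server so MMC connections can use them
cmdkey /add:192.168.1.50 /user:Administrator /pass:P@ssw0rd1
```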

Ref: http://technet.microsoft.com/en-us/library/ddb147c1-621c-4b89-9003-81c93ba050d7#BKMK_1.4

In this scenario, the Hyper-V server was not a member of a domain, but my PC was. Hence, by default, when we try connecting to the Hyper-V server through MMC, it will try to connect using your domain credentials and you will get an error.

Managing from Windows 8 PC:

1) The Hyper-V management tools are available as a feature in Windows 8. You can install them from the "Turn Windows features on or off" window

PS: You can also manage Hyper-V from your Windows 7 PC by installing the Remote Server Administration Tools pack. Somehow the installation was taking ages on my Windows 7 machine, and hence I opted for Windows 8
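On Windows 8, the same feature can also be enabled from an elevated PowerShell prompt. The feature name below is what I would expect; verify it on your machine with Get-WindowsOptionalFeature -Online:

```powershell
# Install the Hyper-V management tools (GUI consoles) on Windows 8
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients
```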

2) Now you need to set the Windows firewall rules in Windows 8 to allow access to the Hyper-V server. In an elevated PowerShell window, run the following command
Enable-NetFirewallRule -DisplayGroup *

3) In my test scenario, both the Hyper-V server and the Windows 8 PC (hereafter referred to as the client machine) were not members of a domain. So if you want to manage the hypervisor from the client machine, you need to create a local admin account on the Hyper-V server that matches your client admin credentials. You can do so using option 3 in the sconfig.cmd window

4) Now if you try connecting to the Hyper-V server from Hyper-V Manager, you might get the error
"Access denied. Unable to establish communication between Client and Server". You will have to tweak the COM security permissions on your client to sort this out. This can be done from the DCOMCNFG MMC.

Open the console, go to Component Services > Computers > My Computer
Right click, select properties of "My Computer" -> COM Security Tab
Select "Edit Limits" on the Access permissions area

Scroll down to find the "Anonymous Login" group and ensure that "Remote access" is allowed

 5) You can set the server name in Hyper-V Server 2012 using the 2nd option in the sconfig.cmd window, and use this server name in Hyper-V Manager to connect to the hypervisor

Note: If your client machine is not in the domain, you will need to add a host entry in the client's hosts file to ensure that name resolution happens
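A quick way to add that entry from an elevated PowerShell prompt on the client (the IP and server name are examples only):

```powershell
# Append a hosts entry so the Hyper-V server name resolves on the client
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.1.50  HYPERV01"
```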

After this, you should be able to connect to the Hyper-V server from the management console and create VMs!

Hyper V Server 2012 installation on VMware Workstation 8

Having heard a lot about the latest free virtualization offering from Microsoft, Hyper-V Server 2012, I couldn't resist giving it a whirl. After all, it is not every day that Microsoft comes out with "free" offerings ;)

Let us admit it: ever since the advent of virtualization, we have had few physical servers lying around; all of them have joined the virtualization bandwagon. My case was no different, so I decided to try out Hyper-V Server 2012 as a virtual machine in VMware Workstation 8 installed on my PC.

Installation preparation:

A few things to be taken care of before you start the actual installation:

1) Download the Hyper-V Server 2012 ISO from the Microsoft site:


2) VMware Workstation 8 does not have Server 2012 in the Windows OS list. Hence you need to select the option "Windows Server 2008 R2 x64" when you create the virtual machine

3) There is a small tweak to the processor settings that should be done before starting the installation. Edit the virtual machine settings -> Processors and select the option "Virtualize Intel VT-x/EPT or AMD-V/RVI"

4) The last step is to tweak the vmx file of the VM and add the following setting

PS: The .vmx file can be found in the working directory of the VM. Go to VM settings -> Options -> General and refer to the Working directory setting on the right pane

All done now! You can connect the downloaded ISO and start the installation.

Installation procedure:

It is pretty straightforward; screenshots below

1) Select the language, time & keyboard format

2) Accept the license agreement

3) Now the installation will start

4) Once completed, you will get a prompt saying that the administrator password needs to be changed.

5) Set the administrator password and log in!

Now that you have logged in, you will be welcomed by two windows: one command prompt in a normal shade of black, and another command prompt in a pretty shade of blue, called sconfig.cmd

As you guessed correctly, this is a stripped-down server core edition of Windows Server 2012 with Hyper-V, and hence there will not be any GUI. You need to do the initial configuration from the sconfig.cmd prompt

Initial Configuration:

First things first, let's get the network configured

1) Select option 8. It will show the current network connection settings.


2) If you already have a DHCP server in your network, you will automatically get an IP from it. However, it is always good to set a static IP from a management perspective. In order to set a static IP, select the index number of the adapter. You will get options to set the IP address and DNS server, as well as to clear the DNS settings

3) When all was set up and done, I realized that I was unable to ping the Hyper-V server from any of the other machines in the network. However, the server was able to ping other machines. It turned out that ping is not enabled by default; we need to enable it through the remote management option in sconfig. Select option 4 to do this

4) You need to select option number 3 in the above menu, i.e. "Configure Server response to Ping", to enable ping to the machine
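Under the hood, that sconfig option just opens the firewall for ICMP echo requests. A sketch of the equivalent done directly from the server's PowerShell prompt (the rule name is arbitrary):

```powershell
# Allow inbound ICMPv4 echo requests (ping) on the Hyper-V server
netsh advfirewall firewall add rule name="Allow inbound ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
```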

In my next blog post, I will explain how to manage your Hyper V server remotely..

Reference for Installation prep: This nice blog from Veeam

Wednesday, November 20, 2013

Windows server 2012: where is my start button??

If you have been using Windows Server OS for a while, the one thing that will strike you most when you log in to Windows Server 2012 is that there is no start button! What? How am I going to manage it?
Microsoft feels that you really don't need a start button, since you can do almost everything from your Server Manager, or even remotely from your desktop. After all the initial configurations are done, you could also do away with the GUI and go back to the server core option (in Server 2012, there is an option to add and remove the GUI).

So does that mean you need to learn to live without a start button? Actually no, the start button is very much there. Let's start looking for it.

Option 1:

There is a "charms" bar on the side of your desktop, where you will find a "Start" option. You can use the "Windows + C" shortcut to pop out the charms bar

Option 2:

There is a hidden "start area" in the bottom left corner of your desktop, in the blank space next to your Server Manager icon (PS: the desktop is also called the start screen in Server 2012 jargon). Just hover your mouse over there and the start button will pop out.

You can click on the start option and then start typing your shortcuts; the search option will come up and dutifully find your application for you.

You can then right-click them and add them as icons on your desktop, pin them to your task bar, etc.

Option 3:

If you press the Windows key on your keyboard, it will take you to the start menu, and you can type your shortcuts there.

Hope that helps!!

Windows Server 2012 Editions & hardware requirements.

This article gives a brief overview of the various editions of Windows Server 2012 available.

If you are purchasing or downloading the ISO, there are only two editions of Windows Server 2012 available. They are:

  •  Windows Server 2012 Standard Edition 
  •  Windows Server 2012 Datacenter Edition

As opposed to Windows Server 2008, there is functionally no difference between the two editions, i.e. clustering, Hyper-V etc. are possible in both. Also, there are no hardware limitations between the editions. The only difference is in the virtualization rights: while the Standard edition licenses up to 2 virtual instances, Datacenter provides a license for unlimited virtual instances.

There are other flavors of the OS that are available through OEMs. Given below are the details:

  • Windows Server 2012 Foundation server 
  • Windows Server 2012 Essentials 
  • Windows storage Server 2012 workgroup
  • Windows storage Server 2012 standard
  • Windows Multipoint Server 2012 Standard
  • Windows Multipoint Server 2012 Premium
  • Microsoft Hyper-V Server 2012

Among the above, Foundation has a limit of 15 users, which cannot be expanded; also, it doesn't support virtualization. Essentials can support up to 25 users and provides some basic backup/restore functionality, but again no virtualization functionality

Hyper-V Server 2012 is free to download, but doesn't come with any free virtual machine licenses. That means the hosted virtual machines should be individually licensed.

Minimum hardware requirements:

Processor - x86-64
Processor speed - 1.4 GHz
Memory - 512 MB
HDD space - 32 GB


Monday, November 18, 2013

DNS Round Robin

DNS Round Robin and NLB are two configurations that can be used to ensure application availability in scenarios where no shared storage is in use. They are useful for applications which handle one-time requests that need not be handled by a single server throughout the session. This article aims at explaining the basics of the DNS Round Robin technique.

DNS Round Robin:

Here the load-balancing act happens at the name resolution stage. There will be multiple entries in the DNS server for a host name, pointing to the application server IPs across which the load should be balanced. For example, there may be n IP addresses associated with a host name. When the first client requests a name resolution, the first IP from the list is returned. When a second client requests a name resolution, the next IP is returned. Thus we can ensure that the incoming requests for a particular application are equally distributed among the available application servers.

An additional option named netmask ordering can be used if you want to take into consideration the subnet of the querying client. If this option is enabled, the host IP address that is in the same subnet as the querying client is returned. For example, say the host entry app.testme.com has two records created, one in each of two subnets, and the netmask ordering option is enabled. When a client from the subnet 192.168.10.x/24 makes a query, the IP in that subnet is returned. When a client from the subnet 192.168.20.x/24 makes a query, the IP in 192.168.20.x is returned.

Both the DNS round robin and netmask ordering options are available in the properties of your DNS server, i.e. from DNS Manager console -> DNS server name -> Properties -> Advanced. Select the options "Enable round robin" and "Enable netmask ordering" to enable them.
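On a Windows Server 2012 DNS server, the two records from the example above could be created with the DnsServer PowerShell module. The zone name and IPs are illustrative:

```powershell
# Two A records for the same name; round robin / netmask ordering then
# decides which one a given client receives
Add-DnsServerResourceRecordA -ZoneName "testme.com" -Name "app" -IPv4Address "192.168.10.5"
Add-DnsServerResourceRecordA -ZoneName "testme.com" -Name "app" -IPv4Address "192.168.20.5"
```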


Saturday, November 16, 2013

Windows server 2003 to 2008: upgrade considerations

If you are planning to upgrade from Windows Server 2003 to 2008, here are some points to consider:
  • The normal boot-from-CD procedure doesn't work for the upgrade. You will have to start the upgrade process from within Windows Server 2003.
  • You can upgrade to an equivalent or higher edition of Windows Server 2008, i.e. you can upgrade from Windows Server 2003 Standard edition to Server 2008 Standard or Enterprise edition, but you cannot upgrade from 2003 Enterprise edition to 2008 Standard edition.
  • However, the upgrade options are slightly different in the case of the Web or Datacenter editions. You can only upgrade from Windows Server 2003 Web edition to Windows Server 2008 Web edition; the same applies to the Datacenter edition.
  • The final condition is that Windows Server 2003 Service Pack 1 should be installed if you want to upgrade to Server 2008. This means that if you have Windows Server 2003 R2, the upgrade is possible without any further service pack installation.
The following upgrade paths are possible:

Windows Server 2003 Standard Edition -> Windows Server 2008 Standard Edition or Enterprise Edition

Windows Server 2003 Enterprise Edition -> Windows Server 2008 Enterprise Edition

Windows Server 2003 Datacenter Edition -> Windows Server 2008 Datacenter Edition

Windows Server 2003 Web Edition -> Windows Web Server 2008

Windows Server 2003 for Itanium Enterprise Edition -> Windows Server 2008 for Itanium-based Systems

Note: You cannot upgrade to a different processor architecture, i.e. you cannot upgrade from Windows Server 2003 x86 Standard edition to a Windows Server 2008 x64 edition, even if the processor is 64-bit and will support the OS.

Friday, November 15, 2013

Understanding different editions of Windows server 2008

It is important to understand the various 'flavors' or editions of Windows Server 2008 before you start planning their deployment in your infrastructure. Given below is a brief description of the various versions and scenarios.

Standard Edition:

This edition is ideally suited for the role of DC, file and print server, DNS, DHCP & application server in small to medium-sized businesses. Basically, all your infrastructure network requirements can be met by this edition. It also supports network load balancing clusters.

Processing Power maximums: 

  • 4 GB RAM & 4 processors in SMP configuration (32-bit (x86) version)
  • 32 GB RAM & 4 processors in SMP configuration (64-bit (x64) version)

Limitation: Cannot be used for failover clustering or installation of Enterprise edition features like AD Federation Services. Though it supports Hyper-V, it bundles a Windows license for only one VM. Hence it is not an ideal choice for large-scale virtualization.

Enterprise Edition:

This edition is more suitable for large businesses. You can use this edition if you plan to install SQL Server Enterprise edition, Exchange Server 2007, Active Directory Federation Services, failover clustering, etc. The said products would need the extra processing power that the Enterprise edition supports.

Processing Power maximums:

  • 64 GB RAM & 8 processors in SMP configuration (32-bit (x86) version)
  • 2 TB RAM & 8 processors in SMP configuration (64-bit (x64) version)

Limitation: One limitation that I can think of is again in the virtualization area. Though it bundles more licenses (i.e. for 4 VMs) than the Standard edition, it is again not very useful for large-scale virtualization.

Datacenter Edition:

This edition is directly targeted at large businesses. The main advantage is that it offers unlimited virtual image rights; this will be the first choice for organizations going for large-scale virtualization. It also supports Enterprise edition features like failover clustering and ADFS. The Datacenter edition is only available through OEM manufacturers and implies a significant capital investment.

Processing Power maximums:

  • 64 GB RAM & 32 processors in SMP configuration (32-bit (x86) version)
  • 2 TB RAM & 64 processors in SMP configuration (64-bit (x64) version)

Web server Edition:

This is a stripped-down version of Windows Server 2008, which is specifically targeted at web applications. It doesn't support high-end hardware configurations like the other editions. However, it does support network load balancing clusters.

Processing Power maximums:

  • 4 GB RAM & 4 processors in SMP configuration (32-bit (x86) version)
  • 32 GB RAM & 4 processors in SMP configuration (64-bit (x64) version)

Windows server 2008 for Itanium based systems:

The Intel Itanium 64-bit architecture is significantly different from the usual x64-based architecture in Intel Core 2 Duo or AMD Turion processors. You will need the Windows Server 2008 Itanium edition if you are using an Itanium 2 processor. It provides both application and web server capabilities, but lacks other roles like virtualization & Windows Deployment Services.

Processing Power maximums:

  • 2 TB RAM & 64 processors in SMP configuration

Tuesday, November 12, 2013

Azure SQL administration: useful commands

Command to create a new DB as a backup/clone of an existing DB:

Connect to the master DB and execute the following command:

CREATE DATABASE <newDBname> AS COPY OF <name of DB to be backed up>;


One important thing to note is that the actual DB copy won't be complete even if the command completes successfully. In order to check the status of the copying, you can use the following command

SELECT name, state, state_desc FROM sys.databases WHERE name = 'Databasenew'

The value of the state_desc column in the output will be 'ONLINE' when the copying is completed and the DB is ready for use. The status will be shown as 'COPYING' while the DB copy is in progress.

Rename database:

Again, you need to connect to the master DB and execute the following query

USE master;
ALTER DATABASE <old DB name> MODIFY NAME = <new DB name>;

Rename Table:

If you need to rename a table in a DB, use the following command after connecting to that DB

sp_rename '<tablename>', '<tablename-new>'


Securing Windows Azure SQL using service accounts

When you create a SQL server in Windows Azure, you need to create an administrator username and password. This will be the super user account for that server, using which you can carry out any operation in any of the databases. That means you can also delete or rename databases using this account. Hence, you need to be very careful if you are planning to use these credentials in your application to access the Azure SQL database.

Creating service accounts for SQL is a safe option to restrict access to your database, and also to avoid use of the super admin account. You could create service accounts and add them to appropriate SQL roles which have the required permissions in the database, say read, write, execute, etc. Let's see how to achieve this:

  • First, create a SQL login after connecting to the master DB. Note that you will need your super admin account for connecting to the master DB.

          CREATE LOGIN <ServiceAccountname> WITH password='<password>'

          For eg: CREATE LOGIN testuser1 WITH password='Password'

  • Service accounts are intended to connect to a specific database. As the next step connect to your target database and create a new user from the login you created above

            CREATE USER <ServiceAccountname> FROM LOGIN <ServiceAccountname>;
            For eg: CREATE USER testuser1 FROM LOGIN testuser1

  • Now that you have created the service account in the database, you need to assign the required level of permissions for the user in the database. We will accomplish this using SQL roles with the correct permission levels. Connect to the target DB and execute the following to create what we can call a service account role

       CREATE ROLE <rolename>

      For eg:
      CREATE ROLE rolserviceaccount

  • Now assign the required rights for the service account role (again to be executed on the target DB)
      EXEC sp_addrolemember N'db_datawriter', N'<rolename>'
      EXEC sp_addrolemember N'db_datareader', N'<rolename>'
      EXEC sp_addrolemember N'db_ddladmin', N'<rolename>'

     For eg:
     EXEC sp_addrolemember N'db_datawriter', N'rolserviceaccount'
     EXEC sp_addrolemember N'db_datareader', N'rolserviceaccount'
     EXEC sp_addrolemember N'db_ddladmin', N'rolserviceaccount'

Please note that the roles used above are built-in SQL roles, which have read, write and DDL admin rights as the names indicate. You are adding the role that you created as a member of those built-in roles to get the required permissions.

  • If you need to provide execute permission, you could first create a db_execute role and grant it execute permissions, and then make your service account role a member of db_execute
      CREATE ROLE [db_execute] AUTHORIZATION [dbo]
      GRANT EXECUTE TO [db_execute]

     EXEC sp_addrolemember N'db_execute', N'<rolename>'

  • The last step is to make your service account a member of the corresponding service account role
        For eg:
         EXEC sp_addrolemember N'rolServiceaccount', N'testuser1'  

  • You can verify that the permissions are all set correctly using the following SQL query

select m.name as Member, r.name as Role
from sys.database_role_members
inner join sys.database_principals m on sys.database_role_members.member_principal_id = m.principal_id
inner join sys.database_principals r on sys.database_role_members.role_principal_id = r.principal_id


Monday, November 11, 2013

Windows Azure architecture and workflow

So, you just need your .cspkg and .cscfg files to do a deployment to Azure. When the deployment is complete, the instances are spun up, the application is up and running, and during the whole process you didn't have to lift a finger! That is what we call PaaS magic. But what actually happens in the background? Let's find out.

Red Dog Front End (RDFE): When you interact with the Azure platform through the management portal or Visual Studio, you are actually talking to an API called RDFE. The requests are passed on by RDFE to the Fabric Front End (FFE) layer.

Fabric Front End (FFE): It receives the requests from RDFE and converts them to Azure fabric commands, which are then passed on to the Azure Fabric Controller. The FFE decides on the location of the VM based on inputs such as affinity group and geo location, and also on fabric inputs such as machine availability.

Azure Fabric Controller: This is considered to be the kernel of the cloud OS, simply because it manages all the resources in the datacenter. The fabric controller is responsible for provisioning and managing the VMs and their underlying hosts, deploying applications, monitoring the health of the services, and redeploying them if required.

 As we all know, Azure uses Hyper-V based virtualization. The architecture of Hyper-V uses the concept of a root partition (aka the host machine) and child partitions (aka the guest VMs). When the fabric controller builds a root partition, i.e. a host in the datacenter, it installs an agent called the 'Host Agent' in the root partition. Each of the guest VMs has a guest agent installed in it, known as 'WindowsAzureGuestAgent'. Another agent, "WaAppAgent", is responsible for the installation, configuration and update of the WindowsAzureGuestAgent. This means that your guest agent update is decoupled from the guest OS upgrades. The Host Agent communicates with the WaAppAgent to do guest OS heartbeat checks and also gives instructions to bring a role to its goal state. If the heartbeat is not received for 10 minutes, the guest OS is restarted.

 In a role instance, WaAppAgent is listed as "RdAgent" in the Windows service list


 WindowsAzureGuestAgent has the following functions:

 - Guest OS level configuration, such as firewalls, ACLs, certificates, configuring as per the service package file, etc.
 - Communicates the role status to the fabric controller
 - Sets up the SID for the user which the role will be using
 - Starts the WaHostBootstrapper application

 If you log in to a role instance, you can see this listed as the service "Windows Azure Guest Agent"


 WaHostBootstrapper has the following functions:

 - It is responsible for starting all appropriate tasks and processes in the role as per the role configuration file
 - This service also monitors the child processes and raises a StatusCheck event on the role host process
 - Executes the simple startup tasks
 - Depending on the role type, it starts the host process, i.e. WaWorkerHost.exe in the case of a worker role, WaIISHost.exe in the case of a full IIS web role, or WaWebHost.exe in the case of an SDK 1.2 HWC web role
 - In the case of a full IIS web role, WaHostBootstrapper starts the IISConfigurator.exe process, which configures the IIS app pools pointed to E:\siteroot\<index>, where <index> is a 0-based website index

 WaHostBootstrapper is listed as a process in Task Manager with the description "Microsoft Windows Azure Runtime Bootstrapper"; it doesn't have a Windows service associated with it. WaWorkerHost.exe, WaIISHost.exe, WaWebHost.exe, IISConfigurator.exe, etc. are also listed as processes inside the role instance
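If you RDP into a role instance, you can spot these components yourself. A quick sketch from a PowerShell prompt inside the instance (simple name filters, nothing Azure-specific):

```powershell
# Services: WindowsAzureGuestAgent shows up here (WaAppAgent as "RdAgent")
Get-Service | Where-Object { $_.DisplayName -like "*Azure*" }

# Processes: WaHostBootstrapper and the role host (WaIISHost / WaWorkerHost)
Get-Process | Where-Object { $_.ProcessName -like "Wa*" }
```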

Reference: http://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx


Wednesday, November 6, 2013

SSL cert considerations in Windows Azure

If your Windows Azure application is using an SSL certificate, you need to configure it in both your service definition (.csdef) file and your .cscfg file. The whole process is explained clearly in the following Microsoft article:


Here, I am going to discuss a few considerations while configuring SSL. As you can see from the above link, the certificate should be defined in the .csdef file

        <Certificate name="SampleCertificate"
                     storeLocation="LocalMachine"
                     storeName="CA" />

The store location can be either 'LocalMachine' or 'CurrentUser', and the store name can be one of the following: My, Root, CA, Trust, Disallowed, TrustedPeople, TrustedPublisher, AuthRoot and AddressBook.
You can also specify a custom store name, in which case the store will be created.

 Interestingly, Microsoft by default does not allow direct import into the trusted root store. Even if you give the store name as "CA", the cert will be downloaded only to the intermediate cert store. You will have to write a startup task with elevated permissions to move the cert to the root store. However, you need to do this only if your SSL cert is issued by a provider who is not included in the Microsoft root certificate program. If a provider is part of the root certificate program, the root certificate corresponding to your SSL certificate will automatically be available in your Azure instance when you deploy it.
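A minimal sketch of such an elevated startup task, using the .NET X509Store API from PowerShell. The thumbprint below is a placeholder you would replace with your own certificate's thumbprint:

```powershell
# Move a certificate from the intermediate (CA) store to the trusted root store
$thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

$source = New-Object System.Security.Cryptography.X509Certificates.X509Store("CA", "LocalMachine")
$source.Open("ReadWrite")
$cert = $source.Certificates | Where-Object { $_.Thumbprint -eq $thumbprint }

if ($cert) {
    $dest = New-Object System.Security.Cryptography.X509Certificates.X509Store("Root", "LocalMachine")
    $dest.Open("ReadWrite")
    $dest.Add($cert)       # add to trusted root
    $dest.Close()
    $source.Remove($cert)  # remove from intermediate store
}
$source.Close()
```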

The comprehensive list of cert providers included in the root certificate program can be found in this link

Note: Azure had an issue with OS version 2.19_201309-01, where the root certs of providers from the MS root certificate program were not getting downloaded automatically. They have corrected it now and re-released the OS; it is sorted in OS versions 2.19_201309-03 and later.

Tuesday, November 5, 2013

Net use: System error 67 has occurred

While trying to map a SharePoint location using the net use command, the following error was thrown.

System error 67 has occurred.

The network name cannot be found.

Command used was: net use m: https://<sharepointurl> /user:domain\user <password>

Solution: This can happen if the "Desktop Experience" feature is not installed on Windows Server 2008 R2. Install the feature from Server Manager and restart the server; that will sort the issue.
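On Server 2008 R2 the feature can also be installed from PowerShell. A sketch, to be run from an elevated prompt:

```powershell
# Install Desktop Experience, then reboot so net use against SharePoint works
Import-Module ServerManager
Add-WindowsFeature Desktop-Experience
Restart-Computer
```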


Tuesday, October 29, 2013

Windows Azure administration: Useful resources

Here are some useful links for Windows Azure administrators

Windows Azure service dashboard: 


This dashboard gives a general overview of the Azure services across the world. You will get a region-wise status of various service offerings, say Compute, Storage, Active Directory, etc. If Microsoft has detected any issues with any of its datacenters, you will find it here. The data is refreshed every 10 mins, and there is an option to view historic data as well. This would be one of the first places to check if you feel that Azure is not behaving as expected.

Azure powershell cmdlet reference:


If you are an automation enthusiast and would like to automate your Azure management chores, the Azure PowerShell cmdlets open a world of opportunities. Of course, you need to install them first; please refer to my blog post here for getting started. The link above provides the complete PowerShell cmdlet reference for Azure.

Azure guest OS Releases :


If you use PaaS from Azure, Microsoft will take care of the guest OS update procedure (that is, if you set the OS version setting to automatic). A guest OS update is similar to the monthly patch update on your on-premises computers. What if you suspect that something is broken because of a recent OS update? The above RSS feed provides a brief of what went into the latest OS update of Azure.

Azure SDK compatibility matrix & guest OS families:


This link provides comprehensive details of the Azure SDK versions and the guest OS families in Azure that they are compatible with. Additionally, it provides information on the latest OS family versions and when they will expire.

Monday, October 28, 2013

Tip of the Day: Find the OS version/service pack/build number of Windows OS

How do you find which OS version, service pack and build you have?

Go to start-> run and type 'winver' (without the quotes)

A window will pop up which will show the OS version, service pack and build number

The build number glossary of Windows OS can be found here
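If you prefer the command line, the same details can be pulled with PowerShell via WMI — a quick sketch (Windows only; output fields may vary slightly by OS version):

```powershell
# Query the OS name, version, service pack and build number via WMI
Get-WmiObject Win32_OperatingSystem |
    Select-Object Caption, Version, ServicePackMajorVersion, BuildNumber
```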

vMotion : Introduction

vMotion is the process of moving a running virtual machine from one ESXi host to another. The disk files are not migrated (they stay on the shared storage); only the VM's memory and CPU state move from one server to another. In fact, if you ping the VM while it is moving, you will lose at most one or two ping packets.

vMotion happens in three stages:

- vCenter Server verifies that the VM is in a stable state
- The VM state is copied over to the destination. State includes the memory, registers and network connections
- The VM is resumed on the destination host

vMotion can happen due to any of the following reasons:

- To balance the load on ESXi hosts using DRS
- When VMs are being moved off a host so that the host can be shut down by DPM (Distributed Power Management)
- When you need to install patches using Update Manager or do hardware maintenance: the VMs are migrated using vMotion and the host is put into maintenance mode

vMotion requirements:

- You will need a vSphere Essentials Plus, Standard, Enterprise or Enterprise Plus license
- Shared storage between the ESXi servers - iSCSI, FC or NFS*
- A VMkernel interface on both ESXi servers with vMotion enabled
- The same network label on source and destination hosts; either standard or distributed switches can be used
- CPU compatibility between hosts, or processors of the same family if you are planning to use Enhanced vMotion Compatibility (EVC). That means you cannot migrate VMs from a host with an Intel processor to a host with an AMD processor.

*Starting with vSphere 5.1, vMotion without shared storage is possible, provided the destination host has access to the destination storage


Tuesday, October 22, 2013

VMware : Linked Clones

The Linked clones concept is similar to the normal VM cloning process, but with a storage saving twist ;)

When we create a linked clone, a new VM is created from a base VM, at the same state. This clone will use the base VM's hard disk for all read operations; however, all writes to the disk, i.e. any change to the data from the original disk, are written to a new disk. This is very similar to the concept of snapshots, where the original VMDK is read-only and all subsequent writes are done to a delta disk.

The main advantage of using linked clones is avoiding duplication of data. You can have any number of VMs created from the base virtual machine, but the base disk remains the same. This considerably reduces disk space usage, especially in cases like web server farms with multiple servers.


Monday, October 21, 2013

Azure IAAS : Enable RDP to Load balanced VMs in a cloud service

I faced a confusing situation recently, where I had to enable RDP to two VMs in the same cloud service using endpoints included in a load balanced set.

A load balanced set was created for the RDP port 3389 and both VMs were included in the set. However, if you select an individual VM from the management portal and click Connect, you will get the following error message:

"An external endpoint to the Remote Desktop port(3389) must first be added to the role"

That was pretty confusing, since the port is already defined in the load balanced set!

After playing around for a bit, I found out that I was doing it all wrong. The load balanced set works from a cloud service perspective, so the RDP load balanced set, along with the other load balanced ports, is defined for the cloud service. That means I can RDP by providing the cloud service name, and it will land me on one of the VMs in the cloud service. From that VM, you can RDP to any other VM in the cloud service by simply providing the VM name; not even the cloudapp.net suffix is required. So that is how you RDP to your VMs in a cloud service, though there is a chance of multiple hops if you have multiple VMs.

But is this the only option? What if you don't want to 'multi hop' to the VMs? Of course, there is a straightforward way of adding RDP endpoints individually to the VMs rather than creating a load balanced set. The catch here is that you need to use multiple public ports. If you wish to use the default port 3389, you can very well do so, but only for one VM in a cloud service: Azure won't allow you to use the same public port twice within the same cloud service. Hence you will have to go for a different/random port for each additional VM. The problem comes when you try to RDP to these random ports from within a firewalled network: you would need each port to be opened in your perimeter firewall towards the Azure IP address. Not a bright idea, I would say, since the Azure IP ranges keep changing; even Microsoft doesn't recommend hardcoding their IP ranges into firewall rules in your organization's network. Hence, better to go the 'multi hop' way.
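For completeness, adding individual RDP endpoints from PowerShell would look roughly like the sketch below (the service/VM names and public ports are placeholders, and this assumes the Azure PowerShell module with your subscription already set):

```powershell
# Give each VM in the cloud service its own RDP endpoint on a distinct public port
Get-AzureVM -ServiceName "mycloudsvc" -Name "vm1" |
    Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50001 |
    Update-AzureVM

Get-AzureVM -ServiceName "mycloudsvc" -Name "vm2" |
    Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50002 |
    Update-AzureVM
```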

Tuesday, October 15, 2013

VMware NSX: An introduction

After server and desktop virtualization, VMware is now focusing on network virtualization. Essentially the company has been focusing so far on the 'compute' market for Virtualization and now it has started working on a similar product for Network.

Let's admit it: networks take more time to provision. With virtualization coming in, the creation of servers, desktops etc. now takes minutes, compared to the hours/days/weeks situation of the pre-virtualization era. A new network requirement for the VMs can be sorted to an extent using a vSwitch, but what if the requirement goes beyond that? Say a router/firewall/VPN that should be used by the VM. Then you have to get in touch with the networking team, and it could take some time for things to get sorted out.

With NSX, VMware aims to address this bottleneck. The idea is to provision, back up and manage networks similar to how you manage your VMs now. There will be logical switches, routers, firewalls and VPNs. You can create virtual networks using these logical devices, connect your VMs to them, back up your network topology, create templates and deploy on demand. Your underlying physical network acts as the "packet forwarding backplane", as VMware puts it.

An interesting concept! I would love to see how this gets implemented in the real world. VMware is yet to come up with the details, and it has already tied up with various partners like Dell, HP and Juniper Networks to make this a reality. So the key is to wait and watch :)


SSL Web server cert analysis

Came to know about this site from a colleague of mine today

This is quite useful if you want to do a deep analysis of any SSL web server on the internet. It provides details about the cert used, certification paths, protocols etc.

Friday, September 20, 2013

Set Network ACLs using Windows Azure PowerShell Commands

In the latest update of the Azure PowerShell cmdlets, there is an option to set network ACLs for VM endpoints. Key points:

  • You can allow/block access to an endpoint based on an IP address range
  • A maximum of 50 ACL rules are possible per VM
  • Lower numbered rules take precedence over higher numbered rules
  • If you create a Permit ACL, all other IP ranges are blocked
  • Similarly, if you define a Deny rule, all other IPs are permitted
  • If no ACLs are defined, everything is permitted by default
Steps for setting a permit ACL for a particular IP are given below. Before executing them, make sure that you have set the subscription correctly as per my previous post.
  • Create a new ACL object
$acl = New-AzureAclConfig
  • Create the permit rule and add it to the ACL
Set-AzureAclConfig -AddRule -ACL $acl -Order 50 -Action Permit -RemoteSubnet "" -Description "Test-ACL configuration"

Here I am explicitly permitting access from a public IP

  • Now we need to apply this rule to the VM endpoint. In order to get the available endpoints of the VM, you can use the following command
Get-AzureVM -ServiceName testvm1 -Name testvm1 | Get-AzureEndpoint

Then you need to set ACL for the required endpoint. In this example, I am going to set an ACL for the RDP endpoint of my test VM

Get-AzureVM -Servicename rmtestmis2 -Name testvm1 | Set-AzureEndpoint -Name 'Remote Desktop' -Protocol tcp -LocalPort 3389 -PublicPort 3389 -ACL $acl | Update-AzureVM

  • Once the task is completed successfully, we can verify the ACL status using the following command
$endpoint = Get-AzureVM -ServiceName testvm1 -Name testvm1 | Get-AzureEndpoint -Name 'Remote Desktop'


Back to basics : Networking - Part 2

IPV6 Basics:

  • IPv4 uses a 32-bit address space whereas IPv6 uses a 128-bit address space
  • Represented by eight groups of hexadecimal quartets and uses Classless Interdomain Routing (CIDR)
  • The first 48 bits of the address are the network prefix, the next 16 bits are the subnet ID and the last 64 bits are the interface identifier
  • There are three kinds of IPv6 addresses: Unicast, Multicast and Anycast
  • Unicast: Identifies a single interface, equivalent to the IPv4 address of a machine
  • Multicast: Identifier for multiple network interfaces. Commonly used for sending signals to a given group of systems or for broadcasting video to multiple computers
  • Anycast: The packet is delivered to the nearest (in terms of routing) device
  • IPv6 does not have broadcast messages
  • Unicast and Anycast addresses have the following scopes:
  • Link-local: Scope is the local link (i.e. nodes on the same subnet). The prefix for link-local addresses is FE80::/64
  • Site-local: Scope is the organization, i.e. private site addressing. The prefix is FEC0::/48
  • Global: Used for IPv6 internet addresses, which are globally routable
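The scope rules above can be checked programmatically; for instance, the .NET IPAddress class (usable from any PowerShell prompt) flags link-local, site-local and multicast addresses — a small sketch:

```powershell
# Classify IPv6 addresses using the .NET IPAddress class
$ip = [System.Net.IPAddress]::Parse("fe80::1")
$ip.IsIPv6LinkLocal    # True  - the address falls in the FE80:: link-local range
$ip.IsIPv6SiteLocal    # False
[System.Net.IPAddress]::Parse("ff02::1").IsIPv6Multicast   # True - addresses starting FF are multicast
```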
Difference between TCP and UDP:

  • TCP is a connection-oriented protocol. Data will be delivered even if the connection degrades, because the receiver will request the lost parts, and there will not be any corruption while transferring a message. UDP is a connectionless protocol, in the sense that you send it and forget it: there is no guarantee of corruption-free transmission
  • TCP: If messages are sent one after the other, the message that is sent first will reach first. In the case of UDP, you cannot be sure of the order in which the data arrives
  • TCP: Data is sent as a stream, with nothing distinguishing where a packet starts or ends. UDP: data is sent as datagrams which arrive whole
  • TCP examples: World Wide Web (HTTP), SMTP, FTP, SSH
  • UDP examples: DNS, VoIP, TFTP etc.
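The handshake difference is easy to see from PowerShell with the .NET socket classes — a sketch (the host name is a placeholder and this needs network access):

```powershell
# TCP: an explicit connection (three-way handshake) must succeed before any data flows
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("www.example.com", 80)   # blocks until the handshake completes
$tcp.Connected                        # True once connected
$tcp.Close()

# UDP: no handshake - the datagram is sent "fire and forget", delivery is not guaranteed
$udp = New-Object System.Net.Sockets.UdpClient
$bytes = [System.Text.Encoding]::ASCII.GetBytes("ping")
$udp.Send($bytes, $bytes.Length, "www.example.com", 53) | Out-Null
$udp.Close()
```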
Spanning Tree Protocol: Ensures that there are no loops when creating redundant paths in your network.

One switch is elected as the root switch; decisions such as which ports to put in forwarding mode and which in blocking mode are taken by this switch.

Command to set root switch for a vlan: 
set spantree root vlan_id 


Managing Windows Azure using PowerShell cmdlets

In order to start managing your Azure subscriptions using PowerShell cmdlets, you first need to install Windows Azure PowerShell from here

  • Open the Azure PowerShell window from Start -> All Programs -> Windows Azure -> Windows Azure PowerShell
  • In order to manage a subscription, you will have to import its management certificate. You can use the commands below

$cert = new-object System.Security.Cryptography.X509Certificates.X509Certificate2
$Filepath = "D:\certs\managementcert.pfx"    # Provide the path to your management cert here
$password = 'Password'                       # Give your certificate password here
$cert.Import($Filepath,$password,'Exportable,PersistKeySet')   # The variable $cert now holds your management certificate

  • Now you need to set your subscription ID and subscription name. You can get the values from the management portal -> Settings
$subscriptionId = '1935b212-1179-4231-a4e6-g7614be788s4'
$subscriptionName = 'YOUR_SUBSCRIPTION_NAME'

  • Next, set the Azure subscription
Set-AzureSubscription -SubscriptionName $subscriptionName -SubscriptionId $subscriptionId -Certificate $cert

Now you can start executing Azure cmdlets against the resources in your subscription.

The complete reference of Azure PowerShell cmdlets can be found here:


Wednesday, September 18, 2013

Windows Azure fault domain and upgrade domain

Fault Domain: In simple words, a fault domain can be considered a single point of failure. For example, servers hosted in a rack in a data center can be considered a fault domain, because a power failure to the rack will bring down all the servers in it. At deployment time, the instances in a role are assigned to different fault domains to provide fault tolerance (only when there are multiple fault domains).

Upgrade Domain: This concept applies during a deployment upgrade. Each upgrade domain can be considered a logical unit of deployment. An application upgrade is carried out on a per-upgrade-domain basis: the instances in the first upgrade domain are stopped, upgraded and brought back to service, followed by the second upgrade domain, and so on. This ensures that the application remains accessible during the upgrade process, though with reduced capacity.
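The number of upgrade domains is controlled from the service definition file via the upgradeDomainCount attribute (the default is 5 if the attribute is omitted). A sketch, with a hypothetical service name:

```xml
<ServiceDefinition name="MyCloudService" upgradeDomainCount="3"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- web and worker role definitions go here -->
</ServiceDefinition>
```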

Windows Azure storage concepts

You can create storage accounts in Windows Azure and provide your applications access to the tables, blobs and queues in them.

  • The maximum capacity of a storage account is 200 TB if it was created after June 8th, 2012, and 100 TB if created before that
  • Geo Redundant Storage (GRS): Replicates the storage to a secondary, geographically separate location. Data is replicated asynchronously to the secondary location in the background. If there is a failure in the primary location, storage will fail over to the secondary location
  • Locally Redundant Storage (LRS): Data is replicated three times within the same datacentre. All Windows Azure storage is locally redundant
  • Affinity group: A geographical grouping of cloud deployments and storage accounts. By grouping the services used by your application in an affinity group in a particular geographical location, you can improve service performance
  • Storage account endpoints: The highest namespace for accessing the tables, queues and blobs in a storage account. The default endpoints have the following values
Blob service: http://mystorageaccount.blob.core.windows.net
Table service: http://mystorageaccount.table.core.windows.net
Queue service: http://mystorageaccount.queue.core.windows.net

  • Storage account URLs: URLs for accessing an object in a storage account, e.g. http://mystorageaccount.blob.core.windows.net/mycontainer/myblob
  • Storage access keys: These are the 512-bit access keys generated by Windows Azure when you create a storage account. There will be two keys, primary and secondary, and you can choose to regenerate the keys at a later point if required

Blobs: Blobs are mainly used to store large amounts of unstructured data. All blobs must be created inside a container, and there can be an unlimited number of containers in an account. There are two types of blobs - page blobs (maximum size 1 TB) and block blobs (maximum size 200 GB).

Tables: Tables are used to store structured but non-relational data. A table is a NoSQL datastore that can service authenticated calls from inside and outside of the Windows Azure cloud. A table is a collection of entities, but it doesn't force a schema on the entities, which means a single table can have entities with different sets of properties. An entity is a set of properties, similar to a DB row, and can be up to 1 MB in size. A property is a name-value pair; an entity can have up to 252 properties for storing data, plus three system-defined properties: a partition key, a row key and a timestamp.

Queues: A service for storing messages that can be accessed using authenticated HTTP or HTTPS calls. A single queue message can be up to 64 KB in size, and a queue can hold millions of messages, limited only by the maximum storage capacity. It is mostly useful in scenarios where there is a backlog of messages to be processed asynchronously, or to pass messages from a Windows Azure web role to a worker role.

Windows Azure host and guest OS updates

The Windows Azure host OS is the root partition, which is responsible for creating child partitions to execute Windows Azure services and the guest OS. The host OS is updated at least once a quarter to keep the environment secure. Updating the host OS means that the VMs hosted on it must be shut down and restarted. While the upgrade is done, Azure ensures that VMs in different update domains are not down simultaneously, so the availability of hosted applications is not affected. An optimal order for updating the servers is identified before proceeding with the upgrade.

The Windows Azure guest OS runs on the VMs that host your applications in Azure. The OS is updated periodically, each time a new update is released. You can choose to have this done automatically or upgrade manually at a time of your choosing. Microsoft recommends automatic OS updates, so that known security vulnerabilities are taken care of and your application runs in an up-to-date environment.

In order to configure your guest OS for automatic updates, edit the ServiceConfiguration element in the .cscfg file as follows

<ServiceConfiguration serviceName="RM.Unify.Launchpad" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"  osFamily="2" osVersion="*" schemaVersion="2012-05.1.7">

osVersion="*" specifies that the OS should be updated automatically

PS: The different OS families are identified by the OS family number and should be read as follows

Windows Server 2008 SP2 - osFamily 1
Windows Server 2008 R2 - osFamily 2
Windows Server 2012 - osFamily 3

Configuring Diagnostics for Windows Azure cloud service

Steps for configuring the Windows Azure diagnostics are as follows:

  • Import the Diagnostics module in the csdef file
      <Import moduleName="Diagnostics" />
  • Options for tracing and debugging can be included in the Windows Azure application code
  • Custom performance counters can be created for web and worker roles using PowerShell scripts in startup tasks. You can collect data from the existing performance counters as well
  • Store diagnostics data in Azure storage, since the collected data is only cached and hence does not persist. The diagnostics storage can be defined in the cscfg file using the following setting
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=storagename;AccountKey=storageaccesskey" />

Replace storagename and storageaccesskey with the name and access key of your diagnostics storage account.

Tuesday, September 17, 2013

Input and Internal Endpoints in Windows Azure

Azure cloud services have two types of environments - production and staging. The production environment has a permanent DNS name associated with it, which resolves to a single public virtual IP (VIP). The DNS name of the staging environment keeps changing, and it also resolves to a public VIP.

Input endpoints are defined for enabling external connections to the public VIP of the cloud service. The HTTP, HTTPS or TCP protocol can be used for the connection. The ports, protocols and certificates to be used for the connection are defined in the csdef file in the <Endpoints> configuration section. A sample is given below

      <InputEndpoint name="httpsin" protocol="https" port="443" certificate="SSL" />
      <InputEndpoint name="httpin" protocol="http" port="80" />

  • Each defined endpoint must listen on a unique port
  • A hosted service can have up to a maximum of 25 input endpoints, which can be distributed among roles
  • The Azure load balancer uses the port defined in the config file to make the service available from the internet

Internal endpoints are used for role-to-role communication. Again, a maximum of 25 internal endpoints are available per hosted service. When you define an internal endpoint, the port is not mandatory; if it is not defined, the Azure fabric controller will assign one

         <InternalEndpoint name="InternalHttpIn" protocol="http" port="1000"/>


Configure RDP for Windows Azure cloud service instance

In order to RDP to Windows Azure cloud service instances, execute the steps given below:

  • Generate an encryption certificate and upload it to the respective cloud service. This certificate is used to encrypt the RDP credentials
  • Encrypt the RDP password using the certificate thumbprint. You can use the csencrypt command-line utility available with the Windows Azure SDK to encrypt the password - Ref: http://msdn.microsoft.com/en-us/library/windowsazure/hh403998.aspx
  • Import the RemoteAccess and RemoteForwarder modules in the csdef file
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
  • Update the Remote desktop connection configuration values in the cscfg file. The settings are

<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2014-06-27T23:59:59.0000000+05:30" />
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  • In the above settings, the values of the username and encrypted password should be updated
  • The cscfg file updated with the above settings can be deployed to the cloud service along with the cspkg file
  • Once the deployment is completed, log in to the Azure management portal -> Cloud Service -> Instances. Select the instance you want to connect to using RDP, and click Connect in the bottom menu
  • An RDP file will be downloaded, which you can open/save; then use the username and password provided in the .cscfg file to connect to the selected instance
  • In case you need to reset the password, go to Cloud Service -> Configure and select Remote on the bottom menu. You will get options to enable/disable RDP, set a new password, select the certificate, update the expiry date etc.

Windows Azure cloud services - Roles and config files

A Windows Azure cloud service is, in simple terms, an application designed to be hosted in the cloud, along with configuration files that define how the service should be run.

Two files decide the settings for the cloud service - the service definition file (.csdef) and the service configuration file (.cscfg)

Service definition file:

This file defines the settings used for configuring a cloud service:
Sites - Definition of websites or applications hosted in IIS7
InputEndPoints - End points used for contacting the cloud service
InternalEndPoints - Endpoints for role instances to talk to each other
Configuration Settings - Settings specific for a role
Certificates - Defines certificates used by a role
Local Resources - Details of local storage; this is a reserved directory in the file system of the virtual machine in which a role is running
Imports - Defines the modules to be imported for a role. For example, to enable RDP connections to a VM, we need to import the RemoteAccess and RemoteForwarder modules; to enable diagnostics, we need to import the Diagnostics module
Startup - used to define startup tasks that will be executed when the role starts

The service definition file is packaged along with the application in the .cspkg file used for creating/updating a cloud service.

Service configuration file:

The values of the settings defined in the service definition file are provided in the service configuration file - for example, the number of role instances, Remote Desktop settings like the username and encrypted password, and other application-specific configuration values. This file is uploaded separately and is not included in the application package, so the values can be changed even while the cloud service is running.
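For illustration, a minimal role section of a service configuration file might look like the sketch below (the role name, instance count and setting are hypothetical):

```xml
<Role name="WebRole1">
  <!-- Number of role instances - can be changed while the service is running -->
  <Instances count="2" />
  <ConfigurationSettings>
    <!-- Application-specific setting whose value can be updated without repackaging -->
    <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
  </ConfigurationSettings>
</Role>
```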

Cloud service roles:

Two types of roles are supported in Windows Azure cloud service

Web role: A role customized for web applications. If you select this role type, IIS 7 comes pre-installed on the VM. It is most commonly used for hosting the web frontend.

Worker role: This role is mainly used for background processing in support of a web role. Long-running processes or intermittent tasks should be configured to run in this role.


Friday, September 13, 2013

Back to basics : Networking - Part 1

Range of different classes of IP addresses:

Based on the range of first octet
Class A:  1-126
Class B:  128-191
Class C: 192-223

Private IP ranges

Class A: to
Class B: to
Class C: to

APIPA address range: to
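As a quick illustration of the class table above, here is a small PowerShell function (my own sketch, not from the original post) that classifies an IPv4 address by its first octet:

```powershell
# Determine the class of an IPv4 address from its first octet
function Get-IPv4Class ($Address) {
    $firstOctet = [int]($Address -split '\.')[0]
    if     ($firstOctet -ge 1   -and $firstOctet -le 126) { 'Class A' }
    elseif ($firstOctet -ge 128 -and $firstOctet -le 191) { 'Class B' }
    elseif ($firstOctet -ge 192 -and $firstOctet -le 223) { 'Class C' }
    else                                                  { 'Class D/E or reserved' }
}
Get-IPv4Class ""   # Class C
Get-IPv4Class ""     # Class A
```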

MAC address:

A Media Access Control (MAC) address is associated with a network adapter, and is often known as the hardware address

12 digit hexadecimal, 48 bits in length

Written in format- MM:MM:MM:SS:SS:SS

The first half identifies the manufacturer and the second half is a serial number assigned to the adapter by the manufacturer

MAC addresses work at layer 2, IP addresses at layer 3
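Splitting a MAC address into those two halves is straightforward — a small sketch with a made-up address:

```powershell
# Split a MAC address into the manufacturer (OUI) half and the adapter serial half
$mac = "00:1A:2B:3C:4D:5E"
$octets = $mac -split ":"
$oui    = $octets[0..2] -join ":"   # 00:1A:2B - identifies the manufacturer
$serial = $octets[3..5] -join ":"   # 3C:4D:5E - assigned by the manufacturer
"$oui / $serial"
```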

OSI Model:(Open System Interconnection)

Physical: Defines the physical media, i.e. cables, connectors etc.

Data Link: Defines the data format. Converts raw bits from the physical layer into data frames for delivery to the network layer. Common device at this layer: switch

Network: Addressing, route determination, subnet traffic control etc. IP addresses are added at this point, and data at this layer is called a packet. Common device at this layer: router

Transport: End-to-end message delivery. Reliable and sequential packet delivery through error recovery and flow control mechanisms, using techniques like cyclic redundancy checks, windowing and acknowledgements. Eg: TCP & UDP

Session: Manages user sessions and dialogues. Controls establishment and termination of logical links between users. Eg: a web browser makes use of the session layer to download the various elements of a web page from a web server

Presentation: Encoding, decoding, compression, decompression, encryption and decryption happen at this layer. Eg: conversion of .wav to .mp3

Application: Presents data and images in a human-recognizable format and provides network services to applications. Eg: Telnet, FTP etc.

Reference: http://www.inetdaemon.com/tutorials/basic_concepts/network_models/osi_model/osi_model_real_world_example.shtml


Tuesday, September 10, 2013

DHCP superscope

A DHCP superscope is, in simple terms, a logical grouping of DHCP scopes. Superscopes are used in scenarios where multiple subnets are created in a particular VLAN. In this case, your VLAN configuration would look like this:

Interface vlan 107
ip address
ip address secondary
ip address secondary

Create scopes for all the above subnets in your DHCP server, then create a superscope and add the scopes to it.

The ideal case is to have one subnet per VLAN and to create individual scopes in DHCP for these VLANs. You will have to configure an IP helper address on these VLANs pointing to your DHCP server's IP so that the clients in the various subnets get IPs from the DHCP server. Your VLAN configuration would look like this (with your DHCP server's IP as the helper address):

vlan 12
interface vlan12 ip address
vlan 13
interface vlan13 ip address
ip helper-address
interface vlan14 ip address
ip helper-address

Here we have created virtual interfaces (at layer 3) which can do inter-VLAN routing. A DHCP request for a VLAN received at the virtual interface is forwarded to the DHCP server after the giaddr field is set to the interface IP. When the DHCP server receives the request, it compares the subnet of the interface with the scopes configured; when it finds a match, the IP allocation process is initiated.


Thursday, September 5, 2013

VMware data recovery troubleshooting

If a VDP backup fails, the following troubleshooting steps can be used

  1. SSH to the VDP appliance and browse to /usr/local/avamarclient
  2. Search for logs related to the VM: grep -r -a "VM_NAME" ./*
  3. If you suspect it is a snapshot-related issue: grep -r -a "VM_name" ./* | grep "FATAL"
  4. To be more specific and check messages for a certain date, search using the date: grep -r -a "VM_name" ./* | grep "2013-08-02"
  5. Sometimes we can get very useful information from the "info" messages as well. In order to narrow these down, you can use the command: grep -r -a "VM_name" ./var-* | grep "2013-07-03"
  6. The above command searches only through the 'var-proxy' directories and displays the entire log files. You can open one with less and jump to a specific date, e.g.: less ./var-proxy-5/VMGROUP1-1378306800496-35fj52c29f48eeejef090b27edaeba3d868719e8-4016-vmimagew.log
    /2013-07-03 07:10:00

Error messages:

Message 1:
avvcbimage FATAL <16018>: The datastore information from VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx' will not permit a restore or backup. 

Reason: The most common reason is that a snapshot file is present but is not displayed in the Snapshot Manager. In order to resolve this:
  1. SSH to the esx hosting the VM 
  2. Browse to the VM's datastore :cd /vmfs/volumes/datastore_name/VM_name/
  3. Check if there are any delta files in it, i.e. files with -delta in the name, or -00001 etc.
  4. Now check if any of these files are in use by checking the vmx file: grep "vmdk" ./*.vmx
  5. If the files are not being referenced in the vmx, we can safely delete the delta files or move them to a temp directory: mkdir old-delta-files ; mv vm_name.000*.vmdk old-delta-files/
  6. Confirm that the files have been removed
Message 2:

avvcbimage FATAL <14688>: The VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx could not be snapshot.

Reason: One possible reason is that a manually executed backup overran the scheduled backup in VDR

Message 3:
2013-07-03 17:00:57 avvcbimage Info <14642>: Deleting the snapshot 'VDP-137830742335fc52c29f98eeebef090b22edaeba3p868716e8', moref 'snapshot-17946'
2013-07-03 17:00:57 avvcbimage Info <0000>: Snapshot (snapshot-17946) removal for VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx  task still in progress, sleep for 2
2013-07-03 17:00:57 avvcbimage Info <0000>: Snapshot (snapshot-17946) removal for VMX '[STORAGE-1] VMNAME_1/VMNAME.vmx task was canceled.

2013-09-04 17:00:57 avvcbimage Info <0000>: Removal of snapshot 'VDP-VDP-137830742335fc52c29f98eeebef090b22edaeba3p868716e8' is not complete, moref 'snapshot-17946'

Reason: This happens because VDP doesn't get enough time to delete the snapshots created during the backup operation. The solution is to increase the timeout value to allow enough time for the snapshots to commit.

To increase this timeout value:
1. Open an SSH session to the VDP server.
2. Change to the /usr/local/avamarclient/var directory using this command:
# cd /usr/local/avamarclient/var
3. Open the avvcbimage.cmd file using a text editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
4. Add this entry to the file:
5. Restart the avagent service using this command:
# service avagent restart

Reference: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044821

Thanks to my colleague Tom for his valuable inputs for this article


About Me

Cloud Solutions expert with 17+ years of experience in the IT industry, with expertise in multi-cloud technologies and a solid background in datacentre management and virtualization. Versatile technocrat with experience in cloud technical presales, advisory, innovation, evangelisation and project delivery. Currently working with Google as an Infra Modernization Specialist, enabling customers on their digital transformation journey. I enjoy sharing my experiences in my blog, but the opinions expressed in this blog are my own and do not represent those of people, institutions or organizations that I may be associated with in a professional or personal capacity, unless explicitly stated.
