Friday, August 30, 2013

TTL value in DNS

You have changed a DNS record on an authoritative DNS server in your domain. Resolution works fine from within your domain. However, when you resolve the same record from a different domain, where the query has to be answered recursively, you still get the old value!

The culprit here is the TTL (Time To Live) value set for the DNS record. When a caching/recursive server fetches a value from the authoritative DNS server, it caches the answer for the duration specified by the TTL. If it receives another query before the TTL has expired, it simply replies with the cached value rather than querying the authoritative name server again. That is what happened in the situation above: although the record had been changed, the caching server was still answering from its cache. The larger the TTL value, the longer the values stay cached. Reducing the TTL makes changes propagate faster, but at the cost of more load on the authoritative name server. A common approach when changing the DNS entry of a critical service (web servers, MX records, etc.) is therefore to lower the TTL in advance, planned so that there is enough time for the previously cached value to expire on the recursive servers.
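
The caching behaviour described above can be sketched in a few lines of Python. This is an illustrative model, not a real resolver: the `CachingResolver` class, its method names and the record values are all made up for the example.

```python
import time

class CachingResolver:
    """Toy model of a recursive resolver's TTL-based cache (illustrative only)."""

    def __init__(self, authoritative_lookup):
        self.lookup = authoritative_lookup   # function: name -> (value, ttl_seconds)
        self.cache = {}                      # name -> (value, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry and now < entry[1]:
            return entry[0]                  # TTL not yet expired: serve the cached answer
        value, ttl = self.lookup(name)       # expired or absent: ask the authoritative server
        self.cache[name] = (value, now + ttl)
        return value
```

Note how a change on the authoritative side is invisible to clients of this resolver until the cached entry's TTL runs out, which is exactly the behaviour seen in the scenario above.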

Thursday, August 29, 2013

VMware vSphere 5.5

VMware released the latest version of vSphere at VMworld 2013. Release 5.5 includes a number of significant improvements as well as new features over previous releases. Let's have a look.

What is new?
  • Application HA: The HA feature is extended to applications such as MSSQL, Tomcat, IIS, and tc Server runtime. It can monitor applications and take actions like restarting the application, resetting the VM, raising an alarm, or sending email notifications. A vFabric Hyperic agent should be installed in each guest OS; an AppHA virtual appliance and a Hyperic server are also required for this to work.
  • Reliable Memory Technology: This enables ESXi to analyze the reliability of memory, predict failures and stop using unreliable regions of memory. It also helps place critical processes such as watchdog and hostd in the reliable area.
  • Flash Read Cache: Previously available in beta as vFlash, this technology pools the local flash device resources in a host to provide a clustered flash resource for VMs and hosts. These flash devices can be PCIe flash cards or SAS/SATA SSD drives. Flash resources can be used for read caching of VM I/O requests as well as for storing the host swap file.
  • Big Data Extensions: A big-data plugin is introduced to manage enterprise Hadoop clusters. It provides a graphical interface for easy administration of Hadoop clusters running on vSphere.
  • Hot-plug SSD PCIe devices: You can now hot-plug SSD PCI Express devices just like SATA or SAS hard drives.
  • Enhanced DPM using policy settings: Power management can now be fine-tuned using policy settings.

  • VM hardware version 10: The latest VM hardware comes with enhancements such as the Advanced Host Controller Interface (AHCI). It includes a new virtual SATA controller that supports up to 30 devices per controller and a total of four controllers per VM. It also provides graphics acceleration for Linux guests, including Ubuntu 12.04 and later, Fedora 17 and later, and RHEL 7.

What is changed?
  • Maximum VMDK size increased from 2 TB to 62 TB for VMFS5 and NFS
  • Removed the 32 GB physical memory limitation of ESXi free version
  • Maximum RAM per host increased from 2 TB to 4 TB
  • Virtual CPUs per host increased from 2048 to 4096
  • Logical CPUs per host changed from 160 to 320
  • Extended support for AMD and Intel GPUs
  • Support for Windows Server 2012 guest clustering
  • Support for 40 Gbps NICs

Tuesday, August 27, 2013

Understanding hot-add and hot-plug in VMware vSphere

Hot-add and hot-plug of resources are very useful features in vSphere that let you pile in more compute resources on the fly, without downtime for the machines. A few points about this feature:

  • "Hot-add" refers to adding more memory to a VM whereas "Hot-plug" refers to adding a virtual CPU to a VM
  • In order to change the hot-add/hot-plug status of a VM, the machine should be powered off, i.e. if the feature is disabled, you should first shut down the machine before you can enable it. This is enabled from VM settings -> Options -> Advanced -> Memory/CPU hotplug
  • The feature is not enabled by default
  • Minimum VM hardware version of 7 is required for hot-add/hot-plug to work. If you are using a lower version of virtual hardware, first you need to upgrade it
  • Even if hot add/plug is enabled, it is effective only when the VM guest operating system also supports it; otherwise the added resources will not become available to the VM
  • The hot add/plug feature is not compatible with VMware FT
  • The feature is available only in the Advanced, Enterprise and Enterprise Plus editions of vSphere
  • Hot "remove" of memory and CPU is not supported by vSphere, no matter what the guest OS is

Note: Supporting OS for hot add/plug feature

  • Windows Server 2003 Enterprise (x64 and x86) and Windows Server 2008 Standard, Enterprise and Datacenter editions support memory hot-add
  • Windows Server 2003 Standard x64 and Windows Server 2008 Enterprise x64 editions support CPU hot-plug
  • Linux guests support hot-add of memory but not hot-plug of CPU

Independent disks in a Virtual machine

When we add a new disk to a VM hosted on VMware ESXi, we can choose whether the disk should be independent or not. If we make the disk independent, it is not included in snapshots. If you browse the datastore after taking a snapshot of the machine, you will not see any delta disks related to the independent disks. There are two types of independent disks:

Persistent: Data written to the disk is retained after we power-cycle the machine. It behaves like any normal disk we add to a machine; the only difference is that we cannot return to a point in time for the data on that disk.

Non-persistent: Data is deleted when we power-cycle the VM. I tried it on a Windows VM: the disk is listed as unallocated space after each power cycle, thereby deleting all data saved on it. I had to initialize the disk from the Disk Management console and format it as a drive before using it again. Interestingly, if we restart the Windows OS, the disk and the data it contains are retained. This could be because a restart of the OS does not change the VM power state; it remains ON.

Note: While taking a snapshot of a VM with independent disks, make sure that the option 'Snapshot the virtual machine's memory' is not selected. If it is selected, the snapshot operation will fail with the error "Cannot take memory snapshot, since the virtual machine is configured with independent disks".

This is because snapshots are used to revert to a point-in-time state of the VM, and the independent attribute of the disks prevents them from being included. The solution is to deselect the option to snapshot the VM memory (and the quiesce option), or to ensure that the disks are not of type 'independent'.


Monday, August 26, 2013

What is Big Data, Hadoop, NoSQL and MapReduce?

Big Data, Hadoop, MapReduce, NoSQL: four buzzwords that we hear a lot these days. Here is a small description of these technologies.

Big Data: The name itself discloses the identity! As we all know, this is an era of information overload. Organizations have hoards of unstructured data lying around, amounting to many petabytes or even exabytes. It would be a costly affair to use relational databases for analyzing this data. The main purpose of analyzing it is to recognize repeated patterns, associations (one event connected to another), classifications (looking for new patterns), and trends in the data that could lead to reasonable predictions.

Hadoop: An Apache project that provides a framework for large-scale data processing in a distributed computing environment. It can handle terabytes to petabytes of data and work across thousands of nodes. It is closely associated with cloud computing, considering its on-demand requirement for a large number of servers and considerable processing power. It is based on Google's MapReduce, which enables parallel processing of large data sets. A Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop Distributed File System (HDFS) and Hadoop YARN (a framework for job scheduling and cluster resource management).

MapReduce: A software framework for processing large amounts of unstructured data in parallel across a distributed cluster of processors or standalone computers. It is divided into two phases:
Map - distributes work to multiple nodes
Reduce - collects the results of that work and consolidates them into a single value
It provides a robust, fault-tolerant framework: the nodes in the cluster are expected to report back with completed work and status updates. If a node remains silent for too long, the master node redistributes its work to other nodes.
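
The two phases above can be illustrated with the classic word-count example in plain Python. This is only a single-process sketch of the programming model; a real Hadoop job would distribute the map calls across nodes and shuffle the pairs to reducers over the network.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one chunk of input
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: group pairs by key and consolidate counts into one value per word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def mapreduce_wordcount(documents):
    # Each document could be mapped on a different node; here we just chain the results
    mapped = chain.from_iterable(map_phase(d) for d in documents)
    return reduce_phase(mapped)
```

For example, `mapreduce_wordcount(["big data", "big cluster"])` yields a count of 2 for "big" and 1 each for "data" and "cluster".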

NoSQL: A class of databases tailored to handle unstructured data. It is not always practical to implement a relational database management system, which is better suited for predictable, structured data. NoSQL databases do not use SQL to manipulate data. With an RDBMS we typically have to scale the servers vertically as the data grows; a NoSQL database can instead grow horizontally by distributing itself over a cluster of ordinary servers. It offers high performance with high availability, horizontal scaling and, most importantly, it is open source!



VMware snapshot files

VMware snapshot includes the following:

Virtual machine snapshot database file - .vmsd:
This is the snapshot database file, used by the Snapshot Manager to display snapshot details, the relations between the various parent/child snapshots, and so on. One .vmsd file is created per virtual machine when a snapshot is taken.
Virtual machine memory state - .vmsn:
This file is created when you select the option to snapshot the virtual machine's memory as well. One benefit of taking a memory snapshot is that it lets you revert to a snapshot with the virtual machine in a powered-on state. If the memory is not captured, the VM reverts to a powered-off state when you revert the snapshot.
Delta disks - delta.vmdk:
These are the delta disks created when we take a snapshot. At the point in time when the snapshot is taken, the current disk is made read-only, and a delta disk is created for subsequent writes. A delta disk, like other VMDK disks, consists of two files: a descriptor file containing details such as the geometry and the child-parent relation of the disk, and a data file containing the actual data. The delta disks are also referred to as child disks or redo logs. They use a copy-on-write (COW) mechanism to save space, i.e. only the data that is modified by a write is copied into the delta disk. The unit of allocation used by these disks is called a 'grain', with a default size of 64 KB.
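
The copy-on-write idea behind delta disks can be sketched as follows. This is a conceptual model only, assuming a dictionary per disk keyed by grain index; it is not the actual VMDK on-disk format.

```python
GRAIN_SIZE = 64 * 1024  # default grain size: 64 KB

class DeltaDisk:
    """Conceptual sketch of grain-level copy-on-write after a snapshot."""

    def __init__(self, base):
        self.base = base    # read-only parent disk: grain_index -> bytes
        self.delta = {}     # only the grains written after the snapshot

    def write(self, grain_index, data):
        # Writes never touch the base disk; the modified grain lands in the delta
        self.delta[grain_index] = data

    def read(self, grain_index):
        # A modified grain is read from the delta; untouched grains fall
        # through to the read-only base disk
        return self.delta.get(grain_index, self.base.get(grain_index))
```

Reverting the snapshot then simply means discarding the delta, since the base disk was never modified.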


Friday, August 23, 2013

Authorized vs standalone DHCP

Standalone DHCP:

When a standalone DHCP server (i.e. a server not added to the domain) starts, it sends a DHCPINFORM message to its subnet to get information about the domain in which it is located and to see whether there is any other authorized DHCP server in the network. If it does not get a valid response from an authorized DHCP server, it starts providing DHCP services to clients. If it receives a response from an authorized DHCP server with details of the root domain, it will not provide DHCP services. This detection process is repeated every 10 minutes for an unauthorized/standalone DHCP server.

Authorized DHCP :

Only an Enterprise Admin can authorize a DHCP server in a domain. The server should be either a domain controller or a member of the domain. During startup, the DHCP server queries Active Directory for the list of authorized DHCP servers. If it finds itself in the list, it starts DHCP services; this check is repeated every 60 minutes. If the DHCP server does not find itself in the authorized list, it stops providing DHCP services.

Thursday, August 22, 2013

How many ntds.dit files would be present in an Active Directory?

There would be two ntds.dit files present in the case of Windows Server 2008:

%SystemRoot%\NTDS\Ntds.dit - This is the AD database used by the domain controller. It holds the values for the domain and a replica of the values for the forest.

%SystemRoot%\System32\Ntds.dit - This file is used while promoting a Windows server to a domain controller. It is usually called the distribution copy of the database, and it allows you to run dcpromo without having to use the OS CD. During promotion, the ntds.dit file is copied from the %SystemRoot%\System32 directory into the %SystemRoot%\NTDS directory. Active Directory is then started from this new copy of the file, and replication updates it from other domain controllers.



Different RAID levels

RAID 0:

  • Data is split or 'striped' across multiple disks
  • Minimum of 2 disks required
  • No redundancy, no mirror, no parity
  • Good read/write performance; used where speed is the priority

RAID 1:

  • Data is mirrored across two disks
  • Good redundancy
  • Excellent read speed (twice the read transactions); the same write performance as a single disk
  • Used where reliability/redundancy is the priority

RAID 5:

  • Distributed parity
  • Minimum of 3 disks required
  • Good performance and redundancy (blocks are striped, with distributed parity)
  • Better performance on reads; writes are slower due to the parity overhead
  • Best for read-intensive applications/DBs; not recommended for write-heavy applications
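
The capacity trade-offs of the three levels above can be summed up in a small calculator. This is a sketch assuming equal-sized disks; the function name and interface are made up for the example.

```python
def usable_capacity(level, disks, disk_size_gb):
    """Usable capacity in GB for RAID 0, 1 and 5 with equal-sized disks."""
    if level == 0:
        return disks * disk_size_gb        # pure striping: every byte is usable
    if level == 1:
        if disks != 2:
            raise ValueError("RAID 1 mirrors data across two disks")
        return disk_size_gb                # one disk's worth; the other is a mirror
    if level == 5:
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_size_gb  # one disk's worth lost to distributed parity
    raise ValueError("unsupported RAID level")
```

So four 500 GB disks give 2000 GB usable in RAID 0 but only 1500 GB in RAID 5, the difference being the parity overhead that buys the redundancy.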


Wednesday, August 21, 2013

How to publish software to users using GPO

1) In the AD Group Policy Management console, navigate to the OU where the user resides
2) Right-click and select the option 'Create a GPO in this domain, and link it here'
3) Provide a name for the GPO
4) Right-click the GPO and click Edit
5) Navigate to User Configuration -> Policies -> Software Settings -> Software installation
6) Right-click Software installation and select New -> Package
7) Select the installer package. Files with extension .msi and .zap can be selected
8) Select the required option. You will see three options: Published, Assigned and Advanced. Click OK
      - If you want the user to install the application himself via Add/Remove Programs, select the Published option
      - If you want the program to be installed when the user logs in, select the Assigned option
      - You can right-click the installation file in the GP editor window and change the option between Published and Assigned
      - If the option selected is Published, the option to install on logon is greyed out. In the case of Assigned, this option is available to be selected


Monday, August 19, 2013

What is new in Windows Server 2012

    • Server Core improvements: no need for a fresh installation; you can add/remove the GUI from Server Manager
    • Remotely manage servers 
      -add/remove roles etc using Server manager
      -manage 2008 and 2008 R2 servers after installing WMF 3.0 on them (included by default in Server 2012)
    • Remote server administration tools available for windows 8 to manage Windows server 2012 infrastructure
    • Powershell v3
    • Hyper-V 3.0
      - supports up to 64 virtual processors and 1 TB RAM per virtual machine
      - up to 320 logical hardware processors and 4 TB RAM per host
      - shared-nothing live migration: move VMs around without shared storage

    • ReFS (Resilient File System), a new file system that improves on NTFS
      - supports larger file and directory sizes
      - removes the old 255-character limitation on file names and paths; the limit on path/filename size is now 32K characters
    • Improved CHKDSK utility that fixes disk corruption in the background without disruption

Saturday, August 10, 2013

What is new in VMFS5

VMFS 5 enhancements are as follows

- Unified block size: VMFS 5 uses a unified block size of 1 MB, compared to the multiple block sizes of VMFS 3. The size of the largest single VMDK file is no longer limited by this block size: the maximum VMDK size possible is 2 TB minus 512 bytes.
- Smaller sub-blocks: The sub-block size in VMFS 5 is 8 KB, compared to 64 KB in VMFS 3. This means that small files (larger than 1 KB but smaller than 8 KB) consume only 8 KB of disk space in place of 64 KB. This reduces the amount of disk space hogged by small files and thereby improves disk-space utilization.
- Large single-extent volumes: In VMFS 3 the largest single extent possible was 2 TB. In VMFS 5, this has been increased to 64 TB.
- Small-file support: For files smaller than 1 KB, a file descriptor location in the storage metadata is used rather than file blocks. When they grow above 1 KB, file blocks are used. This again reduces the disk space consumed by small files.
- Increased file count: VMFS 5 supports more than 100,000 files, whereas VMFS 3 could support only about 30,000.
- Atomic Test and Set (ATS) enhancement: ATS, part of VAAI (vSphere Storage APIs for Array Integration), is used for file locking, and it considerably improves the file-locking performance of VMFS 5 over its predecessor.
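
The sub-block and small-file rules above translate into a simple allocation calculation. The sketch below is illustrative only, assuming the sizes quoted in this post (8 KB vs 64 KB sub-blocks, 1 MB file blocks); the function name is made up.

```python
def vmfs_space_consumed(file_size_kb, vmfs_version):
    """Rough on-disk space (KB) consumed by one file, per the rules above."""
    sub_block_kb = 8 if vmfs_version == 5 else 64   # VMFS 5 vs VMFS 3 sub-block
    file_block_kb = 1024                            # 1 MB file block
    if vmfs_version == 5 and file_size_kb <= 1:
        return 0   # tiny files live in the file descriptor, no block consumed
    if file_size_kb <= sub_block_kb:
        return sub_block_kb                         # one sub-block
    # Larger files are allocated in whole 1 MB file blocks (ceiling division)
    return -(-file_size_kb // file_block_kb) * file_block_kb
```

A 4 KB file thus occupies 8 KB on VMFS 5 but 64 KB on VMFS 3, which is exactly the saving the smaller sub-block brings.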

VMFS 3 & VMFS 5 block size difference

The VMFS block size defines the maximum file size as well as the space occupied by a file.

The following block sizes were available in VMFS 3: 1 MB, 2 MB, 4 MB and 8 MB. The block size decided the maximum possible size of a single VMDK file you could create for a virtual machine. The maximum disk size was limited as follows:

1 MB - 256 GB
2 MB - 512 GB
4 MB - 1 TB
8 MB - 2 TB - 512 B

VMFS 5 uses a unified block size of 1 MB; larger blocks are not required to create larger disks. This means the 1 MB block size of VMFS 3 is not the same as the 1 MB block size of VMFS 5. The maximum VMDK file size supported is 2 TB minus 512 bytes.
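
The table above can be captured directly in code. This is just the post's own numbers expressed as a lookup, with a helper name invented for the example.

```python
B = 1
MB, GB, TB = 1024**2, 1024**3, 1024**4

# Maximum single-VMDK size per VMFS 3 block size, as tabulated above
VMFS3_MAX_VMDK = {
    1 * MB: 256 * GB,
    2 * MB: 512 * GB,
    4 * MB: 1 * TB,
    8 * MB: 2 * TB - 512 * B,
}

def max_vmdk_size(block_size, vmfs_version=3):
    if vmfs_version == 5:
        # Unified 1 MB block: the limit is no longer tied to the block size
        return 2 * TB - 512 * B
    return VMFS3_MAX_VMDK[block_size]
```

Notice the pattern in the VMFS 3 table: each doubling of the block size doubles the maximum VMDK size, whereas on VMFS 5 every datastore gets the 2 TB minus 512 B ceiling regardless of block size.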


Friday, August 9, 2013

DHCP Dora process

DORA, in simple words, is the process through which a DHCP client acquires an IP address from a DHCP server on the network.

D - Discover: When a machine boots up on the LAN without an IP address configured, it sends a DHCP discover broadcast to the network, with a destination IP of It also includes its MAC address encapsulated in the packet. The layer-2 destination is ff:ff:ff:ff:ff:ff, i.e. all devices on the network. The switch that received the packet forwards it out of all other ports except the one on which it was received.

O - Offer: If there is a DHCP server listening on the network, it responds to the discover message with an offer packet. The offer is again a broadcast to, but with the destination MAC address set to the DHCP client's MAC address and the source MAC address set to that of the DHCP server. The offer packet contains the offered IP address, DNS servers, gateway, etc.

R - Request: The DHCP client may get similar offers from all DHCP servers on the network, and it typically accepts the first one it receives. It then sends a request to that DHCP server for the offered IP address.

A - Acknowledge: When the DHCP server receives the DHCP request from the client for the offered IP address, it sends back a DHCP acknowledge, thereby allocating that IP address to the client.
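
The four steps can be written down as a tiny state machine over the standard DHCP message names (the surrounding function is invented for illustration; the real protocol, including retries and DHCPNAK, is specified in RFC 2131).

```python
# Successful DORA exchange, one message per step
DORA_SEQUENCE = ["DHCPDISCOVER", "DHCPOFFER", "DHCPREQUEST", "DHCPACK"]

def next_message(last_received):
    """Which message is sent next in a successful DORA exchange."""
    transitions = {
        None: "DHCPDISCOVER",           # client boots with no lease
        "DHCPDISCOVER": "DHCPOFFER",    # server offers an address
        "DHCPOFFER": "DHCPREQUEST",     # client requests the offered address
        "DHCPREQUEST": "DHCPACK",       # server acknowledges; lease is bound
    }
    return transitions[last_received]
```

Walking the function from `None` reproduces the Discover, Offer, Request, Acknowledge order described above.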

Thursday, August 8, 2013

Is it mandatory to install AD and DNS on the same server?

The answer is yes if it is the first DC in a forest. In that case DNS is installed by default during the AD promotion process. This is done because DNS is integral to the proper functioning of AD; without a properly configured DNS, the AD infrastructure will not work.


Coming back to our question: do you need to install DNS on every AD server in your domain? The answer is no. If a DNS server already exists in the domain, you will get an option to choose whether you want to install DNS on the AD server, and the available DNS servers in the domain are listed. If you choose not to install DNS, that is perfectly fine; AD will use one of the available DNS servers. Just make sure that the DNS settings on your network card are configured correctly to point to one of those available DNS servers.
