

Showing posts from August, 2013

TTL value in DNS

You have changed a DNS record on an authoritative DNS server in your domain. DNS resolution works fine from within your domain. However, when you try to resolve the record from a different domain, where it has to be resolved recursively, you still get the old value! The culprit here is the TTL (Time To Live) value set for the DNS zone. When a caching/recursive server obtains a record from the authoritative DNS server, the record is cached for the duration specified by the TTL. If the server receives a query before the TTL has expired, it simply replies with the cached value rather than querying the authoritative name server again. That is what happened in the situation above: though the DNS record was changed, the caching server was answering from its cache. The larger the TTL value, the longer the records stay cached. But if you reduce the TTL value, there is a chance of the authoritative name server getting…
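The caching behaviour described above can be sketched in a few lines of Python. This is a toy illustration, not a real resolver; the record name, addresses and the 300-second TTL are made-up values:

```python
import time

class CachingResolver:
    """Toy recursive resolver that honours a per-record TTL."""

    def __init__(self):
        self.cache = {}  # name -> (value, expiry timestamp)

    def resolve(self, name, authoritative_lookup, ttl=300):
        value, expires = self.cache.get(name, (None, 0))
        if time.time() < expires:
            return value  # served from cache; upstream changes are invisible
        value = authoritative_lookup(name)  # cache miss or expired: ask upstream
        self.cache[name] = (value, time.time() + ttl)
        return value

resolver = CachingResolver()
old = resolver.resolve("www.example.com", lambda name: "")
# Even if the authoritative server now returns a new address, the cached
# answer keeps being returned until the TTL expires:
stale = resolver.resolve("www.example.com", lambda name: "")
print(old, stale)  # -> both are
```

Until the 300 seconds pass, the second lookup never reaches the (changed) authoritative server, which is exactly the stale-answer symptom described above.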

VMware vSphere 5.5

VMware released the latest version of vSphere at VMworld 2013. Release 5.5 includes a number of significant improvements as well as new features over past releases. Let's have a look. What is new?

Application HA: The HA feature is extended to applications like MS SQL, Tomcat, IIS, TC Server runtime etc. It can monitor applications and take actions like restarting the application, resetting the VM, raising an alarm or sending email notifications. A vFabric Hyperic agent should be installed in each guest OS; an AppHA virtual appliance and a Hyperic server are also required for this to work.

Reliable Memory Technology: This enables ESXi to analyze the reliability of memory, predict failures and stop using unreliable regions of memory. It also helps place critical processes like watchdog and hostd in the reliable area.

Flash Read Cache: This was previously available in beta as vFlash. This technology leverages the local flash device resources in a host to provide a clustered flash resource for…

Understanding hot-add and hot-plug in VMware vSphere

Hot-add and hot-plug of resources are very useful features in vSphere that let you pile in more compute resources on the fly, without downtime for the machines. A few points about this feature:

- "Hot-add" refers to adding more memory to a VM, whereas "hot-plug" refers to adding a virtual CPU to a VM.
- In order to change the hot-add/hot-plug status of a VM, the machine should be powered off, i.e. if the feature is disabled, you should first shut down the machine before you can enable it. It is enabled from VM settings -> Options -> Advanced -> Memory/CPU hotplug.
- The feature is not enabled by default.
- A minimum VM hardware version of 7 is required for hot-add/hot-plug to work. If you are using a lower version of virtual hardware, you first need to upgrade it.
- Even if hot-add/hot-plug is enabled, it must also be supported by the VM's guest operating system for the added resources to become available to the VM.
- The hot-add/hot-plug feature is not compatible…

Independent disks in a Virtual machine

When we add a new disk to a VM hosted on VMware ESXi, we can choose whether the disk should be independent or not. If we choose the disk to be independent, it is not included in snapshots. If you browse the datastore after taking a snapshot of the machine, you will not see any delta disks related to the independent disks. There are two types of independent disks:

Persistent: Data written to the disk is retained after we power-cycle the machine. It behaves like any normal disk we add to a machine; the only difference is that we cannot return to a point in time for the data on that disk.

Non-persistent: Data is deleted when we power-cycle the VM. I have tried it on a Windows VM, and the disk is listed as unallocated space each time I power-cycle the VM, thereby deleting all data saved on it. I had to initialize the disk from the Disk Management console and format it as a drive before using it again. Interestingly, if we restart the Windows OS, the disk and the data it contains are ret…

What is Big Data, Hadoop, NoSQL and MapReduce?

Big Data, Hadoop, MapReduce, NoSQL: buzzwords that we hear a lot these days. Here is a small description of these technologies.

Big Data: The name itself discloses the identity! As we all know, this is an era of information overload. Organizations have heaps of unstructured data lying around, amounting to many petabytes or even exabytes. It would be a costly affair to use relational databases to analyze this data. The main purpose of analyzing it is to recognize repeated patterns, associations (one event connected to another), classifications (looking for new patterns), and patterns in the data that could lead to reasonable predictions.

Hadoop: An Apache project that provides a framework for large-scale data processing in a distributed computing environment. It can handle up to terabytes of data and work across thousands of nodes. It is closely associated with cloud computing, considering the requirement for a large number of servers…
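The MapReduce idea mentioned above is easy to sketch in plain Python: a map step emits key/value pairs, a shuffle step groups them by key, and a reduce step aggregates each group. This is a toy single-process word count, not Hadoop itself:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) for every word in every document
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group (here: sum the counts)
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs hadoop", "hadoop uses mapreduce"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["hadoop"])  # -> 2
```

In a real cluster the map and reduce steps run in parallel on many nodes and the shuffle moves data between them over the network, but the logical flow is the same.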

VMware snapshot files

A VMware snapshot includes the following files:

Virtual machine snapshot database file (.vmsd): This is the snapshot database file, used by Snapshot Manager to display snapshot details, relations between the various parent/child snapshots etc. One .vmsd file is created per virtual machine when a snapshot is taken.

Virtual machine memory state (.vmsn): This file is created when you select the option to snapshot the virtual machine's memory as well. One benefit of taking a snapshot of memory is that it lets you revert to a snapshot with the virtual machine in a powered-on state. If a memory snapshot is not taken, the VM goes back to a powered-off state when you revert.

Delta disks (-delta.vmdk): These are the delta disks created when we take a snapshot. At the point in time the snapshot is taken, the current disk is made read-only, and a delta disk is created for subsequent writes. A delta disk, like other .vmdk files, consists of two files. One…

Authorized vs standalone DHCP

Standalone DHCP: When a standalone DHCP server (i.e. a server not added to the domain) starts, it sends a DHCPINFORM message to its subnet to get information about the domain in which it is located and to see if there is any other authorized DHCP server on the network. If it does not get any valid response back from an authorized DHCP server, it starts providing DHCP services to clients. If it receives a response from an authorized DHCP server with details of the root domain, it will not provide DHCP services. This detection process runs every 10 minutes on an unauthorized/standalone DHCP server.

Authorized DHCP: Only an Enterprise Admin can authorize a DHCP server in a domain. The server should be either a domain controller or a member of the domain. During startup, the DHCP server queries Active Directory for the list of authorized DHCP servers. If it finds itself in the list, it starts DHCP services. This check is done every 60 minutes. If the DHCP server…

How many ntds.dit files would be present in an Active Directory?

There would be two ntds.dit files present in the case of Windows Server 2008:

%SystemRoot%\NTDS\Ntds.dit - This is the AD database used by the domain controller, which holds the values for the domain and a replica of the values for the forest.

%SystemRoot%\System32\Ntds.dit - This file is used while promoting a Windows 2000 server to Active Directory, and is usually called the distribution copy of the database. It allows you to run dcpromo on a 2000 server without having to use the OS CD. During promotion, the ntds.dit file is copied from the %SystemRoot%\System32 directory into the %SystemRoot%\NTDS directory. Active Directory is then started from this new copy of the file, and replication updates it from other domain controllers.

Different RAID levels

RAID 0: Data is split or 'striped' across multiple disks; no mirror and no parity.
- Minimum 2 disks required
- No redundancy, no parity
- Good read/write performance; used in cases where speed is a priority

RAID 1: Data is mirrored across two disks.
- Good redundancy
- Excellent speed for reads (twice the read transactions), same write transactions as a single disk
- Used in cases where reliability/redundancy is a priority

RAID 5: Distributed parity.
- Minimum 3 disks required
- Good performance and redundancy (blocks are striped, with distributed parity)
- Better performance on read operations; writes are slower
- Best suited to read-intensive applications/DBs; not recommended for write-heavy applications
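The capacity trade-offs between these levels can be checked with a small calculation. This sketch uses the standard formulas implied above: RAID 0 keeps every disk's capacity, classic RAID 1 keeps one disk's worth, and RAID 5 loses one disk's worth to parity; the 500 GB disk size is just an example:

```python
def usable_capacity(level, disks, disk_size_gb):
    """Usable space in GB for a few classic RAID levels."""
    if level == 0:
        return disks * disk_size_gb        # striping only, no redundancy
    if level == 1:
        if disks != 2:
            raise ValueError("classic RAID 1 mirrors exactly 2 disks")
        return disk_size_gb                # one full mirror copy
    if level == 5:
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_size_gb  # one disk's worth of parity
    raise ValueError("unsupported RAID level")

print(usable_capacity(0, 2, 500))  # -> 1000
print(usable_capacity(1, 2, 500))  # -> 500
print(usable_capacity(5, 3, 500))  # -> 1000
```

Note how RAID 5 gives the same usable space as a two-disk RAID 0 while surviving a single-disk failure, at the cost of slower writes.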

How to publish a software to users using GPO

1) In the AD Group Policy Management console, navigate to the OU where the user resides.
2) Right-click and select the option 'Create a GPO in this domain, and link it here'.
3) Provide a name for the GPO.
4) Right-click the GPO and click Edit.
5) Navigate to User Configuration -> Policies -> Software Settings -> Software Installation.
6) Right-click Software Installation and select New -> Package.
7) Select the installer package. Files with the extensions .msi and .zap can be selected.
8) Select the required option. You will see three options there: Published, Assigned and Advanced. Click OK.
     - If you want the user to install the application himself from Add/Remove Programs, select the Published option.
     - If you want the program to be installed when the user logs in, choose the Assigned option.
     - You can right-click the installation file in the GP Editor window and change the option between Published and Assigned.
     - If the option selected is Published, the opti…

What is new in Windows server 2012

Server Core improvements: no need for a fresh installation; you can add/remove the GUI from Server Manager.

Remote management: add/remove roles etc. using Server Manager; manage 2008 and 2008 R2 servers once WMF 3.0 is installed (it is installed by default in Server 2012). Remote Server Administration Tools are available for Windows 8 to manage a Windows Server 2012 infrastructure.

PowerShell v3.

Hyper-V 3.0: supports up to 64 virtual processors and 1 TB RAM per virtual machine, and up to 320 logical hardware processors and 4 TB RAM per host. Shared-nothing live migration lets you move VMs around without shared storage.

ReFS (Resilient File System), an upgraded version of NTFS: supports larger file and directory sizes, and removes the 255-character limitation on long file names and paths; the limit on the path/filename size is now 32K characters.

Improved CHKDSK utility that fixes disk corruption in the background witho…

What is new in VMFS 5

VMFS 5 enhancements are as follows:

- Unified block size: VMFS 5 offers a unified block size of 1 MB, compared to the multiple block sizes in VMFS 3. The size of the largest single VMDK file is not limited by this block size; the maximum VMDK size possible is 2 TB - 512 B.

- Smaller sub-block: The sub-block size in VMFS 5 is 8 KB, compared to the 64 KB sub-block of VMFS 3. This means that smaller files, with sizes less than 8 KB (but greater than 1 KB), consume only 8 KB of disk space in place of 64 KB. This reduces the amount of disk space hogged by small files, and thereby gives better utilization of disk space.

- Large single-extent volumes: In VMFS 3, the largest single extent possible was 2 TB. In VMFS 5, this has been increased to approximately 60 TB.

- Small-file support: For files smaller than 1 KB, a file descriptor location in the storage metadata is used rather than file blocks. When they grow above 1 KB, file blocks are used. This again reduces the disk space used by small files.

- In…
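The sub-block saving described above can be illustrated with a quick calculation. This is a simplified model based only on the sizes quoted in the post (real VMFS allocation has more rules than this), and the 4 KB file is a made-up example:

```python
def space_consumed_kb(file_size_kb, sub_block_kb, block_kb=1024):
    """Rough on-disk footprint of a small file on VMFS (toy model).

    Files up to the sub-block size occupy one sub-block; larger files
    round up to whole 1 MB file blocks.
    """
    if file_size_kb <= sub_block_kb:
        return sub_block_kb
    blocks = -(-file_size_kb // block_kb)  # ceiling division
    return blocks * block_kb

small_file_kb = 4  # hypothetical 4 KB file
print(space_consumed_kb(small_file_kb, sub_block_kb=64))  # VMFS 3 -> 64
print(space_consumed_kb(small_file_kb, sub_block_kb=8))   # VMFS 5 -> 8
```

So the same 4 KB file wastes 60 KB of slack on VMFS 3 but only 4 KB on VMFS 5, which is the "better utilization" claim above.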

VMFS 3 & VMFS 5 block size difference

The VMFS block size defines the maximum file size as well as the space occupied by a file. The following block sizes were available in VMFS 3: 1 MB, 2 MB, 4 MB and 8 MB. The block size decided the maximum possible size of a single VMDK file that you could create for a virtual machine. The maximum disk size was limited as follows:

1 MB - 256 GB
2 MB - 512 GB
4 MB - 1 TB
8 MB - 2 TB - 512 B

VMFS 5 offers a unified block size of 1 MB; larger blocks are not required to create disks of larger sizes. This means that the 1 MB block size of VMFS 3 is not the same as the 1 MB block size of VMFS 5. The maximum VMDK file size supported is 2 TB - 512 B.
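The VMFS 3 limits above follow a simple doubling pattern: each doubling of the block size doubles the maximum VMDK size, i.e. 256 GB per MB of block size. A few lines of Python can verify this against the figures in the table (ignoring the 512-byte subtraction on the 8 MB entry):

```python
GB = 1
TB = 1024 * GB

# VMFS 3: block size in MB -> maximum VMDK size in GB (values from the table)
vmfs3_limits = {1: 256 * GB, 2: 512 * GB, 4: 1 * TB, 8: 2 * TB}

for block_mb, max_gb in sorted(vmfs3_limits.items()):
    # Each MB of block size buys 256 GB of maximum file size
    assert max_gb == 256 * block_mb
    print(f"{block_mb} MB block -> {max_gb} GB max VMDK")
```

VMFS 5 breaks this coupling entirely: its 1 MB block still allows the full 2 TB - 512 B maximum, which is why the two 1 MB block sizes are not equivalent.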

DHCP DORA process

DORA, in simple words, is the process through which a DHCP client acquires an IP address from a DHCP server on the network.

D - Discover: When a machine boots up on the LAN without an IP address configured, it sends a DHCP discover broadcast to the network, with a destination IP address of It also includes its MAC address encapsulated in the packet. The layer 2 destination is ff:ff:ff:ff:ff:ff, i.e. all devices on the network. The switch port which received the packet then forwards it to all other ports on the switch except the one from which the request was received.

O - Offer: If there is a DHCP server listening on the network, it responds to the discover packet with an offer packet. The offer packet is again a broadcast to, but it has the destination MAC address set to the DHCP client's MAC address, and the source MAC address is that of the DHCP server. The offer packet contains the IP addres…

Is it mandatory to install AD and DNS on the same server?

The answer is yes, if it is the first DC in a forest. In that case DNS will be installed by default during the AD promotion process. This is done because DNS is integral to the proper functioning of AD; without a properly configured DNS, the AD infrastructure will not work.

Coming back to our question: do you need to install DNS on every AD server in your domain? The answer is no. If a DNS server already exists in the domain, you will get an option to choose whether you want to install DNS on the AD server. It will also list the available DNS servers in the domain. If you choose not to install DNS, that is perfectly fine; the AD server will use one of the available DNS servers. Just make sure that the DNS settings of your network card are configured correctly to point to one of those available DNS servers.