In computing, virtualization is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system (OS), storage device, or network resources. Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS.

Why Virtualize?

Virtualization allows an IT manager to manage systems with more flexibility and control, at a lower cost, and with added security. Disaster recovery is a built-in part of virtualization, because virtualized images can be used to recover all of a system's servers. Virtualization also allows a manager to set up multiple systems easily and quickly, which further reduces costs.

Flexibility and Control: Virtualization is an extremely flexible option because it allows IT managers to expand, shrink or move a virtual computer without modifying the hardware. With virtualization, data can be moved without affecting access to it. Data is no longer bound to a physical hard drive in a single desktop, which gives companies more flexibility to grow their data footprint or change their file storage environment.

Environmentally Friendly: Virtualization reduces the number of servers a company uses, which decreases the energy needed to operate and cool them. Without virtualization, many data centers run at only a fraction of their capacity and are therefore very inefficient at converting electricity into IT work. Virtualization reduces energy use by consolidating workloads so that systems run closer to peak utilization: fewer servers draw less power while delivering the same processing capacity. IT managers can also power off machines from a centralized location so that electricity and money are not wasted. All of this means a significant improvement in IT efficiency and a reduction in greenhouse gas emissions.

Cost Benefit: The primary financial benefit of virtualization is a reduction in hardware costs of up to 70%. Organizations can dramatically increase the efficiency of their existing data center through virtualization and potentially avoid the massive cost of building a new one. Instead of buying new computers every 3-5 years, older computers can run new applications through a virtualized server. Companies sometimes run only one application per server because they don't want a failing application to take down others; with virtualization, each computer becomes a multi-tasking machine, and multiple servers are consolidated into a computing pool that can adapt to large workloads. Other cost benefits include less time spent rolling out patches and updates, and lower utility costs, since virtual machines can be turned off from a centralized location.
Virtualization is a long-lasting solution that reduces the hassles IT managers face in managing, securing and upgrading computers. Through a virtualized system it is easier to keep desktops updated and secure, and the cost and environmental benefits companies reap from upgrading are an added bonus. GT has the competence to assist customers in evaluating, migrating to and deploying virtualization, converting their IT infrastructure into a complete virtual environment. GT has partnered with Microsoft, VMware and Citrix to provide virtualization solutions to its customers.


Citrix XenServer

Citrix XenServer is a server virtualization platform, developed by Citrix Systems and built on the Xen hypervisor, that enables IT administrators to create, host, deploy, manage and monitor virtualized server infrastructure. Its main components are the hypervisor itself, XenCenter integrated management and XenMotion live migration. The platform also delivers free tools for physical-to-virtual and virtual-to-virtual server conversion. It supports up to 500 virtual machines (VMs) and 4,000 virtual CPUs per host, and is designed to work specifically with Citrix Systems' XenApp and XenDesktop tools for desktop and application virtualization.



VMware

VMware is a virtualization and cloud computing software provider based in Palo Alto, California. Founded in 1998, VMware is a subsidiary of Dell Technologies: EMC Corporation acquired VMware in 2004, and EMC was itself acquired by Dell Technologies in 2016. VMware bases its virtualization technologies on its bare-metal hypervisor, ESX/ESXi, for the x86 architecture.

With VMware server virtualization, a hypervisor is installed on the physical server to allow for multiple virtual machines (VMs) to run on the same physical server. Each VM can run its own operating system (OS), which means multiple OSes can run on one physical server. All of the VMs on the same physical server share resources, such as networking and RAM.


Microsoft Hyper-V

Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released alongside Windows Server 2008 and has since been available without charge for all Windows Server and some client operating systems. Hyper-V is also used on the Xbox One, where it launches both the Xbox OS and Windows 10.

Hyper-V Server

Hyper-V Server 2008 was released on October 1, 2008. It consists of Windows Server 2008 Server Core with the Hyper-V role; other Windows Server 2008 roles are disabled, and only limited Windows services are available. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software; a menu-driven CLI and some freely downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection. However, administration and configuration of the host OS and the guest virtual machines is generally done over the network, using either Microsoft Management Consoles on another Windows computer or System Center Virtual Machine Manager. This allows much easier "point and click" configuration and monitoring of the Hyper-V Server.


Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. A hypervisor instance has to have at least one parent partition, running a supported version of Windows Server (2008 and later). The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V.
A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in Guest Virtual Address, which, depending on the configuration of the hypervisor, might not necessarily be the entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor, and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC). Hyper-V can hardware accelerate the address translation of Guest Virtual Address-spaces by using second level address translation provided by the CPU, referred to as EPT on Intel and RVI (formerly NPT) on AMD.
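The parent/child partition hierarchy described above can be sketched conceptually. This is a minimal illustrative model only, not the real Hyper-V hypercall API; all class and method names here are invented for the sketch.

```python
# Conceptual sketch of Hyper-V's partition model.
# Illustrative names only -- NOT the actual hypercall API.

class Partition:
    """A logical unit of isolation in which one guest OS executes."""

    def __init__(self, os_name, hypervisor, is_parent=False):
        self.os_name = os_name
        self.hypervisor = hypervisor
        self.is_parent = is_parent
        self.children = []

    def hv_create_partition(self, guest_os):
        # Only the parent partition, which runs the virtualization
        # stack, may ask the hypervisor to create child partitions.
        if not self.is_parent:
            raise PermissionError("child partitions cannot create partitions")
        child = Partition(guest_os, hypervisor=self.hypervisor)
        self.hypervisor.partitions.append(child)
        self.children.append(child)
        return child


class Hypervisor:
    """Tracks all partitions; must have at least one parent partition."""

    def __init__(self):
        self.partitions = []

    def boot(self, parent_os):
        # The parent partition runs a supported version of Windows Server
        # and has direct access to the hardware.
        parent = Partition(parent_os, hypervisor=self, is_parent=True)
        self.partitions.append(parent)
        return parent


hv = Hypervisor()
root = hv.boot("Windows Server 2019")           # parent partition
vm1 = root.hv_create_partition("Ubuntu 22.04")  # child partition (guest)
vm2 = root.hv_create_partition("Windows 10")
print([p.os_name for p in hv.partitions])
```

The key property the sketch captures is the asymmetry: only the parent partition can create children, while child partitions see only the virtualized resources handed to them.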

Child partitions do not have direct access to hardware resources; instead they have a virtual view of the resources, in terms of virtual devices. Any request to a virtual device is redirected via the VMBus, a logical channel that enables inter-partition communication, to the devices in the parent partition, which manages the requests; the response travels back over the VMBus as well. If the devices in the parent partition are themselves virtual devices, the request is redirected further until it reaches a partition with access to the physical devices. The parent partition runs a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects requests to the VSP in the parent partition via the VMBus. This entire process is transparent to the guest OS.
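The VSC-to-VSP request path can be modeled with a few lines of code. Again, this is a conceptual sketch with invented names; the real VMBus is a kernel-level channel, not a Python object.

```python
# Conceptual sketch of the VSC -> VMBus -> VSP request path.
# Illustrative only; names and message format are invented.

class VSP:
    """Virtualization Service Provider: runs in the parent partition
    and services device requests against the physical hardware."""

    def handle(self, request):
        return f"physical-{request['device']}: {request['op']} done"


class VMBus:
    """Logical inter-partition channel: carries requests to the parent's
    VSP and relays responses back to the child's VSC."""

    def __init__(self, vsp):
        self.vsp = vsp

    def send(self, request):
        # Redirect the request to the parent partition and return
        # the response over the same channel.
        return self.vsp.handle(request)


class VSC:
    """Virtualization Service Client: backs a child partition's virtual
    device and forwards its I/O over the VMBus."""

    def __init__(self, vmbus):
        self.vmbus = vmbus

    def io(self, device, op):
        # The guest OS sees only this virtual device; the redirection
        # below is transparent to it.
        return self.vmbus.send({"device": device, "op": op})


bus = VMBus(VSP())               # channel into the parent partition
disk = VSC(bus)                  # virtual disk in a child partition
print(disk.io("disk0", "read"))  # -> physical-disk0: read done
```

The guest only ever calls into its virtual device (the VSC); everything past that point, including the hop over the VMBus into the parent partition, is invisible to it.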

Virtual devices can also take advantage of a Windows Server Virtualization feature named Enlightened I/O for the storage, networking and graphics subsystems, among others. Enlightened I/O is a specialized, virtualization-aware implementation of high-level communication protocols, such as SCSI, that bypasses the device emulation layer and uses the VMBus directly. This makes communication more efficient, but requires the guest OS to support Enlightened I/O.

Currently, only the following operating systems support Enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than operating systems that must use slower emulated hardware:

Windows Server 2008 and later
Windows Vista and later
Linux with a 3.4 or later kernel