Let’s begin with a simple definition of Virtualization from Wikipedia: "Virtualization" is a term that refers to the abstraction of computer resources.
That is concise enough to begin this discussion. Virtualization can be applied to many types of computer resources: Storage, Network, and Compute (CPU, memory, etc.). We will concentrate on server virtualization for the purposes of this article.
There are several approaches to virtualizing servers, including grid approaches (where discrete workloads are distributed among multiple physical servers and the results eventually collected), OS-level virtualization (sometimes called containers, where multiple instances of an application run in isolation from one another on a single OS instance), and hypervisor-based virtualization (currently the most widespread).
Within hypervisor virtualization there are several sub-approaches to the same goal: running multiple workloads (defined here as operating systems, such as Windows or Linux, along with their applications) on a single physical host. For high availability, multiple hosts can be “pooled” together to form “clusters” or “farms,” often sharing the storage on which the virtual machines themselves reside. In this fashion, VMs (guests) can be moved from host to host merely by using a management tool to point the file(s) containing the VM to a different host. A common technique called Live Migration allows a virtual machine to be moved from host to host without being shut down and restarted. A technique called Distributed (or Dynamic) Resource Scheduling can use this capability to actively load-balance VMs among multiple hosts so as to most efficiently utilize the resources available to them.
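The load-balancing idea behind Distributed Resource Scheduling can be sketched in a few lines. The toy function below is purely illustrative (the names, numbers, and the greedy heuristic are my own, not any vendor's algorithm): given per-host CPU load and per-VM demand, it proposes one live migration from the busiest host to the least-loaded one if that migration would narrow the gap between them.

```python
def pick_migration(hosts, vms):
    """Suggest one rebalancing live migration, or None if already balanced.

    hosts: {host_name: cpu_load}
    vms:   {vm_name: (host_name, cpu_demand)}
    Returns (vm, source_host, destination_host).
    """
    src = max(hosts, key=hosts.get)   # busiest host
    dst = min(hosts, key=hosts.get)   # least-loaded host
    gap = hosts[src] - hosts[dst]
    best = None
    for vm, (host, demand) in vms.items():
        # Moving a VM of demand d shrinks the gap to abs(gap - 2*d),
        # so only VMs with 0 < d < gap actually improve the balance.
        if host == src and 0 < demand < gap:
            if best is None or abs(gap - 2 * demand) < abs(gap - 2 * best[1]):
                best = (vm, demand)
    return (best[0], src, dst) if best else None

hosts = {"host-a": 90, "host-b": 30}
vms = {"vm1": ("host-a", 50), "vm2": ("host-a", 25), "vm3": ("host-b", 30)}
print(pick_migration(hosts, vms))  # → ('vm2', 'host-a', 'host-b')
```

A real scheduler weighs memory, I/O, affinity rules, and migration cost as well, but the core loop — measure imbalance, propose the move that reduces it most — is the same.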
Hypervisors fall roughly into three categories: full virtualization (where the hypervisor presents emulated hardware to an unmodified guest, historically via binary translation of privileged instructions), paravirtualization (where the guest OS is modified to cooperate with the hypervisor), and hardware-assisted virtualization (which relies on CPU extensions such as Intel VT-x and AMD-V).
In reality, many of today’s hypervisors combine these techniques. Additionally, hypervisors are usually classified as Type I (“bare metal”), which run directly on a hardware platform (host), typically utilizing a “stripped-down” version of a standard OS to provide some services to the VMs (guests), or Type II, which run on top of a standard OS such as Windows or Linux.
The good thing for Cloud users is that the abstraction (which we recall is the key attribute of virtualization) means that one need not be concerned about any of the above. It is the Cloud provider who needs to select among the various platforms and live with the consequences.
Several mature choices exist, each having sufficient differences and features to ensure that there is no single choice.
VMware provides several products (both Type I and Type II), as does Microsoft. VirtualBox is a popular Type II product, though it is rarely used in large production environments. Xen and KVM are open-source solutions, although supported distributions are available commercially from companies like Citrix, Oracle, and Red Hat.
Type II solutions suffer from a disadvantage (real or imagined): reliance on an underlying operating system means that all the guest VMs inherit the vulnerabilities and weaknesses of that OS. Type I hypervisors are overwhelmingly preferred for large production environments.
While, as I said earlier, the end user is by and large spared the concern of which hypervisor a Cloud Provider selects, there is a material effort involved in converting current workloads to virtual versions to be used in the Cloud.
While I must stress that virtualization is by no means a necessity for Cloud Computing, its agility and cost-effectiveness ensure that many (if not most) Cloud Providers will offer hypervisor-based solutions, especially for Platform and Infrastructure as a Service.
Hypervisor vendors (and third parties) often provide P2V (Physical to Virtual) tools which can convert a workload on a physical server to a format compatible with a specific virtualization platform. As there are few standards among these formats, users need to be aware of the one chosen by their Cloud Provider. It is also possible to convert Virtual to Virtual (V2V), but bear in mind that these conversions are rarely trivial, fast, or without some downtime.
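The disk-image step of a V2V conversion is often done with the open-source qemu-img tool, which can translate between formats such as VMware's VMDK and KVM's qcow2. The sketch below only assembles the command line (a real script would then run it, e.g. via subprocess); the file names are examples, and the format allow-list is my own guardrail, not part of the tool.

```python
# qemu-img format names: note "vpc" is qemu-img's name for Microsoft VHD.
SUPPORTED_FORMATS = {"vmdk", "vpc", "qcow2", "raw"}

def v2v_command(src_path, src_fmt, dst_path, dst_fmt):
    """Build a qemu-img command line for a V2V disk conversion."""
    if src_fmt not in SUPPORTED_FORMATS or dst_fmt not in SUPPORTED_FORMATS:
        raise ValueError("unsupported disk format: %s -> %s" % (src_fmt, dst_fmt))
    # -f: source format, -O: output format
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt,
            src_path, dst_path]

print(v2v_command("web01.vmdk", "vmdk", "web01.qcow2", "qcow2"))
```

Converting the disk image is only part of the job: drivers, boot configuration, and network identity inside the guest usually need attention too, which is why V2V is rarely a zero-downtime exercise.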
An initiative called OVF (Open Virtualization Format) is being adopted to ensure some level of portability for applications among various virtualization platforms. By packaging applications into “virtual appliances” that do not depend on features of a specific hypervisor, it mitigates, to a degree, the issue of “vendor lock-in.” The format is a work-in-progress maintained by the DMTF (Distributed Management Task Force), but it is generally welcomed by the virtualization community.
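At its core, an OVF package is an XML descriptor plus the referenced disk files. The sketch below builds a heavily simplified envelope using only the standard library, just to show the shape of the format; a real .ovf descriptor carries many more required sections (disks, networks, hardware descriptions) and namespaces, so treat this as illustration, not a valid OVF file.

```python
import xml.etree.ElementTree as ET

# OVF 1.x envelope namespace, as published by the DMTF.
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def make_envelope(vm_name, disk_href):
    """Build a minimal, illustrative OVF-style envelope as an XML string."""
    env = ET.Element("{%s}Envelope" % OVF_NS)
    # References section: the package's external files (here, one disk image).
    refs = ET.SubElement(env, "{%s}References" % OVF_NS)
    ET.SubElement(refs, "{%s}File" % OVF_NS,
                  {"{%s}id" % OVF_NS: "file1", "{%s}href" % OVF_NS: disk_href})
    # VirtualSystem section: the virtual appliance itself.
    vs = ET.SubElement(env, "{%s}VirtualSystem" % OVF_NS,
                       {"{%s}id" % OVF_NS: vm_name})
    ET.SubElement(vs, "{%s}Info" % OVF_NS).text = "A single virtual machine"
    return ET.tostring(env, encoding="unicode")

print(make_envelope("web01", "web01-disk1.vmdk"))
```

Because the descriptor names the disks and the virtual hardware in a vendor-neutral way, any hypervisor with an OVF import path can, in principle, reconstruct the appliance.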
Possibly the greatest issue facing virtualization as it applies to Cloud Computing is that economics tends to favor multi-tenancy, which means that VMs belonging to multiple customers will likely reside on a common host (at least at some point). Customers often fear that their most valued intellectual property could be running right next to a server of their fiercest competitor. Guaranteeing security in such circumstances remains a stumbling block to Cloud adoption, but it could be relieved to an extent by paying a premium to guarantee that one’s VMs run on dedicated, isolated hosts. This could represent the ultimate “Private Cloud,” where an organization takes advantage of the benefits of Public Cloud without sacrificing security.
Jan Klincewicz is a Virtualization architect.