Client virtualization in a cloud environment: a complex landscape

21.12.2010
This article is based on the book “Creating the Infrastructure for Cloud Computing: An Essential Handbook for IT Professionals”

The virtualization models for clients are, arguably, more diverse than those for servers. For servers there are essentially two: the earlier model of static consolidation, and the more recent dynamic model in which virtual machines, loosely bound to physical hosts, can be moved around with relative ease.

With virtualized clients there are also two main models, depending on whether application execution takes place on servers in a data center or on the physical client.

But whereas server-based client virtualization services can be delivered through terminal services (session virtualization) or through virtual hosted desktops (virtual desktop infrastructure), client-based virtualization can be delivered through one of five computation models: operating system streaming, remote operating system boot, application streaming (application virtualization), virtual containers and rich distributed computing (rich clients).

The first four models are also known as dynamic virtual clients or DVCs, characterized by centrally managed application or operating system images and client-side execution.

Each variant exhibits specific management, security and TCO features. The specific choices are driven by the intended target application scenarios.

The presence of server-based clients, at least, may indicate a technology convergence between client and server products in the cloud space, a continuation of the trend that started when clients were used as presentation devices for traditional three-tier applications.

All of these models depend on NAS or SAN storage for accessing operating system images or for operating system and application streaming, and are delivered through either a desktop or laptop PC or a thin client.

Following are some considerations for evaluating client virtualization solutions.

* Client Devices and Compute Models: Conversations around compute models often get intertwined with the devices on which they will be deployed. The analysis becomes easier if devices and models are treated separately. For example, the business scenario may dictate server-based computing for a certain application, such as a patient information database. However, this “thin client” model need not be deployed on a thin terminal. A desktop or laptop PC may actually be a more appropriate device, depending on a user’s total application and mobility needs.

* Mixed Compute Models: In most cases, IT will deploy a mix of computation models depending on needs for data security, performance and mobility.  Individual users may have a hybrid of models.  For example, a construction estimator in the field may use a cellular modem to access the centralized job scheduling tool via a terminal server session, but also have Microsoft Office locally installed for word processing and spreadsheet work.

* Benchmarking Applications: There are no industry standard benchmarks for alternative compute models.  Under the current state of the art it is not meaningful to carry out performance comparisons across computation models. IT managers should evaluate performance claims carefully to understand applicability to their situations.

* Streaming and Application Virtualization: Streaming and application virtualization are not synonyms, even though they are often used interchangeably. Streaming refers to the delivery method of sending the software over the network for execution on the client. Streamed software can be installed in the client operating system locally or, in most cases, it can be virtualized. With application virtualization, streamed software runs on an abstraction layer and does not install in the operating system registry or system files. An advantage of application virtualization is that it can limit the continuous accumulation of randomness in the operating system registry and system folders (affectionately known as “bit rot”) that leads to system instability over time.

* Application versus Image Delivery: A helpful way to think of the models and how they fit with customer requirements is whether the problem needs to be solved at the application level or image level.  In this case, an image is the complete package of the operating system and required applications.  Some computation models solve application problems, some solve image problems.  It is important to understand the customer’s need in this area.  

* Public versus Private Images: When centrally distributing a complete desktop image with either virtual hosted desktop or operating system streaming, it is important to comprehend the difference between a common public image and a customized private image.  

Public images are standardized operating system and application stacks managed, patched and updated from a single location and distributed to all authorized users.  Files and data created by the applications are stored separately. Customization of the image is minimal, but since all users access a single copy of the OS and application, storage requirements are relatively small.

Private images are operating system and application stacks personalized to each user.  Although users enjoy a great deal of customization, each private image must be stored and managed individually, much like managing rich, distributed clients.  Current products do not allow private images to be patched or updated in their stored locations, but rather require them to be actively loaded and managed in-band, either on the server or the client.  The storage requirement of private images is much higher, since each user’s copy of the operating system and application must be stored.
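To make the storage trade-off between public and private images concrete, here is a back-of-the-envelope sketch in Python. The image sizes and user count below are illustrative assumptions, not measurements: a public image is stored once and shared, while private images are stored per user, so private-image storage grows linearly with the user population.

    # Rough storage comparison: one shared public image versus per-user
    # private images. All figures below are illustrative assumptions.

    IMAGE_SIZE_GB = 20   # assumed size of one OS + application stack
    USER_DATA_GB = 5     # assumed per-user files and settings, stored separately
    USERS = 500          # assumed user population

    # Public image: a single copy of the OS/application stack, plus user data.
    public_total = IMAGE_SIZE_GB + USERS * USER_DATA_GB

    # Private images: every user keeps a personalized copy of the full stack.
    private_total = USERS * (IMAGE_SIZE_GB + USER_DATA_GB)

    print(f"Public image model : {public_total:,} GB")   # 2,520 GB
    print(f"Private image model: {private_total:,} GB")  # 12,500 GB

Even in this simple sketch the private-image model needs roughly five times the storage, before any deduplication or thin provisioning a real product might apply.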

Let’s now briefly touch on the essential characteristics of each of the client virtualization models.

* Server-based Virtualization Models: Terminal services represent the quintessential server-based model. Here, the client is merely a display and input device. All computation is done centrally on the server and all data is stored in a datacenter. Nothing is persistent on the client. It is the most proven, reliable server-side model, harkening back to the days of mainframe computing. Remote Desktop Protocol (RDP) or Independent Computing Architecture (ICA) is used to deliver an image of the server-based application to a terminal viewer on the client and return keystrokes and mouse clicks to the server.

Most enterprises of significant size use terminal services for some applications and users. Bank tellers accessing the transaction system, call center workers entering orders into a database and healthcare professionals working with text-based patient records are examples where terminal services may be a good solution.
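As a small illustration of how thin the client side of this model can be, the following Python sketch writes a minimal Remote Desktop connection (.rdp) file. The server name, user name and output file name are placeholders, and only a few common settings are shown; a real deployment would normally distribute such settings through policy rather than a script.

    # Minimal sketch: generate a Remote Desktop (.rdp) connection file.
    # The host and user below are placeholders; real deployments add many
    # more policy-controlled settings (redirection, gateways, certificates).

    RDP_SETTINGS = {
        "screen mode id:i": "2",               # 2 = full-screen session
        "full address:s": "ts01.example.com",  # placeholder terminal server
        "username:s": "EXAMPLE\\jdoe",         # placeholder user name
    }

    def write_rdp_file(path: str, settings: dict) -> None:
        """Write key/value pairs in the simple 'name:type:value' .rdp format."""
        with open(path, "w") as f:
            for key, value in settings.items():
                f.write(f"{key}:{value}\n")

    write_rdp_file("patient_records.rdp", RDP_SETTINGS)

The point of the sketch is that the client holds nothing but a viewer and a connection definition; the application, its data and its state all remain on the server.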

The newest entry into server-side computing is virtual hosted desktops, more commonly known by VMware’s acronym VDI (Virtual Desktop Infrastructure). Given that additional vendors are creating products similar to VDI, we will use the generic term virtual hosted desktop (VHD) for this discussion.

Like terminal services, VHD is a server-side compute model. All computation and storage are centralized, with images of the user’s desktop pushed over the network to the client via RDP or another protocol. The major difference is that VHD offers each user their own complete virtual machine, including the OS, applications and settings. VHD is designed to replicate the user experience of a rich PC with all the management and security of server-side models.

* Client-based Virtualization Models: Streaming both the OS and applications combines the simplicity of a stateless client with the performance of local execution.  Here, the client is essentially “bare-metal” with no OS or applications installed.

At power-up, the operating system and applications are streamed to the client over the network, where they execute locally on the client’s own CPU, graphics processor, etc. Application data is usually stored in a datacenter. The client may be a PC with no hard drive, using main memory exclusively.
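A minimal sketch of the streaming idea follows, under the assumption of a block-oriented delivery protocol; the fetch_block function, block size and cache below are hypothetical stand-ins for whatever mechanism a real streaming product uses. The diskless client pulls only the blocks of the image it actually touches, caches them in memory and executes everything locally.

    # Conceptual sketch of on-demand image streaming to a diskless client.
    # fetch_block() stands in for the network protocol a real product uses;
    # the block size and in-memory cache are illustrative.

    BLOCK_SIZE = 64 * 1024               # assumed streaming block size (64 KB)
    block_cache: dict[int, bytes] = {}   # in-memory cache on the diskless client

    def fetch_block(block_number: int) -> bytes:
        """Placeholder for a network request to the streaming server."""
        return bytes(BLOCK_SIZE)         # pretend payload

    def read_image(offset: int, length: int) -> bytes:
        """Serve a read from the OS image, pulling missing blocks over the network."""
        data = bytearray()
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for block_number in range(first, last + 1):
            if block_number not in block_cache:      # only fetch what is touched
                block_cache[block_number] = fetch_block(block_number)
            data += block_cache[block_number]
        start = offset - first * BLOCK_SIZE
        return bytes(data[start:start + length])

    # Example: the client reads 4 KB at offset 1 MB; only one block is streamed.
    chunk = read_image(1024 * 1024, 4096)
    print(len(chunk), "bytes read;", len(block_cache), "block(s) cached")

Because blocks arrive on demand, the client can begin booting long before the full image has crossed the network, which is what makes the model practical over ordinary LAN links.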

Remote operating system boot is similar to operating system streaming in that it delivers a complete operating system and application image to a “stateless” PC whose hard drive has been deactivated or removed.  Unlike operating system streaming, remote operating system clients boot directly from the SAN, and the image is unmodified from the “gold” image that would be used on a local disk.  

Under the application streaming model, the operating system is locally installed, but applications are streamed on demand from the datacenter to the client, where they are executed locally. Streamed applications frequently do not install on the client operating system, but instead interface with an abstraction layer and are never listed in the operating system registry or system files (hence the term application virtualization that some vendors use).

This simplifies the interactions between the streamed application, other locally installed software and the operating system, virtually eliminating software conflicts and image management problems.  It can also effectively “sandbox” applications in isolated containers, allowing better security.  
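As an illustration of the abstraction-layer and sandbox ideas above, the following is a toy Python model of write redirection, not any vendor's implementation: anything the virtualized application tries to write into a shared system location is diverted into its own sandbox directory, so the underlying OS image stays clean. The class name, protected paths and application name are all hypothetical.

    from pathlib import Path

    class SandboxedFileLayer:
        """Illustrative abstraction layer: redirects writes aimed at shared
        system locations into a per-application sandbox directory."""

        # Hypothetical "protected" locations a virtualized app must not touch.
        PROTECTED = (Path("C:/Windows/System32"), Path("C:/ProgramData"))

        def __init__(self, app_name: str, sandbox_root: Path):
            self.sandbox = sandbox_root / app_name
            self.sandbox.mkdir(parents=True, exist_ok=True)

        def resolve(self, target: Path) -> Path:
            """Map a requested path into the sandbox if it falls inside a
            protected location; otherwise pass it through unchanged."""
            for protected in self.PROTECTED:
                try:
                    relative = target.relative_to(protected)
                except ValueError:
                    continue
                return self.sandbox / protected.name / relative
            return target

        def write_text(self, target: Path, data: str) -> Path:
            real = self.resolve(target)
            real.parent.mkdir(parents=True, exist_ok=True)
            real.write_text(data)
            return real

    # Usage: the streamed app "thinks" it writes to System32, but the file
    # actually lands in the sandbox, leaving the host OS image untouched.
    layer = SandboxedFileLayer("StreamedApp", Path("./sandbox"))
    print(layer.write_text(Path("C:/Windows/System32/app.ini"), "key=value"))

Removing the application is then a matter of deleting its sandbox, which is one reason the model resists the “bit rot” described earlier.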

Blade PCs repartition the PC, leaving basic display, keyboard and mouse functions on the user’s desk and putting the processor, chipset and graphics silicon on a small card (blade) mounted in a rack in a central unit. PC blades, unlike server blades, are built from standard desktop or mobile processors and chipsets. The central unit, which supports many individual blades, is secured in a datacenter or other IT-controlled space. In most cases, remote display and I/O are handled by dedicated, proprietary connections rather than by RDP over the data network.

Blades promise a higher level of manageability and security than distributed PCs through restricted physical access, software image policies and limits on the types of activities users can do on the client device.  OS, application and data storage is centralized in a storage network.

Blade PC vendors initially targeted a user-to-blade ratio of one-to-one, where each user was dynamically assigned a blade and had exclusive use of it. However, as blade solutions and virtualization software have advanced, most vendors are now enabling one-to-many capabilities.
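The dynamic assignment of users to blades can be pictured as a small connection-broker data structure. The Python sketch below is a hypothetical illustration, not a vendor product: with one session per blade it models the original one-to-one approach, and raising the per-blade session count models the newer one-to-many capability.

    # Toy connection broker: assign incoming users to blade PCs.
    # With sessions_per_blade = 1 this is the one-to-one model; raising it
    # models one-to-many. Blade and user names are illustrative.

    class BladeBroker:
        def __init__(self, blades: list[str], sessions_per_blade: int = 1):
            self.capacity = sessions_per_blade
            self.sessions: dict[str, list[str]] = {b: [] for b in blades}

        def assign(self, user: str) -> str | None:
            """Place the user on the least-loaded blade with free capacity."""
            candidates = [b for b, s in self.sessions.items()
                          if len(s) < self.capacity]
            if not candidates:
                return None                  # no blade available
            blade = min(candidates, key=lambda b: len(self.sessions[b]))
            self.sessions[blade].append(user)
            return blade

    broker = BladeBroker(["blade-01", "blade-02"], sessions_per_blade=2)
    for user in ["alice", "bob", "carol"]:
        print(user, "->", broker.assign(user))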

In conclusion, to virtualize or not to virtualize on the desktop no longer represents a critical planning question. The new question is, “What desktop environment strikes the balance between productive users and IT's need for increased manageability and security?”

Emerging client virtualization technologies such as operating system streaming, remote operating system boot, application streaming and virtual containers need to deliver a cost effective desktop solution tailored to each user scenario.

This means that the traditional desktop model may come to look, by comparison, insecure, inflexible and hard to manage, very much an anachronism in this context. Organizations will instead classify desktop users by criteria such as task-based, knowledge or power users and will deliver dynamic desktops accordingly.

Client virtualization is not just an emerging trend; it represents the future of the corporate PC.

About the Authors: Enrique Castro-Leon is an enterprise and data center architect and technology strategist for Intel Digital Enterprise Group, Bernard Golden is the CEO of Navica, a Silicon Valley IT management consulting firm, and Miguel Gomez is a Technology Specialist for the Networks and Service Platforms Unit of Telefónica Investigación y Desarrollo.  Visit the Intel Press web site to learn more about this book: http://www.intel.com/intelpress/sum_virtn.htm, or our Recommended Reading List for similar topics: www.intel.com/technology/rr
