by Roy Atkinson
Last Updated February 25, 2016

 

Never let anyone tell you that computers can’t be fooled. The essence of virtualization is fooling an operating system into thinking it’s installed on hardware.

At the dawn of the computerized workplace, most larger organizations had mainframes, and many people did their work on dumb terminals—little more than a screen and a keyboard—that were connected to the mainframes. The screens had green or amber characters, and the computers were operated by commands input at the terminals. All the real work of computing, such as it was, took place inside the mainframe. The network connecting the terminals to the mainframes didn’t have to be very robust because only keystrokes were being passed.

In the last two decades of the twentieth century, small personal computers appeared and became more and more powerful. The day-to-day work of computing largely moved out of the chilly, air-conditioned rooms where the mainframes were housed and began to take place right on people’s desks. Processors grew more powerful, and software developers quickly learned to take advantage of that power. Soon, desks were the habitat of gigaflop computers that consumed hundreds of watts of electricity and spewed heated air out into the office space.

Those specially chilled spaces didn’t go away, however. They stayed and became the home of servers. The servers provided the environment for industrial-strength applications like enterprise resource planning (ERP), customer relationship management (CRM), and many other multiuser applications. Servers handled larger and larger databases, took over the work of access control for our networks, and served up webpages for our intranets and extranets. The more work we needed to do, the more servers we bought and racked inside our data centers. Power became an enormous expense. Cooling became not only another huge expense, but also more and more difficult. Rack space began to run out.

When these servers were analyzed, administrators discovered that only 5–10 percent of their potential capacity was being used. Large investments in hardware, space, cooling, and power were going largely to waste.

Seemingly just in the nick of time, software engineers developed ways of creating virtual machines—computers that didn’t need to be constructed out of silicon and copper. Someone asked, “If we give an operating system all the information it needs about memory, processor, and storage, does it need to have all that in real hardware?” The answer was no, and thus virtual machines were born. These virtual machines shared whatever physical resources they needed with other virtual machines. Instead of loading in and racking up another server, the day arrived when the server could be created from a template in a matter of minutes and added to a pool of servers using the same physical machine. The physical hardware that was being used to run one application could now run many virtual servers, leading to far greater efficiencies. Again, as more and more work became computer-based, more and more servers were added, but without the high costs in power, cooling, and rack space, thanks to virtualization.
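
To make the template idea concrete, here is a minimal sketch of scripted provisioning. It assumes a KVM host managed through the open-source libvirt Python bindings; the machine name, sizing, and disk image path are purely illustrative, and commercial hypervisors offer their own, comparable provisioning interfaces.

import libvirt

# Minimal provisioning sketch: define a new virtual machine from a small
# XML template and power it on. The name, sizing, and disk path are examples.
TEMPLATE = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>{mem_mib}</memory>
  <vcpu>{vcpus}</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{disk}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def provision(name, mem_mib=2048, vcpus=2,
              disk="/var/lib/libvirt/images/base.qcow2"):
    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    xml = TEMPLATE.format(name=name, mem_mib=mem_mib, vcpus=vcpus, disk=disk)
    dom = conn.defineXML(xml)              # register the new virtual machine
    dom.create()                           # power it on
    conn.close()

if __name__ == "__main__":
    provision("app-server-07")             # a new server joins the pool in minutes

The particular toolchain matters less than the principle: a server is now a definition that can be stamped out on demand, not a box that has to be unpacked and racked.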

Soon, some hardware companies began to offer something called a thin client, a device that could connect a display, keyboard, and mouse to a server, not unlike the way dumb terminals used to work. With virtualization, however, they could connect to a desktop instance that looked like a full-blown Windows computer, not just lines of text on a screen. And that Windows PC was really just a virtual desktop running on a server, and likely a virtual server at that. Almost all of the computing power was leaving the desk again, heading back into the data center. Thin client desktops and laptops could do pretty much anything standard laptops and desktops could do (outside of very processor-intensive or graphics-intensive work). They consumed far less power than their full-powered counterparts (about one-tenth as much in most cases), cost less to buy, and were far easier to monitor, control, and back up, since no data were stored on the thin client itself.

Of course, all the traffic—optimized though it was—between the data center and the thin client relied upon a highly robust and speedy network, and advances were made in both fiber optic and copper network cable to allow for vastly improved speed and bandwidth at costs that organizations could afford. Storage area network (SAN) and network-attached storage (NAS) technologies also improved, allowing for greater flexibility and governance of user and application data file storage. All the technology was coming together.

Once the PC operating system had been virtualized, any device capable of running an interface to that virtual desktop could be used as a replacement for the desktop computer. Well-known companies began producing clients for tablets and smartphones that were able to connect to and use virtual desktops. In addition, these virtual desktops could be added, deleted, updated, managed, and modified from a central console, largely eliminating the need to visit various locations to install software or make changes, and simplifying the licensing and tracking of operating systems and software. Moreover, thin client laptops and desktops do not store user data locally, but rather in storage that is connected to the virtualized systems, either on server-attached disks or in storage area networks (SANs). The days of backing up thousands of individual hard drives could be coming to an end.
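
As a small taste of what that central console looks like at the scripting level, the sketch below walks the pool and reports which virtual machines are running. It again assumes a libvirt-managed host; a commercial VDI suite would surface the same kind of inventory through its own management API.

import libvirt

# Sketch of a central inventory view over a pool of virtual machines,
# assuming the same libvirt-managed host as the earlier example.
def pool_report(uri="qemu:///system"):
    conn = libvirt.open(uri)
    for dom in conn.listAllDomains():      # every defined desktop or server VM
        state, _reason = dom.state()       # libvirt returns a numeric state code
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
        print(f"{dom.name():<24} {status}")
    conn.close()

if __name__ == "__main__":
    pool_report()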

However, not all applications do well in a virtual environment. Some have specific hardware dependencies that virtual machines cannot successfully emulate. But in many cases, applications can live in the same types of virtual environments as operating systems, and, like the OS, be served up from data center or cloud to virtual machines.

This can resolve some thorny issues. Let’s say, for example, that a process is used by the finance department a few times a year, and that process is executed best in an application that will not run on any operating system newer than Windows 95. Instead of having to put an old PC aside and hope it will start up when it is needed, a Windows 95 virtual machine can be built and the application installed on it. When that process is needed, the employee simply has to start up the virtual instance of Windows 95 and run the application.
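
In scripted form, that on-demand pattern is just a lookup and a power-on. The sketch below assumes the legacy virtual machine already exists under the made-up name win95-legacy in the same libvirt-managed environment used above.

import libvirt

# Sketch: start a legacy virtual machine on demand. The domain name
# "win95-legacy" is invented for the example and must already be defined.
def start_if_stopped(name="win95-legacy", uri="qemu:///system"):
    conn = libvirt.open(uri)
    dom = conn.lookupByName(name)  # find the existing legacy VM by name
    if not dom.isActive():         # boot it only if it is not already running
        dom.create()
    conn.close()

if __name__ == "__main__":
    start_if_stopped()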

In the 2012 HDI Desktop Support Practices & Salary Report, we asked about the current state of virtualization, and the responses show that most organizations now have multiple virtual environments, including applications, desktops, and servers.

According to those responses, virtual servers, applications, and desktops are now very common. From the standpoint of efficiency and consistency, virtual environments are a huge step forward. Application licensing, inventory management, access controls and security, upgrades, and, perhaps most important, the recovery of assets when employees leave all become centralized.

On the downside—and there’s always a downside—virtual desktops require a connection to the data center or cloud. Since nothing more than a basic bootable system resides on the thin client, a stable network connection is required. This makes the road warrior’s life somewhat difficult, since hotel and coffee shop wireless connections are often overburdened by a high number of users and their varying data demands.

But what does all of this mean for support centers? The ability to configure, provision, update, patch, and upgrade users’ desktop environments from a central administrative console can make a huge difference in the way incidents are approached and resolved at the front line. The degree of control and administrative access the service desk has to perform these functions is a key determinant of first contact resolution and overall customer satisfaction. Desktop support’s ability to simply swap in a new thin client (or even mail one out to a remote user) for one that has failed—without having to worry about transferring data and tweaking user controls—gets people back to work faster and more easily.

All of that really means that the careful work of getting the right applications, with the right permissions and configurations, to the right people is a different type of job today than it has been in the past. The knowledge and expertise requirements are different as well: they revolve less around the workings of an individual operating system running on hardware, and more around careful configuration and management of the end-user experience and the ability to resolve incidents and fulfill requests through administrative systems.

To a great extent, a well-configured virtual infrastructure can be the basis for the efficient and flexible delivery of services to thin clients and many other types of devices. Consider the current “bring your own device” (BYOD) trend, for example. If an employee’s personal laptop can run the client for an organization’s virtualized infrastructure, many objections (concerns about data residing on personal hardware, etc.) can be laid to rest. In many cases, tablets and even smartphones can access virtual environments, depending on configuration, performance, and—of course—bandwidth. Consider the possibilities of extending the life of current hardware well beyond the planned lifecycle. An aging PC might not be able to run an installed version of Windows 7 (or the forthcoming Windows 8), but it may very well be able to access the virtual environment, obviating the need to replace it.

None of this implies a utopian future for support. Virtual environments are not suitable for every computing need. There will still be computers with specific hardware requirements, particularly those attached to imaging, measuring, and monitoring devices, and there will still be situations, such as working with complex database layouts on tablets, where a virtual environment would not be effective or efficient.

What may be more important, though, is the consideration that when a failure happens, it will “happen big.” Instead of one desktop PC failing, hundreds—even thousands—of virtual machines may go offline simultaneously because of a failure in the network or SAN, or because of a denial of service (DoS) attack. Knowing this, employees or customers may be hesitant to trust the technology, especially if the frontline and desktop support staffs—the voice and face of IT—can’t explain it clearly and briefly. Here’s the simple answer: Most of us watch television, and we understand that there aren’t little people hiding in the screen; the “real” action is taking place somewhere else. Virtualization is like that: You see it on your screen, but the real action is taking place in the data center or in the cloud.

 

Roy Atkinson is HDI’s senior writer/analyst. He is a certified HDI Support Center Manager and a veteran of both small business and enterprise consulting, service, and support. In addition, he has both frontline and management experience. Roy is a member of the conference faculty for the FUSION 12 Conference & Expo and is known for his social media presence, especially on the topic of customer service. He also serves as the chapter advisor for the HDI Northern New England local chapter.
