Trends 2006: Hardware Virtualization

Hardware chips have already made two major transitions in 2004-2005. First, the move to 64-bit computing, led by AMD's design and subsequently adopted by Intel, helped secure the ascendancy of the x86 architecture as the dominant one in the server space. It certainly helped that 64-bit operating systems were already available, led by Linux and Solaris and followed by Windows (do we hear a cheer for Open Source beating the innovation of the highest-cost, highest-profit Windows franchise?). Second, there has been the emergence of dual- and even multi-core processors – again led in the x86 space by AMD and followed by Intel. Dual cores put two processors on one chip with nearly linear increases in processing power.

The result of these two trends is that gains in chip processing power are no longer being driven by ever-faster cycle times, as clock speeds peak around 3.5-4GHz. Pushing processor speeds higher runs up against a number of constraints, including mismatches with bus speed, heat dissipation, and a variety of hardware and synchronization problems. So look for chipmakers to continue delivering Moore's Law improvements in processing power for at least another decade – but using new design strategies.
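As a rough illustration of what "Moore's Law improvements for another decade" implies, here is a back-of-the-envelope sketch (assuming the classic doubling of transistor budget roughly every two years; the function name is ours, purely illustrative):

```python
# Back-of-the-envelope Moore's Law projection: with clock speeds flat,
# growth comes from transistor budget (and hence core count) doubling
# roughly every two years.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Projected growth factor in transistor budget after `years`."""
    return 2 ** (years / doubling_period)

# Over a decade, holding clock speed near 3.5-4GHz, the transistor
# budget could still grow about 32x -- room for many more cores.
print(round(moores_law_factor(10)))  # -> 32
```

The point of the arithmetic: even with cycle times stalled, a 32x transistor budget leaves plenty of headroom for the multi-core and on-chip-virtualization strategies discussed here.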

However, on the desktop, even with Apple and Microsoft applying major changes to their GUIs – including transparency, 3D designs, and larger virtual workspaces – most of that computing power is really lost. Only multimedia such as video and animations, plus some specialized simulation, gaming, and math modeling/analysis programs, can use the extra computing power. Speech recognition and other AI applications may also draw big chunks of computing power – but that may come through a specialized chip.

So the price of desktop units will continue to drop by a factor of two or more every 18 months or so. For all software vendors, but especially the highest-cost providers (read Microsoft in operating systems and Office applications; Adobe in graphics; IBM, Microsoft, and Oracle in databases; etc.), the handwriting is on the wall: pricing at significant premiums to the hardware will, in the face of ever more viable Open Source and low-cost competitors, be ever less tenable. 2006 should mark the first major switch – look for it in desktop utilities, office applications, and graphics.

Emergence of x86 as Dominant Desktop and Server Hardware

The other consequence of the hardware trends just mentioned has been the emergence of x86 as the dominant hardware on desktops (both Sun Solaris and, most notably, Apple's Mac are now x86-based). This will lead to the first round of a Thrilla in Manila square-off between Apple and Microsoft for desktop supremacy. But it also means that the very high end of servers is open to inroads from x86. And yet again, AMD is leading the charge. Intel may be handicapped because these high-end server markets are reserved for its new Itanium chips.

What AMD is doing is bringing on-chip a lot of operations and hardware switches for virtualization, partitioning, and other operating system context switching. Virtualization allows two different operating systems to run on the same computer simultaneously – Solaris and Linux, or Linux and Windows. Partitioning is a throwback to the 1970s. IBM MVS veterans will recognize the old partitions, which had assigned memory, CPU, and disk resources for specific tasks or batch jobs. The same ideas apply here, but the allocation of resources is more dynamic, in keeping with the concepts of on-demand and utility computing. In short, server-side operations will be greatly enhanced because critical OS operations will be the beneficiaries of some clever hardware performance boosts. These improvements will over time put the x86 architecture in direct competition with high-end Power processor, Sparc, Itanium, and IBM Z-series mainframe offerings – probably accelerating IBM's exit trend from the hardware side of the computing business.
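The contrast between fixed MVS-style partitions and today's more dynamic, on-demand allocation can be sketched in a toy model (all class and method names here are hypothetical and purely illustrative – not any vendor's actual API):

```python
# Toy model: static 1970s-style partitions vs. dynamic on-demand
# allocation. Names are illustrative, not a real partitioning API.

class Partition:
    def __init__(self, name: str, cpus: int, mem_gb: int):
        self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

class StaticHost:
    """MVS style: a partition's resources are fixed at creation."""
    def __init__(self, cpus: int, mem_gb: int):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.partitions = []

    def carve(self, name: str, cpus: int, mem_gb: int) -> Partition:
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("not enough free resources")
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        p = Partition(name, cpus, mem_gb)
        self.partitions.append(p)
        return p

class DynamicHost(StaticHost):
    """On-demand style: a partition's share can be resized live."""
    def resize(self, p: Partition, cpus: int, mem_gb: int) -> None:
        dc, dm = cpus - p.cpus, mem_gb - p.mem_gb
        if dc > self.free_cpus or dm > self.free_mem:
            raise RuntimeError("not enough free resources")
        self.free_cpus -= dc
        self.free_mem -= dm
        p.cpus, p.mem_gb = cpus, mem_gb

host = DynamicHost(cpus=8, mem_gb=32)
web = host.carve("internet-facing", cpus=2, mem_gb=4)
host.carve("core-infrastructure", cpus=4, mem_gb=16)
host.resize(web, cpus=4, mem_gb=8)    # shift resources on demand
print(host.free_cpus, host.free_mem)  # -> 0 8
```

A static host would have to tear down and re-carve a partition to change its share; the dynamic version reallocates while partitions run, which is the essence of the utility-computing pitch.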

Another payoff for virtualization and partitioning is the ability to isolate higher-security-risk, Internet-facing tasks from the core organizational infrastructure. But of course one of the problems with multi-core and on-demand computing is how to account for software licensing. As well, maintenance, recovery, back-up, and hot-swapping all take on new challenges when the operating workspace is potentially so dynamic. Finally, look for some of these tricks of the trade to make their way to the desktop – perhaps in a home center system partitioned to support online, media center, and house control tasks in a robust fashion, or a special partition to talk to the Web and fend off virus/spyware attacks.

In sum, the broadly available general-purpose computing launched by the PC era has already started to fracture into all sorts of specialized embedded and mobile devices. Now, on the server side, that same sort of specialization by virtualization and partitioning will begin to proliferate as well. The last ingredient will be the emergence of specialized plug-in hardware for speech recognition, gesture and handwriting decipherment, auto-discovery, and other specialized but high-payoff tasks worthy of hardware acceleration. This should be the balance point to foster new computing device innovation.

(c)JBSurveyer 2005
