The venerable Intel microprocessor architecture is entering its fourth decade. Is it time for a change?

(Computerworld) It's impossible to look at the x86 family of microprocessors without wondering if, after three decades of dominance, the architecture might be running out of steam. Intel, naturally, says the x86 still has legs, while hastening to add that its battles with competing architectures are far from over.

Justin Rattner, Intel's chief technology officer, cites the architecture's flexibility as a key to both its past and future success. Although people often refer to the x86 instruction set as though it were some kind of immutable specification, he says, both the instruction set itself and the architecture that implements it have gone through tremendous evolution over the years.

For example, he elaborates, the x86 beat back an assault in the 1990s from a raft of specialized media processors with its built-in MMX and SSE instruction set extensions, which greatly sped up the number-crunching needed for multimedia and communications applications. He also cites advancements that have been added to the chip and refined over the years, such as hardware support for memory management and virtualization.
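To give a concrete flavor of what such extensions do (an illustrative sketch, not an example from Intel), the short C program below uses the standard SSE intrinsics from <xmmintrin.h> to add four pairs of single-precision floats with a single instruction -- the kind of data-parallel arithmetic that multimedia codecs lean on.

```c
/* Illustrative SSE sketch: add four float pairs at once.
   Compile on an SSE-capable x86 with, e.g.:  gcc -msse -O2 sse_add.c */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4]   = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    __m128 va   = _mm_loadu_ps(a);      /* load 4 floats into one 128-bit register */
    __m128 vb   = _mm_loadu_ps(b);
    __m128 vsum = _mm_add_ps(va, vb);   /* one ADDPS instruction adds all 4 lanes */
    _mm_storeu_ps(out, vsum);

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```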

Equally important, Rattner notes, is that Intel has maintained backward compatibility across the x86 family at each step of the evolution. Advances in the instruction set plus intra-family compatibility have enabled the x86 to span a very wide range of single-user and enterprise computers, from portables to supercomputers.

David Patterson, a computer science professor at the University of California, Berkeley, says, "It's important to understand that the x86 is not a frozen design. They have added about one instruction per month for 30 years. So they have something like 500 instructions in the x86 instruction set [now], and every generation they add 20 to 100 more. Backwards compatibility is sacrosanct, but adding new things for the future happens all the time."

A shift in strategies

Even without its application-specific advances, the performance improvements from the x86's long march to the tune of Moore's Law would by themselves rank it among the more remarkable of IT success stories. The 8086 introduced in 1978 ran at up to 10 MHz and had 29,000 transistors. A 3-GHz, quad-core Intel desktop processor today is 300 times faster and has 820 million transistors -- roughly 28,000 times as many -- in a slightly larger package and at a comparable cost.

"There have been tremendous technical challenges in continuing to shrink the size of transistors and other things, and Intel has invested tremendously in that," says Todd Mowry, a computer science professor at Carnegie Mellon University and an Intel research consultant. One of those challenges led to what Intel calls a "right-hand turn" at the company; heat became such a problem as circuits shrank that now performance advancements can come only from adding more processor cores to the chip, not from increasing the clock speed of the processor.

And that, in turn, has shifted the quest for performance from hardware to software, Mowry says. "In the research community now, the focus is not so much on how do we build one good core as much as on how do we harness lots of cores."

One of the most promising approaches today to exploiting the parallelism in multicore chips is the use of something called "software transactional memory," he says. That's a way to keep parallel threads from corrupting shared data without having to resort to locking, or blocking, access to that data. It's an algorithmic approach -- primarily a job for software -- but support for the technique can be built into the x86 hardware, he notes.
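As a rough illustration of the programming model -- using GCC's software transactional memory extension rather than the x86 hardware support Mowry alludes to -- the sketch below lets two threads update a shared counter inside atomic transactions instead of taking a lock. It assumes a GCC build with -fgnu-tm and its libitm runtime.

```c
/* Minimal STM sketch using GCC's transactional-memory extension.
   Build with:  gcc -fgnu-tm -pthread stm_counter.c */
#include <pthread.h>
#include <stdio.h>

static long balance = 0;

static void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* The STM runtime detects conflicting updates from the other
           thread and retries the transaction; no explicit lock is taken. */
        __transaction_atomic {
            balance += 1;
        }
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %ld\n", balance);   /* expected: 200000 */
    return 0;
}
```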

Mowry says the only limit to the continued addition of more cores to processor chips is the ability of software developers to put them to good use. "The biggest hurdle is to go from thinking sequentially to thinking in parallel," he says.
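A small sketch of what that shift looks like in practice (our example, not Mowry's): the loop below sums an array once sequentially and once with a standard OpenMP pragma that spreads the iterations across whatever cores are available, merging the per-thread partial sums with a reduction.

```c
/* Sequential vs. parallel summation.  Compile with:  gcc -fopenmp -O2 sum.c */
#include <omp.h>
#include <stdio.h>

#define N 10000000L

static double data[N];   /* static so the large array is not on the stack */

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 0.5;

    /* Sequential: one core walks the whole array. */
    double serial_sum = 0.0;
    for (long i = 0; i < N; i++)
        serial_sum += data[i];

    /* Parallel: OpenMP divides the iterations among the available cores,
       and the reduction clause combines each thread's partial sum. */
    double parallel_sum = 0.0;
    #pragma omp parallel for reduction(+:parallel_sum)
    for (long i = 0; i < N; i++)
        parallel_sum += data[i];

    printf("serial=%.1f parallel=%.1f max_threads=%d\n",
           serial_sum, parallel_sum, omp_get_max_threads());
    return 0;
}
```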

Rattner predicts that we'll see core counts in "the low hundreds" per chip in the next five to seven years. Since each core will be multithreaded, those chips might support something on the order of 1,000 parallel threads of execution, he says. But, he concedes, "there aren't too many people walking around the planet today who know how to make use of 1,000 threads."

New developments

Rattner mentions some other "pretty interesting" things being developed in Intel's labs. For example, he says the x86 will include new hardware support for security -- "making it more robust in the face of belligerent attacks" -- but he declines to elaborate.

He also points to the coming x86-based Larrabee chip, a graphics processing unit to compete with the dedicated GPUs from NVIDIA Corp. and the ATI unit of Advanced Micro Devices Inc. Larrabee will contain "an entirely new class of instructions aimed at visual computing," he says. Unlike the highly specialized GPUs of its competitors, he adds, Larrabee is significant because it is an extension of the general-purpose x86 architecture. "Here we are making a strong assertion about the robustness and durability of the architecture, that we can take it into domains that most people felt were beyond its capabilities."

AMD apparently has a similar plan. In January, the company said it would introduce a hybrid CPU-GPU chip called Fusion as an extension of the existing Phenom line of processors. It will ship first in 2009 as a dual-core unit for notebook computers, the company said.

VIA Technologies, which just announced its VIA Nano processors (formerly code-named Isaiah) for the mini-notebook market, says it will continue to target mobile systems with its power-efficient x86 line but will also edge toward the desktop.

Asked whether some brand-new microprocessor architecture could come along and blow the x86 out of the water, Rattner says the architecture is still partly protected by the vast inventory of Wintel software that helped save it from the threat posed by RISC processors in the late 1980s. "Unless you can come in and say -- and I think this has been the challenge for [the high-end, non-x86 Intel] Itanium -- that if you use this different instruction set you'll get five times better performance, there just isn't a big enough incentive to switch."

The sky's the limit?

But that's not to say the x86 instruction set won't be implemented in entirely new ways as silicon transistors increasingly bump up against the laws of physics. For 40 years, transistors have been located just under the surface of the silicon wafer. Now, technology is emerging to allow them to be placed on top of that surface.

That would make it possible to build the transistors out of materials other than silicon, materials like gallium arsenide that have better energy and performance characteristics. "We won't be at the surface for another generation or two [about two to four years]," Rattner says, "but the decade ahead will see a lot of innovation in materials."

"There is an inherent limit for the x86 at the low end, for something like your toaster or the fuel injector in your car," says Glenn Henry, president of the Centaur unit of VIA. "And there probably is a limit at the very high end if you are going to do something like simulate atomic bombs. In between, the x86 has proven over and over that it can adapt."

While Intel is working to develop new transistor-based electronics, Rattner says the company is "not more than dabbling" in more far-out possibilities such as processors for quantum and DNA computing. "Those really change the mathematical foundation of computing and are much more risky," he explains. Moreover, he says, they are likely to be restricted to narrow application domains, not to general-purpose computing.

Mowry predicts that a move to those esoteric technologies is 20 years out. "My guess is it won't be until we really start reaching the end of what we can do with conventional technologies that people will get serious about these things," he says. "When you are trying to build wires out of single strands of atoms, things get very strange, and you don't know what to do exactly."