There’s no denying that some people are willing to pay a premium for speed. Business travelers take a plane rather than a ship, and commuters flag down a cab, spending a few extra bucks, just to get to the office on time. This unquenchable need for speed is especially apparent in the broadband industry, where growth has been rapid as users abandon dial-up in favor of DSL to get faster download speeds.

On the computer hardware side, chip manufacturers are investing billions of dollars to keep up with Moore’s Law. For years, system designers have struggled with the performance discrepancy between microprocessors and hard disk drives: technological advances have produced exponential increases in processor speeds, while storage access times have improved only marginally. As a result, fast processors are forced to slow down and wait for mechanical storage devices to deliver their data.

This quest for faster processing gave birth to solid-state disks (SSDs). By attacking the latency problem with solid-state memory, SSDs have narrowed the CPU-storage performance gap, giving networks faster transactions and greater overall productivity.

Processor makers, however, have run into trouble improving performance by simply raising operating frequencies. To push the performance bar a few notches higher, the industry turned to an innovative architecture. Multi-core technology, which places two or more powerful computing cores on a single processor, promises better handling of applications such as complex 3D simulations, large databases, streaming media, and more sophisticated user interfaces. By giving each core its own cache, multi-core systems have sufficient resources to handle most compute-intensive tasks in parallel.
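The kind of parallelism multi-core chips enable can be sketched in a few lines of Python. This is only an illustration, not anything specific to the processors discussed: the task, the per-core chunking scheme, and the use of the standard multiprocessing module are my own choices. The idea is that each worker process can be scheduled onto its own core, so a CPU-bound job split into per-core chunks runs concurrently.

```python
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """CPU-bound task: sum the integers in [start, end)."""
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n = 1_000_000
    cores = cpu_count()
    # Split the range into one chunk per core so each core
    # can chew on its own slice of the work in parallel.
    step = n // cores
    chunks = [(i * step, (i + 1) * step) for i in range(cores)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Sanity check against the closed-form sum 0 + 1 + ... + (n-1)
    assert total == n * (n - 1) // 2
```

A single-threaded program gains nothing from extra cores; only work that can be divided like this benefits from the multi-core design.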

Although IBM introduced the first dual-core design with the POWER4 in 2001, the architecture has only slowly gained momentum among other major suppliers. AMD launched its dual-core Opteron server/workstation processors in April 2005, the same month that Intel announced its Pentium® processor Extreme Edition. A month later, AMD rolled out its dual-core desktop processors, the Athlon 64 X2 family.

Intel opened the year with a bang by releasing the Core Duo, a dual-core chip that is the heart and soul of Apple’s latest line of iMacs and MacBook Pros. A bigger bang awaits the industry, however, with the impending launch of the multi-core “Cell” processor, a joint project of IBM, Sony, and Toshiba. Its developers claim that the 64-bit Power-based architecture, powered by eight “synergistic processor cores,” can deliver 10 times the performance of the latest PC processors in entertainment and rich-media applications, thanks to “supercomputer-like floating-point performance” and “clock speeds in excess of 4GHz.”

Sony’s PlayStation 3 video game console will contain the first production application of the Cell processor. On the enterprise side, IBM demonstrated a blade server prototype based on two Cell processors running the 2.6.11 Linux kernel. Although the processors ran at 2.4–2.8 GHz, IBM expects to run them at 3.0 GHz, providing 200 GFLOPS of theoretical single-precision floating-point performance per CPU, or about 400 GFLOPS per board. By arranging seven blades in a single rack-mount chassis, IBM estimates a total theoretical performance of 2.8 TFLOPS (or 284 GFLOPS in double precision) per chassis.
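The chassis figure follows from straightforward multiplication of the numbers quoted above; a quick back-of-the-envelope check (variable names are purely illustrative):

```python
# Theoretical single-precision throughput, per the IBM prototype figures above.
GFLOPS_PER_CPU = 200        # theoretical single-precision GFLOPS per Cell CPU
CPUS_PER_BLADE = 2          # each blade carries two Cell processors
BLADES_PER_CHASSIS = 7      # seven blades per rack-mount chassis

gflops_per_blade = GFLOPS_PER_CPU * CPUS_PER_BLADE           # 400 GFLOPS
gflops_per_chassis = gflops_per_blade * BLADES_PER_CHASSIS   # 2800 GFLOPS
print(f"{gflops_per_chassis / 1000:.1f} TFLOPS per chassis")  # prints "2.8 TFLOPS per chassis"
```

These are theoretical peak numbers, of course; sustained application performance would be considerably lower.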

With these mind-boggling capabilities on the horizon, the microprocessor industry is again on the verge of breaking the CPU-storage performance gap wide open. Dual- and multi-core processors thrive on multithreaded applications, and despite significant advances in storage access times, an entire network will slow to a crawl if the performance gap is not resolved. By the looks of it, storage device manufacturers will bite the dust once again. Or will they?