The
first ripples are beginning to appear. Major vendors, including Intel
Corp., Silicon Graphics Inc., Hewlett-Packard Co. and Microsoft Corp.,
are quietly collaborating on plans to create a new type of commercial
systems architecture--one we call Computing Fabrics.
It
will combine inexpensive microprocessors from the PC world with
technologies from the world of technical computing, where massively
parallel supercomputers already bring together the power of hundreds
of processors in one machine.
Computing
Fabrics will take these supercomputer architectures a step further.
The new architecture will distribute the power of these massively
parallel computers across corporate campuses by erasing the distinctions
between networks and computer architectures. It will link thousands
of processors and storage devices into a single system while maintaining
latencies similar to those of today's multiprocessing servers.
The
result for corporations prepared to take advantage of Computing
Fabrics will be huge amounts of cheap computing power that will
allow for applications such as massive data warehouses and advanced,
distributed supply chain management systems. Such products aren't
possible with today's client/server and network computing architectures.
A Computing
Fabric will consist of nodes--packages of processors, memory and
peripherals--linked together by interconnects that allow thousands
of processors to communicate. Within the Fabrics are regions of
nodes and interconnects that are so tightly coupled they appear
as single nodes. These are cells.
Tight
coupling within cells is achieved with hardware, software or both.
Cells in the Fabric are then loosely coupled with each other and
the Fabric as a whole. Each cell can grow or shrink in a dynamic
fashion, meaning that nodes and links can be added and removed.
Fluid
system boundaries are the essence of Computing Fabrics. The many
processors in a Computing Fabric can come together as a tightly
coupled system one moment. They can become loosely coupled with
other systems in the Fabric the next moment. Then they can dissipate
and reassemble, essentially allocating processing cycles on demand.
Flexibility
like that is not possible with today's commercial mainframes, SMP
systems and even nonuniform memory systems.
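To make this structure concrete, the following rough Python sketch
models the relationship between nodes, cells and the Fabric as we
have described it. The class names, methods and capacities are our
own illustrative inventions, not any vendor's published interface.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A package of processors, memory and peripherals."""
    name: str
    cpus: int
    memory_gb: int

@dataclass
class Cell:
    """A tightly coupled region of nodes that appears as one system."""
    name: str
    nodes: list = field(default_factory=list)

    def add(self, node):
        # Cells can grow at run-time...
        self.nodes.append(node)

    def remove(self, node):
        # ...and shrink, returning the node to the wider Fabric.
        self.nodes.remove(node)

    @property
    def cpus(self):
        return sum(n.cpus for n in self.nodes)

@dataclass
class Fabric:
    """A loosely coupled collection of cells drawing on one node pool."""
    cells: list = field(default_factory=list)

# Two nodes coalesce into a cell, then one is released for use
# elsewhere: the fluid boundary in miniature.
fabric = Fabric()
warehouse = Cell("warehouse")
fabric.cells.append(warehouse)
a = Node("a", cpus=8, memory_gb=4)
b = Node("b", cpus=8, memory_gb=4)
warehouse.add(a)
warehouse.add(b)
print(warehouse.cpus)   # 16: the cell looks like one larger system
warehouse.remove(b)     # the boundary shifts; b is free for another cell
print(warehouse.cpus)   # 8

The point of the sketch is that cell membership is data rather than
wiring: the same nodes can be regrouped on demand without anyone
physically reconfiguring a machine room.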
Key
vendors are already working hard to make Computing Fabrics a reality.
SGI, for example, is at the crest of the wave. The company recently
announced plans to move all of its systems to Intel processors--even
its largest SGI Cray supercomputers. That will be a first step in
the fusion of high-performance, massively parallel technical computing
and inexpensive commodity hardware.
And
it's just the beginning. SGI, augmented by its Cray Research Inc.
acquisition, is applying its expertise in distributed, modular systems
to the lower-cost commercial computing space.
SGI
officials said the company plans, within two years, to merge its largest
shared-memory multiprocessor server, the Origin line, with the SGI
Cray T3E, a massively parallel machine that interconnects more than
2,000 Alpha processors. The merged system will be called SN1 (Scalable
Node 1).
Although
SN1 will initially ship with a MIPS Technologies Inc. R14000, the
final entry in the MIPS processor line, it will be field-upgradable
to Intel's IA-64 Merced processors. With its explicitly parallel
instruction set--the approach HP and Intel call Explicitly Parallel
Instruction Computing--and its large register files, the IA-64 will
be well-suited to large multiprocessors. With Intel's backing, the
IA-64 is also destined
to become a high-volume commodity processor.
Total
integration of massive technical architectures and lower-cost commodity
components will occur by 2002, when SGI will merge its Cray SV1
(Scalable Vector 1) supercomputer with the SN1 to create the SN2.
Computing
Fabrics will emerge as supercomputer technologies such as SN1 and
SN2 make their way into the commercial computing space. These technologies
include modularly scalable interconnect technologies, such as SGI's
CrayLink; distributed shared-memory architectures, such as SGI's
ccNUMA (cache-coherent Non-Uniform Memory Access) and Sequent Computer Systems
Inc.'s NUMA-Q; and high-performance hypernetworks, such as those
based on the High Performance Parallel Interface-6400 ISO standard.
Also
critical will be a cellular operating system that can coordinate
processors linked in a distributed Fabric. SGI is working on such
an operating system, Cellular Irix, which is due in the first quarter
of 2000.
Microsoft
has expressed interest in Cellular Irix's development and may, we
believe, license it and integrate elements into its Windows NT kernel.
Microsoft is also working on its own implementation of Computing
Fabrics in its Millennium distributed object project, which will
take advantage of the Virtual Interface Architecture clustering
model being developed by Microsoft, Compaq Computer Corp. and Intel.
Initially,
Computing Fabrics' architecture boundaries will be static, defined
manually at system startup. Computing Fabrics with fluid system
boundaries will evolve over time. Eventually, systems such as Cellular
Irix will enable reconfiguration as a dynamic system property, alterable
at run-time.
Ultimately,
Cellular Irix will evolve to support system-driven reconfiguration
based on workloads and predefined policies. These elements differentiate
Computing Fabrics from clusters of bus- and switch-based multiprocessor
systems with their inherently rigid system boundaries.
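Such a policy could be as simple as shifting a node from an idle
cell to an overloaded one. The Python sketch below is purely
illustrative: the thresholds, cell names and load figures are
invented, and a real cellular operating system would apply policies
like this inside the kernel, not in application code.

def rebalance(cells, load, high=0.8, low=0.2):
    """Move one node from the least-loaded cell to the most-loaded
    cell when their utilizations cross the given thresholds.

    cells: dict mapping cell name -> list of node names
    load:  dict mapping cell name -> utilization between 0.0 and 1.0
    """
    busiest = max(cells, key=lambda name: load[name])
    idlest = min(cells, key=lambda name: load[name])
    if load[busiest] > high and load[idlest] < low and cells[idlest]:
        node = cells[idlest].pop()      # shrink the idle cell...
        cells[busiest].append(node)     # ...and grow the busy one
        return node
    return None

cells = {"warehouse": ["n1", "n2"], "web": ["n3", "n4", "n5"]}
load = {"warehouse": 0.95, "web": 0.10}
print(rebalance(cells, load))           # "n5" joins the warehouse cell
print(cells)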
The
inexpensive processing power produced by Computing Fabrics will
create a market where processing cycles are purchased as needed
and are available from a variety of sources. Processing cycles available
for applications will be offered by Internet service providers and
via central office switches, cable head ends and junction boxes.
But
Computing Fabrics won't just mean more available processing. They'll
also mean better utilization of whatever processing is in place.
With Computing Fabrics, the total compute cycles within offices,
schools, labs, factories and neighborhoods may for the first time
be exploited, since the processors and memory on their Fabrics will
rarely sit idle. Management will move beyond total cost of ownership,
looking increasingly at overall exploitation of cycles and the incremental
return on distributing new cycles.
Computing
Fabrics will also make it easier to develop and deploy distributed
systems. Today, development of distributed systems using technologies
such as Distributed Computing Environment, Common Object Request
Broker Architecture and Distributed Component Object Model takes
months or even years. An inherently distributed architecture such
as Computing Fabrics will shrink that development time to weeks,
days or hours. Ultimately, development and deployment of distributed
systems could become a real-time process, with the emphasis on finding
the desired information, software and processing resources on the
Fabric and having them properly configured.
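In practice, finding and configuring resources might look less like
months of middleware plumbing and more like a query against a
directory of Fabric resources. The sketch below is a thought
experiment only; the directory, its fields and the service names are
assumptions made for illustration.

def find_resources(directory, min_cpus=1, min_memory_gb=1, services=()):
    """Return the fabric resources that meet the requested capacity
    and already host the required services."""
    return [
        entry for entry in directory
        if entry["cpus"] >= min_cpus
        and entry["memory_gb"] >= min_memory_gb
        and set(services) <= set(entry["services"])
    ]

directory = [
    {"name": "cell-a", "cpus": 64, "memory_gb": 32,
     "services": ["oltp", "olap"]},
    {"name": "cell-b", "cpus": 8, "memory_gb": 4,
     "services": ["web"]},
]
print(find_resources(directory, min_cpus=16, services=["olap"]))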
The
virtually unlimited computing power and distributed nature of Computing
Fabrics will also permit applications that aren't possible today.
Data warehouse applications that access online transaction processing
information will finally be possible. And Computing Fabrics architectures,
which speak interoperability as a first language, will enable tighter
integration between supply chain partners as Fabrics span enterprise
boundaries. Virtual corporations will emerge as sufficient amounts
of inexpensive processing power become available for collaboration
on tasks such as advanced design, materials science, and genetics
and drug research.
It
will also be easier for corporations to cozy up to consumers who
have the computational power of the Fabric at their command and
are no longer limited by the capabilities of their isolated home
PCs.
The
swells that will eventually become the Computing Fabrics wave are
just beginning to form. But large organizations, technology vendors
and forward-thinking companies should immediately begin to consider
Computing Fabrics in their analyses and planning. The choices companies
make today on partnerships, system development methodologies and
standards, and even building and campus wiring, will have a substantial
impact on their ability to exploit Fabrics and reap the rewards
of this new era of computing.
Copyright
(c) 1998 Ziff-Davis Inc. All Rights Reserved.
Erick Von Schweber is Chief Science Officer for Infomaniacs, a think
tank in Sedona, Ariz., specializing in technology convergence. Linda
Von Schweber is Chief Creative Officer for Infomaniacs. They can be
contacted at thinktank@infomaniacs.com or www.infomaniacs.com.