A Wakeup Call for Supercomputing


Janusz S. Kowalik
John Lixvar


America's supercomputing establishment needs to reinvent itself if it is to maintain relevance in today's economic environment. Now that we have your attention, let us explain the basis of this somewhat overstated thesis.

High performance computing technology is spreading rapidly beyond the confines of its original scientific/engineering domain. Parallel and distributed systems of enormous size are increasingly being deployed in support of commercial, enterprise-wide computing applications; yet if this trend is to be sustained, some fundamental improvements in supercomputing's support structure are needed.

In many large industrial and commercial organizations, enterprise computing has become the indispensable fabric underlying the corporate information infrastructure. These systems typically support many thousands of users and manage vast amounts of data, often geographically dispersed and interconnected by networks of considerable complexity. Furthermore, these systems are expected to meet stringent performance expectations in an extremely cost-competitive manner.

HPC technology is central to meeting these challenges, but it is not enough. The killer application currently driving the deployment of high performance computing in the world of manufacturing is product data management (PDM), a business solution that endeavors to provide seamless access to product data throughout its full life cycle. The scope of this activity includes everything from digital product definition to manufacturing assembly process planning to after-delivery support. These next-generation PDM systems are data-rich environments that manage the core business activities of an entire enterprise; they enable innovation, direct the design process, and support product production. Unfortunately, the successful realization of this technical tour de force is proving extremely difficult, and it is disappointing, given the critical nature of these applications, that there are no well-established methodologies for designing enterprise systems.

The HPC research community has performed admirably in the scientific/engineering arena but has devoted precious little energy to facilitating the advance of this technology into the business world. HPC publications, which frequently devote ample page space to ever more sophisticated algorithms in numerical analysis, are largely silent on enterprise system scalability and performance, data distribution policies, and effective methods for capacity analysis of heterogeneous, multi-tier systems. The Gordon Bell price/performance prize recognizes individual achievement in number crunching; why is there no equivalent for outstanding work in OLTP/DSS design? The imbalance in research attention between these two application areas was highlighted again at this year's SC99 supercomputing conference in Portland, where not a single session was devoted to the use of HPC in the business environment.
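The capacity-analysis methods whose absence we lament need not be exotic. As a purely illustrative sketch, assuming a hypothetical two-tier configuration (an application server in front of a database) and textbook M/M/1 queueing approximations, the short Python fragment below estimates end-to-end response time as offered load grows. Every service-time and load figure is an invented assumption, not a measurement, and the model is ours, not a published methodology.

# A minimal sketch of the kind of capacity analysis discussed above:
# an open queueing model of a hypothetical two-tier enterprise system,
# using the standard utilization law (U = X * S) and the M/M/1
# response-time approximation (R = S / (1 - U)).
# All parameter values below are illustrative assumptions.

def tier_response_time(service_time_s: float, throughput_tps: float) -> float:
    """M/M/1 approximation: R = S / (1 - U), where U = X * S."""
    utilization = throughput_tps * service_time_s
    if utilization >= 1.0:
        raise ValueError(f"tier saturated (utilization = {utilization:.2f})")
    return service_time_s / (1.0 - utilization)

# Assumed per-transaction service demands (seconds) at each tier.
tiers = {"app server": 0.020, "database": 0.035}

# Sweep offered load (transactions/second) and report end-to-end
# response time, exposing the knee of the curve that a capacity
# plan must stay safely below.
for load_tps in (5, 10, 20, 25, 28, 30):
    try:
        total = sum(tier_response_time(s, load_tps) for s in tiers.values())
        print(f"{load_tps:3d} tps -> {total * 1000:7.1f} ms end-to-end")
    except ValueError as err:
        print(f"{load_tps:3d} tps -> {err}")

A real methodology would, of course, also have to account for the heterogeneity, data distribution, and configuration tradeoffs noted above; the point here is only the flavor of the missing tooling.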

One can only speculate on the reasons for this blind spot: HPC scientists are unfamiliar with enterprise systems; the HPC community is too deeply rooted in traditional technical problems; the scope of HPC is broad enough already without adding considerations such as database locality optimizations, network/server configuration tradeoffs, and other arcana generally outside the HPC mainstream. The unfortunate consequence of this developmental shortfall is that the job of building scalable, cost-effective, business-oriented HPC systems has fallen into the hands of the technically ill-equipped and unprepared.

In the better world of the new millennium, we expect to have the tools and techniques that will allow American enterprise to effectively employ the best technology supercomputing has to offer.

Janusz Kowalik & John Lixvar,
The Boeing Company
