SC2000 EXHIBITOR FORUM

The Exhibitor Forum provides industry exhibitors the opportunity to give presentations that showcase recent breakthroughs in research and development, and to discuss new application areas, new directions, and new technologies in areas related to high performance networking and computing (HPNC).

Exhibitor Forum presentations will be held in room D268.

Questions: forum@sc2000.org


EXHIBITOR FORUM CHAIR
JOE McCAFFREY, MISSISSIPPI STATE UNIVERSITY


TUESDAY NOVEMBER 7

Implementing Intelligent Infrastructures Using SAN Appliances
Robert Woolery, Vice President of Corporate Development and Strategic Planning, DataDirect Networks, Inc.
Tuesday 10:00-10:30am

With massive bandwidth and storage capacities readily available, and servers, desktops and supercomputers linked like never before, new applications and rich media delivery methods promise to bring real world data to individuals and groups worldwide.

This presentation will explore the use of cutting-edge, intelligent network technology that enables lower-latency, high-speed data access within a new extended network architecture-providing high performance computing environments with high Quality of Service (QoS) and a better user experience. Extensive discussion will be devoted to creating flexibility and scalability in these new architectures with the "data aware" network technology deemed necessary for building next-generation infrastructures. The discussion will detail how data-aware technology enables scalable, lower-latency data access through an intelligent network infrastructure device developed to manage and transport vast quantities of text, numeric data, audio, and video images at very high speeds between multiple servers and clustered workstations and shared storage resources.

Tarantella - Web-Enable Any Application on Any Platform and Access It from Anywhere
Ron Green, Unicomp, Inc.
Tuesday 10:30-11:00am

Tarantella lets you use the web to deliver any application immediately to any client-without any client software installation. This revolutionary solution ensures users have access to the latest applications and data, wherever they are on the 'net. In addition, the innovative Tarantella Adaptive Internet Protocol (AIP) ensures optimal network performance, even over low-bandwidth connections. It offers a powerful and cost-effective method for deployment over the web using standard browser technology. With Tarantella, your Windows users can access your Unix or Linux applications without using X Window managers. Your Linux and Unix desktop users can remotely run any Windows application as if they were sitting in front of the PC. All of this with no rewrites and no client installs. Come see the future of web-enabled application delivery!

We are a Caldera Business partner, providing solutions, training, and consultation in the field of high performance computing.

Supercomputing - High Bandwidth and High Performance
David Parks, NEC - HNSX Supercomputers
Tuesday 11:00-11:30am

Industrial users around the world continue to rely on the performance provided by parallel vector supercomputers for their mission critical applications. NEC, as the leading provider of parallel vector systems in the world today, continues to improve the technologies and make them more cost effective. The session will discuss the differences between commodity scalar processors and vector processors, how that affects performance on various applications, and how both scalar and vector processors are evolving through enabling technologies.
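To make the scalar/vector contrast concrete, consider a loop of the following shape (a generic C sketch, not NEC code): a vector unit can issue it as a handful of pipelined vector instructions over long vector registers, while a commodity scalar processor steps through one element at a time and is at the mercy of its cache hierarchy.

    /* Generic sketch of a vector-friendly loop; not NEC code.
       Unit stride and no loop-carried dependences let a vector
       processor execute the whole loop as a few pipelined vector
       instructions, where a scalar CPU iterates element by element. */
    #include <stddef.h>

    void daxpy(size_t n, double a, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }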

Liberating Design Teams Via the Internet
J. Paul Grayson, Alibre, Inc.
Tuesday 11:30-noon

The appeal of the Internet is based on the lower requirement for hard assets, a direct pipeline to customers, and a new level of speed and operational efficiency. For the mechanical design and manufacturing industries in particular, the Internet represents the next wave of value innovation through a new business model - the Application Service Provider (ASP).

The ASP business model for the MCAD market is compelling because it removes the barriers associated with traditional mechanical design products. The high cost of ownership-software, hardware, IT support and system administration-is significantly reduced. The distributed computing power and storage capacity made available via the Internet are potentially unlimited. The Internet also enables real-time team collaboration among best-in-class experts, suppliers, contractors, engineers and others-allowing companies to get products to market faster and cheaper, with reduced development time, fewer design changes and less travel expense.

Open Source Community
John H. Terpstra, TurboLinux
Tuesday 12:30-1:00pm

The Open Source Software community is turning much of the information technology world upside down. This presentation identifies the key developments that will impact the supercomputing environment and looks at who and what is driving the key projects, as well as at the companies trying to gain maximum benefit from these initiatives.

The spectrum of concerns and opportunities is elucidated, and each is colorfully articulated from both the perspective of the Open Source community and that of the end user. Business and organizational users need to see how the Open Source community addresses the process of problem identification and resolution.

Building a 30 TeraOPS Supercomputing System
Richard Kaufmann, Compaq Computer Corporation
Tuesday 1:00-1:30pm

On August 22, 2000, the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA) selected Compaq to build the world's fastest and most powerful supercomputer, a 30+ TeraOPS system code-named 'Q'-the latest advancement in the Accelerated Strategic Computing Initiative (ASCI). This presentation reviews the pronounced technical challenges in developing and building a system that meets the ASCI requirements for a 30 TeraOPS code development, execution and operational environment.

Objectivity/DB - The High Performance Database Engine
Leon Guzenda, CTO, Objectivity
Tuesday 1:30-2:00pm

Objectivity/DB is the leading high performance, distributed, and scalable database engine with unrivaled support for mixed language development and mixed hardware environments. Objectivity's distributed architecture provides fault tolerance and data replication, boosts developer productivity and shortens development time. Objectivity provides the ideal platform for mission critical applications requiring continuous performance and adaptability to future technologies.

Fujitsu's High Performance and Highly Scalable Supercomputers and Servers
Kenichi Miura, Fujitsu
Tuesday 2:00-2:30pm

This presentation describes the Fujitsu 64-bit architecture VPP5000 supercomputers and the SPARC64-based PRIMEPOWER servers. It will cover the major features of the VPP5000, such as the vector and scalar units and the crossbar network, the performance of various commercially available third-party application software on the VPP5000, and how the VPP5000 is used by customers worldwide.

The presentation will also cover the features of the PRIMEPOWER HPC servers, such as Solaris compatibility, as well as enhancements such as the Cross-bar Switch, the Partition Feature, and Fault Tolerant Technologies.

Concept-to-Production: End-to-End High Performance Network Monitoring and Performance Analysis Solutions
Erik Plesset, Spirent Communications
Tuesday 2:30-3:00pm

Spirent Communications' test solutions are enabling the development and deployment of new Internet and communications technologies worldwide. A comprehensive range of tests addresses the functionality, reliability, performance, and standards conformance of networks and network devices. These abilities are exemplified by their use in the SC2000 Demonstration Network, and the presentation will explain how the lessons learned there can be applied to any high performance network management strategy.

High Bandwidth Supercomputers of Today and Tomorrow
Dave Kiefer & Burton Smith, Cray Inc.
Tuesday 3:00-3:30pm

Cray Inc. designs and builds supercomputers that have high global bandwidth, making them particularly well suited for communications-intensive problems in science and engineering. Cray supercomputers have two essential ingredients: custom latency tolerant processing hardware to enable global bandwidth, and high performance interconnection networks to actually deliver it. Cray will continue to innovate in high bandwidth system architecture, and this talk will describe some new ideas and product directions in both scalar and vector supercomputers.

Cost Effective High-Performance Computing on Clusters of Commodity Processors
Robert D. Bjornson, Scientific Computing Associates, Inc.
Tuesday 3:30-4:00pm

High-performance computing on cost-effective clusters of commodity processors is of growing importance in many industries. Scientific Computing Associates, Inc. (SCA), a pioneer in cluster computing, is partnering with leading hardware and software vendors to help industry leverage this technology. This presentation describes several recent SCA initiatives that make clusters more accessible to the broad user community.

Scientific Computing Associates, Inc. is the foremost commercial developer of cluster software tools, middleware, and applications. Products from Gaussian (the ab initio quantum chemistry program) to Crystal Ball Turbo (an Excel-based parallel Monte Carlo simulation tool) depend on SCA's technology. SCA's focus is on tools that help in the development of cluster-enabled applications.

Collaborative Visualization Solutions for HPC Involving Reality Center and Immersive Visualization Products
Linda Jacobson, SGI
Tuesday 4:00-4:30pm


WEDNESDAY NOVEMBER 8

The Challenge of Next Generation High Performance Networks
John Freisinger, Essential Communications
Wednesday 10:00-10:30am

John will discuss what Essential has learned in providing gigabit-per-second technologies to the industry and the obvious hurdles that face the next generation of multi-gigabit networking technologies. The challenges include CPU utilization, buffering to enable multiple speeds within the same network, cable plant limitations, and bridging to legacy technologies. Solutions discussed in the talk include TCP/IP offload, interrupt coalescing, OS bypass, and multiprotocol backboning.

"Teaming" Technology - Staggering Performance Power Through Simplicity
Lynn West, CTO, Times N Systems
Wednesday 10:30-11:00am

TNS has designed an innovative approach to harnessing the collective computational power of industry-standard components, providing a scalable, expandable infrastructure to support multiple parallel and non-parallel applications. This technology is especially well suited for today's fast-paced Internet environment. By unifying-or "Teaming"-the disk, memory, and processors of individual computers via a unique means of sharing memory, we deliver system-wide the speed and power necessary to alleviate network bottlenecks and system latency problems. The ability to span local disk and memory resources allows the construction of multi-terabyte hard drives and multi-gigabyte RAM drives.

The TNS technology can be integrated into specific existing compute infrastructures to maximize performance efficiency and to balance the scaling of processing power, disk I/O bandwidth, and memory bandwidth for up to 128 processors.

New Capabilities and Plans for High Performance Storage System
Ramin Nosrat (tentative), High Performance Storage System (HPSS)
Wednesday 11:00-11:30am

Legion - An Applications Perspective
Andrew Grimshaw, Applied MetaComputing Inc.
Wednesday 11:30-noon

Legion is a reflective metasystem project at the University of Virginia designed to provide users with a transparent interface to resources in a wide-area system, both at the programming interface level and at the user level. Legion addresses issues such as parallelism, fault tolerance, security, autonomy, heterogeneity, resource management and access transparency in a multi-language environment.

While fully supporting existing codes written in MPI and PVM, Legion provides features and services that allow users to take advantage of much larger, more complex resource pools. With Legion, for example, a user can run a computation on a supercomputer at a national center while dynamically visualizing the results on a local machine. As another example, Legion makes it trivial to schedule and run a large parameter space study on several workstation farms simultaneously. Legion permits computational scientists to use cycles wherever they are, allowing larger jobs to run in shorter times through higher degrees of parallelization.

These capabilities also make Legion attractive to administrators looking for ways to increase and simplify the use of shared high-performance machines. The Legion implementation emphasizes extensibility, and multiple policies for resource use can be embedded in a single Legion system that spans multiple resources or even administrative domains.

Deep Blue Update
David Turek, IBM Vice President, Deep Computing, Web Server Division
Wednesday 12:30-1:00pm

IBM's Deep Blue technology has spawned a number of advances in supercomputing. This presentation will highlight the emerging areas of research being enabled by this technology, such as the life sciences, and will discuss the emerging Linux phenomenon.

High Performance Alpha Clusters
Tom Morris, API
Wednesday 1:00-1:30pm

The marriage of cluster computing paradigms with the performance of the Alpha microprocessor is allowing researchers to address problems on a grand scale without investing in expensive specialized systems. Achieving optimal price/performance in computer system design requires balance among the various components of the system as well as a good fit between the application demands and the system design. We'll look at the measured performance characteristics of different elements such as system interconnects, processor speeds, and L2 cache speeds and sizes to show how they impact real-world application performance. We'll also look at future developments in processors, packaging, and interconnects to understand how they affect delivered application performance.

Linux on the Bleeding Edge: CPlant and FSL
Greg Lindahl, High Performance Technologies, Inc.
Wednesday 1:30-2:00pm

Linux is used today in many bleeding-edge applications, such as Sandia's CPlant cluster (2,000 CPUs total), a traditional supercomputer installation with extremely high reliability at the Forecast Systems Lab in Boulder, CO, and commercial clusters at Google (5,000 CPUs) and Incyte (3,000 CPUs). What makes Linux adequate-or superior-for such systems?

Remote Management of a Storage Network
Derek Gamradt, StorNet, Inc.
Wednesday 2:00-2:30pm

Many companies today are faced with managing distributed storage infrastructures across multiple locations. However, a shortage of storage-centric systems administrators has limited companies' ability to manage their data storage networks effectively.

This presentation will provide an overview of the challenges involved in managing and monitoring a storage management infrastructure remotely. We will discuss the various elements of a storage management solution, including centralized disk storage, highly available data applications, disaster protection and storage networking. The presentation will compare the various methods available for allowing a data center to keep up with the exploding amount of data while centralizing the storage management function. Its central theme is the importance of defining a storage management availability goal and developing a long-range plan for achieving it. StorNet has been designing enterprise storage applications since 1990 and has helped hundreds of organizations deploy and manage mission critical storage management strategies. StorNet is an independent storage solutions provider and can develop a storage management plan without the "one-size-fits-all" approach common to systems manufacturers.

The Art of Internet Computing
Dr. Steve Armentrout, CEO, Parabon
Wednesday 2:30-3:00pm

The new field of Internet computing harnesses the excess capacity of Internet-connected computers, and makes this massive resource available for scientific research and commercial R&D. In addition to unprecedented power and superior accessibility, this new computational model offers relief from the fixed capacity, rapid obsolescence, and rising maintenance costs of traditional high-performance computing.

Dr. Armentrout, CEO of Parabon Computation, will describe the core technological infrastructure required to create a stable and secure Internet computing platform suitable for general use. He will also discuss the realities of computing with unreliable resources and doing so amidst varied security threats.

Dr. Armentrout will conclude with a discussion of the problem domains and solution strategies that are well suited for Internet computing. Illustrative case studies that include benchmarks will be drawn from the biotech, financial, and film rendering industries.

Delivering a New World of Flexibility for HPC
Ben Passarelli, SGI
Wednesday 3:00-3:30pm

With the introduction of SGI(tm) Origin(tm) 3000 series servers and SGI(tm) Onyx(r) 3000 series graphics systems, technical and creative professionals are given the same modularity, freedom of choice, and ease of upgrade that consumers have appreciated and benefited from in such areas as home-entertainment systems. Build-to-suit HPC is now possible with systems from SGI.

NUMAflex(tm) technology delivers breakthrough capability and flexibility by using modular bricks to add specialized capacities in graphics, central processing, storage, PCI expansion or I/O capacity. Whether SSI or cluster, systems are easily built, modified or upgraded to fit customer requirements and changes over time.

NUMAflex is the suite of benefits provided by SGI's innovative and flexible implementation of NUMA architecture for multiprocessor computers. SGI(tm) NUMA (nonuniform memory access) exceeds the capabilities of SMP (symmetric multiprocessing) architecture and delivers superior scalability and performance. Leveraging the company's long history and experience in leading-edge computing, SGI is the only computer manufacturer capable of offering this robust third generation of NUMA, paving the way for a new world of modular computing.

Real-World Performance Analysis
Werner Krotz-Vogel, Pallas
Wednesday 3:30-4:00pm

To be useful "in the real world," performance analysis tools must meet stringent requirements: scalability in processes and time, support for shared-memory clusters, ease of use and partially automatic operation. The Pallas tools (Vampir and Vampirtrace) are continually enhanced to meet these requirements, with the latest developments including support for ASCI-scale scalability and for hybrid OpenMP/MPI applications.
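To see why such tools matter, consider the manual alternative (a generic MPI sketch, not Pallas code): hand-instrumenting with MPI_Wtime yields one raw number per process, which becomes unmanageable at ASCI scale, whereas trace-based tools like Vampir reconstruct a global timeline automatically.

    /* Generic hand-instrumentation sketch; not Pallas code.
       Each process times one collective and prints its own result --
       with thousands of processes this raw output is unusable,
       which is exactly the gap trace-based tools fill. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double in = rank, out;
        double t0 = MPI_Wtime();
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double elapsed = MPI_Wtime() - t0;

        printf("rank %d: allreduce took %g s\n", rank, elapsed);
        MPI_Finalize();
        return 0;
    }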

InfiniBand Management Solutions
VIEO (formerly Power Micro Research)
Wednesday 4:00-4:30pm

Wide-Area Parallel Computing: A Production-Quality Solution with LSF
Ian Lumb, Integration Architect, Platform Computing Corporation
Wednesday 4:30-5:00 pm

Scientific and engineering applications are driving the need to harness (potentially) thousands of processors in their computations. By effectively exploiting the distributed-memory parallel computation paradigm offered by the Message Passing Interface (MPI), in tandem with leading-edge computing hardware and interconnect technologies, scientists and engineers are making progress on problems that are 'Grand Challenge' in their scope. However, massive scalability to thousands of processors has traditionally been limited to the resources available at a single site. Although resources might be available at other locations, the compute infrastructure has not allowed such geographically distributed resources to be seamlessly incorporated. For over two years, Platform Computing has made available multi-site (via LSF MultiCluster) and MPI-integrated (via LSF Parallel) layered, but independent, components of its distributed resource management (DRM) suite. Production-quality parallel computation involving multiple, geographically distributed sites is highlighted through a seminal integration that allows MPI jobs to span multiple LSF clusters. Illustrative examples, drawn from customer cases, will showcase the present-day viability of this technological convergence.


THURSDAY NOVEMBER 9

Use of Juniper Routers in Research and Education Networks
John Jamison, Juniper Networks
Thursday 10:00-10:30am

Juniper routers are in use in over 20 high performance research networks, supercomputer centers, universities and research labs. This presentation will describe how these networks and institutions use Juniper routers to support traffic engineering, high data rate applications (both unicast and multicast), and security-enhancing line-rate packet filtering. It will also include a discussion of how future Juniper software upgrades will enable users to implement explicit routing, solving the "Fish Problem" that has been the bane of R&E networks since the days of the NSFnet.

TurboLinux Cluster Cockpit
Peter H. Beckman, TurboLinux
Thursday 10:30-11:00am

Over the last decade, clusters have grown from handfuls of nodes to thousands. For example, one of the world's most popular Internet search engines is a Linux cluster with thousands of nodes. While the complexity of managing these ever-larger clusters has been steadily increasing, the software tools and practices have not kept pace. Part of the problem is that supercomputers are almost always compared on the basis of theoretical peak megaflops and price, not manageability. TurboLinux has created a new and unique software framework for managing clusters. The framework is based on component technology, with modules that can be loaded or unloaded on the fly and standardized interfaces for basic module functionality. Furthermore, XML technology is used to describe many of the components and data. This session will present the component architecture of Cluster Cockpit as well as the issues involved in managing Linux clusters.
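As a rough illustration of the load-and-unload-on-the-fly idea (a hypothetical interface sketched here, not Cluster Cockpit's actual API), a Linux framework can bind such modules at run time through the standard dynamic loader:

    /* Hypothetical module interface for illustration only;
       not the actual Cluster Cockpit API. Modules are built as
       shared objects exporting a well-known "ops" symbol. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef struct {
        int  (*init)(void);      /* run when the module is loaded     */
        void (*shutdown)(void);  /* run before the module is unloaded */
    } module_ops;

    int main(void)
    {
        void *handle = dlopen("./monitor_module.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "load failed: %s\n", dlerror());
            return 1;
        }
        module_ops *ops = (module_ops *)dlsym(handle, "ops");
        if (ops && ops->init() == 0) {
            /* ... framework dispatches to the live module ... */
            ops->shutdown();
        }
        dlclose(handle);  /* unload without restarting the framework */
        return 0;
    }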

Enabling HPC Research in DoD
Tim Campbell, Logicon
Thursday 11:00-11:30am

Advances in scalable multi-resolution algorithms, coupled with access to HPC platforms, are enabling DoD users to model problems that were previously impractical or unattainable. Logicon ISS provides the DoD with HPC expertise and is making substantial contributions to the advance of DoD HPC research. This presentation will focus on performance analysis of the new 2 TeraFlop IBM SP (presently ranked number 4 in the Top 500 list of supercomputers) at the Naval Oceanographic Office DoD Major Shared Resource Center. In particular, record-breaking results will be presented for important materials modeling applications such as quantum and classical molecular dynamics. These simulations, which involve up to 8 billion atoms on 1280 processors, are representative of a new state of the art in materials modeling. Results will also be presented on recent successes in porting important ocean modeling applications to parallel architectures.

How Clusters Fill a Vendor Void
David Rhoades, High Performance Technologies, Inc.
Thursday 11:30-noon

Most large vendor offerings require programmers to redesign their code to take advantage of the computing architecture. Straightforward MPI is not enough, as the SMP on the 4-, 8-, or 16-processor node requires explicit programming techniques to be fully utilized. This talk will address the various vendor approaches to parallel computing and explore why many researchers are responding by building their own clustered systems. Among the issues we will address: Are clusters of single- or dual-processor nodes more efficient and cost-effective? What changes have occurred in clustered systems over the past year, and what changes will we likely see in the next? Will clustered systems exist five years from now? Who should consider a cluster, and who should not?
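As a rough sketch of the explicit techniques alluded to above (generic MPI plus OpenMP, not any particular vendor's recipe), a hybrid code runs one MPI process per node and uses threads to keep every processor on that node busy:

    /* Hedged sketch: one MPI rank per SMP node, OpenMP threads inside it.
       All MPI calls happen outside the parallel region, so plain
       MPI_Init suffices for this illustration. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        /* OpenMP spreads the node's share of work across its processors. */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (1.0 + i);

        /* MPI combines the per-node results across the cluster. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %f\n", global);
        MPI_Finalize();
        return 0;
    }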

Internet Computing: Distributed Computing Leads Third Generation Internet Applications
Jim Madsen, President and CEO, Entropia
Thursday 12:30-1:00pm

Internet and intranet computing are changing the way people think about high-power computing. Entropia's solutions complement existing computing resources by breaking compute ceilings with on-demand scalable power, delivering results in a fraction of the time and at a fraction of the cost normally associated with big iron. Behind the firewall, these solutions leverage the sunk costs in ordinary desktops within companies, and massive compute power can be imported to meet peak demand, much as the electric grid distributes power where needed. This truly "organic to the Internet" application is leading the way in a third generation of peer-to-peer applications. Entropia, with its distributed computing grid running on the Internet for three years now, continues to lead the way in this new space. Entropia will discuss the milestones achieved, the challenges ahead, and a future in which Internet computing is pervasive.

New Compiler Features for High-Performance Computing
Doug Miles, PGI
Thursday 1:00-1:30pm

The Portland Group (PGI) is moving into the 21st century with cutting-edge methods to assist scientific and technical developers with the latest in compiler technology. PGI provides new solutions that take advantage of the latest x86 processor features-32- and 64-bit Streaming SIMD Extensions, cache prefetch instructions, and large cache sizes. In addition, PGI supports parallel applications via full OpenMP capability as well as automatic parallelization. PGI also provides a solution for cluster computing with the Cluster Development Kit (CDK), which allows users to harness the power of networked workstations. Applications can take advantage of both distributed-memory and SMP parallelism on Linux clusters. PGI has bundled its compilers, tools, and a variety of open-source cluster products into a single turn-key package.

This talk will describe how users can take advantage of these new and exciting features offered by PGI.

StorageTek
StorageTek
Thursday 1:30-2:00pm

MAN/LAN/WAN - Blurring Boundaries for Switching and Routing
Earl Ferguson, Chief Technical Officer, Foundry Networks
Thursday 2:00-2:30pm

Gigabit Ethernet, wire-speed switching, and next-generation routing are fueling the latest generation of network technologies. Meanwhile, emerging players promote Layer 4-7 switching as the solution to as-yet-unheard-of Web-based networking problems, and terabit routing companies offer advice on the removal and replacement of older routers connecting to the WAN. Does increasing the speed of access to the Internet via "next generation" Internet routers really solve our problems? What should your concerns be? How much will it cost to implement these solutions? What are the pros and cons of these various technologies? Are they separate, or should they be combined? Will they help or hinder network design? Do specific technologies live only in one portion of a network deployment, or do the newly emerging product solutions cross traditional networking boundaries? This presentation will discuss the various approaches to "next generation" technologies as vendors rush to combine them to solve current and future networking problems.

Optimization and Parallelization with the Intel Compilers and the KAI Tools
Joe H. Wolf, Intel
Thursday 2:30-3:00pm

We will discuss the advanced optimization techniques employed by the Intel(r) C++ and Fortran Compilers V5.0 for the IA-32 architecture, including the Intel(r) Pentium(r) 4 processor, along with the parallelization technology from Kuck and Associates (KAI), an Intel company. Learn how using the Intel compilers' vectorization and parallelization features, including OpenMP*, in conjunction with the KAI tool set gives maximum application performance on a multiprocessor system. Each of the optimization techniques and tools is illustrated with sample code and on-stage demonstrations.
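In the spirit of those samples (a generic sketch, not Intel's actual demo code), a loop with unit stride and no pointer aliasing gives a vectorizing compiler clean SIMD targets, while an OpenMP pragma spreads the rows across processors:

    /* Generic sketch, not Intel's demo code: the inner loop is a
       vectorization candidate (unit stride; restrict rules out
       aliasing), and the pragma parallelizes across the rows. */
    void scale_rows(int n, int m, float *restrict a,
                    const float *restrict s)
    {
        #pragma omp parallel for          /* one chunk of rows per thread */
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)   /* unit stride: vectorizable */
                a[i * m + j] *= s[i];
    }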

Large-Scale Distributed Computing: The Benefits and Advantages of Compute Farms
Rich Edelman, Director of Software Development, Blackstone Technology Group
Thursday 3:00-3:30pm

With companies that require massive amounts of compute power increasingly turning toward the distributed computing model, compute farms are emerging as a practical alternative to supercomputers. While both compute farms and supercomputers require specific skills in programming and set-up, compute farms offer distinct advantages in both design and functionality.

This presentation will detail the benefits and advantages of the compute farm solution, which include access to unlimited compute power on demand and incremental scalability for handling demand in peak-use periods. Compute farms eliminate the need for costly and time-consuming queuing systems and allow companies to concentrate on their core competencies rather than spending time managing their mainframes. Also discussed in depth will be features of the compute farm infrastructure, including service levels, security, monitoring and network performance.

SRC Computers Inc: Background, Status, and Plans
Michael J. Henesey, SRC Computers Inc.
Thursday 3:30-4:00pm

Sun Microsystems
Thursday 4:00-4:30pm