SC2000 PANELS

Panels provide an opportunity for participants to interact with a group of experts in an open question-and-answer environment.

Questions: panels@sc2000.org

PANELS CHAIR
PHILIP PAPADOPOULOS, SAN DIEGO SUPERCOMPUTER CENTER / UNIVERSITY OF CALIFORNIA SAN DIEGO


TUESDAY NOVEMBER 7
10:30 - NOON

Venture Capital: Who Wants to be a Billionaire?

Moderator: Steve Wallach, CenterPoint Venture Partners / Chiaro Networks
Panelists: Matt Blanton, Startech, Jackie Kimzey, Sevin Rosen Funds, Randall West, Compspace

Okay, let's play! For $100, what is an IPO?

A. Immediate Public Outcry
B. Illogical Policy—Ongoing
C. Internal Political Overthrow
D. Initial Public Offering

The right answer is actually E: an Incredible Panel Opportunity to hear four of the best venture capital "life-lines" talk about turning innovative ideas into business success. Key questions to be addressed include: How do you select the right strategic partners? And what are the most common stumbling blocks?


WEDNESDAY NOVEMBER 8
3:30 - 5:00

Is TCP an Adequate Protocol for High Performance Computing Needs?

Moderator: Hilarie Orman, Novell
Panelists: Jamshid Mahdavi, Novell, Volker Sander, Central Institute for Applied Mathematics (ZAM), John von Neumann Institute for Computing (NIC), Wu-chun Feng, Los Alamos National Laboratory and Purdue University, Stuart Bailey, Infoblox, Lawrence Brakmo, Compaq, Deepak Bansal, MIT LCS, Brian L. Tierney, Lawrence Berkeley National Laboratory

TCP/IP is the standard for data transmission in the web era. This fundamental protocol has undergone many changes to continually improve its dynamic range for performance and stability, and TCP itself is essential to the stability of today's wide-area networks. However, the protocol isn't for everyone. For example, high-performance cluster builders often dispense with TCP as too heavyweight, and tuning TCP for high-performance (e.g., Gigabit) WAN connections is very time consuming.

This panel will address how the high performance computing community and the massive consumer Internet-at-large community can converge on a protocol that satisfies all emerging networking needs. Some questions include: Where are the fundamental problems, and can they be overcome? What level of performance can currently be attained, and how difficult is it to tune the network for it?
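As background for the tuning question above: TCP cannot keep a long, fast path full unless its socket buffers hold at least one bandwidth-delay product. A minimal Python sketch, assuming a hypothetical 1 Gb/s path with a 100 ms round-trip time (both numbers are illustrative, not from the panel):

```python
import socket

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the bytes that must be in flight
    to fill a path of the given bandwidth and round-trip time."""
    return int(bandwidth_bps * rtt_s / 8)

# Hypothetical transcontinental Gigabit path: 1 Gb/s, 100 ms RTT.
buf = bdp_bytes(1e9, 0.100)  # 12,500,000 bytes

# Request send/receive buffers of at least one BDP; the OS may
# silently cap the value at its configured maximum.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
```

On such a path this works out to roughly 12.5 MB per direction, far above common OS defaults of tens of kilobytes, which is one reason the tuning is so laborious.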


THURSDAY NOVEMBER 9
3:30 - 5:00

Petaflops Around the Corner: When? How? Is it Meaningful?

Moderator: Neil Pundit, Sandia National Laboratories
Panelists: Marc Snir, IBM Research, Bill Camp, Sandia National Laboratories, Thomas Sterling, JPL/NASA/Caltech, Paul Messina, DOE HQ, Rick Stevens, Argonne National Laboratory, Pete Beckman, Turbolabs

Teraflops-capable machines are being deployed now, and a petaflops-capable computing complex is just around the corner. This panel will address the necessity of computers with this level of power, the competing design issues in attaining a petaflop, and whether applications can actually make good use of such extreme capability by the end of this decade. Expert panelists represent competing design schemes, extreme application drivers, and dissenting opinions about the need for this level of power in this decade.


FRIDAY NOVEMBER 10
8:30 - 10:00

Convergence at the Extremes: Computational Science Meets Networked Sensors

Moderator: David Culler, University of California, Berkeley
Panelists: Deborah Estrin, University of California, Los Angeles, Larry Arnstein, University of Washington, James Demmel, University of California, Berkeley, Paul J. McWhorter, Sandia National Laboratories
URL: www.cs.berkeley.edu/~culler/sc2000-extremes-panel

The SC conference has ridden the technology waves of the "killer micro" and the "killer network" over the past decade. Two new "killer technologies" are emerging that hold the potential to radically change the nature of computational science. Micro-electromechanical systems (MEMS) make it possible to incorporate tiny sensors into all sorts of computational devices. Low-power CMOS radios make it possible to communicate with minute devices deeply embedded in the physical environment. Not only do these technologies fundamentally change the degree of instrumentation possible in experiments; they also provide a unique means of linking computational science and real-time measurement in novel environments. This panel will explore the potential of these emerging technologies, what really is new, and what difference it all makes.

Computational Grids: A Solution Looking for a Problem?

Moderator: Jennifer Schopf, Northwestern University
Panelists: Ian Foster, Argonne National Laboratory, Cherri Pancake, Oregon State University, Marc Snir, IBM Research, Geoffrey Fox, Syracuse University

The computational grid, or simply "The Grid," is the latest buzz in computing. Amid all the hype are real problems and issues that need addressing before the Grid becomes a ubiquitous and transparent service. Questions this panel will address include: Will Grid computing "make it" and be successful? What will the Grid really look like? What is the most important problem still to solve to make it successful? And how do Grids differ from what's already being done in distributed computing, clusters, and OS design?

10:30 - NOON

Open Source: IP in the Internet Era

Moderator: Robert Borchers, NSF
Panelists: Susan Graham, University of California, Berkeley, PITAC member, Richard Gabriel, Sun Microsystems, Todd Needham, Microsoft Research, José Muñoz, U.S. Dept. of Energy

Open source operating systems, tools, and community scientific codes represent a fundamental shift in the ownership and intellectual property of software. Of particular importance is the fact that open source can provide continuity of the high-performance software stack as the underlying hardware technology undergoes dramatic changes. However, open source gives up control of critical intellectual property and has become a two-edged sword, igniting passions in every community that focuses on software. This panel will address the following questions: Can the open source paradigm be applied to other disciplines, and should the term become "open intellectual property"? Can open intellectual property be a component of a profitable business plan? What constitutes truly open source code/intellectual property? And what software needs to be open source to serve the needs of the high-end/scientific computing user community?

Megacomputers

Moderator: Larry Smarr, University of California, San Diego
Panelists: Andrew Chien, Entropia, Ian Foster, Argonne National Laboratory, Thomas Sterling, JPL, David Anderson, United Devices, Andrew Grimshaw, University of Virginia

We are nearing a major discontinuity in the evolution of high-performance computing, one that is creating a third era of supercomputing. The first era, which ended with the Cray 1, sought performance by building the fastest single processor possible. The second era, starting with the Illiac IV and continuing to today's tightly coupled computing complexes (PC Superclusters, MPPs, SMPs, and DSMs), sought performance through tens to thousands of identical processors. The third era couples very large numbers of heterogeneous computers on networks to create a virtual supercomputer. This third era is now going through an important phase change as numerous academic groups and private companies move from the intranet to the Web itself, harnessing tens of thousands to millions of PCs on the Net. Questions to be addressed by the panel include: What is the software architecture model? Can general-purpose computing machines be created in this manner? What classes of applications are most likely to move to this new computing fabric? And what new computer science research frontiers are important in this more "biological," effervescent style of architecture?