SC2000 SCINET

SCINET

A significant part of the technology showcased at each year's SC conference is the SCinet network that supports the conference and makes it (for the duration of the show) one of the most intense networks on the planet.

This year, SCinet 2000 is working with Qwest Communications International, Cisco, Nortel Networks, Juniper, Marconi, Foundry, Extreme, Spirent and others to establish flexible, wide area connections to the show floor. Using Qwest's fiber infrastructure in the Dallas area and Qwest SONET, ATM and IP backbones nationwide, the wide area network will feature multiple OC-48 (2.5 gigabits per second) connections, several OC-12 (622 megabits per second) connections and possibly other connections, using the very latest technology and protocols. The total connectivity between SC2000 and the outside world will be 10.5 to 11 gigabits per second, a new record for the SC conferences! In addition to commodity Internet access, WAN connection links to SCinet 2000 will include:

  • ESnet
  • Abilene
  • HSCC
  • vBNS
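
As a back-of-envelope check of the aggregate figure quoted above, the short sketch below tallies a hypothetical link mix. The circuit counts are illustrative assumptions only, not the final SCinet 2000 configuration.

```python
# Back-of-envelope aggregate WAN bandwidth for an ASSUMED link mix;
# the real SCinet 2000 circuit list may differ.
OC48_GBPS = 2.488   # OC-48 line rate (~2.5 gigabits per second)
OC12_GBPS = 0.622   # OC-12 line rate

links = {"OC-48": (4, OC48_GBPS), "OC-12": (1, OC12_GBPS)}

total = 0.0
for name, (count, rate) in links.items():
    subtotal = count * rate
    total += subtotal
    print(f"{count} x {name} @ {rate} Gbps = {subtotal:.2f} Gbps")
print(f"aggregate: {total:.2f} Gbps")   # ~10.6 Gbps, within the 10.5-11 Gbps range
```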

SCinet plans to deploy and support IPv4, IPv6, ATM, and Packet over SONET connections, Myrinet, Quality of Service demonstrations, and advanced network monitoring. Other types of connections might be possible based on discussions with requestors.

SCinet will install and operate more than 40 miles of fiber-optic cable throughout the conference areas, supporting the types of connections listed below. SCinet 2000 plans to run an all-ST-terminated, all-fiber show floor network that interconnects booths using switched technologies.

  • 100BaseFX, Fast Ethernet
  • 1000BaseSX, Gigabit Ethernet
  • 1000BaseLX, Gigabit Ethernet
  • OC-3c ATM
  • OC-12c ATM
  • OC-48c ATM
  • 1.28 and 2.0 Gigabit Myrinet

In order to support the complex logistics and requirements of an exhibition on the scale of SC2000, SCinet is deploying four overlapping networks within the Dallas Convention Center. A diagram of the network is below.

[Diagram: the SCinet 2000 network]

The four networks are all interconnected but can operate independently of one another. At the lowest level, several days before the show starts, SCinet deploys a commodity Internet network to connect show offices, the conference Education Program, and the show's e-mail facilities. At the next level is a production network provisioned with leading-edge hardware from various vendors; this year it will feature Gigabit Ethernet and OC-48 ATM.

The Network Operations Center (NOC) is also built from scratch just before the show starts. This year, in addition to its traditional functions of supporting the network equipment, providing a help desk, and housing work areas for the network engineers, the NOC will also host a variety of displays and information. Spirent is providing its "SmartBits" technology to monitor aspects of SCinet, and the SCinet "bit-o-meter" will display aggregate network traffic. Specific applications and events will be monitored throughout the show. SCinet will also use the "Bro" package from LBNL to monitor network traffic for intrusions. Additional displays, such as output from the NetFlow package, will also be viewable.
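
For readers curious what an aggregate-traffic readout involves, here is a minimal sketch that polls Linux interface byte counters and prints a rolling bit rate. It is purely illustrative and is not how the SmartBits-based bit-o-meter or the Bro monitor actually operate.

```python
# Minimal sketch of a "bit-o-meter"-style aggregate traffic readout.
# Purely illustrative: NOT SCinet's actual tooling. It polls Linux
# interface byte counters and prints the combined send+receive bit rate.
import time

def total_bytes(path="/proc/net/dev"):
    total = 0
    with open(path) as f:
        for line in f.readlines()[2:]:                # skip the two header lines
            fields = line.split(":")[1].split()
            total += int(fields[0]) + int(fields[8])  # rx_bytes + tx_bytes
    return total

prev = total_bytes()
for _ in range(5):                                    # sample for five seconds
    time.sleep(1)
    cur = total_bytes()
    print(f"aggregate traffic: {(cur - prev) * 8 / 1e6:.1f} Mbit/s")
    prev = cur
```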

One of the most impressive things about SCinet is that every year, it brings together the best network professionals from across the country to help create the entire network and support the multiple aspects of the program. This year, staff from many organizations are supporting SCinet, including:

Aaronsen Group, Argonne National Laboratory (DOE), Army Research Laboratory (DOD), Avici Systems, Caltech, Cisco Systems, the Dallas Convention Center, the Dallas Convention and Visitor's Bureau, Extreme Networks, Foundry Networks, GST Telecom, Internet2, Juniper Networks, Lawrence Berkeley National Laboratory (DOE), Lawrence Livermore National Laboratory (DOE), Marconi, MCI, MITRE Corporation, National Center for Supercomputing Applications (NSF), Northeast Regional Data Center/University of Florida, Nichols Research/CSC, Nortel Networks, Oak Ridge National Laboratory (DOE), Oregon State University, Pacific Northwest Laboratory (DOE), Qwest Communications, Sandia National Laboratory (DOE), SBC Communications, Spirent, Texas A&M University, U.S. Army Engineer Research and Development Center Major Shared Resource Center (DOD), University Corporation for Advanced Internet Development, University of Tennessee/Knoxville, the Very high performance Backbone Network Services - vBNS (NSF)

These people are the critical ingredient that makes SCinet work. Each year they attempt to top the previous year while giving the best possible service and support to the show. They work year-round planning and implementing SCinet, and then spend more than three weeks in Dallas building and running it. Without the people, all the fiber, all the routers and all the infrastructure would not pass a single bit of information.

For the first time, SCinet will offer attendees and exhibitors tours of the NOC and other equipment. A sign-up sheet for a limited number of tours will be available at the NOC. It is a chance to see the equipment behind the scenes and to talk with some of the vendors and volunteers who put together this most intense network.

Wireless

Working with Cisco Systems, SBC Communications and other vendors, SCinet is creating a large 11 Mbps wireless network on the exhibit floor, in the Education Program area, and in other locations throughout the conference space, possibly covering the entire SC2000 conference area. This wireless network will support the Education Program and the eSCape2000 activities, among other things.

Wireless connectivity is planned for attendees as well. A standards-based 802.11b network with DHCP service will cover the exhibit floor. Attendees whose laptops are equipped with standards-compliant wireless Ethernet cards and an operating system that configures network services as a DHCP client should have immediate connectivity. A selection of cards and operating systems known to work is listed on the SCinet web page, along with links to vendors, drivers, and instructions. SCinet personnel will not be able to provide direct support to attendees who have trouble connecting.

SCinet will not be providing wireless cards for individual systems. SCinet does not support setup, configuration and/or diagnosis of individual systems, but will provide links to information about these subjects at the web site.

The priority areas for wireless coverage are the exhibit areas, the Education Program area, the convention center lobby, meeting rooms and other spaces. If coverage limits are necessary, we will attempt to indicate them with signage. SCinet discourages individuals, groups and exhibitors from bringing their own base stations, because they can conflict with the conference base stations, and SCinet reserves the right to disconnect any base station that interferes with the SCinet network.

For additional information please see www.scinet.sc2000.org/wireless.php3.

Xnet

Xnet is the leading-edge, technology-development showcase segment of SCinet. As exhibitors, users and attendees have become more and more dependent on SCinet to provide robust, high-performance, production-quality networking, it has become increasingly difficult for SCinet to showcase bleeding-edge, potentially fragile technology. At the same time, vendors have sometimes been reluctant to showcase bleeding-edge hardware in SCinet as it became a production network.

Xnet resolves this dichotomy. It provides a context that is, by definition, bleeding-edge and pre-standard, one in which fragility goes with the territory. It thus gives vendors an opportunity to showcase network gear or capabilities that do not yet exist in anyone's off-the-shelf catalog.

This year, Xnet will demonstrate early, pre-production 10-Gigabit Ethernet equipment connecting several show floor booths.

To encourage the demonstration of bandwidth-intensive applications on this unique, once-a-year network, SCinet is sponsoring a showcase of innovative applications that will both stress the capabilities of the SCinet network infrastructure and deliver innovative application value. The most impressive of these will be recognized with special awards at the SC2000 Awards Program session on Thursday, Nov. 9. Specific questions regarding the network infrastructure can be e-mailed to scinet@sc2000.org. Anyone interested in volunteering to help with SCinet 2000, either before or during the conference, should send e-mail to scinet@sc2000.org.

SC2000 NETWORK CHALLENGE

Post-conference update: please see the Awards page for a complete list of all awards presented at SC2000.

The SC conference has long been a place where high-performance computers and high-speed networks meet. At SC2000, the SCinet team is creating a particularly exciting network and has put considerable effort into finding real applications that have been waiting for such a high-powered network to become feasible. SCinet solicited proposals for the demonstration of innovative (especially bandwidth-intensive) applications as a way to challenge the community to show that this unique network can be used for exciting applications. This effort has turned into the SC2000 Network Challenge, sponsored in part by Qwest Communications.

The 11 applications selected will both fully utilize (and some claim overwhelm!) the SCinet network infrastructure and deliver innovative application value. A team of judges will review the performance of the applications at the show and make awards during the award session on Thursday. Qwest has pledged award funding for this Challenge as well. The challenge will be the first step in a sustained network-oriented prize that will be featured at SC2000 and future SC conferences. The list of SC2000 Network Challenge entries appears below. For more information on this award, see www-fp.mcs.anl.gov/sc2000_netchallenge.

A Data Management Infrastructure for Climate Modeling Research
A. Chervenak, C. Kesselman, University of Southern California/Information Sciences Institute, I. Foster, S. Tuecke, W. Allcock, V. Nefedova, D. Quesnel, Argonne National Laboratory, B. Drach, D. Williams, Lawrence Livermore National Laboratory, A. Sim, A. Shoshani, Lawrence Berkeley National Laboratory

We will demonstrate our infrastructure for secure, high-performance data transfer and replication for large-scale climate modeling data sets. Climate modeling data sets typically consist of many files, ranging up to many gigabytes in size. These files may be replicated at various locations. When a climate modeling researcher requests a particular view of the data, we initiate a secure transfer of the relevant files from the data replica that offers the best performance.

Our application includes several components. First, a user specifies at a high level the characteristics of the desired data (for example, precipitation amounts for a certain time period and region). A metadata infrastructure maps between these high-level attributes and file names. Next, we use a replication management infrastructure to find all physical locations of the desired files. We select among these physical locations by consulting performance and information services such as the Network Weather Service and the Globus Metacomputing Directory Service to predict relative performance of transfers from each location. Once a particular physical replica is selected, we initiate secure, high-performance data transfer between the source and destination sites. Finally, the desired data is presented graphically to the user.
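
The replica-selection step can be illustrated with a short sketch. The function and data below are hypothetical stand-ins; the real system queries the Network Weather Service and the Globus information services rather than a hard-coded table.

```python
# Sketch of the replica-selection step described above, with made-up data.
# Real code would consult the Network Weather Service and the Globus MDS;
# here, predicted throughput per site is simply hard-coded.

def select_replica(replicas, predicted_mbps):
    """Pick the physical replica with the best predicted transfer rate."""
    return max(replicas, key=lambda url: predicted_mbps.get(url, 0.0))

replicas = [
    "gsiftp://llnl.example.gov/climate/precip_1990s.nc",   # hypothetical URLs
    "gsiftp://anl.example.gov/climate/precip_1990s.nc",
    "gsiftp://lbl.example.gov/climate/precip_1990s.nc",
]
predicted_mbps = {replicas[0]: 320.0, replicas[1]: 810.0, replicas[2]: 540.0}

best = select_replica(replicas, predicted_mbps)
print("transfer from:", best)   # the ANL copy wins in this toy example
```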

This project is joint work by three groups. Researchers at Lawrence Berkeley National Laboratory created a request manager that calls low-level services and selects among replicas. Scientists at Lawrence Livermore National Laboratory provided the user interface and visualization output for the application, as well as the metadata service that maps between high-level attributes and files. Finally, the Globus project team at USC Information Sciences Institute and Argonne National Laboratory provided basic grid services, including replica management, information services, and secure, efficient data transfer.

Project DataSpace
R. Grossman, G. Reinhart, E. Creel, M. Mazzucco, S. Connelly, A. Turinsky, H. Sivakumar, J. Jamison, University of Illinois at Chicago, B. Hollebeek, P. Proropapas, University of Pennsylvania, C. Rocke, T. Arons, University of California Davis, Y. Guo, S. Hedvall, Imperial College, London, P. Milne, G. Williams, ACSys, Canberra, M. Cornelson, P. Hallstrom, Magnify, Inc.

Project DataSpace will link 14 sites across five continents to demonstrate a new infrastructure to handle (1) remote data access, analysis, and mining and (2) distributed data analysis and mining. Led by researchers at the University of Illinois at Chicago, the team will demonstrate a variety of tools using its new Data Space Transfer Protocol (DSTP) to publish, access, analyze, correlate and manipulate remote and distributed data. The team hopes that the DSTP infrastructure will provide the same ease of use for distributed data analysis and data mining that HTTP provided for viewing remote documents.

We will showcase DSTP Servers, DSTP Clients, and a variety of DSTP applications. The applications also use the Predictive Model Markup Language (PMML), an emerging standard for statistical models. For example, in the Sky Survey application, the DSTP Client downloads stellar object catalog data from a DSTP Server, creates a machine learning model based on PMML, and scores large amounts of data at high rates using high performance DSTP applications we have developed. Last year, we were able to move 250 Mbits/sec (~113 GB/hr) from our lab at the University of Illinois (Chicago) to the SC99 showroom floor in Portland with no network tuning. We expect even higher rates at this year's SC2000 using a new release of our software.
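
The client workflow described above can be sketched as follows. The helper names and data are hypothetical, since the actual DSTP and PMML APIs are not shown in this article; the sketch only mirrors the pattern of fetching remote columns, fitting a model, and scoring records.

```python
# Illustrative sketch only: the DSTP wire protocol and client API are not
# shown here, so fetch_columns() is a hypothetical stand-in that returns
# fake catalog rows instead of contacting a real DSTP Server.
import random

def fetch_columns(server, columns):
    """Stand-in for a DSTP column download; returns simulated catalog rows."""
    random.seed(0)
    return [{c: random.random() for c in columns} for _ in range(1000)]

rows = fetch_columns("dstp://catalog.example.edu", ["magnitude", "redness"])

# The "model" here is just a threshold; a real client would exchange a PMML
# document describing the fitted model before scoring.
threshold = sum(r["redness"] for r in rows) / len(rows)
stars = sum(1 for r in rows if r["redness"] < threshold)
print(f"{stars} of {len(rows)} objects scored as stars (toy model)")
```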

The Network Storm application will demonstrate the flexibility of the DSTP protocol. Locations on three continents will be set up to collect network traffic data. DSTP Servers are already installed at those locations, and anyone using the DSTP protocol can download data and view the state of the network. Ultimately the group plans to build an infrastructure that will predict network storms and allow for improved network traffic management. We will also demonstrate several other high performance DSTP applications.

Intrepid Network Collaborator (INC)
L. Keely, Numerical Aerospace Simulation Division, NASA

A scientific visualization application is instrumented to provide real-time images, which are multicast onto the SCinet network. The application may reside on a host on the conference floor or on a host at the Numerical Aerospace Simulation Division at Ames Research Center. Participants on the conference floor will be able to display these images in browsers through the use of a "thin" client application. Additionally, participants will be able to steer the scientific visualization application through a TCP connection from the client. A prototype system with compressed images and a frame rate of about 8 frames/sec puts approximately 15 megabits/sec onto a 100 Mbps Ethernet.
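
For a sense of scale, the per-frame size implied by the prototype figures above works out as follows (simple arithmetic on the quoted numbers, not a measurement):

```python
# Arithmetic on the figures quoted above (~8 frames/s filling ~15 megabits/s);
# an inference from the prose, not measured data.
frames_per_sec = 8
stream_mbps = 15

bits_per_frame = stream_mbps * 1e6 / frames_per_sec
print(f"~{bits_per_frame / 8 / 1024:.0f} KB per compressed frame")   # ~229 KB
```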

Gigabyte Per Second File Transfer in a Clustered Computing Environment
T. Pratt, J. Naegle, L. Martinez, M. Barnaby, Sandia National Laboratories

Cluster computation and high performance networks have opened the way for network file transfers that compete with the performance of local disk systems. We intend to demonstrate file transfer rates of 1 gigabyte/sec using parallel transfer methods developed by ASCI's DISCOM program.

Terascale scientific simulations create terabytes of data, a data-generation capability that can quickly swamp a local filesystem of any size. As a result, the computer's performance can end up being paced by its ability to transfer data out to other file systems.
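
As a rough sizing exercise, the sketch below estimates how many parallel streams would be needed to reach 1 gigabyte/sec. The per-stream rate is an assumed figure for illustration, not a DISCOM measurement.

```python
# Quick sizing sketch with ASSUMED numbers (not DISCOM measurements): how many
# parallel streams are needed to sustain 1 GB/s at a given per-stream rate?
import math

target_mb_s = 1024        # 1 gigabyte/sec
per_stream_mb_s = 90      # assumed effective rate of one Gigabit Ethernet stream

streams = math.ceil(target_mb_s / per_stream_mb_s)
print(f"~{streams} parallel streams at {per_stream_mb_s} MB/s each")   # ~12
```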

Reservoir Simulation and History Matching - Grid Based Computing and Interactive Dataset Exploration
J. Saltz, University of Maryland/Johns Hopkins University, A. Sussman, University of Maryland, T. Kurc, U. Catalyurik, University of Maryland/Johns Hopkins University, M. Wheeler, S. Bryant, M. Peszynska, University of Texas

Modeling of flow and reactive transport arises in many physical applications, for example environmental quality modeling (atmospheric, bays and estuaries, groundwater, wetlands), reservoir modeling, and medical applications such as blood flow.

Reactive transport can involve twenty or more components, thousands of time steps, hundreds of realizations, and fine grid resolution, leading to terabytes of data. Local access is critical for efficient interpretation of this data. On the other hand, complicated three-dimensional heterogeneous flow fields are computationally intense, and runtime depends strongly on the availability of optimized libraries.

We will demonstrate the use of MetaChaos software to support the flow and the reactive transport components of the UT Austin subsurface modeling code PARSIM. The flow components will run on Blue Horizon at SDSC, and the reactive transport components will run on a local Beowulf cluster at SC2000. Data will be stored in the Active Data Repository (ADR) on the local cluster. As new data is generated and stored in ADR, ADR will support clients that (1) carry out interactive exploration of new datasets, (2) compare features seen in the new dataset with analogous portions of multiple previously computed datasets, and (3) use this data exploration and visualization capability to carry out computational steering of PARSIM.

Telemicroscopy over IPv6
M. Hadida, National Center for Microscopy and Imaging Research, San Diego Supercomputer Center, A. Durand, Computation Center, Osaka University, Y. Kadobayashi, Computation Center, Osaka University, T. Hutton, San Diego Supercomputer Center, B. Fink, Lawrence Berkeley National Laboratory, M. Ellisman, National Center for Microscopy and Imaging Research, UCSD

Our telemicroscopy system makes use of advanced networks to enable researchers at remote sites to interactively control a high power electron microscope located at the University of California, San Diego (UCSD).

At SC2000, we envision showing live remote control of the UCSD microscope from the showroom floor. Further, we plan to collaborate in real-time with our colleagues attending the Society for Neuroscience conference in New Orleans, which coincides with SC2000.

Bandwidth Thirsty Particle Physics Event Collection Analysis and Visualization Using Object Databases and the Globus Grid Middleware
J. Bunn, H. Newman, J. Patton, California Institute of Technology, Caltech, K. Holtman, CERN

We will demonstrate high bandwidth retrieval of particle physics event collections from object databases hosted at Caltech and at CERN. The application we will use is based on Globus middleware and the Objectivity ODBMS, and will rely heavily on optimized network paths between Dallas and both Caltech and CERN.

Visapult - Using High-Speed WANs and Network Data Caches to Enable Remote and Distributed Visualization
W. Bethel, J. Shalf, S. Lau, Visualization Group, Lawrence Berkeley National Laboratory, D. Gunter, J. Lee, B. Tierney, Data Intensive and Distributed Computing Group, Lawrence Berkeley National Laboratory, V. Beckner, Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, J. Brandt, D. Evensky, H. Chen, Networking Security and Research, Sandia National Laboratory, G. Pavel, J. Olsen, B.H. Bodtker, Advanced Communications and Signal Processing Group, Electronics Engineering Technologies Division, Lawrence Livermore National Laboratory

Visapult is a prototype application and framework for remote visualization of large scientific datasets. We approach the technical challenges of tera-scale visualization with a unique architecture that employs high-speed WANs and network data caches for data staging and transmission. High throughput rates are achieved by parallelizing I/O at each stage in the application and by pipelining the visualization process. Visapult consists of two components: a viewer and a back end. The back end is a parallel application that loads large scientific data sets using a domain decomposition and performs software volume rendering on each subdomain, producing an image. The viewer, also a parallel application, implements Image-Based Rendering Assisted Volume Rendering using the imagery produced by the back end.

On the display device, graphics interactivity is effectively decoupled from the latency inherent in network applications. This submission seeks to make two high-water marks: remote visualization of a dataset that exceeds 1 terabyte in size, and application performance that will exceed 1.5 gigabytes per second in sustained network bandwidth consumption.
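
The two-component split can be illustrated with a toy sketch. This is not Visapult code; it simply mimics the structure described above, with a parallel back end rendering one image per subdomain and a viewer-style compositing step.

```python
# Structural sketch of a Visapult-style split (NOT the actual Visapult code):
# a "back end" renders one image per subdomain in parallel, and a "viewer"
# composites the partial images for display.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def render_subdomain(block):
    """Toy volume render: maximum-intensity projection along one axis."""
    return block.max(axis=0)

def composite(images):
    """Toy compositing step standing in for image-based rendering."""
    return np.maximum.reduce(images)

if __name__ == "__main__":
    volume = np.random.rand(64, 256, 256)           # stand-in dataset
    blocks = np.array_split(volume, 4, axis=0)      # domain decomposition
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_images = list(pool.map(render_subdomain, blocks))
    frame = composite(partial_images)
    print("composited frame:", frame.shape)         # (256, 256)
```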

World Wide Metacomputing
M. Mueller, High Performance Computing Center Stuttgart, S. Sanielevici, Pittsburgh Supercomputing Center, A. Breckenridge, Sandia National Laboratories, S. Sekiguchi, Electrotechnical Laboratory, Japan, J. Brooke, Manchester Computing Centre, F-P. Lin, National Center for High-Performance Computing, Taiwan, T. Hirayama, Japanese Atomic Energy Research Institute, Japan

We will show a broad range of applications from the participating partners, among them:

  • URANUS: The CFD code URANUS (Upwind Algorithm for Nonequilibrium Flows of the University of Stuttgart) has been developed to simulate the reentry phase of a reusable space vehicle over a wide altitude and velocity range. Simulation is the only way to understand what exactly happens during the reentry phase of a space vehicle, because in-flight measurement is very limited and expensive. Because of the role of chemistry and chemical reactions near the hot surface of the vehicle, even wind tunnel experiments are not comparable to the flows and reactions that occur during a real reentry.
  • P3T-DSMC: This is a Direct Simulation Monte Carlo code developed to simulate granular matter. Granular materials are ubiquitous in nature, industrial processing and everyday life. Examples range from small-scale particles in dust, cement or flour, through medium-sized plastic granulates, to planetary rings on the astrophysical scale. Similarly broad are the physical phenomena controlling their behavior in transport, storage and processing.

However, despite their importance, continuum and other large-scale modeling approaches still show severe deficiencies, and the understanding of the mesoscopic physics in these systems, as exemplified by fragmentation, dissipative effects, pattern formation and so on, is incomplete, since many theoretical methods otherwise applicable to many-particle systems do not apply.

Often, large-scale computation is the only way to deepen our insight, because typical granular systems consist of millions of particles and most phenomena only become visible over long time scales.

  • Electronic structure simulation: First-principles electronic structure simulation of magnetic alloys, using an order-N algorithm. Running on 1480 T3E-1200 processors, this code was the first to sustain over 1 teraflop and won the 1998 Gordon Bell Prize. See http://www.psc.edu/science/wang.html for details.
  • Jodrell Bank Radio-Telescope De-dispersion code: The search for fast (millisecond) pulsars has important implications for our knowledge of the large-scale structure of our galaxy and for fundamental physics. However, the pulsar signal is often below the level of background noise in the radio-telescope signal. Specialist hardware used to be deployed to lift the pulsar signal above this background, but this can now be done more effectively using fast Fourier transforms on a supercomputer. We intend to show two applications, both very demanding in terms of network bandwidth (a miniature de-dispersion sketch appears after this list).
    1. A search of data previously gathered from the radio-telescope using the metacomputer to correct for the effects of signal dispersion by the interstellar medium.
    2. Using a known dispersion measure, to perform real-time processing of the signal from the telescope.

    The networking challenge is to adjust the signal-processing algorithms to the differing data transfer rates across the networks; the computing challenge is to ensure that those algorithms maintain the balance between processing speed and network bandwidth.
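
The sketch below shows the principle of de-dispersion in miniature, using incoherent channel shifts on made-up data with assumed band parameters. The Jodrell Bank processing itself is FFT-based and runs on a supercomputer; this toy version only illustrates how the frequency-dependent delay is removed.

```python
# Toy incoherent de-dispersion sketch with ASSUMED band parameters and
# random stand-in data; it time-shifts each frequency channel by the
# interstellar dispersion delay and sums the channels.
import numpy as np

K_DM = 4.1488e3          # dispersion constant, MHz^2 pc^-1 cm^3 s

def dedisperse(data, freqs_mhz, dm, dt_s):
    """data: (n_chan, n_samples) filterbank array; returns the shifted sum."""
    f_ref = freqs_mhz.max()
    delays = K_DM * dm * (freqs_mhz**-2 - f_ref**-2)    # seconds, per channel
    shifts = np.round(delays / dt_s).astype(int)
    out = np.zeros(data.shape[1])
    for chan, shift in zip(data, shifts):
        out += np.roll(chan, -shift)                    # undo the delay
    return out

freqs = np.linspace(1400.0, 1600.0, 32)    # assumed 32-channel band, MHz
data = np.random.rand(32, 4096)            # stand-in noise data
profile = dedisperse(data, freqs, dm=50.0, dt_s=1e-3)
print(profile.shape)                       # (4096,)
```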

Gigabit/sec High Definition TV over IP
C. Perkins, L. Ghari, A. Mankin, T. Gibbons, USC Information Sciences Institute, D. Richardson, University of Washington, G. Goncher, Tektronix, Inc.

The application driving this demonstration is transport of uncompressed high definition TV signals in SMPTE-292 format over an IP network. This will be the first time transport of studio-quality uncompressed HDTV signals over IP has been demonstrated. The uncompressed SMPTE-292M media stream comprises an RTP/UDP/IP flow at approximately 1.5 Gbps. Media encoding and packetization are implemented in special-purpose hardware, with the control plane on a standard PC.
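
The ~1.5 Gbps figure is roughly what uncompressed HDTV requires. The arithmetic below assumes a 1080-line, 10-bit 4:2:2 source; the format details are an assumption for illustration, not taken from the demonstration description.

```python
# Rough arithmetic behind the ~1.5 Gbps figure, ASSUMING a 1080-line source
# carried as 10-bit 4:2:2 video at 30 frames per second.
width, height = 1920, 1080
bits_per_pixel = 2 * 10        # 4:2:2 sampling, 10 bits per sample
frames_per_sec = 30

active_gbps = width * height * bits_per_pixel * frames_per_sec / 1e9
print(f"active video: ~{active_gbps:.2f} Gbps")        # ~1.24 Gbps
print("SMPTE-292M line rate (with blanking): 1.485 Gbps")
```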

More information is available from www.east.isi.edu/projects/NMAA/

Scalable High-Resolution Wide Area Collaboration over the Access Grid
Lisa Childers, Terry Disz, Bob Olson, Rick Stevens, Futures Laboratory, Argonne National Laboratory

The Access Grid is the ensemble of resources used to support group-to-group human interaction across the grid. It consists of large-format multimedia displays, presentation and interactive software environments, interfaces to grid middleware, and interfaces to remote visualization environments. The Access Grid is designed to support large-scale distributed meetings, collaborative teamwork sessions, seminars, lectures, tutorials and training. Its design point is small groups (three to 20 people per site) engaged in group-to-group collaboration and communication. Large-format displays integrated with intelligent or active meeting rooms are a central feature of Access Grid nodes, and are the primary new feature extending collaboration technology beyond the desktop. Access Grid nodes are "designed spaces" that explicitly support the high-end audio and visual technology needed to provide a high-quality, compelling and productive user experience.

For this demonstration, select Access Grid nodes will use inexpensive or public domain JPEG encoders and decoders to deliver multiple high-resolution, high-frame-rate video streams in addition to the traditional H.261 video streams used by the Access Grid today. The goal of the demonstration is both to show the future of scalable wide-area collaboration and to stress today's networks with the requirement of high-bandwidth, low-latency delivery of information. The demonstration will show multiple live group-to-group interactions as well as interactions with computer simulations via manipulation within the video windows. A typical Access Grid session may contain several dozen live or computer-generated video streams.

QoS Enabled Audio Teleportation
C. Chafe, CCRMA Stanford University, S. Shalunov, B. Teitelbaum, Internet2, M. Gröger, Deutsche Telekom, R. Roberts, Stanford Networking, S. Wilson, D. Chisolm, R. Leistikow, G. Scavone, CCRMA

Real-time Internet transmission of CD-quality sound will be demonstrated between the SC2000 floor and Stanford University. Uncompressed live audio streams are made possible by network enhancements that support minimal-delay, low-jitter packet delivery over the WAN. Applications include two-way communication (full-bandwidth voice and music) and "surround sound" multi-channel eavesdropping on Stanford spaces. The Internet itself can also be listened to as a vibrating acoustic medium, as if it were a guitar string, using a new technique for generating sound waves on the Internet from real-time echoes (SoundWIRE). This auditory "ping" is used as a tool for evaluating network constancy. Network quality of service (QoS) for this demonstration consists of marking application traffic for Expedited Forwarding (EF), shaping and policing it at the network edge, and sending it over the Stanford University, CalREN2, and Abilene backbones, where EF-marked traffic is preferentially serviced. The QoS network design in this demo reflects the architecture of the Internet2 QBone Premium Service. Heavy congestion is created at one or more points near the edge, and effective protection of application traffic is demonstrated. For comparison purposes, QoS configuration is dynamically enabled and disabled via the Globus GARA tools, and application quality without QoS protection is demonstrated as well.
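
The edge-marking step for Expedited Forwarding can be sketched in a few lines. This is a generic illustration of how an application might set the EF DiffServ codepoint on outgoing UDP packets, not the demonstration's actual code; the shaping, policing and preferential queueing described above happen in the network, not in the application.

```python
# Minimal sketch of marking UDP traffic for Expedited Forwarding (EF).
# The DSCP value for EF is 46; the IP_TOS socket option takes the DSCP
# shifted into the upper six bits of the TOS byte (46 << 2 == 0xB8).
import socket

EF_DSCP = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)

# hypothetical destination for an audio packet stream
sock.sendto(b"\x00" * 512, ("198.51.100.10", 9000))
```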

High-Resolution Visualization Playback on Tiled Displays
M. Papka, R. Stevens, Futures Laboratory, Argonne National Laboratory

High-resolution movies are needed to visualize the time-dependent results of large-scale computer simulations, because the amount of data generated does not allow for real-time visualization: today's graphics systems are incapable of rendering the output in real time. High-resolution output is necessary for scientists to gain understanding and insight from the results of their simulations. It is therefore common for movies to be made in an offline batch mode for playback and review at a later time. Building from this premise and others, we have constructed systems for playback of movie files on high-resolution tiled displays. These displays are built out of numerous individual desktop systems to form an integrated high-resolution display. Today these systems are mostly used for local playback of the movie files.

The goal of the demonstration is the playback of a high-resolution tiled visualization movie over the network. A 6-tile mural will be part of Argonne's booth at the conference; the tiled display will offer a resolution of approximately 3072 x 1536 pixels. A 24-bit uncompressed movie at that resolution would require approximately 500 MB/s for smooth animation. Several bottlenecks currently exist in a system of this sort; the goal here is to demonstrate that next-generation networking infrastructure, such as what is being deployed at SC2000, removes networking as one of them.
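
The bandwidth estimate follows directly from the display geometry; the frame rates below are assumed values used only to show the arithmetic.

```python
# Arithmetic behind the bandwidth estimate above; the frame rates are
# assumed parameters, while the ~500 MB/s figure comes from the text.
width, height, bytes_per_pixel = 3072, 1536, 3      # 24-bit uncompressed
frame_bytes = width * height * bytes_per_pixel      # ~13.5 MB per frame

for fps in (24, 30, 36):
    print(f"{fps} fps -> {frame_bytes * fps / 1e6:.0f} MB/s")
# 36 fps lands at roughly 510 MB/s, in line with the ~500 MB/s quoted above
```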

EXHIBITOR NETWORK REQUESTS

SCinet 2000 is now accepting network requests for the SC2000 conference. If you are exhibiting at SC2000 and require network services, please go to scinet.ca.sandia.gov and submit a network request. More information will be provided at that site.

Questions: scinet@sc2000.org


CONFERENCE VICE CHAIR, INFORMATION ARCHITECTURE
BILL KRAMER, LAWRENCE BERKELEY NATIONAL LABORATORY