At TeraGrid ’08 Conference, UC San Diego’s Smarr Urges University Campuses
to Remove Network Bottlenecks to Supercomputer Users
San Diego, CA and Las Vegas, NV, June 11, 2008 -- The director of the California Institute for Telecommunications and Information Technology (Calit2), a partnership of UC San Diego and UC Irvine, said today that all the pieces are in place for a revolution in the usability of remote high-performance computers to advance science across many disciplines. He urged early-adopter application scientists to drive the creation of end-to-end dedicated lightpaths connecting remote supercomputers to their labs, greatly enhancing their local capability to visually analyze the massive datasets generated by the TeraGrid's terascale-to-petascale computers.
In a featured keynote today at the TeraGrid ’08 Conference being held in Las Vegas this week, Calit2 Director Larry Smarr said “the last ten years have established the state, regional, national, and global optical networks needed for this revolution, but the bottleneck is on the user’s campus.” However, as a result of research funded by the National Science Foundation (NSF), there is now a clear path to removing this last bottleneck.
This opens the possibility for end users of the NSF’s TeraGrid to begin to adopt these optical network technologies.
[Click here to download Smarr's slides from TeraGrid '08 in Las Vegas. The Calit2 director's other recent presentations are also available from the site.]
The TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities from the eleven partner sites around the country.
The OptIPuter project, led by Smarr, is not designed to scale to millions of sites like the normal shared Internet, but to create private networks with much higher levels of data volume, accuracy, and timeliness for a few data-intensive research and education sites. Led by Calit2, the San Diego Supercomputer Center (SDSC), and the University of Illinois at Chicago’s Electronic Visualization Laboratory (EVL), OptIPuter ties together the efforts of researchers from over a dozen campuses.
In his talk, Smarr described how the user-configurable OptIPuter global platform is already being used for research in collaborative work environments, digital cinema, biomedical instrumentation, and marine microbial metagenomics. He issued a challenge to the TeraGrid users to begin to adopt this technology to support remote use of the TeraGrid resources.
“OptIPuter technologies can enhance the ability of scientists to use remote high-performance computing resources from their local labs, particularly applications with persistent large data flows, real-time visualization and collaboration, and remote steering,” Smarr said.
“OptIPortals are the appropriate termination device for 10Gbps lambdas, allowing the end user to choose the right amount of local storage, compute, and graphics capacity needed for their application,” said Smarr. “In addition, the tiled walls provide the scalable pixel real estate necessary to analyze visually the complexity of supercomputing runs.”
The OptIPuter project recommends running OptIPortal clusters on SDSC’s Rocks, an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints, and tiled-display visualization walls. Rocks is developed under an NSF-funded SDCI project led by SDSC’s Philip Papadopoulos, who is also a co-principal investigator on the OptIPuter project. There are currently over 1,300 registered clusters running Rocks, providing a vibrant global open-source software community. The Rocks “Rolls” provide a convenient method of distributing software innovations contributed by community members.
To handle multi-gigabit video streams, OptIPuter researchers at EVL developed the Scalable Adaptive Graphics Environment (SAGE), specialized graphics middleware that supports collaborative scientific visualization environments with potentially hundreds of megapixels of contiguous display resolution. In collaborative scientific visualization, it is crucial to share high-resolution imagery as well as high-definition video among groups of collaborators at local or remote sites. SAGE enables the real-time streaming of extremely high-resolution content, such as ultra-high-resolution 2D and 3D computer graphics from remote rendering and compute clusters and storage devices, as well as high-definition video camera output, to scalable tiled display walls over high-speed networks. SAGE serves as a window manager, allowing users to move, resize, and overlap windows as easily as on standard desktop computers. SAGE also provides standard desktop collaboration tools, such as an image viewer, a video player, and desktop sharing, enabling participants to resize, pan, zoom, and move through the data.
Although scalable visualization displays have been under development for over a decade, first as arrays of projectors, the use of commodity hardware and open-source software in the OptIPortal makes this visualization power affordable to individual researchers. The typical cost of an N-tile wall is about the same as that of N/2 deskside PCs. As a result, adoption of OptIPortals has been rapid over the past two years. Besides the United States, OptIPortals are installed in Australia, Taiwan, China, Japan, Korea, Canada, the UK, the Netherlands, Switzerland, the Czech Republic, and Russia, as well as at a number of corporations.
However, there has been a critical “missing link” blocking widespread adoption of the OptIPuter/OptIPortal metacomputer: few campuses have installed the optical fiber paths needed to connect from the regional optical network campus gateway to the end user. Smarr quoted NSF Director Arden Bement, who three years ago said prophetically: “Those massive conduits [e.g., NLR lambdas] are reduced to two-lane roads at most college and university campuses. Improving cyberinfrastructure will transform the capabilities of campus-based scientists.” To make effective use of the 10Gbps lightpaths from the TeraGrid resources to the campus gateways, Smarr said, “the user’s campus must invest in the equivalent of city ‘data freeway’ systems of switched optical fibers connecting the campus gateway to specific buildings and inside the buildings to the user’s lab.”
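A back-of-envelope calculation shows why a dedicated 10Gbps lightpath matters for data-intensive science. The figures below (a 1-terabyte dataset and a shared 100 Mbps campus link) are illustrative assumptions, not from the release:

```python
# Illustrative comparison: moving a 1 TB dataset over a dedicated
# 10 Gbps lightpath versus a typical shared 100 Mbps campus link.
# Ideal-case arithmetic only; protocol overhead and contention ignored.

def transfer_time_seconds(size_bytes: float, link_bps: float) -> float:
    """Ideal transfer time: bytes -> bits, divided by link rate."""
    return (size_bytes * 8) / link_bps

terabyte = 1e12  # bytes

print(transfer_time_seconds(terabyte, 10e9) / 60)     # ~13.3 minutes on 10 Gbps
print(transfer_time_seconds(terabyte, 100e6) / 3600)  # ~22.2 hours on 100 Mbps
```

The two orders of magnitude between minutes and nearly a full day are the difference between interactive remote analysis and overnight batch transfers.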
A full-scale experiment of this vision is underway at UC San Diego with funds provided by the campus and by an NSF-funded Major Research Instrumentation grant called Quartzite, which has SDSC’s Papadopoulos as PI and Calit2’s Smarr as one of the co-PIs. The Quartzite optical infrastructure is a hybrid packet- and circuit-switched environment interconnecting more than 45 installed 10Gbps channels crisscrossing the UC San Diego campus, with 15 more planned by the end of this year. More than 400 endpoints are connected to Quartzite through access or direct connection to the core switch. Geographically, these endpoints are located in seven different buildings, spanning 17 laboratories. Large projects (CAMERA, CineGrid) use Quartzite directly.
“Quartzite provides the ‘golden spike’ which allows completion of end-to-end 10Gbps lightpaths running from TeraGrid sites to the remote user’s lab,” said Smarr, adding: “Like the OptIPortal, Quartzite was designed using commercial technologies that can be easily installed on any campus.”
With this complete end-to-end OptIPuter now in hand, the stage is set for a wide variety of applications to be developed over this global high performance cyberinfrastructure. “When we were conceptualizing the OptIPuter seven years ago, I always thought that remote supercomputer users would provide the killer applications,” said Smarr, the founding director in 1985 of the National Center for Supercomputing Applications (NCSA). “TeraGrid users are located in research campuses across the nation, but they all share the characteristic that they need to carry out interactive visual analysis of massive datasets generated by a remote supercomputer.”
Smarr showed a number of DoE, NASA, and NSF supercomputer centers that have large tiled projector walls located in the center for visual analysis of these complexities. “The time has come to take that capability out to end users in their labs with local OptIPortals connected to the supercomputer center using the OptIPuter,” said Smarr. “I believe that we will see early adopters step forward in the next year to set up prototypes of this cyberarchitecture.”
To make this OptIPuter distributed analysis more efficient, EVL has developed LambdaRAM, which can prefetch data from disk storage and temporarily store it in the cluster’s Random Access Memory (RAM), masking the substantial disk I/O latency, and then move the data from this “staging” computer to the computer running the simulation. Smarr showed how NASA Goddard Space Flight Center in Maryland uses the OptIPuter and LambdaRAM to optimize the use of NLR for severe storm and hurricane forecasts carried out on the Project Columbia supercomputer at NASA Ames in Mountain View, California, and to zoom and pan interactively through ultra-high-resolution images on local OptIPortals at Goddard. EVL modified LambdaRAM so that it would work seamlessly with legacy applications to locally access large data files generated by the remote supercomputer.
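The prefetch-into-RAM idea behind LambdaRAM can be sketched as a small cache that pulls data blocks ahead of the application's read cursor. This is purely an illustration of the concept; the class, method names, and sizes below are hypothetical and bear no relation to the actual LambdaRAM API:

```python
from collections import OrderedDict

class PrefetchCache:
    """Toy sketch of the prefetching idea behind LambdaRAM: pull data
    blocks into RAM ahead of use so the application rarely waits on
    slow disk or network I/O. (Illustrative only; not LambdaRAM's API.)"""

    def __init__(self, fetch_block, capacity=64, lookahead=4):
        self.fetch_block = fetch_block  # slow fetch, e.g. a remote disk read
        self.capacity = capacity        # max blocks held in RAM
        self.lookahead = lookahead      # blocks prefetched past the cursor
        self.ram = OrderedDict()        # block_id -> data, in LRU order

    def _load(self, block_id):
        if block_id not in self.ram:
            self.ram[block_id] = self.fetch_block(block_id)
            if len(self.ram) > self.capacity:
                self.ram.popitem(last=False)   # evict least-recently used
        else:
            self.ram.move_to_end(block_id)     # refresh LRU position

    def read(self, block_id):
        self._load(block_id)                   # demand fetch (may block)
        for b in range(block_id + 1, block_id + 1 + self.lookahead):
            self._load(b)                      # sequential prefetch
        return self.ram[block_id]

cache = PrefetchCache(fetch_block=lambda b: f"data-{b}", lookahead=2)
cache.read(0)          # fetches block 0, prefetches blocks 1 and 2
print(1 in cache.ram)  # True: the next block is already resident in RAM
```

In a real system the prefetch would run asynchronously so it overlaps with computation; it is shown synchronously here for brevity.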
Finally, Smarr described how, with the integration of high definition and digital cinema video streams, which easily fit inside a 10Gbps lightpath, the OptIPuter architecture is rapidly creating an OptIPlanet Collaboratory in which multiple scientists can analyze a complex dataset while seeing and talking to each other as if they were physically in the same room. Smarr showed photos of “telepresence” sessions in January and May 2008 where this was demonstrated on a global basis between Calit2 at UC San Diego and the 100-megapixel ‘OzIPortal,’ constructed earlier this year at the University of Melbourne in Australia, connected over a transpacific gigabit lightpath on Australia's Academic and Research Network (AARNet). “Petascale problems will require geographically distributed multidisciplinary teams analyzing enormous data sets—a perfect application of the OptIPlanet Collaboratory,” said Smarr.
In conclusion, Smarr said, “After a decade of research carried out at dozens of institutions, we are seeing the OptIPuter take off on a global basis. I look forward to working with many of the TeraGrid ‘08 participants as they become early adopters of this innovative, high performance cyberinfrastructure—rebalancing the local analysis and network connectivity with the awesome growth NSF has made possible in the emerging petascale computers.”
In addition to Smarr and Papadopoulos, co-principal investigators on the OptIPuter initiative include Calit2’s Thomas DeFanti; Jason Leigh, from the University of Illinois at Chicago; and Mark Ellisman, from UC San Diego. The project manager is Maxine Brown, from the University of Illinois at Chicago. Andrew Chien, now Vice President of Research at Intel, served as the system software architect while he was at UCSD.
The NSF-funded TeraGrid links compute resources among 11 partner sites across the U.S. It currently has a combined compute capability approaching one petaflop (10^15 calculations per second), roughly the computing power of about 200,000 typical laptops.
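The laptop comparison can be sanity-checked with the release's own figures; the implied per-laptop rate is an inference, not a number from the release:

```python
# Sanity check of the petaflop-to-laptop comparison using the
# release's figures: 1 petaflop spread across ~200,000 laptops.
petaflop = 1e15      # calculations per second
laptops = 200_000

per_laptop = petaflop / laptops
print(per_laptop)    # 5e9 -> ~5 gigaflops per laptop, plausible for 2008 hardware
```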
For details on the OptIPuter project see www.optiputer.net, including 250 reports and publications.
Presentation by Larry Smarr to TeraGrid '08
California Institute for Telecommunications and Information Technology (Calit2)
San Diego Supercomputer Center (SDSC)
SDSC Rocks Project
Internet2 Dynamic Circuit Network
National Science Foundation
Cross-Platform Cluster Graphics Library
Doug Ramsey, Calit2 Communications, 858 822-5825, or email@example.com
Warren R. Froelich, SDSC Communications, 858 822-3622 or firstname.lastname@example.org
Jan Zverina, SDSC Communications, 858 534-5111 or email@example.com
Laura Wolf, University of Illinois at Chicago, Electronic Visualization Laboratory, 312 996-3002 or firstname.lastname@example.org