iGrid Part II: Remote-Control Demonstrations

8.27.2005 -- This second-in-a-series looking toward iGrid September 26-29 describes planned demonstrations of applications that focus on remote control of high-performance capabilities. (The next issue, to be published this coming week, will focus on high-definition, interactive, and multicast applications.)

Nortel, Canada, is leading a collaboration that will demonstrate efficient and secure large-scale, remote workflow execution linking Xen-based Linux virtual machines. The demo will exploit a high degree of pipelining among the staggered operations of assembling the data to be transferred, verifying its integrity, and halting and transferring execution, using data sets in Amsterdam, Chicago, Ottawa, and San Diego. Application-driven control of lightpaths enables setup, maintenance, and repair according to the space/time requirements of the application, without operator intervention, and with the choice of accessing lightpaths on demand or on a time-of-day reservation basis. The workflow execution will be migrated across all sites, driven by data affinity or to recover from simulated failures. This work has the potential to scale effectively to environments with much greater bandwidth. (For more information, see www.science.uva.nl/research/air; demo NL103.)
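The pipelining idea above can be illustrated with a minimal, hypothetical Python sketch (not the project's actual code): three threads run the assemble, verify, and transfer stages concurrently on a stream of chunks, so while one chunk is being transferred the next is already being checksummed.

```python
import hashlib
import queue
import threading

# Illustrative only: pipeline the three staggered stages (assemble,
# verify, transfer) so each stage works on a different chunk at once.

def assemble(chunks, out_q):
    for chunk in chunks:                 # e.g. VM state pages or files
        out_q.put(chunk)
    out_q.put(None)                      # end-of-stream sentinel

def verify(in_q, out_q):
    while (chunk := in_q.get()) is not None:
        digest = hashlib.sha256(chunk).hexdigest()  # integrity check
        out_q.put((chunk, digest))
    out_q.put(None)

def transfer(in_q, results):
    while (item := in_q.get()) is not None:
        chunk, digest = item
        # A real demo would send the chunk over the lightpath here;
        # this sketch just records what would be sent.
        results.append((len(chunk), digest))

chunks = [b"page-%d" % i for i in range(4)]
q1, q2, results = queue.Queue(2), queue.Queue(2), []
threads = [threading.Thread(target=assemble, args=(chunks, q1)),
           threading.Thread(target=verify, args=(q1, q2)),
           threading.Thread(target=transfer, args=(q2, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # all 4 chunks verified and "transferred"
```

The bounded queues keep the stages in lockstep without letting any one stage run far ahead, which is the essence of the staggered-operation overlap the demo describes.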

The Poznań Supercomputing and Networking Center in Poland will present the Virtual Laboratory, an online experiment executed on an NMR spectrometer in Europe. The system framework allows access to many scientific facilities, high-performance computing, and visualization capabilities. The physical layer of the architecture incorporates existing high-speed networks: GÉANT and national optical networks like PIONIER in Poland. This application can be used in all disciplines focused on experimental data acquisition, such as chemistry (spectrometer), radio astronomy (radio telescope), and medicine (CAT scanner). The types of equipment used in these experiments are very costly and typically available only to large-scale research centers. In this context, this project will demonstrate how such capabilities can be made available to smaller-scale institutes and research labs lacking the resources to purchase such equipment. (See vlab.psnc.pl; demo PL103.)

The Laboratory for Computational Science and Engineering at the University of Minnesota will remotely control and interact with a supercomputer turbulence simulation running on hardware at the Pittsburgh Supercomputing Center. Typically such large-scale simulations run for weeks or months to achieve a single result. This group, by contrast, will show the ability to do short, exploratory runs to enhance scientific productivity. They are counting on the very high bandwidth and low latency of the National LambdaRail as the optical link to make this happen. They will interact with the simulation, selecting variables to examine and dynamically determining the viewpoint and rendering parameters. At the conclusion of the run, they will access data from the complete archived history of the run and stream it back to their lab in Minnesota for viewing on a PowerWall tiled display. (See www.lcse.umn.edu/PPMdemo; demo US112.)

The brain data acquisition project, led by the National Center for Microscopy and Imaging Research at UCSD, will present transparent operation of a scientific experiment on a testbed consisting of globally distributed resources – visualization, storage, computation, and networks. The technology “platform” will consist of dynamic lambda allocation, high-bandwidth protocols for optical networks, the Distributed Virtual Computer, and the Ethereon multi-resolution visualization system running on the Scalable Adaptive Graphics Environment (SAGE). Supported by this technology, a biologist at UCSD will take images of a physical specimen of a rat brain positioned on a high-voltage electron microscope in Osaka, Japan, progressively drilling down in magnification (from 20x to 5000x) from the rat cerebellum to an individual spiny dendrite. This demonstration will showcase persistent infrastructure that efficiently uses resources via multiple network latencies at sites in Korea, Japan, Europe, Taiwan, and the US to support a single scientific experiment. (See www.ncmir.ucsd.edu/; demo US114.)

iGrid Part I: iGrid to Push Edge of Networking Frontier by Demonstrating World’s Most Demanding Applications