2.08.05 -- How do you support a wide range of Internet applications that have different quality-of-service requirements? Implicit in this question, of course, is the need to support the more demanding applications such as downloading video and music files, video conferencing, and online gaming.
UCSD Computer Science and Engineering faculty members Joe Pasquale, Alex Snoeren, and Amin Vahdat, together with their students, have been exploring this question, and recently presented preliminary results in the context of the Center for Networked Systems during a two-day research review for the center’s five industrial partners.
In one project, they considered Internet content distribution as the “application” and the methods of distributing that content as the “service,” using a game-playing metaphor. They took a fixed number of files, each broken into fixed-size “tokens” with unique IDs. The “board” for this game was the network: a graph whose edges are the links between pairs of nodes.
The group wanted to determine the fewest moves needed to deliver all the tokens to all the nodes that wanted them. What were those moves, and in what order should they be made?
Play proceeded by making one or more moves, scoring the resulting state, evaluating the moves that become possible from that new state, and continuing.
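This move-score-evaluate loop can be sketched as a simple greedy search. The code below is an illustrative assumption, not the group’s actual algorithm: a “move” copies one token across one link, and each candidate move is scored by whether it delivers a token a node wants.

```python
# A minimal sketch of the move-score-evaluate loop described above.
# All names and the scoring rule are illustrative assumptions.

def missing_tokens(state, wants):
    """Count (node, token) pairs still waiting to be delivered."""
    return sum(1 for node, tokens in wants.items()
               for t in tokens if t not in state[node])

def possible_moves(state, edges):
    """A move copies one token across one link to a node lacking it."""
    return [(src, dst, t)
            for src, dst in edges
            for t in state[src]
            if t not in state[dst]]

def play(state, wants, edges):
    """Greedily pick moves until every node has the tokens it wants."""
    moves = []
    while missing_tokens(state, wants) > 0:
        # Score each candidate by whether it satisfies a node's want.
        best = None
        for src, dst, t in possible_moves(state, edges):
            gain = 1 if t in wants.get(dst, set()) else 0
            if best is None or gain > best[0]:
                best = (gain, src, dst, t)
        if best is None:
            break  # no move can make progress
        _, src, dst, t = best
        state[dst].add(t)
        moves.append((src, dst, t))
    return moves
```

In a toy system like this, the search has complete information about the board; the interesting questions arise when it does not.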
The catch is that in a contained (read: “toy”) system, every player has complete information: every node knows the global state of the system at any given time, along with all actions taken up to that point.
The more challenging problem is to adapt this approach to a larger system that includes uncertainty and a hostile environment, such as a game adversary that is out to beat you.
The second problem they focused on is alternate-path routing. The key idea here was to exploit diversity in network bandwidth (throughput) to obtain better network performance. Overlay routing has been applied in various ways to the issue of reliability, but, this group wondered, could it be applied to the issue of performance? “How can we create a QoS model out of a mechanism that provides multiple, higher-performance routes?” they asked.
Using a set of “blue-coded” nodes forming part of PlanetLab, including UCSD, Caltech, Carnegie Mellon, MIT, Duke University, the University of Illinois (Urbana-Champaign), the University of Michigan, and nodes in Israel and India, they identified a second set of “red-coded” popular destination sites, including Amazon, AOL, CNN, Google, Microsoft, and Yahoo. Then they built a matrix measuring the time it took to send large files between every combination of PlanetLab node and destination site.
They also experimented with “adding stops along the way,” given the counterintuitive possibility that a detour could speed up delivery.
For example, if the direct link between India and Amazon was slow, could the overall transmission time be shortened by routing through an intermediate node, say UCSD, to take advantage of the much faster link between UCSD and Amazon? It turned out the answer could be yes.
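Choosing such a one-hop relay amounts to a simple lookup over a matrix of measured transfer times. The sketch below assumes such a matrix; the node names and the numbers are invented for illustration, not the study’s actual measurements.

```python
# A sketch of one-hop alternate-path selection over a matrix of
# measured transfer times (in seconds). Numbers are invented.

INF = float('inf')

def best_path(times, src, dst, relays):
    """Compare the direct time against every one-relay detour and
    return (path, time) for the fastest option."""
    best = ([src, dst], times.get((src, dst), INF))
    for r in relays:
        if r in (src, dst):
            continue
        t = times.get((src, r), INF) + times.get((r, dst), INF)
        if t < best[1]:
            best = ([src, r, dst], t)
    return best

# Hypothetical measurements: the India node's direct link to Amazon
# is slow, but relaying through UCSD is faster overall.
times = {
    ('india', 'amazon'): 12.0,
    ('india', 'ucsd'):    4.0,
    ('ucsd', 'amazon'):   2.5,
    ('india', 'mit'):     9.0,
    ('mit', 'amazon'):    5.0,
}
path, t = best_path(times, 'india', 'amazon', ['ucsd', 'mit'])
# → (['india', 'ucsd', 'amazon'], 6.5), beating the 12.0 s direct link
```

Because the measurements change over time, a real system would have to refresh this matrix continually rather than compute the answer once.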
But blue-node-to-red-node traffic has a bursty quality. So even within the U.S., it sometimes turned out to be faster to route traffic from, say, MIT to Microsoft via Caltech.
The conclusion, at this stage of the research, is that such alternate-path routing offers the potential for higher performance and could serve as the basis for an overlay QoS network.
Still, many research questions remain: If a fast path is discovered, does it stay fast and, if so, for how long? How often does such alternate-path routing work? Between any two points, how many alternate paths can be taken? Is the approach scalable? What happens if the network topology changes?
While many questions remain, this research shows that the geometry maxim that “the shortest distance between two points is a straight line” does not necessarily hold true – at least not on the Internet.