The work we do ranges from the large to the tiny: from zesty core routers that keep data flowing through the Internet and go as fast as anything on earth breathing steam, down to little sensors, battery-powered nodes that you scatter around the environment to collect data.
- Dr. Mike MacGregor, Professor of Computing Science, University of Alberta
In today's era of YouTube and World of Warcraft, it's easy to forget that not long ago, the Internet wasn't so good at streaming video or orchestrating games that people all over the world could play at the same time.
But in just a decade, the Internet has upgraded from tortoise to hare. It's much faster and far more capable of handling immense volumes of data and complex tasks like video on demand and immersive multi-player games. But there's still room to improve. And we Internet users are like curlers: we want to go harder.
So does Mike MacGregor, a professor of computing science at the University of Alberta (U of A). After working at Telus in the 1990s and playing a key role in advancing Internet service in Alberta, MacGregor now does a variety of research on improving communication networks.
One such project involves designing routers that will make the Internet work faster and better to meet our ever-growing service demands.
What are routers?
Routers make sure that data travelling through the Internet goes where it's supposed to go, as fast as possible. They're somewhat like intersections for cars, though that's not a perfect analogy, since they communicate with each other and do other things that intersections don't.
"Intersections out in residential areas don't have to cope with a lot of traffic," says MacGregor. "But when you get towards downtown, the traffic gets heavier, and the intersections have to be able to deal with more traffic."
"It's the same thing with routers," he continues. "As you go further in towards the core of the network, the routers get more and more capable. And finally you get to the backbone of the Internet, and these routers are extremely high-speed devices, as capable as any modern high-powered supercomputer.
"The only real difference is that core routers run a limited set of (traffic-related) applications, whereas supercomputers run diverse applications, anything from simulating the life cycles of stars to mapping the human genome."
Leading companies like Cisco Systems and Nortel sell core routers for well over a million dollars, and their designs are fairly rigid.
"If you want to change something significant," MacGregor explains, "you do what's called a forklift upgrade, which is you bring the forklift in, pick up the current box, and put in a new one. And it costs you several zeros to do that."
A different kind of router
Many researchers have become interested in trying to build a different kind of router: one that's cheaper and changeable, software-based rather than hardware-based. MacGregor wanted to try building one too, and he wanted it to go fast. When he met PhD student Qinghua Ye, he knew he had found the man for the job.
"Qinghua had worked at an outfit called Lenovo, and he was on the team that produced a supercomputer that was in the top five worldwide (at the time)," says MacGregor.
"I was confident he could make this machine run fast, and that's exactly what he's done… If you're going to try to go fast, you've got to be capable and experienced and careful, and he's all of those things."
For a software foundation, Ye used Click, an open-source router software project that was started at MIT. He modified the software to extend its functions and then installed it on commodity hardware: off-the-shelf PCs. He connected the PCs together in a cluster architecture, a type of network architecture often used in supercomputing.
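Click's central idea is modularity: a router is assembled from small packet-processing "elements" wired together into a graph, so adding or changing a function means adding or changing an element rather than redesigning the whole box. The sketch below imitates that idea in Python purely for illustration; real Click elements are written in C++ and composed with Click's own configuration language, and the stages here are simplified stand-ins for elements such as Strip and CheckIPHeader.

    # A toy imitation of Click's element-graph idea: a forwarding path
    # assembled from small, replaceable packet-processing stages. This is
    # an illustration of the design style, not Click's actual API.

    def strip_ethernet(packet: bytes) -> bytes:
        """Drop a 14-byte Ethernet header (a stand-in for Click's Strip)."""
        return packet[14:]

    def check_ip_header(packet: bytes) -> bytes:
        """Crude sanity check standing in for Click's CheckIPHeader."""
        if len(packet) < 20:
            raise ValueError("truncated IP header")
        return packet

    def decrement_ttl(packet: bytes) -> bytes:
        """Decrement the TTL byte (offset 8 of an IPv4 header without options)."""
        ttl = packet[8] - 1
        if ttl <= 0:
            raise ValueError("TTL expired")
        return packet[:8] + bytes([ttl]) + packet[9:]

    # The "router" is simply the ordered pipeline of stages.
    PIPELINE = [strip_ethernet, check_ip_header, decrement_ttl]

    def forward(frame: bytes) -> bytes:
        packet = frame
        for stage in PIPELINE:
            packet = stage(packet)
        return packet

    # 14 bytes of fake Ethernet header plus a 20-byte IPv4 header with TTL=64.
    frame = bytes(14) + bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 6]) + bytes(10)
    print(forward(frame)[8])  # prints 63: the TTL has been decremented

Because every stage is an ordinary, replaceable piece of code, extending the router's functions, as Ye did, is a matter of writing and wiring in new elements.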
As a result of all this, Ye's router is fast and flexible, and roughly one-tenth the price of a commercial core router.
"Because the system is based on general-purpose PCs, you can install or change or configure things however you want," explains Ye. "You can even add your own code, because the code for the router is open-source."
Ye's research focus is the scalability of the router: extending its capabilities by adding nodes, or individual PCs. "If you use one PC, it can process maybe 1.5 million packets a second. But if you use four or eight nodes, you can expand the forwarding capability to six or 10 million packets a second."
Scalability is important because as traffic gets heavier, the router can be easily expanded to keep up.
"For example," says Ye, "when a company is quite small, maybe it uses just one node to handle its traffic. That's enough. But as the company grows, it will need more forwarding capabilities from its router. (If the company is using a scalable cluster-based router), all it has to do is add more nodes and connect them together to get higher performance."
Moving forward with cluster-based routers
With features like scalability, affordability, and flexibility, routers like Ye's hold great potential for businesses that can't plunk down a million dollars for a router and that want one they can adapt to their specific needs.
This type of router is also particularly useful to researchers. "If you want to do research on some specific functionality, you can change what you need to test that and then analyze the router's performance," says Ye.
There are still problems to overcome with cluster-based routers, but now that Ye has built one at the U of A, the university is able to do more research that tackles the problems. "My next group of grad students will use (Ye's router) as their experimental platform," says MacGregor. "What it gives us is a leg up."