
Exploring Internet Alternatives: the GENI Project

bridges vol. 14, July 2007 / Feature Article

by Ellen W. Zegura


The Internet has had an incredible impact on all aspects of our lives. Only a decade or so has passed since the Internet moved out of the realm of research and was made available for public use, yet almost all of us routinely use it to work, to play, and to learn. There are few aspects of our lives that are not touched in some way by the Internet, and few technological developments have had such broad impact in such a short time.

If we dig a little deeper, however, we encounter some troubling indications that, below the surface, the Internet may be degrading, putting both current and future uses at unacceptable risk.

For example, most of our critical infrastructures, such as the phone system and the air traffic control system, are available roughly 99.999% of the time - the so-called "five nines" of availability. In contrast, estimates of the Internet's availability are typically on the order of 99.9% or less; thus the Internet's downtime is two orders of magnitude higher than that of our other critical infrastructures. This level of availability has not slowed the Internet's amazing rise to prominence, but it has prevented functions such as 911 emergency calls and essential corporate communications from relying on the Internet as their primary communication infrastructure. Stated more personally, would you have tele-surgery over today's Internet?
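To make the gap concrete, here is a rough back-of-the-envelope sketch (added for illustration; it assumes nothing beyond the availability percentages quoted above) converting availability into downtime per year:

```python
# Rough downtime arithmetic for different availability levels.
HOURS_PER_YEAR = 365 * 24  # 8760 hours

def downtime_hours_per_year(availability: float) -> float:
    """Hours per year a system is unavailable at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

# "Five nines" (phone system, air traffic control) vs. "three nines" (Internet estimate).
five_nines = downtime_hours_per_year(0.99999)   # ~0.09 hours, about 5 minutes per year
three_nines = downtime_hours_per_year(0.999)    # ~8.8 hours per year

print(f"99.999% availability: {five_nines * 60:.1f} minutes of downtime per year")
print(f"99.9%   availability: {three_nines:.1f} hours of downtime per year")
print(f"ratio: {three_nines / five_nines:.0f}x - two orders of magnitude")
```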


We might simply hope that the Internet will become more reliable over time. The evidence indicates, however, that the opposite is occurring. One indication of trends in reliability is the cost associated with network operations. Network operators are the handy-persons of the Internet. They work behind the scenes to keep the network functioning by monitoring, diagnosing faults, repairing problems, and upgrading systems. The effort expended on network operations is a reasonable indication of how reliable (or fragile) the underlying systems are. Managing a large data network today is immensely difficult. Let us cite two examples: the costs of the people and systems that manage a network typically exceed the costs of the network equipment, and more than half of all network outages are caused by operator error rather than by equipment failure. The current network management situation is like having the handy-person on speed-dial. At some point, it makes you wonder whether more fundamental structural changes are called for.

Beyond reliability, the Internet has other limitations that make it ill-prepared for the world of tomorrow. Mobile and wireless devices are rapidly becoming the dominant technology for accessing the Internet in the developed world. In addition, the Internet is expanding in developing regions, where wireless is easier to deploy than fixed infrastructure. The Internet is not prepared for a future with vast numbers of wireless and mobile devices.

For example, mobile and wireless devices experience network connectivity that varies tremendously over short time periods, unlike fixed devices that can count on low variance. Think of talking on a cell phone while driving through a tunnel or walking inside a building - the quality of connectivity varies from very good to non-existent. The fundamental protocols of today's Internet, including those that find routes and ensure reliable transfer of information, simply don't work when the end devices or routers on the path are subject to intermittent connectivity. In the future, the techniques for dealing with this phenomenon, often grouped under the name delay- or disruption-tolerant networking, may no longer be restricted to niche applications such as space missions, but will need to be incorporated into the protocols we use for everyday communication.
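To illustrate the store-and-forward idea behind delay- and disruption-tolerant networking, here is a minimal, hypothetical sketch; the class and method names are invented for illustration and do not correspond to any real DTN implementation. Instead of failing when the next hop is unreachable, a node holds messages and forwards them whenever connectivity returns:

```python
import collections

class DTNNode:
    """Toy store-and-forward node: buffers messages while the link is down."""

    def __init__(self, name: str):
        self.name = name
        self.buffer = collections.deque()  # messages awaiting a usable link

    def send(self, message: str, link_up: bool) -> None:
        # Queue the message; a conventional protocol might instead time out and fail.
        self.buffer.append(message)
        if link_up:
            self.flush()

    def on_contact(self) -> None:
        """Called when intermittent connectivity returns (e.g., leaving a tunnel)."""
        self.flush()

    def flush(self) -> None:
        while self.buffer:
            message = self.buffer.popleft()
            print(f"{self.name}: forwarding '{message}'")

# Usage: messages sent while disconnected are held, then delivered on the next contact.
node = DTNNode("phone")
node.send("hello", link_up=False)   # buffered, not lost
node.send("world", link_up=False)   # buffered, not lost
node.on_contact()                   # connectivity returns; both messages are forwarded
```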

Perhaps the most obvious limitation of the current Internet is the security issues that we all encounter on a regular basis, ranging from spam to phishing attacks to Web site masquerading. Even savvy users must exercise increasing care not to fall for the tricks employed by bad guys. (Did my mother really send me that e-birthday card?) We've surely seen just the tip of the iceberg in organized cyber attacks that deny financial and other critical services.

These security problems are not exclusively network problems; indeed, it is somewhat unfair to call them limitations of the Internet. However, they depend on networks for propagation and they can exploit network characteristics to make it easier to launch attacks and avoid detection. Further, policy decisions made about networks (e.g., how open or closed) have important implications for the ease of attacks and the technologies that can be used to counter them.

Lest the situation seem dire, there is some good news. Researchers in the computer networking and distributed systems communities have promising proposals for addressing many of these concerns. The bad news is that we currently can't validate these proposed solutions, so scientific progress in the field is hindered and deployment is extremely unlikely.

House or building architects who want to brainstorm alternatives to an existing design use a transparent paper called (at least in some corners of the US) "white trash." This paper can be overlaid on a current design so that any unchangeable constraints, such as the location of load-bearing walls, are captured, while otherwise allowing free-form thought. Developing scientific and engineering alternatives to the current Internet works a lot like this transparent-paper process: some set of constraints is assumed, and designers develop ideas for alternatives on paper or whiteboards.

It isn't feasible, at the brainstorming phase, to build and deploy the new idea on the Internet, so validation is done using analysis, simulation, and small-scale experimentation, just like a building architect might create a virtual 3D fly-through or build a mock-up. But these validation techniques, in both domains, are naturally limited. They reflect some aspects of reality, but they often don't get everything right. Indeed, conventional wisdom about the layout of footpaths on a campus indicates that you should wait to build them until you see where the students choose to walk!

GENI - the Global Environment for Network Innovation - has been proposed by the research community to fill the gap between innovation on paper and potential for deployment. GENI is an experimental facility that would transform Internet development by allowing the exploration of radical designs for a future global networking infrastructure. A key feature of GENI is that it will support and encourage real end users - allowing the "footpaths" to be tested and even discovered.

In technical terms, GENI will be a shared, programmable, instrumented network substrate. Multiple experiments can run on GENI simultaneously and in non-interfering ways. Long-running experiments that attract and support real users will be possible. GENI will have control and monitoring capabilities that allow protocols to be tested over a wide range of conditions, with detailed information about behavior.
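One way to picture "non-interfering" experiments on a shared substrate is a simple slicing abstraction: each experiment reserves a share of every node it needs, and the facility refuses any request that would overbook a node. The sketch below is a hypothetical illustration of that idea only; the names and interfaces are invented and do not represent GENI's actual control framework.

```python
from dataclasses import dataclass, field

@dataclass
class SubstrateNode:
    """One programmable node in the shared testbed, with a fixed capacity."""
    name: str
    capacity: float = 1.0                             # normalized share of CPU/bandwidth
    allocations: dict = field(default_factory=dict)   # experiment name -> reserved share

    def free(self) -> float:
        return self.capacity - sum(self.allocations.values())

def create_slice(experiment: str, nodes: list, share: float) -> bool:
    """Reserve `share` of every requested node, or fail without side effects."""
    if any(node.free() < share for node in nodes):
        return False                                  # admission control: never overbook
    for node in nodes:
        node.allocations[experiment] = share
    return True

# Usage: two long-running experiments coexist on the same substrate nodes.
nodes = [SubstrateNode("node-a"), SubstrateNode("node-b")]
print(create_slice("new-routing-protocol", nodes, 0.5))   # True
print(create_slice("dtn-field-trial", nodes, 0.4))        # True
print(create_slice("greedy-experiment", nodes, 0.2))      # False: would exceed capacity
```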

Returning to the building analogy - GENI is like the coolest, rapidly reconfigurable space you can imagine. Think of Hogwarts Castle in the popular Harry Potter books. Walls, staircases, and rooms can all move about at a moment's notice. One day the space has three floors, the next it has five. One day all doors are open, the next you must have the secret password to enter some rooms.

GENI is being proposed to the United States National Science Foundation (NSF) as a Major Research Equipment and Facilities Construction (MREFC) project. MREFC funds have been used in the past for large projects such as the IceCube Neutrino Observatory in physics and the Scientific Ocean Drilling Vessel in oceanography. Under NSF auspices, a GENI Project Office (GPO) has recently been awarded to BBN Technologies, a company known for pioneering the development of the ARPANET, the forerunner of the Internet. A GENI Science Council (GSC) has been formed and its initial members have been selected to represent the computing research community in guiding the science plan and facility design for GENI. The GSC also includes members with expertise in the social, ethical, and political issues associated with technical design.

Where might this lead? It's difficult to say with certainty, of course. Projects to provide experimentation capability are being pursued in several other countries, including the CANARIE project in Canada, a Next Generation Internet project in Japan, and the GEANT2 project in Europe. Researchers and scientists may learn that patches are possible for the current Internet that will shore it up far better than we know how to do today, giving companies an incentive to develop such patches. Users may find that GENI, under certain configurations, is so attractive that they don't want to use the current Internet; this scenario has much in common with the move from traditional telephone systems to the current Internet. Or we may learn that the current Internet does certain things as well as they can possibly be done, which would be good to know so that we can turn our energies elsewhere.

Both the GPO and the GSC welcome interaction with international researchers, policy makers, and potential industrial partners. This type of work is inherently multi-disciplinary and multi-national. For contact and other information, see www.geni.net.


Acknowledgements. Many researchers have worked on the ideas behind GENI and have contributed, directly or indirectly, to the text of this article. The author would like to acknowledge David Clark, Larry Peterson, Scott Shenker, Jennifer Rexford, and Helen Nissenbaum, as well as other members of the GENI Planning Group, GENI working groups, and GENI Science Council.



***

The author, Ellen W. Zegura, is Professor, Associate Dean, and Division Chair in the College of Computing at the Georgia Institute of Technology.


