The digital chain connecting one’s laptop to a Web site thousands of miles away can be traversed by a single click–so long as no link within the chain refuses to carry the signal. Such refusals, though still rare, are on the rise.
The Net is increasingly being broken into cantons.
The Internet was built on principles of “end-to-end neutrality,” an engineering rule of thumb calling for smarts at the edges of the network rather than in the middle. The idea was–and remains–that fancy features work better at the edges. Since we can’t anticipate the uses to which the network itself might be put, globally optimizing it for one use might regrettably disadvantage others.
Thus the basics, such as data encryption between distant users, and verification that data sent is actually received, are left to the computers that attach to the Net rather than to the network itself. The Net’s job has been determinedly simple: Any given intermediary will use best efforts to move the data it receives at least one step closer to its declared destination.
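That best-efforts rule can be illustrated with a toy model: an intermediary looks only at a packet’s declared destination, never its payload, and passes it one hop along. The router names and topology here are hypothetical, purely for illustration.

```python
# Toy model of end-to-end routing: each intermediary forwards a packet
# one hop closer to its destination without inspecting the payload.
# The names and topology are hypothetical.

ROUTES = {  # next hop toward each destination, per router
    "router_a": {"server_x": "router_b"},
    "router_b": {"server_x": "server_x"},
}

def forward(router, packet):
    """Best-effort forwarding: consult only the destination, never the payload."""
    next_hop = ROUTES[router].get(packet["dst"])
    if next_hop is None:
        return None  # best effort: drop if no route is known
    return next_hop

packet = {"dst": "server_x", "payload": "any data at all"}
hop = forward("router_a", packet)  # router_a hands off to router_b
hop = forward(hop, packet)         # router_b delivers to server_x
```

The point of the sketch is what is absent: nothing in `forward` ever reads `packet["payload"]`, which is exactly the neutrality that filtering intermediaries abandon.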
But a number of pressures are converging to complicate that job.
Internet service providers and their customers have long since tired of handling overwhelming volumes of spam. Parents want to shield their children from pornography and hate speech. Governments want to exclude certain content from their respective territories.
They share a common desire to readily categorize and filter out that which they don’t want themselves or others to see. While there are a variety of possible solutions for each problem, one common approach is to ask the network to help: An end user’s computer need not be burdened (or perhaps entrusted) with the task of sorting out what’s desirable and what’s not.
Instead intermediaries could do this so long as they can be enticed–or coerced–to apply exceptions to the end-to-end rule of “whatever this data is, help get it to where it’s going.”
Documenting the new crop of discerning Net couriers among the old-time end-to-enders isn’t easy. Any number of problems might prevent someone from reaching a requested Web page or other Internet resource, including network congestion, misconfigured servers or broken routers.
How, then, can you know when a blockage is due to the explicit filtering of content somewhere within the network at someone else’s initiative? To complicate matters, filtering can take place anywhere along the line that extends from one’s own computer to one’s ISP to intermediate carriers to the destination’s ISP to the destination server itself.
Pursuing that question lets an investigator see the increasingly complicated dynamics at work under the hood of the seemingly seamless Net.
The Saudi experience
Some filters are openly and unapologetically used. Saudi Arabia, for example, is quite open about the fact that all network traffic going into and out of the kingdom is routed through a central farm of proxy servers. These servers review each Web page request from each Saudi Internet user, and if the page is listed on the government-maintained blacklist, a message explicitly denying access will be displayed in the user’s browser.
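In pseudocode, such a proxy’s decision is simple: check each requested URL against the blacklist before fetching it. The sketch below uses hypothetical URLs and a stand-in denial message; the real Saudi blacklist is, of course, not public.

```python
# Minimal sketch of proxy-based filtering: every page request is checked
# against a government-maintained blacklist before being fetched.
# The URL and denial text are hypothetical stand-ins.

BLACKLIST = {"http://example.com/banned-page"}

def handle_request(url):
    if url in BLACKLIST:
        # Explicit denial: the user is told the page was blocked.
        return (403, "Access to the requested URL has been denied.")
    # Otherwise the proxy fetches the page and relays it unchanged.
    return (200, "<fetched page contents>")

status, body = handle_request("http://example.com/banned-page")  # denied
```

Because the denial is explicit, a researcher with proxy access can distinguish a blocked page from one that merely failed to load–which is what made the study below possible.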
While the list itself isn’t public, the Saudi government was willing to give proxy access to my colleague Ben Edelman and me to test what’s filtered and what’s not. Over 60,000 requested URLs later–among which there were 2,000 denials–we’ve released “Documentation of Internet Filtering in Saudi Arabia,” a snapshot in time of some of the Web pages filtered there.
Other regimes are much more subtle with their filtering.
In China, for example, users cannot easily tell the difference between a filtered page and one that is unavailable simply because of a transient accident. On the supply side, the French courts have demanded that Yahoo block access to auctions of Nazi memorabilia by French Web surfers. Yahoo has raised many objections to that demand, though technical impossibility is not the strongest among them: A panel of experts led by Internet pioneer Vint Cerf concluded that such filtering was more or less feasible.
These cantons are not simply geographic: They’re multidimensional.
Companies, libraries and schools worldwide frequently adopt firewalls and filters to rein in unwanted types of Web surfing. Entities deemed spammers (or supporters of spammers) can be blacklisted by spam cops, whose judgments are heeded by large numbers of network intermediaries. Recent efforts include collaborative spam filtering on the basis of thousands of users’ weighted “votes”–diffusing responsibility for deeming someone enough of a miscreant to warrant denial of network access. End users may or may not be aware that their outgoing or incoming e-mail is being frisked for spam.
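The weighted-vote scheme can be sketched in a few lines: a sender is blacklisted only when the weighted sum of “spam” votes crosses a threshold, so no single reporter’s judgment is decisive. The weights and threshold below are invented for illustration.

```python
# Hypothetical sketch of collaborative spam filtering by weighted votes:
# responsibility for blacklisting is diffused across many reporters.

def is_blacklisted(votes, threshold=10.0):
    """votes: list of (is_spam, reporter_weight) pairs from many users."""
    spam_score = sum(weight for is_spam, weight in votes if is_spam)
    return spam_score >= threshold

# Four reporters of varying trustworthiness weigh in on one sender:
votes = [(True, 4.0), (True, 3.5), (False, 1.0), (True, 3.0)]
verdict = is_blacklisted(votes)  # weighted spam score 10.5 crosses 10.0
```

The design choice worth noticing is the diffusion itself: each reporter contributes only a fraction of the score needed to deny someone network access.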
Some of those who identified end-to-end neutrality as an engineering principle now embrace it as a political one. It is becoming, in part, a plea not to overlay one filter on top of another within the collectively hallucinated cloud that is the Internet. In the meantime, it’s important to show where the filtering is happening within the network cloud, and if possible by whom.
We are developing a distributed application, much like SETI@home, that users can download and run while their computers are idle to test their global reach from their particular point of network presence. The results can be collated worldwide, letting us see the boundaries shift as they are created and unveiled.
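The core of such a client is a probe that requests a set of URLs from the local vantage point and records the raw outcome. Telling deliberate filtering apart from ordinary failure takes many vantage points and repeated trials; this sketch only classifies a single attempt, and the example URL is a placeholder.

```python
# Rough sketch of a single reachability probe, run from one user's
# point of network presence. Only the raw outcome is recorded here;
# collating results from many vantage points is what maps the blocking.
import urllib.error
import urllib.request

def probe(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return ("ok", resp.status)
    except urllib.error.HTTPError as e:
        return ("http_error", e.code)   # server answered with a denial or error
    except (urllib.error.URLError, OSError) as e:
        return ("unreachable", str(e))  # timeout, reset, DNS failure, etc.

# results = {url: probe(url) for url in ["https://example.org/"]}
```

An “unreachable” result from one vantage point means little on its own; the same URL reported reachable elsewhere, persistently, is what begins to outline a canton’s border.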
If we fail to regularly take the pulse of the Internet at its most basic level–as it moves packets from here to there–we might experience the end of end-to-end neutrality before we even realize we’ve lost it. After that, it’s going to be too late to do anything about it.
Author: Jonathan Zittrain
News Service: News.com