
Gnutella and Freenet Represent True Technological Innovation

Freenet basics

The goals of Gnutella and Freenet are very different. Those of Freenet are more explicitly socio-political and, to many people, deliciously subversive:

  • To allow people to distribute material anonymously.

  • To allow people to retrieve material anonymously.

  • To make the removal of material almost insuperably difficult.

  • To operate without central control.

The latter feature characterizes both Freenet and Gnutella, and differentiates them from Napster. A court order can shut down Napster (and any mirror site), but shutting down Freenet or Gnutella would be just as hard as prosecuting all those 317,000 Internet users who allegedly exchanged Metallica songs.

Another technical goal of Freenet proves particularly interesting: it spreads data randomly among sites, where the data can appear and disappear unpredictably. In addition to serving the social goals listed above, Freenet offers an intriguing possible solution to the problem of Internet congestion, because popular information automatically propagates to many sites.

Freenet bears no relation to the community networks with similar names of the 1980s and early 1990s. It grew out of a research project launched in 1997 by Ian Clarke at the Division of Informatics at the University of Edinburgh. He has made a paper from that project available online. (Warning: It's a PDF and I had trouble both viewing and printing it from a couple different PDF viewers.)

The Freenet architecture and protocol are similar to Gnutella's in many ways. Each cooperating person downloads a client and sends requests to a few other known clients. Requests are uniquely marked, are handed from one site to another, are temporarily stored on a stack so that data can be returned, and are dropped after each one's time-to-live expires.
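The forwarding behavior described above can be sketched in a few lines of Python. This is an illustration of the general idea, not the actual protocol: the class and method names are invented here, and real clients do much more (encryption, persistent routing tables, and so on).

```python
# Hypothetical sketch of Freenet-style request forwarding: each request
# carries a unique ID and a time-to-live; a node remembers the ID so it
# can detect loops and route a reply back, and drops expired requests.
import uuid

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []     # other known clients
        self.pending = {}   # request ID -> node that handed it to us
        self.store = {}     # key -> data held locally

    def request(self, key, ttl=10, req_id=None, sender=None):
        if ttl <= 0 or req_id in self.pending:
            return None     # time-to-live expired, or a loop we have seen
        req_id = req_id or uuid.uuid4().hex
        self.pending[req_id] = sender   # remember where the reply goes
        if key in self.store:
            return self.store[key]
        for peer in self.peers:         # hand the request on
            data = peer.request(key, ttl - 1, req_id, self)
            if data is not None:
                return data
        return None
```

With a small chain of nodes — `a` knowing only `b`, `b` knowing only `c` — a request from `a` for data stored at `c` succeeds within the time-to-live, while a request with `ttl=1` dies before reaching `c`.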

The game of find-the-data

The main difference between the two systems is that when a Freenet client satisfies a request, it passes the entire data to the requester. This is an option in Gnutella but is not required. Even more important, as the data passes back along a chain of Freenet clients to the original requester, each client keeps a copy (unless it is a huge amount of data and the client decides that keeping it is not worth the disk space). The client keeps the data so long as other people keep asking for it, but discards the data after some period of time when no one seems to want it.
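The keep-a-copy-on-the-way-back behavior is, in effect, a cache with idle-time eviction. Here is a minimal Python sketch of that idea; the class, method names, and size threshold are all assumptions made for illustration, not anything specified by Freenet itself.

```python
# Hypothetical sketch: a node on the return path keeps a copy of data
# passing through it, refreshes the timestamp on each request, and
# discards data that no one has asked for in a while.
import time

class CachingNode:
    def __init__(self, max_bytes=1_000_000):
        self.cache = {}          # key -> (data, time of last request)
        self.max_bytes = max_bytes

    def pass_back(self, key, data):
        """Called as data flows back toward the requester: keep a copy,
        unless it is so large that keeping it isn't worth the disk space."""
        if len(data) <= self.max_bytes:
            self.cache[key] = (data, time.time())
        return data              # hand the data on toward the requester

    def lookup(self, key):
        entry = self.cache.get(key)
        if entry is None:
            return None
        data, _ = entry
        self.cache[key] = (data, time.time())  # each request refreshes it
        return data

    def expire(self, max_idle_seconds):
        """Discard data no one has asked for within max_idle_seconds."""
        now = time.time()
        stale = [k for k, (_, t) in self.cache.items()
                 if now - t > max_idle_seconds]
        for key in stale:
            del self.cache[key]
```

So long as other people keep asking for an item, `lookup` keeps refreshing its timestamp and `expire` leaves it alone; once requests stop, the next `expire` pass drops it.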

What is accomplished by this practice, apparently so inefficient compared to the Internet? Ian Clarke tells me it is not all that inefficient -- for large amounts of data its efficiency is comparable to that of the Web -- and that in fact it accomplishes quite a number of things:

  • It allows the transience required to meet the goals of anonymity and persistence.

  • It lets small sites distribute large, popular documents without suffering bandwidth problems. You don't have to run out and get a 16-processor UltraSPARC, or rent space on someone else's, just because you put out an exciting video that lots of people want to download.

  • It rewards popular material and allows unpopular material to disappear quietly. In this regard, Freenet is definitely different from the Eternity Service (a model proposed a few years ago but never implemented). Its goal is not to proliferate any kind of garbage people want, but to prevent material from being taken down if a lot of people think it's valuable.

  • It tends to bring data close to those who want it. (As with Gnutella, "closeness" in Freenet has no geographical meaning, but refers only to the number of hops between Internet sites.) This is because the first request from node A to node B may have to pass through many other nodes, but the second and subsequent requests can be satisfied by node B directly. Furthermore, nodes A and B are likely to be one or two hops away from many nodes operated by similar people who like the same kinds of content. All those people will be pleased to find the content coming back quickly after the first request is satisfied.

The last item is particularly interesting architecturally, because the popularity of each site's material causes the Freenet system to actually alter its topology. When a site discovers that it is getting a lot of material routinely from one of its partners, it tends to favor that partner for future requests. Bandwidth increases where it benefits the end users. Building on my Europe/China Silk Road analogy, Clarke says, "Freenet is like bringing China closer to Europe as more and more Europeans ask to trade with it."
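The favor-your-productive-partners behavior Clarke describes can be sketched as a routing table that ranks peers by how often they have supplied data in the past. Again, this is an invented illustration of the principle, not Freenet's actual routing algorithm.

```python
# Hypothetical sketch: a node counts successful retrievals per peer and
# tries the historically most productive partners first, so the network's
# effective topology shifts toward the links end users actually benefit from.
from collections import defaultdict

class AdaptiveRouter:
    def __init__(self, peers):
        self.peers = list(peers)
        self.successes = defaultdict(int)  # peer -> data retrievals supplied

    def record_success(self, peer):
        self.successes[peer] += 1

    def ranked_peers(self):
        # most productive partners first; untried peers sort last
        return sorted(self.peers,
                      key=lambda p: self.successes[p],
                      reverse=True)
```

After a node routinely gets material from one partner, that partner rises to the front of the list for future requests — China, in Clarke's analogy, moves closer to Europe.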
