A Free Software Agenda for Peer-to-Peer
by Andy Oram
Editor's Note: The following speech was given at the February 2002 Free and Open Source Software Developer's Meeting in Brussels, Belgium.
Although this was billed as a speech about peer-to-peer, I'm actually going to talk about how to make Internet-connected computers function better. The suggestions I offer today would make the Internet a more effective place for many types of applications, peer-to-peer as well as others.
New challenges for free software developers
I've never been that happy with the term "peer-to-peer," which has the sound of a marketing term and has never been tied to a clear specification of technical criteria. But I think the concept drives developers to build new systems that stretch our current use of the Internet in valuable ways. Peer-to-peer systems definitely have certain vulnerabilities, so they and their infrastructure are forced to be better than traditional applications. I have repeatedly said that the problems in peer-to-peer systems are neither new nor unique; they make us look for solutions to old problems that we all worked around or tried to ignore before.
Furthermore, the challenges in this area are just right for free software. First, peer-to-peer makes life especially hard for a proprietary software company. Few companies can survive even in the current market for conventional products, and peer-to-peer makes survival harder still. Most of the activity in peer-to-peer systems, by definition, goes on at the end points. The situation is like all the users bringing parts for a model airplane, and the proprietary company providing the glue. The companies want you to sniff the glue and come back for more, but it's a very thin basis on which to charge money.
I'm not denigrating what proprietary companies do; I'm just suggesting that they have chosen a steep uphill path. There are many people with bold thinking and good products in the proprietary companies I talked to. Researchers trying to create mega-projects with ambitious goals should examine the commercial products to see what can be realistically expected from the next generation of software.
In contrast, free software fits the spirit of peer-to-peer beautifully. Peer-to-peer is about empowering end users. Customization is a common feature; extending the system with new capabilities is to be expected. Because central administrators have no control over what people do on their systems (at least in wide area networks), peer-to-peer systems can't tolerate the proprietary hack of security through obscurity. Security has to be built into the protocol. For these reasons, great advances in peer-to-peer have already emerged from academia and the free software movement, and I expect more to come from these sources.
Some researchers, in Scientific American and elsewhere, call for mega-projects that solve all resource, routing, and indexing problems at once. While I am in awe of some of the research, I suspect that a more quick-and-dirty approach, based on an easier set of assumptions, may be more fruitful. Ted Nelson said dismissively of the World Wide Web, "Berners-Lee picked off the easy stuff." Yes, he did, and that's why the World Wide Web was successful enough to change how all of us work. If you want to develop something really significant, make it easy for yourselves.
Is peer-to-peer worth the attention of leaders in the free software movement? I believe it offers a critical opportunity. To see why, think strategically.
It would be tempting, but not productive, to take this opportunity to bash Microsoft. I believe that Microsoft got to its pre-eminent position because it manages to keep up with trends and meet people's needs in some basic ways. It has the most effective programmers in the industry, the most effective marketing people in the industry, and the most effective lawyers in what's left of the industry.
Microsoft, in its position of control over the resources of the individual computer, can take over any initiative centered on that computer, to the extent it's allowed by law. After seeing what happened to disk compression, office software, and (if recent trends continue) audio playback technology, new software companies could easily come to believe that they exist simply to provide ideas for new Microsoft Windows features. Even the TCP/IP stack and the Web browser have not been exempt; everything that wants to insert itself onto the desktop has to pass through the Mines of Moria. (After the second volume of Lord of the Rings has been released in movie form, I will be able to speak of Shelob's lair, an even better metaphor.)
The only hope of breaking the monopoly--and not a sure thing, either--is to develop a radically new form of software that is not easy to capture, that distributes control among many different sites, and that offers so many advantages that people rush out to embrace it rather than stay safely in the sheepfold.
Think also of Microsoft's strategy. Free software is the most popular server software: for instance, BIND, Apache, and sendmail are superior to Microsoft's products by almost any measure. Microsoft hopes to spread upward from the desktop to the server, as a disruptive technology (described in Clayton M. Christensen's book, The Innovator's Dilemma) traditionally does--from the cheaper and simpler system to the more expensive and sophisticated system. The idea behind this strategy is that system administrators will want the management of a corporate server to be as easy as running office applications.
I don't want to let this form of Gresham's law triumph. Why don't we reverse it? Let's provide servers that are so powerful, so easy to customize, and so robust in the face of failure or attack, that every user wants to run a server. This movement will provide more competition, not only to Microsoft, but to other institutions that want to control users' options, such as cable TV companies.