
A Free Software Agenda for Peer-to-Peer

Security and the resource/reputation services it enables

Hanging over everything is the problem of security. Two interesting issues are highlighted by peer-to-peer.



The first is that a peer-to-peer application is both a client and a server. So now everybody is running a server; that's what empowerment means. And now every person has to be paranoid, like a system administrator on a network. Welcome to empowerment.

Let's not exaggerate the risks of peer-to-peer. If somebody exploits a buffer overflow on your Web site or database server, you really have to worry. If somebody exploits a buffer overflow on a random PC in your organization, you probably have less to lose (unless the PC belongs to a vice president who has left the corporate plans for the upcoming year on it). So the first thing companies should do with peer-to-peer is teach their vice presidents to encrypt their files. (Even Windows now has an Encrypting File System.) Apart from that, developers have some responsibilities toward peer-to-peer applications.

It's time to get security right, as much as we can. The long list of recent vulnerabilities in free software that Linux Weekly News published just last week is cautionary. Perhaps it's time to move away from the C language as the default application platform. Can free software developers pledge to use C only when it's called for--that is, when speed is absolutely critical or the software needs access to low-level hardware registers? Otherwise, make it a habit to code in something more secure, like Java, and accept the performance cost. I'm not religious about Java or any other platform; if you want, you could code in Scheme, probably at a higher performance cost. But let's eliminate the most common classes of security flaws.
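
To make that concrete, here is a rough sketch of my own (not something from the talk) of what a bounds-checked language buys you: an oversized copy into a fixed-size buffer fails loudly instead of silently overwriting adjacent memory, which is the failure mode behind the classic buffer-overflow exploit.

    // Minimal illustration: in C, an unchecked strcpy/memcpy here could
    // overwrite the stack; in Java the runtime checks the lengths and throws.
    public class BoundsCheckDemo {
        public static void main(String[] args) {
            byte[] destination = new byte[16];    // fixed-size buffer
            byte[] attackerInput = new byte[64];  // larger than the buffer

            try {
                System.arraycopy(attackerInput, 0, destination, 0, attackerInput.length);
            } catch (IndexOutOfBoundsException e) {
                System.err.println("Copy rejected: " + e.getMessage());
            }
        }
    }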

The second interesting security issue in peer-to-peer is that it tends to break out from the local area network; the old, rigid idea of the "intranet" is cast aside. You are sending data back and forth with people across the Internet, and you have to protect that data. So we'll finally see end-to-end security everywhere. Firewalls will be less important, although they're still good for preventing common spoofing and denial-of-service attacks.
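
As a rough illustration of what "end-to-end" means in code (my own sketch; the host name and port are placeholders, not anything from the article), a peer can open a TLS connection straight to another peer with the standard Java sockets API, so the data is protected regardless of which networks and firewalls it crosses:

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.io.OutputStream;

    public class EndToEndExample {
        public static void main(String[] args) throws Exception {
            // "peer.example.org" and 8443 stand in for another peer reachable
            // across the Internet, not a machine on a trusted intranet.
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket("peer.example.org", 8443)) {
                socket.startHandshake();  // negotiate keys end to end
                OutputStream out = socket.getOutputStream();
                out.write("hello, peer".getBytes("UTF-8"));
                out.flush();
            }
        }
    }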

Data integrity and privacy are known technologies; they may not be simple, but the computer field has plenty of experience implementing them. We have to teach people how to use encryption, how to do integrity checks on files, how to distribute signatures, and how to form Webs of Trust. A Web of Trust is a classic peer-to-peer architecture, and it may be the best type of architecture for certificates. Big, bureaucratic certificate authorities embody risks; that came home to everybody last year when somebody managed to wrangle false certificates from VeriSign claiming to represent Microsoft. Let's take the wonderful PGP (Pretty Good Privacy) infrastructure we've got in the free software movement, and build reputation and trust systems on it. Advogato is a classic free-software example. Some say that the Web of Trust is not scalable, but to a certain extent our relationships with other people are not scalable either. That criterion may not be so important.
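
As a minimal sketch of the file-integrity piece (my own example using the standard java.security API, not code from the article), a peer can compute a digest of a file and compare it against a checksum that someone in its Web of Trust has signed, for instance in a PGP-signed checksum file:

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.MessageDigest;

    public class FileDigest {
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = new FileInputStream(args[0])) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    md.update(buffer, 0, read);  // feed the file through the digest
                }
            }
            // Print the digest as hex; verification means comparing this value
            // to the one published and signed by a trusted party.
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex);
        }
    }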

Beyond those basics come authentication, and reputation, and non-repudiation, and update protection, and all sorts of other trust issues. Resource allocation also gets mixed in. Someone may put a camera on his bike and believe that the sixty hours of footage he took on his cross-European ride is the greatest thing to share on a peer-to-peer system, but we have to restrain him from uploading it unless he's willing to donate some other resource we can use. These are the sorts of issues that are built on top of the standard security infrastructure of encryption, digests, and signatures.
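
One way to picture that layering (a hypothetical sketch of mine, not a design from the article): a peer signs a plain-text claim about what it has contributed, and a resource-allocation or reputation service verifies the claim against the peer's public key before granting anything in return.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignedClaim {
        public static void main(String[] args) throws Exception {
            // The peer's long-lived identity key; in practice it would be
            // loaded from a keyring rather than generated on the spot.
            KeyPair identity = KeyPairGenerator.getInstance("RSA").generateKeyPair();

            // A hypothetical claim a service might require before accepting
            // a large upload like those sixty hours of footage.
            byte[] claim = "peer:alice donated 500MB of storage"
                    .getBytes(StandardCharsets.UTF_8);

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(identity.getPrivate());
            signer.update(claim);
            byte[] signature = signer.sign();

            // Any other peer can check the claim with alice's public key.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(identity.getPublic());
            verifier.update(claim);
            System.out.println("claim verifies: " + verifier.verify(signature));
        }
    }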

I find it interesting that we've come so far with a system of privileges that basically comes down to assigning a user to every major function of the system (like mail, DNS, databases, and so on) and allowing that user sole access to the files used by that function. That's a very simple privilege system! The traditional Unix permissions don't go much further. I have encountered hardly any projects that even use the Unix group permission, although the group bits of the inode have moonlighted as a useful place to store hidden information about the file.

Access control lists were supposed to be the big advance in computer security. But I never heard anybody say, "Boy, access control lists really saved this project! If I couldn't assign that user GRANT READ privileges...."

Still, we're going to need a new security superstructure for peer-to-peer. I don't know whether this means we should all be running a system that contains mandatory access control, like SELinux or TrustedBSD, or--the following strikes me as more likely--whether free software developers should work on systems for checking and sharing reputation, such as the one used in Free Haven.

I want to digress for a minute and talk about a subject that's very important for some members of the peer-to-peer community: anonymity. There are problems combining anonymity or pseudonymity with other goals in peer-to-peer systems. Resource allocation, for instance: If you can't exclude someone, you can't control resources.

Peer-to-peer systems like Freenet allow anonymous users, but I think they're incompatible with resource management for the same reason. Pseudonymity is no better, so long as a person can always come back bearing a new pseudonym. Maybe there are clever ways to make people donate resources and gradually raise their participation over time, but those would raise barriers to new users. So I'm not sure anonymity is conducive to robust peer-to-peer systems, at least systems that value some kind of persistence or ongoing collaboration among users. However, anonymity is a very important goal, and there should be systems in place to protect it where appropriate.

I mentioned earlier that proprietary software companies have trouble finding their niches in peer-to-peer. But I think there's a great business opportunity in reputation and resource management systems. Since they are services, they could be developed as free software and become commercially viable. Such services do not have to become massive and try to encompass all kinds of users; they could be limited to serving one population or one application. I think the potential for doing business in the area of reputation and resource management is enormous.

The big problem such a business has to solve, as with any system involving online identities, is using the proper out-of-band procedures to verify and collect information on participants. Perhaps a legally notarized document would be enough.
