Koman: I'm just struck by how much more virulent the social conversation is over Napster and peer-to-peer than over, say, the Web. There was concern about child pornography on the Web, but the Web being essentially a one-way publishing or broadcast medium as opposed to a two-way medium -- is that the difference?
Shirky: Yes. When the Web came out, everyone focused on the fact that it was a visual medium, and said that's why advertisers were interested in it. But plainly it was the fact that it can't be annotated or replied to that was the real reason advertisers were interested. You can make outlandish claims without anyone contradicting you.
And you can see the effect of that on the corporate landscape: every time a technology like Third Voice comes along that threatens to let people annotate Web sites, there's a huge backlash -- "No one is going to have a conversation about my Web site that I don't control." Peer-to-peer, by being much more Usenet-like, much more IRC-like, is more resistant to that kind of control, and is therefore upsetting to the people for whom that bottleneck is useful.
Koman: In our report, "2001 P2P Networking Overview," we talk at some length about making a shift from the Center Net to the Edge Net, and one of the points that Lucas Gonze makes is that that's a shift from being machine-specific to not really knowing which machine you're talking to. It's a shift from machine-centrism to content-centrism. That is, I don't care who you are. I don't care what machine you have or what your IP address is. I just want this file and I'll take any copy of it that's out there.
Shirky: I would argue, in fact, that what the peer-to-peer world has shown us is that all kinds of things can have addresses. When you point to Napster, of course you end up with a content-centric model. But when you point to ICQ, you get a people-centric model. And if we end up in a world where Web services are offered over, say, Jabber, you get a service-centric model, and so forth.
It's the same problem people have been having in the hardware space. Everybody wanted to know, "What comes after the PC?" And the answer, of course, is "everything." Everything is going to start happening all at once. We've got the PDAs, but we've also got mainframes running Linux and we've got wristwatches and, you know, there is never going to be a sole class of hardware that mediates all our connections together.
In the same way, I think this is the lesson of peer-to-peer: it's the decentering of the address space. We've just lived through 25 years of a completely machine-centric world, where every other protocol is hung off a machine address somewhere -- you know, firstname.lastname@example.org, telnet.oreilly.com, www.oreilly.com. And we're now moving into a world where some things don't have that. When you and I collaborate on Groove, I don't care where your machine is. When you and I talk on ICQ, I don't care where the machine you're using as a terminal is, and so forth. So I think the huge legacy of peer-to-peer -- even if peer-to-peer is "dead" -- the legacy that is going to fundamentally change the landscape is the notion that address spaces no longer have to be tied to machines; but they don't have to be tied to content either.
Koman: Content, people, resources.
Shirky: Right. Any resource that can be named can be addressed.
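That exchange is the kernel of content addressing: a name derived from the bytes themselves, valid on any machine that holds them. A minimal sketch in Python, assuming a SHA-1 digest as the name (the peer names here are invented for illustration, not any real Napster or Gnutella mechanism):

```python
import hashlib

def content_address(data: bytes) -> str:
    """A location-independent name derived from the bytes themselves."""
    return hashlib.sha1(data).hexdigest()

song = b"...stand-in for the bytes of an MP3..."

# Two peers holding identical bytes answer to the same address,
# no hostname or IP required.
replicas = {"peer-on-a-dorm-lan": song, "peer-on-home-dsl": song}
addresses = {content_address(blob) for blob in replicas.values()}
assert len(addresses) == 1  # same content, same name, any machine
```

Because the address depends only on the content, a client asking for that digest can accept an answer from any peer that has the file.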
Koman: One other thought, this one about Code Red, which is essentially a massive denial-of-service attack on the White House. Denial-of-service attacks seem to me to be a problem of the center of the Net. In a distributed edge Net, where you're free of being tied to one actual machine, and you have sort of massive redundancy of content at least -- that is to say, there are 100,000 copies of whatever the hit song is -- you don't care what the status of any one machine is as long as you can get one of those redundant copies. Does that make the Net more stable in that things like DoS attacks become irrelevant if you don't really care about the status of any single machine?
Shirky: Well, you could have a denial-of-service attack on Napster by attacking the central look-up servers, server.napster.com and server2.napster.com. Gnutella is more resistant to that kind of attack. But I think the real core of the resistance is not so much denial of service, although that's a good example, as just another iteration of something we've seen in the history of computer engineering over and over and over again: when a certain part becomes so unreliable, relative to the application it's deployed for, that it can no longer be trusted, the solution is almost invariably to replicate that part many times over. When the CPU can't be tweaked any more, we go to parallel processing. When a disk drive isn't reliable enough for a bank to entrust its data to, we go to RAID.

So what Napster has shown us is that you can build a redundant array of inexpensive servers, and it would be much harder to bring down at the level of the individual service. If you wanted to prevent anyone from downloading a certain Britney Spears song on the Gnutella network, it would be nearly impossible. And I think that is a model corporations are going to begin to respond to, because they are in fact the people who own thousands and tens of thousands of desktops, and for whom redundancy and backup is a permanently critical issue. And since they've already spent the money on the hardware, they might as well use it for these other kinds of things. So it's not so much a question of center and edge as it is that redundancy of inexpensive and unreliable parts is often a superior strategy in general, and denial-of-service attacks show us another place where that's the case.
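The "redundant array of inexpensive servers" idea reduces, on the client side, to trying replicas until one answers. A rough sketch; `flaky_get` and the peer names are stand-ins for illustration, not any real Napster or Gnutella API:

```python
import random

def fetch_from_any(replicas, get):
    """Try unreliable replicas in random order; the first success wins."""
    for host in random.sample(replicas, len(replicas)):
        blob = get(host)
        if blob is not None:
            return blob
    raise IOError("no replica reachable")

# Simulated network: only one of the four peers happens to be online.
store = {"peer3": b"song.mp3"}

def flaky_get(host):
    return store.get(host)

print(fetch_from_any(["peer1", "peer2", "peer3", "peer4"], flaky_get))
```

No individual peer has to be dependable; the client's loop turns many unreliable parts into one reliable service, which is why attacking any single host accomplishes so little.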
Koman: Right, the dichotomy is between highly reliable but very rare machines versus very unreliable but massively redundant --
Shirky: Exactly. When Napster was at its height, the chance that any given Napster server was online at any given moment was very small, but the chance that you could get a copy of "Oops, I Did It Again" was effectively 100 percent, with perhaps no single copy being more than 10 percent reliable. There was never a moment when you couldn't get that Britney Spears song. That is a really interesting model in engineering terms, because it follows on from a lot of what we've seen. Napster is forward-error-correcting in a sort of metaphorical way, by providing tremendous redundancy.
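The arithmetic behind "unreliable copies, reliable whole" is worth making explicit. Assuming each copy is independently online a fraction p of the time (the 10 percent figure above), the chance that at least one of n copies is reachable is 1 - (1 - p)^n:

```python
def availability(per_copy_uptime: float, copies: int) -> float:
    """Chance that at least one of `copies` independent replicas is online."""
    return 1.0 - (1.0 - per_copy_uptime) ** copies

# Each host online only 10% of the time; watch availability climb with copies.
p = 0.10
for n in (1, 10, 50, 100):
    print(n, round(availability(p, n), 6))
```

With 50 copies at 10 percent uptime each, availability is already above 99 percent; at 100 copies it exceeds 99.99 percent, which is why "effectively 100 percent" is the right description for a popular song.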
One of the other things -- I mean the equivalent of a denial-of-service attack on Napster was when a bunch of boneheads from the music industry announced that they were going to "flood" the Napster network with bogus copies of popular songs, not realizing of course that the point of Napster is that no one stores songs on their hard drive that they don't listen to. If I download a copy of something that purports to be "Wild Horses" by the Rolling Stones and is in fact an anti-Napster screed, I'm going to delete it. So good content propagates in the system and bad content falls out. And it was a mark of how little people understood the value of the Napster network that they thought they could even begin to flood it with bogus content.
Koman: So back to your keynote theme, and we'll finish up. Whether P2P is alive or dead, it will have a lasting impact in terms of rewiring of the Net, of re-architecting the Net?
Shirky: I wouldn't even say re-architecting. The Net is like a car whose engine you tinker with even as you're driving it. So when I think about peer-to-peer, I don't think, "Oh, this is going to replace everything that's gone before." I think what it's done is put a whole lot of new tools in the toolbox. And for all the hype that came and went, there is an army of 23-year-olds for whom what Napster did wasn't a revelation or a revolution; it was just new information. And now they think, "Oh, if we need a 600-gig hard drive, we don't have to buy a 600-gig hard drive. We just need 600 people to give us a gig each on their PCs."
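The "600 people give us a gig each" idea can be sketched as naive sharding; this illustration ignores replication and peer churn, which any real system built on volunteer desktops would have to handle:

```python
def shard(data: bytes, peers: list[str]) -> dict[str, bytes]:
    """Split data into roughly equal chunks, one per contributing peer."""
    size = -(-len(data) // len(peers))  # ceiling division
    return {peer: data[i * size:(i + 1) * size]
            for i, peer in enumerate(peers)}

def reassemble(shards: dict[str, bytes], peers: list[str]) -> bytes:
    """Recombine the chunks in the original peer order."""
    return b"".join(shards[p] for p in peers)

peers = [f"peer{i}" for i in range(600)]
blob = bytes(range(256)) * 100  # stand-in for a large file
shards = shard(blob, peers)
assert reassemble(shards, peers) == blob  # the 600 fragments round-trip
```

Each peer contributes a small slice of disk, and the aggregate behaves like one large drive; combining this with the replication math above is what makes the aggregate both big and dependable.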
Someone out there right now is working on an application that needs 30,000 sound cards to run. I don't know what that application is, but I'm going to be awfully interested when it launches. And these kinds of notions -- that the domain name system is not the only way to address the IP network, that aggregated resources can be more reliable or more scalable than buying and bundling all the applications in the center -- these tools and techniques are going to become normal over the next five years, until we take it for granted that certain classes of applications simply distribute the load across the network, without even really noticing that this wasn't part of the general lexicon until the last couple of years.