O'Reilly Network: Web Services DevCenter













Services and Links

by Jon Udell

I found a link in my weblog's referrers file last night that seemed emblematic of the current milieu. Here's the text of the link:

http://www.w3.org/2000/06/webdata/xslt?xslfile=http://ipwebdev.com/outliner_xsl.txt&xmlfile=http://weblog.infoworld.com/udell/rss.xml

Here's a clickable instance of the link, and here's a snapshot of its output:

  ▸ Jon's Radio
  ▸ First line trivia at AllConsuming.Net
  ▸ It's just freaking cool
  ▸ Scripting an interactive service intermediary
  ▸ When not to cooperate
  ▸ Tinkering with scripts and service lists
  ▸ A performance, expressed in text, data, and code
  ▸ Is it software?
  ▸ Build your own bookmarklet

As you'll discover by clicking the triangles, this is an active-outline version of my weblog's RSS feed, using Marc Barrot's activeRenderer technology.

What's going on with this link, and why is it so interesting? Let's decompose the link into its three constituent parts, each of which is a resource--or, we might say, a kind of Web service:

  1. http://www.w3.org/2000/06/webdata/xslt

    This is the W3C's XSLT transformation service. I believe it was Aaron Swartz who first drew my attention to it. You call it on the URL-line, and pass two arguments: a pointer to an XSLT script, and a pointer to an XML file. The output is the XSLT transformation of the XML file.

  2. http://ipwebdev.com/outliner_xsl.txt

    This is Marc Barrot's XSLT script for transforming an RSS file into an active outline. (Editor's note: This script is actually an adaptation of Marc's work done by Adam Wendt, cited originally at this URL: ipwebdev.com/radio/2002/06/07.php#a177.)

  3. http://weblog.infoworld.com/udell/rss.xml

    This is my weblog's RSS file.
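Composing such a link is just string assembly plus URL encoding. Here's a minimal sketch in JavaScript; the parameter names (xslfile, xmlfile) are the ones the link above passes to the W3C service.

```javascript
// Compose a call to the W3C XSLT transformation service by joining
// the service endpoint with pointers to an XSLT script and an XML file.
function xsltServiceUrl(xslfile, xmlfile) {
  const params = new URLSearchParams({ xslfile, xmlfile });
  return "http://www.w3.org/2000/06/webdata/xslt?" + params.toString();
}

const outlineUrl = xsltServiceUrl(
  "http://ipwebdev.com/outliner_xsl.txt",
  "http://weblog.infoworld.com/udell/rss.xml"
);
console.log(outlineUrl);
```

Note that URLSearchParams percent-encodes the embedded URLs, where the hand-typed link above leaves them bare; decoding the query string recovers the same three resources.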

I've written elsewhere about how a URL can be used to coordinate resources in order to produce a novel resource. This notion of coordination seems intuitively clear, and yet after years of exploration I have yet to fully unravel it.

The View Source principle

Clearly this URL-composition idiom is rooted in the classic Unix pipeline. The composite URL says: pipe the referenced XML data through the referenced filter, using the referenced transformation rules. The references, though, are global. Each is a URL in its own right, one that may be cited in an email message, blogged to a Web page, indexed by Google, and used to form other composite URLs. This arrangement has all sorts of interesting ramifications. Two seem especially noteworthy.

First, there's what I call the View Source principle. I've long believed that the browser's View Source function had much to do with the meteoric rise of the early Web. When you saw an effect that you liked, you could immediately reveal the code, correlate it with the observed effect, and clone it.

This behavior, argues Susan Blackmore in The Meme Machine, is uniquely human:

Imitation is what makes us special. When you imitate someone else, something is passed on. This 'something' can then be passed on again, and again, and so take on a life of its own. We might call this thing an idea, an instruction, a behavior, a piece of information...but if we are going to study it we shall need to give it a name. Fortunately, there is a name. It is the 'meme'.

It's clear that memes, when packaged as URLs, can easily propagate. Less obvious, but equally vital, is the way in which such packaging encourages imitation. My own first use of this technique imitated Aaron Swartz, and operated in a similar domain: production of an RSS feed. Marc Barrot's use of it went the other way, consuming an RSS feed to produce an active HTML outline. But over at NOAO, the National Optical Astronomy Observatories, it's been adapted to a very different purpose. Googling for the URL signature of the W3C's XSLT service, I found a link that transforms VOTable data produced by NOAO's SIA (Simple Image Access) service.

From astronomers, the technique could propagate to physicists, and thence almost anywhere, creating new (and imitatable) services along the way. Now in fact, as Google reveals, it hasn't propagated very widely. You have to be somewhat geek-inclined to form a new URL in this style, and much more so to whip up your own XSLT script. Assuming, of course, that you have a source of XML data in your domain, and some reason to transform it. Historically neither condition held true for most people, but the weblog/RSS movement is poised to change that.

Consider the source of the URL that prompted this column: I found it in my weblog's referrers file. Had I not already known about these things, clicking the link would have shown me:

  • that the W3C's XSLT transformation service exists,

  • that activeRenderer exists,

  • that the two can be yoked together to process my RSS feed's XML data into an active outline,

  • that http://ipwebdev.com/outliner_xsl.txt is an instructive XSLT script, available for reuse and imitation,

  • that the service which transforms my RSS feed into an active outline was deployed by merely posting a link,

  • and that I could consume the service--thereby offering an active outline to people visiting my blog--merely by posting another link.

Once this composite service and its constituents are discovered, they are easy to inspect and imitate. It's true that not many people can (or should!) become XSLT scripters. But lots of people can and do twiddle parameterized URL-driven services.

How do people discover these services? That leads to a second principle: the Web, in many ways, is already a good-enough directory of services.

The good-enough directory

Mine was among the heads that nodded sagely, in 1994, when the Internet's lack of an authoritative directory was said to be its Achilles' heel. Boy, were we wrong. I'm a huge fan of LDAP, and I think that UDDI may yet find its sweet spot, but a recent project to connect book web sites to local libraries reminded me that the Web already does a pretty good job of advertising its services.

My project, called LibraryLookup, sprang from the observation that ISBNs are an implicit link between various kinds of book-related services, and that a bookmarklet could make that link explicit. The immediate goal was to facilitate a lookup from Amazon, BN, isbn.nu, or All Consuming to my local library, whose OPAC (online public access catalog) supports URL-line-driven query by ISBN.
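The bookmarklet's logic is tiny: pull an ISBN out of the current page's URL, then jump to the library's OPAC query for that ISBN. Here's that logic as plain JavaScript functions, a sketch only; the OPAC base URL and query parameters are hypothetical placeholders, since each vendor's catalog has its own URL signature.

```javascript
// Pull a 10-digit ISBN out of a bookstore page URL. Amazon, BN,
// isbn.nu, and All Consuming all embed the ISBN in the URL itself,
// which is what makes the bookmarklet approach possible.
function extractIsbn(pageUrl) {
  const m = pageUrl.match(/\b(\d{9}[\dX])\b/);
  return m ? m[1] : null;
}

// Build the OPAC query. Both the base URL and the query parameter
// names are hypothetical stand-ins for a vendor-specific signature.
function opacQueryUrl(opacBase, isbn) {
  return opacBase + "?searchtype=i&searcharg=" + isbn;
}

const isbn = extractIsbn("http://www.amazon.com/exec/obidos/ASIN/0596002165/");
console.log(opacQueryUrl("http://opac.example.org/search", isbn));
```

In the real bookmarklet, the same logic is squeezed into a javascript: URL that reads location.href and assigns the resulting query URL back to location.href.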

I then realized that this bookmarklet was a kind of service--packaged as a link, and parameterizable by library and by OPAC. Extending the service to patrons of thousands of libraries was merely a matter of tracking down service entry points. The vendor of my own library's OPAC offered a list of nearly 900 other OPAC-enabled libraries on its web site, and it was easy to transform that list into a set of LibraryLookup bookmarklets. Then a librarian pointed me to a more complete and better-categorized list, which a bit of Perl turned into over 1000 bookmarklets for libraries around the world.
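Generating bookmarklets in bulk is one line of output per library, once you have the list of entry points. A sketch of that transformation, in JavaScript rather than Perl, with made-up library names and entry points:

```javascript
// Turn a list of OPAC entry points into LibraryLookup bookmarklets,
// one link per library. Names and entry points are made-up examples.
const libraries = [
  { name: "Example Town Library", base: "http://opac.example.org/search" },
  { name: "Example County Library", base: "http://catalog.example.net/find" }
];

// Each bookmarklet scans location.href for an ISBN, then jumps to
// the library's OPAC query for that ISBN.
function bookmarklet(base) {
  return "javascript:var m=location.href.match(/\\d{9}[\\dX]/);" +
         "if(m)location.href='" + base + "?isbn='+m[0];";
}

const links = libraries.map(
  (l) => '<a href="' + bookmarklet(l.base) + '">' + l.name + "</a>"
);
console.log(links.join("\n"));
```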

Like the OPAC vendor, the maintainer of the Libdex catalog thought of it as a list for human consumption, not programmatic use. There was no special effort to tag the service entry points. But being good webmasters, they instinctively followed a consistent pattern that was easy to mine. We can hope that, when more people realize how this kind of list is a programmatically-accessible directory, webmasters will be more likely to make modest investments in what Mark Pilgrim calls million-dollar markup.

We're lazy creatures, though. The semantic Web requires more effort than most people are likely to invest. Is there a lazy strategy that will take us where we need to go? Perhaps so. As the LibraryLookup project began to add support for other OPAC vendors, I experimented with a strategy I call "Googling for services." The idea is that in a RESTful world, services exposed as URLs will be indexed and can be searched for. Using this strategy, I was able to round up a number of epixtech and Data Research Associates OPACs by searching Google for their URL signatures.
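Googling for services reduces to pattern-matching harvested links against known URL signatures. A sketch of that classification step, with hypothetical signatures standing in for the vendor-specific ones:

```javascript
// Bucket harvested URLs by "URL signature" -- the path fragment
// characteristic of a given OPAC vendor. Patterns are hypothetical.
const signatures = [
  { vendor: "vendor-a", pattern: /\/opac\/isbnsearch/ },
  { vendor: "vendor-b", pattern: /\/catalog\/query\.cgi/ }
];

// Group matching URLs under the vendor whose signature they carry;
// URLs matching no signature are dropped.
function classify(urls) {
  const found = {};
  for (const url of urls) {
    for (const sig of signatures) {
      if (sig.pattern.test(url)) {
        (found[sig.vendor] = found[sig.vendor] || []).push(url);
      }
    }
  }
  return found;
}

const harvested = [
  "http://library1.example.edu/opac/isbnsearch?isbn=0596002165",
  "http://library2.example.org/catalog/query.cgi?isbn=0596002165",
  "http://unrelated.example.com/index.html"
];
console.log(classify(harvested));
```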

The experiment wasn't entirely successful, to be sure. These auto-discovered service lists are neither as complete nor as well-categorized as the lists maintained by Libdex. Of course, the links that Google found were never intended to advertise service entry points. Suppose more services were explicitly RESTful (as opposed to implicitly so, subject to reverse engineering of HTML forms). And suppose these RESTful services followed simple conventions, such as the use of descriptive HTML document titles. And suppose that the Google API were tweaked to return an exhaustive list of entry points matching a URL signature.

None of these hypotheticals requires a huge stretch of the imagination. The more difficult adjustment is to our notion of what directories are, and can be. In a paper entitled Web Services as Human Services, Greg FitzPatrick takes an expansive view:

We will exercise considerable breadth as to what we call a directory. Obviously the current UDDI specification was not designed with this sort of thing in mind, but it is perhaps in keeping with the vision of Web services as reaching beyond the known and adjacent world to the unknown and possible world, where hardwiring and business-as-usual are replaced by exploration and discovery.

Later, describing the conclusions reached by the SKICal (Structured Knowledge Initiative - Calendar) consortium, he writes:

The SKICal consortium came to accept that it was not its task to build another portal or directory over the resources of its member's domains, but rather to make better use of the universal directory already in existence--the Internet.

Exactly. There is no silver-bullet solution here, and formal directories will play an increasing role as the future of Web services unfolds. But service advertisement techniques such as UDDI are not likely to pass the View Source test anytime soon, and will not be easy for most people to imitate. What people can do, pretty easily, is post links. Services that express themselves in terms of links will, therefore, create powerful affordances for use, for imitation, and for discovery.

Jon Udell is an author, information architect, software developer, and new media innovator.

