Reclaiming the Digital Commons
In last
month’s issue, I looked at growing fears that the Internet is in the process
of being privatized by companies exploiting intellectual property laws to assert
ownership over the Web’s infrastructure and wall off more and more content.
Let’s now examine some of the initiatives aimed at protecting and reclaiming
the “digital commons.”
Today, there’s
considerable concern that aggressive use of intellectual property—most notably
copyright and patents—threatens to “enclose” the open Internet, privatizing it by stealth.
The fears are
twofold. First, critics argue, today’s overgenerous patent system will enable
opportunistic companies to appropriate the Web’s infrastructure and transform
the Internet’s free and open platform into a proprietary network. Second, ever
more powerful copyright laws, coupled with copy-protection tools and industry
consolidation, will enable a small group of media and information companies to
exert increasingly monopolistic control over digital content.
Not only will
these companies come to own a disproportionate amount of copyrighted material,
but through their ownership of proprietary search and navigation tools,
they’ll also be able to exert a semi-monopoly on access to public-domain data.
This will effectively privatize information that’s intended to be freely
available to all.
Not surprisingly,
growing awareness of these dangers has sparked a host of new initiatives that
are aimed at preserving the digital commons.
Free, Open, Available
The greatest
panics are usually associated with patents, particularly in cases where
companies appear to have been granted broad rights to basic Web functions,
features, or standards.
However, despite
frequent scares, there’s little hard evidence to suggest that patents are as
yet having a detrimental impact on the Web. In fact, there are grounds for
arguing that the costs of obtaining and enforcing patents are becoming so high
that disenchantment will eventually set in. Even the most conservative estimates
suggest that it now costs at least $20,000 per patent application to obtain
global protection.
Certainly,
there’s growing skepticism about the intrinsic value of patents. For instance,
when Amazon.com announced in March that it had filed for a Web advertising patent, many questioned whether the method it described had any real utility. Jim Nail, a Forrester Research ad analyst, told
ZDNet, “The question is, would anyone want to buy advertising that way?”
Large patent-rich
companies like IBM, Hewlett-Packard, and Xerox realized many years ago that
overzealous patenting can waste time and money. Therefore, in the 1950s, they
devised an alternative. Known as defensive publishing, or “making a technical
disclosure,” this system involves publishing details of inventions rather than
patenting them.
The logic is that
while blanket patenting is wasteful, failing to patent an innovation is risky. A
competitor could subsequently patent it and then demand licensing fees, or even
block you from a market that you had created. Once information about an
invention has been published, however, it constitutes “prior art” and
disqualifies the innovation from being patented.
To this end, a
number of specialist journals were created for publishing technical disclosures.
Today, there are also Web-based services like IP.com.
Ironically, in
introducing defensive publishing techniques, IBM and others created the first
organized system that encourages innovators to make conscious choices about
whether to seek proprietary rights for inventions or deliberately put the
information in the public domain. In effect, they must choose between enclosing
an innovation with intellectual property rights or releasing details of it into
the commons for everyone’s benefit.
True, these
companies didn’t consider what they were doing in this light. They
deliberately published in specialist journals with small circulations and have
continued to be aggressive users of the patent system. Nevertheless, the
subversive potential of defensive publishing is undeniable.
Certainly, it
attracted the attention of the Foresight Institute, a nonprofit educational
organization whose mission is to help prepare society for anticipated advanced
technologies. In 2001, the institute partnered with IP.com to create
priorart.org, a Web-based database for software and nanotechnology disclosures.
priorart.org was
intended to be a central resource where software developers could publish their
innovations, thereby limiting the number of software (and also Web) patents that
would be issued. At the time, Robin Gross, a staff attorney at the Electronic
Frontier Foundation, said to Salon.com: “[T]his is about using the law to make
technology free, open, and available.”
To the
institute’s disappointment, however, the software community reacted to
priorart.org with suspicion, and the site was subsequently closed. The fear was
that rather than expanding the commons, the service would become a honey pot
around which large corporations would gather, reviewing the disclosures and then patenting related innovations themselves.
Today, patent
refuseniks are more focused on pressuring standards bodies like the World Wide
Web Consortium not to allow patented technologies to be incorporated into Web
standards.
Preserving the Commons
More to the point,
perhaps, many have concluded that copyright, not patents, poses the greater
threat to the digital commons. Certainly, most new initiatives that are intended
to prevent enclosure are focused on copyright.
This isn’t
surprising. It costs just $30 to register a copyright. And although registration brings some benefits, it isn’t even necessary, since copyright arises automatically.
Moreover, compared with the 20-year monopoly provided by a patent, copyright now
extends for the lifetime of the creator, plus 70 years.
It has also become
apparent that most valuable content ends up not in the hands of individual
creators, but in the ownership of large media and information companies.
As these companies continuously merge with one another, they’re
creating vast warehouses of copyrighted material.
And as this
content is increasingly digitized, many fear that a potent combination of
draconian copyright laws and digital rights management (DRM) technologies will see
more and more of humanity’s heritage withheld from the public domain,
imprisoned indefinitely behind electronic padlocks.
Should librarians care? No, says Ron Simmer, who runs the PATSCAN patent search service at the University of British Columbia.
Simmer, however,
does not hold the majority opinion. Many librarians are concerned that current
developments will significantly reduce what they’re able to offer patrons.
“It used to be that the national library kept every document ever published in a country,” as one librarian puts it.
It’s no
surprise, then, that initiatives to preserve the public domain are proliferating
or that librarians are at the forefront of many of them.
Libraries around the world, for instance, are building online collections of public-domain works in order to guarantee continued free access to them.
And in the belief
that copyright laws are helping large publishers like Reed Elsevier privatize
publicly funded research (a charge denied by Reed Elsevier), many librarians are
working with academics and research institutes to create open archives with the
aim of “freeing the refereed literature.”
However,
self-archiving can only achieve a limited freedom, since the authors of the
archived papers generally still sign over copyright to publishers. If they want
to publish in high-impact journals, they may have little choice. But those
seeking something more radical are increasingly advocating “copyleft.”
Reclaiming the Commons
Invented in 1985
by Richard Stallman, founder of the Free Software Foundation, copyleft refers to
the use of alternative copyright licenses designed to ensure that works remain
freely available for anyone to utilize, even when modified.
The most widely
used copyleft license is the GNU General Public License (GPL). Developed by
Stallman for the free software movement, the GPL has subsequently been adopted
by many open source programmers too. Perhaps the best-known example of a GPL
program is the increasingly popular GNU/Linux operating system.
By using the GPL,
a programmer does not waive copyright but rather stipulates a different set of
usage rules. Thus, while the GPL allows anyone to use, modify, and redistribute
the software, this can only be done under certain conditions.
Importantly, it
requires that any modified or extended versions of the works are themselves also
distributed under the GPL. This is a subversive condition that some liken to a
virus “contaminating” other software code incorporated with it, thereby
forcing that code into the commons too. Stallman, it should be noted, objects to
the metaphor. “The GPL’s domain does not spread by proximity or contact,
only by deliberate inclusion of GPL-covered code in your program,” he said.
“It spreads like a spider plant, not like a virus.”
Whatever the
metaphor, the important point is that the GPL uses copyright laws not as tools
for exclusivity, but as a way of ensuring that software remains in the commons.
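In practice, applying the GPL to a program is straightforward: the developer ships a copy of the license with the code and adds a short notice to each source file. Here is a minimal sketch of such a notice attached to a trivial program (the author and the program itself are placeholders I’ve invented; the notice wording follows the standard form the Free Software Foundation recommends for version 2 of the GPL):

```python
#!/usr/bin/env python
# hello_commons.py -- a placeholder program illustrating a per-file GPL notice.
# Copyright (C) 2003 Jane Hacker (a hypothetical author)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

def main():
    # The code itself is incidental; it is the notice above that places
    # the file under the GPL's copyleft terms.
    print("This program is free as in freedom.")

if __name__ == "__main__":
    main()
```

Anyone who incorporates this file into a larger program and distributes the result must, under the terms above, release that program under the GPL as well. That is the “spider plant” effect Stallman describes.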
GPL-type licenses
have subsequently been developed for all sorts of other copyrightable material,
such as music, images, video, and text. They include the Open Audio License, the
GNU Free Documentation License, and the Design Science License.
The first copyleft license designed specifically for text was the Open Publication License (OPL), developed in 1999 by David Wiley, then an assistant professor at Utah State University.
In short, copyleft
licenses are becoming an important tool not only for preserving the commons, but
also for reclaiming them. The viral, or “spider plant,” nature of copyleft
enables it to colonize modified works and restore them to the commons.
Creating a New Commons
Unlike the land enclosures of pre-industrial England, the enclosure of the digital commons can be resisted not merely by defending what remains, but by creating new commons outright.
To help facilitate
this, last December an organization called the Creative Commons made a range of
open licenses available on the Web for anyone to use.
Designed to let
content be made freely available, Creative Commons licenses can stipulate a
number of qualifying conditions, including the requirement that credit is given
to the creator; that use is only permissible for noncommercial purposes; that only verbatim copying is allowed, not derivative works; or that modified works are only distributed on a “share-alike” basis.
The last
stipulation has the same viral characteristics as the GPL, since it requires
that any modified work can only be distributed under the same copyleft
principles as the original work.
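To make the modularity of these conditions concrete, here is a toy sketch of how they combine into a single license choice. The “by/nc/nd/sa” condition codes and the URL pattern mirror the ones Creative Commons publishes, but the version number and the helper function itself are my own illustrative assumptions:

```python
# A toy model of Creative Commons' modular license conditions. The condition
# codes and URL pattern mirror those Creative Commons uses; the version
# number ("1.0") and this helper itself are illustrative assumptions.

def cc_license_url(attribution=True, noncommercial=False,
                   no_derivatives=False, share_alike=False, version="1.0"):
    """Compose a Creative Commons license URL from the chosen conditions."""
    if no_derivatives and share_alike:
        # Share-alike governs derivative works, so it cannot be combined
        # with a condition that forbids derivatives altogether.
        raise ValueError("no-derivatives and share-alike are incompatible")
    codes = [code for flag, code in [(attribution, "by"),
                                     (noncommercial, "nc"),
                                     (no_derivatives, "nd"),
                                     (share_alike, "sa")] if flag]
    return f"http://creativecommons.org/licenses/{'-'.join(codes)}/{version}/"

# A license requiring credit, barring commercial use, and obliging anyone
# who modifies the work to share alike:
print(cc_license_url(noncommercial=True, share_alike=True))
# http://creativecommons.org/licenses/by-nc-sa/1.0/
```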
The Creative Commons licenses have been well-received, Creative Commons executive director Glenn Otis Brown told me when we spoke in February.
Among those using the licenses are musicians like Roger McGuinn (founder of ’60s band The Byrds) and U.K.-based sound artist Vicki Bennett.
“No one creates
as an island,” said Bennett, explaining why she has started using Creative
Commons licenses. “We have always used what came before us. If we stop making
available what we have, then there is no future for those that are inspired by
what we do. I want my work duplicated as many times as possible. Then it has
more chance of surviving.”
Thanks to the
availability of these new licenses, content creators are increasingly rejecting
enclosure and voting with their feet. As Brown puts it, “People are using
private contract law to simulate public benefits that the law is not
providing.”
Whether they can
make a living from doing so remains to be seen.
The Darknet Genie
Some believe that
the greatest threat to the digital commons comes from a combination of patented
technologies and copyrighted content. As we saw, for instance, librarians are
concerned about the ability of large content providers to appropriate
public-domain data by monopolizing access to it.
Thus, even where
content is theoretically in the public domain, it may only be available—or at
least readily accessible—via the patented search tools, copyrighted metadata,
and proprietary databases of large information companies like Reed Elsevier and
Thomson Corp. Not surprisingly, both companies have patents or patent
applications related to various types of information classification, thesauri,
and natural language retrieval technologies.
Making content
available in the public domain, therefore, may not be sufficient. Nonproprietary
search-and-retrieval tools will also be required. For this reason, open-content
encyclopedia Wikipedia (http://www.wikipedia.org) is a likely model for the
future. Wikipedia is being written by volunteers who freely contribute articles
licensed under the GNU Free Documentation License. In addition, the
search-and-retrieval tools used for accessing the content are licensed under the
GPL. The aim is to ensure that both the technology and content remain
permanently free of proprietary interests.
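The practical upshot is that anyone can retrieve and reuse Wikipedia’s content with nothing more than general-purpose tools. A minimal sketch in Python (the article-URL pattern is an assumption based on the site’s public layout, and a User-Agent header is included since some servers reject anonymous scripts):

```python
# Fetch the raw HTML of a Wikipedia article using only the standard library.
# The URL pattern below is an assumption based on Wikipedia's public site
# layout; no proprietary client or licensed database is required.
from urllib.parse import quote
from urllib.request import Request, urlopen

def fetch_article(title):
    """Return the HTML of an English Wikipedia article, given its title."""
    url = "https://en.wikipedia.org/wiki/" + quote(title.replace(" ", "_"))
    request = Request(url, headers={"User-Agent": "commons-example/0.1"})
    with urlopen(request) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    html = fetch_article("Public domain")
    print(html[:200])  # show the first few hundred characters
```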
Some believe that
commercial content providers will themselves have to adopt open publishing
models. Advocates of this view argue that DRM is simply not capable of
withstanding the ability of hackers to break its electronic padlocks and
“liberate” the enclosed content. This content can then be freely exchanged
via peer-to-peer services like Grokster and Kazaa.
Intriguingly, a
paper written by several Microsoft researchers last year comes to the same
conclusion. The authors say that the “darknet” (“a collection of networks
and technologies such as peer-to-peer networks used to share digital content”)
will make the benefits of DRM technology moot (http://www.bearcave.com/misl/misl_tech/msdrm/darknet.htm).
“We speculate that there will be short-term impediments to the effectiveness
of the darknet as a distribution mechanism, but ultimately the darknet genie
will not be put back into the bottle.”
Rolling with the Punches
The paper also
points out that DRM techniques raise significant usability issues. “[A]lthough
the industry is striving for flexible licensing rules, customers will
be restricted in their actions if the system is to provide meaningful security.
This means that a vendor will probably make more money by selling unprotected
objects than protected objects.”
If this is
correct, it suggests that the rules may have changed forever and commercial
providers will have to roll with the punches. Indeed, some already are. In
January, for instance, technical book publisher Prentice Hall announced plans to
release a new series of books under the OPL.
Since the books
are about open source software, it could be argued that Prentice Hall is merely
meeting the expectations of a particular market. Nevertheless, in using the OPL,
the company will forego many of the perceived benefits of copyright.
Anyone will be free to copy and redistribute the books and,
while regular print versions will be sold through bookstores, free
electronic copies will be available over the Web.
The point to bear
in mind, says Mark Taub, editor in chief at Prentice Hall PTR, is that adopting
open licensing does not mean abandoning commercial models. Taub anticipates that
offering free electronic versions of the books will increase print sales since
customers will be able to sample them first. “In 99.99 percent of cases,
consumers still want their books in print. So we are not acting in a purely
altruistic fashion here. We think it is good business.”
It’s also
noteworthy that although more than 70,000 free electronic copies of science fiction writer Cory Doctorow’s novel Down and Out in the Magic Kingdom were
downloaded in the first few weeks after it was released online, print sales have
not been affected. Earlier in the year, the book was number 19 on the Amazon.com
science fiction bestseller list. “This is enormous reach for a first novel,”
boasts Doctorow.
But what does all
this mean for the future of the traditional online information industry? Right
now, we don’t know. All we can say
is that the crisis currently engulfing the music industry suggests that blindly
clinging to traditional business models in the digital age is a high-risk
strategy.
For information
consumers, the future is similarly uncertain. Those inclined to take a sanguine
view of the matter, however, may want to review the dystopian picture painted by
Stallman in the February 1997 issue of Communications
of the ACM (http://www.gnu.org/philosophy/right-to-read.html).