The Ethic of the Essential
Monday, March 5, 2007, 08:09 PM - Theory, Copyfight
In The Curtain, Milan Kundera mourns the death of the novel at the hands of what he calls the ethic of the archive: the collector's fever that lumps together everything an established genius ever produced with what the author himself intended for publication, the finished artwork.
(...) "the work" is what the writer will approve in his own final assessment. For life is short, reading is long, and literature is in the process of killing itself off through an insane proliferation. Every novelist, starting with his own work, should eliminate whatever is secondary, lay out for himself and for everyone else the ethic of the essential.
But it is not only the writers, the hundreds and thousands of writers; there are also the researchers, the armies of researchers who, guided by some opposite ethic, accumulate everything they can find to embrace the Whole, a supreme goal. The Whole, which includes a mountain of drafts, deleted paragraphs, chapters rejected by the author but published by researchers, in what are called "critical editions", under the perfidious title "variants", which means, if words still have meaning, that anything the author wrote is worth as much as anything else, that it would be similarly approved by him.
The ethic of the essential has given way to the ethic of the archive. (The archive's ideal: the sweet equality that reigns in an enormous common grave.)
Alongside his complaint, Kundera comments on what belongs to the artist and on the concept of natural ownership.
Let us remember: before Cervantes had completed the second volume of his novel, another writer, still unknown, preceded him by publishing, under a pseudonym, his own sequel to the adventures of Don Quixote. Cervantes reacted at the time the way a novelist would react today: with rage. He attacked the plagiarist violently and proudly proclaimed, "Don Quixote was born for me alone, and I for him. He knew about action, I about writing. He and I are simply one single entity."
Since Cervantes, this has been the primary, fundamental mark of a novel: it is a unique, inimitable creation, inseparable from the imagination of a single author. Before he was written, no one could have imagined a Don Quixote; he was the unexpected itself, and, without the charm of the unexpected, no great novel character (and no great novel) would ever be conceivable again.
The birth of the art of the novel was linked to the consciousness of an author's rights and to their fierce defence. The novelist is the sole master of his work; he is his work. It was not always thus, and it will not always be thus. But when that day comes, then the art of the novel, Cervantes's legacy, will cease to exist.
Read the entire essay at The Guardian.
Patenting Life
Saturday, March 3, 2007, 12:12 AM - Theory, Copyfight
Patenting Life: Commodification, the Patent Regime, and the Public Interest
This paper will look at bioinformatics and the various ethical issues raised by the patenting of life forms. Debates over life patents, specifically the 2002 Canadian Supreme Court decision which ruled against patenting the OncoMouse®, highlight how technological discourses on science and technology are inextricably integrated with prevailing economic discourses. The paper will also critique the creation of a patent regime which has privatized public knowledge and resources, its institutionalization through the World Trade Organization's TRIPS (the Agreement on Trade-Related Aspects of Intellectual Property Rights), and the tensions inherent when power is vested in the hands of corporations and public goods become transmogrified into private commodities.
Download the paper (PDF)
Inspired by the excellent CAUT conference on Controlling Intellectual Property - The Academic Community and the Future of Knowledge.
Can History be Open Source?
Wednesday, February 21, 2007, 06:21 AM - Theory, Copyfight, Media
WIKIPEDIA AND THE FUTURE OF THE PAST
by Roy Rosenzweig, originally published in The Journal of American History, Volume 93, Number 1 (June 2006)
History is a deeply individualistic craft. The singly authored work is the standard for the profession; only about 6 percent of the more than 32,000 scholarly works indexed since 2000 in this journal's comprehensive bibliographic guide, "Recent Scholarship," have more than one author. Works with several authors—common in the sciences—are even harder to find. Fewer than 500 (less than 2 percent) have three or more authors.
Historical scholarship is also characterized by possessive individualism. Good professional practice (and avoiding charges of plagiarism) requires us to attribute ideas and words to specific historians—we are taught to speak of "Richard Hofstadter's status anxiety interpretation of Progressivism." And if we use more than a limited number of words from Hofstadter, we need to send a check to his estate. To mingle Hofstadter's prose with your own and publish it would violate both copyright and professional norms.
A historical work without owners and with multiple, anonymous authors is thus almost unimaginable in our professional culture. Yet, quite remarkably, that describes the online encyclopedia known as Wikipedia, which contains 3 million articles (1 million of them in English). History is probably the category encompassing the largest number of articles. Wikipedia is entirely free. And that freedom includes not just the ability of anyone to read it (a freedom denied by the scholarly journals in, say, jstor, which requires an expensive institutional subscription) but also—more remarkably—their freedom to use it. You can take Wikipedia’s entry on Franklin D. Roosevelt and put it on your own Web site, you can hand out copies to your students, and you can publish it in a book—all with only one restriction: You may not impose any more restrictions on subsequent readers and users than have been imposed on you. And it has no authors in any conventional sense. Tens of thousands of people—who have not gotten even the glory of affixing their names to it—have written it collaboratively. The Roosevelt entry, for example, emerged over four years as five hundred authors made about one thousand edits. This extraordinary freedom and cooperation make Wikipedia the most important application of the principles of the free and open-source software movement to the world of cultural, rather than software, production.
Despite, or perhaps because of, this open-source mode of production and distribution, Wikipedia has become astonishingly widely read and cited. More than a million people a day visit the Wikipedia site. The Alexa traffic rankings put it at number 18, well above the New York Times (50), the Library of Congress (1,175), and the venerable Encyclopedia Britannica (2,952). In a few short years, it has become perhaps the largest work of online historical writing, the most widely read work of digital history, and the most important free historical resource on the World Wide Web. It has received gushing praise (“one of the most fascinating developments of the Digital Age”; an “incredible example of open-source intellectual collaboration”) as well as sharp criticism (a “faith-based encyclopedia” and “a joke at best”). And it is almost entirely a volunteer effort; as of September 2005, it had two full-time employees. It is surely a phenomenon to which professional historians should attend.
To that end, this article seeks to answer some basic questions about history on Wikipedia. How did it develop? How does it work? How good is the historical writing? What are the potential implications for our practice as scholars, teachers, and purveyors of the past to the general public?
Keep reading Can History be Open Source?
Obscurity is worse than piracy (an o'reilly classic)
Monday, February 19, 2007, 04:48 AM - Theory, Copyfight
PIRACY IS PROGRESSIVE TAXATION
and Other Thoughts on the Evolution of Online Distribution
by Tim O'Reilly 12/11/2002
The continuing controversy over online file sharing sparks me to offer a few thoughts as an author and publisher. To be sure, I write and publish neither movies nor music, but books. But I think that some of the lessons of my experience still apply.
Lesson 1: Obscurity is a far greater threat to authors and creative artists than piracy.
Let me start with book publishing. More than 100,000 books are published each year, with several million books in print, yet fewer than 10,000 of those new books have any significant sales, and only a hundred thousand or so of all the books in print are carried in even the largest stores. Most books have a few months on the shelves of the major chains, and then wait in the darkness of warehouses from which they will move only to the recycling bin. Authors think that getting a publisher will be the realization of their dreams, but for so many, it's just the start of a long disappointment.
Sites like Amazon that create a virtual storefront for all the books in print cast a ray of light into the gloom of those warehouses, and so books that would otherwise have no outlet at all can be discovered and bought. Authors who are fortunate enough to get the rights to their book back from the publisher often put them up freely online, in hopes of finding readers. The web has been a boon for readers, since it makes it easier to spread book recommendations and to purchase the books once you hear about them. But even then, few books survive their first year or two in print. Empty the warehouses and you couldn't give many of them away.
Many works linger in deserved obscurity, but so many more suffer simply from the vast differential between supply and demand.
I don't know the exact size of the entire CD catalog, but I imagine that it is similar in scope. Tens of thousands of musicians self-publish their own CDs; a happy few get a recording contract. Of those, fewer still have their records sell in appreciable numbers. The deep backlist of music publishers is lost to consumers because the music just isn't available in stores.
There are fewer films, to be sure, because of the cost of film making, but even there, obscurity is a constant enemy. Thousands of independent film makers are desperate for distribution. A few independent films, like Denmark's Dogme films, get visibility. But for most, visibility is limited to occasional showings at local film festivals. The rise of digital video also promises that film making will soon be as much a garage opportunity as starting a rock band, and as much of a garret opportunity as the great American novel.
Lesson 2: Piracy is progressive taxation
For all of these creative artists, most laboring in obscurity, being well-enough known to be pirated would be a crowning achievement. Piracy is a kind of progressive taxation, which may shave a few percentage points off the sales of well-known artists (and I say "may" because even that point is not proven), in exchange for massive benefits to the far greater number for whom exposure may lead to increased revenues.
Our current distribution systems for books, music, and movies are skewed heavily in favor of the "haves" against the "have nots." A few high-profile products receive the bulk of the promotional budget and are distributed in large quantities; the majority depend, in the words of Tennessee Williams' character Blanche DuBois, "on the kindness of strangers."
Lowering the barriers to entry in distribution, and the continuous availability of the entire catalog rather than just the most popular works, is good for artists, since it gives them a chance to build their own reputation and visibility, working with entrepreneurs of the new medium who will be the publishers and distributors of tomorrow.
I have watched my 19-year-old daughter and her friends sample countless bands on Napster and Kazaa and, enthusiastic for their music, go out to purchase CDs. My daughter now owns more CDs than I have collected in a lifetime of less exploratory listening. What's more, she has introduced me to her favorite music, and I too have bought CDs as a result. And no, she isn't downloading Britney Spears, but forgotten bands from the 60s, 70s, 80s, and 90s, as well as their musical forebears in other genres. This is music that is difficult to find -- except online -- but, once found, leads to a focused search for CDs, records, and other artifacts. eBay is doing a nice business with much of this material, even if the RIAA fails to see the opportunity.
Lesson 3: Customers want to do the right thing, if they can.
Piracy is a loaded word, which we used to reserve for wholesale copying and resale of illegitimate product. The music and film industry usage, applying it to peer-to-peer file sharing, is a disservice to honest discussion.
Online file sharing is the work of enthusiasts who are trading their music because there is no legitimate alternative. Piracy is an illegal commercial activity that is typically a substantial problem only in countries without strong enforcement of existing copyright law.
At O'Reilly, we publish many of our books in online form. There are people who take advantage of that fact to redistribute unpaid copies. (The biggest problem, incidentally, is not on file sharing networks, but from copies of our CD Bookshelf product line being put up on public Web servers, or copied wholesale and offered for sale on eBay.) While these pirated copies are annoying, they hardly destroy our business. We've found little or no abatement of sales of printed books that are also available for sale online.
What's more, many of those who do infringe respond to little more than a polite letter asking them to take the materials down. Those servers that ignore our requests are typically in countries where the books are not available for sale or are far too expensive for local consumers to buy.
What's even more interesting, though, is that our enforcement activities are customer-driven. We receive thousands of emails from customers letting us know about infringing copies and sites. Why? They value our company and our authors, and they want to see our work continue. They know that there is a legitimate way to pay for online access--our Safari Books Online subscription service (safari.oreilly.com) can be had for as little as $9.95 a month--and accordingly recognize free copies as illegitimate.
A similar data point comes from Jon Schull, the former CTO of Softlock, the company that worked with Stephen King on his eBook experiment, "Riding the Bullet". Softlock, which used a strong DRM scheme, was relying on "superdistribution" to reduce the costs of hosting the content--the idea that customers would redistribute their copies to friends, who would then simply need to download a key to unlock said copy. But most of the copies were downloaded anyway and very few were passed along. Softlock ran a customer survey to find out why there was so little "pass-along" activity. The answer, surprisingly, was that customers didn't understand that redistribution was desired. They didn't do it because they "thought it was wrong."
The simplest way to get customers to stop trading illicit digital copies of music and movies is to give those customers a legitimate alternative, at a fair price.
Lesson 4: Shoplifting is a bigger threat than piracy.
While few of the people putting books on public web servers seek to profit from the activity, those who are putting up CDs for sale on eBay containing PDF or HTML copies of dozens of books are in fact practicing piracy--organized copying of content for resale.
But even so, we see no need for stronger copyright laws, or strong Digital Rights Management software, because existing law allows us to prosecute the few deliberate pirates.
We don't have a substantial piracy problem in the US and Europe. The fact that Microsoft's software products have been available for years on warez sites (and now on file trading networks) has not kept the company from becoming one of the world's largest and most successful. Estimates of "lost" revenue assume that illicit copies would have been paid for; meanwhile, there is no credit on the other side of the ledger for copies that are sold because of "upgrades" from familiarity bred by illicit copies.
What we have is a problem that is analogous, at best, to shoplifting, an annoying cost of doing business.
And overall, as a book publisher who also makes many of our books available in electronic form, we rate the piracy problem as somewhere below shoplifting as a tax on our revenues. Consistent with my observation that obscurity is a greater danger than piracy, shoplifting of a single copy can lead to lost sales of many more. If a bookstore has only one copy of your book, or a music store one copy of your CD, a shoplifted copy essentially makes it disappear from the next potential buyer's field of possibility. Because the store's inventory control system says the product hasn't been sold, it may not be reordered for weeks or months, perhaps not at all.
I have many times asked a bookstore why they didn't have copies of one of my books, only to be told, after a quick look at the inventory control system: "But we do. It says we still have one copy in stock, and it hasn't sold in months, so we see no need to reorder." It takes some prodding to force the point that perhaps it hasn't sold because it is no longer on the shelf.
Because an online copy is never out of stock, we at least have a chance at a sale, rather than being subject to the enormous inefficiencies and arbitrary choke points in the distribution system.
Lesson 5: File sharing networks don't threaten book, music, or film publishing. They threaten existing publishers.
The music and film industries like to suggest that file sharing networks will destroy their industries.
Those who make this argument completely fail to understand the nature of publishing. Publishing is not a role that will be undone by any new technology, since its existence is mandated by mathematics. Millions of buyers and millions of sellers cannot find one another without one or more middlemen who, like a kind of step-down transformer, segment the market into more manageable pieces. In fact, there is usually a rich ecology of middlemen. Publishers aggregate authors for retailers. Retailers aggregate customers for publishers. Wholesalers aggregate small publishers for retailers and small retailers for publishers. Specialty distributors find ways into non-standard channels.
Those of us who watched the rise of the Web as a new medium for publishing have seen this ecology evolve within less than a decade. In the Web's early days, rhetoric claimed that we faced an age of disintermediation, that everyone could be his or her own publisher. But before long, individual web site owners were paying others to help them increase their visibility in Yahoo!, Google, and other search engines (the equivalent of Barnes & Noble and Borders for the Web), and Web authors were happily writing for sites like AOL and MSN, or on the technology side, Cnet, Slashdot, O'Reilly Network, and other Web publishers. Meanwhile, authors from Matt Drudge to Dave Winer and Cory Doctorow made their names by publishing for the new medium.
As Jared Diamond points out in his book Guns, Germs, and Steel, mathematics is behind the rise of all complex social organization.
There is nothing in technology that changes the fundamental dynamic by which millions of potentially fungible products reach millions of potential consumers. The means by which aggregation and selection are made may change with technology, but the need for aggregation and selection will not. Google's use of implicit peer recommendation in its page rankings plays much the same role as the large retailers' use of detailed sell-through data to help them select their offerings.
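The "implicit peer recommendation" O'Reilly mentions can be sketched in a few lines. This is a hedged, minimal toy version of the power-iteration idea behind PageRank, with a hypothetical three-page graph invented for illustration; real search ranking involves far more signals.

```python
# Minimal sketch of link-based "implicit peer recommendation" (PageRank-style).
# The graph and page names below are hypothetical examples, not real data.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A dangling page spreads its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share   # each link is a "vote"
        rank = new_rank
    return rank

# Pages A and C both link to B, so B accumulates the most rank.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # B
```

The point mirrors O'Reilly's: the links other people have already made play the same selecting role that a retailer's sell-through data plays in a physical store.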
The question before us is not whether technologies such as peer-to-peer file sharing will undermine the role of the creative artist or the publisher, but how creative artists can leverage new technologies to increase the visibility of their work. For publishers, the question is whether they will understand how to perform their role in the new medium before someone else does. Publishing is an ecological niche; new publishers will rush in to fill it if the old ones fail to do so.
If we take the discussion back to first principles, we understand that publishing isn't just about physical aggregation of product but also requires an intangible aggregation and management of "reputation." People go to Google or Yahoo!, Barnes & Noble or Borders, HMV, or MediaPlay, because they believe that they will find what they want there. And they seek out particular publishers, like Knopf or O'Reilly, because we have built a track-record of trust in our ability to find interesting topics and skilled authors.
Now, let's take this discussion over to music file sharing. How do people find songs on Kazaa or any of the other post-Napster file sharing services? First, they may be looking for a song they already know. But such searches for a known artist or song title are fundamentally self-limiting, since they depend on the marketing of a "name space" (artist/song pairs) that is extrinsic to the file sharing service. To truly supplant the existing music distribution system, any replacement must develop its own mechanisms for marketing and recommendation of new music.
And in fact, we already see those mechanisms emerging. File sharing services rely heavily on that most effective of marketing techniques: word of mouth. But over time, anyone who has studied the evolution of previous media will see that searches based on either pre-existing knowledge or word of mouth represent only the low-hanging fruit. As the market matures, paid marketing is added, and step by step, we build up the same rich ecology of middlemen that characterizes existing media marketplaces.
New media have historically not replaced but rather augmented and expanded existing media marketplaces, at least in the short term. Opportunities exist to arbitrage between the new distribution medium and the old, as, for instance, the rise of file sharing networks has helped to fuel the trading of records and CDs (unavailable through normal recording industry channels) on eBay.
Over time, it may be that online music publishing services will replace CDs and other physical distribution media, much as recorded music relegated sheet music publishers to a niche and, for many, made household pianos a nostalgic affectation rather than the home entertainment center. But the role of the artist and the music publisher will remain. The question then, is not the death of book publishing, music publishing, or film production, but rather one of who will be the publishers.
Lesson 6: "Free" is eventually replaced by a higher-quality paid service
A question for my readers: How many of you still get your email via peer-to-peer UUCP dialups or the old "free" Internet, and how many of you pay $19.95 a month or more to an ISP? How many of you watch "free" television over the airwaves, and how many of you pay $20-$60 a month for cable or satellite television? (Not to mention continuing to rent movies on videotape and DVD, and purchasing physical copies of your favorites.)
Services like Kazaa flourish in the absence of competitive alternatives. I confidently predict that once the music industry provides a service that provides access to all the same songs, freedom from onerous copy-restriction, more accurate metadata and other added value, there will be hundreds of millions of paying subscribers. That is, unless they wait too long, in which case, Kazaa itself will start to offer (and charge for) these advantages. (Or would, in the absence of legal challenges.) Much as AOL, MSN, Yahoo!, Cnet, and many others have collectively built a multi-billion dollar media business on the "free" web, "publishers" will evolve on file sharing networks.
Why would you pay for a song that you could get for free? For the same reason that you will buy a book that you could borrow from the public library or buy a DVD of a movie that you could watch on television or rent for the weekend. Convenience, ease-of-use, selection, ability to find what you want, and for enthusiasts, the sheer pleasure of owning something you treasure.
The current experience of online file sharing services is mediocre at best. Students and others with time on their hands may find them adequate. But they leave much to be desired, with redundant copies of uneven quality, intermittent availability of some works, incorrect identification of artist or song, and many other quality problems.
Opponents may argue that the Web demonstrates precisely what they are afraid of, that content on the Web is "free", that advertising is an insufficient revenue model for content providers, and that subscription models have not been successful. However, I will argue that the story is still unfinished.
Subscription sites are on the rise. Computer industry professionals can be seen as the "early adopters" in this market. For example, O'Reilly's Safari Books Online is growing at 30 percent a month, and now represents a multi-million dollar revenue stream for us and other participating publishers.
Most observers also seem to miss the point that the internet is already sold as a subscription service. All we're working on is the development of added-value premium services. What's more, there are already a few vertically-integrated ISPs (notably AOL Time Warner) that provide "basic" connectivity but own vast libraries of premium content.
In looking at online content subscription services, analogies with television are instructive. Free, advertiser-supported television has largely been supplanted--or should I say supplemented (because the advertising remains)--by paid subscriptions to cable TV. What's more, revenue from "basic cable" has been supplemented by various aggregated premium channels. HBO, one of those channels, is now television's most profitable network. Meanwhile, over on the internet, people pay their ISP $19.95/month for the equivalent of "basic cable", and an ideal opportunity for a premium channel, a music download service, has gone begging for lack of vision on the part of existing music publishers.
Another lesson from television is that people prefer subscriptions to pay-per-view, except for very special events. What's more, they prefer subscriptions to larger collections of content, rather than single channels. So, people subscribe to "the movie package," "the sports package" and so on. The recording industry's "per song" trial balloons may work, but I predict that in the long term, an "all-you-can-eat" monthly subscription service (perhaps segmented by musical genre) will prevail in the marketplace.
Lesson 7: There's more than one way to do it.
A study of other media marketplaces shows, though, that there is no single silver-bullet solution. A smart company maximizes revenue through all its channels, realizing that its real opportunity comes when it serves the customer who ultimately pays its bills.
At O'Reilly, we've been experimenting with online distribution of our books for years. We know that we must offer a compelling online alternative before someone else does. As the Hawaiian proverb says, "No one promised us tomorrow." Competition with free alternatives forces us to explore new distribution media and new forms of publishing.
In addition to the Safari subscription service mentioned above, we publish an extensive network of advertising-supported "free" information sites as the O'Reilly Network (www.oreillynet.com). We have published a number of books under "open publication licenses" where free redistribution is explicitly allowed (oreilly.com/openbook). We do this for several reasons: to build awareness of products that might otherwise be ignored, to build brand loyalty among online communities, or, sometimes, because a product can no longer be economically sold in traditional channels, and we'd rather make it available for free than have it completely disappear from the market.
We have also published many of our books on CD ROM, in a format referred to as the CD Bookshelf, typically a collection of a half dozen or so related books.
And of course, we continue to publish print books. The availability of free online copies is sometimes used to promote a topic or author (books such as The Cathedral and the Bazaar and The Cluetrain Manifesto became bestsellers in print as a result of the wide exposure they received online). We make available substantial portions of all of our books online, as a way for potential readers to sample what they contain. We've even found ways to integrate our books into the online help system for software products, including Dreamweaver and Microsoft's Visual Studio.
Interestingly, some of our most successful print/online hybrids have come about where we present the same material in different ways for the print and online contexts. For example, much of the content of our bestselling book Programming Perl (more than 600,000 copies in print) is available online as part of the standard Perl documentation. But the entire package--not to mention the convenience of a paper copy, and the aesthetic pleasure of the strongly branded packaging--is only available in print. Multiple ways to present the same information and the same product increase the overall size and richness of the market.
And that's the ultimate lesson. "Give the wookie what he wants!" as Han Solo said so memorably in the first Star Wars movie. Give it to him in as many ways as you can find, at a fair price, and let him choose which works best for him.
The Economics of Programming Languages
Friday, May 19, 2006, 10:01 PM - Theory
David N. Welton proposes "the most salient points of the economics of programming languages, and describes their effects on existing languages, as well as on those who desire to write and introduce new languages."
Programming languages, like any product, have certain properties. Obviously, like any other sort of information good, production costs in the sense of making copies are essentially zero. Research and development (sunk costs) are needed to create the software itself, which means that an initial investment is required, and if the language is not successful, chances are the investment can't be recouped. This applies to many information goods, but programming languages also have some qualities that make them special within this grouping. Namely, they are both a means of directing computers and their peripherals to do useful work and a means of exchanging ideas and algorithms for doing that work between people. In other words, languages go beyond simply being something that's useful; they are also a means of communication. Furthermore, in the form of collections of code such as packages, modules, or libraries, programming languages are also a way to exchange useful routines that can be recombined in novel ways by other programmers, instead of simply exchanging finished applications.
Read the article
the problem with the turing test
Friday, March 3, 2006, 04:32 PM - Theory, Robots
In The New Atlantis, Mark Halpern points out a bug in Turing's test: humans don't judge the intelligence of other humans by their responses to questions but by their appearance.
In the October 1950 issue of the British quarterly Mind, Alan Turing published a 28-page paper titled “Computing Machinery and Intelligence.” It was recognized almost instantly as a landmark. In 1956, less than six years after its publication in a small periodical read almost exclusively by academic philosophers, it was reprinted in The World of Mathematics, an anthology of writings on the classic problems and themes of mathematics and logic, most of them written by the greatest mathematicians and logicians of all time. (In an act that presaged much of the confusion that followed regarding what Turing really said, James Newman, editor of the anthology, silently re-titled the paper “Can a Machine Think?”) Since then, it has become one of the most reprinted, cited, quoted, misquoted, paraphrased, alluded to, and generally referenced philosophical papers ever published. It has influenced a wide range of intellectual disciplines—artificial intelligence (AI), robotics, epistemology, philosophy of mind—and helped shape public understanding, such as it is, of the limits and possibilities of non-human, man-made, artificial “intelligence.”
Turing’s paper claimed that suitably programmed digital computers would be generally accepted as thinking by around the year 2000, achieving that status by successfully responding to human questions in a human-like way. In preparing his readers to accept this idea, he explained what a digital computer is, presenting it as a special case of the “discrete state machine”; he offered a capsule explanation of what “programming” such a machine means; and he refuted—at least to his own satisfaction—nine arguments against his thesis that such a machine could be said to think. (All this groundwork was needed in 1950, when few people had even heard of computers.) But these sections of his paper are not what has made it so historically significant. The part that has seized our imagination, to the point where thousands who have never seen the paper nevertheless clearly remember it, is Turing’s proposed test for determining whether a computer is thinking—an experiment he calls the Imitation Game, but which is now known as the Turing Test.
READ the rest of the article by Mark Halpern.
via robots.net
humanoid from here
Turing's cathedral
Monday, February 27, 2006, 02:42 AM - Beautiful Code, Theory
The digital universe was conceived by Old Testament prophets (led by Leibniz) who supplied the logic, and delivered by New Testament prophets (led by von Neumann) who supplied the machines. Alan Turing (1912-1954) formed the bridge between the two.
In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into "DO THIS with what you find HERE and go THERE with the result." Everything depends not only on precise instructions, but on HERE, THERE, and WHEN being exactly defined. It is almost incomprehensible that programs amounting to millions of lines of code, written by teams of hundreds of people, are able to go out into the computational universe and function as well as they do given that one bit in the wrong place (or the wrong time) can bring the process to a halt.
Biology has taken a completely different approach. There is no von Neumann address matrix, just a molecular soup, and the instructions say simply "DO THIS with the next copy of THAT which comes along." The results are far more robust. There is no unforgiving central address authority, and no unforgiving central clock. This ability to take general, organized advantage of local, haphazard processes is exactly the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers.
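Dyson's COMMAND (ADDRESS) picture can be made concrete with a toy machine. The opcodes and the three-line program below are invented for illustration, but the fragility they demonstrate is exactly the point of the passage: everything depends on HERE and THERE being exactly right.

```python
# A toy von Neumann machine: each instruction is COMMAND (ADDRESS),
# i.e. "DO THIS with what you find HERE and go THERE with the result."
# The opcodes and the sample program are invented for illustration.
def run(program, memory):
    pc = 0                   # program counter: WHERE we are in the program
    acc = 0                  # accumulator
    while pc < len(program):
        op, addr = program[pc]
        if op == "LOAD":     # acc <- memory[HERE]
            acc = memory[addr]
        elif op == "ADD":    # acc <- acc + memory[HERE]
            acc += memory[addr]
        elif op == "STORE":  # memory[THERE] <- acc
            memory[addr] = acc
        pc += 1
    return memory

mem = [7, 5, 0]
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])  # 7 + 5 stored at address 2 -> 12
```

Change a single address, say STORE to 0 instead of 2, and the program silently overwrites its own input: one bit in the wrong place, and the computation derails.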
READ Turing's Cathedral by George Dyson.
