The Economics of Programming Languages
Friday, May 19, 2006, 10:01 PM - Theory
David N. Welton presents <i>the most salient points of the economics of programming languages, and describes their effects on existing languages, as well as on those who desire to write and introduce new languages</i>.
Programming languages, like any product, have certain properties. Obviously, like any other sort of information good, production costs in the sense of making copies are essentially zero. Research and development (sunk costs) are needed to create the software itself, which means that an initial investment is required, and if the language is not successful, chances are the investment can't be recouped. This applies to many information goods, but programming languages also have some qualities that make them special within this grouping. Namely, that they are both a means of directing computers and their peripherals to do useful work, but they are also a means of exchanging ideas and algorithms for doing that work between people. In other words, languages go beyond simply being something that's useful; they are also a means of communication. Furthermore, in the form of collections of code such as packages, modules or libraries, programming languages are also a way to exchange useful routines that can be recombined in novel ways by other programmers, instead of simply exchanging finished applications.
Read the article
the problem with the turing test
Friday, March 3, 2006, 04:32 PM - Theory, Robots
In The New Atlantis, Mark Halpern points out a bug in Turing's test: humans don't judge the intelligence of other humans by their responses to questions but by their appearance.
In the October 1950 issue of the British quarterly Mind, Alan Turing published a 28-page paper titled “Computing Machinery and Intelligence.” It was recognized almost instantly as a landmark. In 1956, less than six years after its publication in a small periodical read almost exclusively by academic philosophers, it was reprinted in The World of Mathematics, an anthology of writings on the classic problems and themes of mathematics and logic, most of them written by the greatest mathematicians and logicians of all time. (In an act that presaged much of the confusion that followed regarding what Turing really said, James Newman, editor of the anthology, silently re-titled the paper “Can a Machine Think?”) Since then, it has become one of the most reprinted, cited, quoted, misquoted, paraphrased, alluded to, and generally referenced philosophical papers ever published. It has influenced a wide range of intellectual disciplines—artificial intelligence (AI), robotics, epistemology, philosophy of mind—and helped shape public understanding, such as it is, of the limits and possibilities of non-human, man-made, artificial “intelligence.”
Turing’s paper claimed that suitably programmed digital computers would be generally accepted as thinking by around the year 2000, achieving that status by successfully responding to human questions in a human-like way. In preparing his readers to accept this idea, he explained what a digital computer is, presenting it as a special case of the “discrete state machine”; he offered a capsule explanation of what “programming” such a machine means; and he refuted—at least to his own satisfaction—nine arguments against his thesis that such a machine could be said to think. (All this groundwork was needed in 1950, when few people had even heard of computers.) But these sections of his paper are not what has made it so historically significant. The part that has seized our imagination, to the point where thousands who have never seen the paper nevertheless clearly remember it, is Turing’s proposed test for determining whether a computer is thinking—an experiment he calls the Imitation Game, but which is now known as the Turing Test.
READ the rest of the article by Mark Halpern.
via robots.net
humanoid from here
Turing's cathedral
Monday, February 27, 2006, 02:42 AM - Beautiful Code, Theory
The digital universe was conceived by Old Testament prophets (led by Leibniz) who supplied the logic, and delivered by New Testament prophets (led by von Neumann) who supplied the machines. Alan Turing (1912-1954) formed the bridge between the two.
In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into "DO THIS with what you find HERE and go THERE with the result." Everything depends not only on precise instructions, but on HERE, THERE, and WHEN being exactly defined. It is almost incomprehensible that programs amounting to millions of lines of code, written by teams of hundreds of people, are able to go out into the computational universe and function as well as they do given that one bit in the wrong place (or the wrong time) can bring the process to a halt.
Biology has taken a completely different approach. There is no von Neumann address matrix, just a molecular soup, and the instructions say simply "DO THIS with the next copy of THAT which comes along." The results are far more robust. There is no unforgiving central address authority, and no unforgiving central clock. This ability to take general, organized advantage of local, haphazard processes is exactly the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers.
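Dyson's contrast can be sketched in a few lines. Below is a toy von Neumann machine that executes COMMAND (ADDRESS) instructions against exact memory locations; the opcode names and the `run` function are invented for illustration, not drawn from the essay. The point it demonstrates is his: everything hinges on HERE and THERE being exactly defined, and one bad address brings the whole process to a halt.

```python
def run(program, memory):
    """Execute COMMAND (ADDRESS) instructions against an exact address space.

    "DO THIS with what you find HERE and go THERE with the result."
    Invented opcodes: LOAD, ADD, STORE.
    """
    acc = 0  # accumulator
    for command, address in program:
        # An unforgiving central address authority: a missing
        # address (for anything but STORE) halts the machine.
        if command != "STORE" and address not in memory:
            raise RuntimeError(f"bad address {address}: machine halts")
        if command == "LOAD":      # DO THIS with what you find HERE
            acc = memory[address]
        elif command == "ADD":
            acc += memory[address]
        elif command == "STORE":   # go THERE with the result
            memory[address] = acc
    return memory

mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])  # 5

# One bit (here, one address) in the wrong place stops everything:
try:
    run([("LOAD", 99)], {0: 2})
except RuntimeError as e:
    print(e)  # bad address 99: machine halts
```

A molecular soup, by contrast, would match each instruction against "the next copy of THAT which comes along" rather than a fixed address, which is why a single error there degrades the result instead of halting it.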
READ Turing's Cathedral by George Dyson.
The Semantics Differentiation of Minds and Machines
Saturday, January 21, 2006, 12:52 AM - Theory
In Slashdot today: "In Dr David Ellerman's book Intellectual Trespassing as a Way of Life there are a number of interesting essays. But there is one particular essay, entitled "The Semantics Differentiation of Minds and Machines," that caught my attention and which should be of interest to Slashdot readers. In that essay Dr Ellerman claims that "after several decades of debate, a definitive differentiation between minds and machines seems to be emerging into view." In particular, Dr Ellerman argues that the distinction between minds and machines is that while machines (i.e., computers) make excellent symbol manipulation devices, only minds have the additional capacity to ascribe semantics to symbols."
Isn't that the difference between intelligence and memory? Read the rest of the review by John David Funge.
Pornographic Coding
Friday, January 20, 2006, 11:09 PM - Theory
Program code is like pornography. It has linear logic, but no meaning. There is an accumulation of things already known. The focus is always on the same explicit facts. Repetition and boredom rule.
(Adapted from a Neoist slogan)
Art is sanctioned pornography.
(Neoist slogan)
We demand a shamanic pornography. Capitalist "progress" destroys the imagination through a frenzy of the visible. What we see we no longer need to imagine. A famous zero from the popsicle academy was once moved to write that every time a man had an erection it was a triumph of the imagination. Power to the imagination, and to sex - for they are one and the same thing. Pornographers of the world close your eyes. You have nothing to lose but your bodily fluids! It is time to decondition ourselves by going beyond the known world.
The shamans of old ingested psychedelic mushrooms, and today we are further armed with a battery of chemically synthesised drugs including ecstasy and LSD. These psychedelics are psychic elevators that can power us through the seven levels of human consciousness. The first four levels of consciousness can be reached in ordinary everyday life. Level Five requires either chemical assistance or long hours of arduous interaction with your computer, and when you hit this level sexual activity is vastly enhanced. Once you go above Level Five consciousness you don't necessarily need coitus. Indeed, at Level Six you are telepathic and sexually combined with your fellow hackers, and this integration is even greater at Level Seven (aka total fucking zero and one pornography).
Drugs and code are the ancient and modern tools with which we can investigate our own minds while turning our bodies into one vast erogenous zone. Our message to purveyors of representational porn is HANDS OFF (OUR) EJACULATIONS (both male and female). WE WANT TO CUM IN ALL THE COLOURS OF ALL THE FLAGS OF ALL THE CONSULATES. As an initiated shaman Jean Cocteau was able to come through the sheer power of his imagination; he could do this without using his hands to manipulate his genitals. Let's keep our hands free to input data on our computer terminals and use the convulsive power of codes to bring us to orgasm.
Florian Cramer and Stewart Home, Crash conference paper, Feb. 11, 2005
Link
theory and techniques of electronic music
Saturday, September 10, 2005, 07:56 PM - Theory
This book is about using electronic techniques to record, synthesize, process, and analyze musical sounds, a practice which came into its modern form in the years 1948-1952, but whose technological means and artistic uses have undergone several revolutions since then. Nowadays most electronic music is made using computers, and this book will focus exclusively on what used to be called 'computer music', but which should really now be called 'electronic music using a computer'.
Most of the available computer music tools have antecedents in earlier generations of equipment. The computer, however, is relatively cheap and the results of using one are much easier to document and re-create than those of earlier generations of equipment. In these respects at least, the computer makes the ideal electronic music instrument--until someone invents something even cheaper and more flexible than a computer.
[Miller Puckette is also the author of Pure data (pd), a graphical programming language for the creation of interactive computer music and multimedia works, written in the 1990s with input from many others in the computer music and free software communities.]
link