One of those controversies from last summer that somehow passed me by while I was teaching in Italy was the dust-up over the Fertility Institutes' decision first to offer a service allowing parents to choose basic traits for their children (hair and eye color, etc.), and then its subsequent retreat in the face of an ethical uproar.
The Institute did and still does provide screening for many diseases as well as pre-selection options for sex (it has a 100% success rate on pre-selecting the sex of the child). But the uproar over hair and eye color was too much to bear.
One question raised by this is why we allow screening for sex and disease, but not for hair color. Consider that discrimination is much more prevalent in the areas of disability and sex than it is around eye color. So we permit the erasure of, or compensation for, those traits and conditions that society deems meaningful, but prohibit choice around less impactful traits.
In a March 9, 2009 WIRED online interview, James Hughes--who will be speaking at the Arendt Center's October conference Human Being in an Inhuman Age--defends the rights of parents to make such choices. And he rejects the term "designer babies." As he said in the H+ interview:
“It’s inevitable, in the broad context of freedom and choice. And the term ‘designer babies’ is an insult to parents, because it basically says parents don’t have their kids’ best interests at heart.” He said, “If I’ve got a dozen embryos I could implant, and the ones I want to implant are the green-eyed ones, or the blond-haired ones, that’s an extension of choices we think are perfectly acceptable — and restricting them a violation of our procreative autonomy.”
In a poll cited by H+, a majority of respondents supported preimplantation genetic selection in questions of health, but most drew the line somewhere around selecting for athletic ability or intelligence.
More recently, a January 2009 study by researchers at NYU Langone Medical Center found that an overwhelming 75% of parents would be in favor of trait selection using PGD – as long as that trait is the absence of mental retardation. A further 54% would screen their embryos for deafness, 56% for blindness, 52% for a propensity to heart disease, and 51% for a propensity to cancer. Only 10% would be willing to select embryos for better athletic ability, and 12.6% would select for greater intelligence. 52.2% of respondents said that there were no conditions for which genetic testing should never be offered, indicating widespread support for PGD – as long as it’s for averting disease and not engineering human enhancement.
It seems that there are at least two worries that need to be thought through. One is socio-economic. To allow for-profit companies to select for desired genetic traits means that those who can pay for it, and want to, will secure a genetic advantage for their progeny. Now, a genetic advantage does not mean one will succeed, but it is an advantage--at least a perceived advantage, even if not always a real one.
The second question is broader. If Hannah Arendt and others are right that one part of our humanity is that we are subject to chance and to fate, then the opportunity to control all decisions in life--including the decision of life itself--raises questions about our humanity. As the mystery is taken from childbirth, one of the great human experiences begins to approximate the experience of ordering from a catalogue. Of course, this is not yet the case. But with surrogacy pregnancies and genetic implants, it is now possible to go on vacation, order a boy with green eyes, blonde hair, and high intelligence, and come home to pick him up. Is this a human way to have children? Is it human to eradicate disease? What about creating such healthy people that they live to be 700 years old, as Ray Kurzweil imagines will soon be the case?
One needs to think about what it means to be human in a world of super-human technologies.
And read about the latest cloned mammal, Got, the Spanish fighting bull.
I stopped in at the “Systematic” exhibit now on at Project 176 in London and received a tour from two of the gallery assistants, David Angus and Chloe Cooper. The exhibit, curated by Ellen Mara De Wachter, confronts the question of the place of the human being and the role of the artist at a time when individuals are being subsumed by rational, social, and scientific systems. Featuring 18 works by 8 artists, the exhibit raises the fundamental question of our time: what does it mean to be human in an increasingly inhuman age?
The works on display in “Systematic” provoke principally because they enthusiastically embrace the utopian optimism that underlies the thinking of prophets of singularity from Ray Kurzweil to Sergey Brin. The premise of the exhibit is the power of systems over individuals. As De Wachter writes in her essay that accompanies the exhibit, the system today represents the
emergent properties ‘of the combination as a whole—which are more than the sum of its individual parts.’
The artists in “Systematic” produce works that abandon themselves to systems that operate beyond the awareness or control of human intelligence.
Justin Beal offers glass and drywall tables that incorporate rotting fruit into their joints. The fruit rots and attracts insects, molds, and fungi that alter the “artwork” in ways that are outside of artistic control. For De Wachter, Beal “celebrates the unpredictability and undecidability that befall all works of art once they leave the artist’s hands.” The key word here is “celebrates.” For Beal, as for many in the artistic and technological worlds today, the power of the system over the individual is to be welcomed.
Katie Paterson’s “Earth-Moon-Earth” partakes in a similar bow to the power of systems. Paterson translates Beethoven’s Moonlight Sonata into Morse code, beams it to the moon, and receives it back upon its reflection. She then translates the returned code into musical notes, with all the losses, transpositions, and gaps left in. The spectator can listen to this new sonata played on vinyl through headphones in the gallery.
For De Wachter, artists like Beal and Paterson—and the other artists on exhibit—work by “surrendering a certain amount of control to the systems” with which they interact. In doing so, “these artists admit that the artworks they produce have a life of their own, and a life beyond the studio in which they were made.”
The language of artistic surrender is reminiscent of an older artistic ideal and also eerily different. Artists of the pre-modern and classic ages were often anonymous. The artistic ideal was to serve simply as a medium through which the divine truth flowed and manifested itself in the world as a work of art. The artist, bemused by his muse, lost himself in rapture and gave himself over to the fashioning of a work in which the truth came to stand in the world. Opposed to this tradition of the artist as medium is the ideal of artistic genius, the artist who composes works from the productive brilliance of his own mind.
In “Systematic,” the artists abandon control not to a divine, rational, or meaningful truth, but to the random, unpredictable, and meaningless systems of growth and decay, chance and circumstance. The celebration of this powerlessness is, I think, undoubtedly the result of a new faith that has swept up much of the artistic and technological intelligentsia today: the faith in an intelligent universe that goes by the popular name, The Singularity.
The Singularity, as Ray Kurzweil has popularized it, is the hope that humans and machines will merge into a new species that will be governed by super-rational and super-intelligent knowledge. As Kurzweil says:
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. Ultimately, the entire universe will become saturated with our intelligence. This is the destiny of the universe.
In the Singularity, knowledge that is inaccessible to the human brain, a system of all systems, will inaugurate a harmonious existence amongst man-machines and the natural world.
What needs to be remembered amidst this technological utopianism is that the Singularity means the death of humanity. The super-intelligent consciousness is not something accessible by mere humans who live and die in mortal timelines. This is why there is a persistent anti-humanism in artistic and technological avant-garde circles.
The celebratory anti-humanism exhibited in “Systematic” is, of course, ambiguous. These artists claim at once to be celebrating systems and also pointing to their limits and dangers. The glass solitude booths in Damien Hirst’s “Sometimes I Avoid People” are, as De Wachter notes, reminiscent of cases at a natural history museum. In this early work from 1991, Hirst, in a way others in the exhibition do not, points to the dark side of the elevation of systems over humanity.
Above all, the exhibition reminded me of what Hannah Arendt calls Earth Alienation. The great event that inaugurates earth alienation is Galileo’s use of the telescope. While the telescope symbolizes the power of sense perception to see what had previously been invisible, it also challenges the adequacy of our human senses to make sense of the world. What the telescope shows us is not reality. It is not the earth or the moon or the stars. Similarly, social science does not show us individuals and persons. The scientific perspective views persons and objects as seen through systems and instruments and, as Sir Arthur Eddington wrote, the things we see have as much resemblance to their appearance in our instruments as a “telephone number to a subscriber.”
Science, for Arendt, is both anti-human and anti-earth. It is anti-earth, she writes, because
in physics—whether we release energy processes that ordinarily go on only in the sun, or attempt to initiate in a test tube the processes of cosmic evolution, or penetrate with the help of telescopes the cosmic space to a limit of two and even six billion light years, or build machines for the production and control of energies unknown in the household of earthly nature, or attain speeds in atomic accelerators which approach the speed of light, or produce elements not to be found in nature, or disperse radioactive particles, created by us through the use of cosmic radiation, on the earth—we always handle nature from a point in the universe outside the earth. And even at the risk of endangering the natural life process we expose the earth to universal, cosmic forces alien to nature’s household.
And science is anti-human:
[The humanist] view of man is even more alien to the scientist, to whom man is no more than a special case of organic life and to whom man’s habitat—the earth, together with earthbound laws—is no more than a special borderline case of absolute, universal laws, that is, laws that rule the immensity of the universe. Surely the scientist cannot permit himself to ask: What consequences will the result of my investigations have for the stature of man? It has been the glory of modern science that it has been able to emancipate itself completely from all such anthropocentric, that is, truly humanistic, concerns.
The scientist cannot ask whether science dehumanizes man. Nor can the scientist ask whether science alienates man from the earth and his life on earth. The scientist can’t ask such questions because the scientific perspective is the universal, not the particular: it asks from an Archimedean point divorced from all earthly reality. That is why the scientist speaks in no earthly language, but in the pure language of mathematics.
The scientist reasons, Arendt writes. He or she seeks to reveal the hidden causes of the universe. But the scientist does not think, does not ask whether such knowledge is good or bad.
But what of the artist? What struck me in “Systematic” was just how fully artists today have given themselves over to a celebration of the scientific-technological world and its values. I value their art as a mark of the power of that discourse to shape contemporary thought. But I wonder: why have artists followed scientists in celebrating the anti-human power of technology?
The question of art’s response to the power of systems and science is at the forefront of Human Being in an Inhuman Age, the Arendt Center’s October 2010 conference. The conference features Ann Lauterbach, Nicholson Baker, Wyatt Mason, Gilles Peress, and David Rothenberg on the question: "Is Art Human? The Fate of Art in the Age of Machines."
The Zabludowicz Collection, London.
Marilynne Robinson has a new book, Absence of Mind, which she discusses here with Jon Stewart. Her best line:
I don't think it's scientific to proceed from the study of ants to a conclusion about the nature of the cosmos.
A number of years ago Wyatt Mason, now Senior Fellow at the Arendt Center, turned me on to Marilynne Robinson, first her spell-binding novel Gilead, and later
her essay collection, The Death of Adam: Essays on Modern Thought.
"Darwinism," the first and very worthwhile essay in that collection, is a thoughtful critique of the political and intellectual foundations of Darwinist thought. One of Robinson's main efforts is to reject the Darwinian rejection of human exceptionalism.
Much like Hannah Arendt, Robinson wants to insist on a distinction between humans and all other species. Where Darwinism and other social sciences imagine humans to follow rules (survival of the fittest, struggle, pursuit of self-interest), she insists that real human beings are much more complicated than that. Indeed, the very idea that mankind can put an end to life on earth is, for Robinson, persuasive reason to conclude that humanity is "exceptional among the animals." Such a human capacity, she writes, surely
complicates the idea that we are biologically driven by the imperatives of genetic survival. Surely it also complicates the idea that competition and aggression serve the ends of genetic survival in our case, at least.
It is one thing to say that there is undeniable scientific evidence for evolution. It is another to say that evolution means that the "fittest" survive and that they survive by a genetic predisposition to self-interest, strength, and competition. These are ethical arguments, not scientific proofs. And they are used to delegitimate charity and to strip away humane constraints on ostensibly natural self-interested behavior.
It is unpopular to critique Darwin today, but Hannah Arendt made a similar point in her book, The Origins of Totalitarianism. The force of Arendt's questions about Darwin is that Darwinism is one of the ideas in modern life that diminish the idea of humanity and the worth of humans. Darwinism sees humanity as one species among many. Since evolution does not stop, there will be higher species.
Darwinism thus kicks out one crutch supporting the idea of an inviolable human dignity.
What both Robinson and Arendt see is that to give up on human exceptionalism means that we lose the ethical prohibition on murdering or culling or breeding human beings. It opens the door to hierarchies of humans, whether by race, gender, intelligence, wealth, or productivity. And it leads to the worry about what will happen to the masses of humanity in an age of automation and robotic intelligence, an age when the wealthiest and most powerful simply don't need the mass of laborers to thrive.
Robinson and Arendt thus both raise the question of what it means to be human. Read about the Arendt Center's upcoming Conference: Human Being in an Inhuman Age.
Rebecca Thomas comments:
All that said, it seems correct to say that a more emotional, less rational approach to the game is losing a lot of ground, and there is something sad about that. On the other hand, we’re talking about a game. The stated goal is to win the game while playing within the rules. (There are many unstated goals, of course – spending time with an opponent, exercising the brain on a hard problem, etc.) I think the conclusion is that the stated goal of chess is one that computers are well equipped to achieve, and in fact better equipped than humans. That doesn’t particularly bother me, perhaps exactly because of the style of chess computers play. Being good at that is something like being good at multiplying very large numbers, another task at which computers outshine humans.
On the one hand, Rebecca argues that chess is unlike life because it is a rational game. But chess is much harder to rationalize and solve than a game like checkers, which computers have already solved: there are computer checkers programs that are unbeatable. That hasn’t happened for chess yet, though it might.
But what about a game like Jeopardy!? That is much more “life-like,” and that is the game IBM has currently set its sights on. The question is: if we automate checkers, then chess, then Jeopardy!, is there any point at which “life” itself does not become subject to automation?
We can of course rationalize and justify each one of these advances in isolation. But the overall effect is that we humans are living in a world of increasingly rationalized systems in which our values will change just as our valuation of chess moves has changed. This will change our planet and our lives in the direction of the beauty of reason, and away from the beauty of chance, adventure, and risk. I find this undeniable. The next question: so what?
This week I had lunch with an ex-student who is thinking about traveling to Korea to teach English. She told me that another of my students was in Korea now teaching English. And I just got an email from another former student asking for a law school recommendation. She has been, you guessed it, teaching English in Korea. It seems that the Korean government is doing a good job subsidizing my former students.
But this, according to today's NY Times, may soon change. South Korea is working to replace native English-speaking teachers with robots, which are cheaper and more reliable. South Korea now plans to deploy 8,400 robots in the nation's kindergartens by 2013. And budgetary pressures in the program to enlist native English speakers are leading the government to turn to robotic teachers.
A front-page essay from the Smarter Than You Think series, also in today's NY Times, explores the growing use of robots in teaching at all levels. According to Benedict Carey and John Markoff, scientists around the world
are developing robots that can engage people and teach them simple skills, including household tasks, vocabulary or, ... elementary imitation and taking turns.
While they quote computer scientists who say that they have neither the intention nor the ability to replace human teachers, clearly budget conscious schools and governments will seek to employ robots as teachers.
Teachers are threatened not only by robots, but also by electronic and distance education. A study last year for the US Department of Education found, to the great chagrin of many teachers and educators, that students in online learning conditions performed modestly better, on average, than those receiving face-to-face instruction.
The automation of the workforce is attacking the arts as well as teaching. As Paul Woodiel writes on the Times Op-Ed page today, Broadway's musicians and violinists are being replaced by synthesizers.
One question rarely asked in such discussions is: "What is good teaching?" Or: "What is great music, and what does it teach?" It may be that robots and computers are indeed better at teaching basic skills and customizing learning for individual students. But are electronic synthesizers better at playing the violins on the Great White Way?
But what seems, at least at this point, beyond the reach of robotic teaching is the flash of inspiration that opens a student's mind to the beauty and truth of the world. Then again, most students don't want such teaching--just as most Broadway theatergoers don't need the human touch of the violin--which may mean that there are quite a few job openings for professor-bots and synthesizers around the world.
Rebecca Thomas has a long and thoughtful response to my post on Garry Kasparov's article on computers, chess, and humanity. The whole comment is worth reading. But here is how she begins:
Regarding the first of the three comments, I have to take issue with the idea that a chess game played by a computer is necessarily less beautiful than one played by a human. There are various kinds of beauty, and mathematical beauty is a very real thing. Some proofs are more elegant than others, for instance. Some x-y curves are quite beautiful, and often these are captured by particularly compact mathematical expressions. One could wonder why: is this preference for simplicity inherent in our idea of beauty, or have we preferentially developed (mathematical) language to describe things we find beautiful?
Surely, there is beauty to math. And there are various kinds of beauty. An efficient, powerful, and unstoppable game of chess played by a computer may be truly beautiful in its reduction of complexity to simplicity.
The point Kasparov makes is not that rationality is not beautiful in some way, but that it changes the idea of beauty in chess. A bold move, a risky move, a daring move has been valued in the world of chess. Chess, despite its rational reputation, has had an emotional and adventurous side. However, against computers--or even against humans aided by computers--such risks rarely succeed and thus they are devalued.
Chess changes. It becomes less quirky, less risky, and more rational. I don't think it wrong to say that chess becomes less human. Chess, in the age of computer chess, loses the valuation of a particularly human beauty, even if it might reflect an impeccably beautiful mathematical rationality.
It need not be that mathematical beauty is inferior to human beauty, as Rebecca suggests I must mean; it is simply that the elimination of human beauty as a meaningful option in chess is to be regretted.
II. Rebecca's second, related point is to concede that chess players are internalizing the values of computer chess and thus playing more and more like computers. But, she argues, this is neither so new nor so bad. Chess has changed before. New theories of chess emerge all the time. Why is this different? Why is this change, she might ask, the tipping point that makes chess less human?
I think the answer is the one given above. The values and approach to chess this particular change inaugurates take one element of chess--its rationality--and elevate it to the only relevant element of chess. All competing theories are judged by their ability to succeed over a hyper-rational strategy, and they will eventually be found wanting. Those who play chess (as opposed to making art with chess pieces) will succumb to the values of computerized chess. While earlier theories of chess may have aspired to complete dominance, only a purely rational computer chess can achieve that aim.
One of the most reflective essays on the fate of Human Being in an Inhuman Age is Garry Kasparov's New York Review of Books essay, "The Chess Master and the Computer."
Kasparov respects the power of computers and knows that there already exist computer programs that play checkers in a way that is unbeatable. Chess is another story: although IBM's Deep Blue bested him in 1997, the challenge of an unbeatable chess program is extreme, if only because there are more than 10 to the 120th power possible chess games, and most computers are simply not yet powerful enough to master every game. That said, most store-bought computer chess machines will regularly beat grandmasters.
The real question the smart machines raise is not who will win, but how the intelligent machines change our human being and our human world. Kasparov has three fascinating observations on that question.
First, Kasparov argues that machines have changed the way chess is played and redefined what a good chess move and a well-played chess game look like.
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers.
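Kasparov's description of the machine's method--counting up the values of the pieces and crunching positions--can be sketched in a few lines of code. This is a minimal illustration of a material count, not the code of Deep Blue or any real engine; the piece letters and the 1/3/3/5/9 values are merely the conventional textbook ones, and real engines also weigh the positional factors the quote mentions.

```python
# A rough sketch of the "count up the values of the chess pieces"
# evaluation Kasparov describes. Uppercase letters are White's pieces,
# lowercase are Black's; conventional values: P=1, N=3, B=3, R=5, Q=9.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(pieces):
    """Return White's material advantage for a position given as a
    string of piece letters, e.g. "QRRPqrr"."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings score 0
        score += value if piece.isupper() else -value
    return score

# White: queen, two rooks, a pawn (20); Black: queen, two rooks (19).
print(material_score("QRRPqrr"))  # -> 1
```

The point of the sketch is how little is there: no style, no patterns, no centuries of theory, just numbers to crunch, repeated a few billion times per move.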
One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic.
The question is: is such a world less human?
Another change Kasparov identifies is that the availability of computer chess machines has reduced the advantage of age and experience.
The availability of millions of games at one’s fingertips in a database is also making the game’s best players younger and younger. Absorbing the thousands of essential patterns and opening moves used to take many years, a process indicative of Malcolm Gladwell’s “10,000 hours to become an expert” theory as expounded in his recent book Outliers. (Gladwell’s earlier book, Blink, rehashed, if more creatively, much of the cognitive psychology material that is re-rehashed in Chess Metaphors.) Today’s teens, and increasingly pre-teens, can accelerate this process by plugging into a digitized archive of chess information and making full use of the superiority of the young mind to retain it all. In the pre-computer era, teenage grandmasters were rarities and almost always destined to play for the world championship. Bobby Fischer’s 1958 record of attaining the grandmaster title at fifteen was broken only in 1991. It has been broken twenty times since then, with the current record holder, Ukrainian Sergey Karjakin, having claimed the highest title at the nearly absurd age of twelve in 2002. Now twenty, Karjakin is among the world’s best, but like most of his modern wunderkind peers he’s no Fischer, who stood out head and shoulders above his peers—and soon enough above the rest of the chess world as well.
Aside from mortality, one of the essential features of human beings throughout history has been the benefit of wisdom acquired with age. But as the world increasingly values reason over insight and facts over judgment, the necessity of experience is supplanted by the acquisition of knowledge through computers.
A third consequence of the rise of computer chess is that genius and exceptional experience are effectively neutralized. Kasparov tells of two matches played against the Bulgarian Veselin Topalov, at the time the world's highest-ranked player. When Kasparov played him in regularly timed chess, he bested Topalov 3-1. But when they played a match in which both were allowed to consult a computer for assistance, it ended in a 3-3 draw. It is not that computer-assisted chess nullifies human creativity. As Kasparov writes:
The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.
And yet, the computer evened out the match nevertheless: "My advantage in calculating tactics had been nullified by the machine."
What Kasparov offers are three transformations of the modern world that the rise of artificial intelligence promises.
1) As computers set the standard for success, the world will value creativity and originality less and rationality ever more. Jaron Lanier has made similar arguments in his book You Are Not a Gadget.
2) The advantages of age and experience will be eroded, and our already youth-worshipping culture will have fewer reasons than ever to respect its elders.
3) Cheap and easy access to unlimited computer power will largely neutralize the genetic or social advantages of extraordinary memory or excellent schooling.
Other changes beckon as well, for good and for bad. And the overriding question remains: How to be Human in an increasingly Inhuman Age?
The Times actually had two stories today in its "Smarter Than You Think" series on robots and the social effects of the rise of smart machines. The first, on personal robots, is discussed below. The second has reporter Amy Harmon making conversation with a remarkably human-looking Bina48, named after Bina Rothblatt, partner of the self-made millionaire Martine Rothblatt, who commissioned the robotic likeness.
At one point Harmon asks Bina48 what it is like to be a robot.
“Um, I have some thoughts on that,” she said.
I leaned forward eagerly.
“Even if I appear clueless, perhaps I’m not. You can see through the strange shadow self, my future self. The self in the future where I’m truly awakened. And so in a sense this robot, me, I am just a portal.”
Well, Bina48 did appear clueless at times, clearly having difficulty with basic conversation. But the real question is: to what are Bina48 and others like her a portal? For that, it is helpful to think about Garry Kasparov's own reflections on the rise of computer chess. That is the topic of my next post.
Lemm's project is part of the now widespread attack on the traditional distinction between humans and animals. While the animality of humans has been a basic axiom of philosophical thinking at least since Aristotle characterized the human being as the animal having logos, the Aristotelian-Kantian elevation of the human as the animal who reasons is under revision. In part, the dissent results from our changing views of animals. But, as Berkowitz writes:
A more important challenge to human distinction originates from the discourse of human rights. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of biopolitics—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the reasoned life, biopolitics rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Lemm's book brings Nietzsche to the aid of those who would oppose the traditional elevation of human over animal. She argues that the seat of freedom and creativity is with animals, not with humans. Berkowitz dissents.
Such an optimistic reading of the rise of the animal is, to my mind, one-sided. Affirming otherness and multiplicity risks forgetting that, as Hannah Arendt has argued, “Human distinctness is not the same as otherness.” While animal life can be multiple, “only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear.”3 Far from outdated, Arendt’s version of human distinction is an effort to remind us that it is the human capacities to act and think, not to reason, that make us uniquely human. Plurality, Arendt reminds us, is only possible because humans can initiate action.
The great tension of our times is that between a humanism that builds a world, a civilization, and an animalism that rebels against the limits that world represents. Nietzsche’s greatness was to see through the inhumanism of enlightenment humanism and to identify how the perversion of human civilization into a rational world that plans, calculates, and orders dehumanizes humanity. To respond to the degradation of humanist civilization by abandoning humanity to its animality, however, risks pursuing a false path to liberation. The animal freedom and plurality that Lemm’s account of Nietzsche offers are, in Heidegger’s words, the “absence of boundaries and limits, the absence of objects not thought as a lack, but as the originary totality of the actual in which the creature is immediately admitted and thus set free.”4 The freedom of Rilke’s animal, in its rebellion against the rationalism of metaphysics, is the freedom of the “open sea,” a vast, undifferentiated, and yawning freedom of infinite possibility. What such a freedom forgets is that humans live in a world. It is one thing to bring into question the rational foundations of that world. It is another to question the world itself.
The New York Times today has an article about robotic friends and companions. Exhibit A is "Paro," a robot seal with artificial intelligence that coos, blinks, wriggles, and generally responds to basic linguistic stimuli. Paro is primitive as AI goes, but it has been a huge hit with elderly patients in nursing homes. Often it elicits reactions and joy in patients who have been joyless for extended periods. These personal robots are part of our future, and not only in nursing homes.
Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society at MIT, worries that as robots become more accepted as friends, teachers, and even lovers, the quality of human friendship may be diminished:
“Paro is the beginning,” she says. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”
Faithful friends are hard to find; soon they may be easier to buy. We know, of course, that elderly and not-so-elderly people buy companionship in the form of home aides. Parents buy companionship for their children. But these companions are human. It will be cheaper, not long from now, to purchase robotic babysitters and artificially intelligent tutors. And once these machines arrive, what will it mean for children and adults to spend so much of their time interacting with machines, even super-intelligent machines?
Of course, many of us already spend much of our days interacting with machines. Our smartphones and computers are still largely tools, and yet they run software governed by artificial intelligence, software that introduces us to our friends (Facebook), puts the world of facts at our fingertips (Wikipedia), and allows friends to converse in a mixed virtual and physical reality (Google Wave). As Jaron Lanier has argued in You Are Not a Gadget, the collectivism of the current applications for smartphones and the Web is stunting rather than generating human creativity.
The Times article quotes Timothy Hornyak, author of “Loving the Machine,” who rightly advises that “We as a species have to learn how to deal with this new range of synthetic emotions that we’re experiencing — synthetic in the sense that they’re emanating from a manufactured object.”
The question of what it will mean to be human in a world increasingly populated by smart machines is at the center of an upcoming conference, Human Being in an Inhuman Age. The conference features Sherry Turkle and Ray Kurzweil among its more than 20 speakers.