In October, a yellow school bus full of Bard High School Early College students made the trip to Bard College for the Hannah Arendt Center's Human Being in an Inhuman Age Conference. Alexandra Eaton--videographer, Bard grad, and friend of the Arendt Center--rode with the students and followed them through the conference. Here is her remarkable video essay on the early college students and their engagement with the conference.
Robert Sapolsky of Stanford has an excellent essay in The Stone today about the neural confusions that make the human brain so incredibly subtle. Sapolsky begins by reminding us that human brain neurons and fruit fly neurons are identical. The difference is in numbers: humans have about 100 billion neurons in the brain, roughly a million times as many as a fruit fly. With all of those neurons, we humans can make connections and leaps of fancy that other species cannot.
The most interesting part of Sapolsky's essay is his discussion of recent studies showing how our brains mix up moral and sensual metaphors. In one study,
Volunteers were asked to recall either a moral or immoral act in their past. Afterward, as a token of appreciation, Zhong and Liljenquist offered the volunteers a choice between the gift of a pencil or of a package of antiseptic wipes. And the folks who had just wallowed in their ethical failures were more likely to go for the wipes.
In a related study, some participants were given an opportunity to wash their hands after being asked to recall an immoral act they had committed. Those who washed their hands were then less likely to respond to a subsequent request for help, suggesting that the physical act of hand washing offered some moral solace as well. As Sapolsky aptly quips:
Apparently, Lady Macbeth and Pontius Pilate weren’t the only ones to metaphorically absolve their sins by washing their hands.
These studies, and the many others Sapolsky references, show how powerfully symbols and metaphors shape human thinking and human decision making. The same kind of "neural confusion" that mixes hand washing and moral cleansing also lies behind our willingness to work with criminals who are charming. Sapolsky argues that peace in the Middle East is less a matter of water rights and land borders and more a matter of intangibles, from recognition of Israel by the Palestinians to an apology by Israel for the forced Palestinian exile of 1948.
Sapolsky ends his essay with a rewording of Nelson Mandela's famous advice:
“Don’t talk to their minds; talk to their hearts.”
What Mandela actually meant, Sapolsky suggests, is don't talk to their minds; talk to their neurons, to the "insulas and cingulate cortices and all those other confused brain regions" that make up the incredibly powerful, deeply associative, and thus confused human brain. He argues that once we understand the brain's confusion, we can speak to it in ways that will allow us to better control our human reasoning. In other words, neuroscience has brought us to a point where, finally, after thousands of years, humans might understand our disorderly brains well enough to bring some order to them and "make for a better world."
This double move of at once celebrating our all-too-human confusion and also seeking to understand and control that confusion is the all-too-human ambiguity that drives Sapolsky's essay forward. For it is a celebration of humanity and humanism, of subtlety and nuance, of metaphor and metonymy, and yet it is in the end born of a confidence that all of these complex human associations and emotions are, in the end, products of a finite and comprehensible series of calculations.
I don't know what Sapolsky thinks about Ray Kurzweil's dream of reverse-engineering the human brain within the next 30 years, but nothing in his essay suggests any difficulty with that project. And if that is the case, the very metaphorical sophistication that he celebrates and that allows us humans to know that Kafka's Metamorphosis is not about a cockroach will no longer be either human or mysterious.
How do we humans respond to this exciting and also terrifying prospect? We cannot prevent the reverse engineering of brains. Nor should we. And yet, once it happens, the consequences for humanity and for the earth will be incalculable.
On Wednesday, Sept. 15th, The Hannah Arendt Center inaugurated our new series of Wednesday lunchtime talks. The talks take place in the Mary McCarthy House seminar room.
For the first lunchtime talk, I invited three colleagues to discuss Ray Kurzweil's book, The Singularity is Near. The talks are viewable below.
If there is one core assumption that some, like Ray Kurzweil, make, it is that what we do, how we think, and what we are as humans can, like all things in nature, be understood. Quite simply, Kurzweil adopts the fundamentally scientific view that the world is an ordered universe that can be analyzed, comprehended, and ultimately mastered. The fundamental scientific hypothesis is the principle of sufficient reason: everything that is has a reason; nothing can be at all without a reason. Since human beings exist, we too must be rationally comprehensible. Why can't we too be understood, figured out, reverse engineered, and even engineered?
One rejoinder to Kurzweil's scientific optimism comes from recent work in neuroscience, the science of the brain. As the political theorist William Connolly argues, the world
cannot be dominated by a theoretical gaze, but must be explored as an open world to which we belong, in whose construction we participate.
The point is that there are limits to the human ability to know the world, a world that is not a static object before us but a growing and open experience that we are helping to build.
The impact of this human limitation is made vivid and visible in neuroscience. Connolly offers examples from the neuroscience literature of people like Philip, who lost his left arm and, like many others, is plagued by 'phantom pain'--a pain that cannot be relieved because there is no arm to treat. Discussing the work of V.S. Ramachandran, Connolly argues that there is a
gap between third-person observations of brain/body activity, however sophisticated they have become in recent neuroscience, and phenomenological experience correlated with those observations.
Why is there this gap? Why is it that scientific attempts to observe and explain the brain and thinking activities do not match up with those experiences themselves?
Confronted with Philip and his phantom limb, the neuroscientist V.S. Ramachandran put Philip in front of a "mirror box" that allowed him to see an image in which both his arms seemed to be working normally. This has worked, with Philip and other patients, to reduce the phantom pain.
According to Ramachandran, when the limb is lost, the usual messages sent from the arm to the brain can no longer be sent. There are thus no signals to counteract and self-correct the pain signals generated by the brain. The "mirror box" seems to trick the brain into seeing the arm again, reactivating the signals of a pain-free arm even though the arm is still missing.
What Ramachandran takes from his "mirror box" treatment is a strong skepticism about the computer-oriented models of the brain that many, like Kurzweil, work from. The brain, he argues, is not like a computer with each part performing one highly specialized job. Instead,
[The brain's] connections are extraordinarily labile and dynamic. Perceptions emerge as a result of reverberations of signals between different levels of the sensory hierarchy, indeed even across different senses. The fact that visual input [i.e. the mirror box] can eliminate the spasm of a nonexistent arm and then erase the associated memory of pain vividly illustrates how extensive and profound these interactions are.
Ramachandran's neuroscience shows that thinking, our human thinking, is a complex and layered activity, with dissonant relays and complicated feedback loops that connect different and competing centers of bodily and brain activity. The brain, in other words, is not simply a conception machine that works upon logical inputs. Instead, body image, affect, the unconscious, and other images and sensations are parts of thinking.
There are more than 100 billion neurons in the brain, and each neuron has between 1,000 and 10,000 connections with other neurons. All told, "the number of possible permutations and combinations of brain activity, in other words the number of brain states, exceeds the number of elementary particles in the known universe."
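The quoted claim about brain states can be checked with a quick back-of-the-envelope calculation. The sketch below is purely illustrative and rests on a loud assumption: it treats each neuron as simply "firing" or "not firing," a drastic simplification of real neural dynamics, and ignores the 1,000 to 10,000 synapses per neuron entirely. Even so, the exponent dwarfs the roughly 10^80 elementary particles in the known universe:

```python
import math

# Illustrative assumption: ~100 billion neurons, each modeled as binary (on/off).
NEURONS = 10**11
PARTICLES_EXPONENT = 80  # known universe: roughly 10^80 elementary particles

# A binary model already gives 2**NEURONS possible activity patterns.
# Work with the base-10 exponent, since 2**(10**11) is far too large to print.
states_exponent = NEURONS * math.log10(2)  # log10 of 2**NEURONS

print(f"brain states ~ 10^{states_exponent:.3g}")
print(states_exponent > PARTICLES_EXPONENT)
```

Even under this crude binary model, the number of brain states is on the order of 10^(3 x 10^10), so the quoted comparison holds with absurd room to spare.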
Connolly's analysis of Ramachandran's work leads him to argue that, since our brain works according to such a complex and infraperceptual model, the confidence of scientists like Kurzweil that we can model the brain is misplaced. The presence of such infraperceptions in our brain and our thinking is evidence of the "layered character of everyday perception" and suggests that the brain is not a logical, computer-like reasoning mechanism that can be modeled and reverse engineered.
I think Connolly is right. And yet, even if human thinking is multilayered, complex, and not rationally deliberative, it is not inconceivable that computers might someday approach human thinking. More important, however, is the fact that as human beings continue to "enhance" their thinking with technology and computer assistance, their own thinking will be increasingly rationalized. A few questions:
1) What will happen when humans have processors in their brains or assisting their thinking? Will we, given the technological ability to process stimuli at speeds not humanly imaginable, be able to cognitively evaluate our stimuli and overcome the limits of human thinking?
2) As we augment our own senses with neural implants, will humans be overwhelmed by intense sensations beyond the human sensory capacity so that we either are paralyzed by sensory overload or depend on increased cognitive power from computers simply to make sense of our world?
If the answer to both of these questions is at least a qualified yes, then the human thinking that Connolly celebrates becomes something that can be, and likely will be, overcome. Is this a problem? As Ben Stevens keeps asking on this blog: so what? Does "non-human" mean "inhuman"?
Hannah Arendt certainly thinks so. At the most basic level, Arendt holds that humans, to be human, must be subject to chance, to fate, and to the spontaneity of a world beyond their control. Humans must, in other words, be mortal beings who are born and die in ways beyond their control. Without that mortal subjection to fate, we increasingly lose our freedom and humanity.
The question remains: how to be human in an increasingly inhuman age?
One of those controversies from last summer that somehow passed me by while I was teaching in Italy was the dust-up over the Fertility Institutes' decision first to offer a service allowing parents to choose basic traits for their children (hair and eye color, etc.) and then its subsequent retreat in the face of an ethical uproar.
The Institute did and still does provide screening for many diseases, as well as pre-selection of sex (it has a 100% success rate in pre-selecting the sex of the child). But the uproar over hair and eye color was too much to bear.
One question this raises is why we allow screening for sex and disease, but not for hair color. Consider that discrimination is much more prevalent around disability and sex than around eye color. So we permit the erasure of, or compensation for, those traits and conditions that society deems meaningful, but prohibit choice around less consequential traits.
In a March 9, 2009 online interview with H+, James Hughes--who will be speaking at the Arendt Center's October conference Human Being in an Inhuman Age--defends the right of parents to make such choices. And he rejects the term "designer babies." As he said in the H+ interview:
“It’s inevitable, in the broad context of freedom and choice. And the term ‘designer babies’ is an insult to parents, because it basically says parents don’t have their kids’ best interests at heart,” he said. “If I’ve got a dozen embryos I could implant, and the ones I want to implant are the green-eyed ones, or the blond-haired ones, that’s an extension of choices we think are perfectly acceptable — and restricting them a violation of our procreative autonomy.”
In a poll cited by H+, the majority of respondents supported pre-genetic implantation in questions of health, but most drew the line somewhere around selecting for athletic ability or intelligence.
More recently, a January 2009 study by researchers at NYU Langone Medical Center found that an overwhelming 75% of parents would be in favor of trait selection using preimplantation genetic diagnosis (PGD) – as long as that trait is the absence of mental retardation. A further 54% would screen their embryos for deafness, 56% for blindness, 52% for a propensity to heart disease, and 51% for a propensity to cancer. Only 10% would be willing to select embryos for better athletic ability, and 12.6% would select for greater intelligence. 52.2% of respondents said that there were no conditions for which genetic testing should never be offered, indicating widespread support for PGD – as long as it’s for averting disease and not engineering human enhancement.
It seems that there are at least two worries that need to be thought through. One is socio-economic. To allow for-profit companies to genetically implant desired traits means that those who can pay for it and want to will ensure their progeny genetic advantage. Now, genetic advantage does not mean one will succeed, but it is an advantage--at least a perceived advantage even if not always a real one.
The second question is broader. If Hannah Arendt and others are right that one part of our humanity is that we are subject to chance and to fate, then the opportunity to control all decisions in life--including the decision of life itself--raises questions about our humanity. As the mystery is taken from childbirth, one of the great human experiences begins to approximate the experience of ordering from a catalogue. Of course, this is not yet the case. But with surrogate pregnancies and genetic implants, it is now possible to go on vacation, order a boy with green eyes, blonde hair, and high intelligence, and come home to pick him up. Is this a human way to have children? Is it human to eradicate disease? What about creating such healthy people that they live to be 700 years old, as Ray Kurzweil imagines will soon be the case?
One needs to think: What it means to be Human in a World of Super-Human Technologies.
And read about the latest cloned mammal, Got, the Spanish fighting bull.
The first installment of The Times' series was a wide-eyed look at the Singularity movement, profiling Ray Kurzweil, who will be speaking at the Arendt Center conference on Friday, Oct. 22. Kurzweil will speak for an hour, followed by a discussion between Mr. Kurzweil and Bard President Leon Botstein.
The latest entry looks at the rapid advance of speech software and artificial intelligence that allows machines to hold meaningful conversations and perform tasks that require comprehension, conversation, and some level of thinking and learning.
For decades, computer scientists have been pursuing artificial intelligence — the use of computers to simulate human thinking. But in recent years, rapid progress has been made in machines that can listen, speak, see, reason and learn, in their way. The prospect, according to scientists and economists, is not only that artificial intelligence will transform the way humans and machines communicate and collaborate, but will also eliminate millions of jobs, create many others and change the nature of work and daily routines.
This raises as well the question of what the role of humans will be in the world when machines do more of the work that we have traditionally done. As Bill Joy worried in Wired, what besides altruism will lead our political leaders to keep superfluous workers alive when not only factory work but also teaching, warfare, and administration can be done by an automated workforce?
The advances, according to The Times, herald an era in which we interact with machines much as we do with friends and co-workers:
“Our young children and grandchildren will think it is completely natural to talk to machines that look at them and understand them,” said Eric Horvitz, a computer scientist at Microsoft’s research laboratory who led the medical avatar project, one of several intended to show how people and computers may communicate before long.
The question here, as with so much in the world of artificial intelligence, is what actually distinguishes humans from machines. If it is simply the ability to think, to reason, and to calculate, the day is coming when that difference will be no difference. This requires that we ask about the soul, the heart, and the ineffable nature of humanity. And as more and more of the beings we interact with and institutions we work with are governed by computer rationality, what does it mean today to be human?