Monday, August 16, 2010: “Earth Alienation: From Galileo to Google”
Lecturer: Roger Berkowitz, Associate Professor of Political Studies and Human Rights at Bard College; Academic Director, Hannah Arendt Center for Politics and the Humanities.
In this lecture, Roger Berkowitz welcomes the incoming Class of 2014 at Bard College with an important question: “Is humanity important?” The human race has witnessed impressive scientific and technological achievements, some of the most remarkable of which have occurred in the past 50 years. While some of these have advanced the history of humanity, others threaten to dampen its spark. Nuclear and biological weapons are capable of killing untold millions of people, and the urge to embrace automation in our everyday lives cultivates the fear that society may one day embrace euthanasia as a way to rid itself of “superfluous persons.” Acknowledging the increasingly dangerous world we live in, Berkowitz argues that it is imperative, at this moment, that we take a closer look at ourselves and consider our significance. He proposes two sources that can help us in our task: Galileo and Google.
Science fiction, Hannah Arendt tells us, has too long been undervalued by those who would seek to comprehend the human condition. It is in the human fantasies of our future that mankind reveals its desires, both possible and not yet possible. For Arendt, some of the deepest and longest-held of those desires included the desire to flee the earth, to play God and make human beings, and to make labor unnecessary. Her book, The Human Condition, is in part an effort to think through the fact that many of these desires were, for the first time in millennia, threatening to become possible.
It is a mistake to ignore science fiction, especially in an era when the unprecedented advance of technological ability makes it possible that today’s dreams will soon be realized. With that in mind, it is worth looking at Alex Mar’s profile of the life, death, and cryogenic preservation of FM-2030, otherwise known as Fereidoun M. Esfandiary.
Writing in The Believer, Mar introduces FM-2030, one of the founders of the transhumanism movement. FM-2030 has a single defining dream for the future of man: that we overcome our given, earthly, biological limits. If man, as Arendt writes, is both someone who lives in a given and fated world and someone who can change and re-make that world, transhumanists like FM-2030 imagine a time in the near future in which all biological, temporal, and physical limits will be overcome. Including death.
The ultimate goal for transhumanists has never been merely to improve mankind, but to defeat our greatest opponent: death. Of course, not all champions of Progress make the titanic leap to Immortality—the jump is so vast, so wildly immodest and presumptuous, as to cross over into the realm of the uncomfortably eccentric. But as FM would put it, “No one today can be too optimistic.” Transhumanists, in their crusade against time, have begun to buy themselves some of it, at the cost of a pricey life-insurance policy. With some cryoprotectants and a lot of liquid nitrogen, humanity—or at least the one-thousand-ish people affiliated with Alcor, currently the largest cryonics group in the country—has been gifted with the semi-scientific semi-possibility of radically extended life. Die a clinical “death,” go to sleep, wake up eons later, when existence is a whole new ball game. So when will immortality come?
To understand the human condition is also to know well our most human dreams. Today, technological optimism is at the center of those dreams. Fereidoun M. Esfandiary was for many the first great transhumanist of the late 20th century, the precursor to Ray Kurzweil, who also dreams of his own immortality. The story of his untimely death, and of the efforts to preserve him, reveals much about the movement he helped to found.
Read the article here.
Read related essays on the human dream of a non-human future here.
You can also purchase the inaugural issue of HA, the Hannah Arendt Center Journal, which features a selection of articles by Nicholson Baker, Babette Babich, Rob Riemen, Marianne Constable, and Roger Berkowitz from our 2010 conference, “Human Being in an Inhuman Age.”
In October, a yellow school bus full of Bard High School Early College students made the trip to Bard College for the Hannah Arendt Center's Human Being in an Inhuman Age Conference. Alexandra Eaton--videographer, Bard grad, and friend of the Arendt Center--was on the bus with the students and followed them throughout the conference. Here is her remarkable video essay of the early college students and their engagement with the conference.
On Wednesday, Sept. 15th, The Hannah Arendt Center inaugurated our new series of Wednesday lunchtime talks. The talks take place in the Mary McCarthy House seminar room.
For the first lunchtime talk, I invited three colleagues to discuss Ray Kurzweil's book, The Singularity Is Near. The talks are viewable below.
If there is one core assumption that some, like Ray Kurzweil, make, it is that what we do, how we think, and what we are as humans can, like all things in nature, be understood. Quite simply, Kurzweil adopts the fundamentally scientific view that the world is an ordered universe that can be analyzed, comprehended, and ultimately mastered. The fundamental scientific hypothesis is the principle of sufficient reason: everything that is has a reason, and nothing can be at all without one. Since human beings exist, we too must be rationally comprehensible. Why can't we too be understood, figured out, reverse engineered, and even engineered?
One rejoinder to Kurzweil's scientific optimism comes from recent work in neuroscience, the science of the brain.
William Connolly argues that the world cannot be dominated by a theoretical gaze, but must be explored as an open world to which we belong, in whose construction we participate.
The point is that there are limits to the human ability to know the world, a world that is not a static object before us, but a growing and open experience that we are helping to build.
The impact of this human limitation is made vivid and visible in neuroscience. Connolly offers examples from the neuroscience literature of people like Philip, who lost his left arm and, like many others, is plagued by 'phantom pain'--a pain that cannot be relieved because there is no arm to treat. Discussing the work of V.S. Ramachandran, Connolly argues that there is a
gap between third-person observations of brain/body activity, however sophisticated they have become in recent neuroscience, and phenomenological experience correlated with those observations.
Why is there this gap? Why is it that scientific attempts to observe and explain the brain and thinking activities do not match up with those experiences themselves?
Confronted with Philip and his phantom limb, the neuroscientist V.S. Ramachandran put Philip in front of a "mirror box" that allowed him to see an image in which both of his arms seemed to be working normally. This has worked, with Philip and other patients, to reduce the phantom pain.
According to Ramachandran, when the limb is lost, the usual messages sent from the arm to the brain can no longer be sent. There are thus no signals that might counteract and self-correct the pain signals being sent by the brain. The "mirror box" seems to trick the brain into seeing the arm again, allowing the signals of a pain-free arm to be reactivated, even though the arm is still missing.
What Ramachandran takes from his "mirror box" treatment is a strong skepticism about the computer-oriented models of the brain that many, like Kurzweil, work from. The brain, he argues, is not like a computer with each part performing one highly specialized job. Instead,
[The Brain's] connections are extraordinarily labile and dynamic. Perceptions emerge as a result of reverberations of signals between different levels of the sensory hierarchy, indeed even across different senses. The fact that visual input [i.e. the mirror box] can eliminate the spasm of a nonexistent arm and then erase the associated memory of pain vividly illustrates how extensive and profound these interactions are.
Ramachandran's neuroscience shows that thinking, our human thinking, is a complex and layered activity with dissonant relays and complicated feedback loops that connect different and competing centers of bodily and brain activity. The brain, in other words, is not simply a computation machine that works upon logical inputs. Instead, body image, affect, the unconscious, and other images and sensations are parts of thinking.
There are more than 100 billion neurons in the brain, and each neuron has between 1,000 and 10,000 connections with other neurons. All told, "the number of possible permutations and combinations of brain activity, in other words the numbers of brain states, exceeds the number of elementary particles in the known universe."
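That claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below, in Python, uses assumed round figures (100 billion neurons, roughly 10,000 connections each) and the crude simplification that each connection is simply "on" or "off"--illustration, not neuroscience:

```python
import math

# Rough, assumed figures: ~1e11 neurons, ~1e4 connections per neuron.
neurons = 10**11
connections_per_neuron = 10**4
synapses = neurons * connections_per_neuron  # ~1e15 connections in total

# If each connection is crudely treated as on/off, the number of
# possible brain states is 2**synapses. That integer is far too large
# to compute directly, so we work with its order of magnitude instead.
log10_states = synapses * math.log10(2)  # number of decimal digits, ~3e14

# Commonly cited estimate: ~1e80 elementary particles in the
# observable universe.
log10_particles = 80

print(f"log10(brain states) ≈ {log10_states:.2e}")
print(log10_states > log10_particles)  # the claim holds by a vast margin
```

Even with these toy assumptions, the number of states has on the order of 10^14 digits, while the particle count has only 81; the comparison is not close.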
Connolly's analysis of Ramachandran's work leads him to argue that, since our brain works according to such a complex and infraperceptual model, the confidence of scientists like Kurzweil that we can model the brain is misplaced. The presence of such infraperceptions in our brain and our thinking is evidence of the "layered character of everyday perception" and suggests that the brain is not a logical, computer-like, reasoning mechanism that can be modeled and reverse engineered.
I think Connolly is right. And yet, even if human thinking is multilayered, complex, and not rationally deliberative, it is not inconceivable that computers might someday approach human thinking. More important, however, is the fact that as human beings continue to "enhance" their thinking with technology and computer assistance, their own thinking will increasingly be rationalized. A few questions:
1) What will happen when humans have processors in their brain or assisting their thinking? Will we, technologically given the ability to process stimuli at speeds not humanly imaginable, be able to cognitively evaluate our stimuli and overcome the limits of human thinking?
2) As we augment our own senses with neural implants, will humans be overwhelmed by intense sensations beyond the human sensory capacity so that we either are paralyzed by sensory overload or depend on increased cognitive power from computers simply to make sense of our world?
If the answers to both of these questions are at least a qualified yes, then the human thinking that Connolly celebrates becomes something that can be, and likely will be, overcome. Is this a problem? As Ben Stevens keeps asking on this blog: so what? Does "non-human" mean "inhuman"?
Hannah Arendt certainly thinks so. At the most basic level, Arendt thinks that humans, to be human, must be subject to chance, to fate, and to the spontaneity of a world beyond their control. Humans must, in other words, be mortal beings who are born and die in ways beyond their control. Without that mortal subjection to fate, humans increasingly lose their freedom and humanity.
The question remains: how to be human in an increasingly inhuman age?
One of those controversies from last summer that somehow passed me by while I was teaching in Italy was the dust-up over the Fertility Institutes' decision, first to offer a service allowing parents to choose basic traits for their children (hair and eye color, etc.), and then its subsequent retreat in the face of an ethical uproar.
The Institute did and still does provide screening for many diseases, as well as pre-selection options for sex (it has a 100% success rate in pre-selecting the sex of the child). But the uproar over hair and eye color was too much to bear.
One question raised by this is why we allow screening for sex preference and disease, but not hair color. Consider that discrimination is much more prevalent in the areas of disability and sex than it is around eye color. So we permit the erasure of, or compensation for, those traits and conditions that society deems meaningful, but prohibit choice around less impactful traits.
In a March 9, 2009 WIRED online interview, James Hughes--who will be speaking at the Arendt Center's October conference Human Being in an Inhuman Age--defends the rights of parents to make such choices. And he rejects the term "designer babies." As he said in the H+ interview:
“It’s inevitable, in the broad context of freedom and choice. And the term ‘designer babies’ is an insult to parents, because it basically says parents don’t have their kids’ best interests at heart,” he said. “If I’ve got a dozen embryos I could implant, and the ones I want to implant are the green-eyed ones, or the blond-haired ones, that’s an extension of choices we think are perfectly acceptable — and restricting them a violation of our procreative autonomy.”
In a poll cited by H+, the majority of respondents supported preimplantation genetic selection in questions of health, but most drew the line somewhere around selecting for athletic ability or intelligence.
More recently, a January 2009 study by researchers at NYU Langone Medical Center found that an overwhelming 75% of parents would be in favor of trait selection using PGD – as long as that trait is the absence of mental retardation. A further 54% would screen their embryos for deafness, 56% for blindness, 52% for a propensity to heart disease, and 51% for a propensity to cancer. Only 10% would be willing to select embryos for better athletic ability, and 12.6% would select for greater intelligence. 52.2% of respondents said that there were no conditions for which genetic testing should never be offered, indicating widespread support for PGD – as long as it’s for averting disease and not engineering human enhancement.
It seems that there are at least two worries that need to be thought through. One is socio-economic. To allow for-profit companies to genetically implant desired traits means that those who can pay for it, and want to, will secure a genetic advantage for their progeny. Now, a genetic advantage does not mean one will succeed, but it is an advantage--at least a perceived advantage, even if not always a real one.
The second worry is broader. If Hannah Arendt and others are right that one part of our humanity is that we are subject to chance and to fate, then the opportunity to control all decisions in life--including the decision of life itself--raises questions about our humanity. As the mystery is taken from childbirth, one of the great human experiences begins to approximate the experience of ordering from a catalogue. Of course, this is not yet the case. But with surrogate pregnancies and genetic implants, it is now possible to go on vacation, order a boy with green eyes, blond hair, and high intelligence, and come home to pick him up. Is this a human way to have children? Is it human to eradicate disease? What about creating such healthy people that they live to be 700 years old, as Ray Kurzweil imagines will soon be the case?
One needs to think about what it means to be human in a world of super-human technologies.
And read about the latest cloned mammal, Got, the Spanish fighting bull.
The first installment of The Times's series on artificial intelligence was a wide-eyed look at the Singularity movement, profiling Ray Kurzweil, who will be speaking at the Arendt Center Conference on Friday, Oct. 22. Kurzweil will speak for an hour, followed by a discussion between Mr. Kurzweil and Bard President Leon Botstein.
The latest entry looks at the rapid advance of speech software and artificial intelligence that allows machines to have meaningful conversations and perform tasks that require comprehension, conversation, and some level of thinking and learning.
For decades, computer scientists have been pursuing artificial intelligence — the use of computers to simulate human thinking. But in recent years, rapid progress has been made in machines that can listen, speak, see, reason and learn, in their way. The prospect, according to scientists and economists, is not only that artificial intelligence will transform the way humans and machines communicate and collaborate, but will also eliminate millions of jobs, create many others and change the nature of work and daily routines.
This also raises the question of what the role of humans will be in the world when machines do more of the work that we have traditionally done. As Bill Joy worried in Wired, what besides altruism will lead our political leaders to keep superfluous workers alive when not only factory work but also teaching, warfare, and administration can be done by an automated workforce?
The advances, according to The Times, herald an era in which we interact with machines much as we do with friends and co-workers:
"Our young children and grandchildren will think it is completely natural to talk to machines that look at them and understand them,” said Eric Horvitz, a computer scientist at Microsoft’s research laboratory who led the medical avatar project, one of several intended to show how people and computers may communicate before long.
The question here, as with so much in the world of artificial intelligence, is what actually distinguishes humans from machines. If it is simply the ability to think, to reason, and to calculate, the day is coming when that difference will be no difference. This requires that we ask about the soul, the heart, and the ineffable nature of humanity. And as more and more of the beings we interact with and institutions we work with are governed by computer rationality, what does it mean today to be human?