Professor Samuel Moyn of Columbia University recently gave a talk at Bard sponsored by the Hannah Arendt Center and the Human Rights Program.
The talk addressed themes from his recent and widely discussed book, The Last Utopia: Human Rights in History (Harvard 2010). Professor Moyn is also the author of Origins of the Other: Emmanuel Levinas Between Revelation and Ethics (2005) and A Holocaust Controversy: The Treblinka Affair in Postwar France (2005), as well as numerous articles. He is editor of the journal Humanity and co-editor of the journal Modern Intellectual History. In 2007 he received Columbia University's Mark Van Doren Award for excellence in teaching.
Jaron Lanier has another excellent essay probing the dangers that technophilia poses to human thinking. His theme is technology in the classroom and he asks: Does the Digital Classroom Enfeeble the Mind?
What makes Lanier perhaps the most thoughtful essayist on the question of human thinking is his insistence on asking: What happens when we use technology to solve problems that we humans cannot understand? In this case he asks: what are the possible advantages and dangers of using technology to teach our children to think?
Clearly educational technology has its advantages. My five-year-old daughter plays Brain Quest on my iPhone and has learned addition and division earlier than most. Her new kindergarten classroom is outfitted with Smartboards that bring children into the educational process. My colleague at Bard has students post drafts of their papers on Moodle (an online teaching resource) and has the students edit each other's papers before rewriting them. And technological evaluation of teachers and students allows us to identify what works and what doesn't work in the classroom.
I have no doubt that computers can make teachers better and may, in certain tasks, be better than a human teacher. A robot may be better than a teacher at drilling a young student on vocabulary and math. A computer may also be a better grader than a teacher, especially given the rampant grade inflation of recent years. If every teacher were presented with a computer-generated grade of a student test or paper based on objective criteria, that might go a long way toward counteracting the emotionally laden tendency to soften our grades and simply pass on weak students without challenging them to do better.
As technology infiltrates the classroom and shines light on what teachers do well and what they do badly, it also reorients teaching in a way that denies the magic of thinking and teaching well. As Lanier insightfully notes, there is a magic to teaching that our reliance on technology seems destined to overlook. I am a teacher and I can tell you that the most valuable and extraordinary moments of teaching and learning happen in those surprising moments when teacher and student are transfixed on the precipice of a question that lingers for minutes, days, or weeks. To be able to break through a student's commonsense assumptions and confidence about how they see the world, and to open up new paths of inquiry and thought--that is what makes teaching wonderful.
Can a computer do such a thing? I admit it is possible. But I also say that there is no model for doing so, and the point is that we do not know how it is done. It is something a teacher--a good teacher--learns to do, and we cannot yet possibly program a computer to do such a thing--although it may be possible that computers learn to do it themselves. We cannot, as Lanier suggests, bottle the magic of teaching in a computer, because we do not yet understand the magical aspects of human thinking. To ignore our ignorance and subject more and more of our educational efforts to technological controls and measures is to forget that even today, as Lanier writes, "Learning at its truest is a leap into the unknown."
If I have one quibble with Lanier's overwhelmingly intelligent approach to these questions, it is his focus on technology as the question. In his essays and his book You Are Not A Gadget, Lanier presents our challenge as how to deal with technology. In this most recent essay, he offers a version of the alternative that he has now presented in many different forms. Either we use computers and robots and systems to "measure and represent the students and teachers," or we employ technology to help teach the students how to "build a virtual spaceship." The former approach subjects thinking and teaching to computerized models. The latter frees students to use technology in their own thoughtful pursuits. Lanier is right that we should encourage the building of spaceships and be wary of teaching software that forgets that thinking is a magical process beyond our comprehension.
What such an opposition overlooks, however, is that the human decisions about whether and how to use technology are themselves subject to a deeper desire: The desire to relieve ourselves of the burdens of thinking.
Hannah Arendt diagnosed this same desire to overcome our thinking, our acting, and thus our humanity in her book The Human Condition, published in 1958. Discussing Sputnik, which she calls the most important event of the modern age, Arendt cites The New York Times's observation that Sputnik was the first "step toward escape from men's imprisonment on earth." She sets this sentiment alongside a 1911 quote from the Russian scientist Konstantin Tsiolkovsky: "Mankind will not remain bound to the earth forever." Together, these statements manifest a profound desire to leave behind the earth--a desire that Arendt believes long precedes our technological ability to do so.
The desire to leave the earth is, for Arendt, the desire to abandon our humanity. As she writes:
"The earth is the very quintessence of the human condition."
When she speaks of the "earth," Arendt means that aspect of our lives which mankind does not create--that which is beyond human control and human artifice. The earth names that quintessential aspect of human existence that is given, free, as she writes, from human intervention. Our earthliness is "a free gift from nowhere." This free gift of human existence can, of course, be understood religiously as man's divine creation by God. But it can also be understood, as Arendt means it, in a secular sense, as the fact that mortal beings are subject to fate and chance beyond their control and comprehension. It is this earthly subjection to chance that Arendt says is the "quintessence of the human condition."
Is Arendt right that the subjection to chance and fate is at the essence of being human?
If she is, then she is right to worry that our dreams of abandoning the earth, along with our dreams of creating life in the test tube, our dreams of producing superior human beings as well as our dreams of breeding intelligent servants, our dreams of living forever, and, of course, our dreams of creating robots intelligent enough to teach and think for us--all of these dreams manifest an urgent desire on the part of humanity to cut the last tie to our humanity. What we humans want, Arendt argues, is to commit suicidal genocide. It is not technology that is the danger; technology is only an expression of our darker and deeper urge to overcome ourselves.
Lanier is a prescient guide to the right questions about our engagement with technology, but when he expresses the hope that we will decide to use technology wisely, he proceeds on the assumption that it is the technology that is the danger and we thoughtful humans need to understand and resist that danger. What Arendt places before us is the troubling insight that technology's dangers are only symptoms of an all-too-human wish to extinguish the very thoughtful, soulful, and creative impulses that distinguish us.
What needs to be asked is not whether new possibilities will emerge for using technology well--of course they will, although these possibilities will likely be ever more rare. The bigger question is: what is driving our communal desire to exchange our earthly humanity for ever-more-rational and ever-more-expertly-conceived ways of life? Arendt offers us an answer. We want to exchange the free gift of life for a planned life. We want to exchange freedom for behavior. And we want to exchange thoughtful creativity for the security of reason. It is these fundamental desires--themselves human--that we must grapple with in an increasingly inhuman age.
You could do far worse than spending a few minutes on a Sunday reading Scott Horton's brief interview with Julian Young on the philosopher of the bent bow.
“Without music life would be an error” is a great T-shirt slogan, but its meaning is far from obvious. Here is how Nietzsche glosses his aphorism in a letter from 1888, the last year of his sanity:
Music … frees me from myself, it sobers me up from myself, as though I survey the scene from a great distance … It is very strange. It is as though I had bathed in some natural element. Life without music is simply an error, exhausting, an exile.
Nietzsche’s first book, The Birth of Tragedy, dedicated to Richard Wagner, is constructed around the duality between the “Apollonian” and the “Dionysian.” Apollo stands for intellect, reason, control, form, boundary-drawing and thus individuality. Dionysus stands for the opposites of these; for intuition, sensuality, feeling, abandon, formlessness, for the overcoming of individuality, absorption into the collective. Crucially, Apollo stands for language and Dionysus for music. What, therefore, music does is to–as we indeed say–”take one out of oneself.” Music transports us from the Apollonian realm of individuals to which our everyday self belongs and into the Dionysian unity. Music is mystical.
Since the human essence is the will to live–or for Nietzsche, the “will to power”–the worst thing that can happen to us is death. Death is our greatest fear, so that without some way of stilling it we cannot flourish. This is why musical mysticism is important. In transcending the everyday ego we are delivered from “the anxiety brought by time and death.” Through absorption into what Tristan und Isolde calls the “waves of the All,” we receive the promise and experience of immortality.
Later on, Nietzsche realized that not all music is Dionysian. Much classical music, based as it is on the geometrical forms of dance and march, is firmly rooted in the Apollonian. Yet as the 1888 letter indicates, he never abandoned the musical “antidote” to death. Without music, life would be anxiety and then extinction. Without music, life would be an “exile” from the realm of immortality.
On Wednesday, Sept. 15th, The Hannah Arendt Center inaugurated its new series of Wednesday lunchtime talks. The talks take place in the Mary McCarthy House seminar room.
For the first lunchtime talk, I invited three colleagues to discuss Ray Kurzweil's book, The Singularity is Near. The talks are viewable below.
Making History in the Courtroom
From the Soviet Show Trials to the Khmer Rouge Trials
International Conference, Cardozo Law School & NYU Law School, New York, September 16 & 17, 2010
In anticipation of Bard’s upcoming fall conference (“Human Being in an Inhuman Age”) and reflecting upon several related threads in recent blogs (regarding “the wonders of man in the age of simulation”), I’ve found myself thinking about Rabbi Joseph Soloveitchik’s observations concerning the profound split in human nature.
It’s a division Soloveitchik traces back to the two creation stories in the Old Testament. In the first creation story (“Genesis I”), we read: “God created man, in the likeness of God made he him.” Created in God’s likeness, the first Adam stands as both the model and champion of humanity’s instrumental mastery over the earth and all that it contains. (“Fill the earth and subdue it, and have dominion over the fish of the sea, over the fowl of the heaven, and over the beasts, and all over the earth.”) Humankind’s mimetic faculty, in other words, correlates to material mastery. In the second creation story, by contrast, we find no reference either to images or to mastery. Instead, we read: “God breathed into his nostrils the breath of life; and man became a living soul.”
The chief variation in this version consists in the gift of life in the form of God’s breath. With the introduction of this immaterial element, the second creation story shifts focus, along with its normative register. Dominion over the material world gives way to a very different purpose. Placing Adam in the Garden of Eden, God instructs him “to dress it and to keep it.” In other words, mastery now yields to solicitude and conservation. If the first Adam is the master of creation, the second Adam is its self-denying caretaker. In short, if our first nature is instrumental, in the service of command and control, our second is responsive, mindful of that which requires care or service.
Today, it is the spirit of mastery that seems to be on the upswing. Whether it’s the culture of digital gaming, the likes of Kurzweil’s immortal “spiritual machines,” or popular films like The Matrix and Dark City, the message we hear is: “you can have it all!”
Dreams and the will to power, desire and reality, converge. Yet, it is this very convergence that may threaten the human – if we think of the “human” in terms of finitude, suffering, fragility, and the inevitability of uncertainty. This human reality is precisely what the will to material mastery (and dreams of digital immortality) deny. In this respect, Genesis I trumps Genesis II. The impulse to control is displacing our capacity for self-demotion in the service of what is other (beyond control). Otherness precludes mastery. Instead, it invites wonder. Wonder is the way we respond to that which goes beyond rational or instrumental control or mastery. This is the sublime. We experience it in the infinite call of nature (“beauty”) and in the infinite demands of the other who stands before us (“the ethical”). Judgment (of the beautiful and the just) begins in wonder, in the face of the real.
Sherry Turkle writes that digital simulation tends to undermine our fealty to the real. If this is so, authentic judgment may have no place in the domain of digital simulation. That claim looms large when law itself migrates to the screen (e.g., in the form of visual evidence and visual argument in court). This phenomenon has preoccupied my attention over the last decade or so, initially in my book When Law Goes Pop (Chicago: 2000) and more recently in my book Visualizing Law in the Age of the Digital Baroque: Arabesques & Entanglements (Routledge: forthcoming 2011). What happens when visual images become the basis for judgment inside the courtroom? How does the image – the amateur documentary, the police surveillance video, the fMRI of brain or heart, or the digital re-enactment of accidents and crimes – affect law’s ongoing quest for fact-based justice? Upon reflection, it becomes plain that judgments based on visual images arise in a different way, with different aesthetic and ethical consequences, than when they rest upon words alone. Nor is visual literacy a given. We need to carefully decode the truth claims of images on the screen, but in order to do that we must first crack the code that constitutes the meaning they provide. And the code changes with the kind of image we see. Regardless, we all tend to be naïve realists when it comes to images. “Seeing is believing.” We tend to look through the screen as if it were a window rather than a construct.
When law lives as an image on the screen, it lives there the way other images do, for good and for ill. Law emulates the cultural constructs of popular entertainment as well as the aesthetics of science. When law lives as an image it, too, takes delight in images of a brain glowing with the beautiful, digitally programmed colors of visual neuroscience. Thus, the images on which legal judgments are based may serve as factual anchors or merely as a source of aesthetic delight, as reliable information or as unmitigated fantasy or illicit desire. So it’s no idle matter to ask, in what reality (if any) does the digital image partake? When fact-based justice rests upon digital simulation its claim to truth may come from a fantasy.
Like an image, law invites us to forget or deny what lies beyond its mimetic (figurative) aspect. Law’s oscillation between aesthetic form (image, figure, copy, text) and moral authority reenacts humanity’s historic vacillation between the two poles of our nature: mastery (Genesis I) and service (Genesis II). In the endless dance of power and meaning, Adam I and Adam II recapitulate the King’s two bodies, the letter and spirit of the law. Law oscillates between these two poles. Law commands, but it wants its commands to be accepted not simply out of fear of punishment, but also, even more importantly, in the belief that it is just. Without good (non-punitive, moral) reasons to accept its coercive power, law remains merely a gunman writ large.
And so, in a visual age like ours, it becomes incumbent upon all of us – jurists and lay people alike – to discern with great care whether or not the screen images we see are capable of bringing justice to mind.
John Markoff has a new installment in The New York Times' Smarter Than You Think series today, "The Boss Is Robotic, and Rolling Up Behind You." After an earlier article looking at the use of robots in the classroom, here Markoff looks at the use of robots to enhance and expand the reach of those in higher-level management positions. The importance of these articles is that the robots Markoff is investigating are not for low-level menial tasks like factory work or giving solace to elderly patients. The great change coming to our economy and our lives is that the automation of handwork that has hollowed out the lives of so many blue-collar laborers is now coming to the professions usually thought immune to the threats of automation. As robots get smarter and more mobile, the human advantages of thinking and walking are being whittled away.
With the help of the RP-7i, a robot from InTouch Health, Dr. Alan Shatzel can sit at home and roll into a patient's room at any hospital where an RP-7i is stationed.
The advantage of the RP-7i is that the doctor can "be in the room," not only hearing and seeing as if on a teleconference call, but being present via what is referred to as "telepresence." The doctor can speak with the patient, zoom in on the monitors, and note the way the patient uses their hands or curls their lips. As Dr. John Whapham, who also uses an RP-7i, says of the experience:
You're live, and you can walk around, examine, image, zoom in and out. I do it all the time.
Markoff explores a number of these new telepresence robots and notes that they offer the promise of enhancing the work of doctors as well as other professionals. These professionals will be freed from their physical offices even more than they currently are.
In addition, they will be able to work in many locations at once. From an economic perspective, one can easily imagine a hospital or chain of hospitals reducing the number of chief surgeons from, say, 10 to 5, as those five now sit in a control room monitoring different groups of patients via different telepresences in different hospitals. Whereas for centuries automation was largely seen as a threat to lower and menial workers, advances in technology are now threatening to transform the work of the most highly educated elite.
Finally, these telepresence robots are not mere cost-cutting devices, although they are that as well.
For now, most of the mobile robots, sometimes called telepresence robots, are little more than ventriloquists' dummies with long, invisible strings. But some models have artificial intelligence that lets them do some things on their own, and they will inevitably grow smarter and more agile. They will not only represent the human users, they will augment them.
Soon these robots will, as Markoff writes, include artificial intelligence features that will enhance the surgeon's own human capacities. The robots will have infinite data-storing capacities to access records of past procedures and scan a patient's entire medical history. There is little doubt that as these machines progress quickly, they will be second-guessing and advising the doctors who control them.
So what does it mean that the robots will augment their human users?
1. Economically, the world will have use for far fewer highly-trained doctors. I have written about how robots are replacing teachers as well. This is part of the more general threat that computers and robots pose to the middle and even upper-middle classes in the next few decades. As my colleague Walter Russell Mead writes in a recent blog post:
The upper middle class benefited over the last generation from a rising difference between the living standards of professional and blue collar American workers. This is likely to change; from civil service jobs in government to university professors, lawyers, health care personnel, middle and upper middle management in the private sector, the upper-middle class is going to face a much harsher environment going forward. Automation, outsourcing and unremitting pressures to control costs are going to squeeze upper middle class incomes. What blue collar workers faced in the last thirty years is coming to the white collar workforce now.
2. Medical care will change as doctors work alongside artificial intelligence robots. Just as computer-assisted chess players make fewer mistakes and take fewer chances, so that more games end in draws, computer-assisted medicine will become more careful and proficient.
Those familiar with Hannah Arendt's work will recall her own certainty that the rise of automation would soon have an extraordinary impact on our world. Her worry was that humans today are simply not prepared for a life in which most of us will not have jobs, because there will not be much left for humans to do that computers and robots cannot. Thus at the very time when automation promises to realize the ancient dream of freeing us from the necessity to labor, we humans don't know what to do with our time outside of our work. The threat of automation, she writes, is political as much as it is economic. But more on this later.
Read more of Markoff's article in The New York Times.
If there is one core assumption that some, like Ray Kurzweil, make, it is that what we do, how we think, and what we are as humans can, as can all things in nature, be understood. Quite simply, Kurzweil adopts the fundamentally scientific view that the world is an ordered universe that can be analyzed, comprehended, and ultimately mastered. The fundamental scientific hypothesis is the principle of sufficient reason: that everything that is has a reason. Thus, nothing can be at all without a reason. Since human beings exist, we too must be rationally comprehensible. Why can't we too be understood, figured out, reverse engineered, and even engineered?
One rejoinder to Kurzweil's scientific optimism comes from recent work in neuroscience, the science of the brain. As the political theorist William Connolly argues, the world

cannot be dominated by a theoretical gaze, but must be explored with an open world to which we belong, in whose construction we participate.
The point is that there are limits to the human ability to know the world, a world that is not a static object before us, but is a growing and open experience that we are helping to build.
The impact of this human limitation is made vivid and visible in neuroscience. Connolly offers examples from the neuroscience literature of people like Philip, who lost his left arm and, like many others, is plagued by "phantom pain"--a pain that cannot be relieved because there is no arm to treat. Discussing the work of V.S. Ramachandran, Connolly argues that there is a
gap between third-person observations of brain/body activity, however sophisticated they have become in recent neuroscience, and phenomenological experience correlated with those observations.
Why is there this gap? Why is it that scientific attempts to observe and explain the brain and thinking activities do not match up with those experiences themselves?
Confronted with Philip and his phantom limb, the neuroscientist V.S. Ramachandran put Philip in front of a "mirror box" that allowed him to see an image in which both his arms seemed to be working normally. This has worked, with Philip and other patients, to reduce the phantom pain.
According to Ramachandran, when the limb is lost, the usual messages sent from the arm to the brain cannot be sent. There are thus no signals that might counteract and self-correct the pain signals being sent by the brain. The "mirror box" seems to trick the brain into seeing the arm again, allowing the signals of a pain-free arm to be reactivated, even though the arm is still missing.
What Ramachandran takes from his "mirror box" treatment is a strong skepticism about the computer-oriented models of the brain that many, like Kurzweil, work from. The brain, he argues, is not like a computer with each part performing one highly specialized job. Instead,
[The Brain's] connections are extraordinarily labile and dynamic. Perceptions emerge as a result of reverberations of signals between different levels of the sensory hierarchy, indeed even across different senses. The fact that visual input [i.e. the mirror box] can eliminate the spasm of a nonexistent arm and then erase the associated memory of pain vividly illustrates how extensive and profound these interactions are.
Ramachandran's neuroscience shows that thinking, our human thinking, is a complex and layered activity with dissonant relays and complicated feedback loops that connect different and competing centers of bodily and brain activity. The brain, in other words, is not simply a conception machine that works upon logical inputs. Instead, body image, affect, the unconscious, and other images and sensations are parts of thinking.
There are more than 100 billion neurons in the brain, and each neuron has between 1,000 and 10,000 connections with other neurons. All told, "the number of possible permutations and combinations of brain activity, in other words the numbers of brain states, exceeds the number of elementary particles in the known universe."
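The combinatorics behind that claim can be made concrete with a back-of-the-envelope calculation. This is my own illustration, not Connolly's or Ramachandran's: it deliberately uses the crudest possible model, in which each connection is merely "on" or "off," and takes roughly 10^86 as a common upper estimate for the number of elementary particles in the observable universe.

```python
import math

NEURONS = 1e11        # ~100 billion neurons
SYNAPSES_PER = 5_000  # midpoint of the 1,000-10,000 range

total_connections = NEURONS * SYNAPSES_PER  # ~5e14 connections

# Even if each connection were a simple binary switch, the brain would
# have 2**total_connections possible states. That number is far too big
# to compute directly, so compare orders of magnitude instead.
log10_states = total_connections * math.log10(2)  # number of digits, ~1.5e14
log10_particles = 86  # elementary particles in the observable universe ~ 10**86

print(log10_states > log10_particles)  # the state count wins by an astronomical margin
```

Even this toy model, which ignores that real synapses are graded rather than binary, yields a number of states whose digit count alone (~10^14 digits) dwarfs the particle count.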
Connolly's analysis of Ramachandran's work in neuroscience leads him to argue that, since our brain works according to such a complex and infraperceptual model, the confidence of scientists like Kurzweil that we can model the brain is misplaced. The presence of such infraperceptions in our brain and our thinking is evidence of the "layered character of everyday perception" and suggests that the brain is not a logical, computer-like reasoning mechanism that can be modeled and reverse engineered.
I think Connolly is right. And yet, even if human thinking is multilayered, complex, and not rationally deliberative, it is not inconceivable that computers might someday approach human thinking. More important, however, is the fact that as human beings continue to "enhance" their thinking with technology and computer assistance, their own thinking will increasingly be rationalized. A few questions:
1) What will happen when humans have processors in their brain or assisting their thinking? Will we, technologically given the ability to process stimuli at speeds not humanly imaginable, be able to cognitively evaluate our stimuli and overcome the limits of human thinking?
2) As we augment our own senses with neural implants, will humans be overwhelmed by intense sensations beyond the human sensory capacity so that we either are paralyzed by sensory overload or depend on increased cognitive power from computers simply to make sense of our world?
If the answers to both of these questions are at least a qualified yes, the human thinking that Connolly celebrates becomes something that can be, and likely will be, overcome. Is this a problem? As Ben Stevens keeps asking on this blog: so what? Does "non-human" mean "inhuman"?
Hannah Arendt certainly thinks so. At the most basic level, Arendt thinks that humans, to be human, must be subject to chance, to fate, and to the spontaneity of a world beyond their control. Humans must, in other words, be mortal beings who are born and die in ways beyond their control. Without that mortal subjection to fate, we increasingly lose our freedom and humanity.
The question remains: how to be human in an increasingly inhuman age?