“There are no dangerous thoughts; thinking itself is dangerous.”
Roger Berkowitz, Director of the Arendt Center, held a lecture this week titled “Earth Alienation from Galileo to Google,” as part of the Rostrum Lecture Series sponsored by Bard’s Language & Thinking Program.
You can read the text of his lecture here.
In his talk, Berkowitz writes:
My Thesis today is: The scientific way of thinking inaugurated by Galileo in the 17th century is, in the first decades of the 21st century, forcing us to ask the question that the scientific approach to the world has harbored all along: Is humanity important?
How we humans answer this question will have a greater impact on our world than any scientific, technological, economic or artistic innovation that we may witness. For one thing, in an age of nuclear and biological weapons, we—or some few of us—may well choose to extinguish humanity. Or, in an age of automation where robots and machines are able to perform most economically necessary tasks, those in power may decide that it is better to euthanize the masses of superfluous persons for either economic or environmental reasons, or both.
Although nuclear Armageddon is one button away and Sun Microsystems co-founder Bill Joy has publicly raised the possibility of culling the superfluous, it is far more likely that we as a species will ignore the question.
I fear, however, that the refusal to confront the question of humanity’s worth will lead to very nearly the same effect as an affirmative decision of humanicide: In other words, we are now threatened with the possibility that the kindling of the human spark will dampen so that the darkness of the world will be interrupted only with the most fleeting fires of the human spirit.
In her book Simulation and Its Discontents, MIT Professor Sherry Turkle argues that what simulation wants is immersion in the simulated world that is so complete that it serves as a proxy for the real. Turkle’s worry, or the worry she reports from the scientists she studies in her book, is that simulation replaces reality with a deceptive simulacrum that is so compelling that we take it as real even when it is not. I have discussed Turkle’s thesis here. And here.
In a fascinating TED lecture, Pranav Mistry, Turkle’s colleague at MIT, has a completely different take, arguing that simulation will free us from computers that divide us from the real world. By “getting rid of the digital divide,” Mistry argues, simulation will actually make us more human. Watch the video of his TED talk here and see if you agree.
More human? Less human? Differently human. I think it undeniable that this technology will change our world and our understanding of ourselves.
Remember to attend the Arendt Center’s Conference, Human Being in an Inhuman Age.
So, too, is Sophocles’ Antigone. Are these fictions not simulations? For my money, then, what remains to be seen is whether increased pervasion of simulation is qualitatively different from traditional or non-technoscientific modes of mediation including products of verbal art like drama, poetry, and 20th-century philosophy. Are these last of such a different quality or order, of such a factual humanity, as still to make technoscientific modes of mediation seem, by contrast, the more (dis)simulative?
In the comments, I responded: Is a book (a technology) the same as a story (also a technology)? Is a film the same as a book? Is Facebook the same as a movie?
My point is that Turkle argues that simulation wants something different than stories or books or movies. Those are media to entertain. Simulation wants a total immersion that becomes a proxy for the real. Contextualizing is important, but you have to take seriously the claims of the new technology. It may turn out that the claims are inflated and all will revert to a mere tool for human entertainment. But that is not necessarily true. Sometimes there are new things in the world.
Professor Thomas asks:
I truly don’t understand the question you are posing, and I hope you will clarify it. “Simulation” isn’t the type of thing that can “want,” right? So are you asking what the many developers and users of simulation want? Or are you asking toward what ends the possibility of simulation drives its users?
The question “what does simulation want?” is, as you say, a question of what does simulation–insofar as we use it–reveal about our wants and drives. Your formulation, to “what ends the possibility of simulation drives its users” is perfectly fine in my view, although I would replace “possibility” with “activity.” Insofar as we develop and use simulations, what does that reveal about our wants? And in what ways will simulation transform our wants and desires–thus, what does Simulation want?
This is the question Sherry Turkle asks, and her answer is: Simulation wants immersion in a virtual world that is so profound that it replaces the real. Or blurs with the real. Or is a proxy for the real. These aren’t the same. This needs to be fleshed out.
Ben asks: haven’t we always been living in fictions, and thus simulations? I agree. All common life together depends on fictions of unity and common ideas, customs, that form our sense of identity and comprise our world. Plato understood that politics is about the unification of a multitude, and this unity is always based in a fiction (see Nietzsche too, and Arendt). The question we are debating, as I understand it, is a version of “is this time different?” Always a difficult question in medias res. I don’t know the answer. But I do think that simulations, as I am coming to understand them, pose the possibility of a radical fictionalizing of the world in ways that will further attenuate our belief in a shared, commonly accessible world. If different people “see” and “feel” the world differently because of neural enhancements and ocular implants and artificial skin grafts, then the very idea of a common world of sense perception falls away and a new idea of reality, one suffused with simulation, takes its place. This is fundamentally different from the fantasy of a book or a movie. Even a religion, which offers a complete worldview, can be confronted with reality, as Galileo did. But in a world of simulation, that reality threatens to disappear.
I say all of this not entirely sure of how it works. But the confidence with which such researchers now embrace simulation is a shock to my system.
Jaron Lanier has quickly established himself as the most important opponent of the Singularity crowd. A silicon valley entrepreneur and one of the original pioneers of virtual reality, Lanier is hardly a Luddite. Yet he has been writing clear and provocative prose raising serious questions about the humanity of current trends on the internet and in society.
His op-ed in the NY Times today is a case in point. He is clear that most of what goes by the name AI is less intelligent and more simply a technological achievement. Yet, by calling it Artificial Intelligence, we demean and dumb down what we mean by intelligence. He writes:
What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people.
He offers as an example NYU Professor Clay Shirky, who suggests that when people forward tweets around and “re-Tweet,” this displays real thought and creativity, although not amongst humans but in a global brain. It is this kind of anti-humanism that Lanier is so trenchant at unveiling.
I review his most recent book, You Are Not a Gadget, here.
Read his Times Op-Ed Here
Here is my latest essay, “The Wonders of Man in an Age of Simulations,” which just appeared in The Fortnightly Review.
It is a review of books by Ray Kurzweil, Jaron Lanier, and Sherry Turkle and sets up the question of Human Being in an Inhuman Age, the topic of the Arendt Center’s upcoming conference.
Read the interesting history of The Fortnightly Review (founded by Anthony Trollope, Frederic Chapman and George Henry Lewes, with Lewes as its first Editor).
IN “THE ODE TO MAN” from Antigone, Sophocles conjures “Man” as the wondrous being who wears out the “imperishable earth” with his ploughs. This man “overpowers the rough-maned horses with his devices” and tames the “unbending mountain bull.” He flees the “stormy darts” of winter’s frost and he escapes “needful illness.” Such a man who tames nature is a wonder, according to the Ode’s opening line:
Manifold the wonders
And nothing towers more wondrous than man.
The Greek word for “wonder” is Deinon, which connotes both greatness and horror, that which is so extraordinary as to be at once terrifying and beautiful. This is how Sophocles understands man. As an inventor and maker of his world, man can remake and master the earth. This wonder terrifyingly carries the seeds of his destruction. Man, Sophocles imagines, threatens to so fully control his own way of life that he might no longer be man. As the chorus sings: “Always overcoming all ways, man loses his way and comes to nothing.” If man so tames the earth as to free himself from uncertainty, what then is left of human being?
A new urgency has energized those who welcome and those who fear the power of man to transform his nature. While hopes of technological utopias and fears of technological dystopias may be part and parcel of the human condition itself, we are living through a moment when extraordinary technological advances are once again raising the question of what it means to be human. The problem that confronts man in the 20th and now 21st centuries, as Hannah Arendt writes, is that we face the danger that we might so fully create and make our artificial world that we endanger that quality of human life which is subject to fate, nature, and chance. To bring oneself up to date on this current version of the debate over our human, superhuman, and inhuman futures, three recent books serve as excellent guides.
Three years ago when I decided to host a conference celebrating Hannah Arendt’s 100th Birthday (this was before the Arendt Center existed), the first email went to Christopher Hitchens. While he had not written on Arendt, somehow I knew that he was the right person to think with her in our times. He accepted immediately, graciously, and charitably and his talk, “Reflections on Antisemitism,” was thoughtful, witty, and profound. He spoke with students and guests late into the night as he downed fine Scotch and smoked. His generosity, curiosity, and brilliance are exemplary.
He has cancer now and you can read his typically trenchant thought on his illness here.
My favorite of his books is: Letters to a Young Contrarian.
Below are the first few pages of his essay, “Reflections on Antisemitism,” which is now published in Thinking in Dark Times: Hannah Arendt on Ethics and Politics.
In October 1956, exactly fifty years ago to the month that we celebrate Hannah Arendt’s one-hundredth birthday, the two Cold War colossi were being simultaneously convulsed by the uprising in Budapest and its repression by Soviet tanks. At the same time, the final act of Anglo-French imperialism in the Near East—you might prefer to say Middle East, or Western Asia—was taking place, in collusion with the state of Israel, with the invasion of Suez.
We know that the events in Hungary had an enormous emotional and intellectual impact on Hannah Arendt. The nature of this effect is somewhat enigmatic, which is why I want to begin with it. We know that she wrote a separate epilogue on these events for the second edition of The Origins of Totalitarianism, an epilogue she later removed. She didn’t airbrush it. She was candid about having removed it, as having, as she put it, “become obsolete in many details.” But she never actually said why it was that she had decided that her tribute to the Hungarian rebels wouldn’t stand the test of republication.
I want to begin by asking, “Why was that?” And that involves revisiting the events of 1956. Not alone were the Soviet tanks involved in the repression of the Hungarian revolution. There must also be dealt with, as was discussed by Hannah Arendt and many others, the betrayal of the Hungarian revolution by the statecraft of the United States—particularly by its Central Intelligence Agency, which, not unlike its performance in the year 1991 in Iraq, was content to issue incendiary broadcasts to the insurgents in Budapest, promising them help as long as they would continue to die. The poet e. e. cummings, I remember, wrote a song at that time called “Thanksgiving 1956,” which ends by saying:
“so rah-rah-rah democracy
let’s all be thankful as hell
and bury the statue of liberty
(because it begins to smell).”
If one takes the trouble to find her missing epilogue, one finds it’s full of surprisingly naive optimism—and surprisingly naive optimism is not a quality most saliently associated with the name of Hannah Arendt. I say it was naive because it stressed the spontaneous democracy of the workers’ councils that were set up in Budapest. I think perhaps here she was expressing a nostalgia—even a little romance—for the German revolutions of 1919 in Munich and elsewhere, in which her future husband Heinrich Blücher had played such an honorable part.
Arendt’s epilogue was naive also because it laid great stress on what she called the peaceful and orderly and good-humored crowds of Budapest. She rather romanticized the good-naturedness of the Hungarian revolution. Now, this optimism may possibly be justified in the long term, which is why it’s worth looking up that epilogue again. After all, in 1989, not more than three decades later, there was a peaceful, bloodless, and orderly velvet revolution; it had its beginning in Budapest when the Hungarians allowed their East German brethren to resist by transiting Hungarian soil without hindrance. It led, in the end, to the fall of the Berlin Wall. And that was a classic case of the recovery of what Arendt so beautifully called, I think, the lost treasure of revolution.
The lost treasure of revolution is the common property to which Hannah Arendt alludes, very lyrically, in the opening passages of her collection Between Past and Future. She describes this ability to recover freedom: the spirit of an unforced liberty that is latent, she thought, in all people and which she claimed to detect in “the summer of 1776 in Philadelphia, the summer of 1789 in Paris, and the autumn of 1956 in Budapest.” Which, as you can see, is putting 1956 in Budapest on quite a high pedestal and threshold. Now this concept of the hidden treasure, the treasure that’s always hidden but that can be reclaimed, is remarkable for its lack of what a Marxist would call concreteness. Here’s how it appears according to Hannah Arendt, this treasure: It appears only “under the most varied circumstances, appears abruptly, unexpectedly, and disappears again under different mysterious conditions, as though it were a fata morgana,” or, so to say, as a will-o’-the-wisp or ignis fatuus. The lost treasure of the revolution is a very, very elusive, almost ethereal concept for Hannah Arendt to be dealing with. And let me say, one of the nice things about reading and rereading Hannah Arendt is to discover how nice it is when she is fanciful every now and then.
But is the fantastical element of the lost treasure the reason why she so sternly decided to remove that epilogue? I think I know why she did it. Further research and disclosure of what happened that time in Budapest had brought it to her attention that those events in 1956 hadn’t been as beautifully spontaneous as she had supposed. Mixed into the grandeur of the Hungarian rebellion was quite a heavy element of ultra-Magyar, ultra-Hungarian nationalism. The revolution also included quite a lot of anti-Semitism, directed at the strongly Jewish membership and character of Hungary’s Communist elite. Many of the Jewish communist leaders had been denationalized from Hungary, having spent the war in the Soviet Union, in Moscow, some of them becoming Russian citizens. They came back to take over Hungary, which was still largely a Catholic, rural, and conservative country, and they did so only with the support of Red Army bayonets. The resentment aroused by the returning Jewish Communist leaders was considerable. The revolution did not lead to pogroms in the true, ghastly, meaning of the word, but there were some ugly lynchings of Jewish communists and some nasty rhetoric. And I think this must have weighed very much with her.
In commenting on my essay “Why We Must Judge,” Scott Horton writes:
One of the most serious distortions of liberalism in modern American thought could be reduced to a simple, oft-repeated phrase: don’t be so judgmental. The argument is that it’s healthy for citizens in a modern society to collect information and suspend the process of forming judgments. A core aspect of this approach is doubtless correct: as Count Tolstoy observed in What Is Art, even sophisticated minds are prone to fail to grasp essential facts if those facts contradict some conclusions they have already drawn. But this doesn’t mean that judgment should be suspended indefinitely. To the contrary, judgment is sometimes a moral imperative. Without judgment, there is no justice.
It is this idea that judgment is sometimes a moral imperative that is too often forgotten. Read Scott Horton’s full post.
Professor Stevens’s response to my post on genetically choosing traits in our offspring suggests that I, and Arendtians (whatever such a thing may be), think “technology is inimical to nature, and therefore undesirable.” He diagnoses a fear of technology, and, it seems, a nostalgia for a pre-technological age.
In the thinking of Arendt and her followers, it seems that changes in ‘humanity’ — especially changes caused by technology and its interdependence with modern government — lead automatically to ‘inhumanity’, i.e., to an undesirable absence of what is figured as having been, so far, human nature.
The point of concern is not that technology brings change. Far from it. I can’t imagine any reader of mine or of Arendt’s–to take one example, her paean to revolutions in On Revolution— thinking her inimical to change. She values, above all, spontaneity and creativity and thus the possibility for the emergence of the new.
The points I hoped to make in my post were:
1. That one essential characteristic of humanity is that human beings are subject to chance, change, newness, and unpredictability. Now this is not a claim about some natural inborn human nature. But it does say that humans, if they are to be human, exist in such a way that the world can be surprising and new. If Christians thought humans were created by God and Kant saw humans as rational beings, Arendt thought that humans, at the very least, were free to act in surprising ways.
2. This is not at all anti-technological. On the contrary, Arendt distinguishes humans from animals precisely because humans can create and build an artificial world. In other words, there is no human civilization with technologies and without a built and fabricated world. Only animals live in a purely natural existence. Humans make their world.
3. What is worrisome in the age of modern science is not technology and not fabrication, but the increasing possibility that human creative powers will become so great that the human power to create an artificial world will overtake the human itself as something given, freely existing by a mystery impervious to human mastery.
I doubt very much that humans will ever extinguish the mystery of human being. And yet, I do believe that the spaces of freedom in our time are shrinking greatly. The dream to control our fate by purchasing our progeny in a genetic boutique will not lead to homogeneity. People will pick differently. Nor will it lead to dominance by one class, since life is impossible to fully control. But it does move us further down the road towards a time when the selection of human qualities is so rationalized by an artificially intelligent mind that the mysterious quintessence of humanity is forgotten.
I have a new essay just published in Democracy, A Journal of Ideas. My title was: The Wisdom of Rhadamanthus. If you read to the end you’ll see the point. But they wisely called it: “Why We Must Judge.”
The essay begins:
In 2004, The New York Times reported that numerous captured Iraqi military officers had been beaten by American interrogators, and that Major General Abed Hamed Mowhoush had been killed by suffocation. The Times has also published the stories of the so-called “ice man” of Abu Ghraib, Manadel al-Jamadi, who was beaten and killed while in U.S. custody, his body wrapped in ice to hide evidence of the beatings; of Walid bin Attash, forced to stand on his one leg (he lost the other fighting in Afghanistan) with his hands shackled above his head for two weeks; and of Gul Rahman, who died of hypothermia after being left naked from the waist down in a cold cell in a secret CIA prison outside Kabul. And the paper has documented the fate of Abu Zubaydah, captured in Pakistan, questioned in black sites and waterboarded at least 83 times, before being brought to Guantanamo, as well as the story of Khalid Shaikh Mohammed, waterboarded 183 times.
What was missing from these stories, published in the newspaper of record? A simple word: torture.
The omission is standard practice at the Times, just as it is at The Washington Post, NPR, and most U.S.-based media. Clark Hoyt, formerly ombudsman at the Times, defended the refusal to use the word torture and the decision to employ the language of “enhanced interrogation techniques,” a euphemism pioneered by the Bush Administration and embraced by the Obama Administration. For Hoyt, whether banging someone’s head against stone walls to elicit information is torture is in the eye of the beholder: “This president and this attorney general say waterboarding is torture, but the previous president and attorney general said it is not. On what basis should a newspaper render its own verdict, short of charges being filed or a legal judgment rendered?” Alicia C. Shepard, ombudsman at NPR, calls torture “loaded language.” To name simulated suffocation torture means to “unilaterally make such a judgment,” something Andrew Alexander, ombudsman at The Washington Post, argues journalists must avoid. In short, since the definition of torture is a matter of debate, we can’t publicly speak of torture. To judge an act to be torture is beyond our capacity and outside our jurisdiction.
Judgment is in short supply, and not just in the media. President Obama has made it clear that he has no interest in prosecuting and determining the responsibility of the torturers. As he said in April 2009, “This is a time for reflection, not retribution.” “Nothing,” he said, “will be gained by spending our time and energy laying blame for the past.” And so, seven years after the first death by torture in the war on terror, six years after the photos from Abu Ghraib, two years after Vice President Dick Cheney admitted that he personally authorized waterboarding and other techniques of torture, and two years after Barack Obama was elected, the vast majority of those who conceived, justified, and carried out the U.S. policy of torture—acts that are inhuman, unjust, and illegal by both international and domestic law—have not been accused, tried, or judged. Eleven low-ranking army personnel were court-martialed after Abu Ghraib. For the murder of Major General Abed Hamed Mowhoush, Chief Warrant Officer Lewis Welshofer Jr. was convicted of negligent homicide, but given no jail time and not even discharged from the army. Aside from these scapegoats, the vast majority of those involved in the torture regime continue to work for the government. While Obama worries about a rush to judgment, our real problem is that we have abdicated our right and our duty to judge at all.
In spite of Obama’s call at his inauguration for a “new era of responsibility,” we are suffering a culture-wide crisis of judgment. And not just when it comes to torture. Those who employed fancy lawyers to evade taxes are offered amnesty instead of judgment if they return their money to the United States. We frequent restaurants knowing that affordable food is subsidized by underpaid illegal help in the kitchen and we pay nannies and construction workers in cash, rationalizing our violation of both the law and our moral beliefs that everyone deserves health care and other benefits. In academia, professors have so fully abandoned their duty to judge that more than 50 percent of the grades at Harvard University are in the A range. And no Wall Street firm that has received a bailout has fired its CEO.