In 2010, Mohamed Sakr was stripped of his British citizenship. Not long thereafter, Sakr was killed in Somalia by a United States drone strike. American intelligence officials referred to him as an Egyptian, though he never had an Egyptian passport. There was no mention in the U.S. or the U.K. of his former British citizenship. One month before Sakr was killed, his friend Bilal al-Berjawi was killed in a drone attack after also having his British citizenship revoked. These are not isolated instances. As Mark Mazzetti reports in The New York Times, “Forty-two people have been stripped of their British citizenship since 2006, 20 of them last year…. In Israel, by comparison, the power to revoke citizenship has been used only twice since 2000, according to the Interior Ministry there.” And according to Mazzetti, the British government is seeking even greater authority to denationalize citizens it believes engage in terrorism.
“The cases of Mr. Sakr and Mr. Berjawi are among the most significant relating to the British government’s growing use of its ability to strip citizenship and its associated rights from some Britons at the stroke of a pen, without any public hearing and with only after-the-fact involvement by the courts. Now, faced with concerns that the steady stream of British Muslims traveling to fight in Syria could pose a threat on their return, Prime Minister David Cameron’s government is pushing legislation that would give it additional flexibility to use the power, which among other things keeps terrorism suspects from re-entering the country.”
The sovereign right of a nation to control who is nationalized or denationalized is unchallenged; it is a basic right of sovereignty to patrol the boundary of citizenship. It is true that citizenship is a privilege. And nations have long denationalized those who attained their citizenship by fraud; at times, as well, the practices of expulsion and exile were employed to deal with those who were found guilty of treason or impiety.
But the rise of mass denationalization first emerged in Europe in the 1930s and is associated with the advent of totalitarianism. What denationalization does is deny those it disowns not only the right to a trial and to due process but, more importantly, the right to have rights. To denationalize someone is to say that they no longer belong to any organized political community. They can appeal to no state for rights or protection. They are a mere human being, no longer an American, an Englishman, or a European.
Homeless and stateless, the denationalized person cannot even be arrested or put on trial and imprisoned in accord with the law, for the stateless are also outside the reach of the law. They are outlaws in that they are outside the protection of the laws. Which is why Hannah Arendt observed that “One is almost tempted to measure the degree of totalitarian infection by the extent to which the concerned governments use their sovereign right of denationalization.”
This does not mean that Britain is teetering toward totalitarianism. All countries make use of denationalization to some extent. And yet, the normalization of the practice of depriving some people of their status as citizens does not deprive them simply of rights, but also leaves them fully outside the sphere of organized human society. It is significant that the UK is currently seeking the authority not simply to denationalize those it deems a threat to its security, but to do so even in those cases where doing so would render the person fully stateless.
These are not isolated or extraordinary cases. The dozens of denationalized terrorists in the UK could, of course, be arrested, tried, and convicted in accordance with the law. By choosing to denationalize classes of people, the UK is creating a population of stateless persons who lack not the right to a trial or the right to speak, but the right to have rights as a member of human society. It is creating a class of people outside the law and thus subject to normalized extra-legal killings performed by the secret services of a state that otherwise is a constitutional democracy limited by the rule of law. Mass denationalization is a dangerous road.
Congratulations to my colleague and Arendt Center stalwart Walter Russell Mead, whose article “The Once and Future Liberalism” just won a Sidney Award, “designed to encourage people to step back at this time of the year and look at the big picture.” Mead’s article is indeed bracing, and the thinking behind it has informed many of the posts on the Arendt Center blog this year. At its core, the essay establishes as fact what most commentators on the left and the right see as an opinion: namely, that the 20th century model of liberalism is dead and is not coming back.
In the old system, most blue-collar and white-collar workers held stable, lifetime jobs with defined benefit pensions, and a career civil service administered a growing state as living standards for all social classes steadily rose. Gaps between the classes remained fairly consistent in an industrial economy characterized by strong unions in stable, government-brokered arrangements with large corporations—what Galbraith and others referred to as the Iron Triangle. High school graduates were pretty much guaranteed lifetime employment in a job that provided a comfortable lower middle-class lifestyle; college graduates could expect a better paid and equally secure future. An increasing “social dividend”, meanwhile, accrued in various forms: longer vacations, more and cheaper state-supported education, earlier retirement, shorter work weeks, more social and literal mobility, and more diverse forms of affordable entertainment. Call all this, taken together, the blue model.
Mead calls this the “blue model” of American liberalism. It thrived in America from the 1940s through the 1970s, and America thrived with it. It is the blue model that created the great American middle class, and it is the blue model that eventually sought to bring excluded groups and minorities into the American dream. Many American liberals want to preserve this model. Conservatives argue for a return to an earlier time when government was small and people who failed lived in pain and poverty. The point of Mead’s article is that both sides miss the basic fact: The blue model is dying and its death is unavoidable, a consequence of demographic and technological changes that make it unsustainable. We cannot continue with the blue model. But neither can we simply dismantle government and go back to the 19th century version of government and society that some conservatives yearn for. The result is a debate between liberals and conservatives that refuses to address the facts of our current situation.
But even as the red-blue division grows more entrenched and bitter, it is becoming less relevant. The blue model is breaking down so fast and so far that not even its supporters can ignore the disintegration and disaster it now presages. Liberal Democrats in states like Rhode Island and cities like Chicago are cutting pensions and benefits and laying off workers out of financial necessity rather than ideological zeal. The blue model can no longer pay its bills, and not even its friends can keep it alive.
Our real choice, however, is not between blue or pre-blue. We can’t get back to the 1890s or 1920s any more than we can go back to the 1950s and 1960s. We may not yet be able to imagine what a post-blue future looks like, but that is what we will have to build. Until we remove the scales from our eyes and launch our discourse toward the future, our politics will remain sterile, and our economy will fail to provide the growth and higher living standards Americans continue to seek. That neither we nor the world can afford.
Mead’s essay is long, bracing, and provocative, precisely in the spirit of Hannah Arendt. As you celebrate the new year, I hope you also find time to read “The Once and Future Liberalism.”
In a column in The Daily Beast, Buzz Bissinger writes:
The tipping point toward a candidate is perhaps the greatest act of individuality in our unique democracy, although in this day and age of unprecedented political divide, telling somebody who you are voting for has no upside: There is no respect for your right as a citizen, but outright hatred from those who do not agree with you. I fear that I will lose friends, some of whom I hold inside my heart. Of course, I will also lose friends I really don’t like anyway.
There are two points in this short paragraph that bear reflection. The first is the claim in the opening sentence, that deciding whom to cast one's vote for is the greatest act of individuality in our democracy. In my view, that is a bit like saying that deciding which brand of potato chips to buy is the greatest act of individuality in our capitalist economy.
If choosing between Mitt Romney and Barack Obama exemplifies who I am, then I don't think there is much to my individuality. These two paperboard figures are eerily similar in spite of their profoundly different lives. One white, one black. One born rich, the other poor. One a community organizer and the other a capitalist. Yet both are products of the meritocratic culture of Harvard professional schools. Both have an unceasing faith in data and experts. Both are self-satisfied, arrogant, and confident in their unique abilities. And both are politicians who will do or say almost anything to get themselves elected. What is a choice between them really saying about oneself?
The very idea that voting is at the essence of our political world has sent thinkers into a tizzy. Henry David Thoreau had a different view of voting:
All voting is a sort of gaming, like checkers or backgammon, with a slight moral tinge to it, a playing with right and wrong, with moral questions; and betting naturally accompanies it. The character of the voters is not staked. I cast my vote, perchance, as I think right; but I am not vitally concerned that that right should prevail. I am willing to leave it to the majority. Its obligation, therefore, never exceeds that of expediency. Even voting for the right is doing nothing for it. It is only expressing to men feebly your desire that it should prevail. A wise man will not leave the right to the mercy of chance, nor wish it to prevail through the power of the majority. There is but little virtue in the action of masses of men. When the majority shall at length vote for the abolition of slavery, it will be because they are indifferent to slavery, or because there is but little slavery left to be abolished by their vote. They will then be the only slaves. Only his vote can hasten the abolition of slavery who asserts his own freedom by his vote.
And Hannah Arendt also saw that voting was a deeply circumscribed approach to politics. She once wrote: “The voting box can hardly be called a public place.” What distinguished the United States at the time of its revolution was what Hannah Arendt called the experience of "Public Happiness." From town hall meetings in New England to citizen militias and civic organizations, Americans had the daily experience of self-government. In Arendt's words,
They knew that public freedom consisted in having a share in public business, and that the activities connected with this business by no means constituted a burden but gave those who discharged them in public a feeling of happiness they could acquire nowhere else.
Public happiness was found neither in fighting for one's particular interests, nor in doing one's duty by voting or going to town-hall meetings. Rather, the seat of American democracy was the fact that Americans "enjoyed the discussions, the deliberations, and the making of decisions."
This brings us to Bissinger's second point, that he today is fearful of saying his opinion in public for fear of losing his friends. What kind of democracy is it when we are so afraid of and contemptuous of divergent opinions that we turn dissidents into pariahs? I know that I am only somewhat comfortable making my profound dislike of President Obama felt in my liberal academic circles, and am only able to do so because I have an equally visceral dislike of Mr. Romney. If I were to consider voting for Romney, that would be sacrilege to many of my friends and colleagues.
Yet that doesn't bother me. Voting is something that should be secret. If you hold back your voting preference, you can actually have mature and thoughtful conversations, even ones that go against the grain of the groupthink you happen to exist in. You can critique the party of your friends and praise alternative policies. People are still rational on the issues; it is simply on the matter of the final vote that they insist on loyalty. But maybe the reason the final vote inspires such fierce loyalty is that the focus on the winner makes everything else in politics ever less meaningful. If we focused more on the actual discussion of issues and less on the final outcome, we would have a more civil and thoughtful political world, one that tolerated much more disagreement and engagement.
Truth is necessarily related to responsibility; that is to say, when one proclaims a fact to be true, one takes a certain amount of responsibility for the statement, particularly if the proclamation was made in a dispute and the only resolution in sight depends on the outcome of a game of tug-of-war between the “greater” and the “lesser” truths. In the case of the Conservatory, there can be no truth (at least no factual truth), since the fog of falsehood has settled on most layers of bureaucratic life in Russia. However, this does not prevent those whose lives are affected by the falsehoods from retaliating. Students protested, pictures were published and, while the story remained unclear, the outrage spread across all spheres of communication, from blogs to magazines and from YouTube videos to newspapers. In some ways, it feels as though the proliferation of untruths has allowed more people to participate in the discussion of truth, as the burden of absolute responsibility has been lifted. Perhaps this is a negative reaction but, so long as the criticism and versions of truths spread, the conversations and debates can continue. The instant one truth has been installed or accepted, it risks being forgotten or, worse, taken as an established fact. If a truth is to be eternal, it cannot be immutable; it must adapt to the changing ages, the changing attitudes and the changing people.
This is why the case of the Conservatory is particular: it is a remnant of the past, and thus part of the culture of the present. Furthermore, it is a centre for education grounded in the concept of discipline, which on its own is a demonstration of respect for the traditions and the ways of the past. If one were to hazard a guess as to why the Conservatory has become so neglected and dilapidated, one should look to the question of discipline. If politics had not strayed from a path of tradition and established methods, there might remain at least a glimmer of respect for the past, even while continuing to evolve and adapt to the present. Instead, the breakneck speed of modern politics has left truth and tradition crumpled in the corner, as pitiful as the drenched pile of sheets of music written by a composer who was once respected and remembered.
Click here to read the full submission.
Click here to learn more about the thinking challenge.
In his September 12th New York Times op-ed “If It Feels Right…” David Brooks draws our attention to a startling shift in the moral sensibility of American youth. Citing a study of over 200 young Americans led by Notre Dame sociologist Christian Smith, Brooks broadens the debate, looking beyond the moral problems of young Americans to ask whether we are living in a moment when the concept of morality itself is bankrupt. He writes:
“When asked to describe a moral dilemma they had faced, two-thirds of the young people either couldn’t answer the question or described problems that are not moral at all, like whether they could afford to rent a certain apartment or whether they had enough quarters to feed the meter at a parking spot… When asked about wrong or evil, they could generally agree that rape and murder are wrong. But, aside from these extreme cases, moral thinking didn’t enter the picture, even when considering things like drunken driving, cheating in school or cheating on a partner. 'I don’t really deal with right and wrong that often,' is how one interviewee put it.”
The inability to identify situations that have a clear moral dimension speaks to the loss of a common framework for evaluating everyday life. Moreover, it reflects a deeper indifference to moral evaluation itself. This lack of a basic moral disposition, Brooks argues, is an expression of a rampant individualism and relativism, unmoored from any sense of collective principles. Brooks points to a statement from one interviewee who says: “I have no other way of knowing what to do but how I internally feel.” For Brooks, this generation of young people now believes the only compass one ought to follow is one’s own.
As a member of the generation of young people of which Brooks speaks, I support his critique but would like to alter its terms. The problem for Brooks, both literal and metaphoric, is that we youth have lost our moral ‘vocabulary.’ This analogy suggests that what he is calling for is a return to the haven of tradition where, in considering ethical dilemmas, we all consult the same lexicon and can come away with unequivocal definitions and firm directives about how to act.
Such a return, Hannah Arendt knew at the time of her writing, was both impossible and undesirable; today it appears to us as downright unfathomable. Despite the inflection of nostalgia some seem to detect in Brooks, I don’t think this kind of throwback is at all what he is after. A better way to describe the impasse we are at is by considering it as a question less of moral vocabulary than of moral ‘grammar.’
For Arendt the very grammar of judging is found in the ability to adopt an “enlarged mentality,” Arendt’s articulation of Kantian disinterest. An enlarged mentality allows one to see and think from another's perspective, thus taking oneself out of oneself and providing access to a non-subjective and commonly shared realm. Such an enlarged mentality is, I believe, still essentially what provides us with a comprehensible moral syntax, allowing us to retain the singularity of the actors involved while also asking us to suspend personal preference.
Despite the decades of progressive education my generation has enjoyed, which boasts the cultivation of critical and compassionate minds, it is the capacity to enter into an enlarged mentality that we lack. It is worth asking if the culture of individual participation and the claim to the inherent validity, equality, and uniqueness of each person’s contribution has dampened the very qualities of mind it seeks to accent. Have we, as individuals with our equal rights to see the world as we do, lost the ability to think and imagine a world shared with others? Smith’s research suggests we may have. It confirms that the pedagogies aimed at progressive inclusiveness are failing in precisely that goal. It is paradoxical, but we are so open as to be closed off to a common world.
While many no doubt find the adoption of Arendt’s enlarged mentality terribly quaint, I would argue that it remains perhaps the most fundamental and poorly performed mental and moral exercise. Its enduring relevance, and its necessity in this moment in particular, comes from the fact that it does not fetter us to frameworks and yet is the precise opposite of the ‘do as I feel’ guide to moral reasoning, displayed by many Americans young and old, that can only be called reckless.
Just because we are no longer living in an age of moral solidity does not mean we can therefore abandon the activity of judging. On the contrary, the precarious nature of our time means that the hazards of giving up on human judgment become increasingly grave. Arendt observed the abyss of accountability that was opened up by the “cog-theory defense” in Eichmann’s trial, a line of argument that, Smith’s research shows, has been revived in different forms today. If all are at fault, no one actor can effectively be held responsible. Director of the Arendt Center Roger Berkowitz has been exploring Arendt's insistence that we must judge in essays and speeches over the last year. You can read his essay "Why We Must Judge" here. And you can watch his TEDx Talk on the Loss of Human Judgment here.
Arendt would likely agree with Brooks’ assessment that we risk a darkening in human affairs when we let moral issues vacate the public realm. “Morality,” Brooks writes, “was once revealed, inherited and shared, but now it’s thought of as something that emerges in the privacy of your own heart.” What remains to be seen, however, is whether these chambers, narrowed by individualism, can expand enough on their own to resuscitate a basic moral sense among my generation. Or have we in fact stumbled upon a strange and dangerous inversion of the famous Socratic notion Arendt quoted--that it is better to suffer wrong than to do wrong--when we hold fast to the belief that the right thing to do is the thing that feels good?
Cori Ellison’s September 2nd piece in The New York Times on the premiere of a new opera commemorating the inspirational life of Rick Rescorla begins with an invocation of one of Hannah Arendt’s most famous observations. Ellison writes:
“Having coined the phrase “the banality of evil,” Hannah Arendt went on to suggest that the most heinous crimes have often been committed by morally desensitized ordinary people. The inverse may be equally true: that “ordinary” heroes like Rick Rescorla who saved almost 2,700 lives on Sept. 11, 2001, only to lose his own, are the yang to Arendt’s yin, demonstrating what you might call the profundity of virtue.”
Ellison’s comments raise two main questions about the continued usage of Arendt’s term. Alfred Kazin notes the frequent conceptual carelessness with which the phrase is bandied about, commenting in his essay “Hannah Arendt: The Burden of Our Time” that “many a journalist and television commentator refers to the ‘banality of evil’ with a confidence that makes one sick.” Devoted to Arendt, Kazin feared her categorization of Eichmann as banal could in fact be “injurious to thinking.” Given the ubiquity of Arendt’s phrase, it is perhaps worth asking whether the confidence with which it is used is at times misplaced.
Ellison’s use of the term in the context of the willingness or failure to act in an emergency such as September 11th appears, in this light, largely correct. She is right to note that underlying the principle of banal evil is a sense of moral anaesthetization. The Eichmanns of the world certainly demonstrate the muting of the impulse towards human decency that the Rescorlas of the world show to be elemental in the field of action. In this sense it is true that an alertness both to the fragility of individual life and to the ethical webs that sustain it is crucial in confronting and overcoming forces of destruction. For Arendt this basic moral attunement to one’s fellow man was cultivated and kept in check by the principles of religion and tradition, which ceased to be binding with the rise of modernity. In these times being awake to the circumstances and the sufferings of others is indeed no small virtue.
Ellison rightly praises this moral sensitivity, and the impulse towards the good that Rescorla bravely demonstrated. However, following through on such an impulse does not necessarily involve engaging in the activity of thinking, which is the true subject of Arendt’s theorem on Eichmann. What the “banality of evil” articulates is the atrophying of the ability to engage in thought--the dialogical process through which individuals come to consider the very conditions and demands of goodness and justice—rather than the absence of any instinct towards goodness or justice in the world.
For Arendt this activity is fundamentally distinct from the mind’s other jobs, such as deducing, selecting, or deciding. To think is not to “choose.” While undoubtedly heroic, acts of rescue do not commonly represent an Arendtian experience of thought, since, as Rescorla’s and others’ extraordinary efforts show, they are often performed without the mind being transformed into a field for debate. A more appropriate context to consider the inverse of banal evil is perhaps not an act of unmeditated courage, but rather premeditated resistance.
Wolfgang Heuer discusses such an instance in his recent book Courageous Action (Couragiertes Handeln). Heuer examines the resistance effort among East Germans, asking with and through Arendt what qualities of character and mind are needed to face bureaucratic evil. Wolfgang Heuer will be speaking about this work and the significance of thinking in action at the upcoming conference at the Hannah Arendt Center.
Returning to the question of the use of Arendt’s term, the error Ellison and others make is in conflating the two senses of ‘banal.’ It would likely be no great surprise to Arendt the wordsmith that our misapplication is rooted in a semantic imprecision. And yet it is no minor point, since Arendt’s famous observation on Eichmann is of course based on her analysis of Eichmann’s own neat playing of “language-games,” and, as Kazin notes, is itself susceptible to the very distortions it describes.
In the context of her musings on the Eichmann trial Arendt’s banality does not chiefly refer to the ‘quotidian,’ the prosaic lives of men, for which she had utmost respect and no reason to believe couldn’t be concomitant with a depth of thought. Rather the banality she noted in Eichmann is an explanation of the ‘trite,’ of the stock slogans and phony logic of terror and totalitarianism that deform both the rules of reason and man’s moral sense when they claim that if you grant A, so too must you grant B and C, all the way down what Arendt calls, “the murderous alphabet.” Regimes that systematically devalue human lives depend on thought formulas that make thinking insincere. How the mind is able to step outside of these is what Arendt’s observation on Eichmann asks us to contend with.
Therefore, the second question Ellison’s piece raises is what is in fact an adequate, positive counterpart to the “banality of evil”? Her suggestion of the “profundity of virtue” is a fair one in that it captures the notion that the institution of the good is inherently linked to a process of reflection. How else might we phrase this crucial idea? We invite others to consider this question and submit possible terms. Given both the ubiquity and the misunderstanding of Arendt’s original phrase, it is fitting that we now search for the best language through which to articulate the association she saw between man’s capacity for genuine thought and his capacity for moral action.
AAA is gone, and with it, one fears, the City on the Hill. American exceptionalism is a fraught theme, and yet it still provides a demand for action that inspires and stiffens the Emersonian backbone of the nation. It is not the economy that will burn the city to the ground, but our collective political weakness. The question before us is whether there is still enough common spirit left in the United States of America to undergird a regeneration of public life and a commitment to the public good--or whether the country will drown in a flood of individuals unapologetically devoted to their private interests.
We could use some of that Emersonian self-reliance right now. For our problems, despite the very real and extraordinary debts we have, are less economic than political, moral, and spiritual. Which is why the calm pleadings of economists saying "it's not so bad" ring hollow. And why Standard & Poor's was more right than wrong to base its decision not only on economic factors, but also on our political swamp:
The political brinksmanship of recent months highlights what we see as America’s governance and policymaking becoming less stable, less effective, and less predictable than what we previously believed.
Of course this political morass is not limited to the United States. The European Union has been uniquely incompetent in owning up to the size and severity of the crisis in the Eurozone—consider Italian politicians who refuse to understand that a low-growth economy with 120% of its GDP in debt is a problem. Leaders in Japan have been equally oblivious for 15 years to the fact that their massive debt and culture of passing the buck is simply not working.
But let's return to the City on the Hill. U.S. politicians continue to promise rosy days ahead, talking about the greatness of America as if the dream were eternal. But it is time to wake up, and one can only wonder what or who today will serve as Henry David Thoreau's cock crow to rouse us from our debt-financed consumer binge. Someone, somehow, needs to wake us from our looming bankruptcy.
As Walter Russell Mead wrote earlier today, our bankruptcy is more than just an economic problem:
Of what does this looming bankruptcy consist? In our case it is the looming inability to pay the trillions in unfunded liabilities of all levels of government, but behind it lies a deeper failure and a poverty of soul. Spiritual near-bankruptcy is the common condition that binds China, Japan, Europe, the US and much of the rest of the world together.
Here in the U.S. as in much of the world, we refuse to take seriously what any sane person knows to be true, that the standard of living that has characterized the American Dream for half a century was and is founded upon funny money and debt. We need to take political control of our destiny, but that first requires that we be honest with ourselves and admit that whatever solutions we offer to our problems, most of us will suffer a decrease in the standard of living.
It is an open question how this will happen. Will the highest earners retain their privileges? Yes, barring a political revolution of some sort, which is also a possibility. Will those with wealth keep their money in the United States and pay taxes as citizens, or will they move that wealth to tax havens around the world even as they militate for tax rebates and lower tax rates at home? Will we as a nation recognize the need for everyone to suffer together, or will we insist on slogans like "no taxes" and "soak the rich"? But the biggest question is: Will we suffer for nothing, or will we somehow find a way to make suffering meaningful so that the city on the hill might rise again?
The changes that come—soon or possibly pushed down the road into the future—will encompass all areas of American life. Medical care will be rationed (rationally or economically); the unique privilege of every family living in its own house is already eroding as college graduates move back in with family; salaries and average wages will decrease; our consumption economy will contract. This will be painful, but there is no way to avoid it. The question is, when will we find a leader or a political movement that will actually call upon us to face up to our future, inspire us to build a new city on the hill, and imagine a way for us to get there?
No thinker understood the threat to public society and public action as clearly as Hannah Arendt. She saw that the philosophy of representative government fit all too well the bourgeois desire to focus on one's private interest and let paid representatives go to Washington simply to ensure that one was left well enough alone to pursue one's dream. She also saw that a consumer society values the immediate needs of life over the more diffuse and human need to build a common world. Amidst all the post-9/11 rhetoric of patriotism, it is easy to forget that we are living through an utter loss of public feeling and common sense, in this country and beyond. The bond with our past as well as with our future has been cut, and the question for all of us is how, or if, we can in some way live in a world without that sense of connection to a past and a future. This is what Arendt meant with the title of her book Between Past and Future: that space of thinking without banisters, divorced from tradition, where we have nothing to fall back upon but ourselves. It is a scary proposition, but we have no choice but to live up to it.
You may have heard about or read Deborah Lipstadt's new book, The Eichmann Trial. Amidst some powerful storytelling, Lipstadt offers a forceful Zionist reading of the Eichmann trial and, in the process, takes aim at Arendt. She agrees with Arendt's defense of Israel's right to hold the trial and of the trial's importance for Israel and the Jews. But she also criticizes Arendt on numerous counts. At times, her criticisms become hysterical and divorced from the facts. She writes that Arendt denies that Eichmann was an antisemite, which would be laughably false if it weren't also widely believed. She suggests that Arendt's anti-Jewish presentation of the trial was influenced by her enduring love for Martin Heidegger, again an utterly ridiculous premise. And she says that Arendt was, like Eichmann, unthinking--something that is hard to take from as polemical a writer as Lipstadt.
Despite wild inaccuracies and self-interested potshots, Lipstadt's book has received much attention, some of it positive. And the book does have its virtues.
To separate good from bad and to engage the ongoing conversation, The Hannah Arendt Center will be publishing a series of blogs, essays, reviews, and talks that address Arendt's Eichmann in Jerusalem and the controversy it has spawned.
We recently posted a short plea to be skeptical of second-hand misappropriations and to read Arendt's book oneself before criticizing or defending it.
Here, we post a video of a recent talk by Daniel Maier-Katkin, author of an excellent intellectual biography of Arendt, The Stranger from Abroad.
On July 5, 2011, Maier-Katkin gave a talk at an NEH Seminar at Bard College in which he addresses Lipstadt's book alongside Arendt's Eichmann in Jerusalem. You can watch the talk here.
n.b. The sound is a bit quiet, but very audible, especially if heard with earphones.
In commenting on my essay "Why We Must Judge," Scott Horton writes:
One of the most serious distortions of liberalism in modern American thought could be reduced to a simple, oft-repeated phrase: don’t be so judgmental. The argument is that it’s healthy for citizens in a modern society to collect information and suspend the process of forming judgments. A core aspect of this approach is doubtless correct: as Count Tolstoy observed in What Is Art, even sophisticated minds are prone to fail to grasp essential facts if those facts contradict some conclusions they have already drawn. But this doesn’t mean that judgment should be suspended indefinitely. To the contrary, judgment is sometimes a moral imperative. Without judgment, there is no justice.
It is this idea that judgment is sometimes a moral imperative that is too often forgotten. Read Scott Horton's full post.
Professor Stevens's response to my post on genetically choosing traits in our offspring suggests that I, and Arendtians (whatever such a thing may be), think "technology is inimical to nature, and therefore undesirable." He diagnoses a fear of technology and, it seems, a nostalgia for a pre-technological age.
In the thinking of Arendt and her followers, it seems that changes in 'humanity' -- especially changes caused by technology and its interdependence with modern government -- lead automatically to 'inhumanity', i.e., to an undesirable absence of what is figured as having been, so far, human nature.
The point of concern is not that technology brings change. Far from it. I can't imagine any reader of mine or of Arendt's--to take one example, her paean to revolutions in On Revolution--thinking her inimical to change. She values, above all, spontaneity and creativity and thus the possibility for the emergence of the new.
The points I hoped to make in my post were:
1. That one essential characteristic of humanity is that human beings are subject to chance, change, newness, and unpredictability. Now this is not a claim about some natural inborn human nature. But it does say that humans, if they are to be human, exist in such a way that the world can be surprising and new. If Christians thought humans were created by God and Kant saw humans as rational beings, Arendt thought that humans, at the very least, were free to act in surprising ways.
2. This is not at all anti-technological. On the contrary, Arendt distinguishes humans from animals precisely because humans can create and build an artificial world. In other words, there is no human civilization with technologies and without a built and fabricated world. Only animals live in a purely natural existence. Humans make their world.
3. What is worrisome in the age of modern science is not technology and not fabrication, but the increasing possibility that human creative powers will become so great that the human power to create an artificial world will overtake the human itself as something given, freely existing by a mystery impervious to human mastery.
I doubt very much that humans will ever extinguish the mystery of human being. And yet, I do believe that the spaces of freedom in our time are shrinking greatly. The dream to control our fate by purchasing our progeny in a genetic boutique will not lead to homogeneity. People will pick differently. Nor will it lead to dominance by one class, since life is impossible to fully control. But it does move us further down the road towards a time when the selection of human qualities is so rationalized by an artificially intelligent mind that the mysterious quintessence of humanity is forgotten.
I have a new essay just published in Democracy, A Journal of Ideas. My title was: The Wisdom of Rhadamanthus. If you read to the end you'll see the point. But they wisely called it: "Why We Must Judge."
The essay begins:
In 2004, The New York Times reported that numerous captured Iraqi military officers had been beaten by American interrogators, and that Major General Abed Hamed Mowhoush had been killed by suffocation. The Times has also published the stories of the so-called “ice man” of Abu Ghraib, Manadel al-Jamadi, who was beaten and killed while in U.S. custody, his body wrapped in ice to hide evidence of the beatings; of Walid bin Attash, forced to stand on his one leg (he lost the other fighting in Afghanistan) with his hands shackled above his head for two weeks; and of Gul Rahman, who died of hypothermia after being left naked from the waist down in a cold cell in a secret CIA prison outside Kabul. And the paper has documented the fate of Abu Zubaydah, captured in Pakistan, questioned in black sites and waterboarded at least 83 times, before being brought to Guantanamo, as well as the story of Khalid Shaikh Mohammed, waterboarded 183 times.
What was missing from these stories, published in the newspaper of record? A simple word: torture.
The omission is standard practice at the Times, just as it is at The Washington Post, NPR, and most U.S.-based media. Clark Hoyt, formerly ombudsman at the Times, defended the refusal to use the word torture and the decision to employ the language of “enhanced interrogation techniques,” a euphemism pioneered by the Bush Administration and embraced by the Obama Administration. For Hoyt, whether banging someone’s head against stone walls to elicit information is torture is in the eye of the beholder: “This president and this attorney general say waterboarding is torture, but the previous president and attorney general said it is not. On what basis should a newspaper render its own verdict, short of charges being filed or a legal judgment rendered?” Alicia C. Shepard, ombudsman at NPR, calls torture “loaded language.” To name simulated suffocation torture means to “unilaterally make such a judgment,” something Andrew Alexander, ombudsman at The Washington Post, argues journalists must avoid. In short, since the definition of torture is a matter of debate, we can’t publicly speak of torture. To judge an act to be torture is beyond our capacity and outside our jurisdiction.
Judgment is in short supply, and not just in the media. President Obama has made it clear that he has no interest in prosecuting and determining the responsibility of the torturers. As he said in April 2009, “This is a time for reflection, not retribution.” “Nothing,” he said, “will be gained by spending our time and energy laying blame for the past.” And so, seven years after the first death by torture in the war on terror, six years after the photos from Abu Ghraib, two years after Vice President Dick Cheney admitted that he personally authorized waterboarding and other techniques of torture, and two years after Barack Obama was elected, the vast majority of those who conceived, justified, and carried out the U.S. policy of torture—acts that are inhuman, unjust, and illegal by both international and domestic law—have not been accused, tried, or judged. Eleven low-ranking army personnel were court-martialed after Abu Ghraib. For the murder of Major General Abed Hamed Mowhoush, Chief Warrant Officer Lewis Welshofer Jr. was convicted of negligent homicide, but given no jail time and not even discharged from the army. Aside from these scapegoats, the vast majority of those involved in the torture regime continue to work for the government. While Obama worries about a rush to judgment, our real problem is that we have abdicated our right and our duty to judge at all.
In spite of Obama’s call at his inauguration for a “new era of responsibility,” we are suffering a culture-wide crisis of judgment. And not just when it comes to torture. Those who employed fancy lawyers to evade taxes are offered amnesty instead of judgment if they return their money to the United States. We frequent restaurants knowing that affordable food is subsidized by underpaid illegal help in the kitchen and we pay nannies and construction workers in cash, rationalizing our violation of both the law and our moral beliefs that everyone deserves health care and other benefits. In academia, professors have so fully abandoned their duty to judge that more than 50 percent of the grades at Harvard University are in the A range. And no Wall Street firm that has received a bailout has fired its CEO.
I stopped in at the “Systematic” exhibit now on at the Project 176 in London and received a tour by two of the gallery assistants, David Angus and Chloe Cooper. The exhibit, curated by Ellen Mara De Wachter, confronts the question of the place of the human being and the role of the artist at a time when individuals and humans are being subsumed by rational, social, and scientific systems. Featuring 18 works by 8 artists, the exhibit raises the fundamental question of our time: what does it mean to be human in an increasingly inhuman age?
The works on display in “Systematic” provoke principally because they enthusiastically embrace the utopian optimism that underlies the thinking of prophets of singularity from Ray Kurzweil to Sergey Brin. The premise of the exhibit is the power of systems over individuals. As De Wachter writes in her essay that accompanies the exhibit, the system today represents the
emergent properties ‘of the combination as a whole—which are more than the sum of its individual parts.’
The artists in “Systematic” produce works that abandon themselves to systems that operate beyond the awareness or control of human intelligence.
Justin Beal offers glass and dry-wall tables that incorporate rotting fruit into their joints. The fruit rots and attracts insects, molds, and fungi that alter the “artwork” in ways that are outside of artistic control. For De Wachter, Beal “celebrates the unpredictability and undecidability that befall all works of art once they leave the artist’s hands.” The key word here is “celebrates.” For Beal, as for many in the artistic and technological worlds today, the power of the system over the individual is to be welcomed.
Katie Paterson’s “Earth-Moon-Earth” partakes in a similar bow to the power of systems. Paterson translates Beethoven’s Moonlight Sonata into Morse code, beams it to the moon, and receives it back upon its reflection. She then translates the returned code into musical notes, with all the losses, transpositions, and gaps left in. Spectators can listen to this new sonata, played on vinyl, through headphones in the gallery.
For De Wachter, artists like Beal and Paterson—and the other artists on exhibit—work by “surrendering a certain amount of control to the systems” with which they interact. In doing so, “these artists admit that the artworks they produce have a life of their own, and a life beyond the studio in which they were made.”
The language of artistic surrender is reminiscent of an older artistic ideal and also eerily different. Artists of the pre-modern and classic ages were often anonymous. The artistic ideal was to serve simply as a medium through which the divine truth flowed and manifested itself in the world as a work of art. The artist, bemused by his muse, lost himself in rapture and gave himself over to the fashioning of a work in which the truth came to stand in the world. Opposed to this tradition of the artist as medium is the ideal of artistic genius, the artist who composes works from the productive brilliance of his own mind.
In Systematic, the artists abandon control not to a divine, rational, or meaningful truth, but to the random, unpredictable, and meaningless systems of growth and decay, chance and circumstance. The celebration of this powerlessness is, I think, undoubtedly the result of a new faith that has swept up much of the artistic and technological intelligentsia today: the faith in an intelligent universe that goes by the popular name, The Singularity.
The Singularity, as Ray Kurzweil has popularized it, is the hope that humans and machines will merge into a new species that will be governed by super-rational and super-intelligent knowledge. As Kurzweil says:
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. Ultimately, the entire universe will become saturated with our intelligence. This is the destiny of the universe.
In the Singularity, knowledge that is inaccessible to the human brain, a system of all systems, will inaugurate a harmonious existence amongst man-machines and the natural world.
What needs to be remembered amidst this technological utopianism is that the singularity means the death of humanity. The super-intelligent consciousness is not something accessible by mere humans who live and die in mortal timelines. This is why there is a persistent anti-humanism in artistic and technological avant garde circles.
The celebratory anti-humanism exhibited in Systematic is, of course, ambiguous. These artists claim at once to be celebrating systems and also pointing to their limits and dangers. The glass solitude booths in Damien Hirst’s “Sometimes I Avoid People” are, as De Wachter notes, reminiscent of cases at a natural history museum. In this early work from 1991, Hirst, in a way others in the exhibition do not, points to the dark side of the elevation of systems over humanity.
Above all, the exhibition reminded me of what Hannah Arendt calls Earth Alienation. The great event that inaugurates earth alienation is Galileo’s discovery of the telescope. While the telescope symbolizes the power of sense perception to see what had previously been invisible, it also challenges the adequacy of our human senses to make sense of the world. What the telescope shows us is not reality. It is not the earth or the moon or the stars. Similarly, social science does not show us individuals and persons. The scientific perspective views persons and objects as seen through systems and instruments and, as Sir Arthur Eddington wrote, the things we see have as much resemblance to their appearance in our instruments as a “telephone number to a subscriber.”
Science, for Arendt, is both anti-human and anti-earth. It is anti-earth, she writes, because
in physics—whether we release energy processes that ordinarily go on only in the sun, or attempt to initiate in a test tube the processes of cosmic evolution, or penetrate with the help of telescopes the cosmic space to a limit of two and even six billion light years, or build machines for the production and control of energies unknown in the household of earthly nature, or attain speeds in atomic accelerators which approach the speed of light, or produce elements not to be found in nature, or disperse radioactive particles, created by us through the use of cosmic radiation, on the earth—we always handle nature from a point in the universe outside the earth. And even at the risk of endangering the natural life process we expose the earth to universal, cosmic forces alien to nature’s household.
And science is anti-human:
[The humanist] view of man is even more alien to the scientist, to whom man is no more than a special case of organic life and to whom man’s habitat—the earth, together with earthbound laws—is no more than a special borderline case of absolute, universal laws, that is, laws that rule the immensity of the universe. Surely the scientist cannot permit himself to ask: What consequences will the result of my investigations have for the stature of man? It has been the glory of modern science that it has been able to emancipate itself completely from all such anthropocentric, that is, truly humanistic, concerns.
The scientist cannot ask the question of whether science dehumanizes man. Nor can the scientist ask whether science alienates man from the earth and his life on earth. The scientist can’t ask such questions because the scientific perspective is universal, not particular. It asks from an Archimedean point divorced from all reality. That is why the scientist speaks in no earthly language, but in the pure language of mathematics.
The scientist reasons, Arendt writes. He or she seeks to reveal the hidden causes of the universe. But the scientist does not think, does not ask whether such knowledge is good or bad.
But what of the artist? What struck me in Systematic was just how fully the artists today have given themselves over to a celebration of the scientific-technological world and its values. I value their art as a mark of the power of that discourse to shape contemporary thought. But I wonder: why have artists followed scientists in celebrating the anti-human power of technology?
The question of art’s response to the power of systems and science is at the forefront of "Human Being in an Inhuman Age," the Arendt Center’s October 2010 conference on the fate of humanity in an inhuman age. The conference features Ann Lauterbach, Nicholson Baker, Wyatt Mason, Gilles Peress, and David Rothenberg on the question: "Is Art Human? The Fate of Art in the Age of Machines."
The Zabludowicz Collection, London.