Richard Halpern, “Eclipse of Action: Hamlet and the Political Economy of Playing,” Shakespeare Quarterly, Volume 59, Number 4, Winter 2008, pp. 450-482
As he formulates an original response to the classic problem of Hamlet’s non-action, Halpern offers one of the few critical analyses of Arendt’s reading of Adam Smith in The Human Condition. He shows how Arendt draws on Smith’s concepts of productive and unproductive labor to articulate her key concepts of work and labor. Moreover, his close reading draws our attention to an intriguing paradox in the temporality of action that may indicate a corrective—albeit a difficult one—to the current demand for instant gratification that often leads to cynicism in the face of great political challenges.
Halpern reminds us that Aristotle separates action from labor; Smith replaces action with production; and Arendt seeks to restore action to a place of prominence in the political realm. Arendt explicitly says that “the distinction between productive and unproductive labor contains, albeit in a prejudicial manner, the more fundamental distinction between work and labor” (HC 87). She does not simply take over Smith’s idea, but wishes to transfer his distinction from his own economic system (the “prejudice” of his own thought) to her own thinking of labor and work. Halpern’s analysis of Arendt’s move helps us start to think about her surprising appeal to 18th century economic theory. Moreover, in her discussion of Smith (and her better-known critique of Marx), I see her posing an even broader question: what does it mean to be productive and what are the appropriate spheres of different types of productivity?
Within the realm of production, Halpern looks at how Smith offers a further distinction in Book 2, Chapter 3 of The Wealth of Nations, under the heading “Of the Accumulation of Capital, or of Productive and Unproductive Labor”:
There is one sort of labour which adds to the value of the subject upon which it is bestowed: there is another which has no such effect. The former, as it produces a value, may be called productive; the latter, unproductive labour. Thus the labour of a manufacturer adds, generally, to the value of the materials which he works upon, that of his own maintenance, and of his master’s profit. The labour of a menial servant, on the contrary, adds to the value of nothing. (Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, ed. Edwin Cannan (Chicago: U of Chicago P, 1976), 351.)
Smith draws a distinction between labor that holds or builds value (say, the manufacture of a chair) and labor that evaporates the moment the worker completes it (such as cleaning the house or washing clothes). Classical political economists of the 18th and 19th centuries engaged in wide-ranging debates over what should “count” as value before capitalist countries agreed on the ratio of labor to output, or per capita GDP, as the standard; socialist countries, following the USSR, adopted an alternative “material product system” that prioritized the quantity of goods. In a time of environmental change, this glimpse into the history of economic theory may offer a helpful reminder that society can decide to change the standard of economic success.
According to Halpern, Arendt draws from Smith not to rehabilitate an outmoded aspect of economic theory, but to draw inspiration for her creation of distinct conceptual spaces for labor, work, and action. Specifically, she aligns Smith’s “unproductive labor” with her circular conception of labor and “productive labor” with her linear conception of work. This does not mean that labor is unproductive, but it does require a clarification of different types of productivity. I find it useful to keep the discussion focused on productivity, since these spheres of private life and of the cultural and industrial economy then offer a contrast to the political sphere, where action can happen. Action is neither circular like labor nor linear like work, but has its own peculiar directionality and temporality. Halpern’s analysis helpfully zeroes in on the perplexing relation between the ephemerality of labor and action and action’s desire for permanence:
The temporal paradox of the political is that while it aims at immortality, action and speech are, in themselves, evanescent: “Left to themselves, they lack not only the tangibility of other things, but are even less durable and more futile than what we produce for consumption” (HC 95). Like Smith’s unproductive labor, action disappears in the moment of its occurrence because it leaves no material trace behind. (Halpern, 457)
Politics demands an extraordinary effort. It asks that one expend energy indefinitely for an uncertain reward. Discussion and debate go on and on, only occasionally clicking with spectacular agreement or deflationary compromise. Arendt’s analysis can help us perceive the difficulty of contemporary politics as it attempts to fit into a consumer culture that preserves, and thus remembers, nothing.
Arendt’s attention to the aspects of debate and negotiation that might be seen as unproductive (a dimension that in other parts of The Human Condition she relates to menial work, again often in relation to Smith) offers a corrective to a misguided understanding of politics that leads to frustration and despair. Even if we are not at the extreme level of the menial functioning of a New England town hall meeting debating the budget for potholes, or an Occupy Wall Street discussion that requires unanimous consensus for closure, politics works in a different temporality. Rather than the fever-pitched accusations of crisis that in the U.S. actually cover up rather than encourage political risk, a more humble sense of public debate as requiring something like the patience of the menial task may be such a corrective.
Political action in Arendt’s sense differs from work in being freed from a fixed goal. She links this freedom, which for her is based on self-referentiality, to drama:
Arendt’s discomfort with the economic dimension of theater reveals itself when she criticizes Adam Smith for grouping actors, along with churchmen, lawyers, musicians, and others, as unproductive laborers and hence as lowly cousins of the menial servant (HC 207). Arendt would distinguish all of these activities from labor in that they “do not pursue an end . . . and leave no work behind . . . , but exhaust their full meaning in the performance itself ” (206). Smith’s inclusion of these autotelic activities under the category of labor is for Arendt a sign of the degradation that human activity had already undergone by the early days of the modern era. By contrast, “It was precisely these occupations—healing, flute-playing, play-acting—which furnished ancient thinking with examples for the highest and greatest activities of man” (207–21). What Arendt overlooks is that—already in the ancient world—healing, flute playing, and playacting became remunerated professions and differed in this respect from politics, which was not the work of a professional class of politicians. (Halpern 458)
Arendt agrees that actors on the stage perform fleeting scenes, but wishes to link this to “the highest and greatest activities of man,” i.e., those of politics. Halpern argues that in fact actors in ancient times already worked for wages and were thus not independent like citizens in their roles as politicians. Nonetheless, Arendt shows us that in the modern period we can learn something about acting in politics from acting in the arts. The key point for Halpern is that drama and its kindred arts are “autotelic activities.” They do not even keep up the house like menial work; they have their own end and truly evaporate in reaching this end. Political action works along an undecidable edge: even less productive than labor, but at any moment potentially the most lasting. Against the odds, politics holds open the space in which something new can begin and thus renew the human world against the circular forces of nature.
One could reasonably argue that in his focus on the connection between labor and action, Halpern fails to adequately emphasize the importance of work. In a world of labor and the victory of animal laborans, there is no work to preserve action and no polis/world to give action memorialization. Indeed, we face the danger of the collapse of the world into the “waste economy” (HC 134), and the seductions to action disappear. However, Halpern does not say that play is action for Arendt but rather, as I understand his argument, that there is an aspect of action that is like play. Action requires debate that may seem to be going nowhere, or just be undertaken for its own sake, up to the moment that it takes a risk. When it dares to venture into the public realm, action is clearly very different from play as a hobby.
Labor is both constant and fleeting. On the one hand, the demands of the body never end, nor do the cycles of nature. On the other hand, labor is also fleeting in that its mode of production only temporarily maintains life. Action is also fleeting from the perspective that the risk it takes often evaporates but has the utmost political constancy when one considers those actions that succeed in forming the power of a new beginning.
In the remainder of the article, Halpern moves from The Human Condition to Hamlet, arguing that Shakespeare replaces action on the classical model of tragedy with the ceaseless activity of Hamlet’s thoughts. This activity runs in circles like unproductive labor in Smith and labor in Arendt rather than the action of Aristotle’s aesthetic and Arendt’s political ideal. From an Arendtian point of view, the modernity of the drama reveals a challenge to politics, the challenge of a time out of joint that action has to face again and again.
After months in which university after university jumped on the bandwagon for Massive Open Online Courses (MOOCs), the battle over the future of education has finally begun. This week Duke University pulled out of EdX, the Harvard/MIT-led MOOC consortium.
The reason: Its faculty rebelled. According to The New York Times,
While [Duke provost Peter] Lange saw the consortium as expanding the courses available to Duke students, some faculty members worried that the long-term effect might be for the university to offer fewer courses — and hire fewer professors. Others said there had been inadequate consultation with the faculty.
The Times also reports that faculty at Amherst College, my alma mater and former employer, voted against joining EdX. Again, the faculty saw danger. My former colleagues worried that the introduction of online courses would detrimentally impact the quality and spirit of education at the small liberal arts college. They also, as our friends over at ViaMeadia report, worried that MOOCs would “take student tuition dollars away from so-called middle-tier and lower-tier” schools, pushing their colleagues at these institutions out of their jobs.
And that brings us to ground zero of the battle between the faculty and the MOOCs: San Jose State University. San Jose State has jumped out as a leader in the use of blended online and offline courses. Mohammad H. Qayoumi, the university's president, has defended his embrace of online curricula on both educational and financial grounds. He points to one course, "Circuits & Electronics," offered by EdX. In a pilot program, students in that course did better than students in similar real-world courses taught by San Jose State professors. Where nearly 40% of San Jose students taking their traditional course received a C or lower, only 9% of students taking the EdX course did. For Qayoumi and others, such studies offer compelling grounds for integrating MOOCs into the curriculum. The buzzword is “blended courses,” in which the MOOCs are used in conjunction with faculty tutors. In this “flipped classroom,” the old model in which students listen to lectures in lecture halls and then do assignments at home, is replaced by online lectures supplemented by discussions and exercises done in class with professors. As I have written, such a model can be pedagogically powerful, if done right.
But as attractive as MOOCs may be, they carry with them real dangers. And these dangers emerge front and center in the hard-hitting Open Letter that the philosophy department at San Jose State University has published, addressed to Michael Sandel. Sandel is the Harvard professor famous for his popular and excellent course “Justice,” which has been wowing and provoking Harvard undergraduates for decades. Sandel not only teaches his course, he has branded it. He sells videos of the course; he published a book called Justice based on the course; and, most recently, he created an online video version of the course for EdX. San Jose State recently became one of the first public universities in the country to sign a contract paying for the use of EdX courses. This is what led to the letter from the philosophers.
The letter begins by laying out the clear issue. The San Jose Philosophy department has professors who can teach courses in justice and ethics of the kind Sandel teaches. From their point of view, “There is no pedagogical problem in our department that JusticeX solves, nor do we have a shortage of faculty capable of teaching our equivalent course.” In short, while some students may prefer a course with a famous Harvard professor, the faculty at San Jose State believe that they are qualified to teach about Justice.
Given their qualifications, the philosophy professors conclude that the real reason for the contract with EdX is not increased educational value, but simply cost. As they write: “We believe that long-term financial considerations motivate the call for massively open online courses (MOOCs) at public universities such as ours.”
In short, the faculty sees the writing on the wall. Whatever boilerplate rhetoric about blended courses and educational benefit may be fashionable and necessary, the real issue is simple. Public universities (and many private ones as well) will not keep paying the salaries of professors when those professors are not needed.
While for now professors are kept on to teach courses in a blended classroom, soon many fewer professors will be needed. As students take Professor Sandel’s class at universities around the country, they will eventually work with teaching assistants—just as students do at Harvard, where Professor Sandel has pitifully little interaction with the hundreds of students in each of his classes. These teaching assistants make little money, significantly less than a tenured or even a non-tenured professor. It is only a matter of time before many university classes are taught virtually by superstar professors assisted by armies of low-paid onsite assistants. State universities will then be able to educate significantly more students at a fraction of the current cost. For many students this will be a great boon—a certified and possibly quality education at a cheap price. For most California voters, this is a good deal. But it is precisely what the faculty at San Jose State fear. As they write:
We believe the purchasing of online and blended courses is not driven by concerns about pedagogy, but by an effort to restructure the U.S. university system in general, and our own California State University system in particular. If the concern were pedagogically motivated, we would expect faculty to be consulted and to monitor quality control. On the other hand, when change is financially driven and involves a compromise of quality it is done quickly, without consulting faculty or curriculum committees, and behind closed doors. This is essentially what happened with SJSU's contract with edX. At a press conference (April 10, 2013 at SJSU) announcing the signing of the contract with edX, California Lieutenant Governor Gavin Newsom acknowledged as much: "The old education financing model, frankly, is no longer sustainable." This is the crux of the problem. It is time to stop masking the real issue of MOOCs and blended courses behind empty rhetoric about a new generation and a new world. The purchasing of MOOCs and blended courses from outside vendors is the first step toward restructuring the CSU.
The San Jose State philosophy professors are undoubtedly correct. We are facing a systematic transformation in higher education in this country, and in secondary education as well. Just as the Internet has revolutionized journalism, and just as it is now shaking the foundations of medicine and law, the Internet will not leave education alone. Change seems nigh. Part of this change is being driven by cost. Some of it is also being driven by the failures and perceived failures of our current system. The question for those of us in the world of higher education is whether we can respond intelligently, to save the good and change out the bad. It is time that faculties around the country focus on this question, and for that we should all be thankful to the philosophy professors at San Jose State.
The Open Letter offers three main points to argue that it is bad pedagogy to replace professors with the blended course model of MOOCs and teaching assistants.
First, they argue that good teaching requires professors engaged in research. When professors are engaged in active research programs, they are interested in and motivated by their fields. Students can perceive if a professor is bored with a class and students will always learn more and be driven to study and excel by professors who feel that their work matters. Some may wonder what the use of research is that is read by only a few colleagues around the world, but one answer is that such research is necessary to keep professors fresh and sharp. We all know the sad fate of professors who have disengaged from research.
Second, the philosophy professors accept the argument of many, including myself, that large lectures are not the best way to teach. They teach by the Socratic method, interacting with students. Such classes, they write, are much better than having students watch Professor Sandel engage Socratically with students at Harvard. Of course, the MOOC model would still allow for Socratic and personal engagement, just by much lower-paid purveyors of the craft. The unanswered question is whether low-paid assistants can be trained to teach well. The answer may well be yes.
Third, the philosophy faculty worry about the exact same moral justice course being taught across the country. We can already see the disciplinary barricades being drawn. It may be one thing to teach Math to the whole country from one or two MOOCs, but philosophy needs multiple perspectives. But how many? The philosophy professors suggest that their highly diverse and often lower-middle-class students have different experiences and references than do Professor Sandel’s Harvard students. They can, in the classroom, better connect with these students than Professor Sandel via online lectures.
The points the San Jose State philosophy professors raise are important. In many ways, however, their letter misses the point. Our educational system is now structured on a few questionable premises. First, that everyone who attends college wants a liberal arts education. That is simply not true. Many students simply want a credential to get a job. If these students can be taught well and more cheaply, we should help them. There is a question of whether we need to offer everyone the same kind of highly personalized and expensive education. While such arguments will be lambasted as elitist, it is nevertheless true that not everyone wants or needs to read Kant closely. We should seek to protect the ability of those who do—no matter their economic class—and also allow those who don’t a more efficient path through school.
A second questionable premise is that specialization is necessary to be a good teacher. This also is false. Too much specialization removes one from the world of common sense. As I have argued before, we need professors who are educated more generally. It is important to learn about Shakespeare and Aristotle, but you don’t need to be a specialist in Shakespeare or Aristotle to teach them well and thoughtfully to undergraduates. This is not an argument against the Ph.D. It is important to study and learn an intellectual tradition if you are going to teach. But it is an argument against the professionalization of the Ph.D. and of graduate education in general. It is also an argument against the dominance of undergraduate curriculum by professionalized scholars.
Third, and perhaps most importantly, is the premise that everyone needs to go to college. If we put a fraction of the resources we currently spend on remedial education for college students back into public high schools in this country, we could begin the process of transforming high school into a serious and meaningful activity. For one thing, we could begin employing Ph.D.s as high school teachers as are many of the emerging early colleges opening around the country.
I am sympathetic to the philosophy professors at San Jose State. I too teach a course on justice, called “The Foundation of Law: The Quest for Justice.” It is a course quite similar to, and yet meaningfully different from, Michael Sandel’s course on Justice. I believe it is better, no offense meant. And I would be upset if I were told next year that instead of teaching my course I would in effect be a glorified TA for Professor Sandel. I hope it doesn’t come to that, but I know it might.
The only response for those whose jobs are being replaced by computers or the Internet is to go out and figure out how to do it better. That is what happened to journalists who were fired in droves. Many quit voluntarily and began developing new models of journalism, including blogs that have enriched our public discourse and largely rejuvenated public journalism in this country. Blogs, of course, are not perfect, and there is the question of how to make a living writing one. But enterprising bloggers like Andrew Sullivan and Walter Russell Mead are figuring that out. So too are professors like Michael Sandel and Andrew Ng.
We need educators to become experimental these days, to create small schools and intensive curricula within larger institutions that make the most of the personal interaction that is the core of true pedagogy. If that happens, and if teachers offer meaningful education for which students or our taxpayers will pay, then our jobs will be safe. And our students will be better for it. For this reason, we should welcome the technology as a push to make ourselves better teachers.
The Open Letter to Michael Sandel deserves a response. I hope Professor Sandel offers one. Until then, I recommend that this beautiful Spring weekend you read the letter from the San Jose State Philosophy Department. It is your weekend read.
In an essay in the Wall Street Journal, Frans de Waal—C. H. Candler Professor of Primate Behavior at Emory University—offers a fascinating review of recent scientific studies that upend long-held expectations about the intelligence of animals. De Waal rehearses a catalogue of fantastic studies in which animals do things that scientists have long thought they could not do. Here are a few examples:
Ayumu, a male chimpanzee, excels at memory; just as the IBM computer Watson can beat human champions at Jeopardy, Ayumu can easily best the human memory champion in games of memory.
Similarly, Kandula, a young elephant bull, was able to reach some fragrant fruit hung out of reach by moving a stool over to the tree, standing on it, and reaching for the fruit with his trunk. I’ll admit this doesn’t seem like much of a feat to me, but for the researchers de Waal talks with, it is surprising proof that elephants can use tools.
Scientists may be surprised that animals can remember things or use tools to accomplish tasks, but anyone raised on children’s tales of Lassie or Black Beauty knows this well, as does anyone whose pet dog has opened a doorknob, brought them a newspaper, or barked at intruders. The problem these studies address is less our societal view of animals than the overly reductive view of animals that de Waal attributes to his fellow scientists. It’s hard to take these studies seriously as evidence that animals think in the way that humans do.
Seemingly more interesting are experiments with self-recognition and also facial recognition. De Waal describes one Asian elephant who stood in front of a mirror and “repeatedly rubbed a white cross on her forehead.” Apparently the elephant recognized the image in the mirror as herself. In another experiment, chimpanzees were able to recognize which pictures of chimpanzees were from their own species. Like my childhood Labrador who used to stare knowingly into the mirror, these studies confirm that animals are able to recognize themselves. This suggests that animals likely understand that they are selves.
For de Waal, these studies have started to upend a view of humankind's unique place in the universe that dates back at least to ancient Greece. “Science,” he writes, “keeps chipping away at the wall that separates us from the other animals. We have moved from viewing animals as instinct-driven stimulus-response machines to seeing them as sophisticated decision makers.”
The flattening of the distinction between animals and humans is to be celebrated, de Waal argues, and not feared. He writes:
Aristotle's ladder of nature is not just being flattened; it is being transformed into a bush with many branches. This is no insult to human superiority. It is long-overdue recognition that intelligent life is not something for us to seek in the outer reaches of space but is abundant right here on earth, under our noses.
De Waal has long championed the intelligence of animals, and now his vision is gaining momentum. This week, in a long essay called “One of Us” in the new Lapham’s Quarterly on animals, the glorious essayist John Jeremiah Sullivan begins with this description of studies similar to the ones de Waal writes about:
These are stimulating times for anyone interested in questions of animal consciousness. On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly. New animal behaviors and capacities are observed in the wild, often involving tool use—or at least object manipulation—the very kinds of activity that led the distinguished zoologist Donald R. Griffin to found the field of cognitive ethology (animal thinking) in 1978: octopuses piling stones in front of their hideyholes, to name one recent example; or dolphins fitting marine sponges to their beaks in order to dig for food on the seabed; or wasps using small stones to smooth the sand around their egg chambers, concealing them from predators. At the same time neurobiologists have been finding that the physical structures in our own brains most commonly held responsible for consciousness are not as rare in the animal kingdom as had been assumed. Indeed they are common. All of this work and discovery appeared to reach a kind of crescendo last summer, when an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
With nuance and subtlety, Sullivan understands that our tradition has not drawn the boundary between human and animal nearly as securely as de Waal portrays it. Throughout human existence, humans and animals have been conjoined in the human imagination. Sullivan writes that the most consistent “motif in the artwork made between four thousand and forty thousand years ago,” is the focus on “animal-human hybrids, drawings and carvings and statuettes showing part man or woman and part something else—lion or bird or bear.” In these paintings and sculptures, our ancestors gave form to a basic intuition: “Animals knew things, possessed their forms of wisdom.”
Religious history too is replete with evidence of the human recognition of the dignity of animals. God says in Isaiah that the beasts will honor him and St. Francis, the namesake of the new Pope, is famous for preaching to birds. What is more, we are told that God cares about the deaths of animals.
In the Gospel According to Matthew we’re told, “Not one of them will fall to the ground apart from your Father.” Think about that. If the bird dies on the branch, and the bird has no immortal soul, and is from that moment only inanimate matter, already basically dust, how can it be “with” God as it’s falling? And not in some abstract all-of-creation sense but in the very way that we are with Him, the explicit point of the verse: the line right before it is “fear not them which kill the body, but are not able to kill the soul.” If sparrows lack souls, if the logos liveth not in them, Jesus isn’t making any sense in Matthew 10:28-29.
What changed and interrupted the ancient and deeply human appreciation of our kinship with besouled animals? Sullivan’s answer is René Descartes. The modern depreciation of animals, Sullivan writes,
proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata—windup toys, like the Renaissance protorobots he’d seen as a boy in the gardens at Saint-Germain-en-Laye, “hydraulic statues” that moved and made music and even appeared to speak as they sprinkled the plants.
Too easy, however, is the move to say that the modern comprehension of the difference between animal and human proceeds from a mechanistic view of animals. We live in the time of the animal rights movement. Around the world, societies exist and thrive whose mission is to prevent cruelty toward and to protect animals. Yes, factory farms treat chickens and pigs as organic mechanisms for the production of meat, but these farms coexist with active and quite successful movements calling for humane standards in food production. Whatever the power of Cartesian mechanics, its success is at odds with the persistence of the ancient religious solidarity, and the deeply modern sympathy, between human and animal.
A more meaningful account of the modern attitude towards animals might be found in Spinoza. Spinoza, as Sullivan quotes him, recognizes that animals feel in ways that Descartes did not. As do animal rights activists, Spinoza admits what is obvious: that animals feel pain, show emotion, and have desires. And yet, Spinoza maintains a distinction between human and animal—one grounded not in emotion or feeling, but in human nature. In his Ethics, he writes:
Hence it follows that the emotions of the animals which are called irrational…only differ from man’s emotions to the extent that brute nature differs from human nature. Horse and man are alike carried away by the desire of procreation, but the desire of the former is equine, the desire of the latter is human…Thus, although each individual lives content and rejoices in that nature belonging to him wherein he has his being, yet the life, wherein each is content and rejoices, is nothing else but the idea, or soul, of the said individual…It follows from the foregoing proposition that there is no small difference between the joy which actuates, say, a drunkard, and the joy possessed by a philosopher.
Spinoza argues against the law prohibiting slaughter of animals—it is “founded rather on vain superstition and womanish pity than on sound reason”—because humans are more powerful than animals. Here is how he defends the slaughter of animals:
The rational quest of what is useful to us further teaches us the necessity of associating ourselves with our fellow men, but not with beasts, or things, whose nature is different from our own; we have the same rights in respect to them as they have in respect to us. Nay, as everyone’s right is defined by his virtue, or power, men have far greater rights over beasts than beasts have over men. Still I do not deny that beasts feel: what I deny is that we may not consult our own advantage and use them as we please, treating them in the way which best suits us; for their nature is not like ours.
Spinoza’s point is quite simple: Of course animals feel and of course they are intelligent. Who could doubt such a thing? But they are not human. That is clear too. While we humans may care for and even love our pets, we recognize the difference between a dog and a human. And we will, in the end, associate more with our fellow humans than with dogs and porpoises. Finally, we humans will use animals when they serve our purposes. And this is acceptable, because we have the power to do so.
Is Spinoza arguing that might makes right? Surely not in the realm of law amongst fellow humans. But he is insisting that we recognize that for us humans, there is something about being human that is different and, even, higher and more important. Spinoza couches his argument in the language of natural right, but what he is saying is that we must recognize that there are important differences between animals and humans.
At a time that values equality over what Friedrich Nietzsche called the “pathos of difference,” the valuation of human beings over animals is ever more in doubt. This comes home clearly in a story told recently by General Stanley McChrystal about a soldier who expressed sympathy for some dogs killed in a raid in Iraq. McChrystal responded, severely: “Seven enemy were killed on that target last night. Seven humans. Are you telling me you’re more concerned about the dog than the people that died?” The car fell silent again. “Hey, listen,” McChrystal said. “Don’t lose your humanity in this thing.” Many, no doubt, are more concerned, or at least equally concerned, about the deaths of animals as about the deaths of humans. There is ever-increasing discomfort about McChrystal’s common-sense affirmation of Spinoza’s claim that human beings simply are of more worth than animals.
The distinctions upon which the moral sense of human distinction is based are foundering. For DeWaal and Sullivan, the danger today is that we continue to insist on differences between animals and humans—differences that we don’t fully understand. The consequence of their openness to the humanization of animals, however, is undoubtedly the animalization of humans. The danger that we humans lose sight of what distinguishes us from animals is much more significant than the possibility that we underestimate animal intelligence.
I fully agree with DeWaal and Sullivan that there is a symphony of intelligence in the world, much of it not human. And yes, we should have proper respect for our ignorance. But all the experiments in the world do little to alter the basic fact that, no matter how intelligent and feeling and even conscious animals may be, humans and animals are different.
What is the quality of that difference? It is difficult to say and may never be fully articulated in propositional form. On one level it is this: Simply to live, as do plants or animals, does not constitute a human life. In other words, human life is not simply about living. Nor is it about doing tasks or even being conscious of ourselves as humans. It is about living meaningfully. There may, of course, be some animals that can create worlds of meaning—worlds that we have not yet discovered. But their worlds are not the worlds to which we humans aspire.
Over two millennia ago, Sophocles, in his “Ode to Man,” named man Deinon, a Greek word that connotes both greatness and horror, that which is so extraordinary as to be at once terrifying and beautiful. Man, Sophocles tells us, can travel over water and tame animals, using them to plough fields. He can invent speech, and institute governments that bring humans together to form lasting institutions. As an inventor and maker of his world, this wonder that is man terrifyingly carries the seeds of his destruction. As he invents and comes to control his world, he threatens to extinguish the mystery of his existence, that part of man that man himself does not control. As the chorus sings: “Always overcoming all ways, man loses his way and comes to nothing.” If man so tames the earth as to free himself from uncertainty, what then is left of human being?
Sophocles knew that man could be a terror; but he also glorified the wonder that man is. He knew that what separates us humans from animals is our capacity to alter the earth and our natural environment. “The human artifice of the world,” Arendt writes, “separates human existence from all mere animal environment…” Not only by building houses and erecting dams—animals can do those things and more—but also by telling stories and building political communities that give to man a humanly created world in which he lives. If all we did as humans was live or build things on earth, we would not be human.
To be human means that we can destroy all living matter on the earth. We can even, today, destroy the earth itself. Whether we do so or not, to live on earth today is now a choice that we make, not a matter of fate or chance. Our earth, although we did not create it, is now something we humans can decide to sustain or destroy. In this sense, it is a human creation. No other animal has such a potential or such a responsibility.
There is a deep desire today to flee from that awesome and increasingly unbearable human responsibility. We flee, therefore, our humanity and take solace in the view that we are just one amongst the many animals in the world. We see this reductionism above all in human rights discourse. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of a politics of life—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the meaningful life, human rights rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Hannah Arendt saw human rights as dangerous precisely because they risked confusing the meaning of human worldliness with the existence of mere animal life. For Arendt, human beings are the beings who build and live in a political world, by which she means the stories, institutions, and achievements that mark the glory and agony of humanity. To be human, she insists, is more than simply living, laboring, working, acting, and thinking. It is to do all of these activities in such a way as to create, together, a common life amongst a plurality of persons.
I fear that the interest in animal consciousness today is less a result of scientific proof that animals are human than it is a sign of increasing discomfort with the world we humans have built. A first step in responding to such discomfort, however, is a reaffirmation of our humanity and our human responsibility. There is no better way to begin that process than by engaging with a very human response to the question of our animality. Towards that end, I commend to you “One of Us,” by John Jeremiah Sullivan.
“Arendt on Narrative Theory and Practice”
Allen Speight, College Literature, Volume 38, Number 1, Winter 2011, pp. 115-130
Allen Speight, Director of the Institute for Philosophy and Religion at Boston University, argues for Arendt’s place among theorists of narrative such as Alasdair MacIntyre, Charles Taylor, and Paul Ricoeur. While he does indicate contemporary questions in both the Anglo-American and continental traditions throughout the article, he delivers particularly rich insights into Arendt’s engagement with three canonical thinkers. Specifically, he highlights aspects of Arendt’s use of conceptions of narration in developing her ideas of action in The Human Condition. In each aspect, he sees Arendt drawing on a specific philosophical precursor—Aristotle, Hegel, and Augustine in turn—but also diverging from them.
In relation to Aristotle, Speight focuses on how action reveals the “who,” how the actor emerges not from his intention but from his impact on the world. As does Aristotle, Arendt places a strong focus on drama. Aristotle and Arendt both hold that “dramatic actions” allow us to “construe what sort of a character an agent has.” However, rather than focusing on the reception of the audience, Arendt links the spectator to the actor. Indeed, expanding from Speight’s interpretation, we might say Arendt opens another center in the actor himself with her idea of the daimon, who watches over one’s shoulder.
From Hegel, Speight sees Arendt picking up on the tragic nature of action and how this leads to a need for forgiveness. The agent will not get what he wants and indeed will often perish due to effects that he cannot foresee. Speight makes a striking link to Hegel here:
“A stone thrown is the devil’s,” Hegel liked to say: action by its nature is not something construable in given terms but is a kind of “stepping-forth” or opening up of the unexpected and unpredictable (Elements of the Philosophy of Right.) The classic, tragic examples of action in its openness—Antigone’s deed, for example, which both Hegel and Arendt were drawn to—present in an intensified way what is an underlying condition within ordinary action, one requiring the need for some means of reconciliation.
With the line “A stone thrown is the devil’s,” Hegel lets personified evil step in as a kind of placeholder that opens the question of how the effect of action will change the actor. Unlike in Hegel, though, the ultimate judge is not institutionalized world history, but the world as the space in which the who is revealed.
Stepping back chronologically, Speight then turns to Augustine as a source of Arendt’s idea of narrative rebirth. Here he picks up on an existentialist debate through Sartre: given that one’s account of one’s life can change it fundamentally, do we have a responsibility to an authentic narration? To what extent are we free when we tell our own stories? Arendt rejects the possibility that a life can simply be “made” in narrative. However:
for Arendt the distinction between a life that is “lived” and a story that is “made” involves two distinctly non-Sartrean consequences. The first we have already seen in her “daimõn thesis”: that precisely because we live rather than make a life, there is a privileged—but (pace Sartre) a not necessarily false—retrospective position from which we must view the “who,” the daimõn, that is revealed in our lives. Thus, as we have seen, the “who” is visible “ex post facto through action and speech” (Arendt 1958, 186) and this retrospectivity in turn privileges the work of the discerning interpretive historian or storyteller. (121)
I find Speight’s repeated discussion of the daimon particularly relevant, since it offers an original way to talk about the belatedness of knowledge, of how it can come later, or even from the side, without privileging an end position as Hegel does.
In the second half of his article, Speight offers a reading of Men in Dark Times that illustrates how Arendt uses these three aspects of her narrative theory in her own practice of narration. His reading of the sections on Jaspers and Waldemar Gurian explicitly links the question of the daimon, biography, and how a person comes to appearance in the public realm. Readers following the growing subsection of Arendt scholarship engaged with Arendt’s literary dimension will find an original effort here that offers a model for future work connecting Arendt’s theoretical articulations with her writing practice.
“The wonder that man endures or which befalls him cannot be related in words because it is too general for words….That this speechless wonder is the beginning of philosophy became axiomatic for both Plato and Aristotle.”
-Hannah Arendt, "Philosophy and Politics"
Aristotle had told us that philosophy begins in thaumázein (θαυμάζειν), “to wonder, marvel, be astonished.” In the New Testament, the word appears only twice. In the parallel occurrences (Matthew 27:14 and Mark 15:5), Pilate marvels at the fact that Jesus says nothing. What is significant is that thaumázein is associated there with an experience for which there were no words. The word means a kind of initial wordless astonishment at what is, at the fact that what is, is. For Aristotle, thaumázein is the beginning of philosophy as wonder. It is not, for the Greeks, the beginning of political philosophy.
Key here is the fact of speechlessness. This wonder “cannot be related in words because it is too general for words.” Arendt suggests that Plato encountered it in those moments in which Socrates, “as though seized by a rapture, [fell] into complete motionlessness, just staring without seeing or hearing anything.” It follows that “ultimate truth is beyond words.” Nevertheless, humans want to talk about that which cannot be spoken. “As soon as the speechless state of wonder translates itself into words, it … will formulate in unending variations what we call the ultimate questions.” These questions (What is being? Who is the human being? What is the meaning of life? What is death?) “have in common that they cannot be answered scientifically.” Thus Socrates’ “I know that I do not know” is actually an expression that opens the door to the political, public realm, in the recognition that nothing said there can ever have the quality of being final.
According to Arendt, Socrates has three distinct aspects. First, he arouses citizens from their slumber: this is the gadfly who gets others to think, to think about those topics for which there is no final answer. Second, as “midwife” he decides (he makes evident) whether an opinion is fit to live or is merely an unimpregnated “wind-egg” (cf. Theaetetus 152a, 157d, 161a): Greek midwives not only assisted in the delivery but determined whether the newborn was healthy enough to live. Socrates concludes the discussion in the Theaetetus (210b) by saying that all they have done is produce a mere wind-egg, and that he must leave as he has to get to the courthouse for his trial. Lastly, as a stinging ray, Socrates paralyzes in two ways: he makes you stop and think, and he destroys the certainty one has in received opinions. Arendt is clear that this can be dangerous. She goes on to say that “thinking is … dangerous to all creeds and, by itself, does not bring forth any new creed,” but she is equally clear that “non-thinking … has its dangers [which are] the possession of rules under which to subsume particulars.” To think is dangerous; but to think is to desire wisdom, which one does not possess. It is thus a longing; it is eros, and, as with all things erotic, “to bring this relationship into the open, make it appear, men speak about it in the same way that the lover wants to speak of his beloved.” Where does this leave one? For the most part, in normal times, thinking is not of political use. It is, however, of use in times when the “center does not hold,” in times of crisis.
At these moments, thinking ceases to be a marginal affair in political matters. When everybody is swept away unthinkingly by whatever everyone else does and believes in, those who think are drawn out of hiding because their refusal to join is conscious and thereby becomes a kind of action. The purging element … is political by implication. For this destruction has a liberating effect on another human faculty, the faculty of judgment, … the faculty to judge particulars without subsuming them under those general rules which can be taught and learned until they grow into habits.
Suppose we read Arendt as saying that political philosophy must now turn its thaumázein, its wonder, not to the fact that what is, is, but to human reality, to the world of human activity. This would involve a change in philosophy, for which, she says, philosophers are not particularly well equipped. She thinks such a turn would rest on and derive from several elements. She mentions in particular Jaspers’ reformulation of truth as transcending the realm that can be instrumentally controlled, and thus as related to freedom; Heidegger’s analysis of ordinary everyday life; and existentialism’s insistence on action. It will be an inquiry into the “political significance of thought; that is into the meaningfulness and the conditions of thinking for a being that never exists in the singular and whose essential plurality is far from explored when an I-Thou relationship is added to the traditional understanding of human nature.”
What is problematic with purely philosophical thaumázein? The Thracian maid who appears in the title of Jacques Taminiaux’s book, and who stands for Arendt in his analysis, derives from an account in the Theaetetus. Upon encountering Thales, who, all-focused in his wondering, had fallen into a well, the maid notes that the philosopher had “failed to see what was in front of him.” Mary-Jane Robinson notes four elements of Arendt’s suspicion of excessive wonder, a suspicion one assumes was directed at Heidegger. First, such wonder allows avoidance of the messiness of the everyday world; second, such “uncritical openness” leads philosophers to be “swept away by dictators”; third, such wonder alienates the philosopher (as with Heidegger post-1945) from the world around him; and lastly, such openness to the mystery of the world “disables decision making.”
If politics is the realm of how humans appear to each other when they act and speak, from whence does it come? The only possible answer is that politics is an emergence from a realm which is neither that of action nor that of speech. The political emerges from nothingness. Perhaps this is the realm to which poetry can call us – and some of Arendt’s most moving essays are on poetry and literature – but such a realm is not political. In this sense there is a limit to political science, as there is to all science. For Arendt, there are no underlying causes out of which that which is political must emerge. This is why political action is always for her a beginning and a marvel for which we have to try to find words.
Freeman Dyson, the eclectic physicist, took good aim at philosophy last week in a review of Jim Holt’s silly book, Why Does the World Exist?: An Existential Detective Story. Holt went around to “a portrait gallery of leading modern philosophers” and asked them the Leibnizian question: “Why is there something rather than nothing?” The book offers their answers, along with biographical descriptions.
For Dyson, Holt’s book “compels us to ask” these “ugly questions.” First, “When and why did philosophy lose its bite?” Philosophers were once important. In China, Confucius and his followers made a civilization. So too in Greece did Socrates and then the schools of Plato and Aristotle give birth to the Western world. In the Christian era, Jesus and Paul, then Augustine and Aquinas, granted depth to dominant worldviews. Philosophers like Descartes, Hobbes, and Leibniz were central figures in the scientific revolution, and philosophical minds like Nietzsche, Heidegger, and Arendt (even if one was a philologist and the other two refused the name philosopher) have become central figures in the experience of nihilism. Against these towering figures, the “leading philosophers” in Holt’s book cut paltry figures. Here is Dyson:
Holt's philosophers belong to the twentieth and twenty-first centuries. Compared with the giants of the past, they are a sorry bunch of dwarfs. They are thinking deep thoughts and giving scholarly lectures to academic audiences, but hardly anybody in the world outside is listening. They are historically insignificant. At some time toward the end of the nineteenth century, philosophers faded from public life. Like the snark in Lewis Carroll's poem, they suddenly and silently vanished. So far as the general public was concerned, philosophers became invisible.
There are many reasons for the death of philosophy, some of which were behind Hannah Arendt’s refusal to call herself a philosopher. Philosophy was born, at least in its Platonic variety, out of the thinker’s reaction to the death of Socrates. Confronted with the polis that put the thinker to death, Plato and Aristotle responded by retreating from the world into the world of ideas. Philosophical truth separated itself from worldly truths, and idealism was born. Realism was less a return to the world than a reactive fantasy against idealism. In both, the truths that were sought were otherworldly truths, disconnected from the world.
Christianity furthered the divorce of philosophy from the world by imagining two distinct realms, the higher realm existing beyond the world. Science, too, taught that truth could only be found in a world of abstract reason, divorced from real things. Christianity and science together gave substance to the philosophical rebellion against the world. The result, as Dyson rightly notes, is that philosophy today is as abstract, unworldly, and irrelevant as it is profound.
What Dyson doesn’t explore is why philosophers of the past had such importance, even as they also thought about worlds of ideas. The answer cannot be that ideas had more import in the past than now. On the contrary, we live in an age more saturated in ideas than any other. More people today are college educated, literate, and knowledgeable about philosophy than at any period in the history of the world. Books like Holt’s are proof positive of the profitable industry of philosophical trinkets. That is the paradox: at a time when philosophy is read by more people than ever, it is less influential than it has ever been.
One explanation for this paradox is nihilism: the devaluing or revaluing of the highest values. The truth about truth turned out to be neither so simple nor so singular as the philosophers had hoped. An attentive inquiry into the true and the good led not to certainty, but to ideology critique. For Nietzsche, truth, like the Christian God, was a human creation, and the first truth of our age is that we recognized it as such. That is the precondition for the death of God and the death of truth. Nihilism has not expunged ideas from our world, but multiplied them. When speaking about the “true” or the “good” or the “just,” Christians, Platonists, and moralists no longer have the stage to themselves. They must now shout to be heard amongst the public relations managers, advertisers, immoralists, epicureans, anarchists, and born-again Christians.
Dyson ignores this strain of philosophy. He does point out that Nietzsche was the last great philosopher, but then dismisses Heidegger, who “lost his credibility in 1933,” and even Wittgenstein, who, if a woman attended his lectures, would remain silent until she left. And yet it is Heidegger who has given us the great literary masterpieces of twentieth-century philosophy.
His work on technology (“The Question Concerning Technology”) and art (“The Origin of the Work of Art”) has been widely read in artistic, literary, and lay circles. It is hard to imagine a philosopher more engaged with science and literature than Heidegger was. He read physics widely, co-taught courses at the house of the Swiss psychiatrist Medard Boss, and also taught seminars with the German novelist Ernst Jünger.
It seems worthwhile to end with a poem of Heidegger's from his little book, Aus der Erfahrung des Denkens/From Out of the Experience of Thinking:
Drei Gefahren drohen dem Denken
Die gute und darum heilsame Gefahr ist die Nachbarschaft des singenden Dichters.
Die böse und darum schärfste Gefahr ist das Denken selber. Es muß gegen sich selbst denken, was es nur selten vermag.
Die schlechte und darum wirre Gefahr ist das Philosophieren.
Three dangers threaten thinking.
The good and thus wholesome danger is the nearness of the singing poet.
The evil and thus sharpest danger is thinking itself. It must think against itself, something it can do only rarely.
The bad and thus confusing danger is philosophizing.
“The Origin and Character of Hannah Arendt's Theory of Judgment”
David L. Marshall
Political Theory 2010 38 (3) 367-393
Drawing chiefly on entries between 1952 and 1957 in Arendt's recently published Denktagebuch, David Marshall proposes an account of the origin of Arendt's theory of judgment based on her early readings of Hegel, Aristotle, and Kant. Marshall sets the broader frame of his argument in terms of the shift from Arendt's negative appraisal of Kant's philosophy in the second Critique, as recorded in her unpublished Berkeley lecture of 1955, to her embrace of the third Critique in 1970 (in Lectures on Kant's Political Philosophy). Arendt saw the categorical imperative as concerning only the individual and thus ignoring the plurality of the world. Kant's aesthetics offers her the resources for a bold shift in political thinking, but critics argue that too much emphasis on the individual's subjective decision (for example, in the idea of taste) potentially undermines an eventual group judgment.
One of Marshall's strongest contributions helps explain how these group judgments develop in Arendt's view. Taking up an entry from December 1952 in the Denktagebuch on Hegel's Logic, he argues that Arendt's early understanding of judgment involves a move from particular to general characterized by “continuity” rather than “subsumption” (Hegel, cited by Marshall, 373). As an example, the judgment “Cicero is great” would not place Cicero under an already existing definition of greatness, but would lead to a reconsideration of both terms. For Arendt this reconsideration points the way to a discussion of the shifts in meaning involved. Thus “in an Arendtian gloss, Hegel's emphasis on reflective judgment is a commitment to worldliness, to history, and to the particular” (375). From a broader perspective, Marshall's reading complicates the question of Hegel's influence on Arendt by showing how he positively shaped her thought. Further work in this direction, drawing on the Denktagebuch, will be of great value in contrasting this influence with her general use of Hegel in her published work, where he indicates an automatic development of history that threatens freedom.
The following section focuses on Aristotle's use of the term krinein in the Rhetoric and Arendt's double translation of the term as urteilen and entscheiden (judging and deciding). Marshall points out that the judge in Aristotle's text is not merely a spectator but also, at least potentially, an actor. As in the section on Hegel, Marshall sees this in terms of a turn away from the general and towards “a logic of the example” (379). One intriguing point for future research, mentioned briefly, concerns the connection between Arendt's reading of the Rhetoric and that of Heidegger in the summer semester of 1924 (published as Grundbegriffe der aristotelischen Philosophie).
The remainder of the article places these specific engagements with Hegel and Aristotle in the context of Arendt's 1957 notes in the Denktagebuch that document her careful rereading of the Critique of Judgment. While Marshall sees these notes as being largely in line with the published 1970 Kant lectures, he employs the specifications made in his exegesis to respond to five criticisms of Arendt's theory of judgment from contemporary scholars broadly related to the supposed danger of the aesthetic dimension of her thought. Some readers may find this aspect of the article to be posturing and others may think that he sets himself too large a task, since each criticism could be explicated and parsed at much greater length. However, with his pointers to key sections of the Denktagebuch, Marshall offers a key contribution to growing work on the importance of this text and opens a number of lines of future inquiry.
-Review by Jeffrey Champlin
There is probably no question more debated in the course of the Middle Eastern uprisings than that of the status of human rights. Anyone familiar with the region knows that the status of human rights in the Middle East is at best obscure. The question of why there was no “revolution” in Lebanon is a very complex one, tied to the fate of Syria and to the turbulent Lebanese politics since the end of the civil war, and hence cannot be fully answered here. In a vague sense it can of course be said that Lebanon is the freest Arab country and that as such it bears a distinctively different character.
While at face value the statement is true, being “more free than” one’s neighbors in the Middle East simply understates the problem. Just to outline the basic issues, Lebanon’s record on human rights has been a matter of concern for international watchdogs on the following counts:
Security forces arbitrarily detain and torture political opponents and dissidents without charge; different groups (political, criminal, terrorist, and often a combination of the three) intimidate civilians throughout a country in which the presence of the state is at best weak; freedom of speech and of the press is severely limited by the government; Palestinian refugees are systematically discriminated against; and homosexual intercourse is still considered a crime.
While these issues remain at the level of the state, in society a number of other issues are prominent: abuse of domestic workers; racism (for example, excluding people of color and maids from the beaches); violence against women; and homophobia, which recently even included a homophobic rant in a newspaper of the prestigious American University of Beirut. The list could go on forever.
The question of gay rights in Lebanon remains somewhat paradoxical. On the one hand, article 534 of the Lebanese Penal Code explicitly prohibits homosexual intercourse, since it “contradicts the laws of nature,” and makes it punishable with prison. On the other hand, Beirut, and Lebanon, has remained, against all odds and for centuries, a safe haven for many people in the Middle East fleeing persecution or looking for a more tolerant lifestyle.
That of course includes gays and lesbians, and it is not uncommon to hear of gay parties held from time to time in Beirut’s celebrated clubs. At the same time, enforcement of the law is sporadic; like everything in Lebanon, it might happen and it might not; best to read the horoscope in the morning and pray for good luck. A few pro-LGBT NGOs have been created in the country since the inception of “Hurriyyat Khassa” (Private Liberties) in 2002.
In 2009 the Lebanese LGBT organization Helem launched a groundbreaking report on the legal status of homosexuals in the entire region, which documents a case in which a Lebanese judge ruled against the use of article 534 to prosecute homosexuals.
It is against the background of this turbulent scenario that Samer Daboul’s film “Out Loud” (2011) came to life, an unusual tale about friendship and love set in postwar Lebanon, in which five friends and a girl set out on a perilous journey to find their place in the world.
Though the plot of the film seems simple, underneath the surface lurks a challenge to the traditional morals and taboos of Lebanese society: homosexuality, the role of women, the troubled past of the war, delinquency, crime, honor. For Lebanese cinema, this marks a turning point.
This wouldn’t be so important for the question of rights and freedoms in Lebanon were it not for a documentary, “Out Loud – The Documentary”, released together with the film, which documents in detail the ordeal the director, actors, and crew had to go through in order to complete the film.
Shot in Zahlé, in the mountainous heartland of Lebanon and what the director called “a city and a nation of conservatism and intolerance”, the film – a commercial feature about family violence, gay lovers, and the boundaries of relationships between men and women – was met from the very beginning, as the documentary reports, with the very angry mobs, insults, and physical injuries it so vehemently tried to overcome. It is a film not about the Lebanon of fifteen or twenty years ago, but about the Lebanon of here and today.
Daboul writes: “Although I grew up in the city in which “Out Loud” was filmed, even I had no idea how difficult it would be to make a movie in a nation plagued by violence, racism, sexism, corruption and a lack of respect for art and human rights.” The purpose of “Out Loud”, of course, wasn’t only to make a movie but to create a school of life, in which the maker, the actors, and the audience could all have a peaceful chance to re-examine their own history and future.
Until very recently, in the absence of a public space in Lebanon, conflicts were settled by shooting, kidnapping, and blackmail at the hands of armed militias spread throughout the country and acting in the name of the nation.
The wounds have been very slow to heal, as is visible in the contemporary political panorama. Recently, a conversation with an addiction counselor in Beirut revealed alarming statistics on youth mental illness, alcoholism, and drug addiction across all social classes in Lebanon, to which I will devote a different article.
Making films in Lebanon is an arduous process: filmmakers not only receive no support from the state but are also subject to an enormous censorship bureaucracy intent on making sure that the content of films does not run counter to religious and political sensibilities. In the absence of strong state powers, the regulations are often malleable and tend to protect the sensibilities of political blocs and religious leaders rather than state security, if any such thing exists.
The whole idea of censorship of ideas is intimately intertwined with the reality of freedom and rights and with the severe limitations – both physical and intellectual – placed upon the public space.
In the Middle East, censoring a gay relationship is an established practice meant to protect public morality; yet what we hear on the news daily – theft, murder, kidnapping, abuse, rape, racism – requires no such censorship and is consumed by the very same public.
If there is one thing one can learn from Hannah Arendt about freedom of speech here, it is what Roger Berkowitz writes in “Hannah Arendt and Human Rights”:
The only truly human rights, for Arendt, are the rights to act and speak in public. The roots for this Arendtian claim are only fully developed five years later with the publication of The Human Condition. Acting and speaking, she argues, are essential attributes of being human. The human right to speak has, since Aristotle defined man as a being with the capacity to speak and think, been seen to be a “general characteristic of the human condition which no tyrant could take away.”
Similarly, the human right to act in public has been at the essence of human being since Aristotle defined man as a political animal who lives, by definition, in a community with others. It is these rights to speak and act –to be effectual and meaningful in a public world – that, when taken away, threaten the humanity of persons.
While these ideas might seem oversimplified and rather vague in a region “thirsty” for politics, they establish a number of crucial distinctions that must be taken into account in any discussion about human rights. Namely:
1) The failure of human rights is a fundamental fact of the modern age
2) There is a distinction between civil rights and human rights, the latter being what people resort to when the former have failed them
3) It is the fact that we appear in public and speak our minds to our fellow men that ensures that we live amid a plurality of opinions and perspectives; this is the ultimate indicator of a life lived with dignity.
Even if we have a “right” to a house, to an education, and to citizenship (that is, to belonging to a community), if we do not have the right to speak and act in public and to express ourselves (as homosexual, woman, dissident, and what not), we are not being permitted to become fully human. Regardless of the stability of political institutions and the provision of basic needs and security, there is no such thing as a human world – a human community – in the absence of the possibility of appearing in the world as what we truly are.
“Out Loud” – both the film and the documentary – is a testimony to the degree to which the many elements composing the multi-layered landscape of Lebanese society are at tremendous risk of worldlessness, being subject to an authority that relies on violence in lieu of power. Power and violence could not be more opposed.
Hannah Arendt writes in her journals:
Violence is measurable and calculable and, on the other hand, power is imponderable and incalculable. This is what makes power such a terrible force, but it is there precisely that its eminently human character lies. Power always grows in between men, whereas violence can be possessed by one man alone. If power is seized, power itself is destroyed and only violence is left.
It is always the case in dark times that peoples – and also the intellectuals among them – put their entire faith in politics to solve the conflicts that emerge in the absence of plurality and of the right to have rights, but nothing could be more mistaken. Politics cannot save, cannot redeem, cannot change the world. Just like the human community, it is something entirely contingent, fragile and temporary.
That is why no decision made at the level of government and policy can replace the spontaneity of human action and appearance. It is here that the immense worth of “Out Loud” lies: in enabling a generation that is no longer afraid of hell – for whatever reason – to have a conversation. It is there that the rehabilitation of the public space is at stake, not in building empty parks to museumify a troubled past, as has so often been the case in Beirut. In an open conversation, people will continue contesting the legacy and appropriating the memory not as a distant past, but as their own.
The case of Lebanon remains precarious. The country’s clergy has recently united in a call for more censorship, and today it was revealed that the security services summon people for interrogation over what they have posted on their Facebook accounts. HRW condemned the performance of “homosexuality tests” on detainees in Lebanon; this, at least, sparked a debate: a discussion on the topic ensued at the seminar “Test of Shame” held at Université Saint-Joseph in Beirut, and the Lebanese Medical Society concluded that those tests are of no scientific value.
In a country like Lebanon, plagued by decades of war and violence, as Samer Daboul has said of his film, people are more often than not engaged in survival and just that – surviving from one war to another, from one ruler to another, from one abuse to another – and as such, the responses of society to the challenges of the times are of an entirely secondary order. But what he has done in his films reflects what we, those who still have a little faith in Lebanon, should hold as a principle: “It’s time to live. Not to survive”.
During a conference organized in her honor in Toronto, Hannah Arendt was asked by Hans Morgenthau to categorize herself: “What are you? Are you a conservative? Are you a liberal? Where is your position in the contemporary possibilities?”
Arendt replied: “I don’t know and I’ve never known. And I suppose I never had any such position. You know the left think that I am conservative, and the conservatives think that I am a maverick or God knows what. And I must say I couldn’t care less. I don’t think that the real questions of this century will get any kind of illumination by this kind of thing.”
It is precisely in this spirit that one should read Jens Hanssen’s recent paper “Reading Hannah Arendt in the Middle East: Preliminary Observations on Totalitarianism, Revolution and Dissent”.
Hanssen offers in his paper a rather detailed survey of how Arendt has been read – and misread – by the Middle East, beginning with Kanan Makiya’s World Policy Journal article (2006) “An Iraqi Discovers Arendt”, all the way to Israeli revisionist (and evidently critical of Israel) scholars such as Idith Zertal and Amnon Raz-Krakotzkin.
The particular examples he brings up are paradigmatic of this already established tradition of appropriations of Hannah Arendt that though emerging from her political thought, have much to do with politics and little with thinking.
For example, the case of Kanan Makiya is interesting if only because of his controversial – and rather maverick – position in the landscape of Iraqi politics. This Marxist engineer-turned-neo-conservative political advisor (in Hanssen's telling) is apparently credited with being the first Arab author to apply Arendt’s phenomenology of totalitarianism to Baathist Iraq.
Makiya makes a case for Iraq as a totalitarian regime in Arendt’s terms, drawing a straight line from anti-Semitism and intellectual support for Saddam Hussein to comparisons with Nazi Germany. Though his book The Republic of Fear stands for many Iraqis as the greatest testimony to the sad state of affairs under Hussein, the analysis is at best a misappropriation in many respects and seems to fall within the line of warmongering that Arendt so vehemently criticized in McCarthyism: to use totalitarian means to fight totalitarian enemies, real or imagined.
The most interesting reading he brings up, however, is Vince Dolan’s course at the American University of Beirut, “Contemporary Philosophical Reflections on the Use of Political Violence”, taught in the spring of 1983. Dolan tailored the course to polemicize Arendt’s distinction between power and violence – perhaps the most difficult in all of her thought – by first exposing students to Habermas’ evaluation of Arendt’s project and then bringing her into conversation with Popper, Adorno, and Horkheimer.
While this practice is common among liberal academics, the integration of Arendt into the corpus of critical theory has been time and again debunked by serious Arendt scholars, of which I might bring only two salient examples:
First, Dana Villa (Arendt and Heidegger, 1996, p. 3-4) argues that although Habermas called Arendt’s theory of political action “the systematic renewal of the Aristotelian concept of praxis”, there is no one who would argue more vehemently against Aristotle (and the whole project of critical theory) than Arendt.
According to Villa, critical theory has profited immensely from Arendt’s renewal of Aristotelian praxis – as opposed to the instrumentalization of action – in order to highlight the intersubjective nature of political action, when in fact her renewal is a radical reconceptualization, one meant to overcome rather than to restore the tradition of political thought of and since Aristotle.
Second, Fina Birulés insisted in an interview from 2001 that there is a wide gap between Arendt’s radical theory of democracy and Habermas. According to Birulés, though Habermas is deeply indebted to Arendt, his theory of communicative action is hardly political at all and he reduces the concept of plurality to some sort of ideal community of dialogue.
Doubtless Hanssen is correct in pointing out that Arendt did not provide a concise definition of totalitarianism. Definition is a privilege of theory that Arendt’s story-telling didn’t embrace; she “merely” listed phenomenological elements. However, he also notes that Arendt insisted only two forms of totalitarianism ever existed: Nazi Germany and the Soviet Union. This distinction is crucial for understanding the rest of his paper.
Nowadays totalitarianism – as much as the banality of evil – is a slogan in newspapers and politics, often lacking in meaning and intention and this brings to mind the whole post 9-11 discourse in philosophy and politics in which Islam and Islamism – among other things – take the place of the “old” totalitarian movements.
While it is true that in phenomenological and structural terms nothing since the collapse of the Soviet Union can be called strictly totalitarian, there is no doubt that there are totalitarian elements in many movements and policies not only in the Middle East today, but also in the democratic West.
Among other – far less influential – readings of Arendt, Hanssen lists the translations into Arabic and Persian, providing crucial information about how and why Arendt informed certain – mostly Arab – authors.
Lastly there is an elaborate discussion on the use – and again, abuse – of Arendt by Israeli scholars since her “rehabilitation” in Israel that coincided with the rise to prominence of certain revisionist scholars.
Though Hannah Arendt wasn’t exclusively concerned with Zionism or the Jewish question, it is undeniable that her entire work was informed by her status and experience as a Jew in the Europe of the early 20th century.
There are many Hannah Arendts and to this effect Jerome Kohn writes in the introduction to her “Jewish Writings”: “In 1975, the year she died, she spoke of a voice that comes from behind the masks she wears to suit the occasions and the various roles that world offers her. That voice is identical to none of the masks, but she hopes it is identifiable, sounding through all of them”.
Something that is identifiable throughout her work – though identical with it nowhere – is her concern with the young State of Israel, in spite of the controversies in which she later became trapped.
While it is true that Arendt was very critical of the Zionist establishment and of the course that Israel had taken, it is also important to remember that her writings (“The Crisis of Zionism” and “Peace or Armistice in the Middle East”) were anchored in an intense anxiety over the Jewish people regaining control of their own destinies and entering the realm of politics.
Julia Kristeva expressed this best in her speech upon receiving the Hannah Arendt Prize in 2006, making it clear how for Arendt the survival of Israel and the refoundation of politics in the West was part of one and the same task:
Thirty years after her death, added to the danger she tries to confront through a refoundation of political authority and which, as they get worse, make this refoundation increasingly improbable, is the new threat that weighs on Israel and the world. Arendt had a premonition about it as she warned against underestimating the Arab world and, while giving the State of Israel her unconditional support as the only remedy to the acosmism of the Jewish people, and as a way to return to the “world” and “politics” of which history has deprived, she also voiced criticism.
But Jerome Kohn writes also in the introduction to the Jewish Writings, “Already in 1948 Arendt foresaw what now perhaps has come to pass, that Israel would become a militaristic state behind closed but threatened borders, a “semi-sovereign” state from which Jewish culture would gradually vanish” (paraphrased from her “To Save the Jewish homeland”).
In her piece “Peace or Armistice in the Middle East,” Arendt laid out what is in my opinion a foundation for an ideal of Arab-Jewish cooperation in the Middle East – including even a surprisingly rare background on Arab personalities from Lebanon and Egypt who had lent support to the possibility of a Jewish settlement. But the religious fundamentalism and anti-Semitism that have now crystallized in the Middle East could not have been foreseen by Arendt, or at least not to the extent articulated by Kristeva:
Although many of her analyses and advances seem to us more prophetic than ever, Arendt could not foresee the rise of Islamic fundamentalism, nor the havoc it is wreaking in a world faced with the powerlessness of politics to respond, and the apolitia, the indifference created by the omnipresent society of the spectacle.
Hanssen concludes from reading Arendt on totalitarianism, revolution, and dissent in the Middle East that “one of the most powerful (in Arendt’s sense of power as consent-based), non-violent movements coming out of the Arab World today is the Boycott, Sanctions and Divestments campaign that Palestinian civil society groups have called for in 2005 and has now become a global counter-hegemonic phenomenon”, and he raises the question of whether Hannah Arendt would have supported the Palestinian BDS movement to bring about the end of the Israeli occupation.
On the one hand he argues that “the intellectual merit of BDS campaign from an Arendtian standpoint is that it is not based on old and invalid hyperbolic equation of Israel with Nazi Germany.” On the other hand, he also says:
There is certainly ample room for this kind of non-violent action in her writings. For one, she supported the economic boycott of German businesses in the 1930’s and was furious when Zionist Organization in Palestine broke it.
Leaving the associations with Nazi Germany aside, it is vital to recall that it was Arendt who said that not even on the moon is one safe from anti-Semitism, and that the State of Israel alone would not solve the Jewish question.
It is clear by now that the BDS campaign has blended no doubt altruistic elements of non-violent struggle with elements of the old anti-Semitism, in which little distinction is made between Israelis and Jews.
BDS has come to include not only a boycott of the settlements (articulated with great intelligence by Peter Beinart in his book “The Crisis of Zionism”) but also academic and cultural boycotts. In extreme cases, products have been boycotted not for being Israeli or produced in the settlements, but merely for being kosher products produced in Britain and the United States.
While it is more than clear that Arendt saw and foresaw the risks and dangers to which the Israeli polity was exposed by its leaders, she also articulated with clarity that the Jews were not alone responsible for this sad state of affairs. Whether or not Hannah Arendt’s ideal of a binational state is at all realizable at this point – bearing in mind the complexities of the Arab Spring – what is clear is that an ideology fed on old anti-Semitism and prejudice, as much as on uncritical views of Arab and Palestinian history, is very unlikely to produce the Arab-Jewish councils (at the heart of her theorizing on revolutions) upon which a secular and democratic state might be founded.
Elisabeth Young-Bruehl's final work, Childism, was published soon after her untimely passing in December 2011. In the book, Young-Bruehl, a longtime psychoanalyst and child advocate, focuses on the pervasive prejudice she believed overshadows many children in our society. Be it abuse or the modern-day phenomenon of helicopter parenting, she felt these injustices served to demarcate children, marking them as less worthy than adults, and resulting in unhealthy and damaging parent-child relationships.
Arendt Center intern Anastasia Blank is reading Childism and providing us with a chapter-by-chapter review, highlighting some of its most interesting and compelling insights and arguments. Her first post last week provided an overview of the book and its themes. Today, she shares her thoughts and impressions of Chapter 1. We hope you are inspired to read along. You can purchase the book here.
Chapter One of Childism argues that prejudice emerges from a “we” against “them” mentality. This way of thinking not only separates a target group, but also defines the group as distinctly other from oneself. When this separation appears between children and adults, it is easy for the adult mind to think of children as immature and helpless. The child is projected as a feeble creature, produced by the adult, and thus owned by the adult; this is where the childism prejudice arises. By viewing children as a group incapable of independence, children come to be seen as needing adults to look after them, to rule them.
Young-Bruehl reminds us that children are at a stage of life in which they are developing independence and maturity. Turning children into objects to be governed only stunts this development and breeds further division between children and adults. Young-Bruehl reiterates the pervasive belief that “Children are ‘childish’, which is a negative adjective marking something an adult should not be. Being a grown-up is imagined as separating from what is childish by denigrating it and calling it shameful”. Many adults tend to intentionally separate their child and grown-up identities, which makes it difficult to recognize that children are constantly forming who they will become as adults. The separation between youth and maturity is not an abyss one leaps over on one’s eighteenth birthday; it is a bridge we build through our years of development. If one is lucky, this bridge will never be torn down; for those who are prejudiced towards children, it seems that tearing down or suppressing their own youth is what makes them a “real” adult. This is where the fissure in the understanding of what children need arises.
This first chapter of Childism provides a sweeping review of the field of prejudice studies, looking as far back as Aristotle’s assumptions about children as possessions and culminating in the present day. Young-Bruehl offers a definition of prejudice:
Prejudice corrupts understanding through a combination of partiality and defensiveness by setting up a hierarchical binary ‘on the grounds of X.’ A prejudgment that one class of beings is privileged over another extends to the idea that the class is superior, and fit to rule or dominate over another.
Prejudice blinds one from a view of equality and replaces it, in the case of childism, with the idea that an adult’s needs should be honored before a child's. Someone who thinks this way ignores the fact that we all exist among one another as like beings, together in the search for happiness and well-being. We all desire respect and wish for our needs to be appreciated, so it does not seem to follow that one person’s needs should be superior to those of children, simply because they are older.
Such biased thinking, however, is exactly how a prejudiced person thinks. A childist adult believes their needs are privileged over the needs of the youth and this arises through neglect, abuse, and the hunt for subservience, which in turn creates a suppression of healthy development. Young-Bruehl takes care to point out that, “a prejudice is a belief system, not a knowledge system about the group”; prejudices are beliefs, they are not facts.
One reason for this prejudice is the projection of unwanted aspects of oneself onto the child. According to Young-Bruehl, people project onto children features of themselves that they wish to get rid of. We deem children immature, but this may be because we fantasize about remaining children ourselves. We call children burdensome, but this may be because we cannot handle the burden of our own lives, our adult lives. It is possible that much of childism arises from a jealousy of something we can never return to. Or maybe the belief that we can never return to this time is a result of that prejudice. Either way, a disconnection has developed between adults and children that has caused us to view ‘childish’ as bad and ‘adult’ as good. I think we would be well served to reevaluate this created value system.