Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Amy Ireland is thinking about a genocide at the level of "genus-cide," the eradication of humanity itself. The threat is not weaponry but technology. And the exemplary precursor is the horse: "In the United States--where competition with the automobile was at its most intense--there were about 26 million horses in 1915. By the 1950s only 2 million remained." The question Ireland asks is whether humans are going the way of horses to be replaced by more efficient machines. Will artificially intelligent machines consume humans' fuel? "Far from being actively malevolent, an artificially intelligent agent endowed with enough power only needs to be indifferent to become a murderer. What are we, after all, but fuel? Atoms that can be freely disassembled and reassembled into something else - a thousand paperclip factories, for instance, or a massive supercomputer, capable of mathematical calculations we can't even begin to imagine in our current state of technological paucity. Even the clearly delimited goal of creating exactly one million paperclips can warrant the wasting of an entire planet, for a fully rational AI would never assign zero probability to the hypothesis that it has not yet achieved its goal.... There is something satisfying about imagining a malevolent artificial intelligence that actively wants to destroy us because it fears us, loathes us, or at least finds our existence frustrating and inconvenient. But the notion that something will destroy us out of sheer indifference is much harder to swallow because it forces us to consider the possibility of our utter insignificance. Bostrom surmises with all the level-headedness of a pure statistician that the odds against humanity's survival are overwhelmingly high. The default outcome of our construction of a single strong artificial intelligence is, quite plainly, extinction. 
His intention, naturally, is to raise awareness of the risks that lie behind this seemingly anodyne technological innovation and encourage governments, corporations or other entities that may one day attempt to build strong AI to implement rigorously tested control measures before letting the thing out of the box. All this is well and good, but it rests upon a deeper anthropomorphic supposition. What if the most radical gesture a flailing humanity can make at this juncture is not to increase its investment in security and control, but to pass it on? What if we are entangled in a larger evolutionary process that we never had control over in the first place? The real question then, might not be how to survive the construction of strong artificial intelligence but whether or not the survival of the human race is a good thing after all." Ireland is right to pose the question of "genus-cide," although her tone is a bit blithe. The threat is not the eradication of human beings but, as Arendt writes in The Human Condition, the loss of the human condition, those characteristics of being human like labor, work, action, and (sometimes) thinking. As Arendt writes, "This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself. There is no reason to doubt our abilities to accomplish such an exchange."
Karl Ove Knausgaard was commissioned to travel from Sweden to the Vikings' first settlement in Newfoundland and then drive across the United States in order to reflect on the state of America. In part one of his two-part "Saga," Knausgaard offers this insight into a specifically American form of poverty, the poverty of imagination and the abandonment of distinction: "I'd seen poverty before, of course, even incomprehensible poverty, as in the slums outside Maputo, in Mozambique. But I'd never seen anything like this. If what I had seen tonight--house after house after house abandoned, deserted, decaying as if there had been disaster--if this was poverty, then it must be a new kind of poverty, maybe in the same way that the wealth that had amassed here in the 20th century had been a new kind of wealth. I had never really understood how a nation that so celebrated the individual could obliterate all differences the way this country did. In a system of mass production, the individual workers are replaceable and the products are identical. The identical cars are followed by identical gas stations, identical restaurants, identical motels and, as an extension of these, by identical TV screens, which hang everywhere in this country, broadcasting identical entertainment and identical dreams. Not even the Soviet Union at the height of its power had succeeded in creating such a unified, collective identity as the one Americans lived their lives within. When times got rough, a person could abandon one town in favor of another, and that new town would still represent the same thing. Was that what home was here? Not the place, not the local, but the culture, the general?"
Peter Railton gave the John Dewey Lecture at the American Philosophical Association Meeting this year, where amidst reflections on philosophical thinking, personal courage, and political activism, he offered a guileless and moving account of his personal struggle with depression. "And what of depression? Perhaps we all know the mask of depression, that frozen, affectless face we catch glimpses of on our students, colleagues, and friends. I can't do anything about that. But perhaps I can do something about the face of depression--its visible image in the minds of our children and parents, teachers and students. Because in truth, we are still to a considerable degree in a world of 'Don't ask, don't tell' with regard to depression and associated mental disorders, such as anxiety, even though these will severely affect one in ten of us over the course of a lifetime, and often at more than one point in a lifetime. So there's nothing for it. Those who have dwelt in the depths of depression need to come out as well. Some already have, but far too few adult men (big surprise!), and especially far too few of the adult men who somehow have come to bear the stamp of respectability and recognition, and thus are visible to hundreds of students and colleagues. It's no big deal, right? We're all enlightened about this. Then why do the words stick in my throat when I tell you that another theme uniting the three episodes I have recounted from my life, and that has played an equally important role in shaping my philosophy, is that they were all accompanied by my depression. This moody high school student, this struggling protester, this anxious young faculty member--they were all me and they were all living through major depressive episodes at the time. And there have been other such episodes, some more recent. Thankfully, for me and especially for my family who have been through so much already, not right now. Did others know? I don't know.
Some must have guessed--perhaps those who themselves had known depression in their lives could see the mask of depression upon my face. But the thing is: I couldn't say it. I couldn't say, 'Look, I'm dying inside. I need help.' Because that's what depression is--it isn't sadness or moodiness, it is above all a logic that undermines from within, that brings to bear all the mind's mighty resources in convincing you that you're worthless, incapable, unloveable, and everyone would be better off without you. Not a steely-eyed, careful critique from which one might learn, but an incessant bludgeoning that exaggerates past errors while ignoring new information, eroding even the ability to form memories. A young man once had the courage to tell me, 'My brain is telling me to kill myself, but my body is saying "no."' Happily, his body won. But it doesn't always. Every year, thousands of young men don't win the battle. We are captive audiences to our own minds, and it can become intolerable." Depression, Railton suggests, is still in the closet, and this causes untold pain at colleges, where, as a recent study shows, the mental health of college freshmen is at an all-time low--something that will not surprise any of us who teach in this nation's colleges and universities.
In Railton's speech on depression discussed above, he also has this tidbit on meetings: "Oscar Wilde is still right--because the cost of building a society where the people have more say in how their lives are run is still many, many meetings. What is a meeting, after all, but people deliberating together with a capacity to act as a group that is more than just a sum of individual actions, and this sort of informed joint action is a precondition for significant social change. Come together, decide together, act together, and bear the consequences together. We must own our institutions or they will surely own us. As Aristotle told us, one becomes a citizen not by belonging to a polity or having a vote, but by shouldering the tasks of joint deliberation and civic governance. And there is no civic or faculty governance, no oversight of discrimination in hiring and promotion, no regulation of pollutants, no organization of faculty or students to initiate curricular reform, no mobilization by professional associations to protect their most vulnerable members or to promote greater diversity, no increased humaneness in the treatment of animals and human subjects, no chance to offset arbitrariness and bullying within offices and departments, no oversight of progress and revision of plans in response to changing circumstances, without actual people who care spending long hours in the work of planning, meeting, and making things happen. The alternative is for all these decisions to be made at the discretion of those on high--or not at all." At a moment when faith and participation in institutions are rare and the pursuit of private interests comparatively common, Railton's reminder of what Arendt calls the power of talking and acting together is worth heeding.
David Cole writes that the Senate Torture Report, when read in full, leads to fundamentally different conclusions than most of the headlines and early accounts suggest. Above all, the report blaming the CIA for lying may have missed the real story: "The full story is more complicated, and ultimately much more disturbing, than the initial responses--mine included--suggested. And because these documents may be the closest we come to some form of accountability, it is essential that we get the lessons right.... So why did the committee focus on efficacy and misrepresentation, rather than on the program's fundamental illegality? Possibly because that meant it could cast the C.I.A. as solely responsible, a rogue agency. A focus on legality would have rightly held C.I.A. officials responsible for failing to say no--but it also would have implicated many more officials who were just as guilty, if not more so. Lawyers at the Justice Department wrote a series of highly implausible legal memos from 2002 to 2007, opining that waterboarding, sleep deprivation, confinement in coffinlike boxes, painful stress positions and slamming people into walls were not torture; were not cruel, inhuman or degrading; and did not violate the Geneva Conventions. The same can be said for President George W. Bush, Vice President Dick Cheney and all the cabinet-level officials responsible for national security, each of whom signed off on a program that was patently illegal. The reality is, no one in a position of authority said no. This may well explain the committee's focus on the C.I.A. and its alleged misrepresentations. The inquiry began as a bipartisan effort, and there is no way that the Republican members would have agreed to an investigation that might have found fault with the entire leadership of the Bush administration. But while the committee's framing may be understandable as a political matter, it was a mistake as a matter of historical accuracy and of moral principle. 
The report is, to date, the closest thing to official accountability that we have. But by focusing on whether the program worked and whether the C.I.A. lied, the report was critically misleading. Responsibility for the program lies not with the C.I.A. alone, but also with everyone else, up to the highest levels of the White House, who said yes when law and morality plainly required them to say no."
Adam Phillips worries about what's inside us: "We are never as good as we should be; and neither, it seems, are other people. A life without a so-called critical faculty would seem an idiocy: what are we, after all, but our powers of discrimination, our taste, the violence of our preferences? Self-criticism, and the self as critical, are essential to our sense, our picture, of our so-called selves. Nothing makes us more critical--more suspicious or appalled or even mildly amused--than the suggestion that we should drop all this relentless criticism, that we should be less impressed by it and start really loving ourselves. But the self-critical part of ourselves, the part that Freud calls the super-ego, has some striking deficiencies: it is remarkably narrow-minded; it has an unusually impoverished vocabulary; and it is, like all propagandists, relentlessly repetitive. It is cruelly intimidating--Lacan writes of 'the obscene super-ego'--and it never brings us any news about ourselves. There are only ever two or three things we endlessly accuse ourselves of, and they are all too familiar; a stuck record, as we say, but in both senses--the super-ego is reiterative. It is the stuck record of the past ('something there badly not wrong', Beckett's line from Worstward Ho, is exactly what it must not say) and it insists on diminishing us. It is, in short, unimaginative; both about morality, and about ourselves. Were we to meet this figure socially, this accusatory character, this internal critic, this unrelenting fault-finder, we would think there was something wrong with him. He would just be boring and cruel. We might think that something terrible had happened to him, that he was living in the aftermath, in the fallout, of some catastrophe. And we would be right." In other words, critical thinking is essential, but let's also recall that it is dangerous. All thinking is an attack on the status quo and the common world in which we live. 
That is what Arendt meant when she wrote, "There are no dangerous thoughts. Thinking itself is dangerous." That doesn't mean we should stop thinking critically, but it does mean that thinking requires knowing when thinking is, and when it is not, needed. That is the moment of judgment.
Novelist Gary Shteyngart spent a week watching Russian television and living like a Russian oligarch: "Here is the question I'm trying to answer: What will happen to me--an Americanized Russian-speaking novelist who emigrated from the Soviet Union as a child--if I let myself float into the television-filtered head space of my former countrymen? Will I learn to love Putin as 85 percent of Russians profess to do? Will I dash to the Russian consulate on East 91st Street and ask for my citizenship back? Will I leave New York behind and move to Crimea, which, as of this year, Putin's troops have reoccupied, claiming it has belonged to Russia practically since the days of the Old Testament? Or will I simply go insane? A friend of mine in St. Petersburg, a man in his 30s who, like many his age, avoids state-controlled TV and goes straight to alternative news sources on the Internet, warns me in an email: 'Your task may prove harmful to your psyche and your health in general. Russian TV, especially the news, is a biohazard.' I'll be fine, I think. Russians have survived far worse than this. But, just in case, I have packed a full complement of anti-anxiety, sleep and pain medication."
Andy Greenwald considers what made the recently concluded sitcom Parks and Recreation successful and what its legacy might be: "Art doesn't always have to be a dark mirror reflecting reality. It can and should also be a window, thrown open to let in every last bit of possible light. Parks and Recreation never quite resembled the real America. But every episode was imbued with the idea that maybe it could, if only we, the people, cared a little more and tried a little harder. The Wire, the greatest drama of the young 21st century, left us with a tough legacy to reckon with. Parks and Rec, the best comedy of that same century, gifted us with a beautiful model to which we can collectively aspire. I doubt the future will be as bleak as David Simon's vision for it or as rosy as Mike Schur's. The joy of being a TV fan is that we get to consider both. That's not a cop-out, by the way. That's a compromise, and one that even President Leslie Knope could accept. After all, Parks was built on the bedrock belief that opposing ideas could not only have merit, they could coexist. Like the show itself, it's an idea that sounds simple but in practice is anything but."
"Arendt's Critique of Modern Society as an Analysis of Process Imaginary"
Tuesday, March 3, 2015
The Hannah Arendt Center, 1:00 pm
The Hannah Arendt Center announces three post-doctoral fellowships for the 2015-2016 academic year.
To learn more about the fellowships, including how to apply, click here.
Application Deadline: Thursday, March 5, 2015
HAC members at all levels are eligible to participate in a monthly reading group led online via video conference by Roger Berkowitz, Director of the Hannah Arendt Center.
For questions and to enroll in our virtual reading group, please email David Bisson, our Media Coordinator, at email@example.com.
Friday, March 6, 2015
Bluejeans.com, 11:00 am - 12:00 pm
"Figuring Rights: Wollstonecraft and the Right to Political Community"
Tuesday, March 10, 2015
The Hannah Arendt Center, 6:00 - 7:00 pm
Synopsis: A diverse group of South African actors tours the war-torn regions of Northern Ireland, Rwanda, and the former Yugoslavia to share their country's experiment with reconciliation. As they ignite a dialogue among people with raw memories of atrocity, the actors find they must once again confront their homeland's violent past, and question their own capacity for healing and forgiveness.
Tuesday, March 24, 2015
Weis Cinema, Campus Center, 6:30 pm
Putting Courage at the Centre: Gandhi on Civility, Society and Self-Knowledge
Invite Only. RSVP Required.
Property and Freedom: Are Access to Legal Title and Assets the Path to Overcoming Poverty in South Africa?
A one-day conference sponsored by the Hannah Arendt Center for Politics and Humanities at Bard College, the Human Rights Project, and the Center for Civic Engagement, with support from the Ford Foundation, The Brenthurst Foundation, and The University of The Western Cape
Monday, April 6, 2015
Bard College Campus Center, Weis Cinema, 10:00 am - 7:00 pm
Invite Only. RSVP Required.
Thursday and Friday, October 15 and 16, 2015
The Hannah Arendt Center's eighth annual fall conference, "Privacy: Why Does It Matter?," will be held on Thursday and Friday, October 15-16, 2015! We'll see you there!
This week on the Blog, Johannes Lang explores the moral and political consequences of emotion entering into the public sphere in the Quote of the Week. American moral and social philosopher Eric Hoffer provides this week's Thoughts on Thinking. In a special feature, we recognize Aliza Becker, one of our Associate Fellows, and her creation of the American Jewish Peace Archive: An Oral History of Israeli-Palestinian Peace Activists (AJPA). And we appreciate Arendt's engagement with Saint Augustine's "Confessions" in our Library feature.
This coming Friday, March 6th, the Hannah Arendt Center will host the fifth session of its Virtual Reading Group. We will be discussing Chapters 10-13 of The Human Condition.
The reading group is available to all members and is always welcoming new participants! Please click here to learn more!
Arendt had an impressive collection of Aristotle's works in her personal library. This is no surprise. After all, as Roger Berkowitz, Academic Director of the Hannah Arendt Center, wrote back in 2010, it was Aristotle who characterized humans as the only animal in possession of logos, or the ability to reason and participate in philosophical thinking. Not only that, but Aristotle also valued dramatic actions as public gestures out of which an actor's character emerges. These two ideas -- the significance of human beings' ability to think and of public action -- have since proven central to much of Hannah Arendt's philosophy.
Richard Halpern, “Eclipse of Action: Hamlet and the Political Economy of Playing,” Shakespeare Quarterly, Volume 59, Number 4, Winter 2008, pp. 450-482
As he formulates an original response to the classic problem of Hamlet’s non-action, Halpern offers one of the few critical analyses of Arendt’s reading of Adam Smith in The Human Condition. He shows how Arendt draws on Smith’s concepts of productive and unproductive labor to articulate her key concepts of work and labor. Moreover, his close reading draws our attention to an intriguing paradox in the temporality of action that may indicate a corrective—albeit a difficult one—to the current demand for instant gratification that often leads to cynicism in the face of great political challenges.
Halpern reminds us that Aristotle separates action from labor; Smith replaces action with production; and Arendt seeks to restore action to a place of prominence in the political realm. Arendt explicitly says that “the distinction between productive and unproductive labor contains, albeit in a prejudicial manner, the more fundamental distinction between work and labor” (HC 87). She does not simply take over Smith’s idea, but wishes to transfer his distinction from his own economic system (the “prejudice” of his own thought) to her own thinking of labor and work. Halpern’s analysis of Arendt’s move helps us start to think about her surprising appeal to 18th-century economic theory. Moreover, in her discussion of Smith (and better-known critique of Marx), I see her posing an even broader question: what does it mean to be productive, and what are the appropriate spheres of different types of productivity?
Within the realm of production, Halpern looks at how Smith offers a further distinction in Book 2, Chapter 3 of The Wealth of Nations, under the heading “Of the Accumulation of Capital, or of Productive and Unproductive Labor”:
There is one sort of labor which adds to the value of the subject upon which it is bestowed: there is another which has no such effect. The former, as it produces a value, may be called productive; the latter, unproductive labour. Thus the labour of a manufacturer adds, generally, to the value of the materials which he works upon, that of his own maintenance, and of his master’s profit. The labor of a menial servant, on the contrary, adds to the value of nothing. (Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, ed. Edwin Cannan (Chicago: U of Chicago P, 1976), 351.)
Smith draws a distinction between labor that holds or builds value (say, the manufacture of a chair) and labor that evaporates the moment the worker completes it (such as cleaning the house or washing clothes). Classical political economists of the 18th and 19th centuries engaged in wide-ranging debates over what should “count” as value before capitalist countries agreed on the ratio of labor to output, or per capita GDP, as the standard; socialist countries, following the USSR, adopted an alternative “material product system” that prioritized the amount of goods. In a time of environmental change, this glimpse into the history of economic theory may offer a helpful reminder that society can decide to change the standard of economic success.
According to Halpern, Arendt draws from Smith not to rehabilitate an outmoded aspect of economic theory, but to draw inspiration for her creation of distinct conceptual spaces for labor, work, and action. Specifically, she aligns Smith’s “unproductive labor” with her circular conception of labor and “productive labor” with her linear conception of work. This does not mean that labor is unproductive but it does require a clarification of different types of productivity. I see it as useful to keep the discussion on productivity since these spheres of private life and cultural and industrial economy then offer a contrast to the political sphere where action can happen. Action is neither circular like labor, nor linear like work, but has its own peculiar directionality and temporality. Halpern’s analysis helpfully zeroes in on the perplexing relation between the ephemerality of labor and action and action’s desire for permanence:
The temporal paradox of the political is that while it aims at immortality, action and speech are, in themselves, evanescent: “Left to themselves, they lack not only the tangibility of other things, but are even less durable and more futile than what we produce for consumption” (HC 95). Like Smith’s unproductive labor, action disappears in the moment of its occurrence because it leaves no material trace behind. (Halpern, 457)
Politics demands an extraordinary effort. It asks that one expend energy indefinitely for an uncertain reward. Discussion and debate go on and on, only occasionally clicking with spectacular agreement or deflationary compromise. Arendt’s analysis can help us perceive the difficulty of contemporary politics as it attempts to fit into a consumer culture that preserves, and thus remembers, nothing.
Arendt’s attention to the aspects of debate and negotiation that might be seen as unproductive (a dimension that in other parts of The Human Condition she relates to menial work, again often in relation to Smith) offers a corrective to a misguided understanding of politics that leads to frustration and despair. Even if we are not at the extreme of the menial functioning of a New England town hall meeting debating the budget for potholes, or of an Occupy Wall Street discussion that requires unanimous consensus for closure, politics works in a different temporality. Rather than the fever-pitched accusations of crisis that in the U.S. actually cover up rather than encourage political risk, what may be needed is a more humble sense of public debate as requiring something like the patience of the menial task.
Political action in Arendt’s sense differs from work in being freed from a fixed goal. She links this freedom, which for her is based on self-referentiality, to drama:
Arendt’s discomfort with the economic dimension of theater reveals itself when she criticizes Adam Smith for grouping actors, along with churchmen, lawyers, musicians, and others, as unproductive laborers and hence as lowly cousins of the menial servant (HC 207). Arendt would distinguish all of these activities from labor in that they “do not pursue an end . . . and leave no work behind . . . , but exhaust their full meaning in the performance itself ” (206). Smith’s inclusion of these autotelic activities under the category of labor is for Arendt a sign of the degradation that human activity had already undergone by the early days of the modern era. By contrast, “It was precisely these occupations—healing, flute-playing, play-acting—which furnished ancient thinking with examples for the highest and greatest activities of man” (207–21). What Arendt overlooks is that—already in the ancient world—healing, flute playing, and playacting became remunerated professions and differed in this respect from politics, which was not the work of a professional class of politicians. (Halpern 458)
Arendt agrees that actors on the stage perform fleeting scenes, but wishes to link this to “the highest and greatest activities of man,” i.e., those of politics. Halpern argues that, in fact, actors in ancient times already worked for wages and were thus not independent in the way citizens were in their roles as politicians. Nonetheless, Arendt shows us that in the modern period we can learn something about acting in politics from acting in the arts. The key point for Halpern is that drama and its kindred arts are “autotelic activities.” Unlike menial work, they do not even keep up the house; they carry their own end within themselves and evaporate in attaining it. Political action works along an undecidable edge: even less productive than labor, yet at any moment potentially the most lasting. Against the odds, politics holds open the space in which something new can begin and thus renew the human world against the circular forces of nature.
One could reasonably argue that in his focus on the connection between labor and action, Halpern fails to adequately emphasize the importance of work. In a world of labor and the victory of animal laborans, there is no work to preserve action and no polis/world to give action memorialization. Indeed, we face the danger of the collapse of the world into the “waste economy” (HC 134), in which the seductions to action disappear. However, Halpern does not say that play is action for Arendt but rather, as I understand his argument, that there is an aspect of action that is like play. Action requires debate that may seem to be going nowhere, or to be undertaken for its own sake, up to the moment that it takes a risk. When it dares to venture into the public realm, action is clearly very different from play as a hobby.
Labor is both constant and fleeting. On the one hand, the demands of the body never end, nor do the cycles of nature. On the other hand, labor is also fleeting in that its mode of production only temporarily maintains life. Action, too, is fleeting in that the risks it takes often evaporate; yet it attains the utmost political constancy in those actions that succeed in forming the power of a new beginning.
In the remainder of the article, Halpern moves from The Human Condition to Hamlet, arguing that Shakespeare replaces action on the classical model of tragedy with the ceaseless activity of Hamlet’s thoughts. This activity runs in circles like unproductive labor in Smith and labor in Arendt rather than the action of Aristotle’s aesthetic and Arendt’s political ideal. From an Arendtian point of view, the modernity of the drama reveals a challenge to politics, the challenge of a time out of joint that action has to face again and again.
After months in which university after university signed on to the bandwagon for Massive Open Online Courses, or MOOCs, the battle over the future of education has finally begun. This week Duke University pulled out of EdX, the Harvard/MIT-led MOOC consortium.
The reason: Its faculty rebelled. According to The New York Times,
While [Duke provost Peter] Lange saw the consortium as expanding the courses available to Duke students, some faculty members worried that the long-term effect might be for the university to offer fewer courses — and hire fewer professors. Others said there had been inadequate consultation with the faculty.
The Times also reports that faculty at Amherst College, my alma mater and former employer, voted against joining EdX. Again, the faculty saw danger. My former colleagues worried that the introduction of online courses would detrimentally impact the quality and spirit of education at a small liberal arts college. They also, as our friends over at ViaMeadia report, worried that MOOCs would “take student tuition dollars away from so-called middle-tier and lower-tier” schools, pushing their colleagues at these institutions out of their jobs.
And that brings us to ground zero of the battle between the faculty and the MOOCs: San Jose State University. San Jose State has jumped out as a leader in the use of blended online and offline courses. Mohammad H. Qayoumi, the university's president, has defended his embrace of online curricula on both educational and financial grounds. He points to one course, "Circuits & Electronics," offered by EdX. In a pilot program, students in that course did better than students in similar real-world courses taught by San Jose State professors. Where nearly 40% of San Jose students taking their traditional course received a C or lower, only 9% of students taking the EdX course did. For Qayoumi and others, such studies offer compelling grounds for integrating MOOCs into the curriculum. The buzzword is “blended courses,” in which the MOOCs are used in conjunction with faculty tutors. In this “flipped classroom,” the old model in which students listen to lectures in lecture halls and then do assignments at home is replaced by online lectures supplemented by discussions and exercises done in class with professors. As I have written, such a model can be pedagogically powerful, if done right.
But as attractive as MOOCs may be, they carry with them real dangers. And these dangers emerge front and center in the hard-hitting Open Letter that the philosophy department at San Jose State University has published addressed to Michael Sandel. Sandel is the Harvard professor famous for his popular and excellent course “Justice,” which has been wowing and provoking Harvard undergraduates for decades. Sandel not only teaches his course, he has branded it. He sells videos of the course; he has published a book called Justice based on the course; and, most recently, he created an online video version of the course for EdX. San Jose State recently became one of the first public universities in the country to sign a contract paying for the use of EdX courses. This is what led to the letter from the philosophers.
The letter begins by laying out the clear issue. The San Jose Philosophy department has professors who can teach courses in justice and ethics of the kind Sandel teaches. From their point of view, “There is no pedagogical problem in our department that JusticeX solves, nor do we have a shortage of faculty capable of teaching our equivalent course.” In short, while some students may prefer a course with a famous Harvard professor, the faculty at San Jose State believe that they are qualified to teach about Justice.
Given their qualifications, the philosophy professors conclude that the real reason for the contract with EdX is not increased educational value, but simply cost. As they write: "We believe that long-term financial considerations motivate the call for massively open online courses (MOOCs) at public universities such as ours."
In short, the faculty sees the writing on the wall. Whatever boilerplate rhetoric about blended courses and educational benefit may be fashionable and necessary, the real issue is simple. Public universities (and many private ones as well) will not keep paying the salaries of professors when those professors are not needed.
While for now professors are kept on to teach courses in a blended classroom, there will soon be need for many fewer professors. As students take Professor Sandel’s class at universities around the country, they will eventually work with teaching assistants—just as students do at Harvard, where Professor Sandel has pitifully little interaction with his hundreds of students in every class. These teaching assistants make little money, significantly less than a tenured or even a non-tenured professor. It is only a matter of time before many university classes are taught virtually by superstar professors assisted by armies of low-paid onsite assistants. State universities will then be able to educate significantly more students at a fraction of the current cost. For many students this will be a great boon—a certified and possibly quality education at a cheap price. For most California voters, this is a good deal. But it is precisely what the faculty at San Jose State fear. As they write:
We believe the purchasing of online and blended courses is not driven by concerns about pedagogy, but by an effort to restructure the U.S. university system in general, and our own California State University system in particular. If the concern were pedagogically motivated, we would expect faculty to be consulted and to monitor quality control. On the other hand, when change is financially driven and involves a compromise of quality it is done quickly, without consulting faculty or curriculum committees, and behind closed doors. This is essentially what happened with SJSU's contract with edX. At a press conference (April 10, 2013 at SJSU) announcing the signing of the contract with edX, California Lieutenant Governor Gavin Newsom acknowledged as much: "The old education financing model, frankly, is no longer sustainable." This is the crux of the problem. It is time to stop masking the real issue of MOOCs and blended courses behind empty rhetoric about a new generation and a new world. The purchasing of MOOCs and blended courses from outside vendors is the first step toward restructuring the CSU.
The San Jose State philosophy professors are undoubtedly correct. We are facing a systematic transformation of higher education in this country, and of secondary education as well. Just as the Internet has revolutionized journalism and just as it is now shaking the foundations of medicine and law, the Internet will not leave education alone. Change seems nigh. Part of this change is being driven by cost. Some of it is also being driven by the failures and perceived failures of our current system. The question for those of us in the world of higher education is whether we can respond intelligently to save the good and change out the bad. It is time that faculties around the country focus on this question, and for that we should all be thankful to the philosophy professors at San Jose State.
The Open Letter offers three main points to argue that it is bad pedagogy to replace professors with the blended model of MOOCs and teaching assistants.
First, they argue that good teaching requires professors engaged in research. When professors are engaged in active research programs, they are interested in and motivated by their fields. Students can perceive if a professor is bored with a class and students will always learn more and be driven to study and excel by professors who feel that their work matters. Some may wonder what the use of research is that is read by only a few colleagues around the world, but one answer is that such research is necessary to keep professors fresh and sharp. We all know the sad fate of professors who have disengaged from research.
Second, the philosophy professors accept the argument of many including myself that large lectures are not the best way to teach. They teach by the Socratic method, interacting with students. Such classes, they write, are much better than having students watch Professor Sandel engage Socratically with faculty at Harvard. Of course, the MOOC model would still allow for Socratic and personal engagement, just by much lower paid purveyors of the craft. The unanswered question is whether low-paid assistants can be trained to teach well. The answer may well be yes.
Third, the philosophy faculty worry about the exact same moral justice course being taught across the country. We can already see the disciplinary barricades being drawn. It may be one thing to teach Math to the whole country from one or two MOOCs, but philosophy needs multiple perspectives. But how many? The philosophy professors suggest that their highly diverse and often lower-middle-class students have different experiences and references than do Professor Sandel’s Harvard students. They can, in the classroom, better connect with these students than Professor Sandel via online lectures.
The points the San Jose State philosophy professors raise are important. In many ways, however, their letter misses the point. Our educational system is now structured on a few questionable premises. First, that everyone who attends college wants a liberal arts education. That is simply not true. Many students simply want a credential to get a job. If these students can be taught well and more cheaply, we should help them. There is a question of whether we need to offer everyone the same kind of highly personalized and expensive education. While such arguments will be lambasted as elitist, it is nevertheless true that not everyone wants or needs to read Kant closely. We should seek to protect the ability of those who do—no matter their economic class—and also allow those who don’t a more efficient path through school.
A second questionable premise is that specialization is necessary to be a good teacher. This also is false. Too much specialization removes one from the world of common sense. As I have argued before, we need professors who are educated more generally. It is important to learn about Shakespeare and Aristotle, but you don’t need to be a specialist in Shakespeare or Aristotle to teach them well and thoughtfully to undergraduates. This is not an argument against the Ph.D. It is important to study and learn an intellectual tradition if you are going to teach. But it is an argument against the professionalization of the Ph.D. and of graduate education in general. It is also an argument against the dominance of undergraduate curriculum by professionalized scholars.
Third, and perhaps most importantly, is the premise that everyone needs to go to college. If we put a fraction of the resources we currently spend on remedial education for college students back into public high schools in this country, we could begin the process of transforming high school into a serious and meaningful activity. For one thing, we could begin employing Ph.D.s as high school teachers as are many of the emerging early colleges opening around the country.
I am sympathetic to the philosophy professors at San Jose State. I too teach a course on Justice called “The Foundation of Law: The Quest for Justice.” It is a course quite similar and yet meaningfully different from Michael Sandel’s course on Justice. I believe it is better, no offense meant. And I would be upset if I were told next year that instead of teaching my course I would be in effect a glorified TA for Professor Sandel. I hope it doesn’t come to that, but I know it might.
The only response for those whose jobs are being replaced by computers or the Internet is to go out and figure out how to do it better. That is what happened to journalists who were fired in droves. Many quit voluntarily and began developing new models of journalism, including blogs that have enriched our public discourse and largely rejuvenated public journalism in this country. Blogs, of course, are not perfect, and there is the question of how to make a living writing one. But enterprising bloggers like Andrew Sullivan and Walter Russell Mead are figuring that out. So too are professors like Michael Sandel and Andrew Ng.
We need educators to become experimental these days, to create small schools and intensive curricula within larger institutions that make the most of the personal interaction that is the core of true pedagogy. If that happens, and if teachers offer meaningful education for which students or our taxpayers will pay, then our jobs will be safe. And our students will be better for it. For this reason, we should welcome the technology as a push to make ourselves better teachers.
The Open Letter to Michael Sandel deserves a response. I hope Professor Sandel offers one. Until then, I recommend that this beautiful spring weekend you read the letter from the San Jose State Philosophy Department. It is your weekend read.
In an essay in the Wall Street Journal, Frans de Waal—C. H. Candler Professor of Primate Behavior at Emory University—offers a fascinating review of recent scientific studies that upend long-held expectations about the intelligence of animals. De Waal rehearses a catalogue of fantastic studies in which animals do things that scientists have long thought they could not do. Here are a few examples:
Ayumu, a male chimpanzee, excels at memory; just as the IBM computer Watson can beat human champions at Jeopardy, Ayumu can easily best the human memory champion in games of memory.
Similarly, Kandula, a young elephant bull, was able to reach some fragrant fruit hung out of reach by moving a stool over to the tree, standing on it, and reaching for the fruit with his trunk. I’ll admit this doesn’t seem like much of a feat to me, but for the researchers de Waal talks with, it is surprising proof that elephants can use tools.
Scientists may be surprised that animals can remember things or use tools to accomplish tasks, but anyone raised on children’s tales of Lassie or Black Beauty knows this well, as does anyone whose pet dog has opened a door, brought them a newspaper, or barked at intruders. The problem these studies address is less our societal view of animals than the overly reductive view of animals that de Waal attributes to his fellow scientists. It’s hard to take these studies seriously as evidence that animals think in the way that humans do.
Seemingly more interesting are experiments with self-recognition and also facial recognition. De Waal describes one Asian elephant who stood in front of a mirror and “repeatedly rubbed a white cross on her forehead.” Apparently the elephant recognized the image in the mirror as herself. In another experiment, chimpanzees were able to recognize which pictures of chimpanzees were from their own species. Like my childhood Labrador, who used to stare knowingly into the mirror, the animals in these studies are able to recognize themselves. This suggests that animals likely understand that they are selves.
For de Waal, these studies have started to upend a view of humankind's unique place in the universe that dates back at least to ancient Greece. “Science,” he writes, “keeps chipping away at the wall that separates us from the other animals. We have moved from viewing animals as instinct-driven stimulus-response machines to seeing them as sophisticated decision makers.”
The flattening of the distinction between animals and humans is to be celebrated, De Waal argues, and not feared. He writes:
Aristotle's ladder of nature is not just being flattened; it is being transformed into a bush with many branches. This is no insult to human superiority. It is long-overdue recognition that intelligent life is not something for us to seek in the outer reaches of space but is abundant right here on earth, under our noses.
De Waal has long championed the intelligence of animals, and now his vision is gaining momentum. This week, in a long essay called “One of Us” in the new Lapham’s Quarterly on animals, the glorious essayist John Jeremiah Sullivan begins with a description of studies similar to the ones de Waal writes about:
These are stimulating times for anyone interested in questions of animal consciousness. On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly. New animal behaviors and capacities are observed in the wild, often involving tool use—or at least object manipulation—the very kinds of activity that led the distinguished zoologist Donald R. Griffin to found the field of cognitive ethology (animal thinking) in 1978: octopuses piling stones in front of their hideyholes, to name one recent example; or dolphins fitting marine sponges to their beaks in order to dig for food on the seabed; or wasps using small stones to smooth the sand around their egg chambers, concealing them from predators. At the same time neurobiologists have been finding that the physical structures in our own brains most commonly held responsible for consciousness are not as rare in the animal kingdom as had been assumed. Indeed they are common. All of this work and discovery appeared to reach a kind of crescendo last summer, when an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
With nuance and subtlety, Sullivan understands that our tradition has not drawn the boundary between human and animal nearly as securely as de Waal portrays it. Throughout human existence, humans and animals have been conjoined in the human imagination. Sullivan writes that the most consistent “motif in the artwork made between four thousand and forty thousand years ago,” is the focus on “animal-human hybrids, drawings and carvings and statuettes showing part man or woman and part something else—lion or bird or bear.” In these paintings and sculptures, our ancestors gave form to a basic intuition: “Animals knew things, possessed their forms of wisdom.”
Religious history too is replete with evidence of the human recognition of the dignity of animals. God says in Isaiah that the beasts will honor him and St. Francis, the namesake of the new Pope, is famous for preaching to birds. What is more, we are told that God cares about the deaths of animals.
In the Gospel According to Matthew we’re told, “Not one of them will fall to the ground apart from your Father.” Think about that. If the bird dies on the branch, and the bird has no immortal soul, and is from that moment only inanimate matter, already basically dust, how can it be “with” God as it’s falling? And not in some abstract all-of-creation sense but in the very way that we are with Him, the explicit point of the verse: the line right before it is “fear not them which kill the body, but are not able to kill the soul.” If sparrows lack souls, if the logos liveth not in them, Jesus isn’t making any sense in Matthew 10:28-29.
What changed and interrupted the ancient and deeply human appreciation of our kinship with besouled animals? Sullivan’s answer is René Descartes. The modern depreciation of animals, Sullivan writes,
proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata—windup toys, like the Renaissance protorobots he’d seen as a boy in the gardens at Saint-Germain-en-Laye, “hydraulic statues” that moved and made music and even appeared to speak as they sprinkled the plants.
Too easy, however, is the move to say that the modern comprehension of the difference between animal and human proceeds from a mechanistic view of animals. We live at a time of the animal rights movement. Around the world, societies exist and thrive whose mission is to prevent cruelty toward and to protect animals. Yes, factory farms treat chickens and pigs as organic mechanisms for the production of meat, but these farms co-exist with active and quite successful movements calling for humane standards in food production. Whatever the power of Cartesian mechanics, its success is at odds with the persistence of the ancient religious solidarity, and the deeply modern sympathy, between human and animal.
A more meaningful account of the modern attitude towards animals might be found in Spinoza. Spinoza, as Sullivan quotes him, recognizes that animals feel in ways that Descartes did not. As do animal rights activists, Spinoza admits what is obvious: that animals feel pain, show emotion, and have desires. And yet, Spinoza maintains a distinction between human and animal—one grounded not in emotion or feeling, but in human nature. In his Ethics, he writes:
Hence it follows that the emotions of the animals which are called irrational…only differ from man’s emotions to the extent that brute nature differs from human nature. Horse and man are alike carried away by the desire of procreation, but the desire of the former is equine, the desire of the latter is human…Thus, although each individual lives content and rejoices in that nature belonging to him wherein he has his being, yet the life, wherein each is content and rejoices, is nothing else but the idea, or soul, of the said individual…It follows from the foregoing proposition that there is no small difference between the joy which actuates, say, a drunkard, and the joy possessed by a philosopher.
Spinoza argues against the law prohibiting slaughter of animals—it is “founded rather on vain superstition and womanish pity than on sound reason”—because humans are more powerful than animals. Here is how he defends the slaughter of animals:
The rational quest of what is useful to us further teaches us the necessity of associating ourselves with our fellow men, but not with beasts, or things, whose nature is different from our own; we have the same rights in respect to them as they have in respect to us. Nay, as everyone’s right is defined by his virtue, or power, men have far greater rights over beasts than beasts have over men. Still I do not deny that beasts feel: what I deny is that we may not consult our own advantage and use them as we please, treating them in the way which best suits us; for their nature is not like ours.
Spinoza’s point is quite simple: Of course animals feel and of course they are intelligent. Who could doubt such a thing? But they are not human. That is clear too. While we humans may care for and even love our pets, we recognize the difference between a dog and a human. And we will, in the end, associate more with our fellow humans than with dogs and porpoises. Finally, we humans will use animals when they serve our purposes. And this is acceptable, because we have the power to do so.
Is Spinoza arguing that might makes right? Surely not in the realm of law amongst fellow humans. But he is insisting that we recognize that for us humans, there is something about being human that is different and, even, higher and more important. Spinoza couches his argument in the language of natural right, but what he is saying is that we must recognize that there are important differences between animals and humans.
At a time that values equality over what Friedrich Nietzsche called the “pathos of difference,” the valuation of human beings over animals is ever more in doubt. This comes home clearly in a story told recently by General Stanley McChrystal, about a soldier who expressed sympathy for some dogs killed in a raid in Iraq. McChrystal responded severely: “Seven enemy were killed on that target last night. Seven humans. Are you telling me you’re more concerned about the dog than the people that died?” The car fell silent again. “Hey listen,” he said. “Don’t lose your humanity in this thing.” Many, no doubt, are more concerned, or at least are equally concerned, about the deaths of animals as they are about the deaths of humans. There is ever-increasing discomfort with McChrystal’s commonsense affirmation of Spinoza’s claim that human beings simply are of more worth than animals.
The distinctions upon which the moral sense of human distinction is based are foundering. For de Waal and Sullivan, the danger today is that we continue to insist on differences between animals and humans—differences that we don’t fully understand. The consequence of their openness to the humanization of animals, however, is undoubtedly the animalization of humans. The danger that we humans lose sight of what distinguishes us from animals is much more significant than the possibility that we underestimate animal intelligence.
I fully agree with de Waal and Sullivan that there is a symphony of intelligence in the world, much of it not human. And yes, we should have proper respect for our ignorance. But all the experiments in the world do little to alter the basic fact that, no matter how intelligent and feeling and even conscious animals may be, humans and animals are different.
What is the quality of that difference? It is difficult to say and may never be fully articulated in propositional form. On one level it is this: Simply to live, as do plants or animals, does not constitute a human life. In other words, human life is not simply about living. Nor is it about doing tasks or even being conscious of ourselves as humans. It is about living meaningfully. There may, of course, be some animals that can create worlds of meaning—worlds that we have not yet discovered. But their worlds are not the worlds to which we humans aspire.
Over two millennia ago, Sophocles, in his “Ode to Man,” named man Deinon, a Greek word that connotes both greatness and horror, that which is so extraordinary as to be at once terrifying and beautiful. Man, Sophocles tells us, can travel over water and tame animals, using them to plough fields. He can invent speech, and institute governments that bring humans together to form lasting institutions. As an inventor and maker of his world, this wonder that is man terrifyingly carries the seeds of his destruction. As he invents and comes to control his world, he threatens to extinguish the mystery of his existence, that part of man that man himself does not control. As the chorus sings: “Always overcoming all ways, man loses his way and comes to nothing.” If man so tames the earth as to free himself from uncertainty, what then is left of human being?
Sophocles knew that man could be a terror; but he also glorified the wonder that man is. He knew that what separates us humans from animals is our capacity to alter the earth and our natural environment. “The human artifice of the world,” Arendt writes, “separates human existence from all mere animal environment…” Not only by building houses and erecting dams—animals can do those things and more—but also by telling stories and building political communities that give to man a humanly created world in which he lives. If all we did as humans was live or build things on earth, we would not be human.
To be human means that we can destroy all living matter on the Earth. We can even today destroy the earth itself. Whether we do so or not, it now means that to live on Earth today is a choice that we make, not a matter of fate or chance. Our Earth, although we did not create it, is now something we humans can decide to sustain or destroy. In this sense, it is a human creation. No other animal has such a potential or such a responsibility.
There is a deep desire today to flee from that awesome and increasingly unbearable human responsibility. We flee, therefore, our humanity and take solace in the view that we are just one amongst the many animals in the world. We see this reductionism above all in human rights discourse. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of a politics of life—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the meaningful life, human rights rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Hannah Arendt saw human rights as dangerous precisely because they risked confusing the meaning of human worldliness with the existence of mere animal life. For Arendt, human beings are the beings who build and live in a political world, by which she means the stories, institutions, and achievements that mark the glory and agony of humanity. To be human, she insists, is more than simply living, laboring, working, acting, and thinking. It is to do all of these activities in such a way as to create, together, a common life amongst a plurality of persons.
I fear that the interest in animal consciousness today is less a result of scientific proof that animals are human than it is an increasing discomfort with the world we humans have built. A first step in responding to such discomfort, however, is a reaffirmation of our humanity and our human responsibility. There is no better way to begin that process than in engaging with a very human response to the question of our animality. Towards that end, I commend to you “One of Us,” by John Jeremiah Sullivan.
“Arendt on Narrative Theory and Practice”
Allen Speight, College Literature, Volume 38, Number 1, Winter 2011, pp. 115-130
Allen Speight, Director of the Institute for Philosophy and Religion at Boston University, argues for Arendt’s place among theorists of narrative such as Alasdair MacIntyre, Charles Taylor, and Paul Ricoeur. While he does indicate contemporary questions in both the Anglo-American and continental traditions throughout the article, he delivers particularly rich insights into Arendt’s engagement with three canonical thinkers. Specifically, he highlights aspects of Arendt’s use of conceptions of narration in developing her ideas of action in The Human Condition. In each aspect, he sees Arendt drawing on a specific philosophical precursor—Aristotle, Hegel, and Augustine in turn—but also diverging from them.
In relation to Aristotle, Speight focuses on how action reveals the “who,” how the actor emerges not from his intention but from his impact on the world. As does Aristotle, Arendt places a strong focus on drama. Aristotle and Arendt both hold that “dramatic actions” allow us to “construe what sort of a character an agent has.” However, rather than focusing on the reception of the audience, Arendt links the spectator to the actor. Indeed, expanding from Speight’s interpretation, we might say Arendt opens another center in the actor himself with her idea of the daimon, who watches over one’s shoulder.
From Hegel, Speight sees Arendt picking up on the tragic nature of action and how this leads to a need for forgiveness. The agent will not get what he wants and indeed will often perish due to effects that he cannot foresee. Speight makes a striking link to Hegel here:
“A stone thrown is the devil’s,” Hegel liked to say: action by its nature is not something construable in given terms but is a kind of “stepping-forth” or opening up of the unexpected and unpredictable (Elements of the Philosophy of Right). The classic, tragic examples of action in its openness—Antigone’s deed, for example, which both Hegel and Arendt were drawn to—present in an intensified way what is an underlying condition within ordinary action, one requiring the need for some means of reconciliation.
With the line “A stone thrown is the devil’s,” Hegel lets the personified evil step in as a kind of holding place that opens the question of how the effect of action will change the actor. Unlike Hegel though, the ultimate judge is not institutionalized world history, but the world as the space in which the who is revealed.
Stepping back chronologically, Speight then turns to Augustine as a source of Arendt’s idea of narrative rebirth. Here he picks up on an existentialist debate through Sartre: given that one’s account of one’s life can change it fundamentally, do we have a responsibility to an authentic narration? To what extent are we free when we tell our own stories? Arendt rejects the possibility that a life can simply be “made” in narrative. However:
for Arendt the distinction between a life that is “lived” and a story that is “made” involves two distinctly non-Sartrean consequences. The first we have already seen in her “daimõn thesis”: that precisely because we live rather than make a life, there is a privileged—but (pace Sartre) a not necessarily false—retrospective position from which we must view the “who,” the daimõn, that is revealed in our lives. Thus, as we have seen, the “who” is visible “ex post facto through action and speech” (Arendt 1958, 186) and this retrospectivity in turn privileges the work of the discerning interpretive historian or storyteller. (121)
I find Speight’s repeated discussion of the daimon particularly relevant, since it offers an original way to talk about the belatedness of knowledge, of how it can come later, or even from the side, without privileging an end position as Hegel does.
In the second half of his article, Speight offers a reading of Men in Dark Times that illustrates how Arendt uses these three aspects of her narrative theory in her own practice of narration. His readings of the sections on Jaspers and Waldemar Gurian explicitly link the question of the daimon, biography, and how a person comes to appearance in the public realm. Readers following the growing subsection of Arendt scholarship engaged with Arendt’s literary dimension will find an original effort here that offers a model for future work connecting Arendt’s theoretical articulations with her writing practice.
“The wonder that man endures or which befalls him cannot be related in words because it is too general for words….That this speechless wonder is the beginning of philosophy became axiomatic for both Plato and Aristotle.”
-Hannah Arendt, "Philosophy and Politics"
Aristotle had told us that philosophy begins in thaumázein (θαυμάζειν), “to wonder, marvel, be astonished.” In the New Testament, the word appears only twice. In the parallel occurrences (Matthew 27:14 and Mark 15:5), Pilate marvels at the fact that Jesus says nothing. What is significant is that thaumázein is associated there with an experience for which there were no words. The word names a kind of initial wordless astonishment at what is, at the fact that it is. For Aristotle, thaumázein is the beginning of philosophy as wonder. It is not, therefore, for the Greeks the beginning of political philosophy.
Key here is the fact of speechlessness. This wonder “cannot be related in words because it is too general for words.” Arendt suggests that Plato encountered it in those moments in which Socrates, “as though seized by a rapture, [fell] into complete motionlessness, just staring without seeing or hearing anything.” It follows that “ultimate truth is beyond words.” Nevertheless, humans want to talk about that which cannot be spoken. “As soon as the speechless state of wonder translates itself into words, it … will formulate in unending variations what we call the ultimate questions.” These questions – What is being? Who is the human being? What is the meaning of life? What is death? – and so forth “have in common that they cannot be answered scientifically.” Thus Socrates’ “I know that I do not know” is actually an expression that opens the door to the political, public realm, in the recognition that nothing that can be said there can ever have the quality of being final.
According to Arendt, Socrates has three distinct aspects. First, he arouses citizens from their slumber – this is the gadfly who gets others to think, to think about those topics for which there is no final answer. Secondly, as “midwife” he decides – he makes evident – whether an opinion is fit to live or is merely an unimpregnated “wind-egg” (cf. Theaetetus 152a; 157d; 161a): Greek midwives not only assisted in the delivery but determined if the new-born was healthy enough to live. Socrates concludes his discussion in the Theaetetus (210b) by saying all they have done is to produce a mere wind-egg and that he must leave as he has to get to the courthouse for his trial. Lastly, as stinging ray, Socrates paralyzes in two ways. He makes you stop and think; he destroys the certainty one has of received opinions. Arendt is clear that this can be dangerous. She goes on to say that “thinking is … dangerous to all creeds and, by itself, does not bring forth any new creed,” but she is equally clear that “non-thinking … has its dangers [which are] the possession of rules under which to subsume particulars.” To think is dangerous: but to think is to desire wisdom, what is not there. It is thus a longing; it is eros and, as with all things erotic, “to bring this relationship into the open, make it appear, men speak about it in the same way that the lover wants to speak of his beloved.” Where does this leave one? For the most part, in normal times, thinking is not of political use. It is, however, of use in times when the “center does not hold,” in times of crisis.
At these moments, thinking ceases to be a marginal affair in political matters. When everybody is swept away unthinkingly by whatever everyone else does and believes in, those who think are drawn out of hiding because their refusal to join is conscious and thereby becomes a kind of action. The purging element … is political by implication. For this destruction has a liberating effect on another human faculty, the faculty of judgment, … the faculty to judge particulars without subsuming them under those general rules which can be taught and learned until they grow into habits.
Suppose we read Arendt as saying that political philosophy must now turn and wonder – thaumázein – not at the fact that what is, is, but at human reality, at the world of human activity. This would involve a change in philosophy – for which she says philosophers are not particularly well equipped. She thinks such a turn would rest on and derive from several elements – she mentions in particular Jaspers’ reformulation of truth as transcending the realm that can be instrumentally controlled, thus related to freedom; Heidegger’s analysis of ordinary everyday life; and existentialism’s insistence on action. It will be an inquiry into the “political significance of thought; that is into the meaningfulness and the conditions of thinking for a being that never exists in the singular and whose essential plurality is far from explored when an I-Thou relationship is added to the traditional understanding of human nature.”
What is problematic with purely philosophical thaumázein? The Thracian maid who appears in the title of Jacques Taminiaux’s book, and who stands for Arendt in his analysis, derives from an account in the Theaetetus. Upon encountering Thales who, absorbed in his wondering, had fallen into a well, the maid notes that the philosopher had “failed to see what was in front of him.” Mary-Jane Robinson notes four elements to Arendt’s suspicion of excessive wonder, a suspicion one assumes was directed at Heidegger. First, such wonder allows avoidance of the messiness of the everyday world; secondly, such “uncritical openness” leads philosophers to be “swept away by dictators.” Thirdly, such wonder alienates the philosopher (as with Heidegger post-1945) from the world around him; and lastly, such openness to the mystery of the world “disables decision making.”
If politics is the realm of how humans appear to each other when they act and speak, whence does it come? The only possible answer is that politics is an emergence from a realm which is neither that of action nor that of speech. The political emerges from nothingness. Perhaps this is the realm to which poetry can call us – and some of Arendt’s most moving essays are on poetry and literature – but such a realm is not political. In this sense there is a limit to political science, as there is to all science. For Arendt, there are no underlying causes out of which that which is political must emerge. This is why political action is always for her a beginning and a marvel for which we have to try to find words.
Freeman Dyson, the eclectic physicist, took good aim at philosophy last week in a review of Jim Holt's silly book, Why Does the World Exist? An Existential Detective Story. Holt went around to "a portrait gallery of leading modern philosophers" and asked them the Leibnizian question: "Why is there something rather than nothing?" The book offers their answers, along with biographical descriptions.
For Dyson, Holt's book "compels us to ask" these "ugly questions." First, "When and why did philosophy lose its bite?" Philosophers were once important. In China, Confucius and his followers made a civilization. So too in Greece did Socrates and then the schools of Plato and Aristotle give birth to the western world. In the Christian era Jesus and Paul, then Augustine and Aquinas, granted depth to dominant worldviews. Philosophers like Descartes, Hobbes, and Leibniz were central figures in the scientific revolution, and philosophical minds like Nietzsche, Heidegger, and Arendt (even if one was a philologist and the other two refused the name philosopher) have become central figures in the experience of nihilism. Against these towering figures, the "leading philosophers" in Holt's book cut paltry figures. Here is Dyson:
Holt's philosophers belong to the twentieth and twenty-first centuries. Compared with the giants of the past, they are a sorry bunch of dwarfs. They are thinking deep thoughts and giving scholarly lectures to academic audiences, but hardly anybody in the world outside is listening. They are historically insignificant. At some time toward the end of the nineteenth century, philosophers faded from public life. Like the snark in Lewis Carroll's poem, they suddenly and silently vanished. So far as the general public was concerned, philosophers became invisible.
There are many reasons for the death of philosophy, some of which were behind Hannah Arendt's refusal to call herself a philosopher. Philosophy was born, at least in its Platonic variety, out of the thinker's reaction to the death of Socrates. Confronted with the polis that put the thinker to death, Plato and Aristotle responded by retreating from the world into the realm of ideas. Philosophical truth separated itself from worldly truths, and idealism was born. Realism was less a return to the world than a reactive fantasy against idealism. In both, the truths that were sought were otherworldly truths, disconnected from the world.
Christianity furthered the divorce of philosophy from the world by imagining two distinct realms, the higher realm existing beyond the world. Science, too, taught that truth could only be found in a world of abstract reason, divorced from real things. Christianity and science together gave substance to the philosophical rebellion against the world. The result, as Dyson rightly notes, is that philosophy today is as abstract, unworldly, and irrelevant as it is profound.
What Dyson doesn't explore is why philosophers of the past had such importance, even as they also thought about worlds of ideas. The answer cannot be that ideas had more import in the past than now. On the contrary, we live in an age more saturated in ideas than any other. More people today are college educated, literate, and knowledgeable of philosophy than at any period in the history of the world. Books like Holt's are proof positive of the profitable industry of philosophical trinkets. That is the paradox—at a time when philosophy is read by more people than ever, it is less impactful than it ever was.
One explanation for this paradox is nihilism – the devaluing or re-valuing of the highest values. The truth about truth turned out to be neither so simple nor so singular as the philosophers had hoped. An attentive inquiry into the true and the good led not to certainty, but to ideology critique. For Nietzsche, truth, like the Christian God, was a human creation, and the first truth of our age is that we have recognized it as such. That is the precondition for the death of God and the death of truth. Nihilism has not expunged ideas from our world, but multiplied them. When speaking about the "true" or the "good" or the "just," Christians, Platonists, and moralists no longer have the stage to themselves. They must now shout to be heard amongst the public relations managers, advertisers, immoralists, epicureans, anarchists, and born again Christians.
Dyson ignores this strain of philosophy. He does point out that Nietzsche was the last great philosopher, but then dismisses Heidegger, who "lost his credibility in 1933," and even Wittgenstein, who, if a woman attended his lectures, would remain silent until she left. And yet it is Heidegger who has given us the great literary masterpieces of twentieth-century philosophy.
His work on technology ("The Question Concerning Technology") and art ("The Origin of the Work of Art") has been widely read in artistic, literary, and lay circles. It is hard to imagine a philosopher more engaged with science and literature than Heidegger was. He read physics widely, co-taught courses at the house of the Swiss psychiatrist Medard Boss, and also taught seminars with the German novelist Ernst Jünger.
It seems worthwhile to end with a poem of Heidegger's from his little book, Aus der Erfahrung des Denkens/From Out of the Experience of Thinking:
Drei Gefahren drohen dem Denken
Die gute und darum heilsame Gefahr ist die Nachbarschaft des singenden Dichters.
Die böse und darum schärfste Gefahr ist das Denken selber. Es muß gegen sich selbst denken, was es nur selten vermag.
Die schlechte und darum wirre Gefahr ist das Philosophieren.
Three dangers threaten thinking.
The good and thus wholesome danger is the nearness of the singing poet.
The evil and thus sharpest danger is thinking itself. It must think against itself, something it can do only rarely.
The bad and thus confusing danger is philosophizing.