No government exclusively based on the means of violence has ever existed. Even the totalitarian ruler, whose chief instrument of rule is torture, needs a power basis—the secret police and its net of informers. Only the development of robot soldiers, which, as previously mentioned, would eliminate the human factor completely and, conceivably, permit one man with a push button to destroy whomever he pleased, could change this fundamental ascendancy of power over violence.
—Hannah Arendt, On Violence
Hannah Arendt wrote these lines in the midst of the United States’ defeat in Vietnam. Her argument was that as long as robot soldiers were a thing of the future, brute violence and force like that unleashed by the United States would always succumb to collective power, of the kind exhibited by the Vietcong. Hers was, at least in part, a hopeful voice, praising the impotence of violence in the face of power.
To read Arendt’s lines today, amidst the rise of drone warfare, alters the valence of her remarks. Drones are increasingly prototypes and even embodiments of the “robot soldiers” that Arendt worried would dehumanize war and elevate violence over power. If we draw out the consequences from Arendt’s logic, then drone soldiers might displace the traditional limits that politics places on violence; drones, in other words, make possible unprecedented levels of unlimited violence.
The rise of drones matters, Arendt suggests, in ways that are not currently appreciated. Her worry has little to do with assassination, the concern of most opponents of drones today. Nor is she specifically concerned with surveillance. Instead, against those, like General Stanley McChrystal, who argue that drones are simply new tools in an old activity of war, Arendt’s warning is that drones and robot soldiers may change the very dynamic of war and politics.
To see how drones change the calculus of violence in politics, we need to understand Arendt’s thesis about the traditional political superiority of power over violence. The priority of power over violence is based on the idea that power is “inherent in the very existence of political communities.” Power, Arendt writes, “corresponds to the human ability not just to act, but to act in concert.” It “springs up whenever people get together and act in concert.” All government, and this is central to Arendt’s thesis, needs power in order to act.
This need for popular support is true even for totalitarian governments, which also depend on the power of people—at least a select group of them, like the secret police and their informers—continuing to act together. It is thus a myth that totalitarian rule can exist without the support of the people. Whether in Nazi Germany or contemporary Syria, totalitarian or tyrannical governments are still predicated on power that comes from the support of key segments of the population.
Even if all government is predicated on some power, governments also employ violence—but that violence is held in check by political limits. As a government loses its popular support, it finds itself tempted to “substitute violence for power.” The problem is that when governments give in to the temptation to use violence to shore up their slackening popular power, that violence further diminishes their power and results in impotence. The more violence a government needs to rely upon, the less power it has at its disposal. There is thus a political limit on how much violence any government can employ before it brings about the loss of its own power.
As much as she respects the claims for power over violence, Arendt is clear-eyed about the damage violence can inflict. In a direct confrontation between power and violence, violence will win—at least in the short term. Arendt writes that if Gandhi’s “enormously powerful and successful strategy of nonviolent resistance” had met a different enemy—a Stalin or a Bashar al-Assad instead of a Churchill or a Mubarak—“the outcome would not have been decolonization, but massacre and submission.” Sheer violence can bring victory. But the price for such a triumph is high, not only for the losers, but also for the victors.
We see this exemplified in the Middle East over the last few years. In countries like Bahrain and Syria, where governments did not shy from unlimited violence to repress popular revolts, the governments have maintained themselves and the Arab Spring has turned into a long and frigid winter. Assad has been able to maintain power, but his power is irreparably diminished. In the end, there is a limit to the viability and effectiveness of relying on mere violence at the expense of power. This is even more true in a constitutional democracy, where the support of the people is a political necessity.
As confident as Arendt is that violence is limited in politics by the need for power, she worries that the coming age of “robot soldiers” might bring about the end of the political advantage power has over violence. Robot soldiers can be controlled absent consent or political support. With the push of a button or a simple command, a tyrant or totalitarian ruler can exert nearly unlimited violence and destruction, even without the support of a massive secret police or a network of informers. Drones threaten the time-immemorial dependence of even the most lonely tyrant on others who will support him and do his bidding.
Of course drones must be built, programmed, and maintained. No tyrant is fully autonomous. Yet building, programming, and maintaining machinery are fundamentally different jobs than arresting and killing dissenters. It is far easier for programmers and electricians to justify doing their jobs in a powerless yet violent state than for soldiers and secret agents to justify theirs.
In a drone-led war, men will rarely need to go into action as soldiers. That is of course one reputed advantage of drones, that they make war less dangerous and more technically predictable. But it also means that even as modern warfare becomes safer and more humane, it excludes human soldiers and risks stripping war of its human and active character. This helps to explain an enigmatic passage of Arendt’s in The Human Condition, where she offers modern war as an example of when action “loses its specific character” as human action and “becomes one form of achievement among others.” The degradation of human action in modern war, she writes,
happens whenever human togetherness is lost, that is, when people are only for or against other people, as for instance in modern warfare, where men go into action and use means of violence in order to achieve certain objectives for their own side against the enemy. In these instances, which of course have always existed, speech becomes indeed ‘mere talk,’ simply one more means toward the end….
Arendt is here thinking of the anonymity of the modern soldier epitomized by the monuments to the unknown soldiers—the mute mass of humanity who fight and die without the “still existing need for glorification” that makes war a human instead of a merely mechanical activity.
For Arendt, modern warfare, in its inhumanity and technological capacity, abandons the togetherness that has traditionally made war a prime example of human political action in concert.
In the technological advances of modern warfare that made war so awful and so mechanical, Arendt actually found a glimmer of hope: that war’s rabid violence was compensated by neither political advantage nor personal glory. In On Revolution, she dared hope that the fact that technology had reached the stage “where the means of destruction were such as to exclude their rational use” might lead to a “disappearance of war from the scene of politics….” It was possible, she thought, that the threat of total war and total destruction that accompanies war in the modern era might actually lead to the disappearance of war.
Clearly such a hope has not come to pass. One reason for the continuation of war, however, is that the horrors of war are made ever more palatable and silent—at least to the victors—by the use of technology that exerts violence without the need for political power and participation. The drone wars of the early 21st century are in this respect notable for the unprecedented silence that accompanies violence. Since U.S. soldiers are rarely injured or killed and since the strikes are classified and the damage remote, we have indeed entered an era where we can fight wars absent the speech, glory, and “human togetherness” that has traditionally marked both the comradeship of soldiers and the patriotic sacrifice of a nation at war. It is in this extraordinary capacity of mute violence to substitute for power that we can glimpse both the promise and the peril of drones.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Hannah Arendt Center Senior Fellow Wyatt Mason explores the wild and wonderful world of super-artist Kehinde Wiley. "Wiley, as some of you may know, is an American artist, an unusually successful one. In the decade of his career to date, he's become one of the most sought-after painters in America. Holland Cotter, of The New York Times, called Wiley 'a history painter, one of the best we have.... He creates history as much as he tells it.' Even if you don't know him by name, you've likely glimpsed his grand portraits of hip-hop artists: LL, Ice-T, Biggie. Maybe you've even seen his massive portrait of the King of Pop: the one of MJ in full armor, astride a prancing warhorse. If all this suggests that Wiley, a 36-year-old gay African-American man, is court painter to the black celebretariat, that misconception has been useful to promoting his brand, up to a point."
Mason is skeptical, but if you don't know the Wiley brand, the route through Wiley's world of surfaces is about as fine a reflection as you'll find of the challenges facing the artist in a consumer society.
Zainab Al-Khawaja is sitting in a Bahraini prison reading Martin Luther King Jr. Al-Khawaja is a political prisoner. She is in a cell with 14 others, some murderers. To maintain her dignity and to announce her difference from common criminals, she has refused to wear an orange prison jumpsuit. As a punishment, she is denied family visits, including by her baby. She is now on hunger strike. "Prison administrators ask me why I am on hunger strike. I reply, 'Because I want to see my baby.' They respond, nonchalantly, 'Obey and you will see her.' But if I obey, my little Jude will not in fact be seeing her mother, but rather a broken version of her. I wrote to the prison administration that I refuse to wear the convicts' uniform because 'no moral man can patiently adjust to injustice' (Thoreau)." Al-Khawaja's thoughts on dignity and non-violence are more than worthy testaments to her mentor.
Sara Horowitz takes on the "micro-gig," a new kind of freelancing that allows people to employ others for small tasks, like delivering or assembling IKEA furniture. Horowitz, however, worries about what "micro-gigging" might mean for workers: "It's as if we're eliminating the 'extraneous' parts of a worker's day—like lunch or bathroom breaks—and paying only for the minutes someone is actually in front of the computer or engaged in a task." Welcome to our piece-work future.
Chloe Pantazi considers the work of the photographer Chim, also known as David Seymour, on the occasion of a showing of his work at the International Center of Photography. Pantazi focuses in particular on Chim's photos of children, saying that as he "offers up the everyday lives of such adults working within the industry of war (as soldiers, munitions workers), we trust that Chim's postwar photographs of children yield something close to their everyday, as vulnerable innocents who—like the newborn seen suckling at its mother's breast in a photograph taken of the crowd at a land reform meeting at the brink of the Civil War, in Spain, 1936—were virtually reared on the conflicts of their time."
Lucy McKeon explores the Russian poet Kirill Medvedev, who has renounced the copyright to all of his works. McKeon recounts Medvedev's rebellion against the bourgeois idea of the artist as private citizen, a type idealized by Joseph Brodsky in his 1987 Nobel Prize address. Medvedev is searching for a post-individualized and post-socialist culture, what he calls a new humanism. "Logically, Medvedev's answer to individualized disconnectedness calls for a synthesis of twentieth-century leftist political and intellectual thought, a situation where several senses of the word 'humanism' begin to collide. Where something from poetry meets something from philosophy; where postmodernism, logocentrism, psychology, culture and counterculture, 'and probably something else, too, that we haven't thought of yet,' writes Medvedev, join to form 'a new shared understanding of humanity.' Only in this utopian future society could the artist as private citizen responsibly exist and create."
Music in the Holocaust: Jewish Identity and Cosmopolitanism
Part II: Music of Warsaw, Łódź and other Eastern Ghettos
Learn more here.
Roger Berkowitz lauds the idea of early college. Jeffrey Jurgens considers Jeremy Walton's recent article "Confessional Pluralism and the Civil Society Effect." Cristiana Grigore responds to the recent New York Times article, "The Kings of Roma" by describing her own Roma upbringing in Romania. Kathleen B. Jones takes on New Materialism from an Arendtian point of view.
For too long now high school has been a waste of time for too many people. I always remind my students that Georg Wilhelm Friedrich Hegel developed his lectures on the Philosophy of Right as a course for a German Gymnasium, the equivalent of high school in the United States. Most American high schools have long abandoned the idea of offering challenging courses that demand students think and engage with the world and the history of ideas. Our brightest students are too often bored, confirmed in their intelligence, but rarely pushed. This is especially true of our public high schools in our poorest neighborhoods.
One of the most heartening trends in response to this tragedy is the idea of early college. Bard College has been a leader in the early college movement, now embraced by the Bill and Melinda Gates Foundation and others.
The New York Times has an excellent article on Bard’s newest Early College in Newark:
Across the country in communities like Newark, the early college high school model is being lauded as a way to provide low-income students with a road map to and through college. According to the most recent figures from the National Center for Education Statistics, 68 percent of all high school graduates make it to a two- or four-year institution, but only 52 percent of low-income students do the same. Of poor students in four-year institutions, only 47 percent graduate within six years, compared with 58 percent of the general population.
Not surprisingly, the challenges are greatest for students whose parents did not attend any college: their graduation rate hovers around 40 percent. Early college high schools seek to rectify that, by merging high school and some college. Students can earn both a high school diploma and an associate degree, and some are set on the path to a four-year degree.
Educators and big-ticket donors have praised the schools for saving students money and time — most schools compress the academic experience into four years. Since 2002, the Bill and Melinda Gates Foundation has provided more than $40 million toward initiatives. The Ford Foundation and the Carnegie Corporation of New York have also chipped in. President Obama is a proponent, giving a shout-out in his State of the Union address to P-Tech, a public-private partnership that pairs the New York City public school system and the City University of New York with I.B.M., which promises graduates a shot at a well-paying job.
There are now more than 400 early college high schools across the country — North Carolina has 76 of them — educating an estimated 100,000 students.
Bard, a liberal arts college in Annandale-on-Hudson, N.Y., is at the vanguard of the movement, with a president, Leon Botstein, who has long chastised the American high school system for its inefficiencies. More than 30 years ago, Bard took over Simon’s Rock, a private college for 11th graders and up in Great Barrington, Mass. In 2001, it opened an early college high school in Lower Manhattan, enormously popular with hyper-motivated New Yorkers, and in 2008 it started one in Queens that has become a magnet for the high-achieving offspring of Chinese, Polish and Bengali immigrants. Until now, Bard’s model has largely focused on elite students.
In Newark, Bard moved into a school building across from a tire shop and a bail bond business. Hanging outside is a cheerful red banner with the Bard name etched in white, as if to signal that new life is being breathed into the neighborhood.
Over the course of the past two decades, the political idiom of liberalism has substantially expanded its global reach and dominance. In the vast majority of the world’s existing states, principles of individual rights and collective recognition have been or are being enshrined in constitutions and other legal codes, and actors in the public sphere and the realm of civil society are adopting liberal discourse in order to press their claims for equality and freedom. The recent Arab Spring is only one of the most recent instantiations of this larger trend.
Yet even as we acknowledge liberalism’s dominance, we should not overlook those settings where it still (and ironically) carries a counter-hegemonic charge. One such locale is the Republic of Turkey, ostensibly one of the most stable and democratic states in the wider Middle East. Here a variety of Islamic organizations have relied on liberal imaginings in their efforts to challenge the state’s anti-clerical model of secularism.
This Islamic recourse to liberalism is the central concern of Jeremy Walton’s intriguing article in the most recent American Ethnologist, “Confessional Pluralism and the Civil Society Effect.” Walton pays particular attention to the work of four Islamic NGOs in Istanbul and Ankara, all of which have adopted the language of confessional pluralism in their efforts to obtain recognition from the state and secure their inclusion in Turkish public life.[i] These organizations define “religion” as a nonpolitical, voluntary mode of social and ethical life that legitimately, indeed necessarily, takes different forms. They also insist that these varied modes of life deserve acknowledgement and protection on the basis of “the ostensibly universal values of liberty and equality.”
When viewed from the perspective of Turkey’s party politics, these NGOs make strange bedfellows. Three of the organizations analyzed by Walton represent Alevism, a syncretic minority tradition that can be broadly defined by its emphasis on Twelver Shi’a history and belief, its incorporation of Central Asian mystical and shamanistic practices, and its distinctive ritual performances. Alevis have typically supported the Republican People’s Party (CHP, the party established by Mustafa Kemal Atatürk) because its staunch secularism has appeared to offer a bulwark against Sunni majoritarianism and discrimination. The fourth organization, meanwhile, is a Sunni association inspired by the contemporary Turkish theologian Fethullah Gülen and his project of universal religious dialogue. It also epitomizes the recent emergence of the Sunni Muslim bourgeoisie, the constituency that has played a pivotal role in the ascendance of the Justice and Development Party (AKP) under Prime Minister Recep Tayyip Erdoğan. Thanks to its overwhelming success in local and national elections over the past decade, the AKP has effectively supplanted the CHP as Turkey’s preeminent political party.
Yet as Walton rightly notes, these NGOs’ seemingly obvious political differences belie their common turn to the liberal rhetoric of pluralism and collective recognition. All of them desire public acknowledgement of their own (and others’) communities and identities, and all thereby challenge the presumption of ethnolinguistic and religious homogeneity that has prevailed in Turkish governmental discourse since the founding of the Republic in 1923. In addition, all of these organizations question the state’s long-standing effort not only to define and regulate the legitimate practice of religion (especially Sunni Islam), but also to limit religious expression to the private sphere. These rather paradoxical governmental imperatives, which remained largely unchallenged in Turkey until the 1990s, can be traced to the laicist model of secularism that the Republic adopted from the French Jacobin tradition.
In subtle or dramatic ways, all of these NGOs seek to divert Turkish secularism from its previous path. One of the Alevi organizations, for example, seeks a mode of pluralism that would grant to Alevis the same privileges—state funding for houses of worship, inclusion in the mandatory religion classes taught in public schools—that the state has historically allocated to Sunni Islam. Another Alevi association, by contrast, favors an “American-style” secularism that would limit or even prohibit state intervention in religious affairs. The Sunni organization, meanwhile, seeks to promote tolerance and public dialogue across confessional boundaries in a manner that departs markedly from the state’s efforts to privatize religious expression. Significantly, the idiom of liberalism is flexible enough to accommodate these varied and not always compatible projects.
At the same time, the liberal language of confessional pluralism creates tensions and dilemmas for the very organizations that seek to mobilize it. Above all, claims for collective recognition presume coherent and “authentic” (i.e., long-standing, non- or pre-political) religious identities as the necessary ground for communal acknowledgement and equal protection. As Walton convincingly relates, it is precisely such coherence and authenticity that prove elusive for many Islamic NGOs. Alevi associations in particular are defined by intense arguments over the very definition of Alevi identity. Does Alevism constitute a distinct and more or less uniform tradition of its own? What precisely is its relationship with Islam? Does Alevism even constitute a “religion” as the concept is commonly understood, or is it rather a body of folklore, a philosophical and political orientation, or an ethnicity? Alevi associations disagree sharply on the answers to these questions, even as they share a common discursive logic.
Walton is somewhat less persuasive, however, when he turns to Islamic NGOs’ relationship to the state and state governance. In his reading, these associations engage in a form of “nongovernmental politics” that does not aspire to occupy the position of a governing agency. In fact, they contribute to what Walton, drawing on the work of Timothy Mitchell, calls “the civil society effect”: the romantic notion that civil society constitutes “a self-evident domain of freedom and authenticity” wholly autonomous from the state. I follow Walton’s reasoning when he notes that the NGOs he analyzes have displayed an increasing skepticism toward Turkey’s dominant model of secularism and its major political parties, including the CHP and the AKP. I believe he oversteps, however, when he suggests that many if not all of these associations dismiss political society and the state. To my mind, the very language of liberalism adopted by these NGOs indicates that they care a great deal about the state and its policies. Very much in the spirit of Arendt’s celebrated pronouncements in The Origins of Totalitarianism, they grasp that rights and recognition, if they are to have real substance, must be backed and warranted by the state’s governmental power.
This wrong turn notwithstanding, Walton’s argument makes for stimulating reading. Perhaps above all, it offers a sharp challenge to the still common presumption that Islam and modern politics are hermetically separate, fundamentally irreconcilable domains. Instead, as Walton subtly demonstrates, they “authorize, animate, challenge, and contextualize each other in contextually specific ways.”
[i] For the sake of easy reading, I do not dwell on the NGOs by name, but the Alevi associations include the Cem Foundation, the Hacı Bektaş Veli Anatolian Cultural Foundation, and the Ehl-i Beyt Foundation. The Sunni association aligned with Gülen is the Journalists and Writers Foundation.
“The most common way people give up their power is by thinking they don’t have any.”
— Alice Walker
“The shift from the ‘why’ and ‘what’ to the ‘how’ implies that the actual objects of knowledge can no longer be things or eternal motions but must be processes, and that the object of science is no longer nature or the universe but the history, the story of the coming into being, of nature or life or the universe....Nature, because it could be known only in processes which human ingenuity, the ingeniousness of homo faber, could repeat and remake in the experiment, became a process, and all particular natural things derived their significance and meaning solely from their function in the over-all process. In the place of the concept of Being we now find the concept of Process. And whereas it is in the nature of Being to appear and thus disclose itself, it is in the nature of Process to remain invisible, to be something whose existence can only be inferred from the presence of certain phenomena.”
—Hannah Arendt, The Human Condition
Bookending Arendt’s consideration of the human condition “from the vantage point of our newest experiences and our most recent fears” is her invocation of several “events,” which she took to be emblematic of the modern world launched by the atomic explosions of the 1940s and the threshold of the modern age that preceded it by several centuries. The event she invokes in the opening pages is the launch of Sputnik in 1957; its companion events are named in the last chapter of the book: the discovery of America, the Reformation, and the invention of the telescope and the development of a new science.
Not once mentioned in The Human Condition, but, as Mary Dietz argued so persuasively in her Turning Operations, palpably present as a “felt absence,” is the event of the Shoah, the “hellish experiment” of the SS concentration camps, which is memorialized today, Yom HaShoah. Reading Arendt’s commentaries on the discovery of the Archimedean point and its application in modern science with the palpably present but textually absent event of the Holocaust in mind sheds new light on the significance of her cautionary tale about the worrying implications of the new techno-science of algorithms and quantum physics and its understanding of nature produced through the experiment.
What happens, she seems to be asking, when the meaning of all “particular things” derives solely from “their function in the over-all process”? If nature in all of its aspects is understood as the inter- (or intra-) related aspects of the overall life process of the universe, does then human existence, as part of nature, become merely one part of that larger process, differing perhaps in degree, but not kind, from any other part?
Recently, “new materialist” philosophers have lauded this so-called “posthumanist” conceptualization of existence, arguing that the anthropocentrism anchoring earlier modern philosophies—Arendt implicitly placed among them?—arbitrarily separates humans from the rest of nature and positions them as masters in charge of the world (universe). By contrast, a diverse range of thinkers such as Jane Bennett, Rosi Braidotti, William Connolly, Diana Coole, and Cary Wolfe have drawn on a variety of philosophical and scientific traditions to re-appropriate and “post-modernize” some form of vitalism. The result is a reformulation of an ontology of process—what Connolly calls “a world of becoming”—as the most accurate way to understand matter’s dynamic and eternal self-unfolding. And, consequently, it entails transforming agency from a human capacity of “the will” with its related intentions to a theory of agency of “multiple degrees and sites...flowing from simple natural processes, to human beings and collective social assemblages,” with each level and site containing “traces and remnants from the levels from which it evolved,” which “affect [agency’s] operation” (Connolly, A World of Becoming, p. 22, emphasis added). The advantage of a “philosophy/faith of radical immanence or immanent realism,” Connolly argues, is its ability to engage the “human predicament”: “how to negotiate life, without hubris or existential resentment, in a world that is neither providential nor susceptible to consummate mastery. We must explore how to invest existential affirmation in such a world, even as we strive to fend off its worst dangers.”
An implicit ethic resides in this philosophy/faith: to take better care of the world, “to fold a spirit of presumptive generosity for the diversity of life into your conduct” by not becoming too enamored with human agency. In the entanglements she explores between human and non-human materiality—a “heterogeneous monism of vibrant bodies”—one can discern similar ethical concerns in Jane Bennett’s Vibrant Matter. “It seems necessary and impossible to rewrite the default grammar of agency, a grammar that assigns activity to people and passivity to things.” Conceptualizing nature as “an active becoming, a creative not-quite-human force capable of producing the new,” Bennett affirms a “vital materiality [that] congeals into bodies, bodies that seek to persevere or prolong their run” (p. 118, emphasis in the original), where “bodies” connotes all forms of matter. And she contends that this vital materialism can “enhance the prospects for a more sustainability-oriented public.” Yet, without some normative criteria for discerning the ways this new materialism can work toward “sustainability,” it is by no means obvious how either a declaration of faith in the “radical character of the (fractious) kinship between the human and the non-human” or a greater “attentiveness to the indispensable foreignness that we are” would lead to a change in political direction toward more gratitude and away from more destructive patterns of production and consumption. The recognition of our vulnerability could just as easily lead to renewed efforts to truncate or even eradicate the “foreignness” within.
Nonetheless, although these and other accounts call for a reconceptualization of concepts of agency and of causality, none pushes as far toward a productivist/performative account of matter and meaning as does Karen Barad’s theory of “agential realism.” Drawing out the implications of Niels Bohr’s quantum mechanics, Barad develops a theory of how “subjects” and “objects” are produced as apparently separable entities by “specific material configurings of the world” which enact “boundaries, properties, and meanings.” And, in her conceptualization, “meaning is not a human-based notion; rather meaning is an ongoing performance of the world in its differential intelligibility...Intelligibility is not an inherent characteristic of humans but a feature of the world in its differential becoming. The world articulates itself differently...[H]uman concepts or experimental practices are not foundational to the nature of phenomena.” The world is immanently real and matter immanently materializes.
At first glance, this posthumanist understanding of reality seems consistent with Arendt’s own critique of Cartesian dualism and Newtonian physics and her understanding of the implicitly conditioned nature of human existence. “Men are conditioned beings because everything they come into contact with turns immediately into a condition of their existence. The world in which the vita activa spends itself consists of things produced by human activities; but the things that owe their existence exclusively to men nevertheless constantly condition their human makers.” Nonetheless, there is a profound difference between them. For Barad, “world” is not Arendt’s humanly built habitat, the domain of homo faber (which does not necessarily entail mastery of nature, but always involves a certain amount of violence done to nature, even to the point of “degrading nature and the world into mere means, robbing both of their independent dignity” (H.C., p. 156, emphasis added)). “World” is matter, the physical, ever-changing reality of an inherently active, “larger material configuration of the world and its ongoing open-ended articulation.” Or is it?
Since this world is made demonstrably real or determinate only through the design of the right experiment to measure effects, or marks, on bodies or “measuring agencies” (such as a photographic plate) made or produced by “measured objects” (such as electrons), the physical nature of this reality becomes an effect of the experiment itself. Despite the fact that Barad insists that “phenomena do not require cognizing minds for their existence” and that technoscientific practices merely manifest “an expression of the objective existence of particular material phenomena” (p. 361), the importance of the well-crafted scientific experiment to establishing the fact of matter looms large.
Why worry about the experiment as the basis for determining the nature of nature, including so-called “human nature”? For Arendt, the answer was clear: “The world of the experiment seems always capable of becoming a man-made reality, and this, while it may increase man’s power of making and acting, even of creating a world, far beyond what any previous age dared imagine...unfortunately puts man back once more—and now even more forcefully—into the prison of his own mind, into the limitations of patterns he himself has created...[A] universe construed according to the behavior of nature in the experiment and in accordance with the very principles which man can translate technically into a working reality lacks all possible representation...With the disappearance of the sensually given world, the transcendent world disappears as well, and with it the possibility of transcending the material world in concept and thought.”
The transcendence of representationalism does not trouble Barad, who sees “representation” as a process of reflection or mirroring hopelessly entangled with an outmoded “geometrical optics of externality.” But for Arendt, appearance matters, and not in the sense that a subject discloses some inner core of being through her speaking and doing, but in the sense that what is given to the senses of perception—and not just to the sense of vision—is the basis for constructing a world in common. The loss of this “sensually given world” found its monstrous enactment in the world of the extermination camps, which Arendt saw as “special laboratories to carry through its experiment in total domination.”
If there is a residual humanism in Arendt’s theorizing, it is not a simplistic anthropocentrism that takes “man as the measure of all things,” a position she implicitly rejects, especially in her critique of instrumentalism. Rather, she insists that “the modes of human cognition [science among them] applicable to things with ‘natural’ qualities, including ourselves to the limited extent that we are specimens of the most highly developed species of organic life, fail us when we raise the question: And who are we?” (H.C., p. 11, emphasis in the original) And then there is the question of responsibility.
We may be unable to control the effects of the actions we set in motion, or, in Barad’s words, “the various ontological entanglements that materiality entails.”
But no undifferentiated ascription of agency to matter, or material sedimentations of the past “ingrained in the body’s becoming,” can release us humans from the differential burden of consciousness and memory that is attached to something we call the practice of judgment. And no appeal to an “ethical call...written into the very matter of all being and becoming” will settle the question of judgment, of what is to be done. There may be no place to detach ourselves from responsibility, but how to act in the face of it is by no means given by the fact of entanglement itself. What if “everything is possible”?
-Kathleen B. Jones
I was at dinner with a colleague this week—midterm week. Predictably, talk turned to the scourge of all professors: grading essays. There are few tasks in the life of a college professor less fulfilling than grading student essays. Every once in a while a really good essay jolts me to consciousness. I am elated by such encounters. To be honest, however, reading essays is for the most part stultifying. This is not the fault of the students, many of whom are brilliant and exuberant writers. I find it trying to wade through 25 essays discussing the same book, offering varying opinions and theories, while keeping my attention and interest. How many different ways can one ask for a thesis, talk about the importance of transition sentences, and correct grammar? For a time it is fun, in a way. One learns new things and is captivated by comparing how bright young minds see things. But after years, grading essays becomes simply the worst part of a great job.
So how might my colleagues and I react to news that EdX—the influential Harvard-MIT-led consortium offering online courses—has developed software that will grade college student essays? I imagine it is sort of like how people felt when the dishwasher was invented. You mean we can cook and feast and don’t have to scrub pots and wash dishes? It promises to allow us to focus on teaching well without having to do that part of our job that we truly dread.
The appeal of computer grading is obvious and broad. Not only will many professors and teachers be freed from unwanted tedium, but it may also help our students. One advantage of computer grading is that it is nearly instantaneous. Students can hand in their work and get a grade and feedback seconds later. Too often essays are handed back days or even weeks after they are submitted. By then the students have lost interest in their paper and forgotten the inspiration that breathed life into their writing. To receive immediate feedback will allow students to see what they did wrong and how they could improve while the generative impulse underlying the paper is still fresh. Computer grading might encourage students to turn in numerous drafts of a paper; it may very well help teach students to write better, something that professorial comments delivered after a week rarely accomplish.
Another putative advantage of computer grading is its objectivity and consistency. Every professor knows that it matters when we read essays and in what order. Some essays find us awake and attentive. Others meet my eyes as they struggle to remain open. As much as I try to ignore the names on the top of the page, I can’t deny that my reading and grading are personalized to the students. I teach at a small liberal arts college where I know the students. If I read a particularly difficult sentence by a student I have come to trust, I often make a second effort. My personal attention has advantages, but it is of course discriminatory. The computer will not do that, which some may see as fairer. What is more, the computer doesn’t get tired or need caffeine.
Perhaps the most important advantage for administrators considering these programs is the cost savings. If computers relieve professors from the burden of grading, that means professors can teach more. It may also mean that fewer TAs are necessary in large lecture courses, thus saving money for strapped universities. There may even be a further side benefit to these programs. If universities need fewer TAs to grade papers, they may admit fewer graduate students to their programs, thus going some way towards alleviating the extraordinary and irresponsible over-production of young professors that is swelling the ranks of unemployable Ph.D.s.
There are, of course, real worries about computer grading of essays. My concern is not that the computers will make mistakes (so do I); or that we lack studies that show that computers can grade as well as human professors—for I doubt professors are on the whole excellent graders. The real issue is elsewhere.
According to the group “Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment,” the problem with computer grading of essays is simple: Machines cannot read. Here is what the group says in a statement:
Let’s face the realities of automatic essay scoring. Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.
What needs to be taken seriously is not that computers can’t grade as well as humans. In many ways they grade better. More consistently. More honestly. With less grade inflation. And more quickly. But computer grading will be different from human grading. It will be less nuanced and will aspire to clearly defined criteria. Are sentences grammatical? Is there a clear statement of the thesis? Are there examples given? Is there a transition between sentences? All of these are important parts of good writing, and the computer can be trained to look for these characteristics in an essay. What this means, however, is that computers will demand the kind of clear, precise, and logical writing that computers can understand and that many professors and administrators demand from students. What this also means, however, is that writing will become more mechanical.
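To make the mechanical character of such criteria concrete, here is a deliberately crude sketch of criterion-based checking. This is not EdX’s actual method—the cue lists and scoring are invented for illustration—but it shows how questions like “is there a thesis statement?” or “are there transitions?” can be answered by pattern matching without the software “reading” anything at all.

```python
# Toy illustration of criterion-based essay checking.
# The cue word lists below are hypothetical, not drawn from any real grader.

THESIS_CUES = ("i argue", "i claim", "this essay", "i will show")
TRANSITIONS = ("however", "moreover", "therefore", "in contrast")
EXAMPLE_CUES = ("for example", "for instance", "such as")

def score_essay(text: str) -> dict:
    """Return a dictionary of mechanical 'features' of an essay."""
    lowered = text.lower()
    paragraphs = [p for p in lowered.split("\n\n") if p.strip()]
    first = paragraphs[0] if paragraphs else ""
    return {
        "has_thesis_cue": any(cue in first for cue in THESIS_CUES),
        "transition_count": sum(lowered.count(t) for t in TRANSITIONS),
        "has_examples": any(cue in lowered for cue in EXAMPLE_CUES),
        "paragraph_count": len(paragraphs),
    }

essay = ("I argue that drones change war.\n\n"
         "For example, robot soldiers remove the human factor. "
         "However, power still matters.")
print(score_essay(essay))
```

A student who learns what the checker rewards will, of course, write to the checker—which is precisely the worry about mechanical writing.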
There is much to be learned here from an analogy with the rise of computer chess. The great grandmaster Garry Kasparov—who famously lost to Deep Blue—has perceptively argued that machines have changed the way chess is played and redefined what a good chess move and a well-played chess game look like. As I have written before:
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers. One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic.
Much the same might be expected from the increasing use of computers to grade (and eventually to write) essays. Students will learn to write in ways expected from computers, just as they today try to learn to write in ways desired by their professors. The difference is that different professors demand and respond to varying styles. Computers will consistently and logically drive writing towards a more mechanical and logical style. Writing, like chess playing, will likely become more rational, more efficient, and more effective, but also less beautiful, less unique, and less eccentric. In other words, writing will become less human.
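The “counting up” Kasparov describes can be sketched in a few lines. This is a toy illustration, not an actual engine: real programs layer positional factors and the search of billions of continuations on top of a material count like this one, but the reduction of the game to numbers starts here.

```python
# A minimal sketch of material counting in chess evaluation.
# Standard textbook piece values; the king carries no material value.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(position: str) -> int:
    """Sum piece values in a position string.

    Uppercase letters are White's pieces (counted positively),
    lowercase are Black's (counted negatively). Other characters
    are ignored.
    """
    balance = 0
    for ch in position:
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            balance += value if ch.isupper() else -value
    return balance

# White: queen, rook, three pawns (17) vs. Black: rook, knight, three pawns (11).
print(material_balance("QRPPP rnppp"))  # → 6, i.e. White is up six points
```

The point of the exercise is how little the machine cares about style or history: two positions with the same count are, at this level, identical.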
It turns out that many secondary school districts already use computers to grade essays. But according to John Markoff in The New York Times, the EdX software promises to bring the technology into college classrooms as well as online courses.
It is quite possible that in the near future, my colleagues and I will no longer have to complain about grading essays. But that is unlikely at Bard. More likely is that such software will be used in large university lecture courses. In such courses with hundreds of students, professors already shorten questions or replace essays with multiple-choice tests. Or they use armies of underpaid graduate students to grade these essays. It is quite likely that software will actually augment the educational value of writing assignments at college in these large lecture halls.
In seminars, however, and in classes at small liberal arts colleges like Bard where I teach, such software will not likely free my colleagues and me from reading essays. The essays I assign are not simple responses to questions in which there are clear criteria for grading. I look for elegance, brevity, insight, and the human spark (please no comments on my writing). Whether or not I am good at evaluating writing or at teaching writing, that is my aspiration. I seek to encourage writing that is thoughtful rather than writing that is simply accurate. When I have time to make meaningful comments on papers, they concern structure, elegance, and depth. It is not only a way to grade an essay, but also a way to connect with my students and help them to see what it means to write and think well.
And yet, I can easily imagine making use of such a computer-grading program. I rarely have time to grade essays as well or as quickly as I would like. I would love to have my students submit drafts of their essays to the EdX computer program.
If they could repeatedly submit their essays and receive such feedback and use the computer to catch not only grammatical errors but also poor sentences, redundancies, repetitions, and whatever other mistakes the computer can be trained to recognize, that would allow them to respond and rework their essays many times before I see them. Used well, I hope, such grading programs might really augment my capacities as a professor and their experiences as students.
I have real fears that grading technology will rarely be used well. Rather, it will too often replace human grading altogether and, in large lectures, high schools, and standardized tests, will impose a new and inhuman standard on the way we write and thus the way we think. We should greet such new technologies enthusiastically and skeptically. But first, we should try to understand them. Towards that end, it is well worth reading John Markoff’s excellent account of the new EdX computer grading software in The New York Times. It is your weekend read.
My thought is me: that is why I cannot stop thinking. I exist because I think I cannot keep from thinking.
— Jean-Paul Sartre
Critical thinking is possible only where the standpoints of all others are open to inspection. Hence, critical thinking, while still a solitary business, does not cut itself off from ‘all others.’ To be sure, it still goes on in isolation, but by the force of imagination it makes the others present and thus moves in a space that is potentially public, open to all sides; in other words, it adopts the position of Kant’s world citizen. To think with an enlarged mentality means that one trains one’s imagination to go visiting.
-Hannah Arendt, Lectures on Kant's Political Philosophy, 43
Arendt’s appeal to the “enlargement of the mind” of Kantian judgment is well known and is often discussed in relation to Eichmann’s failure to think and recognize the world’s plurality. To the extent that we find lessons in these discussions, a prominent one is that we might all be vulnerable to such failures of judgment.
While recognizing how easy it is for us to not think, especially in the bureaucratic structures of the contemporary world, I want to focus here on the moments of thinking and judgment that do occur but fail to garner recognition.
I was recently involved in a discussion about educational and other support programs in prisons around the country. During the conversation, someone made the observation that these programs seem to appeal especially to women. It was the case that each of the women in this conversation had been involved in some prison program, either as an attorney or an educator. But the observation was intended, of course, to go beyond this relatively small group.
I don’t know whether it’s true that many more women than men are involved in programs like Bard’s Prison Initiative or the Innocence Project or any number of such programs. But what struck me about this conversation was that despite no one claiming to possess any knowledge beyond his or her personal observations, many seemed relatively certain about the possible explanation for this phenomenon (or non-phenomenon): that women might have a greater capacity to empathize with others, not because we are innately sensitive beings, but because we can more easily recognize the suffering of others and respond to that suffering.
Many readers of Arendt will immediately react to this description with Arendt’s critique of empathy in mind. For Arendt, empathy destroys critical thinking to the extent that it tries to “know what actually goes on in the mind of all others,” as opposed to comparing our judgment with the possible judgments of others (Lectures on Kant’s Political Philosophy, 43). In trying to feel like someone else, empathy makes it impossible to respond politically, as it destroys the distance between individuals that makes a response to another as other possible.
But if not empathy, what might better describe those, whether they are women or men, who are open to the sufferings and injustices of others? The answer, I submit, is critical thinking.
For Arendt, critical thinking is necessarily imaginative, as it requires that the thinker make “the others present.” The presence of others is not achieved by imagining what goes on in each of the minds of these imagined others. Rather, this presence is what allows one imaginatively to construct a public space in which one’s actions are visible to other people.
Critical thinking thus most importantly lies not in the ability to compare our judgment with the possible judgments of all others, which is what is often stressed in discussions of Arendtian judgment, but rather in the adoption of the position of Kant’s “world citizen.” Adopting such a position is less about imagining others as such and more about recognizing that one is always putting oneself out there for others to judge. Insofar as it is necessary to construct the audience to which the thinker presents herself, the imagination of others is the first step to critical thinking, but only the first step. Critical thinking is, as Kant writes in “What is Enlightenment?,” “addressing the entire reading public” such that one presents oneself for judgment by this learned group of which one purports to be a member. Like a politician or a writer or an actor, the critical thinker acts with the understanding that she will be judged not just by friends, lovers, or like-minded compatriots, but by an entire learned public whose judgments are tempered neither by love nor even self-serving support.
The space in which women moved has always been “public” to the extent that women who acted always did so with the knowledge that they were opening themselves up to the judgment of others. Acting thus takes courage and a true living of the motto of the Enlightenment: “Sapere aude! Have the courage to use your own understanding!” (Kant, “What is Enlightenment?”).
But acting also necessarily engages critical thinking in another sense: one’s actions are always public to the extent that in acting one presents oneself for judgment to the world and discloses oneself. The thinking of women might, in this way, have been “forced” into the realm of the critical, for as solitary as the activity of thinking necessarily is, it occurs in a space in which the others are present by not only the “force of imagination,” but also the force of history. Thus, if certain professions, causes, or activities do draw relatively more women than men, part of the explanation might be that women think more critically. The world that one sees, with all its injustices and its suffering, does not move one to action or service. But this world is not the world in which one thinks or acts. Rather, one moves in and responds to the imagined one in which what one does is meaningful because one’s actions are being judged and because as vulnerable as one might feel in being judged, judgment brings along with it the implicit recognition that what one does is visible to others and, quite simply, that it might matter.
Arendt’s understanding of judgment is closely tied to Kant’s Critique of Judgment for a good reason: she herself builds her ideas directly on Kantian judgment. But reading Arendtian judgment through Kant’s shorter piece, “What is Enlightenment?” opens up to us aspects of the former that have previously been obscured. And it opens us up to acts of thinking, judgment, and courage to which we are often blind. Again, I don’t know that more women than men engage in work that supports prisoners and advances the cause of prisoners’ rights. But I don’t think it is controversial to say that the perception that they do exists and that women’s ability to empathize with others, whether because of their backgrounds or simply because they are women, is frequently an accompanying discourse. This could be the right explanation. But it could also be an expression not only of prejudices of what women are, but also of an insufficiency of our conceptual vocabulary to capture what it is that is going on in a way that does not simply reassert these prejudices.
In an essay in the Wall Street Journal, Frans de Waal—C. H. Candler Professor of Primate Behavior at Emory University—offers a fascinating review of recent scientific studies that upend long-held expectations about the intelligence of animals. De Waal rehearses a catalogue of fantastic studies in which animals do things that scientists have long thought they could not do. Here are a few examples:
Ayumu, a male chimpanzee, excels at memory; just as the IBM computer Watson can beat human champions at Jeopardy, Ayumu can easily best the human memory champion in games of memory.
Similarly, Kandula, a young elephant bull, was able to reach some fragrant fruit hung out of reach by moving a stool over to the tree, standing on it, and reaching for the fruit with his trunk. I’ll admit this doesn’t seem like much of a feat to me, but for the researchers de Waal talks with, it is surprising proof that elephants can use tools.
Scientists may be surprised that animals can remember things or use tools to accomplish tasks, but anyone raised on children’s tales of Lassie or Black Beauty knows this well, as does anyone whose pet dog has opened a door, brought them a newspaper, or barked at intruders. The problem these studies address is less our societal view of animals than the overly reductive view of animals that de Waal attributes to his fellow scientists. It’s hard to take these studies seriously as evidence that animals think in the way that humans do.
Seemingly more interesting are experiments with self-recognition and also facial recognition. De Waal describes one Asian elephant who stood in front of a mirror and “repeatedly rubbed a white cross on her forehead.” Apparently the elephant recognized the image in the mirror as herself. In another experiment, chimpanzees were able to recognize which pictures of chimpanzees were from their own species. Like my childhood Labrador who used to stare knowingly into the mirror, these studies confirm that animals are able to recognize themselves. This suggests that animals likely understand that they are selves.
For de Waal, these studies have started to upend a view of humankind's unique place in the universe that dates back at least to ancient Greece. “Science,” he writes, “keeps chipping away at the wall that separates us from the other animals. We have moved from viewing animals as instinct-driven stimulus-response machines to seeing them as sophisticated decision makers.”
The flattening of the distinction between animals and humans is to be celebrated, de Waal argues, and not feared. He writes:
Aristotle's ladder of nature is not just being flattened; it is being transformed into a bush with many branches. This is no insult to human superiority. It is long-overdue recognition that intelligent life is not something for us to seek in the outer reaches of space but is abundant right here on earth, under our noses.
De Waal has long championed the intelligence of animals, and now his vision is gaining momentum. This week, in a long essay called “One of Us” in the new Lapham’s Quarterly on animals, the glorious essayist John Jeremiah Sullivan begins with this description of studies similar to the ones de Waal writes about:
These are stimulating times for anyone interested in questions of animal consciousness. On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly. New animal behaviors and capacities are observed in the wild, often involving tool use—or at least object manipulation—the very kinds of activity that led the distinguished zoologist Donald R. Griffin to found the field of cognitive ethology (animal thinking) in 1978: octopuses piling stones in front of their hideyholes, to name one recent example; or dolphins fitting marine sponges to their beaks in order to dig for food on the seabed; or wasps using small stones to smooth the sand around their egg chambers, concealing them from predators. At the same time neurobiologists have been finding that the physical structures in our own brains most commonly held responsible for consciousness are not as rare in the animal kingdom as had been assumed. Indeed they are common. All of this work and discovery appeared to reach a kind of crescendo last summer, when an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
With nuance and subtlety, Sullivan understands that our tradition has not drawn the boundary between human and animal nearly as securely as de Waal portrays it. Throughout human existence, humans and animals have been conjoined in the human imagination. Sullivan writes that the most consistent “motif in the artwork made between four thousand and forty thousand years ago” is the focus on “animal-human hybrids, drawings and carvings and statuettes showing part man or woman and part something else—lion or bird or bear.” In these paintings and sculptures, our ancestors gave form to a basic intuition: “Animals knew things, possessed their forms of wisdom.”
Religious history too is replete with evidence of the human recognition of the dignity of animals. God says in Isaiah that the beasts will honor him and St. Francis, the namesake of the new Pope, is famous for preaching to birds. What is more, we are told that God cares about the deaths of animals.
In the Gospel According to Matthew we’re told, “Not one of them will fall to the ground apart from your Father.” Think about that. If the bird dies on the branch, and the bird has no immortal soul, and is from that moment only inanimate matter, already basically dust, how can it be “with” God as it’s falling? And not in some abstract all-of-creation sense but in the very way that we are with Him, the explicit point of the verse: the line right before it is “fear not them which kill the body, but are not able to kill the soul.” If sparrows lack souls, if the logos liveth not in them, Jesus isn’t making any sense in Matthew 10:28-29.
What changed and interrupted the ancient and deeply human appreciation of our kinship with besouled animals? Sullivan’s answer is René Descartes. The modern depreciation of animals, Sullivan writes,
proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata—windup toys, like the Renaissance protorobots he’d seen as a boy in the gardens at Saint-Germain-en-Laye, “hydraulic statues” that moved and made music and even appeared to speak as they sprinkled the plants.
Too easy, however, is the move to say that the modern comprehension of the difference between animal and human proceeds from a mechanistic view of animals. We live at a time of the animal rights movement. Around the world, societies exist and thrive whose mission is to prevent cruelty toward and to protect animals. Yes, factory farms treat chickens and pigs as organic mechanisms for the production of meat, but these farms co-exist with active and quite successful movements calling for humane standards in food production. Whatever the power of Cartesian mechanics, its success is at odds with the persistence of a religious and ancient solidarity, and also a deeply modern sympathy, between human and animal.
A more meaningful account of the modern attitude towards animals might be found in Spinoza. Spinoza, as Sullivan quotes him, recognizes that animals feel in ways that Descartes did not. As do animal rights activists, Spinoza admits what is obvious: that animals feel pain, show emotion, and have desires. And yet, Spinoza maintains a distinction between human and animal—one grounded not in emotion or feeling, but in human nature. In his Ethics, he writes:
Hence it follows that the emotions of the animals which are called irrational…only differ from man’s emotions to the extent that brute nature differs from human nature. Horse and man are alike carried away by the desire of procreation, but the desire of the former is equine, the desire of the latter is human…Thus, although each individual lives content and rejoices in that nature belonging to him wherein he has his being, yet the life, wherein each is content and rejoices, is nothing else but the idea, or soul, of the said individual…It follows from the foregoing proposition that there is no small difference between the joy which actuates, say, a drunkard, and the joy possessed by a philosopher.
Spinoza argues against the law prohibiting slaughter of animals—it is “founded rather on vain superstition and womanish pity than on sound reason”—because humans are more powerful than animals. Here is how he defends the slaughter of animals:
The rational quest of what is useful to us further teaches us the necessity of associating ourselves with our fellow men, but not with beasts, or things, whose nature is different from our own; we have the same rights in respect to them as they have in respect to us. Nay, as everyone’s right is defined by his virtue, or power, men have far greater rights over beasts than beasts have over men. Still I do not deny that beasts feel: what I deny is that we may not consult our own advantage and use them as we please, treating them in the way which best suits us; for their nature is not like ours.
Spinoza’s point is quite simple: Of course animals feel, and of course they are intelligent. Who could doubt such a thing? But they are not human. That is clear too. While we humans may care for and even love our pets, we recognize the difference between a dog and a human. And we will, in the end, associate more with our fellow humans than with dogs and porpoises. Finally, we humans will use animals when they serve our purposes. And this is OK, because we have the power to do so.
Is Spinoza arguing that might makes right? Surely not in the realm of law amongst fellow humans. But he is insisting that we recognize that, for us humans, there is something about being human that is different, even higher and more important. Spinoza couches his argument in the language of natural right, but what he is saying is that we must recognize that there are important differences between animals and humans.
At a time that values equality over what Friedrich Nietzsche called the “pathos of difference,” the valuation of human beings over animals is ever more in doubt. This comes home clearly in a story told recently by General Stanley McChrystal, about a soldier who expressed sympathy for some dogs killed in a raid in Iraq. McChrystal responded, severely: “‘Seven enemy were killed on that target last night. Seven humans. Are you telling me you're more concerned about the dog than the people that died?’ The car fell silent again. ‘Hey listen,’ I said. ‘Don't lose your humanity in this thing.’” Many, no doubt, are more concerned, or at least equally concerned, about the deaths of animals as about the deaths of humans. There is ever-increasing discomfort with McChrystal’s common-sense affirmation of Spinoza’s claim that human beings simply are of more worth than animals.
The distinctions upon which the moral sense of human distinction is based are foundering. For DeWaal and Sullivan, the danger today is that we continue to insist on differences between animals and humans—differences that we don’t fully understand. The consequence of their openness to the humanization of animals, however, is undoubtedly the animalization of humans. The danger that we humans lose sight of what distinguishes us from animals is much more significant than the possibility that we underestimate animal intelligence.
I fully agree with DeWaal and Sullivan that there is a symphony of intelligence in the world, much of it not human. And yes, we should have proper respect for our ignorance. But all the experiments in the world do little to alter the basic fact that, no matter how intelligent, feeling, and even conscious animals may be, humans and animals are different.
What is the quality of that difference? It is difficult to say and may never be fully articulated in propositional form. On one level it is this: Simply to live, as do plants or animals, does not constitute a human life. In other words, human life is not simply about living. Nor is it about doing tasks or even being conscious of ourselves as humans. It is about living meaningfully. There may, of course, be some animals that can create worlds of meaning—worlds that we have not yet discovered. But their worlds are not the worlds to which we humans aspire.
Over two millennia ago, Sophocles, in his “Ode to Man,” named man Deinon, a Greek word that connotes both greatness and horror, that which is so extraordinary as to be at once terrifying and beautiful. Man, Sophocles tells us, can travel over water and tame animals, using them to plough fields. He can invent speech, and institute governments that bring humans together to form lasting institutions. As an inventor and maker of his world, this wonder that is man terrifyingly carries the seeds of his destruction. As he invents and comes to control his world, he threatens to extinguish the mystery of his existence, that part of man that man himself does not control. As the chorus sings: “Always overcoming all ways, man loses his way and comes to nothing.” If man so tames the earth as to free himself from uncertainty, what then is left of human being?
Sophocles knew that man could be a terror; but he also glorified the wonder that man is. He knew that what separates us humans from animals is our capacity to alter the earth and our natural environment. “The human artifice of the world,” Arendt writes, “separates human existence from all mere animal environment…” Not only by building houses and erecting dams—animals can do those things and more—but also by telling stories and building political communities that give to man a humanly created world in which he lives. If all we did as humans was live or build things on earth, we would not be human.
To be human means that we can destroy all living matter on the Earth. We can even today destroy the Earth itself. Whether we do so or not, to live on Earth today is now a choice that we make, not a matter of fate or chance. Our Earth, although we did not create it, is now something we humans can decide to sustain or destroy. In this sense, it is a human creation. No other animal has such a potential or such a responsibility.
There is a deep desire today to flee from that awesome and increasingly unbearable human responsibility. We flee, therefore, our humanity and take solace in the view that we are just one amongst the many animals in the world. We see this reductionism above all in human rights discourse. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of a politics of life—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the meaningful life, human rights rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Hannah Arendt saw human rights as dangerous precisely because they risked confusing the meaning of human worldliness with the existence of mere animal life. For Arendt, human beings are the beings who build and live in a political world, by which she means the stories, institutions, and achievements that mark the glory and agony of humanity. To be human, she insists, is more than simply living, laboring, working, acting, and thinking. It is to do all of these activities in such a way as to create, together, a common life amongst a plurality of persons.
I fear that the interest in animal consciousness today is less a result of scientific proof that animals are human than it is an increasing discomfort with the world we humans have built. A first step in responding to such discomfort, however, is a reaffirmation of our humanity and our human responsibility. There is no better way to begin that process than in engaging with a very human response to the question of our animality. Towards that end, I commend to you “One of Us,” by John Jeremiah Sullivan.
For two years I taught literature, reading, and writing at a public university in one of New York City’s outer boroughs. Of course, having come out of a liberal arts “thinking” institution, what I really thought (maybe hoped) I was teaching was new perspectives. Ironically, the challenge that most struck me was not administrative -- not class size, terrible grammar, or endless hours of grading. The most pressing obstacle lay in making a case for the value of “thinking.”
I say “case” because I regularly felt as if my passions and beliefs, as well as my liberal arts education, went on daily trial. I had come from a hard-scrabble immigrant reality, but my perception of reality had been altered by my education, and as an educator I felt the need to authenticate my progressive (core text) education with my students.
I was regularly reminded that the immediate world of the “average” student (citizen), with all its pressing, “real” concerns, does not immediately open itself to “thought” in the liberal arts sense. We are a specialized, automated, struggling, and hypercompetitive society. The “learning time” of a student citizen is spent in the acquisition of “marketable,” differentiating skills, while “free time” is the opportunity to decompress from, or completely escape, the pressures of competitive skill acquisition. The whole cycle is guided by an air of anxiety fostered by our national education philosophy, as well as by the troubled economy and scattered society at large. I don’t think one can teach the humanities without listening to one’s students, and listening to the students calls for a deep inventory of the value of “thought” in the humanities sense, and then, ultimately, of how to most truthfully communicate this value to the students.
I need to add here that my students were quite smart and insightful. This made the challenge even greater. Their intelligence was one of realism. I needed to both acknowledge and sway their perspective, as well as my own.
Each semester I began with a close reading of David Foster Wallace's commencement speech at Kenyon College, “This Is Water.” He begins the speech with the parable of two young fish who swim past an older fish, which asks them, “How's the water?” The little ones swim on and only later ask each other, “What is water?” Didactic parable, cliche -- yes -- but Wallace goes on to deconstruct the artifice of commencement speeches, parables, and cliches, and then rebuilds them. Having so skillfully deconstructed them, he has invited his listeners into the form making, and as he communicates the truth beneath what had earlier seemed lofty or cliche, the listeners follow him toward meaning making. Ultimately, Wallace states that education is “less about teaching you how to think, and more about teaching you the choice of what to think about.” To have agency is to be a meaning maker. And as more and more cultural institutions artfully vie for the citizen's devotion and loyalty -- politics, religion, but even more so corporate houses and pop culture designs -- the call to choose seems ever more muted in the ever-growing noise of institutional marketing.
The choice, for so many students today, is simply in how to most skillfully compartmentalize themselves and their lives in the face of the anxieties of their immediate world. The choice for many young teachers, facing their own set of related anxieties, is in how far they are willing to step away from the ideal of a learning-living-teaching integration model -- so easy is it today for an educator simply to become disenchanted, frustrated, and aloof. Sometimes, “thinking” is the process of choosing what to keep and what to give away.
Wallace's insightful, no-b.s., humorous, and sincere tone resonated with my students -- that is, of course, until they found out that Wallace had killed himself. Then that's what everyone wanted to focus on. I cannot blame them. There is a ‘text’ to ‘personal’ mystery, a ‘content’ to ‘context’ disjunction that opens itself at such a revelation, a mystery that the “thinking” mind wants to explore. The modern “thinking” mind draws little separation between the lofty and the sublime, the public and the personal. Such is a byproduct of a generation raised on reality television and celebrity stories. I, in all sincerity, cannot judge this. My generation, the Xers who came of age on the cusp of the Millennials, were culturally educated by MTV, The Real World, and Road Rules, and thus we crave hip, colorful, appropriately gentrified spaces to occupy -- think of artist collectives, or Facebook and Google working environments (bean bags, chill and chic prescription sunglasses, lounge happy hour with juice bars, untraditional working hours, colorful earth tones). But I digress; I meant to make some observation about “thinking.”
I was excited to teach what excited me: I began with Wallace, then Kafka, O’Connor (Flannery or Frank), Platonov, Carver, Babel, Achebe, Kundera, Eliot, etc. It is, essentially, the Seven Sisters freshman reading list, a popular catalogue of classic stories peppered with some international obscurity. It is the “cool” thing in liberal arts. But over and over my students came to me complaining that they could not find it relevant to their lives. After such reports I would tweak my lesson plans to give a greater introduction to the works, going deeper into the philosophical tenets of the stories and into the universal reward of being able to utilize the tools of the thinking, writing mind. Induct, deduct, compare, contrast, relate, “give it greater shape,” I would say. “Breathe life into it.”
To have the skills to decipher plot, to record the echo of a narrative, to infer characterization from setting, to understand the complex structure of a character, to be invited to participate in the co-creation of a narrative that gently guides you through action but leaves the moral implications up to the reader -- these are “indispensable,” I would advise my students. “Indispensable for human agency.” Some would slowly gravitate to my vision as I prodded further and further into their motivations for being in school, their careers, and other ‘relevant’ choices. Yet they often felt only like visitors in my library, preparing to check out and return to the “default” education thinking mode as soon as the quarter, mid-, or end-semester exam periods began. The pressures of what they call “the real world” are much stronger than the ghosts of books and introspective thought -- vague, powerless, intangible.
“The real world”: here I am reminded of the scene from The Matrix in which Morpheus unveils to Neo “the desert of the real” -- a barren wasteland in which human energy is merely a power source, nourished for consumption. The Matrix, I will add here, draws on the work of Jean Baudrillard, a French philosopher who warned of a modern society existing in consumption and entertainment, devoid of meaning making -- the urge toward agency in hibernation, the map toward meaning defunct. In describing this new world he coined the phrase “the desert of the real.” Again, I fall into tangential thought.
I needed to find a way to invite, seduce, capture my students. I tried using myself as a conduit.
I pride myself on the fact that I am an immigrant, a former “at risk” student; that my tattoos all have mythological meaning and thought behind them; that I am a high-school dropout with credentials to my name, a top-tier education, a master’s degree, etc. I felt that these could help me bridge, for my students, the platforms of reality-setting discourse and humanistic thought. I believed, and still do, that real “thinking” is indispensable to being human, to being free, and to the ability to have fun and play with the world.
Again, my students would, at times, meet me in the middle space I wanted to create, though rarely did this space become living for them; instead they laid their heads to the sound of another’s palpitation and breath, and then moved on. Maybe I planted a seed, I like to think. But then, maybe, they were bringing me somewhere as well.
They could not recklessly follow me, or I them. It was an issue of pragmatic bonds. For a moment, my class, or an individual student I was reading with, would delve into the power of words with me, and the ending of Andrei Platonov’s “The River Potudan” would finally break through the events of the page: “Not every grief can be comforted; there is a grief that ends only after the heart has been worn away in long oblivion, or in distraction amidst life’s everyday concerns.” And my students would draw new understanding from the passage, enter it through a word or phrase that could unlock that middle space between their worlds and the world of literature, philosophy, metaphor. “Grief,” “long oblivion,” “life’s everyday concerns” -- all of a sudden my students would give these new meaning, now only slightly guided by the story, letting their lives find a grip on the reins. They would find new connections, and again they would return to the “real” world.
More and more I struggled to make thinking relevant. “Will this help me get a better job?” I was asked.
Thinking about it, I had to confront my own struggles with this question. I know the answers. I know the programmed liberal arts answer, and the “real” answer. I know that the liberal arts answer exposes the “real” as something at best lacking, at worst empty. I also know that the real is real; it happens in real time, removed from the concerns of literature, poetry, and philosophy, which concern themselves with the work of man’s eternity.
“Unlikely,” I would answer. For God’s sake, though I was teaching all these things I cared so deeply about, I also worked nights as a bartender to satisfy the demands of the real. I had to produce something consumable, and all of my learning and thoughts on thinking are not that.
Here I acknowledge that this answer is not entirely true. We can find jobs that call for liberal arts skills, but these are few and far between and rarely afford a comfortable standard of living. We may also posit that liberal arts skills contribute to one’s ability to perform better at, and have a greater understanding of, one’s job, but this argument does not lend itself to substantial evidence, no matter how much I may actually believe it. This was the litmus test of my “thinking,” and it survives only in my embracing the privacies of my world -- in my choosing my private world despite, and above, the “real.”
“Unlikely.” And where does that leave us?
Ultimately, all I have as a conscious being is the ability to tell stories, to choose and create my narrative from the scattered world I am provided. Ultimately, after deconstructing both the “real” and the “lofty” I could only encourage my students to choose their own themes. To the question of “what is water?” I could only answer, “the desert.”
Oddly enough, and as “unlikely” as it may seem, when I answered with honesty, to them as well as to myself, they followed -- we could talk.
Of late there has been no shortage of commentary on the ten years that have passed since the U.S. invasion of Iraq in 2003. Much of it has focused on the justifications for the war provided by members of the Bush administration, the lingering consequences of the invasion for President Obama and other policymakers, and the often harrowing experiences of American soldiers. These are certainly matters that should be discussed at length.
But U.S. public discourse continues to say little about the impact of the war on Iraqis themselves or about their efforts to survive and interpret it.
Much of it also remains tightly focused on the era after 9/11, as if that day’s events rendered the longer arc of Iraqi history—including the part that the U.S. has played in it—more or less irrelevant. To the extent that the country’s past is addressed at all, it is commonly through “sectarianism,” “tribalism,” and other shibboleths, treated as intrinsic and timeless features of Iraqi (and wider Arab and Islamic) life.
Two recent contributions on Jadaliyya (www.jadaliyya.com), a blog and e-zine published by the Arab Studies Institute, offer a counterpoint to these prevailing trends. The first is an interview with historian Dina Rizk Khoury related to the publication of her recent book, Iraq in Wartime: Soldiering, Martyrdom, and Resistance (Cambridge, 2013). As Khoury rightly notes, most of the discussion in the U.S. has failed to recognize the fact that Iraqis spent the last twenty-three years of Baathist rule in a state of nearly continuous military conflict. First there was the Iran-Iraq War, then the Iraqi seizure of Kuwait, then the 1991 Gulf War and the ensuing embargo, and finally the most recent American invasion and occupation.
Under such conditions, Khoury argues, war became a matter of normalcy and bureaucratic governance that insinuated violence into the fabric of everyday life in Iraq. At the same time, it created recurring crises and ruptures that reshaped the structures of state authority and citizenship. And it enabled the Iraqi state to fabricate a myth of soldiering and martyrdom that, in the long run, helped to recalibrate Iraqis’ notions of national belonging along ethnic and sectarian lines. Wittingly or unwittingly, the actions of U.S. policymakers after the Gulf War and the 2003 invasion have reinforced Iraq’s societal divisions and the prevalence of violence as a mode of political action.
The second contribution is a commentary from Orit Bashkin, “The Forgotten Protagonists: The Invasion and the Historian.” Bashkin has written extensively on the politics of pluralism (The Other Iraq, Stanford, 2010) and Jewish displacement (New Babylonians, Stanford, 2012) in twentieth-century Iraq, but here she focuses on the present and future conditions of historical scholarship. She contends that our knowledge of the Iraqi past has grown in significant ways over the past decade. (If we take Melani McAlister’s book Epic Encounters seriously, this outcome should hardly surprise us: American cultural, scholarly, and geopolitical interests in the Middle East have long been tightly intertwined.) Such expansion has been facilitated in no small part by the relocation of the Baath Party archives to the U.S. in 2008. This move has allowed professional historians ready access to a crucial corpus of texts on Saddam Hussein’s regime.
Yet Bashkin also worries that the prospects for historical knowledge production will be decidedly less rosy in the years to come. In particular, many of the other materials on which historians of Iraq rely—Ottoman records, collections of poetic and theological writings, museums, archaeological sites, and so on—have been or are being destroyed in the wake of the U.S. invasion.
As a result, it will be considerably more difficult for scholars not simply to reconstruct the Iraqi past, but also to comprehend how Iraqi citizens relate to it. In particular, we will be less able to grasp the imperial and colonial practices, post-independence state policies, and other forces that have forged the country’s current ethnic and religious cleavages. And we will be less able to understand the multiple and competing nostalgias that now proliferate among Iraqi citizens. Such nostalgias include the ambivalent and paradoxical longing for the days of Saddam Hussein, when (in Bashkin’s words) “at least there was some sense of law and order.”
American public discourse is in desperate need of commentary that positions present-day Iraqis as complex actors who both shape and are shaped by the flow of local, regional, and global histories. As Khoury and Bashkin suggest, the current focus on the past ten years is both literally and metaphorically short-sighted. And yet, for a variety of reasons, lengthening our gaze will be easier said than done.
No amount of energy will take the place of thought. A strenuous life with its eyes shut is a kind of wild insanity.
—Henry Van Dyke
Thomas Levin of Princeton came to Bard on Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years, and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when the media’s obsession with drones is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors by newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization emerges that we are being watched, and this sense pervades the popular consciousness. In these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls a “rhetoric of surveillance.”
In short, we talk and think constantly about the fact that we are, or may be, being watched. This cannot but change the way we behave and act. Levin then poses a further question: what is the emerging drone imaginary?
To answer that question it is helpful to revisit an uncannily prescient imagining of the rise of drones in a text written over half a century ago, Ernst Jünger’s The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by science fiction novelist Bruce Sterling, Jünger’s text centers on a job interview between an unnamed former light-cavalry officer and Giacomo Zapparoni, the secretive, filthy-rich, and powerful proprietor of the Zapparoni Works, which “manufactured robots for every imaginable purpose.” Zapparoni’s secret, however, is that instead of big, hulking robots, he specialized in Lilliputian robots that gave “the impression of intelligent ants.”
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—which exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination with, and subjection to, technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage by drones and forget that if we had to send in armies to do these tasks, the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don’t offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, we will all think twice before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we humans interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestioning love that humans are beginning to prefer spending time with their machines to spending time with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with a speed and objectivity that humans can only dream of. In every area of human life where humans once were thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do better than humans, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper, turn off YouTube, and instead revisit Ernst Jünger’s classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling’s introduction to The Glass Bees here.
“Arendt on Narrative Theory and Practice”
Allen Speight, College Literature, Volume 38, Number 1, Winter 2011, pp. 115-130
Allen Speight, Director of the Institute for Philosophy and Religion at Boston University, argues for Arendt’s place among theorists of narrative such as Alasdair MacIntyre, Charles Taylor, and Paul Ricoeur. While he does indicate contemporary questions in both the Anglo-American and continental traditions throughout the article, he delivers particularly rich insights into Arendt’s engagement with three canonical thinkers. Specifically, he highlights aspects of Arendt’s use of conceptions of narration in developing her ideas of action in The Human Condition. In each aspect, he sees Arendt drawing on a specific philosophical precursor—Aristotle, Hegel, and Augustine in turn—but also diverging from them.
In relation to Aristotle, Speight focuses on how action reveals the “who,” how the actor emerges not from his intention but from his impact on the world. As does Aristotle, Arendt places a strong focus on drama. Aristotle and Arendt both hold that “dramatic actions” allow us to “construe what sort of a character an agent has.” However, rather than focusing on the reception of the audience, Arendt links the spectator to the actor. Indeed, expanding from Speight’s interpretation, we might say Arendt opens another center in the actor himself with her idea of the daimon, who watches over one’s shoulder.
From Hegel, Speight sees Arendt picking up on the tragic nature of action and how this leads to a need for forgiveness. The agent will not get what he wants and indeed will often perish due to effects that he cannot foresee. Speight makes a striking link to Hegel here:
“A stone thrown is the devil’s,” Hegel liked to say: action by its nature is not something construable in given terms but is a kind of “stepping-forth” or opening up of the unexpected and unpredictable (Elements of the Philosophy of Right). The classic, tragic examples of action in its openness—Antigone’s deed, for example, which both Hegel and Arendt were drawn to—present in an intensified way what is an underlying condition within ordinary action, one requiring the need for some means of reconciliation.
With the line “A stone thrown is the devil’s,” Hegel lets the personified evil step in as a kind of holding place that opens the question of how the effect of action will change the actor. Unlike in Hegel, though, for Arendt the ultimate judge is not institutionalized world history but the world as the space in which the who is revealed.
Stepping back chronologically, Speight then turns to Augustine as a source of Arendt’s idea of narrative rebirth. Here he picks up on an existentialist debate through Sartre: given that one’s account of one’s life can change it fundamentally, do we have a responsibility to an authentic narration? To what extent are we free when we tell our own stories? Arendt rejects the possibility that a life can simply be “made” in narrative. However:
for Arendt the distinction between a life that is “lived” and a story that is “made” involves two distinctly non-Sartrean consequences. The first we have already seen in her “daimõn thesis”: that precisely because we live rather than make a life, there is a privileged—but (pace Sartre) a not necessarily false—retrospective position from which we must view the “who,” the daimõn, that is revealed in our lives. Thus, as we have seen, the “who” is visible “ex post facto through action and speech” (Arendt 1958, 186) and this retrospectivity in turn privileges the work of the discerning interpretive historian or storyteller. (121)
I find Speight’s repeated discussion of the daimon particularly relevant, since it offers an original way to talk about the belatedness of knowledge, of how it can come later, or even from the side, without privileging an end position as Hegel does.
In the second half of his article, Speight offers a reading of Men in Dark Times that illustrates how Arendt uses these three aspects of her narrative theory in her own practice of narration. His readings of the sections on Jaspers and Waldemar Gurian explicitly link the question of the daimon, biography, and how a person comes to appearance in the public realm. Readers following the growing subsection of Arendt scholarship engaged with Arendt’s literary dimension will find an original effort here that offers a model for future work connecting Arendt’s theoretical articulations with her writing practice.
"Some of my most cherished books." Submitted by Professor Jorge Giannaeas.
Reading furnishes the mind only with materials of knowledge; it is thinking that makes what we read ours.
—John Locke
The white smoke ushered in a Pope from the New World, but one firmly planted in the old one. Pope Francis I is from Argentina but descended from Italy. According to the Archbishop of Paris, quoted in The New York Times, the Pope was not of the Curia and not part of the Italian system. At the same time, because of his “culture and background, he was Italo-compatible.” Because he straddles the new and the old, there is some glimmer of hope that Francis I will be able to reform the machinery of the ecclesiastical administration from the inside.
Amidst this tension, the new Pope signaled his desire to be seen as an outsider by choosing the name Francis I, aligning himself with St. Francis as protector of the poor and the downtrodden. At a time of near universal distrust in the ecclesiastical order, the Pope and his supporters present the choice of Cardinal Jorge Mario Bergoglio as an affirmation of simplicity and humility.
And in some respects the new Pope does appear to be a Pope for whom the life of Jesus and the life of St. Francis serve as examples of humility and service. At least, that is, if stories like this one, told by Emily Schmall and Larry Rohter, are to be credited:
In 2001 he surprised the staff of Muñiz Hospital in Buenos Aires, asking for a jar of water, which he used to wash the feet of 12 patients hospitalized with complications from the virus that causes AIDS. He then kissed their feet, telling reporters that “society forgets the sick and the poor.” More recently, in September 2012, he scolded priests in Buenos Aires who refused to baptize the children of unwed mothers. “No to hypocrisy,” he said of the priests at the time. “They are the ones who separate the people of God from salvation.”
Some complain that the Pope abjures liberation theology for its connection to Marxism and rejects the use of the Gospel for political and economic transformation. Nevertheless, stories like the one above are important and show an exemplary character in Pope Francis I.
Bigger questions arise about the new Pope’s past connection to what is called the Dirty War in Argentina, the period from 1976 to 1983 in which a brutal dictatorship stole children from their communist parents and gave them to military families while also disappearing political and ideological opponents. As one of my colleagues wrote to me, “Almost alone among major Latin American Churches, the Argentine Church officially allied itself with the military in a campaign to eradicate political dissidents (mostly left-wingers).” Bergoglio was a Catholic Church official during this period, and he has been accused by many in Argentina of either not doing enough to oppose the regime or, more scandalously, actively collaborating with the Dirty War. In 2005, a formal lawsuit claimed that Bergoglio had been complicit in the kidnapping and torture of two Jesuit priests, Orlando Yorio and Francisco Jalics. The priests were working in a poor barrio, advocating against the dictatorship. Bergoglio insisted they stop, and they were dismissed from the Jesuit Order. They disappeared, and months later they were found drugged and partially undressed, according to the reporting of Emily Schmall and Larry Rohter.
Margaret Hebblethwaite, in the Guardian, defends Bergoglio, whom she knows and respects. “It was the kind of complex situation that is capable of multiple interpretations, but it is far more likely Bergoglio was trying to save their lives.” And this is the account Bergoglio gives himself, as Schmall and Rohter report:
In a long interview published by an Argentine newspaper in 2010, he defended his behavior during the dictatorship. He said that he had helped hide people being sought for arrest or disappearance by the military because of their political views, had helped others leave Argentina and had lobbied the country’s military rulers directly for the release and protection of others.
I of course have no idea whether Bergoglio is the victim of baseless calumny, as he claims, or whether he actively or meekly collaborated with a ruthless dictatorship. What is clear, however, is that at the very least, Bergoglio and his colleagues in the Argentine Catholic Church over many years looked the other way and allowed a brutal government to terrorize its population without a word of opposition.
With that history in mind, it is worthwhile to consider Hannah Arendt’s essay “The Christian Pope,” published in the New York Review of Books in 1965. Arendt was reviewing Journal of a Soul, the spiritual diaries of Pope John XXIII, the former Angelo Giuseppe Roncalli. The Jewish thinker had little patience for the “endlessly repetitive devout outpourings and self-exhortation” that go on for “pages and pages” and read like “an elementary textbook on how to be good and avoid evil.” She had little hope that such clichés, no matter how well meaning, would have much impact on the moral state of our time.
What did fascinate Arendt, however, were the anecdotes Pope John XXIII tells and the stories about him that she heard while traveling in Rome. She tells of a “Roman chambermaid” in her hotel who asked her, in all innocence:
“Madam,” she said, “this Pope was a real Christian. How could that be? And how could it happen that a true Christian would sit on St. Peter’s chair? Didn’t he first have to be appointed Bishop, and Archbishop, and Cardinal, until he finally was elected to be Pope? Had nobody been aware of who he was?”
Arendt had a simple answer for the maid. “No.” She writes that Roncalli was largely unknown upon his selection and arrived as an outsider. He was, in the words of her title, a true Christian living in the spirit of Jesus Christ. In a sense, this was so surprising in the midst of the 20th century that no one had imagined it to be possible, and Roncalli was selected without anyone knowing who he was.
Who he was Arendt found not in his book, but in the stories told about him. Whether the stories are authentic, she writes, is not so important, because “even if their authenticity were denied, their very invention would be characteristic enough for the man and for what people thought of him to make them worth telling.” One of these stories shows Roncalli’s common touch, something now being praised widely in Bergoglio.
The story goes that plumbers had arrived for repairs in the Vatican. The Pope heard one of them start swearing in the name of the whole Holy Family. He came out and asked politely: “Must you do this? Can’t you say merde as we do too?”
My favorite story tells of Roncalli’s meeting with Pope Pius XII in 1944 in Paris. Apparently Pius told Roncalli that he was busy and had only seven minutes to spare for their conversation. Roncalli then “took his leave with the words: ‘In that case, the remaining six minutes are superfluous.’”
And then there is the story of Roncalli’s reaction when he was given Rolf Hochhuth’s play, The Deputy, which portrayed Pope Pius XII as silent and indifferent to the persecution and extermination of European Jews. When Roncalli was asked what one could do against Hochhuth’s play, he responded: “Do against it? What can you do against the truth?”
These stories are essential, Arendt writes, because they
show the complete independence which comes from a true detachment from the things of this world, the splendid freedom from prejudice and convention which quite frequently could result in an almost Voltairean wit, an astounding quickness in turning the tables.
Arendt found in Roncalli the kind of independence and “self-thinking” she valued so highly and that unites all the persons she profiled in her book Men in Dark Times. For Roncalli, his “complete freedom from cares and worries was his form of humility; what set him free was that he could say without any reservation, mental or emotional: ‘Thy will be done.’” It was this humility that girded Roncalli’s faith and led to his being content to live from day to day and even hour to hour “like the lilies in the field” with “no concern for the future.” It was, in other words, his faith—and not any theory or philosophy—that “guarded him against ‘in any way conniving with evil in the hope that by so doing [he] may be useful to someone.’” A true Christian in imitation of Jesus, Roncalli was one who “welcomed his painful and premature death as confirmation of his vocation: the ‘sacrifice’ that was needed for the great enterprise he had to leave undone.”
There was one exception, however, to Roncalli’s sureness of his innocence, and that was his action and service during World War II. Here is Arendt’s account:
It is with respect to his work in Turkey, where, during the war, he came into contact with Jewish organizations (and, in one instance, prevented the Turkish government from shipping back to Germany some hundred Jewish children who had escaped from Nazi-occupied Europe) that he later raised one of the very rare serious reproaches against himself—for all “examinations of conscience” notwithstanding, he was not at all given to self-criticism. “Could I not,” he wrote, “should I not, have done more, have made a more decided effort and gone against the inclinations of my nature? Did the search for calm and peace, which I considered to be more in harmony with the Lord’s spirit, not perhaps mask a certain unwillingness to take up the sword?” At this time, however, he had permitted himself but one outburst. Upon the outbreak of the war with Russia, he was approached by the German Ambassador, Franz von Papen, who asked him to use his influence in Rome for outspoken support of Germany by the Pope. “And what shall I say about the millions of Jews your countrymen are murdering in Poland and in Germany?” This was in 1941, when the great massacre had just begun.
Even in his questioning of himself in his actions during the war, Roncalli shows himself to be a man of independence and faith. Yes, he might have done more. But unlike so many who did nothing, he made his dissent known, worked to do good where he could, and yet still fell short. And then struggled with his shortcomings.
These stories of the self-thinking independence of Pope John XXIII offer a revealing and humbling reflection in relation to the new Pope Francis I. Like Roncalli, Bergoglio is praised for his humility and his simple faith. And like Roncalli, Bergoglio served the Church through dark times, when secular authorities were engaging in untold evils and the Church remained silent if not complicit. But Roncalli not only spoke up and acted to protect the persecuted and the hopeless; he also worried that he had not done enough. He was right.
Many are accusing Pope Francis I of war crimes and complicity. I worry about jumping to conclusions when we do not know what happened. But the new Pope carries baggage Roncalli did not—formal accusations of complicity with terror and torture. It is human to respond with denials and anger. It would be befitting, however, if Pope Francis I would set aside such defenses and let the truth come out. That would be an instance of leadership by example that might actually begin to air out the dirty laundry of the Catholic Church.
On this first weekend of Pope Francis I’s new reign, it is well worth revisiting Hannah Arendt’s “The Christian Pope.” It is your weekend read.
Thought looks into the pit of hell and is not afraid. Thought is great and swift and free, the light of the world, and the chief glory of man.
—Bertrand Russell
The office library of Mery del Rocio Castillo Cisneros, a Professor of Philosophy and Humanities at Universidad de La Salle in Bogotá.
We commonly assume that political acts and claims are shaped by some form of reasoning. How then do we respond to political stands in which arguments are piled atop arguments in contradictory ways, and where the force of the various arguments is less important than victory? We see in political discourse a definite willingness to embrace any argument that helps one win, whether or not it makes sense.
One example of our cynical embrace of bad arguments is the recent controversy over the East Side Gallery in Berlin. The Gallery comprises a series of murals that, over the course of the past two decades, an international cast of artists has painted and re-painted on an approximately one-mile stretch of the Berlin Wall. Indeed, the East Side Gallery occupies the longest existing remnant of the Wall, and it has become a significant landmark not only for those visitors who seek to experience something of the city’s Cold War past, but also for those long-time residents who regard it as an embodiment of the city’s contemporary feel and texture.
The tumult of the past few weeks erupted over the plans of a developer, Maik Uwe Hinkel, to construct luxury apartments and an office complex in the former border zone—now a modest green space—that lies between the East Side Gallery and the Spree River. According to the agreements reached by Hinkel and the local government, these new buildings would entail the creation of an access road and pedestrian bridge to allow passage to pedestrians, bicyclists, and emergency vehicles. The road and bridge, in turn, would require the removal of two stretches of the East Side Gallery and their replacement in the adjacent green space. Local planners had first approved the construction and the alteration to the East Side Gallery back in 2005, and since that time Hinkel’s plans had aroused little concerted opposition.
When workers lifted out one concrete slab from the Gallery on Friday, March 2nd, however, hundreds of demonstrators flocked to the site to prevent any further removals. A group of activists hastily organized a larger demonstration that same weekend, one that ultimately drew a raucous crowd of more than six thousand people. In the face of these surprising protests, Berlin Mayor Klaus Wowereit declared that all further work on the site would be postponed until at least March 18th, when a meeting of the major players would decide its fate. Since then, the developer and the relevant local officials have all declared their eagerness to find a solution that preserves the East Side Gallery in its current state. Even the slab removed earlier this month seems destined to return to its former location.
Yet the apparent success of the protest threatens to overshadow the problematic aspects of the demonstrators’ arguments. On the one hand, many of the organizers and protesters regarded their opposition as a small but significant rejoinder to the insistent tide of commercial development in post-Wall Berlin. To adopt the terms of Sharon Zukin’s recent book Naked City, they saw the East Side Gallery as an embodiment of the city’s distinctive authenticity and rootedness, which they argued should be protected from the homogenizing onslaught of upscale growth and gentrification. To wit, one of the coalitions that spearheaded the protest calls itself “Sink the Media Spree” (Mediaspree Versenken), a name that invokes developers’ recent efforts to transform the area along the river into a headquarters for high-tech communications and media. Its webpage declares that this portion of Berlin should preserve “the neighborhood” as it currently exists and not fall victim to “profit mania” (Kiez statt Profitwahn).
But the East Side Gallery cannot be cast so readily as an incarnation of local authenticity, especially the kind that stands opposed to commerce. First of all, many government actors and city residents were far more eager to see the Wall dismantled in the months and years after November 1989 than to see it preserved, and they condoned if not actively contributed to its wholesale removal. As a result, the survival of the East Side Gallery represents the exception, not the rule, in the city’s engagement with the Wall as a material structure. Second, artists from around the world initially established the East Side Gallery as a celebration of artistic and political liberty, but their murals received support from the local and national governments because they helped to draw tourists to Berlin and added to the city’s cachet as a cultural destination. In the light of this state patronage, I find it rather curious to hear activists pitching the East Side Gallery against the forces of capital and development.
On the other hand, many demonstrators contended that the alteration of the East Side Gallery would amount to an intolerable attack on the city’s historical inheritance. One variation of this position is that the removal of the two sections constitutes a dilution if not erasure of Germany’s traumatic past. According to this argument, the East Side Gallery should be left intact so that residents and visitors can confront the traces of the country’s division. Another, more strident variation insists that the construction plans display a callous disregard for those who suffered under the East German regime and, more specifically, lost their lives while attempting to escape it. In the words of one activist in Der Tagesspiegel: “the most important point is not whether the Wall will be opened. We are against the combination of removing the Wall and building hotels and apartments in death strips.”
Again, the East Side Gallery’s connection with Germany’s fraught past is not nearly as straightforward as the activists and demonstrators have suggested. As Brian Ladd details in his book The Ghosts of Berlin, the murals of the East Side Gallery were not painted until the early 1990s, after the Wall had fallen and East Germany had ceased to exist. In fact, this portion of the Wall could not have been painted before 1989, because it stood in East Berlin, and anyone who attempted to leave a mark on it, or even lingered near it, would have been apprehended by East German police officers or border soldiers. Of course, amateur and professional artists did draw and paint some striking imagery on the Berlin Wall during the Cold War, but they created it on the Wall’s “outer” surface while standing in West Berlin, where they had much less to fear from East German border personnel. The muralists who launched and maintained the East Side Gallery certainly meant to evoke and further this tradition of “Wall art,” but in the process they abstracted it from a prior historical era and relocated it in another part of the city.
I note these objections not because I support the proposed construction or the alteration of the East Side Gallery. In particular, I am not at all convinced that the partial removal of the Wall is really necessary, whether or not Hinkel and the city go ahead with the area’s development. But I am troubled by the protesters’ reluctance to take the ironies and complexities of the current circumstances more fully into account. They are too eager to cast the developer and local officials as the villains in this story, particularly when the city and the federal government have in fact created a substantial memorial landscape related to the Wall. And they are too quick to position themselves on the moral high ground. Given the Wall’s disappearance from virtually every other part of the city, their demands for preserving the East Side Gallery seem more than a little belated.
I am a neural matrix of roughly 80 billion cells, each charged with the potential for action, firing out in multiple patterns of synchronicity towards a seemingly inexhaustible order of calculations -- I am the system that emerges, I am its apex, I am sentience -- therefore I am.
This, I imagine, is what Descartes would have to say today of what remains of the self under the scope of examination, though I will admit this sounds less poetic than his original statement.
Galileo’s telescope, the atom, the space age, the tech age, the Human Genome Project, and now the BAM project can all be seen as a succession of strivings towards a new perspective through which we could glean a greater understanding and synthesis of Man. The BAM project is the newest manifestation of this urge. It is an exciting endeavor, and yet, as with any new attempt of science to probe ourselves, it is a frightening one too.
Recently I learned about the “Brain Activity Map” (BAM) initiative sponsored by the Obama administration. I have a baseline knowledge of neuroscience and have long been fascinated by its hoped-for implications and speculative repercussions. I wanted more detail. I found what I understand to be the source paper for this project, “The Brain Activity Map Project and the Challenge of Functional Connectomics,” by Paul Alivisatos, et al. This is hot stuff, and I am not being glib. Obama thinks so too; that’s why $3 billion in government funding is slated to go into the project. Microsoft and Google are throwing in real money too. So what is really going on?
BAM follows the model of the Human Genome Project. In the proposal paper, as well as in Obama’s State of the Union address, reference is made to the fact that each $1 put into the Human Genome Project brought back $140 to the economy. I will leave alone the implications of this being economy driven. Should science be economically driven? This question, in our society, is mostly moot. Everything must now at least appear to be economy driven. Knowledge, transcendence, and self-discovery can only resonate in conversation with the economy.
But what are the human as opposed to the economic implications of the Brain Activity Map? BAM is a 15-year plan to create a non-topographical map of the brain, the repercussions of which reach into the medical, commercial, educational, and technological fields. Until now our neuro-understanding of the brain has been limited to compartmentalized thinking, or to the study of individual ingredients. The brain simply cannot be understood this way, and thus Alivisatos’ paper argues that “no general theory of brain function is universally accepted.” BAM seeks to create an “emergent systems” model, something akin to the rules of complex systems. This stems from the knowledge that brain function arises from the interplay of the electrical impulse grid (the action potential of all the neurons). The best way I can state this is that brain activity is a symphony rather than a carpenter’s graph. It is the interplay of notes, tones, pacing, and sound rather than a combination of these individual elements. The point is not to isolate and combine but to mimic the complex yet structured electrical impulses of the brain in a way that allows higher order brain function to emerge in an artificially intelligent being. To quote Alivisatos: “An emergent level of analysis appears to be critical for understanding the most compelling questions of how brain functions create sentience.” The most exciting effort, in other words, is to create a sentient, thinking, and autonomous entity.
The project calls for an investment into new technologies that could make recording the action potentials and coordination of their impulses more feasible. This can be accomplished by investing in nano-technology: nanotubes and wires, quantum dots, nano-particles, neural probes, shanks containing optical waveguides, and tiny microchips that can pass into the brain.
The brain mapping project would likely entail human testing, which the authors “do not exclude,” though it would not take place until the last phase of the project.
Microsoft and Google have signed on as partners and possibly fiscal contributors, because clearly the repercussions of such a project could be groundbreaking for the tech industry: computer chips that replicate the emergent systems model; search engines that could graph society by treating each user as a neuron and his or her googling activity as action potentials. The source paper acknowledges some possible paranoia at such an endeavor and thus states that it is essential that this project be a public one, allowing for transparency in all findings. It also encourages a public relations campaign to reassure any party that may be susceptible to conspiracy theory making. That’s me!
I hold both a fear of repercussions and a sense of excitement about this project. I tend to think that conspiracy theories are healthy. All great science fiction is fed by the conspiracy model, but it also tends to foretell future technological and social revelations. And there exactly is my point, or fear, or observation -- the irrelevance of social relevance. We don’t really care unless it scares us.
I found myself facing this in writing this post. I am excited to tell people about this project, but as a writer I have a constant mechanism at play in my head as I write, to present a story or topic in a light that will make people interested. As much as this mechanism comes from within me it is also a product of cultural observation, a consistent tracking of what stimulates popular dialogue. What stimulates popular dialogue is conspiracy, not excitement or optimism. This itself is worthy of examination.
Ultimately the fear is of what we are losing in the race to understand ourselves through science and technology, of what we leave behind. I do not mean to gesture towards a conservative approach to science. Rather, I am fascinated by the anxiety that accompanies the prospect, and I propose that our fear is that of isolated parties traveling at quite different speeds. We can investigate the self intrusively and/or reflectively. Reflectively, we evaluate and discuss our culture, our ethics, and the relationships of groups and individuals to one another; we pause and contemplate the grace of being. Intrusively, we probe into the elemental makeup of ourselves and the world we inhabit. As one practice outpaces the other, something feels askew, as if a key organ in the symphony of being human were falling mute in the distance.
“The wonder that man endures or which befalls him cannot be related in words because it is too general for words….That this speechless wonder is the beginning of philosophy became axiomatic for both Plato and Aristotle.”
-Hannah Arendt, "Philosophy and Politics"
Aristotle tells us that philosophy begins in thaumázein (θαυμάζειν), “to wonder, marvel, be astonished.” In the New Testament, the word appears only twice. In the parallel occurrences (Matthew 27:14 and Mark 15:5), Pilate marvels at the fact that Jesus says nothing. What is significant is that thaumázein is associated there with an experience for which there were no words. The word means a kind of initial wordless astonishment at what is, at the fact that it is. For Aristotle, thaumázein is the beginning of philosophy as wonder. For the Greeks, therefore, it is not the beginning of political philosophy.
Key here is the fact of speechlessness. This wonder “cannot be related in words because it is too general for words.” Arendt suggests that Plato encountered it in those moments in which Socrates, “as though seized by a rapture, [fell] into complete motionlessness, just staring without seeing or hearing anything.” It follows that “ultimate truth is beyond words.” Nevertheless, humans want to talk about that which cannot be spoken. “As soon as the speechless state of wonder translates itself into words, it … will formulate in unending variations what we call the ultimate questions.” These questions – What is being? Who is the human being? What is the meaning of life? What is death? – “have in common that they cannot be answered scientifically.” Thus Socrates’ “I know that I do not know” is actually an expression that opens the door to the political, public realm, in the recognition that nothing that can be said there can ever have the quality of being final.
According to Arendt, Socrates has three distinct aspects. First, he arouses citizens from their slumber – this is the gadfly who gets others to think, to think about those topics for which there is no final answer. Secondly, as “midwife,” he decides – he makes evident – whether an opinion is fit to live or is merely an unimpregnated “wind-egg” (cf. Theaetetus 152a; 157d; 161a): Greek midwives not only assisted in the delivery but determined if the new-born was healthy enough to live. Socrates concludes his discussion in the Theaetetus (210b) by saying that all they have done is produce a mere wind-egg and that he must leave, as he has to get to the courthouse for his trial. Lastly, as a stinging ray, Socrates paralyzes in two ways. He makes you stop and think; he destroys the certainty one has of received opinions. Arendt is clear that this can be dangerous. She goes on to say that “thinking is … dangerous to all creeds and, by itself, does not bring forth any new creed,” but she is equally clear that “non-thinking … has its dangers [which are] the possession of rules under which to subsume particulars.” To think is dangerous: but to think is to desire wisdom, what is not there. It is thus a longing; it is eros and, as with all things erotic, “to bring this relationship into the open, make it appear, men speak about it in the same way that the lover wants to speak of his beloved.” Where does this leave one? For the most part, in normal times, thinking is not of political use. It is, however, of use in times when the “center does not hold,” in times of crisis.
At these moments, thinking ceases to be a marginal affair in political matters. When everybody is swept away unthinkingly by whatever everyone else does and believes in, those who think are drawn out of hiding because their refusal to join is conscious and thereby becomes a kind of action. The purging element … is political by implication. For this destruction has a liberating effect on another human faculty, the faculty of judgment, … the faculty to judge particulars without subsuming them under those general rules which can be taught and learned until they grow into habits.
Suppose we read Arendt as saying that political philosophy must now turn its thaumázein – its wonder – not toward the fact that what is, is, but toward human reality, the world of human activity. This would involve a change in philosophy – for which, she says, philosophers are not particularly well equipped. She thinks such a turn would rest on and derive from several elements – she mentions in particular Jaspers’ reformulation of truth as transcending the realm that can be instrumentally controlled, and thus as related to freedom; Heidegger’s analysis of ordinary everyday life; and existentialism’s insistence on action. It will be an inquiry into the “political significance of thought; that is, into the meaningfulness and the conditions of thinking for a being that never exists in the singular and whose essential plurality is far from explored when an I-Thou relationship is added to the traditional understanding of human nature.”
What is problematic with purely philosophical thaumázein? The Thracian maid who appears in the title of Jacques Taminiaux’s book, and who stands for Arendt in his analysis, derives from an account in the Theaetetus. Upon encountering Thales who, wholly absorbed in his wondering, had fallen into a well, the maid notes that the philosopher had “failed to see what was in front of him.” Mary-Jane Robinson notes four elements to Arendt’s suspicion of excessive wonder, a suspicion one assumes was directed at Heidegger. First, such wonder allows avoidance of the messiness of the everyday world; secondly, such “uncritical openness” leads philosophers to be “swept away by dictators.” Thirdly, such wonder alienates the philosopher (as with Heidegger post-1945) from the world around him; and lastly, such openness to the mystery of the world “disables decision making.”
If politics is the realm of how humans appear to each other when they act and speak, whence does it come? The only possible answer is that politics is an emergence from a realm which is neither that of action nor that of speech. The political emerges from nothingness. Perhaps this is the realm to which poetry can call us – and some of Arendt’s most moving essays are on poetry and literature – but such a realm is not political. In this sense there is a limit to political science, as there is to all science. For Arendt, there are no underlying causes out of which that which is political must emerge. This is why political action is always, for her, a beginning and a marvel for which we have to try to find words.