"The end of the old is not necessarily the beginning of the new."
Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken-and-egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified, or ought to be signified, by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without its first breaking out of the egg in which it took form.
So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative concerning linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story itself incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned
A3. by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. the Israelites are freed from bondage and escape from Egypt
C2. the revelation at Sinai fully establishes their covenant with God
C3. after many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note that concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at the time but also the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art and the essay and novel in literature yielded to Cubism and subsequent forms of modern art, and stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, what we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social, what Bateson referred to as metapatterns, the subject of further elaboration by biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored by John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during that period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah, and chief among its laws the Ten Commandments, which include the fourth commandment, the one presented in greatest detail: to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
It is a new year, not only for Jews celebrating Rosh Hashanah but also for hundreds of thousands of college and university students around the world. Over at Harvard, they invited Nannerl O. Keohane—past President of Wellesley College—to give the new students some advice on how to reflect upon and imagine the years of education that lay before them. Above all, Keohane urges students to take time to think about what they want from their education: “You now have this incredible opportunity to shape who you are as a person, what you are like, and what you seek for the future. You have both the time and the materials to do this. You may think you’ve never been busier in your life, and that’s probably true; but most of you have “time” in the sense of no other duties that require your attention and energy. Shaping your character is what you are supposed to do with your education; it’s not competing with something else. You won’t have many other periods in your life that will be this way until you retire when, if you are fortunate, you’ll have another chance; but then you will be more set in your ways, and may find it harder to change.”
Robin Kelly, writing on the 1963 March on Washington and the March's recent fiftieth anniversary celebrations, zooms out a little bit on the original event. It has, he says, taken on the characteristics of a big, feel-good event focused on civil rights and directly responsible for the passage of the Civil Rights Act, when, in fact, all those people also came to Washington in support of economic equality, and the gritty work of passing laws was accomplished later, with additional momentum and constraints. It's important to remember, he says, that "big glitzy marches do not make a movement; the organizations and activists who came to Washington, D. C., will continue to do their work, fight their fights, and make connections between disparate struggles, no matter what happens in the limelight."
Robinson Meyer investigates what, exactly, poet Seamus Heaney's last words were. Just before he passed away last week at 74, Heaney, an Irish Nobel Laureate, texted the Latin phrase noli timere, don't be afraid, to his wife. Heaney's son Michael mentioned this in his eulogy for his father, and it was written down and reported as, variously, the correct phrase or the incorrect nolle timore. For Meyer, this mis-recording of the poet's last words is emblematic of the transcriptions and translations Heaney did in his own work, and of the further translations and transcriptions we will now engage in because he is gone. "We die," Meyer writes, "and the language gets away from us, in little ways, like a dropped vowel sound, a change in prepositions, a mistaken transcription. Errors in transfer make a literature."
Jay Rosen, who will be speaking at the Hannah Arendt Center’s NYC Lecture Series on Sunday, Oct. 27th at 5pm, has recently suggested that journalism solves the problem of awayness - “Journalism enters the picture when human settlement, daily economy, and political organization grow beyond the scale of the self-informing populace.” C.W. Anderson adds that "awayness" should include alienation from a moment in time as well as from a particular place: "Think about how we get our news today: We dive in and out of Twitter, with its short bursts of immediate information. We click over to a rapidly updating New York Times Lede blog post, with its rolling updates and on-the-ground reports, complete with YouTube videos and embedded tweets. Eventually, that blog post becomes a full-fledged article, usually written by someone else. And finally, at another end of the spectrum, we peruse infographics that can sum up decades of data into a single image. All of these are journalism, in some fashion. But the kind of journalisms they are - what they are for - is arguably very different. They each deal with the problem of context in different ways."
Adam Gopnik makes a case for the study of English, and of the humanities more broadly. His defense is striking because it rejects a recent turn towards their supposed use value, instead emphasizing such study for its own sake: "No sane person proposes or has ever proposed an entirely utilitarian, production-oriented view of human purpose. We cannot merely produce goods and services as efficiently as we can, sell them to each other as cheaply as possible, and die. Some idea of symbolic purpose, of pleasure seeking rather than rent seeking, of Doing Something Else, is essential to human existence. That’s why we pass out tax breaks to churches, zoning remissions to parks, subsidize new ballparks and point to the density of theatres and galleries as signs of urban life, to be encouraged if at all possible. When a man makes a few billion dollars, he still starts looking around for a museum to build a gallery for or a newspaper to buy. No civilization we think worth studying, or whose relics we think worth visiting, existed without what amounts to an English department—texts that mattered, people who argued about them as if they mattered, and a sense of shame among the wealthy if they couldn’t talk about them, at least a little, too. It’s what we call civilization."
The sixth annual fall conference, "Failing Fast: The Crisis of the Educated Citizen"
Olin Hall, Bard College
In the two years since its inception, the Arab Spring remains an extraordinarily difficult phenomenon to define and assess. Its local, national, and regional consequences have been varied and contradictory, and many of them are not obviously or immediately heartening. These observations certainly apply to Syria: although growing numbers of the country’s military personnel are abandoning their posts, the Assad regime’s war with the Sunni insurgency still threatens to draw Turkey, Lebanon, Iran, and Jordan into an intractable sectarian conflict. But they are, if anything, even more relevant to Egypt. There the overthrow of the Mubarak regime occurred with less brutality, all things considered, than we might have reasonably feared. But the nature of the country’s social and political reconstruction nevertheless remains extremely uncertain, given the delicate balance of forces between the Muslim Brotherhood, the Salafist Nour Party, and the country’s diverse liberal and activist camps.
The effects of Egypt’s revolution have been particularly ambiguous for the country’s women. To be sure, women played a noteworthy role in the Tahrir Square protests of January and February 2011, and many local and foreign observers commented on the lack of intimidation and harassment they faced in the days leading to Mubarak’s fall. But as Wendell Steavenson details in the most recent New Yorker, the protests were by no means free of gendered violence, and the revolution has yet to create a more comfortable or equitable place for women in Egyptian public life.
Let me touch on one example from Steavenson’s article. Hend Badawi, a twenty-three-year-old graduate student, was protesting against the interim military government in Tahrir Square in December 2011 when she was confronted by a group of soldiers. In the course of her arrest, the soldiers tore off Badawi’s headscarf, dragged her several hundred meters by the hair, cursed at her, struck her, and groped her breasts and behind. One of the soldiers also apparently told her that “if my sister went to Tahrir, I would shoot her.” After being taken to a parliament building, Badawi was beaten again and interrogated for several hours before landing in a military hospital, where she was treated for severe lacerations on her feet, a broken wrist, and multiple broken fingers.
The next day, Field Marshal Mohamed Tantawi, at that time Egypt’s effective ruler, paid a visit to the hospital for a photo op with a state-TV camera crew. Despite her injuries, Badawi confronted him: “We don’t want your visit!” she reportedly screamed. “We are not the ones who are thugs! You’ve beaten us and ruined us! Shame on you! Get out!” News of the tongue-lashing quickly made the rounds on Twitter and Facebook, and when Badawi was moved to a civilian hospital, she used a video camera smuggled in by friends to issue a lengthier statement about her ordeal. The resulting video went viral, and independent TV stations used it to challenge government claims that the Army had not used violence against civilians.
One might expect that Badawi would be honored for her courage and conviction, and I can only imagine that she is, at least among pro-democracy activists. But her family, which happened to sympathize with the Mubarak regime, was appalled. Badawi had gone to Tahrir Square without informing them, and they blamed her not only for the violent treatment she had received, but also for the damage they believed she had done to the family’s reputation. Badawi’s relatives locked her in her room; her elderly aunt yelled at her frequently; and her brother Ahmed hit her. Later, when Badawi’s family did not allow her to return to Tahrir for the first anniversary of the revolution, she basically reenacted the protests of the previous year—only this time on a more intimate scale. As she related to Steavenson, she launched a hunger strike to protest her treatment at her family’s hands and made placards that read, “Hend wants to topple the siege! Down with Ahmed!”
Badawi’s experience is particular and inevitably her own, but it nevertheless exemplifies the conundrums that many women face in contemporary Egypt. As the daughter of a pious rural family, she has benefitted from the increasing levels of affluence, education, and occupational opportunity that at least some young people, both women and men, have enjoyed over the past several decades. But she has also come face to face with the possibilities and the limits created by Egypt’s Islamic Revival, which has established new expectations for women’s comportment on the street and in other public institutions. (If many women in Cairo went bareheaded and wore skirts and blouses at the beginning of Mubarak’s reign, almost all now wear headscarves, and the niqab is not an uncommon sight.) Finally, Badawi’s life has been shaped not simply by her family’s notions of appropriate womanly behavior, but by a wider climate of pervasive sexual harassment. According to one 2008 survey, sixty percent of Egyptian men admit to having harassed a woman, and the country’s police and security forces either openly condone such treatment or engage in even more serious assaults themselves.
Badawi chafes at the “customs and traditions”—a common Arabic phrase, which she employs sardonically—that mold and circumscribe her life. And, like at least some other women, she regards Egypt’s recent upheaval as a potential opening, an “opportunity to mix my inner revolution with the revolution of my country.” But it is significant, I think, that Badawi does not seek a “Western” form of women’s equality and emancipation. Although she appreciates “the space and freedom” that appear to be available to women on American TV shows, she nevertheless intends to pursue them “in the context of my religion.” At the same time, many of the reforms that she and other women’s advocates might champion are now thoroughly tainted by their association with the autocratic Mubarak regime. For example, many Egyptians dismiss recent amendments to the country’s “personal-status laws”—which allowed women to initiate no-fault divorces and enhanced their child-custody rights—as cosmetic changes that only aimed to improve the government’s international image. Many other citizens, meanwhile, view Mubarak’s 2010 effort to mandate a quota for female members of parliament as a patent violation of democratic procedure.
These developments offer no clear path forward for Badawi and other Egyptian women, whether or not they regard themselves as activists. But they also pose a distinct challenge to outside observers—like me—who sympathize with their efforts to transform Egyptian society. Ten years ago, the Columbia anthropologist Lila Abu-Lughod drew on the impending American invasion of Afghanistan to question the notion that the U.S. should “save” Muslim women from oppression. Instead of adopting a position of patronizing superiority, Abu-Lughod urged concerned Americans to ally themselves with local activists in the Middle East and to work with them on the issues that they deemed most important. In the context of the Arab Spring, however, even this advice appears to have its shortcomings. I worry that American (or wider “Western”) support for women like Hend Badawi, however well-meaning, will unintentionally undermine the very reforms that the activists themselves favor. I also suspect that a considerable number of Egyptians will resent even the most “enlightened” coalitions as yet another instance of anti-democratic meddling if not neo-colonial imposition. After all, the U.S. did much to keep Mubarak in power for thirty years. Why now should Americans, whether they are affiliated with the U.S. government or not, attempt to intervene even indirectly in Egypt’s transformation?
I certainly believe, from a political and scholarly perspective, that Americans should care a great deal about the consequences of the revolutions in Egypt and other North African and Middle Eastern states. In the end, however, I wonder if the most advisable practical course may be to adopt an attitude of principled non-interference in those cases where mass violence is not imminent. In short, we should allow Egyptians (and other Middle Easterners) room to work out the consequences and implications of the Arab Spring on their own, even if we are not entirely comfortable with the results.
Note: Lila Abu-Lughod’s argument, which I reference near the end of this post, appears in “Do Muslim Women Really Need Saving? Anthropological Reflections on Cultural Relativism and its Others.” American Anthropologist 104.3 (2002): 783-790.
“Hence it is not in the least superstitious, it is even a counsel of realism, to look for the unforeseeable and unpredictable, to be prepared for and to expect “miracles” in the political realm. And the more heavily the scales are weighted in favor of disaster, the more miraculous will the deed done in freedom appear.”
—Hannah Arendt, What is Freedom?
This week at Bard College, in preparation for the Hannah Arendt Center Conference "Does the President Matter?", we put up two writing blocks around campus: multi-paneled chalkboards that invite students to respond to the question, "Does the President Matter?" The blocks generated quite a few interesting comments. Many mentioned the Supreme Court. Quite a few invoked the previous president, war, and torture. And, since we are at Bard, others responded: it depends what you mean by matters.
This last comment struck me as prescient. It does depend on what you mean by matters.
If what we mean is, say, an increasing and unprecedented power by a democratic leader not seen since the time of enlightened monarchy, the president does matter. We live in an age of an imperial presidency. The President can, and does, send our troops into battle without the approval of Congress. The President can, and does, harness the power of TV, the Internet, and Twitter to bypass his critics and reach the masses more directly than ever before. The President can, and does, appoint Supreme Court Justices with barely a whimper from the Senate; and the president’s appointments can, and do, swing the balance on a prisoner’s right to habeas corpus, a woman’s right to choose, or a couple’s right to marry.
And yet, what if by matter, we mean something else? What if we mean, having the power to change who we are in meaningful ways? What if by matter we mean: to confront honestly the enormous challenges of the present? What if by matter we mean: to make unpredictable and visionary choices, to invite and inspire a better future?
Consider the really big questions: the thoughtless consumerism that degrades our environment and our souls; the millions of people who have no jobs and increasingly little prospect of productive employment; the threat of devastating terrorism; and the astronomical national debt, $16 trillion and counting for the US, or roughly $140,000 for each taxpayer. Add to that the shortfall in public pension obligations, estimated at anywhere from $1 to $5 trillion, not to mention the $1 trillion of inextinguishable student debt that is creating a lost generation of young people whose lives are stifled by unwise decisions made before they were allowed to buy a beer.
This election should be about a frank acknowledgement of the unsustainability of our economic, social, and environmental practices and expectations. We should be talking together about how to remake our future in ways that are both just and exciting. This election should be scary and exhilarating. But so far it is small-minded and ugly.
Around the world, we witness distrust and disdain for government. In Greece there is a clear choice between austerity and devaluation, but Greek leaders have saddled their people with half-hearted austerity that causes pain without prospect of relief. In Italy, the paralysis of political leaders has led to resignation and the appointment of an interim technocratic government. In Germany, the most powerful European leader delays and denies, trusting that others will blink every time they are brought to the mouth of the abyss.
No wonder that the Tea Party and Occupy Wall Street in the US, and the Pirate Parties in Europe share a common sense that liberal democratic government is broken. A substantial—and highly educated—portion of the electorate has concluded that our government is so inept and so compromised that it needs to be abandoned or radically constrained. No president, it seems, is up to the challenge of fixing our broken political system.
Every president comes to Washington promising reform, and every one of them fails. According to Jonathan Rauch, a leading journalist for The Atlantic and the National Journal, this failure is inevitable. He has this to say in his book Government's End:
If the business of America is business, the business of government programs and their clients is to stay in business. And after a while, as the programs and the clients and their political protectors adapt to nourish and protect each other, government and its universe of groups reach a turning point—or, perhaps more accurately, a point from which there is no turning back. That point has arrived. Government has become what it is and will remain: a large, incoherent, often incomprehensible mass that is solicitous of its clients but impervious to any broad, coherent program of reform. And this evolution cannot be reversed.
On the really big questions of transforming politics, the President is, Rauch argues, simply powerless. President Obama apparently agrees. Just last week he said, in Florida: "The most important lesson I've learned is that you can't change Washington from the inside. You can only change it from the outside."
A similar sentiment is offered by Lawrence Lessig, a founder of Creative Commons. In his recent book Republic, Lost, Lessig writes:
The great threat today is in plain sight. It is the economy of influence now transparent to all, which has normalized a process that draws our democracy away from the will of the people. A process that distorts our democracy from ends sought by both the Left and the Right: For the single most salient feature of the government that we have evolved is not that it discriminates in favor of one side and against the other. The single most salient feature is that it discriminates against all sides to favor itself. We have created an engine of influence that seeks not some particular strand of political or economic ideology, whether Marx or Hayek. We have created instead an engine of influence that seeks simply to make those most connected rich.
The system of influence and corruption through PACs, SuperPACs, and lobbyists is so entrenched, Lessig writes, that no reform seems plausible. All that is left is the Hail Mary idea of a new constitutional convention—an idea Lessig promotes widely, as with his Conference on the Constitutional Convention last year at Harvard.
For Rauch on the Right and Lessig on the Left, government is so concerned with its parochial interests and its need to stay in business that we have forfeited control over it. We have, in other words, lost the freedom to govern ourselves.
The question "Does the President Matter?" is asked, in the context of the Arendt Center conference, from out of Hannah Arendt's maxim that freedom is the fundamental raison d'être of politics. In "What is Freedom?", Arendt writes:
“Freedom is actually the reason that men live together in political organization at all. Without it, political life as such would be meaningless. The raison d’être of politics is freedom.”
So what is freedom? To be free, Arendt says, is to act. Arendt writes: "Men are free as long as they act, neither before nor after; for to be free and to act are the same.”
What is action? Action is something done spontaneously. It brings something new into the world. Man is the being capable of starting something new. Political action, and action in general, must happen in public. Like the performing arts—dance, theatre, and music—politics and political action require an audience. Political actors act in front of other people. They need spectators, so that the spectators can be drawn to the action; and when the spectators find the doings of politicians right, or true, or beautiful, they gather around and form themselves into a polity. The political act, the free act, must be surprising if it is to draw people to itself. Only an act that is surprising and bold is a political act, because only such an act will strike others and make them pay attention.
The very word politics derives from the Greek polis, which itself is rooted in the Greek pelein, a verb used to describe the circular motion of smoke rings rising up from out of a pipe. The point is that politics is the gathering of a plurality around a common center. The plurality does not become a singularity in circling around a polestar, but it does acknowledge something common, something that unites the members of a polity in spite of their uniqueness and difference.
When President Washington stepped down after his second term; when President Lincoln emancipated the slaves; when FDR created the New Deal; when President Eisenhower called the Arkansas National Guard into Federal Service in order to integrate schools in Little Rock; these presidents acted in ways that helped refine, redefine, and re-imagine what it means to be an American.
Arendt makes one further point about action and freedom that is important as they relate to the question: Does the President Matter? Courage, she writes, is "the political virtue par excellence." To act in public is to leave the security of one's home and enter the world of the public. Such action is dangerous, for the political actor might be jailed for his crime or even killed. Arendt's favorite example of political courage is Socrates, who was killed for his courageous engagement with his fellow Athenians. We must always recall that Socrates was sentenced to death for violating Athenian law.
Political action also requires courage because the actor can suffer a fate even worse than death. He may be ignored. At least to be killed for one's ideas means that one is recognized as capable of action, of saying and doing something that matters. To be ignored, however, denies the actor the basic human capacity for action and freedom.
One fascinating corollary of Arendt's understanding of the identity of action and freedom is that action, any action—any original deed, any political act that is new and shows leadership—is, of necessity, something that was not done before. It is, therefore, always against the law.
This is an insight familiar to readers of Fyodor Dostoevsky. In Crime and Punishment Raskolnikov says:
Let's say, the lawgivers and founders of mankind, starting from the most ancient and going on to the Lycurguses, the Solons, the Muhammads, the Napoleons, and so forth, that all of them to a man were criminals, from the fact alone that in giving a new law they thereby violated the old one.
All leaders are, in important ways, related to criminals. This is an insight that Arendt and Nietzsche share.
Shortly after we began to plan this conference, I heard an interview with John Ashcroft speaking on the Freakonomics Radio Show. He said:
"Leadership in a moral and cultural sense may be even more important than what a person does in a governmental sense. A leader calls people to their highest and best. ... No one ever achieves greatness merely by obeying the law. People who do above what the law requires become really valuable to a culture. And a President can set a tone that inspires people to do that."
My first reaction was: This is a surprising thing for the Attorney General of the United States to say. My second reaction was: I want him to speak at the conference. Sadly, Mr. Ashcroft could not be with us here today. But this does not change the fact that, in an important way, Ashcroft is right. Great leaders will rise above the law in times of crisis. They will call us to our highest and best.
What Ashcroft doesn't quite say, and yet Arendt and Dostoevsky make clear, is that there is a thin and yet all-so-important line separating great leaders from criminals. Both act in ways unexpected and novel. In a sense, both break the law.
But only the leader's act shows itself to be right and thus re-makes the law. Hitler may have acted and shown a capacity for freedom; his action, however, was rejected. He was a criminal, not a legislator. Martin Luther King Jr. and Gandhi also broke laws in acts of civil disobedience. Great leaders show in their lawbreaking that the earlier law had been wrong; they forge a new moral and also written law through the force and power of moral example.
In what is perhaps the latest example in the United States of a presidential act of lawbreaking, President George W. Bush clearly broke both U.S. and international law in his prosecution of the war on terror. At least at this time, it seems painfully clear that his decision to systematize torture stands closer to a criminal act than to an act of great legislation.
In many ways, presidential politics in the 21st century takes place in the shadow of George W. Bush's overreach. One result is that we have reacted against great and daring leadership. In line with the spirit of equality that drives our age, we ruthlessly expose the foibles, missteps, scandals, and failures of anyone who rises to prominence. Bold leaders are risk takers. They fail and embarrass themselves. They have unruly skeletons in their closets. They will hesitate to endure, and will rarely prevail in, the public inquisition that the presidential selection process has become.
The candidates who are inoffensive enough to prevail are branded by their consultants as pragmatists. Our current pragmatists are products of Harvard Business School and Harvard Law School. Mr. Romney loves data. President Obama worships experts. They are both nothing if not faithful to the doctrine of technocratic optimism: that with the right people in charge we can do anything. The only problem is that they refuse to tell us what it is they want to do. They have forgotten that politics is a matter of thinking, not a pragmatic exercise in technical efficiency.
Look at the Mall in Washington: the Washington Monument honors our first president, and nearby stand the Jefferson Memorial, the Lincoln Memorial, and the Memorial to Franklin Delano Roosevelt. There is no monument to any president since FDR. And yet, just two years ago we dedicated the Martin Luther King Memorial. It does not seem like an accident that the leaders of the Civil Rights Movement were not politicians. Our leaders today do not gravitate to the presidency, and the presidency does not attract leaders; the bold are not the people running for office.
Yet, people crave what used to be called a statesman. To ask: "Does the President Matter?" is to ask: might a president, might a political leader, be able to transform our nation, to restore the dignity and meaning of politics? It is to ask, in other words, for a miracle.
At the end of her essay, "What is Freedom?", Hannah Arendt said this about the importance of miracles in politics.
Hence it is not in the least superstitious, it is even a counsel of realism, to look for the unforeseeable and unpredictable, to be prepared for and to expect “miracles” in the political realm. And the more heavily the scales are weighted in favor of disaster, the more miraculous will the deed done in freedom appear.
It is men who perform miracles—men who because they have received the twofold gift of freedom and action can establish a reality of their own.
I don't know if the president matters.
But I know that he or she must. Which is why we must believe that miracles are possible. And that means we, ourselves, must act in freedom to make the miraculous happen.
In the service of the not-yet-imagined possibilities of our time, our goal over the two days of the conference was to engage in the difficult, surprising, and never-to-be-understood work of thinking, and of thinking together, in public, amongst others. We heard from philosophers and businessmen, artists and academics. The speakers came from across the political spectrum, but they shared a commitment to thinking beyond ideology. Such thinking is itself a form of action, especially so in a time of such ideological rigidity. Whether our meeting here at Bard gives birth to the miracle of political action is up to you. If we succeeded in thinking together, in provoking, and in unsettling, we perhaps sowed the seeds that will one day blossom into the miracle of freedom.
Watch Roger's opening talk from the conference, "Does the President Matter?" here.