"The end of the old is not necessarily the beginning of the new."
-Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view of what is signified, or ought to be signified, by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg within which it took form.
So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative concerning linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned
A3. by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. the Israelites are freed from bondage and escape from Egypt
C2. the revelation at Sinai fully establishes their covenant with God
C3. after many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at that time, but those not present, the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that gave us perspective in art and the essay and novel in literature yielded to Cubism and subsequent forms of modern art, and stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another would not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
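The point that a network is defined by its links rather than its nodes can be made concrete with a toy example. What follows is a minimal sketch, purely illustrative and not drawn from the essay: the same four nodes form either a centralized, broadcast-style network or a decentralized ring, depending solely on which links we draw between them.

```python
# A purely illustrative sketch: the same four nodes yield entirely
# different networks depending on the links alone.
nodes = {"A", "B", "C", "D"}

# Hub-and-spoke links, akin to a centralized broadcast network.
centralized = {("A", "B"), ("A", "C"), ("A", "D")}
# Ring links, akin to a decentralized network with no center.
decentralized = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}

def degree(links, node):
    """Count the links touching a node: the structure lives in the links."""
    return sum(node in pair for pair in links)

for name, links in [("centralized", centralized), ("decentralized", decentralized)]:
    print(name, {n: degree(links, n) for n in sorted(nodes)})
```

Running it shows the hub touching every link in the first case, and every node touching exactly two in the second: identical content, different medium.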
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the boundary also serves as a membrane and filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, what we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social. These are what Bateson referred to as metapatterns, a notion further elaborated by biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored with John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during the period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah or Law, chief among them the Ten Commandments, the fourth of which, and the one presented in greatest detail, is the commandment to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
Indeed my opinion now is that evil is never “radical,” that it is only extreme, and that it possesses neither depth nor any demonic dimension. It can overgrow and lay waste the whole world precisely because it spreads like a fungus over the surface. It is ‘thought-defying,’ as I said, because thought tries to reach some depth, to go to the roots, and the moment it concerns itself with evil, it is frustrated because there is nothing.
-Hannah Arendt, letter to Gershom Scholem
Recent commentators have marked the 50th anniversary of Stanley Kubrick’s bleak nuclear satire, Dr. Strangelove, by noting that the film contained quite a bit more reality than we had thought. While national security and military officials at the time scoffed at the film’s farfetched depictions of a nuclear holocaust set off by a crazed general, we now know that such an unthinkable event would have been, at least theoretically, entirely possible. Yet there is another, deeper sense in which Kubrick’s satire puts us in touch with a reality that could not be readily depicted through other means.
The film tells the story of a rogue general who, at the height of the Cold War arms race, launches a nuclear attack that cannot be recalled, which leads to the destruction of most of humanity in a nuclear holocaust. These are events that we would conventionally describe as “tragic,” but the film is no tragedy. Why not? One answer, of course, is the comic, satirical touch with which Kubrick treated the material, his use of Peter Sellers to play three different characters, and his method of actually tricking his actors into playing their roles more ridiculously than they would have otherwise. But in a deeper sense, Strangelove is about the loss of a capacity for the tragic. The characters, absorbed in utter banalities as they hurtle toward collective catastrophe, display no real grasp of the moral reality of their actions, because they’ve lost contact with the moral reality of the world they share. Dr. Strangelove, then, is a satire about the impossibility of tragedy.
In order to think about what this might mean, it’s helpful to turn to the idea, famously invoked by Hannah Arendt at the end of Eichmann in Jerusalem, of the banality of evil. As Arendt stressed in a later essay, the banality of evil is not a theory or a doctrine “but something quite factual, the phenomenon of evil deeds, committed on a gigantic scale, which could not be traced to any particularity of wickedness, pathology, or ideological conviction in the doer, whose only personal distinction was perhaps extraordinary shallowness.” Eichmann was no villainous monster or demon; rather, he was “terrifyingly normal,” and his chief characteristic was “not stupidity but a curious, quite authentic inability to think.” The inability to think has nothing to do with the capacity for strategizing, performing instrumental calculations, or “reckoning with consequences,” as Hobbes put it. Rather, thinking has to do with awakening the inner dialogue involved in all consciousness, the questioning of the self by the self, which Arendt says dissolves all certainties and examines anew all accepted dogmas and values.
According to Arendt, the socially recognized function of “clichés, stock phrases, adherence to conventional, standardized codes of expression and conduct” is to “protect us against reality”; their function is to protect us against the claim that reality makes on our thinking. This claim, which awakens the dissolving powers of thought, can be so destabilizing that we all must inure ourselves to some degree against it, so that ordinary life can go on at all. What characterized Eichmann is that “he clearly knew of no such claim at all.” Eichmann’s absorption in instrumental and strategic problem solving, on the one hand, and clichés and empty platitudes on the other, was total. The absence of thought, and with it the absence of judgment, ensured a total lack of contact with the moral reality of his actions. Hence the “banality” of his evil resides not in the enormity of the consequences of his actions, but in the depthless opacity of the perpetrator.
The characters in Dr. Strangelove are banal in precisely this sense. All of them—the affable, hapless president, the red-blooded general, the vodka-swilling diplomat, the self-interested advisors, and Dr. Strangelove himself—are silly cardboard cutouts, superficial stereotypes that lack any depth, self-reflection, or the capacity for communicating anything other than empty clichés. They are missing what Arendt called “the activity of thinking as such, the habit of examining and reflecting upon whatever happens to come to pass, regardless of specific content and quite independent of results…” They also lack any contact with the moral reality of their activity. All of their action takes place in an increasingly claustrophobic series of confined spaces carefully sealed off by design: the war room, the military base, the bomber cockpit. The world—Arendt’s common world of appearances that constitutes the possibility of narrative and storytelling—never appears at all; reality cannot break through.
The presence of some of Arendt’s core themes in Kubrick’s film should not come as a surprise. Although she dedicated very little attention in her published works to the problem of nuclear war, in an early draft of a text that would later become The Human Condition, Arendt claimed that two experiences of the 20th century, “totalitarianism and the atomic bomb – ignite the question about the meaning of politics in our time. They are fundamental experiences of our age, and if we ignore them it is as if we never lived in the world that is our world.” Moreover, the culmination of strategic statecraft in social scientific doctrines mandating the nuclear arms race reflects some of the core themes Arendt identified with political modernity: the emergence of a conception of politics as a strategic use of violence for the purposes of protecting society.
Niccolò Machiavelli, a thinker for whom Arendt had a lot of admiration, helped inaugurate this modern adventure of strategic statecraft by reframing politics as l’arte dello stato – the art of the state, which unlike the internal civic space of the republic, always finds itself intervening within an instrumental economy of violence. For Machiavelli the prince, shedding the persona of Ciceronian humanism, must be willing to become beastly, animal-like, to discover the virtues of the vir virtutis in the animal nature of the lion and the fox. If political modernity is inaugurated by Machiavelli’s image of the centaur, the Prince-becoming-beastly, Strangelove closes with a suitable 20th century corollary to the career of modern statecraft. It is the image of the amiable, good-natured “pilot” who never steers the machines he occupies but is himself steered by them, finally straddling and literally transforming himself into the Bomb. It is an image that, in our own age of remote drone warfare and the possible dawning of a new, not yet fully conceivable epoch of post-human violence, has not lost its power to provoke reflection.
‘This child, this in-between to which the lovers are now related and which they hold in common, is representative of the world in that it also separates them; it is an indication that they will insert a new world into the existing world.’
-Hannah Arendt, The Human Condition
What can we know about Arendtian action? In The Human Condition, Arendt tells us, variously, that it belongs to the public sphere, “the space of appearance”, that it takes place between political equals, and that it is “ontologically rooted” in “the fact of natality”. “Natality”, here, is not the same as birth, though it relies on the fact of birth for its conceptual understanding. Natality is the distinctly human capacity to bring forth the new, the radical, the unprecedented: that which is unaccountable by any natural causality; yet the fact that we must have recourse to the patterns of the natural world in order to explain it is what interests me here.
When we try to fix a notion of Arendtian action, it becomes clear that speech has an important role to play, though the precise relationship between speech and action is a slippery one. Actions are defined in speech, becoming recognisable as actions only when they have been placed in narrative, that is: regarded with “the backward glance of the historian”. At the same time, most actions “are performed in the manner of speech”. Speech is rendered as the revelatory tool of action, but, further to this, both action and speech share a number of key characteristics so that it is impossible to fully disentangle the one from the other.
A moment of possible illumination arrives under the heading “Irreversibility and the Power to Forgive”. For Arendt, action has no end. It contains within it the potential to produce an endless chain of reactions that are both unforeseeable and irreversible. With such terrifying momentum attached to everything we do, forgiveness is our release from the consequences of what we have done, without which “our capacity to act would, as it were, be confined to a single deed from which we could never recover”. In this context, forgiveness is always radical. It is the beginning of the possibility of the new: “… the act of forgiving can never be predicted, it is the only reaction that acts in an unexpected way and thus retains, though being a reaction, something of the original character of action”.
What’s more, forgiveness is personal, though not necessarily individual or private. It is, traditionally, connected to love, which Arendt describes as unworldly, indeed: “the most powerful of all anti-political human forces”. In the image of the lovers’ child, the child is used to represent the possibility of forgiveness; it is made representative of the world in its ability to join and divide.
Ultimately, it is not love that Arendt places in relation to forgiveness, it is a distant respect that can only occur “without intimacy and without closeness; it is a regard for the person from the distance which the space of the world puts between us”. Yet, in this moment in the text, Arendt leans upon an image of the unworldly in order to pull from it the particular activities of the world. It is the ability of action to emerge—unforeseeable, unprecedented—that Arendt performs here in language. It is the movement of the imagery that alerts us to the essential quality of action to appear, unexpected, as well as to the fragility of the political realm and its complex array of differences from and interconnections with the private. One need only examine the syntax to understand the dynamic of action that Arendt illustrates here: where a semi-colon would usually indicate two halves of a balanced equation, Arendt uses it as a springboard from which to make a tiger’s leap into the new.
There are a number of things to be gained from a close reading of the linguistic representation of the movement of action, not least in light of the fact that, in writing this book, Arendt is expressing a deep-seated fear that the faculty for action is about to slip away from us entirely. While much ink has been spilled over whether or not the categories and oppositions that arise in The Human Condition can be fully understood in any concrete way, on whether or not they hold, it may be that the apparent slippages in the text are, in fact, our most fruitful way into understanding the particular dynamics and character of Arendtian action; an understanding that may then be put to some homeopathic use in our own work.
In the most recent NY Review of Books, David Cole wonders if we've reached the point of no return on the issue of privacy:
“Reviewing seven years of the NSA amassing comprehensive records on every American’s every phone call, the board identified only one case in which the program actually identified an unknown terrorist suspect. And that case involved not an act or even an attempted act of terrorism, but merely a young man who was trying to send money to Al-Shabaab, an organization in Somalia. If that’s all the NSA can show for a program that requires all of us to turn over to the government the records of our every phone call, is it really worth it?”
Cole is beyond convincing in listing the dangers to privacy in the new national security state. Like many others in the media, he speaks the language of necessary trade-offs involved in living in a dangerous world, but suggests we are trading away too much and getting back too little in return. He warns that if we are not careful, privacy will disappear. He is right.
What is often forgotten and is absent in Cole’s narrative is that most people—at least in practice—simply don’t care that much about privacy. Whether snoopers promise security or better-targeted advertisements, we are willing to open up our inner worlds for the price of convenience. If we are to save privacy, the first step is articulating what it is about privacy that makes it worth saving.
Cole simply assumes the value of privacy and doesn’t address the benefits of privacy until his final paragraph. When he does come to explaining why privacy is important, he invokes popular culture dystopias to suggest the horror of a world without privacy:
More broadly, all three branches of government—and the American public—need to take up the challenge of how to preserve privacy in the information age. George Orwell’s 1984, Ray Bradbury’s Fahrenheit 451, and Philip K. Dick’s The Minority Report all vividly portrayed worlds without privacy. They are not worlds in which any of us would want to live. The threat is no longer a matter of science fiction. It’s here. And as both reports eloquently attest, unless we adapt our laws to address the ever-advancing technology that increasingly consumes us, it will consume our privacy, too.
There are two problems with such fear mongering in defense of privacy. The first is that these dystopias seem too distant. Most of us don’t experience the violations of our privacy by the government or by Facebook as intrusions. The second is that on a daily basis the fact that my phone knows where I am and that in a pinch the government could locate me is pretty convenient. These dystopian visions can appear not so dystopian.
Most writing about privacy simply assumes that privacy is important. We are treated to myriad descriptions of the way privacy is violated. The intent is to shock us. But rarely are people shocked enough to actually respond in ways that protect the privacy they often say that they cherish. We have collectively come to see privacy as a romantic notion, a long-forgotten idyll, exotic and even titillating in its possibilities, but ultimately irrelevant in our lives.
There is, of course, a reason why so many advocates of privacy don’t articulate a meaningful defense of privacy: It is because to defend privacy means to defend a rich and varied sphere of difference and plurality, the right and importance of people actually holding opinions divergent from one’s own. In an age of political correctness and ideological conformism, privacy sounds good in principle but is less welcome in practice when those we disagree with assert privacy rights. Thus many who defend privacy do so only in the abstract.
When it comes to actually allowing individuals to raise their children according to their religious or racial beliefs or when the question is whether people can marry whomever they want, defenders of privacy often turn tail and insist that some opinions and some practices must be prohibited. Over and over today, advocates of privacy show that they value an orderly, safe, and respectful public realm and that they are willing to abandon privacy in the name of security and a broad conception of civility according to which no one should have to encounter opinions and acts that give them offense.
The only major thinker of the last 100 years who insisted fully and consistently on the crucial importance of a rich and vibrant private realm is Hannah Arendt. Privacy, Arendt argues, is essential because it is what allows individuals to emerge as unique persons in the world. The private realm is the realm of “exclusiveness,” it is that realm in which we “choose those with whom we wish to spend our lives, personal friends and those we love.” The private choices we make are guided by nothing objective or knowable, but by something that “strikes, inexplicably and unerringly, at one person in his uniqueness, his unlikeness to all other people we know.” Privacy is controversial because the “rules of uniqueness and exclusiveness are, and always will be, in conflict with the standards of society.” Arendt’s defense of mixed marriages (and by extension gay marriages) proceeds—no less than her defense of the right of parents to educate their children in single-sex or segregated schools—from her conviction that the uniqueness and distinction of private lives need to be respected and protected.
Privacy, for Arendt, is connected to the “sanctity of the hearth” and thus to the idea of private property. Indeed, property itself is respected not on economic grounds, but because “without owning a house a man could not participate in the affairs of the world because he had no location in it which was properly his own.” Property guarantees privacy because it enforces a boundary line, “a kind of no man’s land between the private and the public, sheltering and protecting both.” In private, behind the four walls of house and hearth, the “sacredness of the hidden” protects men from the conformist expectations of the social and political worlds.
In private, shaded from the conformity of societal opinions as well as from the demands of the public world, we can grow in our own way and develop our own idiosyncratic character. Because we are hidden, “man does not know where he comes from when he is born and where he goes when he dies.” This essential darkness of privacy gives flight to our uniqueness, our freedom to be different. It is in privacy, in other words, that we become who we are. What this means is that without privacy there can be no meaningful difference. The political importance of privacy is that privacy is what guarantees difference and thus plurality in the public world.
Arendt develops her thinking on privacy most explicitly in her essays on education. Education must perform two seemingly contradictory functions. First, education leads a young person into the public world, introducing him and acclimating him to the traditions, public language, and common sense that precede him. Second, education must also guard the child against the world, care for the child so that “nothing destructive may happen to him from the world.” The child, to be protected against the destructive onslaught of the world, needs the privacy that has its “traditional place” in the family.
Because the child must be protected against the world, his traditional place is in the family, whose adult members return back from the outside world and withdraw into the security of private life within four walls. These four walls, within which people’s private family life is lived, constitute a shield against the world and specifically against the public aspect of the world. This holds good not only for the life of childhood but for human life in general…Everything that lives, not vegetative life alone, emerges from darkness and, however strong its natural tendency to thrust itself into the light, it nevertheless needs the security of darkness to grow at all.
The public world is unforgiving. It can be cold and hard. All persons count equally in public, and little if any allowance is made for individual hardships or the bonds of friendship and love. Only in privacy, Arendt argues, can individuals emerge as unique individuals who can then leave the private realm to engage the political sphere as confident, self-thinking, and independent citizens.
The political import of Arendt’s defense of privacy is that privacy is what allows for meaningful plurality and differences that prevent one mass movement, one idea, or one opinion from imposing itself throughout society. Just as Arendt valued the federalism of the American Constitution because it multiplied power sources through the many state and local governments in the United States, so too did she value privacy because it nurtures meaningfully different and even opposed opinions, customs, and faiths. She defends the regional differences in the United States as important and even necessary to preserve the constitutional structure of dispersed power that she saw as the great bulwark of freedom against the tyranny of the majority. In other words, Arendt saw privacy as the foundation not only of private eccentricity, but also of political freedom.
Cole offers a clear-sighted account of the ways that government is impinging on privacy. It is essential reading and it is your weekend read.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Decline, writes Josef Joffe in a recent essay in The American Interest, “is as American as apple pie.” The tales of decline that populate American cultural myths have many morals, but one shared theme: Renewal. Here is Joffe: “‘Decline Time in America’ is never just a disinterested tally of trends and numbers. It is not about truth, but about consequences—as in any morality tale. Declinism tells a story to shape belief and change behavior; it is a narrative that is impervious to empirical validation, whose purpose is to bring comforting coherence to the flow of events. The universal technique of mythic morality tales is dramatization and hyperbole. Since good news is no news, bad news is best in the marketplace of ideas. The winning vendor is not Pollyanna but Henny Penny, also known as Chicken Little, who always sees the sky falling. But why does alarmism work so well, be it on the pulpit or on the hustings—whatever the inconvenient facts?” You can read more about Joffe’s tale of decline in Roger Berkowitz’s weekend read.
James Somers considers recent advances in machine learning and whether they answer the big question: can we make machines that think like humans? In an essay that's part profile of Douglas Hofstadter, author of Gödel, Escher, Bach, and part history of artificial intelligence, Somers relays Hofstadter's argument that the AI community has abandoned the big questions for smaller ones, more easily answered, more obviously profitable.
Using the left's ambivalence on marriage equality as a starting point, Sam Brody considers two groups of the American left, joiners and quitters. Joiners, to Brody's thinking, seek to have queer individuals granted the same rights as heterosexual couples, rights long prized by American liberals, in this case the right to marry and to enjoy the associated economic benefits of that status. Quitters, on the other hand, see the whole endeavor as a canard, an attempt to normalize an outsider group by buying into a deeply corrupt system. Brody, for his part, sees both groups as missing the point and suggests that the struggle he's described needs to be redirected inward. The solution, he says, is "a vision of love and commitment that is open and flexible, but not subordinated to the consumerist logic of individual whims. A left committed to such a vision might discover resources to combat the social disintegration of post-industrial life, without the false panaceas of nationalism, trade solidarity, or state-sponsored religious initiatives... the utopian imagination must be directed inward, from which point it can radiate out to the neighbor, the spouse, the neighborhood, the city, the country and the world."
Garret Keizer thinks about the meaning of memento mori in a world threatened by increasingly violent natural disasters: "I wonder if the tradition of memento mori exists more vividly in the remnants of the gay community than in any remaining monastic tradition. From those who have lived daily in the shadow of AIDS, we may be able to learn something about that complex ethos of care-giving, self-denial, and mortal merriment without which environmentalism has about the same chances of survival as the polar bears do."
Have you seen “The Banality of the Banality of Evil,” the altered landscape by the elusive street artist who calls himself Banksy? It has caused quite a furor, and seemingly over nothing. “We're really not sure what to make of Banksy's latest installment in ‘Better Out Than In.’ His website describes it as ‘The banality of the banality of evil, Oil on oil on canvas, 2013’ and ‘a thrift store painting vandalized then re-donated to the thrift store.’ What we see is a beautiful pastoral landscape, except there's an SS officer on a bench in the foreground. What exactly is he getting at with ‘the banality of the banality of evil’? Doing loop-de-loops around Hannah Arendt's theoretical reckoning of the Nazis' rise to power isn't really how we want to spend our afternoon, but we're guessing it has something to do with Banksy not really caring much about what he's actually saying.”
What is Politics? A Conference on Hannah Arendt at Villa Aurora
Los Angeles, CA
The secret of American exceptionalism may very well be the uniquely American susceptibility to narratives of decline. From the American defeat in Vietnam and the Soviet launch of Sputnik to the quagmire in Afghanistan and the current financial crisis, naysayers proclaim the end of the American century. And yet the prophecies of decline are nearly always, in a uniquely American spirit, followed by calls for rejuvenation. Americans are neither pessimists nor optimists. Instead, they are darkened by despair and fired by hope.
Decline, writes Josef Joffe in a recent essay in The American Interest, “is as American as apple pie.” The tales of decline that populate American cultural myths have many morals, but one shared theme: Renewal.
“Decline Time in America” is never just a disinterested tally of trends and numbers. It is not about truth, but about consequences—as in any morality tale. Declinism tells a story to shape belief and change behavior; it is a narrative that is impervious to empirical validation, whose purpose is to bring comforting coherence to the flow of events. The universal technique of mythic morality tales is dramatization and hyperbole. Since good news is no news, bad news is best in the marketplace of ideas. The winning vendor is not Pollyanna but Henny Penny, also known as Chicken Little, who always sees the sky falling. But why does alarmism work so well, be it on the pulpit or on the hustings—whatever the inconvenient facts?
Joffe, the editor of the German weekly Die Zeit, writes from the lofty perch of an all-knowing cultural critic. Declinism is, when looked at from above, little more than a marketing pitch:
Since biblical times, prophets have never gone to town on rosy oratory, and politicos only rarely. Fire and brimstone are usually the best USP, “unique selling proposition” in marketing-speak.
The origins of modern declinism, pace Joffe, are found in “the serial massacre that was World War I,” the rapacious carnage that revealed “the evil face of technology triumphant.” WWI deflated Enlightenment optimism about reason and science, showing instead the destructive impact of those very same progressive ideals.
The knowledge that raised the Eiffel Tower also birthed the machine gun, allowing one man to mow down a hundred without having to slow down for reloading. Nineteenth-century chemistry revolutionized industry, churning out those blessings from petroleum to plastics and pharmacology that made the modern world. But the same labs also invented poison gas. The hand that delivered good also enabled evil. Worse, freedom’s march was not only stopped but reversed. Democracy was flattened by the utopia-seeking totalitarians of the 20th century. Their utopia was the universe of the gulag and the death camp. Their road to salvation led to a war that claimed 55 million lives and then to a Cold War that imperiled hundreds of millions more.
America, the land of progress in Joffe’s telling, now exists in a productive tension with the anti-scientific tale of the “death of progress.”
Technology and plenty, the critics of the Enlightenment argued, would not liberate the common man, but enslave him in the prison of “false consciousness” built by the ruling elites. The new despair of the former torchbearers of progress may well be the reason that declinism flourishes on both Left and Right. This new ideological kinship alone does not by itself explain any of the five waves of American declinism, but it has certainly broadened its appeal over time.
Joffe stands above both extremes of the declinism pendulum. Instead of embracing or rejecting the tale of decline, he names decline and its redemptive flipside the driving force of American exceptionalism. Myths of decline are necessary in order to fuel the exceptional calls for sacrifice, work, and innovation that have for centuries turned the tide of American elections and American culture.
[D]awn always follows doom—as when Kennedy called out in his Inaugural Address: “Let the word go forth that the torch has been passed to a new generation of Americans.” Gone was the Soviet bear who had grown to monstrous size in the 1950s. And so again twenty years later. At the end of Ronald Reagan’s first term, his fabled campaign commercial exulted: “It’s morning again in America. And under the leadership of President Reagan, our country is prouder and stronger and better.” In the fourth year of Barack Obama’s first term, America was “back”, and again on top. Collapse was yesterday; today is resurrection. This miraculous turnaround might explain why declinism usually blossoms at the end of an administration—and wilts quickly after the next victory.
Over and over, the handwriting on the wall that spelled decline was, in truth, “a call to arms that galvanized the nation.”
Behind this long history of nightmares of degeneration and dreams of rebirth is Joffe’s ultimate question: Are the current worries about the death of the American century simply the latest in the American cycle of gloom and glee? Or is it possible that the American dream is, finally, used up? In other words, is it true that, since “at some point, everything comes to an end,” this may be the end for America? Might it be that, as many in Europe now argue, “the United States is a confused and fearful country in 2010”? Is it true that the US is a “hate-filled country” in unavoidable decline?
Joffe is skeptical. Here is one part of his answer:
Will they be proven right in the case of America? Not likely. For heuristic purposes, look at some numbers. At the pinnacle of British power (1870), the country’s GDP was separated from that of its rivals by mere percentages. The United States dwarfs the Rest, even China, by multiples—be it in terms of GDP, nuclear weapons, defense spending, projection forces, R&D outlays or patent applications. Seventeen of the world’s top universities are American; this is where tomorrow’s intellectual capital is being produced. America’s share of global GDP has held steady for forty years, while Europe’s, Japan’s and Russia’s have shrunk. And China’s miraculous growth is slipping, echoing the fates of the earlier Asian dragons (Japan, South Korea, Taiwan) that provided the economic model: high savings, low consumption, “exports first.” China is facing a disastrous demography; the United States, rejuvenated by steady immigration, will be the youngest country of the industrial world (after India).
In short, if America is to decline it will be because America refuses to stay true to its tradition of innovation and reinvention.
As convincing as Joffe is, the present danger that America’s current malaise will persist comes less from economics or from politics than from the extinguishing of the nation’s moral fire. And in this regard, essays such as Joffe’s are symptoms of the problem America faces. Joffe writes from above and specifically from the position of the social scientist. He looks down on America and American history and identifies trends. He cites figures. And he argues that in spite of the worry, all is generally ok. Inequality? Not to worry, it has been worse. Democratic sclerosis? Fret not; think back to the 1880s. Soul-destroying partisanship? Have you read the newspapers of the late 18th century? In short, our problems are nothing new under the sun. Keep it in perspective. There is painfully little urgency in such essays. Indeed, they trade above all in a defense of the status quo.
There is reason to worry though, and much to worry about. Joffe might himself have seen one such worry had he lingered longer on an essay he cites briefly, but does not discuss. In 1954, Hannah Arendt published “Europe and America: Dream and Nightmare” in Commentary Magazine. In that essay—originally given as part of a series of talks at Princeton University on the relationship between Europe and America—she asked: “What image does Europe have of America?”
Her answer is that Europe has never seen America as an exotic land like the South Sea Islands. Instead, there are two conflicting images of America that matter for Europeans. Politically, America names the very European dream of political liberty. In this sense, America is less the new world than the embodiment of the old world, the land in which European dreams of equality and liberty are made manifest. The political nearness of Europe and America explains their kinship.
European anti-Americanism, however, is lodged in a second myth about America, the economic image of America as the land of plenty. This European image of America’s stupendous wealth may or may not be borne out in reality, but it is a fantasy that drives European opinion:
America, it is true, has been the “land of plenty” almost since the beginning of its history, and the relative well-being of all her inhabitants deeply impressed even early travelers. … It is also true that the feeling was always present that the difference between the two continents was greater than national differences in Europe itself even if the actual figures did not bear this out. Still, at some moment—presumably after America emerged from her long isolation and became once more a central preoccupation of Europe after the First World War—this difference between Europe and America changed its meaning and became qualitative instead of quantitative. It was no longer a question of better, but of altogether different conditions, of a nature which makes understanding well nigh impossible. Like an invisible but very real Chinese wall, the wealth of the United States separates it from all other countries of the globe, just as it separates the individual American tourist from the inhabitants of the countries he visits.
Arendt’s interest in this “Chinese wall” that separates Europe from America is that it lies behind the anti-Americanism of European liberals, even as it inspires the poor. “As a result” of this myth, Arendt writes, “sympathy for America today can be found, generally speaking, among those people whom Europeans call ‘reactionary,’ whereas an anti-American posture is one of the best ways to prove oneself a liberal.” The same can largely be said today.
The danger in such European anti-Americanism is not only that it will fire a European nationalism, but also that it will cast European nationalism as an ideological opposition to American wealth. “Anti-Americanism, its negative emptiness notwithstanding, threatens to become the content of a European movement.” In other words, European nationalism threatens to take on a negative ideological tone.
That Europe understands itself primarily in opposition to America as a land of wealth affects America too, insofar as European opposition hardens Americans in their own mythic sense of themselves as a land of unfettered economic freedom and unlimited wealth. European anti-Americanism thus fosters the kind of free market ideology so rampant in America today.
What is more, when Europe and America emphasize their ideological opposition on an economic level, they deemphasize their political kinship as lands of freedom.
Myths of American decline serve a purpose on both sides of the Atlantic.
In Europe, they help justify Europe’s social democratic welfare states and their highly bureaucratized regulatory regimes. In America, they underlie attacks on regulation and calls to limit and shrink government. These are all important issues that should be thought through and debated with an eye to reality. The danger is that European emancipation and American exceptionalism threaten to elevate ideology over reality, hardening positions that instead need to remain open to innovation.
Joffe’s essay on the Canard of Decline is a welcome spur to rethinking the gloom and the glee of our present moment. It is your weekend read.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Gary Shteyngart tries Google's new digital glasses and feels alternately estranged and powerful. Above all, Shteyngart comes to feel the emergence of a new human-technological symbiosis that he explains by referring to "Bloodchild," a science fiction story by Octavia Butler. "The story takes place on a faraway planet dominated by a large insect-like species called the Tlic. The humans who have fled oppression on their own planet live on a so-called Preserve, where their bodies are used as hosts for the Tlic's eggs, culminating in a horrifyingly graphic hatching procedure often resulting in the death of the human host.... Butler wrote that she thought of "Bloodchild" as "a love story between two very different beings." Although their relationship is unequal and often gruesome, Tlic and humans need each other to survive. Today, when I think of our relationship with technology, I cannot help but think of human and Tlic, the latter's insect limbs wrapped around the former's warm-blooded trunk, about to hatch something new."
We know that eyewitness evidence is notoriously unreliable, but confessions are still thought to be meaningful. Wrongly, it seems. Using the case of a man who admitted to a murder for which two others were already in prison, Marc Bookman examines the false confessions of the innocent: "People have been admitting to things they haven't done for as long as they've been committing crimes. On the North American continent, prominent examples reach back to 1692 and the Salem witch trials. DNA exonerations over the past 24 years have established not only how error-prone our system of justice is, but how more than a quarter of those wrongly convicted have been inculpated by their own words. Now an entire body of scientific research is devoted to the phenomenon of the false confession."
Turkish photographer Cihad Caner recently traveled to Syria, where he took pictures of Syrians. He then asked his subjects to alter pictures of themselves; writing and drawing on the photos, his collaborators take the last word on the state of their home.
Chris Pomorski profiles Atlantic City, intermixing the narrative of one of the city's recent homicides with a short history of the area. What Pomorski finds is a place that was promised much, and that promises much, but that didn't get, and doesn't give, what it was hoping for.
Charles Hope writes a short history of the art forger: "It is often said that art forgery has existed as long as the demand for works of art, but this is not strictly true. There is no clear evidence that art forgeries as such existed in the ancient world. There were plenty of collectors, but they seem to have found copies just as desirable as originals. Even the presence of a signature was not necessarily taken as an indication that the object in question had been made by that artist. The notion of art forgery, as we understand it today, seems to require the idea that originals possess certain qualities not found even in the best copies. It also requires the presence of an expert with the ability to distinguish between the two; but such expertise does not seem to have existed in antiquity."
The sixth annual fall conference, "Failing Fast: The Crisis of the Educated Citizen"
Olin Hall, Bard College
Learn more here.
This week on the blog, Jeff Champlin investigates the relationship between Arendt and feminist politics. Lance Strate delves into the human condition. Your weekend read looks at the splintering of culture in an intellectual world no longer governed by a unified aesthetic or a single dominant medium.
The response has been swift and negative to the Rolling Stone Magazine cover—a picture of Dzhokhar Tsarnaev, who, with his now-dead brother, planted deadly homemade bombs near the finish line of the Boston Marathon. The cover features a picture Tsarnaev himself posted on his Facebook page before the bombing. It shows him as he wanted to be seen—and that itself has offended many, who ask why he is not pictured as a suspect or convict. In the photo he is young, hip, handsome, and cool. He could be a rock star, and given the context of the Rolling Stone cover, that is how he appears.
The cover is jarring, and that is intended. It is controversial, and that was probably also intended. Hundreds of thousands of comments on Facebook and around the web are critical and angry, asking how Rolling Stone could portray the bomber as a rock star. They overlook or ignore the text accompanying the photo on the cover, which reads: “The Bomber. How a Popular, Promising Student Was Failed by His Family, Fell Into Radical Islam, and Became a Monster.” CVS and other retailers have announced they will not sell the magazine in their stores.
That is unfortunate, for the story written by Janet Reitman is exceptionally good and deserves to be read.
Controversies like this have a perverse effect. Just as the furor over Hannah Arendt’s Eichmann in Jerusalem resulted in the viral dissemination of her claims about the Jewish leaders, so too will this Rolling Stone cover be seen by millions of people who otherwise would never have heard of Rolling Stone. What is more, such publicity makes it ever less likely that the story itself will be read seriously, just as Arendt’s book was criticized by everyone, but read by few.
Reitman’s narrative itself is unexceptional. It is a common story line: young, normal kid becomes radicalized and does something none of his old friends can believe he could do. This is a now familiar narrative that we hear in the wake of the tragedies in Newtown (Adam Lanza was described as a nice quiet kid) and Columbine (Time’s cover announced “The Monsters Next Door.”)
This is also the narrative that Rolling Stone managing editor Will Dana embraced to defend the cover on NPR, arguing that it was an “apt image because part of what the story is about is what an incredibly normal kid [Tsarnaev] seemed like to those who knew him best back in Cambridge.” It was echoed by Erin Burnett on CNN, who recently invoked Hannah Arendt’s idea of the “banality of evil.” In the easy frame the story offers, Tsarnaev was a good kid, part of a striving immigrant family, someone who loved multi-racial America. And then something went wrong. He found Islam; his family fell apart; and he became a monster.
This story is too simple. And yet within the Rolling Stone story, there is a wealth of information and reporting that does give a nuanced and thoughtful portrayal of Tsarnaev’s journey into the heart of evil.
One fact is important to note: Tsarnaev is not Eichmann. Eichmann was a member of the SS, a nationalist security service engaged in world war and dedicated to wiping certain peoples off the face of the earth. He committed genocide as part of a system of extermination, something both worse than and yet less messy than murder itself. Tsarnaev, who had no state apparatus behind him, became a cold-blooded murderer. The problems that Hannah Arendt thought the court in Jerusalem faced with Eichmann—that he was a new type of criminal—do not apply in Tsarnaev’s case. He is a murderer. To understand him is not to understand a new type of criminal. And yet it is a worthy endeavor to try to understand why more and more young men like Tsarnaev are so easily radicalized and drawn to murdering innocent people in the name of a cause.
Both Eichmann and Tsarnaev were from upwardly striving bourgeois families that struggled with economic setbacks. Eichmann was white and Austrian, Tsarnaev an immigrant in Cambridge, but both were economically disaffected. Tsarnaev wanted to make money and, like his parents, dreamed of a better life.
Tsarnaev’s family had difficulty fitting in with U.S. culture. His father was ill and could not work. His mother sought to earn money. And his older brother, whom he idolized, saw his dreams of Olympic boxing dashed partly because he was not a citizen. The older brother turned increasingly to a radical version of Islam. When Tsarnaev’s parents both returned to Dagestan, Tsarnaev fell ever more under his brother’s influence.
Like Eichmann, Tsarnaev appears to have adopted an ideology that provided a coherent and meaningful narrative that gave his life significance. One can see this in a number of tweets and statements that are quoted in the article. For example, just before the bombing, he tweeted:
"Evil triumphs when good men do nothing."
"If you have the knowledge and the inspiration all that's left is to take action."
"Most of you are conditioned by the media."
Like Eichmann, Tsarnaev came to see himself as a hero, someone willing to suffer and even die for a noble cause. His cause was different—anti-American jihad instead of anti-Semitic Nazism—but he was an ideological idealist, a joiner, someone who found meaning and importance in belonging to a movement. A smart and talented and by most accounts good young man, he was lost and adrift, searching for someone and something to give his life purpose. He found that someone in his brother and that something in jihad against America, the land that previously he had so embraced. And he became someone who believed that what he was doing was right and necessary, even if he understood also that it was wrong.
We see this ambivalent understanding of right and wrong clearly in the note Tsarnaev apparently scrawled while hiding in a boat before he was captured. Here is how Reitman’s article describes what he wrote:
When investigators finally gained access to the boat, they discovered a jihadist screed scrawled on its walls. In it, according to a 30-count indictment handed down in late June, Jihad [Tsarnaev's nickname] appeared to take responsibility for the bombing, though he admitted he did not like killing innocent people. But "the U.S. government is killing our innocent civilians," he wrote, presumably referring to Muslims in Iraq and Afghanistan. "I can't stand to see such evil go unpunished. . . . We Muslims are one body, you hurt one, you hurt us all," he continued, echoing a sentiment that is cited so frequently by Islamic militants that it has become almost cliché. Then he veered slightly from the standard script, writing a statement that left no doubt as to his loyalties: "Fuck America."
Eichmann too spoke of his shock and disapproval of killing innocent Jews, but he justified doing so for the higher Nazi cause. He also said that when he found out about the sufferings of Germans at the hands of the allies, it made it easier for him to justify what he had done, because he saw it as equivalent. The fact that the Germans were aggressors, that they had started the war, and that they were killing and torturing innocent people simply did not register for Eichmann, just as it did not register for Tsarnaev that the people in the Boston marathon were innocent. There are, of course, innocent people in Iraq and Afghanistan who have died at the hands of U.S. bombs. Even for those of us who were against the wars and question their sense and justification, however, there is a difference between death in a war zone and terrorism.
The Rolling Stone article does a good job of chronicling Tsarnaev's slide into a radical jihadist ideology, one mixed with conspiracy theories.
The Prophet Muhammad, he noted on Twitter, was now his role model. "For me to know that I am FREE from HYPOCRISY is more dear to me than the weight of the ENTIRE world in GOLD," he posted, quoting an early Islamic scholar. He began following Islamic Twitter accounts. "Never underestimate the rebel with a cause," he declared.
His rebellious cause was to awaken Americans both to their complicity in the bombing of innocent Muslims and to the common conspiracy theory that America was behind the 9/11 attacks. In one tweet he wrote: "Idk [I don’t know] why it's hard for many of you to accept that 9/11 was an inside job, I mean I guess fuck the facts y'all are some real #patriots #gethip."
Besides these tweets, which offer a provocative insight into Tsarnaev's emergent ideological convictions, the real virtue of the article is its focus on Tsarnaev's friends, his school, and his place in American youth culture. While his friends certainly do not support or condone what Tsarnaev did, many share some of his conspiratorial and anti-American beliefs. Here are two descriptions of the mainstream nature of many of his beliefs:
To be fair, Will and others note, Jahar's perspective on U.S. foreign policy wasn't all that dissimilar from a lot of other people they knew. "In terms of politics, I'd say he's just as anti-American as the next guy in Cambridge," says Theo.
This is not an uncommon belief. Payack, who [was Tsarnaev's wrestling coach and mentor and] also teaches writing at the Berklee College of Music, says that a fair amount of his students, notably those born in other countries, believe 9/11 was an "inside job." Aaronson tells me he's shocked by the number of kids he knows who believe the Jews were behind 9/11. "The problem with this demographic is that they do not know the basic narratives of their histories – or really any narratives," he says. "They're blazed on pot and searching the Internet for any 'factoids' that they believe fit their highly de-historicized and decontextualized ideologies. And the adult world totally misunderstands them and dismisses them – and does so at our collective peril," he adds.
The article presents a sad portrait of youth culture, and not just because all these “normal” kids are smoking “a copious amount of weed.” The jarring realization is that these talented and intelligent young people at a good school in a storied neighborhood come off as so disaffected. What is more, their beliefs in conspiracies are accepted by the adults in their lives as commonplaces; their anti-Americanism is simply a noted fact; and their idolization of slacking (Tsarnaev's favorite word, his friends say, was “sherm,” Cambridge slang for “slacker”) is seen as cute. There is painfully little concern among adults to insist that these young people face facts and confront unserious opinions.
In short, the young people in Tsarnaev's story appear to have been abandoned by adults to their own youthful and quite fanciful views of reality. Youth culture dominates, and adult supervision seems absent. There is seemingly no one who, in Arendt’s language from “The Crisis in Education,” takes responsibility for teaching them to love the world as it is.
The Rolling Stone article and cover do not glorify a monster; but they do play on two dangerous trends in modern culture that Hannah Arendt worried about in her writing: First, the rise of youth culture and the abandonment of adult authority in education; and second, the fascination bourgeois culture has for vice and the short distance that separates an acceptance of vice from an acceptance of monstrosity. If only all the people who are so concerned about a magazine cover today were more concerned about the delusions and fantasies of Tsarnaev, his friends, and others like them.
Taking responsibility for teaching young people to love the world is the very essence of what Arendt understands education to be. It will be the topic of the Hannah Arendt Center’s upcoming conference, “Failing Fast: The Crisis of the Educated Citizen.” Registration for the conference opened this week. For now, ignore the controversy and read Reitman’s article, “Jahar’s World.” It is your weekend read. It is as good an argument for thinking seriously about the failure of our approach to education as one can find.
Barely more than a year old, MITx and edX now dominate discussion about the future of higher education like nothing else I have seen in my time in Cambridge, MA. I have been teaching at MIT for more than 10 years now, and I can’t remember any subject touching directly on university life that came even remotely close to absorbing the attention of higher ed professionals in the region the way that edX has. From the initial investments of $30 million each by the founding institutions, Harvard and MIT, to what seem like monthly announcements of new partnerships with the world’s colleges and universities (27 institutions currently belong to the “X” consortium), the levels of hype and institutional buy-in have been nothing short of extraordinary.
Because of their ubiquity in the popular press, higher ed industry periodicals, and the blogosphere, Massive Open Online Courses, or MOOCs, have become that most dangerous topic of discussion: a subject about which everybody needs to have an opinion. Such topics can unfortunately generate more heat than light, as the requirement to have and to express a point of view often means that the strongest and most extravagant opinions will claim attention and command the terms of debate. This is unfortunate if you favor the nuanced opinion or (as I do) feel genuinely ambivalent about MOOCs and the role(s) that they might play in shaping the future of higher education.
So far, much of the discourse about MOOCs has tended to settle around two competing claims -- one for, one against -- that I articulated in a tweet a few months ago. Either MOOC providers are described as delivering free or low-cost quality higher education to those hard-pressed to afford it (and so performing a valuable public service); or MOOCs are understood to be selling a "lite" version of higher education to the poor while consolidating power and prestige with a few wealthy elite schools. In this dystopian view, the democratizing claims made by Udacity, Coursera, and edX (the most recently formed of these outfits, and the only non-profit among them) are revealed instead to be essentially colonialist ones -- the colonialists being ed-tech profiteers hell-bent on thoroughly remaking the university as a crypto-corporate enterprise. MOOCs are understood to be an engine in this transformation, and an integral part of an overall design for reshaping higher education as a neoliberal market pursuit.
I can’t doubt that there is truth in both of these sets of claims. At the same time, it is difficult to ignore that arguments for and against MOOCs look past each other in crucial respects and leave precious little ground between them. What the accounts do share is an assumption that MOOCs will transform or “revolutionize” the landscape of higher education (for good or ill). Either MOOCs will be agents for elevating people in the less advantaged and underserved corners of the world; or MOOCs are instruments for extracting bodies from classrooms and tenure-track lines from university departments. The somewhat high-flown claims to educate and elevate underserved populations of the globe, often based on stray anecdote, are offered independently of any more substantive claim about the specific learning communities who benefit (or stand to benefit) from MOOCs. Similarly, claims about the profit motives animating the companies offering MOOCs subordinate all discussion of MOOCs to the ideological positions that they supposedly exist to promote. The designs attributed to MOOCs, and to the instructors who offer them, are such as to foreclose discussion rather than promote it.
While both accounts of MOOCs envision significant future consequences from their implementation, moreover, neither says very much about actually-existing MOOCs. The MOOC has become a repository for utopian and dystopian narratives about the present and future directions of higher ed. As a result, this or that fact about MOOCs is often considered (or not) insofar as it confirms the prevailing theory about them. That 150,000 people sign up for a single class demonstrates a clear hunger on the part of many across the globe for access to a quality education; this fact authorizes enlarged claims for the ability to transform higher education by bringing MOOCs to the masses. Similarly, the replicability of the digital medium -- and the fact that course content such as video lectures, once made, does not necessarily need to be re-made each year -- is conceived as a key to how MOOCs will force everyone in higher ed to make do (not do more) with less: less student-faculty interaction, fewer tenure-track professors, and, down the road, the prospect of fewer instructors (the majority of them adjuncts already) paid to teach in college classrooms.
In addition to fears that MOOCs will reinforce ongoing trends of budget cuts, adjunctification, and layoffs of college teaching staff, another legitimate concern is that MOOCs will—by helping some schools with their branding strategies—have the effect of consolidating elite privilege with a few schools and the “superprofessors” (themselves overwhelmingly white and male) who teach MOOCs, leaving other, lesser-ranked schools struggling to compete against a lower-priced virtual curriculum. The fear is that MOOCs will facilitate the emergence of two tiers in higher ed offerings: the “real” version, available only to students whose families can afford the exorbitant tuition or who survive by taking out massive student loans; and the second-rate online version. With proposals on the table such as California’s Senate Bill 520, which would grant college credit for certain approved online courses, and Coursera’s recent announcement that it will sell its MOOCs to 10 public universities in the US, these fears are unfortunately very real. I hope to see more MOOCs spring up to contest the sense that authority will inevitably recenter within the elite universities that host them. However difficult the task may prove to be, we need to disentangle the genuinely democratizing outreach work done by online education from its re-inscription of elite privilege.
These are important and pressing concerns. By the same token, they hardly exhaust all that can be said about MOOCs today. A host of important questions about the creation and implementation of MOOCs -- about course content, mode of learning, assessment, and so on -- should not be lost amidst conversations about the larger tendency (whether benevolent and democratizing, or insidious and corporatizing) to which MOOCs properly belong. The movement of classroom tasks and functions into online learning presents opportunities as well as risks; we should understand both. In an essay written late last year, I tried to look at MOOCs without blinders, and to reflect both on the risks associated with their format and implementation and on their potential as instruments of learning and encounter. I wrote at the time that it wasn’t my intention "to defend the MOOC so much as...to hold open some alternative futures for it." For these alternative futures to emerge, there needs to be vision, will, and coordinated effort on the part of many in higher ed. I am still willing at least to entertain the possibility that MOOCs may turn out to be an enabling, positive invention, even as I acknowledge indicators that point in the direction of their being a lamentably misguided one. But the rush to condemn and dismiss online courses may be as fundamentally mistaken as the rush to anoint them the future of higher education.
Blended learning modes present opportunities for both pedagogical experimentation and outreach; neither opportunity should, I think, be dismissed lightly. I have heard many instructors of MOOCs (in both STEM and humanities subjects) remark that the experience of teaching online has transformed their thinking and approach to teaching familiar material in the traditional classroom -- whether in pace and timing, course content, or evaluation and assessment. My interest in MOOCs extends to how the format might provide access to a university curriculum to populations that have not had this kind of access, as these are the populations that stand to gain most from it. But in addition to the flat, global learning community ritually invoked as the audience for MOOCs, we could benefit from thinking locally too. How can the online course format make possible new relationships not only with the most far-flung corners of the earth but with the neighborhoods and communities nearest to campus? Can we make MOOCs that foster meaningful links with the community, or create learning communities that cut across both the university and the online platform?
Among other alternative futures for MOOCs, I imagine more opportunities to collaborate with colleagues at other institutions. The single-delivery, “sage on stage” MOOC is no more the only online model available than the large lecture class is the only model at a brick-and-mortar school. While MOOCs are still for the most part free and non-credit-bearing, we should try out (and generate metrics to assess) as many different teaching arrangements as possible. I hasten to add that this exploration should include the intellectual freedom, along with the technological affordances, to create a MOOC of any kind, at any time, with anybody. With instructors and modules selected in advance, some infrastructural support at each site, and a set of shared principles for continuity of curriculum and presentation, anybody could create a MOOC. Universities like Penn have already begun asking faculty to sign non-compete agreements, presumably to curb these kinds of collaborations. For as long as such arrangements are permissible, however, I would urge researchers to collaborate on MOOCs themselves. This may be a tall order, but it is not, I think, impossible.
From various quarters we have heard recent calls to slow the MOOC bandwagon. An open letter from Harvard faculty to the Dean of the Faculty of Arts & Sciences calls for more oversight and reflective engagement with the question of how MOOCs offered through edX will affect “the higher education system as a whole.” I support these calls as consistent with the seriousness of the proposals to transform higher ed that are currently before us. From my modest position within the ranks of MIT administration, I have been glad to see great care on the part of faculty to ensure that a spirit of experimentation and exploration with regard to MOOCs remains compatible with the core principles of the university and with a residential education. In January 2014, Cathy Davidson at Duke will teach a Coursera MOOC combined with a simultaneous brick-and-mortar course on “The History and Future of Higher Ed,” with participation from classes at other schools and universities as well. These and other developments are to me reassuring signs, indicators of collaborative engagement around a topic of great importance. They indicate a willingness, too, to eschew rehearsing polarized opinions for or against MOOCs in order to attend at once to their innovative construction and to their effective and responsible implementation. The challenge is to remind ourselves periodically to think small (locally, incrementally) even as we heed calls to think big.