"The end of the old is not necessarily the beginning of the new."
Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified or ought to be signified by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg that it took form within.
So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative of linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. He is sold into slavery by them, winds up in Egypt as a slave, and then is falsely accused and imprisoned
A3. By virtue of his ability to interpret dreams, he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. Their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. The Israelites are freed from bondage and escape from Egypt
C2. The revelation at Sinai fully establishes their covenant with God
C3. After many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note that concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at the time, but also those not yet born, the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art and the essay and novel in literature yielded to Cubism and subsequent forms of modern art, and stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, what we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social, what Bateson referred to as metapatterns, the subject of further elaboration by biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored by John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during that period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah, chief among its laws the Ten Commandments, which include the fourth commandment, the one presented in greatest detail: to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Jonathan Schell has died. I first read "The Fate of the Earth" as a college freshman in Introduction to Political Theory, and it was and is one of those books that forever impacts the young mind. Jim Sleeper, writing in the Yale Daily News, gets to the heart of Schell's power: "From his work as a correspondent for The New Yorker in the Vietnam War through his rigorous manifesto for nuclear disarmament in 'The Fate of the Earth,' his magisterial re-thinking of state power and people's power in 'The Unconquerable World: Power, Nonviolence, and the Will of the People,' and his wry, rigorous assessments of politics for The Nation, Jonathan showed how varied peoples' democratic aspirations might lead them to address shared global challenges." The obituary in the New York Times adds: "With 'The Fate of the Earth' Mr. Schell was widely credited with helping rally ordinary citizens around the world to the cause of nuclear disarmament. The book, based on his extensive interviews with members of the scientific community, outlines the likely aftermath of a nuclear war and deconstructs the United States' long-held rationale for nuclear buildup as a deterrent. 'Usually, people wait for things to occur before trying to describe them,' Mr. Schell wrote in the book's opening section. 'But since we cannot afford under any circumstances to let a holocaust occur, we are forced in this one case to become the historians of the future — to chronicle and commit to memory an event that we have never experienced and must never experience.'"
In an interview, Simon Schama, author of the forthcoming book and public television miniseries "The Story of the Jews," uses early Jewish settlement in America as a way into why he thinks that Jews have often been cast as outsiders: "You know, Jews come to Newport, they come to New Amsterdam, where they run into Dutch anti-Semites immediately. One of them, at least — Peter Stuyvesant, the governor. But they also come to Newport in the middle of the 17th century. And Newport is significant in Rhode Island because Providence colony is founded by Roger Williams. And Roger Williams is a kind of fierce Christian of the kind of radical — in 17th-century terms — left. But his view is that there is no church that is not corrupt and imperfect. Therefore, no good Christian is ever entitled to form a government [or] entitled to bar anybody else’s worship. That includes American Indians, and it certainly includes the Jews. And there’s an incredible spark of fire of toleration that begins in New England. And Roger Williams is himself a refugee from persecution, from Puritan Massachusetts. But the crucial big point to make is that Jews have had a hard time when nations and nation-states have founded themselves on myths about soil, blood and tribe."
Noam Scheiber describes the "wakeful nightmare for the lower-middle-aged" that has taken over the world of technology. The desire for the new, new thing has led to disdain for age; "famed V.C. Vinod Khosla told a conference that 'people over forty-five basically die in terms of new ideas.'" The value of experience and the wisdom of age, or even of middle age, are scorned when everyone walks around with encyclopedias and instruction manuals in our pockets. The result: "Silicon Valley has become one of the most ageist places in America. Tech luminaries who otherwise pride themselves on their dedication to meritocracy don't think twice about deriding the not-actually-old. 'Young people are just smarter,' Facebook CEO Mark Zuckerberg told an audience at Stanford back in 2007. As I write, the website of ServiceNow, a large Santa Clara–based I.T. services company, features the following advisory in large letters atop its 'careers' page: 'We Want People Who Have Their Best Work Ahead of Them, Not Behind Them.'"
Kenan Malik wonders how non-believers can appreciate sacred art. Perhaps, he says, the godless can understand it as "an exploration of what it means to be human; what it is to be human not in the here and now, not in our immediacy, nor merely in our physicality, but in a more transcendental sense. It is a sense that is often difficult to capture in a purely propositional form, but one that we seek to grasp through art or music or poetry. Transcendence does not, however, necessarily have to be understood in a religious fashion, solely in relation to some concept of the divine. It is rather a recognition that our humanness is invested not simply in our existence as individuals or as physical beings but also in our collective existence as social beings and in our ability, as social beings, to rise above our individual physical selves and to see ourselves as part of a larger project, to project onto the world, and onto human life, a meaning or purpose that exists only because we as human beings create it."
The Nieman Journalism Lab has the straight scoop about the algorithm, written by Ken Schwenke, that wrote the first story about last week's West Coast earthquake. Although computer programs like Schwenke's may be able to take over journalism's function as a source of initial news (that is, a notice that something is happening), it seems unlikely that they will be able to take over one of its more sophisticated functions, which is to help people situate themselves in the world rather than merely know what's going on in it.
In an interview, Kate Beaton, the cartoonist responsible for the history and literature web comic Hark! A Vagrant, talks about how her comics, perhaps best described as academic parody, can be useful for teachers and students: "Oh yes, all the time! That’s the best! It’s so flattering—but I get it, the comics are a good icebreaker. If you are laughing at something, you already like it, and want to know more. If they’re laughing, they’re learning, who doesn’t want to be in on the joke? You can’t take my comics at face value, but you can ask, ‘What’s going on here? What’s this all about?’ Then your teacher gets down to brass tacks."
From the Hannah Arendt Center Blog
This week on the blog, our Quote of the Week comes from Arendt Center Research Associate Thomas Wild, who looks at the close friendship between Hannah Arendt and Alfred Kazin, who bonded over literature, writers, and the power of the written word.
In the most recent NY Review of Books, David Cole wonders if we've reached the point of no return on the issue of privacy:
“Reviewing seven years of the NSA amassing comprehensive records on every American’s every phone call, the board identified only one case in which the program actually identified an unknown terrorist suspect. And that case involved not an act or even an attempted act of terrorism, but merely a young man who was trying to send money to Al-Shabaab, an organization in Somalia. If that’s all the NSA can show for a program that requires all of us to turn over to the government the records of our every phone call, is it really worth it?”
Cole is beyond convincing in listing the dangers to privacy in the new national security state. Like many others in the media, he speaks the language of necessary trade-offs involved in living in a dangerous world, but suggests we are trading away too much and getting back too little in return. He warns that if we are not careful, privacy will disappear. He is right.
What is often forgotten and is absent in Cole’s narrative is that most people—at least in practice—simply don’t care that much about privacy. Whether snoopers promise security or better-targeted advertisements, we are willing to open up our inner worlds for the price of convenience. If we are to save privacy, the first step is articulating what it is about privacy that makes it worth saving.
Cole simply assumes the value of privacy and doesn’t address the benefits of privacy until his final paragraph. When he does come to explaining why privacy is important, he invokes popular culture dystopias to suggest the horror of a world without privacy:
More broadly, all three branches of government—and the American public—need to take up the challenge of how to preserve privacy in the information age. George Orwell’s 1984, Ray Bradbury’s Fahrenheit 451, and Philip K. Dick’s The Minority Report all vividly portrayed worlds without privacy. They are not worlds in which any of us would want to live. The threat is no longer a matter of science fiction. It’s here. And as both reports eloquently attest, unless we adapt our laws to address the ever-advancing technology that increasingly consumes us, it will consume our privacy, too.
There are two problems with such fear mongering in defense of privacy. The first is that these dystopias seem too distant. Most of us don’t experience the violations of our privacy by the government or by Facebook as intrusions. The second is that on a daily basis the fact that my phone knows where I am and that in a pinch the government could locate me is pretty convenient. These dystopian visions can appear not so dystopian.
Most writing about privacy simply assumes that privacy is important. We are treated to myriad descriptions of the ways privacy is violated. The intent is to shock us. But rarely are people shocked enough to actually respond in ways that protect the privacy they often say they cherish. We have collectively come to see privacy as a romantic notion, a long-forgotten idyll, exotic and even titillating in its possibilities, but ultimately irrelevant in our lives.
There is, of course, a reason why so many advocates of privacy don’t articulate a meaningful defense of privacy: It is because to defend privacy means to defend a rich and varied sphere of difference and plurality, the right and importance of people actually holding opinions divergent from one’s own. In an age of political correctness and ideological conformism, privacy sounds good in principle but is less welcome in practice when those we disagree with assert privacy rights. Thus many who defend privacy do so only in the abstract.
When it comes to actually allowing individuals to raise their children according to their religious or racial beliefs or when the question is whether people can marry whomever they want, defenders of privacy often turn tail and insist that some opinions and some practices must be prohibited. Over and over today, advocates of privacy show that they value an orderly, safe, and respectful public realm and that they are willing to abandon privacy in the name of security and a broad conception of civility according to which no one should have to encounter opinions and acts that give them offense.
The only major thinker of the last 100 years who insisted fully and consistently on the crucial importance of a rich and vibrant private realm is Hannah Arendt. Privacy, Arendt argues, is essential because it is what allows individuals to emerge as unique persons in the world. The private realm is the realm of “exclusiveness,” it is that realm in which we “choose those with whom we wish to spend our lives, personal friends and those we love.” The private choices we make are guided by nothing objective or knowable, “but strikes, inexplicably and unerringly, at one person in his uniqueness, his unlikeness to all other people we know.” Privacy is controversial because the “rules of uniqueness and exclusiveness are, and always will be, in conflict with the standards of society.” Arendt’s defense of mixed marriages (and by extension gay marriages) proceeds—no less than her defense of the right of parents to educate their children in single-sex or segregated schools—from her conviction that the uniqueness and distinction of private lives need to be respected and protected.
Privacy, for Arendt, is connected to the “sanctity of the hearth” and thus to the idea of private property. Indeed, property itself is respected not on economic grounds, but because “without owning a house a man could not participate in the affairs of the world because he had no location in it which was properly his own.” Property guarantees privacy because it enforces a boundary line, “a kind of no man’s land between the private and the public, sheltering and protecting both.” In private, behind the four walls of house and hearth, the “sacredness of the hidden” protects men from the conformist expectations of the social and political worlds.
In private, shaded from the conformity of societal opinions as well as from the demands of the public world, we can grow in our own way and develop our own idiosyncratic character. Because we are hidden, “man does not know where he comes from when he is born and where he goes when he dies.” This essential darkness of privacy gives flight to our uniqueness, our freedom to be different. It is in privacy, in other words, that we become who we are. What this means is that without privacy there can be no meaningful difference. The political importance of privacy is that privacy is what guarantees difference and thus plurality in the public world.
Arendt develops her thinking on privacy most explicitly in her essays on education. Education must perform two seemingly contradictory functions. First, education leads young persons into the public world, introducing and acclimating them to the traditions, public language, and common sense that precede them. Second, education must also guard the child against the world, caring for the child so that “nothing destructive may happen to him from the world.” The child, to be protected against the destructive onslaught of the world, needs the privacy that has its “traditional place” in the family.
Because the child must be protected against the world, his traditional place is in the family, whose adult members return back from the outside world and withdraw into the security of private life within four walls. These four walls, within which people’s private family life is lived, constitute a shield against the world and specifically against the public aspect of the world. This holds good not only for the life of childhood but for human life in general…Everything that lives, not vegetative life alone, emerges from darkness and, however strong its natural tendency to thrust itself into the light, it nevertheless needs the security of darkness to grow at all.
The public world is unforgiving. It can be cold and hard. All persons count equally in public, and little if any allowance is made for individual hardships or the bonds of friendship and love. Only in privacy, Arendt argues, can individuals emerge as unique individuals who can then leave the private realm to engage the political sphere as confident, self-thinking, and independent citizens.
The political import of Arendt’s defense of privacy is that privacy is what allows for meaningful plurality and differences that prevent one mass movement, one idea, or one opinion from imposing itself throughout society. Just as Arendt valued the constitutional federalism in the American Constitution because it multiplied power sources through the many state and local governments in the United States, so too did she value privacy because it nurtures meaningfully different and even opposed opinions, customs, and faiths. She defends the regional differences in the United States as important and even necessary to preserve the constitutional structure of dispersed power that she saw as the great bulwark of freedom against the tyranny of the majority. In other words, Arendt saw privacy as the foundation not only of private eccentricity, but also of political freedom.
Cole offers a clear-sighted account of the ways that government is impinging on privacy. It is essential reading and it is your weekend read.
Is there such a thing as too much free speech? The Editors at N+1 think so. They posted an editorial this week lamenting the overabundance of speaking that has swept over our nation like a plague:
A strange mania governs the people of our great nation, a mania that these days results in many individual and collective miseries. This is the love of opinion, of free speech—a furious mania for free, spoken opinion. It exhausts us.
The N+1 Editors feel besieged. And we can all sympathize with their predicament. Too many people are writing blogs; too many voices are tweeting; too many friends are pontificating about something on Facebook. And then there are the trolls. It’s hard not to sympathize with our friends at N+1. Why do we have to listen to all of these folks? Shouldn’t all these folks just stop and read N+1 instead?
Of course it is richly hypocritical for the Editors of an opinion journal to complain of an overabundance of opinions. And N+1 acknowledges and even trumpets its hypocrisy.
We are aware that to say [that others should stop expressing their opinions] (freely! our opinion!) makes us hypocrites. We are also aware that America’s hatred of hypocrisy is one of few passions to rival its love of free speech—as if the ideal citizen must see something, say something, and it must be the same thing, all the time. But we’ll be hypocrites because we’re tired, and we want eventually to stop talking.
Beyond the hypocrisy, N+1 has a point: The internet has unleashed packs upon packs of angry, often rabid dogs. These haters attack anything and everything, including each other. Hate and rage are everywhere:
The ragers in our feeds, our otherwise reasonable friends and comrades: how do they have this energy, this time, for these unsolicited opinions? They keep finding things to be mad about. Here, they’ve dug up some dickhead writer-professor in Canada who claims not to teach women writers in his classes. He must be denounced, and many times! OK. Yes. We agree. But then it’s some protest (which we support), and then some pop song (which we like, or is this the one we don’t like?), and then some egregiously false study about austerity in Greece (full of lies!). Before we know it, we’ve found ourselves in a state of rage, a semi-permanent state of rage in fact, of perma-rage, our blood boiled by the things that make us mad and then the unworthy things that make other people mad.
Wouldn’t it be nice if public discourse were civil and loving? I too would prefer a rational discussion about the Boycott, Divestment, and Sanctions movement. I would be thrilled if the Tea Party and Occupy Wall Street could join forces to fight political corruption and the over-bureaucratization of government that disempowers individuals. And of course I would love it if those who religiously attack Hannah Arendt for her opinion that Adolf Eichmann was a superficial and banal man responsible for unspeakable evils could find common cause with those who find her provocative, moving, and meaningful.
Of course it is exhausting dealing with those with whom we don’t see eye to eye. And there is always the impulse to say simply, “enough! I just don’t want to hear your opinions anymore.” This is precisely what N+1 is saying: “We don’t care!”
We assert our right to not care about stuff, to not say anything, to opt out of debate over things that are silly and also things that are serious—because why pretend to have a strong opinion when we do not? Why are we being asked to participate in some imaginary game of Risk where we have to take a side? We welcome the re-emergence of politics in the wake of the financial crash, the restoration of sincerity as a legitimate adult posture. But already we see this new political sincerity morphing into a set of consumer values, up for easy exploitation.
Underlying N+1’s ironic distance from the arena of opinions and discord is a basic anti-political fantasy that opinion is a waste of time, if it is not destructive. Wouldn’t it be better to skip the opinions and the battles and the disagreements and just cut straight to the truth? Just listen to the truth.
Truth is not an imperative, but something that must be discovered. Unlike liquid opinion, truth does not always circulate. It is that which you experience, deeply, and cannot forget. The right to not care is the right to sit still, to not talk, to be subject to unclarity and allow knowledge to come unbidden to you. To be in a constant state of rage, by contrast, is only the other side of piety and pseudoscience, the kind of belief that forms a quick chorus and cannot be disproved. Scroll down your Facebook feed and see if you don’t find one ditto after another. So many people with “good” or “bad politics,” delivered with conviction to rage or applause; so little doubt, error, falsifiability—surely the criteria by which anything true, or democratic, could ever be found.
What N+1 embraces is truth over opinion and escapism over engagement with others. What they forget, however, is that there are two fundamentally opposed routes to truth.
In one, the truthseeker turns away from the world of opinion. The world in which we live is a world of shadows and deceptions. Truth won’t be found in the marketplace of ideas, but on the mountaintop in the blinding light of the sun. Like Plato’s philosopher king, we must climb out of the cave and ascend to the heights. Alone, turned toward the heavens and the eternal truths that surf upon the sunrays, we open ourselves to the experience of truth.
A second view of truth is more mundane. The truthseeker stays firmly planted in the world of opinion and deception. Truth is a battle and it is fought with the weapons of words. Persuasion and rhetoric replace the light of the sun. The winner gains not insight but power. Truth doesn’t emerge from an experience; truth is the settled sentiment of the most persuasive opinion.
Both the mountain path and the road through the marketplace are paths to truth, but of different kinds. Philosophers and theologians may very well need to separate themselves from the world of opinion if they are to free themselves to experience truth. Philosophical truths, as Hannah Arendt argues, address “man in his singularity” and are thus “unpolitical by nature.” For her, philosophy and also philosophical truths are anti-political.
Politicians cannot concern themselves with absolute truths; they must embrace the life of the citizen and the currency of opinion rather than the truths of the philosopher. In politics, “no opinion is self-evident,” as Arendt understood. “In matters of opinion, but not in matters of [philosophical] truth, our thinking is discursive, running as it were, from place to place, from one part of the world to another, through all kinds of conflicting views, until it finally ascends from these particularities to some impartial generality.” In politics, truth may emerge, but it must go through the shadows that darken the marketplace.
What Arendt understands about political truths is that truths do indeed “circulate” in messy and often uncomfortable ways that the N+1 editorial board wishes to avoid. Political thought, Arendt argues, “is representative.” By that she means that it must sample as many different viewpoints and opinions as possible. “I form an opinion by considering a given issue from different viewpoints, by making present to my mind the standpoints of those who are absent; that is, I represent them.” It is in hearing, imagining, and representing opposing and discordant views that one comes to test out his or her own views. It is not a matter of empathy, of feeling like someone else. It is rather an imaginative experiment in which I test my views against all comers. In this way, the enlarged mentality of imaginative thinking is the prerequisite for judgment.
When Arendt said of Adolf Eichmann that he was possessed of the “fearsome word-and-thought-defying banality of evil” because he did not think, what she meant was that he was simply incapable or unwilling to think from the perspective of others. His use of clichés was not thoughtlessness itself, but was evidence that he had barricaded himself inside an ideological cage. Above all, his desire to make others including Jews understand his point of view—his hope that they could see that he was a basically good man caught up on the wrong side of history—was for Arendt evidence of his superficiality and his lack of imagination. He simply could not and did not ever allow himself to challenge his own rationalizations and justifications by thinking from the perspective of Jews and his other victims. What allowed Eichmann to so efficiently dispatch millions to their deaths was his inability to think and encounter opinions that were different from his own.
In the internet age we are bombarded with such a diversity of angry, insulting, stupid, and offensive viewpoints that it is only natural to alternate between the urge to respond violently and the urge to withdraw.
It is easy to deride political opinion and idolize truth. But that is to forget that “seen from the viewpoint of politics, truth has a despotic character.”
Political thinking requires that we resist both the desire to fight opinions with violence and the desire to flee from opinions altogether. Instead, we need to learn to think in and with others whose opinions we often hate. We must find in the melee of divergent and offending opinions the joy that exists in the experience of human plurality. We don’t need to love or agree with those we find offensive; but so long as they are talking instead of fighting, we should respect them and listen to them. Indeed, we should care about them and their beliefs. That is why the N+1 manifesto for not caring is your weekend read.
This Quote of the Week was originally published on September 3, 2012.
It can be dangerous to tell the truth: “There will always be One against All, one person against all others. [This is so] not because One is terribly wise and All are terribly foolish, but because the process of thinking and researching, which finally yields truth, can only be accomplished by an individual person. In its singularity or duality, one human being seeks and finds – not the truth (Lessing) –, but some truth.”
-Hannah Arendt, Denktagebuch, Book XXIV, No. 21
Hannah Arendt wrote these lines when she was confronted with the severe and often unfair, even slanderous, public criticism launched against her and her book Eichmann in Jerusalem after its publication in 1963. The quote points to her understanding of the thinking I (as opposed to the acting We) on which she bases her moral and, partly, her political philosophy.
This thinking I is defined, with Kant, as selbstdenkend (self-thinking [“singularity”]) and an-der-Stelle-jedes-andern-denkend (i.e., in Arendt’s terms, thinking representatively or practicing the two-in-one [“duality”]). Her words also hint at an essay she published in 1967 titled “Truth and Politics,” wherein she takes up the idea that it is dangerous to tell the truth, factual truth in particular, and considers the teller of factual truth to be powerless. Logically, the All are the powerful, because they may determine what at a specific place and time is considered to be factual truth; their lies, in the guise of truth, constitute reality. Thus, it is extremely hard to fight them.
In answer to questions posed in 1963 by the journalist Samuel Grafton regarding her report on Eichmann and published only recently, Arendt states: “Once I wrote, I was bound to tell the truth as I see it.” The statement reveals that she was quite well aware of the fact that her story, i.e., the result of her own thinking and researching, was only one among others. She also realized the lack of understanding and, in many cases, of thinking and researching, on the part of her critics.
Thus, she lost any hope of being able to publicly debate her position in a “real controversy,” as she wrote to Rabbi Hertzberg (April 8, 1966). By the same token, she determined that she would not entertain her critics, as Socrates did the Athenians: “Don’t be offended at my telling you the truth.” Reminded of this quote from Plato’s Apology (31e) in a supportive letter from her friend Helen Wolff, she acknowledged the reference, but acted differently. After having made up her mind, she wrote to Mary McCarthy: “I am convinced that I should not answer individual critics. I probably shall finally make, not an answer, but a kind of evaluation of this whole strange business.” In other words, she did not defend herself in following the motto “One against All,” which she had perceived and noted in her Denktagebuch. Rather, as announced to McCarthy, she provided an “evaluation” in the 1964 preface to the German edition of Eichmann in Jerusalem and later when revising that preface for the postscript of the second English edition.
Arendt also refused to act in accordance with the old saying: Fiat iustitia, et pereat mundus (let there be justice, though the world perish). She writes – in the note of the Denktagebuch from which today’s quote is taken – that such acting would reveal the courage of the teller of truth “or, perhaps, his stubbornness, but neither the truth of what he had to say nor even his own truthfulness.” Thus, she rejected an attitude known in German cultural tradition under the name of Michael Kohlhaas. A horse trader living in the 16th century, Kohlhaas became known for endlessly and in vain fighting injustice done to him (two of his horses were stolen on the order of a nobleman) and finally taking the law into his own hands by setting fire to houses in Wittenberg.
Even so, Arendt has been praised as a woman of “intellectual courage” with regard to her book on Eichmann (see Richard Bernstein’s contribution to Thinking in Dark Times).
Intellectual courage based on thinking and researching was rare in Arendt’s time and has become even rarer since then. But should Arendt therefore matter only nostalgically? Certainly not. Her emphasis on the benefits of thinking as a solitary business remains current. Consider, for example, Sherry Turkle, a sociologist at MIT and author of the recent book Alone Together. In an interview with Peter Haffner (published on July 27, 2012, in SZ Magazin), she argues that individuals who become absorbed in digital communication lose crucial components of their faculty of thinking. Turkle says (my translation): Students who spend all their time and energy on communication via SMS, Facebook, etc., “can hardly concentrate on a particular subject. They have difficulty thinking a complex idea through to its end.” No doubt, this sounds familiar to all of us who know about Hannah Arendt’s effort to promote thinking (and judging) in order to make our world more human.
To return to today’s quote: It can be dangerous to tell the truth, but thinking is dangerous too. Once in a while, not only the teller of truth but the thinking 'I' as well may find himself or herself in the position of One against All.
The response has been swift and negative to the Rolling Stone magazine cover—a picture of Dzhokhar Tsarnaev, who with his now-dead brother planted deadly homemade bombs near the finish line of the Boston Marathon. The cover features a picture Tsarnaev himself posted on his Facebook page before the bombing. It shows him as he wanted to be seen—that itself has offended many, who ask why he is not pictured as a suspect or convict. In the photo he is young, hip, handsome, and cool. He could be a rock star, and given the context of the Rolling Stone cover, that is how he appears.
The cover is jarring, and that is intended. It is controversial, and that was probably also intended. Hundreds of thousands of comments on Facebook and around the web are critical and angry, asking how Rolling Stone could portray the bomber as a rock-star. They overlook or ignore the text accompanying the photo on the cover, which reads: “The Bomber. How a Popular, Promising Student Was Failed by His Family, Fell Into Radical Islam, and Became a Monster.” CVS and other retailers have announced they will not sell the magazine in their stores.
That is unfortunate, for the story written by Janet Reitman is exceptionally good and deserves to be read.
Controversies like this have a perverse effect. Just as the furor over Hannah Arendt’s Eichmann in Jerusalem resulted in the viral dissemination of her claims about the Jewish leaders, so too will this Rolling Stone cover be seen by millions of people who otherwise would never have heard of Rolling Stone. What is more, such publicity makes it ever less likely that the story itself will be read seriously, just as Arendt’s book was criticized by everyone, but read by few.
Reitman’s narrative itself is unexceptional. It is a common story line: young, normal kid becomes radicalized and does something none of his old friends can believe he could do. This is a now familiar narrative that we hear in the wake of the tragedies in Newtown (Adam Lanza was described as a nice quiet kid) and Columbine (Time’s cover announced “The Monsters Next Door.”)
This is also the narrative that Rolling Stone managing editor Will Dana embraced to defend the cover on NPR, arguing it was an “apt image because part of what the story is about is what an incredibly normal kid [Tsarnaev] seemed like to those who knew him best back in Cambridge.” It was echoed, too, by Erin Burnett on CNN, who recently invoked Hannah Arendt’s idea of the “banality of evil.” In the easy frame the story offers, Tsarnaev was a good kid, part of a striving immigrant family, someone who loved multi-racial America. And then something went wrong. He found Islam; his family fell apart; and he became a monster.
This story is too simple. And yet within the Rolling Stone story, there is a wealth of information and reporting that does give a nuanced and thoughtful portrayal of Tsarnaev’s journey into the heart of evil.
One fact is important to note: Tsarnaev is not Eichmann. Eichmann was a member of the SS, a Nazi security service engaged in world war and dedicated to wiping certain races of peoples off the face of the earth. He committed genocide as part of a system of extermination, something both worse than and yet less messy than murder itself. It is Tsarnaev, who had no state apparatus behind him, who became a cold-blooded murderer. The problems that Hannah Arendt thought the court in Jerusalem faced with Eichmann—that he was a new type of criminal—do not apply in Tsarnaev’s case. He is a murderer. To understand him is not to understand a new type of criminal. And yet it is a worthy endeavor to try to understand why more and more young men like Tsarnaev are so easily radicalized and drawn to murdering innocent people in the name of a cause.
Both Eichmann and Tsarnaev were from upwardly striving bourgeois families that struggled with economic setbacks. Eichmann was white and Austrian, Tsarnaev an immigrant in Cambridge, but both were economically disaffected. Tsarnaev wanted to make money and, like his parents, dreamed of a better life.
Tsarnaev’s family had difficulty fitting in with U.S. culture. His father was ill and could not work. His mother sought to earn money. And his older brother, whom he idolized, saw his dreams of Olympic boxing dashed partly because he was not a citizen. The brother increasingly turned to a radical version of Islam. When Tsarnaev’s parents both returned to Dagestan, he fell increasingly under his older brother’s influence.
Like Eichmann, Tsarnaev appears to have adopted an ideology that provided a coherent and meaningful narrative that gave his life significance. One can see this in a number of tweets and statements that are quoted in the article. For example, just before the bombing, he tweeted:
"Evil triumphs when good men do nothing."
"If you have the knowledge and the inspiration all that's left is to take action."
"Most of you are conditioned by the media."
Like Eichmann, Tsarnaev came to see himself as a hero, someone willing to suffer and even die for a noble cause. His cause was different—anti-American jihad instead of anti-Semitic Nazism—but he was an ideological idealist, a joiner, someone who found meaning and importance in belonging to a movement. A smart and talented and by most accounts good young man, he was lost and adrift, searching for someone and something to give his life purpose. He found that someone in his brother and that something in jihad against America, the land that previously he had so embraced. And he became someone who believed that what he was doing was right and necessary, even if he understood also that it was wrong.
We see clearly this ambivalent understanding of right and wrong in the note Tsarnaev apparently scrawled while he was hiding in a boat before he was captured. Here is how Reitman’s article describes what he wrote:
When investigators finally gained access to the boat, they discovered a jihadist screed scrawled on its walls. In it, according to a 30-count indictment handed down in late June, Jihad [Tsarnaev's nickname] appeared to take responsibility for the bombing, though he admitted he did not like killing innocent people. But "the U.S. government is killing our innocent civilians," he wrote, presumably referring to Muslims in Iraq and Afghanistan. "I can't stand to see such evil go unpunished. . . . We Muslims are one body, you hurt one, you hurt us all," he continued, echoing a sentiment that is cited so frequently by Islamic militants that it has become almost cliché. Then he veered slightly from the standard script, writing a statement that left no doubt as to his loyalties: "Fuck America."
Eichmann too spoke of his shock and disapproval of killing innocent Jews, but he justified doing so for the higher Nazi cause. He also said that when he found out about the sufferings of Germans at the hands of the allies, it made it easier for him to justify what he had done, because he saw it as equivalent. The fact that the Germans were aggressors, that they had started the war, and that they were killing and torturing innocent people simply did not register for Eichmann, just as it did not register for Tsarnaev that the people in the Boston marathon were innocent. There are, of course, innocent people in Iraq and Afghanistan who have died at the hands of U.S. bombs. Even for those of us who were against the wars and question their sense and justification, however, there is a difference between death in a war zone and terrorism.
The Rolling Stone article does a good job of chronicling Tsarnaev's slide into a radical jihadist ideology, one mixed with conspiracy theories.
The Prophet Muhammad, he noted on Twitter, was now his role model. "For me to know that I am FREE from HYPOCRISY is more dear to me than the weight of the ENTIRE world in GOLD," he posted, quoting an early Islamic scholar. He began following Islamic Twitter accounts. "Never underestimate the rebel with a cause," he declared.
His rebellious cause was to awaken Americans to their complicity both in the bombing of innocent Muslims and also to his belief in the common conspiracy theory that America was behind the 9/11 attacks. In one Tweet he wrote: "Idk [I don’t know] why it's hard for many of you to accept that 9/11 was an inside job, I mean I guess fuck the facts y'all are some real #patriots #gethip."
Besides these tweets that offer a provocative insight into Tsarnaev's emergent ideological convictions, the real virtue of the article is its focus on Tsarnaev's friends, his school, and his place in American youth culture. While his friends certainly do not support or condone what Tsarnaev did, many share some of his conspiratorial and anti-American beliefs. Here are two descriptions of the mainstream nature of many of his beliefs:
To be fair, Will and others note, Jahar's perspective on U.S. foreign policy wasn't all that dissimilar from a lot of other people they knew. "In terms of politics, I'd say he's just as anti-American as the next guy in Cambridge," says Theo.
This is not an uncommon belief. Payack, who [was Tsarnaev's wrestling coach and mentor and] also teaches writing at the Berklee College of Music, says that a fair amount of his students, notably those born in other countries, believe 9/11 was an "inside job." Aaronson tells me he's shocked by the number of kids he knows who believe the Jews were behind 9/11. "The problem with this demographic is that they do not know the basic narratives of their histories – or really any narratives," he says. "They're blazed on pot and searching the Internet for any 'factoids' that they believe fit their highly de-historicized and decontextualized ideologies. And the adult world totally misunderstands them and dismisses them – and does so at our collective peril," he adds.
The article presents a sad portrait of youth culture, and not just because all these “normal” kids are smoking “a copious amount of weed.” The jarring realization is that these talented and intelligent young people at a good school in a storied neighborhood come off so disaffected. What is more, their beliefs in conspiracies are accepted by the adults in their lives as commonplaces; their anti-Americanism is simply a noted fact; and their idolization of slacking (Tsarnaev’s favorite word, his friends say, was “sherm,” Cambridge slang for “slacker”) is seen as cute. The adults show painfully little concern to insist that these young people face facts and confront unserious opinions.
In short, the young people in Tsarnaev's story appear to be abandoned by adults to their own youthful and quite fanciful views of reality. Youth culture dominates, and adult supervision seems absent. There is seemingly no one who, in Arendt's language from "The Crisis in Education," takes responsibility for teaching them to love the world as it is.
The Rolling Stone article and cover do not glorify a monster; but they do play on two dangerous trends in modern culture that Hannah Arendt worried about in her writing: First, the rise of youth culture and the abandonment of adult authority in education; and second, the fascination bourgeois culture has for vice and the short distance that separates an acceptance of vice from an acceptance of monstrosity. If only all the people who are so concerned about a magazine cover today were more concerned about the delusions and fantasies of Tsarnaev, his friends, and others like them.
Taking responsibility for teaching young people to love the world is the very essence of what Arendt understands education to be. It will be the topic of the Hannah Arendt Center's upcoming conference "Failing Fast: The Crisis of the Educated Citizen." Registration for the conference opened this week. For now, ignore the controversy and read Reitman's article "Jahar's World." It is your weekend read. It is as good an argument for thinking seriously about the failure of our approach to education as one can find.
For two years I taught literature, reading, and writing at a public university in one of New York City's outer boroughs. Of course, having come out of a liberal arts "thinking" institution, what I really thought (maybe hoped) I was teaching was new perspectives. Ironically, the challenge that most struck me was not administrative, nor was it class size, terrible grammar, or endless hours of grading; the most pressing obstacle lay in making a case for the value of "thinking."
I say "case" because I regularly felt that my passions and beliefs, as well as my liberal arts education, went on daily trial. I had originally come from a hardscrabble immigrant reality, but my perception of reality had been altered by my education, and as an educator I felt the need to authenticate my progressive (core text) education with my students.
I was regularly reminded that the immediate world of the "average" student (citizen), with all its pressing, "real" concerns, does not immediately open itself to "thought" in the liberal arts sense. We are a specialized, automated, struggling, and hypercompetitive society. The "learning time" of a student citizen is spent acquiring "marketable," differentiating skills, while "free time" is the opportunity to decompress from, or completely escape, the pressures of competitive skill acquisition. The whole cycle is guided by an air of anxiety fostered by our national education philosophy, as well as by the troubled economy and scattered society at large. I don't think one can teach the humanities without listening to one's students, and listening to the students calls for a deep inventory of the value of "thought" in the humanities sense, and then, ultimately, of how to most truthfully communicate this value to the students.
I need to add here that my students were quite smart and insightful. This made the challenge even greater. Theirs was an intelligence of realism. I needed to both acknowledge and sway their perspective, as well as my own.
Each semester I began with a close reading of David Foster Wallace's commencement speech at Kenyon College, "This Is Water." He begins with the parable of two young fish who swim past an older fish, who asks them, "How's the water?" The little ones swim on and only later ask each other, "What is water?" A didactic parable, a cliche -- yes -- but Wallace goes on to deconstruct the artifice of commencement speeches, parables, and cliches, and then rebuilds them. Having so skillfully deconstructed them, he has invited his listeners into the form making, and as he communicates the truth beneath what had earlier seemed lofty or cliched, the listeners follow him toward meaning making. Ultimately Wallace states that education is "less about teaching you how to think, and more about teaching you of the choice in what to think about." To have agency is to be a meaning maker. And as more and more cultural institutions artfully vie for the citizen's devotion and loyalty -- politics and religion, but even more so corporate houses and pop culture designs -- the call to choose seems ever more muted in the ever-growing noise of institutional marketing.
The choice, for so many students today, is simply how to most skillfully compartmentalize themselves and their lives in the face of the anxieties of their immediate world. The choice for many young teachers, facing their own set of related anxieties, is how far they are willing to step away from the ideal of the learning-living-teaching integration model -- so easy is it today for an educator simply to become disenchanted, frustrated, and aloof. Sometimes, "thinking" is the process of choosing what to keep and what to give away.
Wallace's insightful, no-b.s., humorous, and sincere tone resonated with my students -- that is, of course, until they found out that Wallace killed himself. Then that's what everyone wanted to focus on. I cannot blame them. There is a 'text' to 'personal' mystery, a 'content' to 'context' disjunction that opens itself at such a revelation, a mystery that the "thinking" mind wants to explore. The modern "thinking" mind draws little separation between the lofty and the sublime, the public and the personal. Such is a byproduct of a generation raised on reality television and celebrity stories. I, in all sincerity, cannot judge this. My generation, the X'ers who came of age on the cusp of the Millennials, were culturally educated by MTV, The Real World, and Road Rules, and thus we crave hip, colorful, appropriately gentrified spaces to occupy -- think of artist collectives, or Facebook and Google working environments (bean bags, chill and chic prescription sunglasses, lounge happy hours with juice bars, untraditional working hours, colorful earth tones). But I digress; I meant to make some observation about "thinking."
I was excited to teach what excited me: I began with Wallace, then Kafka, O'Connor (Flannery or Frank), Platonov, Carver, Babel, Achebe, Kundera, Eliot, etc... It is, essentially, the Seven Sisters freshman reading list, a popular catalogue of classic stories peppered with some international obscurity. It is the "cool" thing in liberal arts. But over and over my students came to me complaining that they could not find it relevant to their lives. After such reports I would tweak my lesson plans to give a greater introduction to the works, going deeper into the philosophical tenets of the stories and into the universal reward of being able to utilize the tools of the thinking, writing mind. Induce, deduce, compare, contrast, relate. "Give it greater shape," I would say. "Breathe life into it."
To have the skills to decipher plot, to record the echo of a narrative, to infer characterization from setting, to understand the complex structure of a character, to be invited to participate in the co-creation of a narrative that gently guides you through action but leaves the moral implications up to the reader -- these are "indispensable," I would advise my students. "Indispensable for human agency." Some would slowly gravitate toward my vision as I prodded further and further into their motivations for being in school, for their careers, and for other 'relevant' choices. Yet they often felt only like visitors in my library, preparing to check out and return to the "default" education thinking mode as soon as the quarter, mid-, or end-of-semester exam periods began. The pressures of what they call "the real world" are much stronger than the ghosts of books and introspective thought -- vague, powerless, intangible.
"The real world": here I am reminded of the scene from The Matrix when Morpheus unveils to Neo "the desert of the real" -- a barren wasteland in which human energy is merely a power source, nourished for consumption. The Matrix, I will add here, draws on the work of Jean Baudrillard, a French philosopher who warns of a modern society existing in consumption and entertainment, devoid of meaning making -- the urge toward agency in hibernation, the map toward meaning defunct. In describing this new world he coined the phrase "the desert of the real." Again, I fall into tangential thought.
I needed to find a way to invite, seduce, capture my students. I tried using myself as a conduit.
I pride myself on the fact that I am an immigrant, a former "at risk" student; that my tattoos all have mythological meaning and thought behind them; that I am a high-school dropout with credentials to my name, a top-tier education, a master's degree, etc... I felt these things could help me bridge, for my students, the platforms of reality-setting discourse and humanistic thought. I believed, and still believe, that real "thinking" is indispensable to being human, to being free, and to the ability to have fun and play with the world.
Again, my students would, at times, meet me in the middle space I wanted to create, though rarely did this space come alive for them; instead they laid their heads to the sound of another's palpitation and breath, and then moved on. Maybe I planted a seed, I like to think. But then, maybe, they were bringing me somewhere as well.
They could not recklessly follow me, or I them. It was an issue of pragmatic bonds. For a moment, my class, or an individual student I was reading with, would delve into the power of words with me, and the ending of Andrei Platonov's "The River Potudan" would finally break through the events of the page: "Not every grief can be comforted; there is a grief that ends only after the heart has been worn away in long oblivion, or in distraction amidst life's everyday concerns." And my students would draw new understanding from the passage, enter it through a word or phrase that could unlock that middle space between their worlds and the world of literature, philosophy, metaphor. "Grief," "long oblivion," "life's everyday concerns" -- all of a sudden my students would give these new meaning, now only slightly guided by the story, letting their lives find a grip on the reins. They would find new connections, and then again they would return to the "real" world.
More and more I struggled to make thinking relevant. “Will this help me get a better job?” I was asked.
Thinking about it, I had to confront my own struggles with this question. I know the answers. I know the programmed liberal arts answer, and the "real" answer. I know that the liberal arts answer exposes the "real" as at best lacking, at worst empty. I also know that the real is real; it happens in real time, removed from the concerns of literature, poetry, and philosophy, which concern themselves with the work of man's eternity.
"Unlikely," I would answer. For God's sake, though I was teaching all these things I cared so deeply about, I also worked nights as a bartender to satisfy the demands of the real. I had to produce something consumable, and all of my learning and thoughts on thinking are not that.
Here I acknowledge that this answer is not entirely true. We can find jobs that call for liberal arts skills, but these are few and far between and rarely afford a comfortable standard of living. We may also argue that liberal arts skills contribute to one's ability to perform better at, and have a greater understanding of, one's job, but this argument does not lend itself to substantial evidence, no matter how much I may actually believe it. This was the litmus test of my "thinking," and it survives only in my embracing the privacies of my world, in my having chosen my private world despite and above the "real."
“Unlikely.” And where does that leave us?
Ultimately, all I have as a conscious being is the ability to tell stories, to choose and create my narrative from the scattered world I am provided. Ultimately, after deconstructing both the “real” and the “lofty” I could only encourage my students to choose their own themes. To the question of “what is water?” I could only answer, “the desert.”
Oddly enough, and as "unlikely" as it may seem, when I answered with honesty, to them as well as to myself, they followed -- we could talk.
Thomas Levin of Princeton came to Bard Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years, and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when the media's obsession with drones is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors with newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization emerges that we are being watched, and this sense pervades the popular consciousness. In looking to these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls “rhetoric of surveillance.”
In short, we talk and think constantly about the fact that we are, or may be, being watched. This cannot but change the way we behave and act. Levin then poses this question: What is the emerging drone imaginary?
To answer that question it is helpful to revisit an uncannily prescient imagining of the rise of drones in a text written over half a century ago, Ernst Jünger's The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by science fiction novelist Bruce Sterling, Jünger's text centers on a job interview between an unnamed former light-cavalry officer and Giacomo Zapparoni, the secretive, filthy rich, and powerful proprietor of The Zapparoni Works, which "manufactured robots for every imaginable purpose." Zapparoni's secret, however, is that instead of big, hulking robots, he specialized in Lilliputian robots that gave "the impression of intelligent ants."
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—that exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination and subjection to technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage by drones and forget that if we had to send in armies to do these tasks, the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don't offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, we will all think harder before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we humans interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestionable love that humans are beginning to prefer spending time with their machines rather than with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with a speed and objectivity that humans can only dream of. In every area of human life where humans once were thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do better than humans, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper, turn off YouTube, and instead revisit Ernst Jünger's classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling's introduction to The Glass Bees here.