Hannah Arendt Center for Politics and Humanities
14Apr/14

Hiatus, Discontinuity, and Change


"The end of the old is not necessarily the beginning of the new."

Hannah Arendt, The Life of the Mind

This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.

Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.

Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified or ought to be signified by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg that it took form within.


So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:

We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.

Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:

No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.

I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.

The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.

And while the invention of history, as a written narrative of linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story incorporates the idea of a hiatus in overlapping structures:

A1.  Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers

A2.  he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned

A3.  by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister

 

B1.  Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan

B2.  their descendants are enslaved, oppressed, and persecuted

B3.  Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert

 

C1.  the Israelites are freed from bondage and escape from Egypt

C2.  the revelation at Sinai fully establishes their covenant with God

C3.  after many trials, they return to the Promised Land

It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.

In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).

Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:

The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.

Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at that time but also those not present, the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.

Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that gave us perspective in art, and the essay and the novel in literature, yielded to Cubism and subsequent forms of modern art, and to stream of consciousness in writing.
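Arendt and Drucker offer no mathematics, of course, but readers who wonder what chaos and complexity theory mean by a bifurcation point can look to the logistic map, the textbook illustration of the concept. Here is a minimal sketch in Python, with parameter values chosen purely for illustration:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). Past a critical value of the
# parameter r, a single stable state splits ("bifurcates") into two.

def long_run_states(r, x0=0.5, warmup=1000, keep=8):
    """Iterate past the transient, then report the values the system settles into."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 4))
    return sorted(seen)

print(long_run_states(2.9))  # [0.6552] -- one stable equilibrium
print(long_run_states(3.2))  # [0.513, 0.7995] -- the system now oscillates between two states
```

Nothing changes in the rule itself; a small shift in a single parameter is enough to end one regime and begin another, which is what makes the bifurcation point an apt formal cousin of Arendt's hiatus.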


Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.

The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.

There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.

Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.

McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.

The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
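The claim that the links, not the nodes, constitute the network is easy to make concrete. In the minimal Python sketch below (the node names and link sets are invented for illustration), the same five nodes form two entirely different systems depending solely on which links exist between them:

```python
from collections import deque

# Two networks over an identical set of nodes; only the links differ.
nodes = {"A", "B", "C", "D", "E"}
chain = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")}  # a decentralized chain
star = {("A", "B"), ("A", "C"), ("A", "D"), ("A", "E")}   # a centralized hub

def hops_from(links, start):
    """Breadth-first search: hop distance from start to every reachable node."""
    neighbors = {n: set() for n in nodes}
    for a, b in links:
        neighbors[a].add(b)
        neighbors[b].add(a)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        for nxt in neighbors[current] - dist.keys():
            dist[nxt] = dist[current] + 1
            queue.append(nxt)
    return dist

print(hops_from(chain, "E"))  # along the chain, A lies four hops from E
print(hops_from(star, "E"))   # via the hub A, nothing is more than two hops away
```

The nodes are the same in both cases; every structural property, here how far information must travel, is carried entirely by the pattern of links.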

Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.

The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.

When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, what we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social, what Bateson referred to as metapatterns, elaborated further by the biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored with John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.

To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during the period of initiation in the story of the exodus there is the revelation at Sinai and the gift of divine law, the Torah; chief among its laws are the Ten Commandments, the fourth of which, and the one presented in greatest detail, is the commandment to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.


To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.

-Lance Strate

24Mar/14

Amor Mundi 3/23/14

Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

What Silver Knows

Data journalist Nate Silver reopened his FiveThirtyEight blog this past week, after leaving the New York Times last year. Although the website launched with a full slate of articles, the opening salvo is a manifesto he calls "What The Fox Knows," referencing the poet Archilochus' maxim, “The fox knows many things, but the hedgehog knows one big thing.” For Silver, this means, “We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.” What separates FiveThirtyEight is its focus on big data, the long trail of information left by everything we do in a digital world. From big data, Silver believes he can predict outcomes more accurately than traditional journalism, and that he will also be better able to explain and predict human behavior. “Indeed, as more human behaviors are being measured, the line between the quantitative and the qualitative has blurred. I admire Brian Burke, who led the U.S. men’s hockey team on an Olympic run in 2010 and who has been an outspoken advocate for gay-rights causes in sports. But Burke said something on the hockey analytics panel at the MIT Sloan Sports Analytics Conference last month that I took issue with. He expressed concern that statistics couldn’t measure a hockey player’s perseverance. For instance, he asked, would one of his forwards retain control of the puck when Zdeno Chara, the Boston Bruins’ intimidating 6’9″ defenseman, was bearing down on him? The thing is, this is something you could measure. You could watch video of all Bruins games and record how often different forwards kept control of the puck. Soon, the NHL may install motion-tracking cameras in its arenas, as other sports leagues have done, creating a record of each player’s x- and y-coordinates throughout the game and making this data collection process much easier.” As the availability of data increases beyond comprehension, humans will necessarily turn the effort of analysis over to machines running algorithms. Predictions and simulations will abound and human actions—whether voting for a president or holding on to a hockey puck—will increasingly appear to be predictable behavior. The fact that actions are never fully predictable is already fading from view; we have become accustomed to knowing how things will end before they begin. At the very least, Nate Silver and his team at FiveThirtyEight will try to “critique incautious uses of statistics when they arise elsewhere in news coverage.”
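Silver's hockey example is, at bottom, the claim that "perseverance" becomes a frequency once the right events are logged. Here is a minimal sketch of that computation in Python, with an entirely invented event log (no real NHL data; the player labels are placeholders):

```python
# Hypothetical event log -- every record here is invented for illustration.
# Each entry: did a given forward keep the puck while a given defenseman
# was bearing down on him?
events = [
    {"forward": "Forward 1", "defenseman": "Chara", "kept_puck": True},
    {"forward": "Forward 1", "defenseman": "Chara", "kept_puck": False},
    {"forward": "Forward 1", "defenseman": "Chara", "kept_puck": True},
    {"forward": "Forward 2", "defenseman": "Chara", "kept_puck": False},
    {"forward": "Forward 2", "defenseman": "Chara", "kept_puck": False},
]

def retention_rate(events, forward, defenseman):
    """Share of pressured touches on which the forward retained the puck."""
    kept = [e["kept_puck"] for e in events
            if e["forward"] == forward and e["defenseman"] == defenseman]
    return sum(kept) / len(kept) if kept else None

print(retention_rate(events, "Forward 1", "Chara"))  # 2 of 3 -> about 0.67
print(retention_rate(events, "Forward 2", "Chara"))  # 0 of 2 -> 0.0
```

Once the events are recorded, the "unmeasurable" trait is just a ratio; the hard part, as Silver concedes, is the logging, not the arithmetic.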

All in All, Another Tweet in the Wall

Author Teju Cole recently composed and released an essay called “A Piece of The Wall” exclusively on Twitter. In an interview, along with details about the technical aspects of putting together what's more like a piece of radio journalism than a piece of print journalism, Cole notes that there may be a connection between readership and change: "I’m not getting my hopes up, but the point of writing about these things, and hoping they reach a big audience, has nothing to do with “innovation” or with “writing.” It’s about the hope that more and more people will have their conscience moved about the plight of other human beings. In the case of drones, for example, I think that all the writing and sorrow about it has led to a scaling back of operations: It continues, it’s still awful, but the rate has been scaled back, and this has been in specific response to public criticism. I continue to believe the emperor has a soul."

A Religious Age?

Peter Berger has a thoughtful critique of Charles Taylor’s A Secular Age, one that accepts Taylor’s philosophical premise but denies its sociological reality. “I think that Taylor’s magnum opus makes a very significant contribution, though I disagree with its central proposition: We don’t live in a “secular age”; rather in most of the world we live in a turbulently religious age (with the exception of a few places, like university philosophy departments in Canada and football clubs in Britain). (Has Taylor been recently in Nepal? Or for that matter in central Texas?) Taylor is a very sophisticated philosopher, not an empirically oriented sociologist of religion. It so happens that we now have a sizable body of empirical data from much of the world (including America and Europe) on what ordinary religious people actually believe and how they relate their faith to various secular definitions of reality. Let me just mention the rich work of Robert Wuthnow, Nancy Ammerman and Tanya Luhrmann in the US, and Grace Davie, Linda Woodhead and Daniele Hervieu-Leger in Europe. There is a phrase that sociology students learn in the first year of graduate study—frequency distribution: It is important for me to understand just what X is; it is even more important for me to know how much X there is at a given time in a given place. The phrase is to be recommended to all inclined to make a priori statements about anything. In this case, I think that Taylor has made a very useful contribution in his careful description of what he calls “the immanent frame” (he also calls it “exclusive humanism”)—a sense of reality that excludes all references to transcendence or anything beyond mundane human experience. Taylor also traced the historical development of this definition of reality.” Maybe the disagreement is more subtle: Religion continues in the secular age, but it is more personal. Quite simply, churches were once the tallest and most central buildings, representing the center of public and civic life. That is no longer the case in Europe; nor in Nepal.

Looking Under the Skin

Anthony Lane in The New Yorker asks the question, “Why should we watch Scarlett Johansson with any more attention than we pay to other actors?” His answer concerns Johansson’s role and performance in her new movie “Under the Skin.” Lane is near obsessed with Johansson’s ability to reveal nothing and everything with a look—what he calls the “Johansson look, already potent and unnerving. She was starting to poke under the skin.” He continues describing Johansson in a photo shoot: ““Give me nothing,” Dukovic said, and Johansson wiped the expression from her face, saying, “I’ll just pretend to be a model.” Pause. “I rarely have anything inside me.” Then came the laugh: dry and dirty, as if this were a drama class and her task was to play a Martini. Invited to simulate a Renaissance picture, she immediately slipped into a sixteenth-century persona, pretending to hold a pose for a painter and kvetching about it: “How long do I have to sit here for? My sciatica is killing me.” You could not wish for a more plausible insight into the mind-set of the Mona Lisa. A small table and a stool were provided, and Johansson sat down with her arms folded in front of her. “I want to look Presidential,” she declared. “I want this to be my Mt. Rushmore portrait.” Once more, Dukovic told her what to show: “Absolutely nothing.” Not long after, he and his team began to pack up. The whole shoot had taken seventeen minutes. She had given him absolutely everything. We should not be surprised by this. After all, film stars are those unlikely beings who seem more alive, not less, when images are made of them; who unfurl and reach toward the light, instead of seizing up, when confronted by a camera; and who, by some miracle or trick, become enriched versions of themselves, even as they ramify into other selves on cue. Clarence Sinclair Bull, the great stills photographer at M-G-M, said of Greta Garbo that “she seems to feel the emotion for each pose as part of her personality.” From the late nineteen-twenties, he held a near-monopoly on pictures of Garbo, so uncanny was their rapport. “All I did was to light the face and wait. And watch,” he said. Why should we watch Johansson with any more attention than we pay to other actors?”

Fantasizing About Being Lost

Geoffrey Gray suggests a reason why we've become obsessed with the missing plane: "Wherever the Malaysia Airlines plane is, it found a hiding place. And the longer it takes investigators to discover where it is and what went wrong, the longer we have to indulge in the fantasy that we too might be able to elude the computers tracking our clicks, text messages, and even our movements. Hidden from the rest of the world, if only for an imagined moment, we feel what the passengers of Flight 370 most likely don't: safe."

 

This Week on the Hannah Arendt Center Blog

This week on the blog, learn more about the Program Associate position now available at the Arendt Center. In the Quote of the Week, Ian Zuckerman looks at the role some of Arendt's core themes play in Kubrick's famed nuclear satire, "Dr. Strangelove." And HannahArendt.net issues a call for papers for its upcoming 'Justice and Law' edition, to be released in August of this year.

17Feb/14

The Dystopia of Knowledge


“This future man, whom the scientists tell us they will produce in no more than a hundred years, seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.”

Hannah Arendt, The Human Condition

The future man of whom Arendt writes is one who has been released from earthly ties, from nature.  He has been released from earth as a physical space but also as “the quintessence of the human condition.”  He will have been able to “create life in a test tube” and “extend man’s life-span far beyond the hundred-year limit.”  The idea that this man would wish to exchange his given existence for something artificial is part of a rather intricate intellectual historical argument about the development of modern science.

The more man has sought after perfect knowledge of nature, the more he has found himself in nature’s stead, and the more uncertain he has felt, and the more he has continued to seek, with dire consequences.  This is the essential idea.  The negative consequences are bundled together within Arendt’s term, “world alienation,” and signify, ultimately, the endangerment of possibilities for human freedom.  Evocative of dystopian fiction from the first half of the twentieth century, this theme has enjoyed renewed popularity in our current world of never-ending war and ubiquitous surveillance facilitated by technical innovation.


Arendt’s narration gravitates around Galileo’s consummation of the Copernican revolution, which marks the birth of “the modern astrophysical world view.”  The significance of Galileo, Arendt writes, is that with him we managed to find “the Archimedean point” or the universal point of view.  This is an imagined point outside the earth from which it should be possible to make objective observations and formulate universal natural laws.  Our reaching of the Archimedean point, without leaving the earth, was responsible for natural science’s greatest triumphs and the extreme pace of discovery and technical innovation.

This was also a profoundly destabilizing achievement, and Arendt’s chronicle of its cultural effects takes on an almost psychological resonance.  While we had known since Plato that the senses were unreliable for the discovery of truth, she says, Galileo’s telescope told us that we could not trust our capacity for reason, either.  Instead, a manmade instrument had shown us the truth, undermining both reason and faith in reason.

In grappling with the resulting radical uncertainty, we arrived at Descartes’ solution of universal doubt.  Arendt describes this as a turn towards introspection, which provides a solution insofar as it takes place within the confines of one’s mind.  External forces cannot intrude here, at least upon the certainty that mental processes are true in the sense that they are real.  Man’s turn within himself afforded him some control.  This is because it corresponded with “the most obvious conclusion to be drawn from the new physical science: though one cannot know truth as something given and disclosed, man can at least know what he makes himself.” According to Arendt, this is the fundamental reasoning that has driven science and discovery at an ever-quickening pace.  It is at the source of man’s desire to exchange his given existence “for something he has made himself.”

The discovery of the Archimedean point with Galileo led us to confront our basic condition of uncertainty, and the Cartesian solution was to move the Archimedean point inside man.  The human mind became the ultimate point of reference, supported by a mathematical framework that it produces itself.  Mathematics, as a formal structure produced by the mind, became the highest expression of knowledge.  As a consequence, “common sense” was internalized and lost its worldly, relational aspect.  If common sense only means that all of us will arrive at the same answer to a mathematical question, then it refers to a faculty that is internally held by individuals rather than one that fits us each into the common world of all, with each other, which is Arendt’s ideal.  She points to the loss of common sense as a crucial aspect of “world alienation.”

This loss is closely related to Arendt’s concerns about threats to human political communication. She worries that we have reached the point at which the discoveries of science are no longer comprehensible.  They cannot be translated from the language of mathematics into speech, which is at the core of Arendt’s notion of political action and freedom.

The threat to freedom is compounded when we apply our vision from the Archimedean point to ourselves.  Arendt cautions, “If we look down from this point upon what is going on on earth and upon the various activities of men, … then these activities will indeed appear to ourselves as no more than ‘overt behavior,’ which we can study with the same methods we use to study the behavior of rats.” (“The Conquest of Space and the Stature of Man” in Between Past and Future)

She argues against the behaviorist perspective on human affairs as a false one, but more frightening for her is the fact it could become reality.  We may be seeking this transformation through our desire to control and know and thus live in a world that we have ourselves created.  When we look at human affairs from the Archimedean, objective scientific point of view, our behavior appears to be analyzable, predictable, and uniform like the activity of subatomic particles or the movement of celestial bodies.  We are choosing to look at things with such far remove that, like these other activities and movements, they are beyond the grasp of experience.  “World alienation” refers to this taking of distance, which collapses human action into behavior.  The purpose would be to remedy the unbearable condition of contingency, but in erasing contingency, by definition, we erase the unexpected events that are the worldly manifestations of human freedom.

To restate the argument in rather familiar terms: Our quest for control, to put an end to the unbearable human condition of uncertainty and contingency, leads to a loss of both control and freedom.  This sentiment should be recognizable as a hallmark of the immediate post-war period, represented in works of fiction like Kubrick’s Dr. Strangelove, Beckett’s Endgame, and Orwell’s 1984.  We can also find it even earlier in Koestler’s Darkness at Noon and Huxley’s Brave New World.  There has been a recent recovery and reemergence of the dystopian genre, at least in one notable case, and with it renewed interest in Arendt’s themes as they are explored here.

Dave Eggers’ The Circle, released in 2013, revolves around an imagined Bay Area cultish tech company that is a combination of Google, Facebook, Twitter, and PayPal.  In its apparent quest for progress, convenience, and utility, it creates an all-encompassing universe in which all of existence is interpreted in terms of data points and everything is recorded. The protagonist, an employee of the Circle, is eventually convinced to “go transparent,” meaning that her every moment is live streamed and recorded, with very few exceptions.  Reviews of the book have emphasized our culture of over-sharing and the risks to privacy that this entails.  They have also drawn parallels between this allegorical warning and the Snowden revelations.  Few, if any, though, have discussed the book in terms of the human quest for absolute knowledge in order to eliminate uncertainty and contingency, with privacy as collateral damage.


In The Circle, the firm promotes transparency and surveillance as solutions to crime and corruption.  Executives claim that through acquired knowledge and technology, anything is possible, including social harmony and world peace.  The goal is to organize human affairs in a harmonious way using technical innovation and objective knowledge.  This new world is to be manmade so that it can be manipulated for progressive ends.  In one key conversation, Mae, the main character, confronts one of the three firm leaders, saying, “… you can’t be saying that everyone should know everything,” to which he replies, “… I’m saying that everyone should have a right to know everything and should have the tools to know anything.  There’s not enough time to know everything, though I certainly wish there was.”

In this world, there are several senses in which man has chosen to replace existence as given with something he has made himself.  First and most obviously, new gadgets dazzle him at every turn, and he is dependent on them.  Second, he reduces all information “to the measure of the human mind.”  The technical innovations and continuing scientific discoveries are made with the help of manmade instruments, such that:  “Instead of objective qualities … we find instruments, and instead of nature or the universe—in the words of Heisenberg—man encounters only himself.” (The Human Condition, p. 261) Everything is reduced to a mathematical calculation.  An employee’s (somewhat forced) contributions to the social network are tabulated and converted into “retail raw,” the dollar measure of consumption they have inspired (through product placement, etc.).  All circlers are ranked, in a competitive manner, according to their presence on social media.  The effects in terms of Arendt’s notion of common sense are obvious.  Communication takes place in flat, dead prose.  Some reviewers have criticized Eggers for the writing style, but what appears to be bad writing actually matches the form to the content in this case.

Finally, it is not enough to experience reality here; all experience must be recorded, stored, and made searchable by the Circle.  Experience is thus replaced with a manmade replica.  Again, the logic is that we can only know what we produce ourselves.  As all knowledge is organized according to human artifice, the human mind, observing from a sufficient distance, can find the patterns within it.  These forms, pleasing to the mind, are justifiable because they work.


They produce practical successes.  Here, harmony is discovered because it is created.  Arendt writes:

“If it should be true that a whole universe, or rather any number of utterly different universes will spring into existence and ‘prove’ whatever over-all pattern the human mind has constructed, then man may indeed, for a moment, rejoice in a reassertion of the ‘pre-established harmony between pure mathematics and physics,’ between mind and matter, between man and the universe.  But it will be difficult to ward off the suspicion that this mathematically preconceived world may be a dream world where every dreamed vision man himself produces has the character of reality only as long as the dream lasts.”

If harmony is artificially created, then it can only last so long as it is enforced.  Indeed, at the end of the novel, when the “dream” is revealed as nightmare, Mae is faced with the choice of prolonging it.  We can find a similar final moment of hope in The Human Condition.  As she often does, Arendt has set up a crushing course of events, a seeming onslaught of catastrophe, but she leaves us with at least one ambiguous ray of light: “The idea that only what I am going to make will be real—perfectly true and legitimate in the realm of fabrication—is forever defeated by the actual course of events, where nothing happens more frequently than the totally unexpected.”

-Jennifer M. Hudson

6Jan/14

Amor Mundi 1/5/14


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

The Missing NSA Debate About Capitalism

Hero or traitor? That is the debate The New York Times wants about Edward Snowden. But the deeper question is what, if anything, will change? Evgeny Morozov has a strong essay in The Financial Times: "Mr. Snowden created an opening for a much-needed global debate that could have highlighted many of these issues. Alas, it has never arrived. The revelations of the US's surveillance addiction were met with a rather lacklustre, one-dimensional response. Much of this overheated rhetoric - tinged with anti-Americanism and channelled into unproductive forms of reform - has been useless." The basic truth is that "No laws and tools will protect citizens who, inspired by the empowerment fairy tales of Silicon Valley, are rushing to become data entrepreneurs, always on the lookout for new, quicker, more profitable ways to monetise their own data - be it information about their shopping or copies of their genome. These citizens want tools for disclosing their data, not guarding it.... What eludes Mr. Snowden - along with most of his detractors and supporters - is that we might be living through a transformation in how capitalism works, with personal data emerging as an alternative payment regime. The benefits to consumers are already obvious; the potential costs to citizens are not. As markets in personal information proliferate, so do the externalities - with democracy the main victim. This ongoing transition from money to data is unlikely to weaken the clout of the NSA; on the contrary, it might create more and stronger intermediaries that can indulge its data obsession. So to remain relevant and have some political teeth, the surveillance debate must be linked to debates about capitalism - or risk obscurity in the highly legalistic ghetto of the privacy debate."

The Non-Private World Today

Considering the Fourth Amendment implications of the recent Federal injunction on the NSA's domestic spying program, David Cole notes something important about the world we're living in: "The reality of life in the digital age is that virtually everything you do leaves a trace that is shared with a third party-your Internet service provider, phone company, credit card company, or bank. Short of living off the grid, you don't have a choice in the matter. If you use a smartphone, you are signaling your whereabouts at all times, and sharing with your phone provider a track record of your thoughts, interests, and desires. Technological innovations have made it possible for all of this information to be collected, stored, and analyzed by computers in ways that were impossible even a decade ago. Should the mere existence of this information make it freely searchable by the NSA, without any basis for suspicion?"

The End of the Blog

Jason Kottke thinks that the blog is no longer the most important new media form: "The primary mode for the distribution of links has moved from the loosely connected network of blogs to tightly integrated services like Facebook and Twitter. If you look at the incoming referers to a site like BuzzFeed, you'll see tons of traffic from Facebook, Twitter, Reddit, Stumbleupon, and Pinterest but not a whole lot from blogs, even in the aggregate. For the past month at kottke.org, 14 percent of the traffic came from referrals compared to 30 percent from social, and I don't even work that hard on optimizing for social media. Sites like BuzzFeed and Upworthy aren't seeking traffic from blogs anymore. Even the publicists clogging my inbox with promotional material urge me to 'share this on my social media channels' rather than post it to my blog." Of course, it may be the case that the blog form remains deeply important, but only for those blogs that people visit regularly and then distribute through social media. The major blogs are more powerful and popular than ever. What we are learning is that not everyone is a blogger.

Against Daddy Days

Ta-Nehisi Coates explains why he's frustrated about the way we're having the conversation about paternity leave: "So rather than hear about the stigma men feel in terms of taking care of kids, I'd like for men to think more about the stigma that women feel when they're trying to build a career and a family. And then measure whatever angst they're feeling against the real systemic forces that devalue the labor of women. I think that's what's at the root of much of this: When some people do certain work we cheer. When others do it we yawn. I appreciated the hosannas when I was strolling down Flatbush, but I doubt the female electrician walking down the same street got the same treatment."

The Professional Palate Unmasked

Breaking a tradition of his profession, New York magazine restaurant critic Adam Platt has decided to reveal his face. During his explanation, he stakes a claim for the continued importance of the critic in the digital age: "So is there still room for the steady (and, yes, sometimes weary) voice of the professional in a world where everyone's a critic? Of course there is. This is especially true in the theatrical realm of restaurants, where the quality and enjoyment of your dinner can vary dramatically depending on where you sit, what time of day you eat, how long the restaurant has been open, and what you happened to order. Anonymity would be nice, but it's always been less important than a sturdy gut and a settled palate. Most important of all, however, is a healthy expense account, because if a critic's employer allows for enough paid visits to a particular restaurant, even the most elaborately simpering treatment won't change his or her point of view."

 

9Sep/13

Amor Mundi 9/8/13


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

Balancing Solitude and Society

Illustration by Dan Williams

It is a new year, not only for Jews celebrating Rosh Hashanah but also for hundreds of thousands of college and university students around the world. Over at Harvard, they invited Nannerl O. Keohane—past President of Wellesley College—to give the new students some advice on how to reflect upon and imagine the years of education that lie before them. Above all, Keohane urges students to take time to think about what they want from their education: “You now have this incredible opportunity to shape who you are as a person, what you are like, and what you seek for the future. You have both the time and the materials to do this. You may think you’ve never been busier in your life, and that’s probably true; but most of you have “time” in the sense of no other duties that require your attention and energy. Shaping your character is what you are supposed to do with your education; it’s not competing with something else. You won’t have many other periods in your life that will be this way until you retire when, if you are fortunate, you’ll have another chance; but then you will be more set in your ways, and may find it harder to change.”

The March, Fifty Years On

Robin Kelly, writing on the 1963 March on Washington and the March's recent fiftieth anniversary celebrations, zooms out a little bit on the original event. It has, he says, taken on the characteristics of a big, feel-good event focused on Civil Rights and directly responsible for the passage of the Civil Rights Act, when, in fact, all those people also came to Washington in support of economic equality, and the gritty work of passing laws was accomplished later, with additional momentum and constraints. It's important to remember, he says, that "big glitzy marches do not make a movement; the organizations and activists who came to Washington, D. C., will continue to do their work, fight their fights, and make connections between disparate struggles, no matter what happens in the limelight."

Famous Last Words

Robinson Meyer investigates what, exactly, poet Seamus Heaney's last words were. Just before he passed away last week at 74, Heaney, an Irish Nobel Laureate, texted the Latin phrase noli timere, don't be afraid, to his wife. Heaney's son Michael mentioned this in his eulogy for his father, and it was written down and reported as, variously, the correct phrase or the incorrect nolle timore. For Meyer, this mis-recording of the poet's last words is emblematic of some of the transcriptions and translations he did in his work, and the further translations and transcriptions we will now engage in because he is gone. "We die," Meyer writes, "and the language gets away from us, in little ways, like a dropped vowel sound, a change in prepositions, a mistaken transcription. Errors in transfer make a literature."

We're All Billy Pilgrim Now

Jay Rosen, who will be speaking at the Hannah Arendt Center’s NYC Lecture Series on Sunday, Oct. 27th at 5pm, has recently suggested that journalism solves the problem of awayness: “Journalism enters the picture when human settlement, daily economy, and political organization grow beyond the scale of the self-informing populace.” C.W. Anderson adds that "awayness" should include alienation from a moment in time as well as from a particular place: "Think about how we get our news today: We dive in and out of Twitter, with its short bursts of immediate information. We click over to a rapidly updating New York Times Lede blog post, with its rolling updates and on the ground reports, complete with YouTube videos and embedded tweets. Eventually, that blog post becomes a full-fledged article, usually written by someone else. And finally, at another end of the spectrum, we peruse infographics that can sum up decades of data into a single image. All of these are journalism, in some fashion. But the kind of journalisms they are - what they are for - is arguably very different. They each deal with the problem of context in different ways."

...Because I Like it

Adam Gopnik makes a case for the study of English, and of the humanities more broadly. His defense is striking because it rejects a recent turn towards their supposed use value, instead emphasizing such study for its own sake: "No sane person proposes or has ever proposed an entirely utilitarian, production-oriented view of human purpose. We cannot merely produce goods and services as efficiently as we can, sell them to each other as cheaply as possible, and die. Some idea of symbolic purpose, of pleasure seeking rather than rent seeking, of Doing Something Else, is essential to human existence. That’s why we pass out tax breaks to churches, zoning remissions to parks, subsidize new ballparks and point to the density of theatres and galleries as signs of urban life, to be encouraged if at all possible. When a man makes a few billion dollars, he still starts looking around for a museum to build a gallery for or a newspaper to buy. No civilization we think worth studying, or whose relics we think worth visiting, existed without what amounts to an English department—texts that mattered, people who argued about them as if they mattered, and a sense of shame among the wealthy if they couldn’t talk about them, at least a little, too. It’s what we call civilization."

Featured Events

October 3-4, 2013

The sixth annual fall conference, "Failing Fast: The Crisis of the Educated Citizen"

Olin Hall, Bard College

Learn more here.
12Aug/13

Can We Survive Entertainment?


"The state of affairs, which indeed is equaled nowhere else in the world, can properly be called mass culture; its promoters are neither the masses nor their entertainers, but are those who try to entertain the masses with what once was an authentic object of culture, or to persuade them that Hamlet can be as entertaining as My Fair Lady, and educational as well. The danger of mass education is precisely that it may become very entertaining indeed; there are many great authors of the past who have survived centuries of oblivion and neglect, but it is still an open question whether they will be able to survive an entertaining version of what they have to say. "

-Hannah Arendt, "Mass Culture and Mass Media"

I recently completed work on a book entitled Amazing Ourselves to Death: Neil Postman's Brave New World Revisited, to be published by Peter Lang. And as the title implies, the book takes up the arguments made by Postman in his book, Amusing Ourselves to Death: Public Discourse in the Age of Show Business, published nearly three decades ago, and considers them in light of the contemporary media environment, and the kind of culture that it has given rise to.  I bring this up because the passage from Hannah Arendt's essay, "Mass Culture and Mass Media," is a quote that I first read in Amusing Ourselves to Death.  Interestingly, Postman used it not in his chapter on education, but in one focusing on religion, one that placed particular emphasis on the phenomenon of televangelism that exploded into prominence back in the eighties.  To put the quote into the context that Postman had earlier placed it in, he prefaced the passage with the following:

There is a final argument that whatever criticisms may be made of televised religion, there remains the inescapable fact that it attracts viewers by the millions. This would appear to be the meaning of the statements, quoted earlier by Billy Graham and Pat Robertson, that there is a need for it among the multitude. To which the best reply I know was made by Hannah Arendt, who, in reflecting on the products of mass culture, wrote:

And this is where Arendt's quote appears, after which Postman provides the following commentary:

If we substitute the word "religion" for Hamlet, and the phrase "great religious traditions" for "great authors of the past," this question may stand as the decisive critique of televised religion. There is no doubt, in other words, that religion can be made entertaining. The question is, by doing so, do we destroy it as an "authentic object of culture"? And does the popularity of a religion that employs the full resources of vaudeville drive more traditional religious conceptions into manic and trivial displays?

In returning to Postman's critique of the age of television, I decided to use this same quote in my own book, noting how Postman had used it earlier, but this time placing it in a chapter on education.  In particular, I brought it up following a brief discussion of the latest fad in higher education, massive open online courses, abbreviated as MOOCs.


A MOOC can contain as many as 100,000 students, which raises the question: in what sense is a MOOC a course, and in what sense is the instructor actually teaching?  It is perhaps revealing that the acronym MOOC is a new variation on other terms associated with new media, such as MMO, which stands for massive multiplayer online (used to describe certain types of games), and the more specific MMORPG, which stands for massive multiplayer online role-playing game. These terms are in turn derived from older ones such as MUD, multi-user dungeon; MUSH, multi-user shared hallucination; and MOO, multi-user dungeon, object oriented. In other words, the primary connotation is with gaming, not education. Setting this genealogy aside, it is clear that offering MOOCs is presently seen as a means to lend prestige to universities. They may well be a way to bring education to masses of people who could not otherwise afford a college course, and to individuals who are not interested in pursuing traditional forms of education. But then again, there is nothing new about the phenomenon of the autodidact, which was made possible by the spread of literacy and the easy availability of books. There is no question that much can be learned from reading books, or listening to lectures via iTunes, or watching presentations on YouTube, but is that what we mean by education? By teaching?

Regarding Arendt's comments on the dangers of mass education, we might look to the preferences of the most affluent members of our society. What do people with the means to afford any type of education available tend to choose for their children, and for themselves? The answer, of course, is traditional classrooms with very favorable teacher-student ratios, if not private, one-on-one tutoring (the same is true for children with special needs, such as autism). There should be no question as to what constitutes the best form of education. It may be that we do not have the resources to provide it, but still we can ask whether money should be spent on equipping classrooms with the latest in educational technology when the same limited resources could be used to hire more teachers. It is a question of judgment, of the ability to decide on priorities based on objective assessment, rather than automatically jumping on the new technology bandwagon time and time again.

The broader question that concerns both Arendt and Postman is whether serious discourse, be it educational, religious, or political, can survive the imperative to make everything as entertaining as possible. For Arendt, this was a feature of mass media and their content, mass culture. Postman argues that of the mass media, print media retains a measure of seriousness, insofar as the written word is a relatively abstract form of communication, one that provides some degree of objective distance from its subject matter, and that requires relatively coherent forms of organization. Television, on the other hand, is an image-centered medium that places a premium on attracting and keeping audiences, not to mention the fact that of all the mass media, it is the most massive. The bias of the television medium is towards showing, rather than telling, towards displaying exciting visuals, and therefore towards entertaining content. Of course, it's possible to run counter to the medium's bias, in which case you get something like C-SPAN, whose audience is minuscule.


The expansion of television via cable and satellite has given us better quality entertainment, via the original series appearing on HBO, Showtime, Starz, and AMC, but the same is not true of the quality of journalism. Cable news on CNN, MSNBC, and FOX does not provide much in the way of in-depth reporting or thoughtful analysis. Rather, what we get is confrontation and conflict, which of course is dramatic, and above all entertaining, but contributes little to the democratic political process. Consider that at the time of the founding of the American republic, the freedom to express opinions via speech and press was associated with the free marketplace of ideas: the understanding that different views can be subjected to relatively objective evaluation, that different descriptions can be examined to determine which one best matches reality, and that different proposals can be analyzed to determine which one might be the best course of action. The exchange of opinions was intended to open up discussion, and eventually lead to some form of resolution. Today, as can be seen best on the cable news networks, when pundits express opinions, it is to close down dialogue, the priority being to score points, to have the last word if possible, and at minimum to get across a carefully prepared message, rather than to listen to what the other person has to say and find common ground. And this is reflected in Congress, as our elected representatives are unwilling to talk to each other, work with each other, negotiate settlements, and actually be productive as legislators.

Once upon a time, the CBS network news anchor Walter Cronkite was dubbed "the most trusted man in America." And while his version of the news conformed to the biases of the television medium, still he tried to engage in serious journalism as much as he was able to within those constraints. Today, we would be hard put to identify anyone as our most trusted source of information; certainly none of the network news anchors would qualify, but if anyone deserves the title, at least for a large segment of American society, it would be Jon Stewart of The Daily Show. And while there is something to be said for the kind of critique that he and his compatriot Stephen Colbert provide, what they provide us with, after all, are comedy programs, and at best we can say that they do not pretend to be providing anything other than entertainment. But we are left with the question: when so many Americans get their news from late night comedians, does that mean that journalism has become a joke?

Cable television has also given us specialized educational programming via the National Geographic Channel, the History Channel, and the Discovery Channel, and while this has provided an avenue for the dissemination of documentaries, audiences are especially drawn to programs such as Dog Whisperer with Cesar Millan, Moonshiners, Ancient Aliens, UFO Files, and The Nostradamus Effect. On the Animal Planet channel, two specials entitled Mermaids: The Body Found and Mermaids: The New Evidence, broadcast in 2012 and 2013 respectively, gave the cable outlet the highest ratings in its seventeen-year history. These fake documentaries were assumed to be real by many viewers, prompting the National Oceanic and Atmospheric Administration to issue a statement that mermaids do not actually exist. And it is almost too easy to mention that The Learning Channel, aka TLC, has achieved its highest ratings by turning to reality programs, such as Toddlers & Tiaras, and its notorious spin-off, Here Comes Honey Boo Boo.


Many more examples come to mind, but it is also worth asking whether Facebook status updates and tweets on Twitter provide any kind of alternative to serious, reasoned discourse. In the foreword to Amusing Ourselves to Death, Postman wrote, "As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists 'failed to take into account man's almost infinite appetite for distractions.'" Does the constant barrage of stimuli that we receive today via new media, and the electronic media in general, make it easier or harder for us to think, and to think about thinking, as Arendt would have us do? Huxley's final words in Brave New World Revisited are worth recalling:

Meanwhile, there is still some freedom left in the world. Many young people, it is true, do not seem to value freedom.  But some of us still believe that, without freedom, human beings cannot become fully human and that freedom is therefore supremely valuable. Perhaps the forces that now menace freedom are too strong to be resisted for very long. It is still our duty to do whatever we can to resist them. (1958, pp. 122-123)

It's not that distractions and entertainment are inherently evil, or enslaving, but what Huxley, Postman, and Arendt all argue for is the need to place limits on our amusements, maintaining a separation between contexts, based on what content is most appropriate. Or as was so famously expressed in Ecclesiastes: "To everything there is a season, and a time to every purpose under heaven." The problem is that now the time is always 24/7/365, and the boundaries between contexts dissolve within the electronic media environment. Without a context, there is no balance, the key ecological value that relates to the survival and sustainability of any given culture. For Postman, whose emphasis was on the prospects for democratic culture, we have become a culture dangerously out of balance. For Arendt, in "Mass Culture and Mass Media," the emphasis was somewhat different, but the conclusion quite similar, as can be seen in her final comments:

An object is cultural to the extent that it can endure; this durability is the very opposite of its functionality, which is the quality which makes it disappear again from the phenomenal world by being used and used up. The "thingness" of an object appears in its shape and appearance, the proper criterion of which is beauty. If we wanted to judge an object by its use value alone, and not also by its appearance… we would first have to pluck out our eyes. Thus, the functionalization of the world which occurs in both society and mass society deprives the world of culture as well as beauty.  Culture can be safe only with those who love the world for its own sake, who know that without the beauty of man-made, worldly things which we call works of art, without the radiant glory in which potential imperishability is made manifest to the world and in the world, all human life would be futile and no greatness could endure.

Our constant stream of technological innovation continues to contribute to the functionalization of the world, and to the dominance of what Jacques Ellul called "la technique," the drive toward efficiency as the only value that can be effectively invoked in the kind of society that Postman termed a technopoly, a society in which culture is completely dominated by this technological imperative. The futility of human life that Arendt warns us about is masked by our never-ending parade of distractions and amusements; the substitution of the trivial for greatness is disguised by the quality and quantity of our entertainment. We experience the extremes of the hyperrational and the hyperreal, both of which focus our attention on the ephemeral, rather than the eternal that Arendt upholds. She argues for the importance of loving the world for its own sake, which requires us to be truly ecological in our orientation, balanced in our approach, clear and true in our minds and our hearts. Is there any question that this is what is desperately needed today? Is there any question that this is what seems to elude us time and time again, as all of our innovations carry us further and further away from the human lifeworld?

-Lance Strate

24May/130

Looking Beyond A Digital Harvard


Graduation is upon us. Saturday I will be in full academic regalia mixing with the motley colors of my colleagues as we send forth yet another class of graduates onto the rest of their lives. I advised three senior projects this year. One student is headed to East Jerusalem, where she will be a fellow at the Bard Honors College at Al Quds University. Another is staying at Bard, where he will co-direct Bard’s new Center for the Study of the Drone. The third is returning to the United Kingdom, where he will be the fourth person in a new technology-driven public relations start-up. A former student just completed Bard’s Masters in Teaching and will begin a career as a high school teacher. Another recent grad is returning from Pakistan to New York, where she will earn a Masters in interactive technology at the Tisch School of the Arts at NYU. These are just a few of the extraordinary opportunities that young graduates are finding or making for themselves.


The absolute best part of being a college professor is the immersion in optimism from being around exceptional young people. Students remind us that no matter how badly we screw things up, they keep on dreaming and working to reinvent the world as a better and more meaningful place. I sometimes wonder how people who don’t have children or don’t teach can possibly keep their sanity. I count my lucky stars to be able to live and work around such amazing students.

I write this at a time, however, in which the future of physical colleges, where students and professors congregate in small classrooms to read and think together, is at a crossroads. In The New Yorker, Nathan Heller has perhaps the most illuminating essay on MOOCs yet to be written. His focus is on Harvard University, which brings a different perspective than most such articles. Heller asks how MOOCs will change not only our wholesale educational delivery at state and community colleges across the country, but also how the rush to transfer physical courses into online courses will transform elite education as well. He writes: “Elite educators used to be obsessed with ‘faculty-to-student-ratio’; now schools like Harvard aim to be broadcast networks.”

By focusing on Harvard, Heller shifts the traditional discourse surrounding MOOCs, one that usually concentrates on economics. When San Jose State or the California State University system adopts MOOCs, the rationale is typically said to be savings for an overburdened state budget. While many studies show that students actually do better in electronic online courses than they do in physical lectures, a combination of cynicism and hope leads professors to be suspicious of such claims. The replacement of faculty by machines is thought to be a coldly economic calculation.

But at Harvard, which is wealthier than most oil sheikdoms, the warp speed push into online education is not simply driven by money (although there is a desire to corner a market in the future). For many of the professors Heller interviews in his essay, the attraction of MOOCs is that they will actually improve the elite educational experience.

Take for example Gregory Nagy, professor of classics, and one of the most popular professors at Harvard. Nagy is one of Harvard’s elite professors flinging himself headlong into the world of online education. He is dividing his usual hour-long lectures into short videos of about 6 minutes each—people get distracted watching lectures on their iPhones at home or on the bus. He imagines “each segment as a short film” and says that “crumbling up the course like this forced him to study his own teaching more than he had at the lectern.” For Nagy, the online experience is actually forcing him to be clearer; it allows for spot-checking the participants’ comprehension of the lecture through repeated multiple-choice quizzes that must be passed before students can continue on to the next lecture. Dividing the course into digestible bits that can be swallowed whole in small meals throughout the day is, Nagy argues, not cynical, but progress. “Our ambition is actually to make the Harvard experience now closer to the MOOC experience.”


It is worth noting that the Harvard experience of Nagy’s real-world class is not actually very personal or physical. Nagy’s class is called “Concepts of the Hero in Classical Greek Civilization.” Students call it “Heroes for Zeroes” because it has a “soft grading curve” and it typically attracts hundreds of students. When you strip away Nagy’s undeniable brilliance, his physical course is a massive lecture course constrained only by the size of Harvard’s physical plant. For those of us who have been on both sides of the lectern, we know such lectures can be entertaining and informative. But we also know that students are anonymous, often sleepy, rarely prepared, and none too engaged with their professors. Not much learning goes on in such lectures that can’t be simply replicated on a TV screen. And in this context, Nagy is correct. When one compares a large lecture course with a well-designed online course, it may very well be that the online course is a superior educational venture—even at Harvard.

As I have written here before, the value of MOOCs is to finally put the college lecture course out of its misery. There is no reason to be nostalgic for the lecture course. It was never a very good idea. Aside from a few exceptional lecturers—in my world I can think of the reputations of Hegel, his student Eduard Gans, Martin Heidegger, and, of course, Hannah Arendt—college lectures are largely an economical way to allow masses of students to acquire basic introductory knowledge in a field. If the masses are now more massive and the lectures more accessible, I’ll accept that as progress.

The real problem MOOCs pose is not that they threaten to replace lecture courses, but that they intensify our already considerable confusion regarding what education is. Elite educational institutions, as Heller writes, no longer compete only among themselves. He talks with Gary King, University Professor of Quantitative Social Science, and Drew Gilpin Faust, Harvard’s President, who see Harvard’s biggest threat as being not Yale or Amherst but the University of Phoenix, the for-profit university. The future of online education, King argues, will be driven by understanding education as a “data-gathering resource.” Here is his argument:

Traditionally, it has been hard to assess and compare how well different teaching approaches work. King explained that this could change online through “large-scale measurement and analysis,” often known as big data. He said, “We could do this at Harvard. We could not only innovate in our own classes—which is what we are doing—but we could instrument every student, every classroom, every administrative office, every house, every recreational activity, every security officer, everything. We could basically get the information about everything that goes on here, and we could use it for the students.” A giant, detailed data pool of all activities on the campus of a school like Harvard, he said, might help students resolve a lot of ambiguities in college life.

At stake in the battle over MOOCs is not merely a few faculty jobs. It is a question of how we educate our young people. Will they be, as they increasingly are, seen as bits of data to be analyzed, explained, and guided by algorithmic regularities, or are they human beings learning to be at home in a world of ambiguity?

Most of the opposition to MOOCs continues to be economically tinged. But the real danger MOOCs pose is their threat to human dignity. Just imagine that after journalists and professors and teachers, the next industry to be replaced by machines is babysitting. The advantages are obvious. Robotic babysitters are more reliable than 18-year-olds, less prone to be distracted by text messages or Twitter. They won’t be exhausted, and they will have access to the highest quality first aid databases. Of course they will eventually also be much cheaper. But do we want our children raised by machines?

That Harvard is so committed to a digital future is a sign of things to come. The behemoths of elite universities have their sights set on educating the masses and then importing that technology back into the ivy quadrangles to study their own students and create the perfectly digitized educational curriculum.

And yet it is unlikely that Harvard will ever abandon personalized education. Professors like Peter J. Burgard, who teaches German at Harvard, will remain, at least for the near future.

Burgard insists that teaching requires “sitting in a classroom with students, and preferably with few enough students that you can have real interaction, and really digging into and exploring a knotty topic—a difficult image, a fascinating text, whatever. That’s what’s exciting. There’s a chemistry to it that simply cannot be replicated online.”


Burgard is right. And at Harvard, with its endowment, professors will continue to teach intimate and passionate seminars. Such personalized and intense education is what small liberal arts colleges such as Bard offer, without the lectures and with a fraction of the administrative overhead that weighs down larger universities. But at less privileged universities around the land, courses like Burgard’s will likely become ever more rare. Students who want such an experience will look elsewhere. And here I return to my optimism around graduation.

Dale Stephens of UnCollege is experimenting with educational alternatives to college that foster learning and thinking in small groups outside the college environment. The Saxifrage School in Pittsburgh and the Brooklyn Institute for Social Research are offering college courses at a fraction of the usual cost, betting that students will happily use public libraries and local gyms in return for a cheaper and still inspiring educational experience. I tell my students who want to go to graduate school that the teaching jobs of the future may not be at universities and likely won’t involve tenure. I don’t know where the students of tomorrow will go to learn and to think, but I know that they will go somewhere. And I am sure some of my students will be teaching them. And that gives me hope.

As graduates around the country spring forth, take the time to read Nathan Heller’s essay, Laptop U. It is your weekend read.

You can also read our past posts on education and on the challenge of MOOCs here.

-RB

13Nov/120

The Aftermath of the Arab Spring: Women, Activism, and Non-Interference

In the two years since its inception, the Arab Spring remains an extraordinarily difficult phenomenon to define and assess. Its local, national, and regional consequences have been varied and contradictory, and many of them are not obviously or immediately heartening. These observations certainly apply to Syria: although growing numbers of the country’s military personnel are abandoning their posts, the Assad regime’s war with the Sunni insurgency still threatens to draw Turkey, Lebanon, Iran, and Jordan into an intractable sectarian conflict. But they are, if anything, even more relevant to Egypt. There the overthrow of the Mubarak regime occurred with less brutality, all things considered, than we might have reasonably feared. But the nature of the country’s social and political reconstruction remains extremely uncertain, given the delicate balance of forces between the Muslim Brotherhood, the Salafist Nour Party, and the country’s diverse liberal and activist camps.

The effects of Egypt’s revolution have been particularly ambiguous for the country’s women. To be sure, women played a noteworthy role in the Tahrir Square protests of January and February 2011, and many local and foreign observers commented on the lack of intimidation and harassment they faced in the days leading to Mubarak’s fall. But as Wendell Steavenson details in the most recent New Yorker, the protests were by no means free of gendered violence, and the revolution has yet to create a more comfortable or equitable place for women in Egyptian public life.

Let me touch on one example from Steavenson’s article. Hend Badawi, a twenty-three-year-old graduate student, was protesting against the interim military government in Tahrir Square in December 2011 when she was confronted by a group of soldiers. In the course of her arrest, the soldiers tore off Badawi’s headscarf, dragged her several hundred meters by the hair, cursed at her, struck her, and groped her breasts and behind. One of the soldiers also apparently told her that “if my sister went to Tahrir, I would shoot her.” After being taken to a parliament building, Badawi was beaten again and interrogated for several hours before landing in a military hospital, where she was treated for severe lacerations on her feet, a broken wrist, and multiple broken fingers.

The next day, Field Marshal Mohamed Tantawi, at that time Egypt’s effective ruler, paid a visit to the hospital for a photo op with a state-TV camera crew. Despite her injuries, Badawi confronted him: “We don’t want your visit!” she reportedly screamed. “We are not the ones who are thugs! You’ve beaten us and ruined us! Shame on you! Get out!” News of the tongue-lashing quickly made the rounds on Twitter and Facebook, and when Badawi was moved to a civilian hospital, she used a video camera smuggled in by friends to issue a lengthier statement about her ordeal. The resulting video went viral, and independent TV stations used it to challenge government claims that the Army had not used violence against civilians.

One might expect that Badawi would be honored for her courage and conviction, and I can only imagine that she is, at least among pro-democracy activists. But her family, which happened to sympathize with the Mubarak regime, was appalled. Badawi had gone to Tahrir Square without informing them, and they blamed her not only for the violent treatment she had received, but also for the damage they believed she had done to the family’s reputation. Badawi’s relatives locked her in her room; her elderly aunt yelled at her frequently; and her brother Ahmed hit her. Later, when Badawi’s family did not allow her to return to Tahrir for the first anniversary of the revolution, she basically reenacted the protests of the previous year—only this time on a more intimate scale. As she related to Steavenson, she launched a hunger strike to protest her treatment at her family’s hands and made placards that read, “Hend wants to topple the siege! Down with Ahmed!”

Badawi’s experience is particular and inevitably her own, but it nevertheless exemplifies the conundrums that many women face in contemporary Egypt. As the daughter of a pious rural family, she has benefitted from the increasing levels of affluence, education, and occupational opportunity that at least some young people, both women and men, have enjoyed over the past several decades. But she has also come face to face with the possibilities and the limits created by Egypt’s Islamic Revival, which has established new expectations for women’s comportment on the street and in other public institutions. (If many women in Cairo went bareheaded and wore skirts and blouses at the beginning of Mubarak’s reign, almost all now wear headscarves, and the niqab is not an uncommon sight.) Finally, Badawi’s life has been shaped not simply by her family’s notions of appropriate womanly behavior, but by a wider climate of pervasive sexual harassment. According to one 2008 survey, sixty percent of Egyptian men admit to having harassed a woman, and the country’s police and security forces either openly condone such treatment or engage in even more serious assaults themselves.

Badawi chafes at the “customs and traditions”—a common Arabic phrase, which she employs sardonically—that mold and circumscribe her life. And, like at least some other women, she regards Egypt’s recent upheaval as a potential opening, an “opportunity to mix my inner revolution with the revolution of my country.” But it is significant, I think, that Badawi does not seek a “Western” form of women’s equality and emancipation. Although she appreciates “the space and freedom” that appear to be available to women on American TV shows, she nevertheless intends to pursue them “in the context of my religion.” At the same time, many of the reforms that she and other women’s advocates might champion are now thoroughly tainted by their association with the autocratic Mubarak regime. For example, many Egyptians dismiss recent amendments to the country’s “personal-status laws”—which allowed women to initiate no-fault divorces and enhanced their child-custody rights—as cosmetic changes that only aimed to improve the government’s international image. Many other citizens, meanwhile, view Mubarak’s 2010 effort to mandate a quota for female members of parliament as a patent violation of democratic procedure.

These developments offer no clear path forward for Badawi and other Egyptian women, whether or not they regard themselves as activists. But they also pose a distinct challenge to outside observers—like me—who sympathize with their efforts to transform Egyptian society. Ten years ago, the Columbia anthropologist Lila Abu-Lughod drew on the impending American invasion of Afghanistan to question the notion that the U.S. should “save” Muslim women from oppression. Instead of adopting a position of patronizing superiority, Abu-Lughod urged concerned Americans to ally themselves with local activists in the Middle East and to work with them on the issues that they deemed most important. In the context of the Arab Spring, however, even this advice appears to have its shortcomings. I worry that American (or wider “Western”) support for women like Hend Badawi, however well-meaning, will unintentionally undermine the very reforms that the activists themselves favor. I also suspect that a considerable number of Egyptians will resent even the most “enlightened” coalitions as yet another instance of anti-democratic meddling if not neo-colonial imposition. After all, the U.S. did much to keep Mubarak in power for thirty years. Why now should Americans, whether they are affiliated with the U.S. government or not, attempt to intervene even indirectly in Egypt’s transformation?

I certainly believe, from a political and scholarly perspective, that Americans should care a great deal about the consequences of the revolutions in Egypt and other North African and Middle Eastern states. In the end, however, I wonder if the most advisable practical course may be to adopt an attitude of principled non-interference in those cases where mass violence is not imminent. In short, we should allow Egyptians (and other Middle Easterners) room to work out the consequences and implications of the Arab Spring on their own, even if we are not entirely comfortable with the results.

-Jeff Jurgens

Note: Lila Abu-Lughod’s argument, which I reference near the end of this post, appears in “Do Muslim Women Really Need Saving? Anthropological Reflections on Cultural Relativism and its Others.” American Anthropologist 104.3 (2002): 783-790.

25Sep/120

Does the President Matter?

“Hence it is not in the least superstitious, it is even a counsel of realism, to look for the unforeseeable and unpredictable, to be prepared for and to expect “miracles” in the political realm. And the more heavily the scales are weighted in favor of disaster, the more miraculous will the deed done in freedom appear.”

—Hannah Arendt, "What is Freedom?"

This week at Bard College, in preparation for the Hannah Arendt Center Conference "Does the President Matter?", we put up two writing blocks around campus, multi-paneled chalkboards that invite students to respond to the question: Does the President Matter? The blocks generated quite a few interesting comments. Many mentioned the Supreme Court. Quite a few invoked the previous president, war, and torture. And, since we are at Bard, others responded: it depends what you mean by matters.

This last comment struck me as prescient. It does depend on what you mean by matters.

If what we mean is, say, an increasing and unprecedented power held by a democratic leader, not seen since the time of enlightened monarchy, the president does matter. We live in an age of an imperial presidency. The President can, and does, send our troops into battle without the approval of Congress. The President can, and does, harness the power of TV, the Internet, and Twitter to bypass his critics and reach the masses more directly than ever before. The president can, and does, appoint Supreme Court Justices with barely a whimper from the Senate; and the president’s appointments can, and do, swing the balance on a prisoner’s right to habeas corpus, a woman’s right to choose, or a couple’s right to marry.

And yet, what if by matter, we mean something else? What if we mean, having the power to change who we are in meaningful ways? What if by matter we mean: to confront honestly the enormous challenges of the present? What if by matter we mean: to make unpredictable and visionary choices, to invite and inspire a better future?

Consider the really big questions: the thoughtless consumerism that degrades our environment and our souls; the millions of people who have no jobs and increasingly little prospect for productive employment; the threat of devastating terrorism; and the astronomical national debt, $16 trillion and counting for the US, or $140,000 for each taxpayer. Add to that the deficiency in public pension obligations (estimated at anywhere from $1 to $5 trillion), not to mention the $1 trillion of inextinguishable student debt that is creating a lost generation of young people whose lives are stifled by unwise decisions made before they were allowed to buy a beer.

This election should be about a frank acknowledgement of the unsustainability of our economic, social, and environmental practices and expectations. We should be talking together about how we should remake our future in ways that are both just and exciting. This election should be scary and exciting. But so far it’s small-minded and ugly.

Around the world, we witness distrust and disdain for government. In Greece there is a clear choice between austerity and devaluation, but Greek leaders have saddled their people with half-hearted austerity that causes pain without prospect for relief. In Italy, the paralysis of political leaders has led to resignation and the appointment of an interim technocratic government. In Germany, the most powerful European leader delays and denies, trusting that others will blink every time they are brought to the mouth of the abyss.

No wonder that the Tea Party and Occupy Wall Street in the US, and the Pirate Parties in Europe share a common sense that liberal democratic government is broken. A substantial—and highly educated—portion of the electorate has concluded that our government is so inept and so compromised that it needs to be abandoned or radically constrained. No president, it seems, is up to the challenge of fixing our broken political system.

Every President comes to Washington promising reform! And they all fail. According to Jonathan Rauch, a leading journalist for The Atlantic and the National Journal, this is inevitable. He has this to say in his book Government's End:

If the business of America is business, the business of government programs and their clients is to stay in business. And after a while, as the programs and the clients and their political protectors adapt to nourish and protect each other, government and its universe of groups reach a turning point—or, perhaps more accurately, a point from which there is no turning back. That point has arrived. Government has become what it is and will remain: a large, incoherent, often incomprehensible mass that is solicitous of its clients but impervious to any broad, coherent program of reform. And this evolution cannot be reversed.

On the really big questions of transforming politics, the President is, Rauch argues, simply powerless. President Obama apparently agrees. Just last week he said, in Florida: "The most important lesson I've learned is that you can't change Washington from the inside. You can only change it from the outside."

A similar sentiment is offered by Lawrence Lessig, a founding member of Creative Commons. In his recent book Republic, Lost, Lessig writes:

The great threat today is in plain sight. It is the economy of influence now transparent to all, which has normalized a process that draws our democracy away from the will of the people. A process that distorts our democracy from ends sought by both the Left and the Right: For the single most salient feature of the government that we have evolved is not that it discriminates in favor of one side and against the other. The single most salient feature is that it discriminates against all sides to favor itself. We have created an engine of influence that seeks not some particular strand of political or economic ideology, whether Marx or Hayek. We have created instead an engine of influence that seeks simply to make those most connected rich.

The system of influence and corruption through PACs, super PACs, and lobbyists is so entrenched, Lessig writes, that no reform seems plausible. All that is left is the Hail Mary idea of a new constitutional convention—an idea Lessig promotes widely, as with his Conference on the Constitutional Convention last year at Harvard.

For Rauch on the Right and Lessig on the Left, government is so concerned with its parochial interests and its need to stay in business that we have forfeited control over it. We have, in other words, lost the freedom to govern ourselves.

The question "Does the President Matter?" is asked, in the context of the Arendt Center conference, from out of Hannah Arendt's maxim that freedom is the fundamental raison d'être of politics. In "What is Freedom?", Arendt writes:

“Freedom is actually the reason that men live together in political organization at all. Without it, political life as such would be meaningless. The raison d’être of politics is freedom.”

So what is freedom? To be free, Arendt says, is to act. Arendt writes: “Men are free as long as they act, neither before nor after; for to be free and to act are the same.”

What is action? Action is something done spontaneously. It brings something new into the world. Man is the being capable of starting something new. Political action, and action in general, must happen in public. Like the performing arts—dance, theatre, and music—politics and political action require an audience. Political actors act in front of other people. They need spectators, so that the spectators can be drawn to the action; and when the spectators find the doings of politicians right, or true, or beautiful, they gather around and form themselves into a polity. The political act, the free act, must be surprising if it is to draw people to itself. Only an act that is surprising and bold is a political act, because only such an act will strike others and make them pay attention.

The very word politics derives from the Greek polis, which itself is rooted in the Greek pelein, a verb used to describe the circular motion of smoke rings rising up from out of a pipe. The point is that politics is the gathering of a plurality around a common center. The plurality does not become a singularity in circling around a polestar, but it does acknowledge something common, something that unites the members of a polity in spite of their uniqueness and difference.

When President Washington stepped down after his second term; when President Lincoln emancipated the slaves; when FDR created the New Deal; when President Eisenhower called the Arkansas National Guard into Federal Service in order to integrate schools in Little Rock; these presidents acted in ways that helped refine, redefine, and re-imagine what it means to be an American.

Arendt makes one further point about action and freedom that is important as it relates to the question: Does the President Matter? Courage, she writes, is "the political virtue par excellence." To act in public is to leave the security of one's home and enter the world of the public. Such action is dangerous, for the political actor might be jailed or even killed for his deeds. Arendt's favorite example of political courage is Socrates, who was killed for his courageous engagement of his fellow Athenians. We must always recall that Socrates was sentenced to death for violating Athenian law.

Political action also requires courage because the actor can suffer a fate even worse than death. He may be ignored. At least to be killed for one's ideas means that one is recognized as capable of action, of saying and doing something that matters. To be ignored, however, denies the actor the basic human capacity for action and freedom.

One fascinating corollary of Arendt's understanding of the identity of action and freedom is that action, any action—any original deed, any political act that is new and shows leadership—is, of necessity, something that was not done before. It is, therefore, always against the law.

This is an insight familiar to readers of Fyodor Dostoevsky. In Crime and Punishment Raskolnikov says:

Let's say, the lawgivers and founders of mankind, starting from the most ancient and going on to the Lycurguses, the Solons, the Muhammads, the Napoleons, and so forth, that all of them to a man were criminals, from the fact alone that in giving a new law they thereby violated the old one.

All leaders are, in important ways, related to criminals. This is an insight that Arendt and Nietzsche share.

Shortly after we began to plan this conference, I heard an interview with John Ashcroft speaking on the Freakonomics Radio Show. He said:

"Leadership in a moral and cultural sense may be even more important than what a person does in a governmental sense. A leader calls people to their highest and best. ... No one ever achieves greatness merely by obeying the law. People who do above what the law requires become really valuable to a culture. And a President can set a tone that inspires people to do that."

My first reaction was: This is a surprising thing for the Attorney General of the United States to say. My second reaction was: I want him to speak at the conference. Sadly, Mr. Ashcroft could not be with us here today. But this does not change the fact that, in an important way, Ashcroft is right. Great leaders will rise above the laws in crisis. They will call us to our highest and best.

What Ashcroft doesn't quite say, and yet Arendt and Dostoevsky make clear, is that there is a thin and yet all-so-important line separating great leaders from criminals. Both act in ways unexpected and novel. In a sense, both break the law.

But only the leader's act shows itself to be right and thus re-makes the law. Hitler may have acted and shown a capacity for freedom; his action, however, was rejected. He was a criminal, not a legislator. Martin Luther King Jr. and Gandhi also broke laws in acts of civil disobedience. Great leaders show in their lawbreaking that the earlier law had been wrong; they forge a new moral, and eventually written, law through the force and power of moral example.

In what is perhaps the latest example in the United States of a presidential act of lawbreaking, President George W. Bush clearly broke both U.S. and international law in his prosecution of the war on terror. At least at this time, it seems painfully clear that President Bush's decision to systematize torture stands closer to a criminal act than to an act of great legislation.

In many ways, presidential politics in the 21st century takes place in the shadow of George W. Bush's overreach. One result is that we have reacted against great and daring leadership. In line with the spirit of equality that drives our age, we ruthlessly expose the foibles, missteps, scandals, and failures of anyone who rises to prominence. Bold leaders are risk takers. They fail and embarrass themselves. They have unruly skeletons in their closets. They will hesitate to endure, and will rarely prevail in, the public inquisition that the presidential selection process has become.

The candidates who are inoffensive enough to prevail are branded by their consultants as pragmatists. Our current pragmatists are products of Harvard Business School and Harvard Law School. Mr. Romney loves data. President Obama worships experts. They are both nothing if not faithful to the doctrine of technocratic optimism: that with the right people in charge, we can do anything. The only problem is that they refuse to tell us what it is they want to do. They have forgotten that politics is a matter of thinking, not a pragmatic exercise in technical efficiency.

Look at the Mall in Washington: the Washington Monument honors our first president, and it is joined by the Jefferson Memorial, the Lincoln Memorial, and the Memorial to Franklin Delano Roosevelt. There is no monument to any president since FDR. And yet, just two years ago we dedicated the Martin Luther King Memorial. It does not seem like an accident that the leaders of the Civil Rights Movement were not politicians. Our leaders today do not gravitate to the presidency; the presidency does not attract leaders. Bold leaders today are not the people running for office.

Yet, people crave what used to be called a statesman. To ask: "Does the President Matter?" is to ask:  might a president, might a political leader, be able to transform our nation, to restore the dignity and meaning of politics? It is to ask, in other words, for a miracle.

At the end of her essay, "What is Freedom?", Hannah Arendt said this about the importance of miracles in politics.

Hence it is not in the least superstitious, it is even a counsel of realism, to look for the unforeseeable and unpredictable, to be prepared for and to expect “miracles” in the political realm. And the more heavily the scales are weighted in favor of disaster, the more miraculous will the deed done in freedom appear.

She continued:

It is men who perform miracles—men who because they have received the twofold gift of freedom and action can establish a reality of their own.

I don't know if the president matters.

But I know that he or she must. Which is why we must believe that miracles are possible. And that means we, ourselves, must act in freedom to make the miraculous happen.

In the service of the not-yet-imagined possibilities of our time, our goal over the two days of the conference was to engage in the difficult, surprising, and never-to-be-understood work of thinking, and of thinking together, in public, amongst others. We heard from philosophers and businessmen, artists and academics. The speakers came from across the political spectrum, but they shared a commitment to thinking beyond ideology. Such thinking is itself a form of action, especially so in a time of such ideological rigidity. Whether our meeting here at Bard gives birth to the miracle of political action--that is up to you. If we succeeded in thinking together, in provoking, and in unsettling, we perhaps sowed the seeds that will one day blossom into the miracle of freedom.

-RB

Watch Roger's opening talk from the conference, "Does the President Matter?" here.

5Oct/110

Truth-telling in the Age of Opinion - Laurel Harig

In the age of rapid-response media, truths are deployed like hard drives, consumed and then over-written by newer, faster, more expedient truths. We want instant insight and commentary, not hard-won wisdom. Contemporary journalism in the United States is broken when there is no culture of analysis to support it, when pundits offer pre-packaged opinions that are wielded with nonchalance by everyone from citizens to senators. Debate meanders in circles, and there is no resolution because there are no facts or values held in common. This is how something like climate change, which is recognized by 98% of scientists, can become a matter for debate, and how the remaining 2% of scientists can become a credible reason for doubt. After all, truth is all in how you tell it: which facts you reveal and which you keep hidden, which are distorted and which are twisted beyond recognition by losing their context and history. The appearance of fact is enough in a timeless, soulless world. What is truth-telling in the age of opinion?

Listening to Syrian-American hip-hop artist Omar Offendum’s album SyrianamericanA throws into relief the tensions and richness of cross-cultural experience. The narrator is living a life that is familiar to those who cross between the Arab world and the West. Each verse becomes a meditation on colonialism, Orientalism, the nomadism of “success,” and the feeling of being torn between two cultures, two moralities, two inseparable, dissimilar lives. “Look up in the sky, it’s a bird, it’s a plane,” he sings, “it’s an Arab superhero and he came to bring change.” The voices of truth-telling in the future belong to those who are caught, by chance or circumstance, in between two or more conflicting narratives of power: when ideologies are examined in the light of the lives we must live, the story unravels and we can see beyond the frame.

Tunisian revolutionaries have declared, “We don’t want to be called by the names of flowers!” This is understandable, given that the Tunisian Ministry of Tourism marketed the country for years as a land of exotic fragrances and Mediterranean charm within easy reach of Europe. The Arab revolutions, not “the Arab Spring” or “the Jasmine Revolution,” offer new possibilities for speaking and thinking from and to the centers of power. Once ignored by the mainstream media, activists, in particular those from the Egyptian youth movements, have been featured on Al-Jazeera and honored by the establishments of “human rights.” With this recognition, however, comes an even greater challenge. The call by Egyptian activists at the beginning of the revolution was for each man, woman and child to come down into the square. It was not only those with access to blogs, Twitter, or Facebook, those who are young, globally connected, or connected to leftist politics, who were responsible for the events that continue to shake the foundations of the world we thought we knew. We all have a responsibility to the cities and the politics we find ourselves in. Hannah Arendt famously said that “freedom has a space, a place” (The Promise of Politics). These spaces, Arendt says, are the heart of the city or polis and contain the essence of democracy. The Bahraini regime knew this perfectly well when it destroyed the Pearl Roundabout, which had been the epicenter of demonstrations in March of 2011. Around the world, public spaces are being reshaped and reclaimed as spaces of dissent, debate and action.

These spaces are not given for free. Waves of development have ripped the collective spaces out of cities, turning historic neighborhoods into blocks of “luxury flats” or boutique hotels which cater exclusively to foreigners. Gentrification pushes families further away from the centers of cities into hard-to-access suburbs. Beirut’s cosmopolitan charm is largely a fiction invented by the tourism industry. Recently in Beirut, several friends have been wounded by thugs of the Syrian regime. People are pulled off bar stools for criticizing Assad’s regime and beaten up in nearby alleys. The freedom that we struggle for is not an abstraction, but a daily sensuous reality. It demands an awareness of and a greater attention to the small politics of daily life. Sometimes a revolution can be a few previously unspoken words; sometimes it can be a look for or against what is easily apparent. At all times, it is the will to resist “the way things are.”

A friend of ours who was being prosecuted by a military court for his activism committed suicide last week in Beirut. “I die as I have lived,” he wrote, “a free spirit, an anarchist, owing no allegiance to rulers heavenly or earthly.” In the discourse surrounding his death, however, one truth risks being drowned out by the fervor to write his death as a heroic gesture, a revolutionary position. That truth, rather quietly, is that Nour had struggled for many years with severe depression. It seems wrong to paint him as a hero in death when he might have lived as a man. If we were to follow Nour’s example, we would work tirelessly and quietly for the causes we believe in. Truth, in the manner of an enduring wisdom, is always soft-spoken, always humble, and often found in unexpected places.

http://www.youtube.com/watch?v=Tz50_1Y2pXU&feature=player_em

3Oct/110

Truth-telling in an Age of Wiki - Arthur Holland Michel

You Can’t Tell It if You Don’t Have It

We cannot be truth-tellers unless we are truth-seekers. So, in a roundabout way, if we want to talk about truth-telling in an age of democracy, we must first think about truth-seeking in an age of democracy.

Changing Times, Changing Truth

We also have to face the fact that the true democracy of our time is not a democracy of structure and process, but instead a democracy of information. Governments are no longer held accountable strictly through an institutional system of checks and balances meant to keep tabs on behalf of the people, but directly by the people, through WikiLeaks, Twitter, Facebook, and YouTube. Our understanding of truth has to play catch-up to the times we live in.

Nowhere has this been seen more clearly than in the Arab Spring. Investigations by government agencies or international watchdogs have been replaced with this:

@AnonymousSyria: Brave protesters defy the terror of machine guns in #Palmyra tonight. Incredible. At least one injured. #Syria

With “a click of the button,” @AnonymousSyria distributed, and continues to distribute, information on the government crackdown to the almost 6,000 people following his Twitter feed. These people re-post the video. International news channels write articles about the video. People in other countries re-post the articles. Information travels very quickly. And most importantly, it bypasses the government.

This democracy of information has also caused an information glut which calls into question the very nature of truth. The internet has made the “truth” more accessible to the average person than it has ever been, but it has also deeply unsettled our confidence in that truth. Encyclopaedias held for several centuries the position of books of truth, but today’s (by a long shot) most popular, prolific, and expansive encyclopaedia can be edited by any person with an internet connection and so much as half a brain. “Seeking” and telling the truth in an age of truth-glut, when truths can be created and deleted at the click of a button, is deeply problematic. It means that the world is full of truths, often contradictory.

Truth-seeking as a Private Struggle

For a long time, “truth-seeking” has meant “demanding the truth.” In a democracy, the energy with which we seek truth is the energy that fuels a healthy civil society. That same energy arguably almost toppled the British government this summer, when the News of the World hacking scandal broke and the government was found to have been living cosily for over a decade in Rupert Murdoch’s pocket. The scandal was old-fashioned in the sense that the British public still felt they had to “demand the truth” from the politicians. Scandals of this kind are becoming a rare breed, because politicians are no longer the merchants of truth. We have WikiLeaks for that. Anybody with an internet connection can access a world of information. Truth-seeking no longer happens in the public – and by that I mean institutional – arena. Instead, it happens on the personal computer. Truth-seeking has become a private struggle. The truth-seeking of our age is one of which Hannah Arendt would approve.

Don’t be Fooled

But in that privacy, it is easy to be misled. And likewise, it is easy to mislead others. That is why we must attend anew to how to be truth-seekers and truth-tellers. In the age of Wiki, we are so deeply beset by information that dresses itself as fact that the question has become: how do we know when a fact is actually true? If institutional systems no longer hold the keys to the vaults of truth, how do we as individuals certify truth when we find it? Wikipedia is hardly the answer. What, then, are we left with?

The Gaddafi loyalists and mercenaries who were holed up in abandoned offices and apartment blocks and houses on the outskirts of Tripoli and in the town of Bani Walid last month may very well have been “demanding the truth” about the situation. But imagine that they receive two conflicting reports – one report states that Gaddafi is alive and well and has opened a counter-offensive on the rebel stronghold at Benghazi. The other report informs the men that Gaddafi has surrendered himself. The loyalists, though they demand the truth, will be far more likely to openly accept the former report as the true one. They are not consciously lying to themselves. Likewise, when they rush to their comrades and tell them that Gaddafi is on the counter-offensive, they are not consciously choosing to tell a lie. Truth-telling, then, is not necessarily about actively choosing to tell the truth over a lie, and neither is truth-seeking the mere act of “demanding the truth.”

If You Aren’t Getting Any Closer, You’re Getting Further Away

Truth-telling as it applies to a healthy democracy is an honest relationship to facts and to the nature of ‘fact.’ The Gaddafi soldiers have fallen back on instinct and belief as their compass for truth. What we have to realise is that we too fall back on instinct most of the time when we decide between truths. Congresswoman Bachmann is a good example – though we might disagree with her views, her unrelenting belief in the creation myth is a product of her humanity, not of her stupidity. Falling back on instinct may be useful when, as Frost would put it, you come to a fork in the road, but when it comes to our system of governance and collective decision-making, instinct just will not cut it. As truth-seeking becomes a personal struggle, we need to acquire the tools to engage with information on a personal level. For each “truth” that we find ourselves believing, we should ask ourselves, “Why do I believe this?” If we cannot satisfactorily answer that question, then we cannot trust the information. We must, I believe, settle for a constant sense of unease with the facts. Our challenge in the age of Wiki is to accept that we can never get to truth; we can only ever be drawing nearer to it or drifting further from it.

How to be a Truth-Teller in an Age of Wiki

Truth-seeking must therefore become a constant state of seeking the truth, instead of just a way to know when the truth has been reached. Real truth-seekers do not find a truth, settle on it, and move on. They establish what they can from the available information – and they keep seeking, motivated by a healthy dose of scepticism and a strong aversion to the complacency of “knowing the truth.” When we go to Wikipedia, we must read the sources. When we have a good conversation, we have to remember that its goodness, and not its content, was the only true fact about it. And for every news article we read, we must read three more, from different sources. To be true truth-seekers, we must settle for being like Moses; we must accept that we will be denied the Promised Land. We are denied pure truth just as much now as we were twenty-two years ago, when I was born. The impossibility of truth, and therefore of proper truth-telling, remains a stubborn fact that we must grapple with. What has changed is that our struggle to draw near truth is now a personal struggle. And, like Moses, we must never lose faith that it is out there. As truth-tellers and truth-seekers we must be relentlessly tenacious. It is that faith and tenacity which will keep democracy alive.

If this is the necessary truth-seeking for our age, the correct truth-telling will therefore be an understanding of the limits of our ability to attain truth, and a respect for the great power we wield to relay facts and to have those facts taken as true. In other words, truth-telling is about believing we are powerless while acting as though we have a great and potentially destructive power.

Moral of the Story: Ask Questions, Because You Know Nothing

I remember when we read King Lear in my freshman year of college. All my classmates said that the Earl of Gloucester began to see the truth only after he was blinded. We all agreed that this was the great irony of the play. But what Gloucester really does is what we have to begin doing as truth-tellers in an age of Wiki – his blindness makes him realise the limits of his ability to know the truth of the physical world, so he starts to ask the right questions. The great irony of our own time is that the unprecedented wealth of information at our disposal really only shows that we know nothing for certain. And so, in the age of Wiki, we have to ask the right questions, every single day and every hour – not of governments, but of each other and, most importantly, of ourselves.