The Hannah Arendt Center recently received a special donation from Konstanze Bachman in honor of her mother, Maria Schmitt-Kuemmell. We accept this gift with our sincerest thanks.
The donation is Hannah Arendt's 1929 dissertation Der Liebesbegriff bei Augustin: Versuch einer philosophischen Interpretation, or "The Concept of Love in Augustine: An Attempt at a Philosophical Interpretation" (Berlin: Springer Verlag, 1929).
Under the direction of Karl Jaspers and the influence of Martin Heidegger, Hannah Arendt began her scholarly career in 1929 by writing a dissertation on Saint Augustine's concept of caritas, or neighborly love. Four years later, her life in Germany ended abruptly with Hitler's rise to power in 1933, an event that forced Arendt into exile in France. She eventually moved to New York, taking her dissertation with her.
During the late 1950s and early 1960s, Arendt completed some of her most influential studies of political life. It was at this time that Arendt also reworked her dissertation using some of the arguments and observations she had used in her other works. In this sense, Arendt's dissertation became a bridge over which she traveled back and forth between her early years in Germany and her professional career in New York, two periods which, despite their separation in time and space, both found relevance in Augustine's questions about human freedom and the possibility of political life in a scientific and technological age.
This is the ninth volume of the Philosophische Forschungen series, edited by Karl Jaspers in Heidelberg.
The Hannah Arendt Center intends to archive the book in the Hannah Arendt Collection at Bard College's Stevenson Library.
(Featured Image: An aerial view of a jumble of cars, Source: Slickzine)
“If, in concluding, we return once more to the discovery of the Archimedean point and apply it, as Kafka warned us not to do, to man himself and to what he is doing on this earth, it at once becomes manifest that all of his activities, watched from a sufficiently removed vantage point in the universe, would appear not as activities of any kind but as processes, so that, as a scientist recently put it, modern motorization would appear like a process of biological mutation in which human bodies gradually begin to be covered by shells of steel.”
--Hannah Arendt, The Human Condition, 322-3.
In the preface to The Human Condition, Hannah Arendt not only starts provocatively from the point of view of an "earth-born object made by man," but describes this object, the recently launched Sputnik satellite, as the realization of the dream of science fiction literature that illuminates "mass sentiments and mass desires." In the passage quoted above, from the very last section of the book, Arendt returns to space and for a moment herself sounds like a science fiction writer, inviting the reader to look with her from a number of challenging perspectives.
Monday, August 16, 2010: “Earth Alienation: From Galileo to Google”
Lecturer: Roger Berkowitz, Associate Professor of Political Studies and Human Rights at Bard College; Academic Director, Hannah Arendt Center for Politics and the Humanities.
In this lecture, Roger Berkowitz welcomes the incoming Class of 2014 at Bard College with an important question: "Is humanity important?" The human race has witnessed impressive scientific and technological achievements, some of the most remarkable of which have occurred in the past 50 years. While some of these have advanced the history of humanity, others threaten to dampen its spark. Nuclear and biological weapons are capable of killing untold millions of people, and the urge to embrace automation in our everyday lives cultivates the fear that society may one day embrace euthanasia as a way to rid itself of "superfluous persons." Acknowledging this increasingly dangerous world we live in, Berkowitz argues that it is imperative that we, at this moment in time, take a closer look at ourselves and consider our significance. He proposes two sources that can help us in our task: Galileo and Google.
Privacy is sacrificed unthinkingly to government and corporations; transparency and sharing trump depth and inscrutability; and we justifiably bemoan the death of privacy. Technology is blamed, but the truth is that privacy is being lost not because it can be, but because we have forgotten why it is important.
Thomas Meaney and Yascha Mounk argue in a long essay in The Nation that the democratic moment is passing, if not already past. The sweep of their essay is broad. Alexis de Tocqueville saw American democracy replacing the age of European aristocracy. He worried that democratic equality would be unable to preserve the freedoms associated with aristocratic individualism, but he knew that the move from aristocracy to democracy was unstoppable. So today, Meaney and Mounk write, we are witnessing the end of the age of democracy and equality. This is so, they suggest, even if we do not yet know what will replace it.
Meaney and Mounk build their argument on a simple critical insight, a kind of “unmasking” of what might be called the hypocrisy of modern democracy. Democracy is supposed to be the will of the people. It is a long time since the small group of Athenian citizens governed themselves. Modern democrats have defended representative democracy as a pragmatic alternative because gathering all the citizens of modern states together for democratic debate is simply impossible. But technology has changed that.
As long as direct democracy was impracticable within the confines of the modern territorial state, the claim that representative institutions constituted the truest form of self-government was just about plausible. But now, in the early twenty-first century, the claim about direct democracy being impossible at the national level and beyond is no longer credible. As the constraints of time and space have eroded, the ubiquitous assumption that we live in a democracy seems very far from reality. The American people may not all fit into Madison Square Garden, but they can assemble on virtual platforms and legislate remotely, if that is what they want. Yet almost no one desires to be that actively political, or to replace representation with more direct political responsibility. Asked to inform themselves about the important political issues of the day, most citizens politely decline. If forced to hold an informed opinion on every law and regulation, many would gladly mount the barricades to defend their right not to rule themselves in such a burdensome manner. The challenge posed by information technology lies not in the possibility that we might adopt more direct forms of democracy but in the disquieting recognition that we no longer dream of ruling ourselves.
In short, democracy understood as self-government is once again possible in the technological age. Such techno-democratic possibility is not, however, leading to more democracy. Thus, Meaney and Mounk conclude, technology allows us to see through the illusions of democracy and to expose it as hypocritical and hollow.
The very word “democracy” indicts the political reality of most modern states. It takes a considerable degree of delusion to believe that any modern government has been “by” the people in anything but the most incidental way. In the digital age, the claim that the political participation of the people in decision-making makes democracy a legitimate form of government is only that much hollower. Its sole lingering claim to legitimacy—that it allows the people the regular chance to remove leaders who displease them—is distinctly less inspiring. Democracy was once a comforting fiction. Has it become an uninhabitable one?
Such arguments by “unmasking” are attractive and popular today. They work, as Peter Baehr argued recently in a talk at the Arendt Center, through the logic of exposure, by accusing “a person, argument or way of life of being fundamentally defective.” It may be that there are populist democratic revolts happening in Turkey and Thailand, revolts that are unsettling to elites. Similarly, the democratic energies of the Tea Party and Occupy Wall Street are seen by many as evidence of the crisis of democracy. Democracy, it is said, is defective, based on a deception and buttressed by illusion. But it hardly does a service to truth to see democratic ferment as proof of the end of democracy.
Meaney and Mounk argue that three main developments have brought democracy to the brink of crisis. First, the entanglement of democracies within a global financial system means that democratic leaders are increasingly beholden to banks and financiers rather than to their citizens.
[W]ith world trade more pervasive, and with the domestic economies of even the most affluent nations deeply dependent on foreign investments, the ideological predilections of a few governments have become the preoccupation of all. There is a reason why all mainstream politicians now make decisions based on variables such as the risk of capital flight and the reactions of bond rating agencies, rather than on traditional calculations about the will of their electorates. As the German economist Wolfgang Streeck has argued, this shift in political calculus occurred because the most significant constituency of democracies is no longer voters but the creditors of public debt.
Second, democracies have come to be associated not just with self-government, but with good government leading to peace and plenty. But this is a fallacy. There is no reason that democracies will be better governed than autocracies or that economic growth in democracies will outperform that of autocracies. This creates an “expectations gap” in which people demand of democracies a level of success they cannot deliver.
Third, democracy has largely been sold around the world as "synonymous with modernization, economic uplift and individual self-realization." Democratic politicians, often an elite, wrapped their power in largesse and growth that papered over important religious and moral differences. Today populism in Thailand, Egypt, and Turkey clashes with the clientelism of democratic rulers and threatens the quasi-democratic alliance of the elites and the masses.
Meaney and Mounk are no doubt correct in perceiving challenges to democracy today. And they are right that democratic citizens consistently prefer technocratic competence over democratic dissent and debate. As they write,
…we live in highly bureaucratic states that require ever-increasing degrees of technical competence. We expect our governments to do more and to do it better. The more our expectations are addressed, the more bureaucratic and opaque government becomes and the less democratic control is possible.
The danger of representative democracy is that it imagines government as something we outsource to a professional class so that we can get on with what is most important in our lives. There is a decided similarity between representative democracy and technocracy, in that both presume that political administration is a necessary but uninspiring activity to be avoided and relegated to a class of bureaucrats and technocrats. The threat of representative democracy is that it is founded upon and regenerates an anti-political and apolitical culture, one that imagines politics as menial work to be done by others.
What Meaney and Mounk overlook, however, is that at least in the United States, we have never simply been a representative democracy. The United States is a complicated political system that cannot justly or rightly be called either a democracy or a representative democracy. Rightly understood, the USA is a federal, democratic, constitutional republic. Its democratic elements are both limited and augmented by its constitutional and federalist character as well as by its republican tradition. At least until recently, it combined a strong national government with equally strong traditions of state and local power. If citizens could not be involved in national politics, they could be, and often were, highly involved in local governance. And local institutions, empowered by the participation of energized citizens, were frequently as powerful as, if not more powerful than, national institutions.
Of course, the late 20th and early 21st centuries have witnessed a tectonic constitutional shift in America away from local institutions and toward a highly powerful, centralized, and bureaucratized national government. But this shift is neither inevitable nor irreversible. Indeed, largely driven by the right, the new federalism has returned to states some traditional powers. These powers can be used, however, by the left and the right. As Ben Barber has been arguing from the left, there is an opportunity in the dysfunctional national government to return power and vitality to our cities and our towns. Both Occupy Wall Street and the Tea Party show that there are large numbers of people who are dissatisfied with our political centralization and feel disenfranchised and distant from the ideals of democratic self-government. The Tea Party, more than Occupy, has channeled that disenchantment into local political organizations and institutions. But the opportunity to do so is present on the left as well as on the right.
There is a deeply religious element to American democracy that is bound up with the idea and reality of American exceptionalism, a reservoir of democratic potency that is not yet tapped out. Meaney and Mounk see this, albeit in a throwaway line that is buried in their essay:
Outside of a few outliers such as India and the United States, where deep in the provinces one still encounters something like religious zeal for democracy, many people in nominal democracies around the world do not believe they are inheritors of a sacral dispensation. Nor should they.
We are witnessing a crisis of democracy around the world, in the sense that both established and newer democracies are finding their populations dissatisfied. While it is true that people are not flocking to technical versions of mass democracy, they are taking to the streets, organizing protests, and involving themselves in the activities of citizenship. Meaney and Mounk are right: democracy is not assured, and we should never simply assume its continued vitality. But neither should we write it off entirely. Their essay should be read less as an obituary than as a provocation. But it should be read. It is your Weekend Read.
"The end of the old is not necessarily the beginning of the new."
Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and through the homegrown movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified or ought to be signified by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg that it took form within.
So, whether you take the chicken's point of view or adopt the perspective of the omelet, looking backwards and reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy in 1660 under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative concerning linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned
A3. by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. the Israelites are freed from bondage and escape from Egypt
C2. the revelation at Sinai fully establishes their covenant with God
C3. after many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new one).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at that time but also those not yet born, the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art, and the essay and novel in literature, gave way to Cubism and subsequent forms of modern art, and to stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of the very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Niles Eldredge and Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
Consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming. What we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social: what Bateson referred to as metapatterns, the subject of further elaboration by the biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored with John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and, more importantly, a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during that period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah, chief among its laws the Ten Commandments, the fourth of which, and the one presented in greatest detail, is the commandment to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Jonathan Schell has died. I first read “The Fate of the Earth” as a college freshman in Introduction to Political Theory, and it was and is one of those books that forever impact the young mind. Jim Sleeper, writing in the Yale Daily News, gets to the heart of Schell’s power: “From his work as a correspondent for The New Yorker in the Vietnam War through his rigorous manifesto for nuclear disarmament in ‘The Fate of the Earth,’ his magisterial re-thinking of state power and people’s power in ‘The Unconquerable World: Power, Nonviolence, and the Will of the People,’ and his wry, rigorous assessments of politics for The Nation, Jonathan showed how varied peoples’ democratic aspirations might lead them to address shared global challenges.” The obituary in the New York Times adds: “With ‘The Fate of the Earth’ Mr. Schell was widely credited with helping rally ordinary citizens around the world to the cause of nuclear disarmament. The book, based on his extensive interviews with members of the scientific community, outlines the likely aftermath of a nuclear war and deconstructs the United States’ long-held rationale for nuclear buildup as a deterrent. ‘Usually, people wait for things to occur before trying to describe them,’ Mr. Schell wrote in the book’s opening section. ‘But since we cannot afford under any circumstances to let a holocaust occur, we are forced in this one case to become the historians of the future — to chronicle and commit to memory an event that we have never experienced and must never experience.’”
In an interview, Simon Schama, author of the forthcoming book and public television miniseries "The Story of the Jews," uses early Jewish settlement in America as a way into why he thinks that Jews have often been cast as outsiders: "You know, Jews come to Newport, they come to New Amsterdam, where they run into Dutch anti-Semites immediately. One of them, at least — Peter Stuyvesant, the governor. But they also come to Newport in the middle of the 17th century. And Newport is significant in Rhode Island because Providence colony is founded by Roger Williams. And Roger Williams is a kind of fierce Christian of the kind of radical — in 17th-century terms — left. But his view is that there is no church that is not corrupt and imperfect. Therefore, no good Christian is ever entitled to form a government [or] entitled to bar anybody else’s worship. That includes American Indians, and it certainly includes the Jews. And there’s an incredible spark of fire of toleration that begins in New England. And Roger Williams is himself a refugee from persecution, from Puritan Massachusetts. But the crucial big point to make is that Jews have had a hard time when nations and nation-states have founded themselves on myths about soil, blood and tribe."
Noam Scheiber describes the “wakeful nightmare for the lower-middle-aged” that has taken over the world of technology. The desire for the new, new thing has led to disdain for age; famed V.C. Vinod Khosla told a conference that “people over forty-five basically die in terms of new ideas.” The value of experience and the wisdom of age, or even of middle age, are scorned when everyone walks around with encyclopedias and instruction manuals in their pockets. The result: “Silicon Valley has become one of the most ageist places in America. Tech luminaries who otherwise pride themselves on their dedication to meritocracy don’t think twice about deriding the not-actually-old. ‘Young people are just smarter,’ Facebook CEO Mark Zuckerberg told an audience at Stanford back in 2007. As I write, the website of ServiceNow, a large Santa Clara–based I.T. services company, features the following advisory in large letters atop its ‘careers’ page: ‘We Want People Who Have Their Best Work Ahead of Them, Not Behind Them.’”
Kenan Malik wonders how non-believers can appreciate sacred art. Perhaps, he says, the godless can understand it as "an exploration of what it means to be human; what it is to be human not in the here and now, not in our immediacy, nor merely in our physicality, but in a more transcendental sense. It is a sense that is often difficult to capture in a purely propositional form, but one that we seek to grasp through art or music or poetry. Transcendence does not, however, necessarily have to be understood in a religious fashion, solely in relation to some concept of the divine. It is rather a recognition that our humanness is invested not simply in our existence as individuals or as physical beings but also in our collective existence as social beings and in our ability, as social beings, to rise above our individual physical selves and to see ourselves as part of a larger project, to project onto the world, and onto human life, a meaning or purpose that exists only because we as human beings create it."
The Nieman Journalism Lab has the straight scoop about the algorithm, written by Ken Schwenke, that wrote the first story about last week's West Coast earthquake. Although computer programs like Schwenke's may be able to take over journalism's function as a source of initial news (that is, a notice that something is happening), it seems unlikely that they will be able to take over one of its more sophisticated functions, which is to help people situate themselves in the world rather than merely know what's going on in it.
In an interview, Kate Beaton, the cartoonist responsible for the history and literature web comic Hark! A Vagrant, talks about how her comics, perhaps best described as academic parody, can be useful for teachers and students: "Oh yes, all the time! That’s the best! It’s so flattering—but I get it, the comics are a good icebreaker. If you are laughing at something, you already like it, and want to know more. If they’re laughing, they’re learning, who doesn’t want to be in on the joke? You can’t take my comics at face value, but you can ask, ‘What’s going on here? What’s this all about?’ Then your teacher gets down to brass tacks."
From the Hannah Arendt Center Blog
This week on the blog, our Quote of the Week comes from Arendt Center Research Associate Thomas Wild, who looks at the close friendship between Hannah Arendt and Alfred Kazin, a friendship forged over literature, writers, and the power of the written word.
In the most recent NY Review of Books, David Cole wonders if we've reached the point of no return on the issue of privacy:
“Reviewing seven years of the NSA amassing comprehensive records on every American’s every phone call, the board identified only one case in which the program actually identified an unknown terrorist suspect. And that case involved not an act or even an attempted act of terrorism, but merely a young man who was trying to send money to Al-Shabaab, an organization in Somalia. If that’s all the NSA can show for a program that requires all of us to turn over to the government the records of our every phone call, is it really worth it?”
Cole is thoroughly convincing in listing the dangers to privacy in the new national security state. Like many others in the media, he speaks the language of necessary trade-offs involved in living in a dangerous world, but suggests we are trading away too much and getting back too little in return. He warns that if we are not careful, privacy will disappear. He is right.
What is often forgotten and is absent in Cole’s narrative is that most people—at least in practice—simply don’t care that much about privacy. Whether snoopers promise security or better-targeted advertisements, we are willing to open up our inner worlds for the price of convenience. If we are to save privacy, the first step is articulating what it is about privacy that makes it worth saving.
Cole simply assumes the value of privacy and doesn’t address the benefits of privacy until his final paragraph. When he does come to explaining why privacy is important, he invokes popular culture dystopias to suggest the horror of a world without privacy:
More broadly, all three branches of government—and the American public—need to take up the challenge of how to preserve privacy in the information age. George Orwell’s 1984, Ray Bradbury’s Fahrenheit 451, and Philip K. Dick’s The Minority Report all vividly portrayed worlds without privacy. They are not worlds in which any of us would want to live. The threat is no longer a matter of science fiction. It’s here. And as both reports eloquently attest, unless we adapt our laws to address the ever-advancing technology that increasingly consumes us, it will consume our privacy, too.
There are two problems with such fear mongering in defense of privacy. The first is that these dystopias seem too distant: most of us don’t experience the violations of our privacy by the government or by Facebook as intrusions. The second is that, on a daily basis, the fact that my phone knows where I am, and that in a pinch the government could locate me, is pretty convenient. These dystopian visions can appear not so dystopian.
Most writing about privacy simply assumes that privacy is important. We are treated to myriad descriptions of the ways privacy is violated. The intent is to shock us. But rarely are people shocked enough to respond in ways that actually protect the privacy they so often say they cherish. We have collectively come to see privacy as a romantic notion, a long-forgotten idyll, exotic and even titillating in its possibilities, but ultimately irrelevant in our lives.
There is, of course, a reason why so many advocates of privacy don’t articulate a meaningful defense of privacy: It is because to defend privacy means to defend a rich and varied sphere of difference and plurality, the right and importance of people actually holding opinions divergent from one’s own. In an age of political correctness and ideological conformism, privacy sounds good in principle but is less welcome in practice when those we disagree with assert privacy rights. Thus many who defend privacy do so only in the abstract.
When it comes to actually allowing individuals to raise their children according to their religious or racial beliefs or when the question is whether people can marry whomever they want, defenders of privacy often turn tail and insist that some opinions and some practices must be prohibited. Over and over today, advocates of privacy show that they value an orderly, safe, and respectful public realm and that they are willing to abandon privacy in the name of security and a broad conception of civility according to which no one should have to encounter opinions and acts that give them offense.
The only major thinker of the last 100 years who insisted fully and consistently on the crucial importance of a rich and vibrant private realm is Hannah Arendt. Privacy, Arendt argues, is essential because it is what allows individuals to emerge as unique persons in the world. The private realm is the realm of “exclusiveness”; it is that realm in which we “choose those with whom we wish to spend our lives, personal friends and those we love.” The private choices we make are guided by nothing objective or knowable but by that which “strikes, inexplicably and unerringly, at one person in his uniqueness, his unlikeness to all other people we know.” Privacy is controversial because the “rules of uniqueness and exclusiveness are, and always will be, in conflict with the standards of society.” Arendt’s defense of mixed marriages (and by extension gay marriages) proceeds—no less than her defense of the right of parents to educate their children in single-sex or segregated schools—from her conviction that the uniqueness and distinction of private lives need to be respected and protected.
Privacy, for Arendt, is connected to the “sanctity of the hearth” and thus to the idea of private property. Indeed, property itself is respected not on economic grounds, but because “without owning a house a man could not participate in the affairs of the world because he had no location in it which was properly his own.” Property guarantees privacy because it enforces a boundary line, “a kind of no man’s land between the private and the public, sheltering and protecting both.” In private, behind the four walls of house and hearth, the “sacredness of the hidden” protects men from the conformist expectations of the social and political worlds.
In private, shaded from the conformity of societal opinions as well as from the demands of the public world, we can grow in our own way and develop our own idiosyncratic character. Because we are hidden, “man does not know where he comes from when he is born and where he goes when he dies.” This essential darkness of privacy gives flight to our uniqueness, our freedom to be different. It is in privacy, in other words, that we become who we are. What this means is that without privacy there can be no meaningful difference. The political importance of privacy is that privacy is what guarantees difference and thus plurality in the public world.
Arendt develops her thinking on privacy most explicitly in her essays on education. Education must perform two seemingly contradictory functions. First, education leads a young person into the public world, introducing and acclimating him to the traditions, public language, and common sense that precede him. Second, education must also guard the child against the world, caring for the child so that “nothing destructive may happen to him from the world.” The child, to be protected against the destructive onslaught of the world, needs the privacy that has its “traditional place” in the family.
Because the child must be protected against the world, his traditional place is in the family, whose adult members return back from the outside world and withdraw into the security of private life within four walls. These four walls, within which people’s private family life is lived, constitute a shield against the world and specifically against the public aspect of the world. This holds good not only for the life of childhood but for human life in general…Everything that lives, not vegetative life alone, emerges from darkness and, however strong its natural tendency to thrust itself into the light, it nevertheless needs the security of darkness to grow at all.
The public world is unforgiving. It can be cold and hard. All persons count equally in public, and little if any allowance is made for individual hardships or the bonds of friendship and love. Only in privacy, Arendt argues, can individuals emerge as unique individuals who can then leave the private realm to engage the political sphere as confident, self-thinking, and independent citizens.
The political import of Arendt’s defense of privacy is that privacy is what allows for meaningful plurality and differences that prevent one mass movement, one idea, or one opinion from imposing itself throughout society. Just as Arendt valued the federalism of the American Constitution because it multiplied power sources through the many state and local governments in the United States, so too did she value privacy because it nurtures meaningfully different and even opposed opinions, customs, and faiths. She defends the regional differences in the United States as important and even necessary to preserve the constitutional structure of dispersed power that she saw as the great bulwark of freedom against the tyranny of the majority. In other words, Arendt saw privacy as the foundation not only of private eccentricity, but also of political freedom.
Cole offers a clear-sighted account of the ways that government is impinging on privacy. It is essential reading and it is your weekend read.