Hannah Arendt Center for Politics and Humanities
21 Apr 2014

Amor Mundi 4/20/14


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

Is Capitalism a Social Good?

Rarely in the 21st century does a book capture the Zeitgeist, especially a book written by an empirical economist, published by a university press, and translated from French. And yet Thomas Piketty’s Capital in the Twenty-First Century, published by Harvard University Press, is suddenly everywhere. Andrew Hussey at The Guardian interviews Piketty, who argues that capitalism does not improve the quality of life for everyone. Piketty seeks to prove that capitalism is rigged in favor of the wealthy: the wealth of the wealthy increases faster than the income of the workers. His main contention is that over the centuries since the emergence of capitalism, the return on capital tends to be greater than the growth of the economy, which leads to Piketty’s final conclusion that increasing inequality is inevitable within capitalism – and will only get worse: “When I began, simply collecting data, I was genuinely surprised by what I found, which was that inequality is growing so fast and that capitalism cannot apparently solve it. Many economists begin the other way around, by asking questions about poverty, but I wanted to understand how wealth, or super-wealth, is working to increase the inequality gap. And what I found, as I said before, is that the speed at which the inequality gap is growing is getting faster and faster. You have to ask what does this mean for ordinary people, who are not billionaires and who never will be billionaires. Well, I think it means a deterioration in the first instance of the economic well-being of the collective, in other words the degradation of the public sector. You only have to look at what Obama's administration wants to do – which is to erode inequality in healthcare and so on – and how difficult it is to achieve that, to understand how important this is. There is a fundamentalist belief by capitalists that capital will save the world, and it just isn't so.
Not because of what Marx said about the contradictions of capitalism, because, as I discovered, capital is an end in itself and no more.” That the wealthy get wealthier under capitalism may seem obvious to some, but capitalism is widely embraced by the poor as well as the rich because it increases productivity and supposedly makes everybody better off. Capitalism may make some filthy rich, so the story goes, but it also allows more mobility of status and income than pre-capitalist economies, thus opening possibilities to everyone. Piketty argues against these truisms. In the end, however, whether inequality is good or bad is not an empirical question, and no amount of empirical research can tell us whether capitalism is good or bad. What Piketty does show convincingly is that capitalism will not lead to equality. For more on Piketty, see Roger Berkowitz’s essay at The American Interest.

Is Capitalist Inequality Really So Bad?

Perhaps the best review of Piketty’s Capital in the Twenty-First Century is by Martin Wolf, the Financial Times columnist. Wolf gives an excellent summary of Piketty’s four “remarkable achievements” and then considers what they mean. He makes clear the importance of Piketty’s book. But he also raises the question Piketty leaves unasked: “Yet the book also has clear weaknesses. The most important is that it does not deal with why soaring inequality – while more than adequately demonstrated – matters. Essentially, Piketty simply assumes that it does. One argument for inequality is that it is a spur to (or product of) innovation. The contrary evidence is clear: contemporary inequality and, above all, inherited wealth are unnecessary for this purpose. Another argument is that the product of just processes must be just. Yet even if the processes driving inequality were themselves just (which is doubtful), this is not the only principle of distributive justice. Another – to me more plausible – argument against Piketty’s is that inequality is less important in an economy that is now 20 times as productive as those of two centuries ago: even the poor enjoy goods and services unavailable to the richest a few decades ago.” This does not mean that Wolf thinks increasing inequality is unimportant. Rightly, he turns to Aristotle to make this most important point: “For me the most convincing argument against the ongoing rise in economic inequality is that it is incompatible with true equality as citizens. If, as the ancient Athenians believed, participation in public life is a fundamental aspect of human self-realization, huge inequalities cannot but destroy it.” You can read Eduardo Porter’s excellent review of the literature on the impact of wealth inequality on economic growth here. Of course, you should all read Piketty’s book for yourselves.

Fixed Records

In an online interactive feature from The New York Times, an excellent example of what internet journalism can do well, John Jeremiah Sullivan recounts his recent search for the 1930s blueswomen Elvie Thomas and Geeshie Wiley. Among his sources for the project was the blues scholar Mack McCormick, who has a mountain of blues material, photos and interviews as well as tracks, collected over several decades and now organized into something called “The Monster.” McCormick has been largely unable to produce writing from his collection; because he's sitting on sources that no one else has, and that few have access to, this failure represents an extraordinary series of lacunae in blues history. Sullivan notes, however, that McCormick is still as significant a figure as the field has: “He is on record (in one of two or three notably good profiles done on him over the years) as saying that the subject of [blues guitarist Robert] Johnson has gone dead on him. And he has said since that part of him wishes he hadn’t let that one singer, that riddle of a man, consume him. Which is a human thing to feel . . . except for when you happen to know more than anyone on earth about a subject that loads of people in several countries want to know more about. Then your inability to produce becomes not just a personal problem but a cultural one. It’s plausible that the scope of research finally got too large for any one mind, even a uniquely brilliant one, to hold in orbit. The point here is not to accuse or defend him, but rather to point out that even his footnotes, even the fragments from his research that have landed in other scholars’ pages, have been enough to place him among the two or three most important figures in this field. He’s one of those people whose influence starts to show up everywhere, once you’re sensitized to it.” Sullivan’s essay is an excellent walk through the historian's craft, a peek into how the record is made, as it were.
Although Arendt described the job of the historian as describing the world as it was, that task is more or less difficult depending on the preservation or availability of certain sources. Through a combination of resources and luck, Sullivan and his research assistant were able to piece together a little more than half the story he set out to tell; the rest is still absent, awaiting another curious investigator and another stroke of good fortune.

The Sacred and the Profane

[Image: Simonos Petra, a Greek Orthodox monastery on Mount Athos, built in the 14th century, enlarged by a Serbian king in 1364, and burned three times – most recently in 1891, together with its library.]

There's a Greek mountain, Athos, home to a number of Orthodox monasteries, and no females: no women, no female animals. In a short profile of the space, Tom Whipple notes that it is both sacred and profane: “Athos is a place where a bearded octogenarian who has not seen a woman in 60 years can venerate the bones of a two-millennia-dead saint, then pull out a mobile phone to speak to his abbot. Where a pilgrim with a wooden staff in one hand can have a digital camera in the other. And where, in the dim light of dawn matins, I can look on a church interior that would be instantly recognizable to a pilgrim from five centuries ago. Maybe this is part of the reason I come: to play the time-traveler?” Elsewhere on the peninsula is a monastery under siege for having broken with the Orthodox Patriarch, and another that is believed to be in part responsible for Greece's financial crash more than half a decade ago. Even here, men who have repudiated the world find that they live within it.

Get To Work

In an interview that covers his views on Ireland as a post-colonial site and the importance of gay themes in the Canon, Colm Toibin gives some advice to young writers: “I suppose the thing really is, you could suggest they might finish everything that they start. And the reason for that is, certainly with me, what happens is that something—an image, a memory, or something known, or something half thought of—stays in our mind, at some point or other it becomes a rhythm, and you write it down. Part of that is, you know it; you sort of know what you want to do. The chances are high of wanting to abandon it halfway through on the basis of, it really ceases to interest you because you know it already. And then you have to really push yourself to realize that other people don't know it. And that you're writing for communication, and that is not a private activity. Therefore you have to go on working—that's what the real work is maybe. But if you're young and starting off, it's so easy to abandon something at that point thinking, 'Oh yeah, I'm not sure there's any more I can gain from the writing of this.' And the answer is: You don't matter anymore. Get to work.”

Seeing The World Through God

Rod Dreher, who picked up Dante during a midlife crisis, suggests that the Divine Comedy is about learning to see the world as it is through the mediation of the divine: “Beatrice, a Florentine woman young Dante had loved from afar, and who died early, serves as a representation of Divine Revelation. What the poet says here is that on Earth she represented to him a theophany, a disclosure of the divine. When she died, Dante forgot about the vision of divine reality she stood for. He allowed his eyes to be turned from faith—the hope in ‘the substance of things hoped for, the evidence of things not seen,’ as Scripture says—to a misdirected love for the transitory and worldly. This is how Dante ended up in the dark and savage wood. This is how I did, too. This is how many of us find ourselves there in the middle of the journey of our life. Dante’s pilgrimage, and the one we readers have taken with him, teaches us to see the world and ourselves as they really are and to cleanse through repentance and ascesis our own darkened vision through reordering the will. By learning to want for ourselves and for others what God wants, we become more like Him, and we come to see all things as He does."

Gabriel Garcia Marquez: A Second Opportunity on Earth

Gabriel Garcia Marquez has died. It is worth revisiting “The Solitude of Latin America,” Marquez’s Nobel Prize acceptance speech. The speech ends with these words: “On a day like today, my master William Faulkner said, ‘I decline to accept the end of man.’ I would fall unworthy of standing in this place that was his, if I were not fully aware that the colossal tragedy he refused to recognize thirty-two years ago is now, for the first time since the beginning of humanity, nothing more than a simple scientific possibility. Faced with this awesome reality that must have seemed a mere utopia through all of human time, we, the inventors of tales, who will believe anything, feel entitled to believe that it is not yet too late to engage in the creation of the opposite utopia. A new and sweeping utopia of life, where no one will be able to decide for others how they die, where love will prove true and happiness be possible, and where the races condemned to one hundred years of solitude will have, at last and forever, a second opportunity on earth.”

Is it Possible to Be a Jewish Intellectual?

In Haaretz (subscription required), sociologist Eva Illouz reprints her 2014 Andrea and Charles Bronfman Lecture in Israeli Studies at the University of Toronto. Illouz considers Gershom Scholem’s accusation that Hannah Arendt had no love for the Jewish people and her response, “How right you are that I have no such love, and for two reasons: First, I have never in my life ‘loved’ some nation or collective – not the German, French or American nation, or the working class, or whatever else might exist. The fact is that I love only my friends and am quite incapable of any other sort of love. Second, this kind of love for the Jews would seem suspect to me, since I am Jewish myself. I don’t love myself or anything I know that belongs to the substance of my being … [T]he magnificence of this people once lay in its belief in God – that is, in the way its trust and love of God far outweighed its fear of God. And now this people believes only in itself? In this sense I don’t love the Jews, nor do I ‘believe’ in them.” Illouz writes: “To better grasp what should strike us here, let me refer to another debate, one that had taken place just a few years earlier in France, where another intellectual’s position had also generated a storm. Upon receiving the Nobel Prize for Literature in Stockholm in 1957, Albert Camus was interviewed by an Arab student about his positions on the Algerian war. He famously answered, ‘People are now planting bombs in the tramways of Algiers. My mother might be on one of those tramways. If that is justice, then I prefer my mother.’ Camus’ statement provoked a ruckus in French intellectual circles. As Norman Podhoretz wrote, “When he declared that he chose his mother above justice, he was, as [Conor Cruise] O’Brien puts it, choosing ‘his own tribe’ against an abstract ideal of universal justice.
A greater heresy against the dogmas of the left is hard to imagine.” Indeed, since the Dreyfus affair, at the end of the 19th century, intellectuals’ intervention in the public sphere had been defined by their claim to universality, a position that remained unchanged throughout the 20th century.… I evoke here Camus’ example only to better highlight how the position of the contemporary Jewish intellectual differs from what we may call the position of the intellectual in Europe. What was anathema to the European intellectual – to defend one’s group and family against competing universal claims – is, in fact, what is routinely expected from the Jewish intellectual – by which I mean not only the intellectual of Jewish origins, but the one who engages in a dialogue with his/her community…. Arendt’s refusal to respond to the needs of her group and the fury her positions generated is only one of the many occurrences in a long list of hostile reactions by the organized Jewish community to critique, defined here as a sustained questioning of a group’s beliefs and practices. (For a superb discussion of these issues, see Idith Zertal’s 2005 book Israel’s Holocaust and the Politics of Nationhood.) In fact, over the last 30 years, one of the favorite exercises of various representatives of Jewish and Israeli communities has been to unmask the hidden anti-Zionist or anti-Jewish tenets of critique. I am not saying some of the critiques of Israel may not be motivated by anti-Semitism. I simply note that the suspicion of critique has become an elaborate cultural and intellectual genre in the Jewish world.”

From The Hannah Arendt Center Blog

This week on the blog, Lance Strate considers Arendt’s quotation, "The end of the old is not necessarily the beginning of the new." And in the Weekend Read, Roger Berkowitz looks at Timothy Shenk’s review of millennial Marxism and Thomas Piketty.

 

14 Apr 2014

Hiatus, Discontinuity, and Change


"The end of the old is not necessarily the beginning of the new."

Hannah Arendt, The Life of the Mind

This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.

Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.

Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified or ought to be signified by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg that it took form within.


So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:

We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.

Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, and in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the awareness on their part of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:

No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.

I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.

The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.

And while the invention of history, as a written narrative concerning the linear progression over time can be traced back to the ancient Israelites, and the story of the exodus, the story incorporates the idea of a hiatus in overlapping structures:

A1.  Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers

A2.  he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned

A3.  by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister

 

B1.  Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan

B2.  their descendants are enslaved, oppressed, and persecuted

B3.  Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert

 

C1.  the Israelites are freed from bondage and escape from Egypt

C2.  the revelation at Sinai fully establishes their covenant with God

C3.  after many trials, they return to the Promised Land

It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.

In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).

Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:

The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.

Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at that time, but also those not yet born, the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.

Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art, and the essay and novel in literature, yielded to Cubism and subsequent forms of modern art, and to stream of consciousness in writing.


Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.

The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.

There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.

Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing the patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.

McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.

The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.

Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.

The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Niles Eldredge and Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.

When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, what we need is an understanding of the concept of change, a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social, what Bateson referred to as metapatterns, the subject of further elaboration by biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored by John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.

To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during that period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah, and chief among its laws the Ten Commandments, which include the fourth commandment, the one presented in greatest detail: to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.

To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.

-Lance Strate

1Nov/13

Canard of Decline

The secret of American exceptionalism may very well be the uniquely American susceptibility to narratives of decline. From the American defeat in Vietnam and the Soviet launch of Sputnik to the quagmire in Afghanistan and the current financial crisis, naysayers proclaim the end of the American century. And yet the prophecies of decline are nearly always, in a uniquely American spirit, followed by calls for rejuvenation. Americans are neither pessimists nor optimists. Instead, they are darkened by despair and fired by hope.

Decline, writes Josef Joffe in a recent essay in The American Interest, “is as American as apple pie.” The tales of decline that populate American cultural myths have many morals, but one shared theme: renewal.

“Decline Time in America” is never just a disinterested tally of trends and numbers. It is not about truth, but about consequences—as in any morality tale. Declinism tells a story to shape belief and change behavior; it is a narrative that is impervious to empirical validation, whose purpose is to bring comforting coherence to the flow of events. The universal technique of mythic morality tales is dramatization and hyperbole. Since good news is no news, bad news is best in the marketplace of ideas. The winning vendor is not Pollyanna but Henny Penny, also known as Chicken Little, who always sees the sky falling. But why does alarmism work so well, be it on the pulpit or on the hustings—whatever the inconvenient facts?

Joffe, the editor of the German weekly Die Zeit, writes from the lofty perch of an all-knowing cultural critic. Declinism is, when looked at from above, little more than a marketing pitch:

Since biblical times, prophets have never gone to town on rosy oratory, and politicos only rarely. Fire and brimstone are usually the best USP, “unique selling proposition” in marketing-speak.

The origins of modern declinism, according to Joffe, are found in “the serial massacre that was World War I,” the rapacious carnage that revealed “the evil face of technology triumphant.” WWI deflated the Enlightenment optimism in reason and science, showing instead the destructive impact of those very same progressive ideals.

The knowledge that raised the Eiffel Tower also birthed the machine gun, allowing one man to mow down a hundred without having to slow down for reloading. Nineteenth-century chemistry revolutionized industry, churning out those blessings from petroleum to plastics and pharmacology that made the modern world. But the same labs also invented poison gas. The hand that delivered good also enabled evil. Worse, freedom’s march was not only stopped but reversed. Democracy was flattened by the utopia-seeking totalitarians of the 20th century. Their utopia was the universe of the gulag and the death camp. Their road to salvation led to a war that claimed 55 million lives and then to a Cold War that imperiled hundreds of millions more.

America, the land of progress in Joffe’s telling, now exists in a productive tension with the anti-scientific tale of the “death of progress.”

Technology and plenty, the critics of the Enlightenment argued, would not liberate the common man, but enslave him in the prison of “false consciousness” built by the ruling elites. The new despair of the former torchbearers of progress may well be the reason that declinism flourishes on both Left and Right. This new ideological kinship does not by itself explain any of the five waves of American declinism, but it has certainly broadened its appeal over time.

Joffe stands above both extremes of the declinism pendulum. Instead of embracing or rejecting the tale of decline, he names decline and its redemptive flipside the driving force of American exceptionalism. Myths of decline are necessary in order to fuel the exceptional calls for sacrifice, work, and innovation that have for centuries turned the tide of American elections and American culture.

[D]awn always follows doom—as when Kennedy called out in his Inaugural Address: “Let the word go forth that the torch has been passed to a new generation of Americans.” Gone was the Soviet bear who had grown to monstrous size in the 1950s. And so again twenty years later. At the end of Ronald Reagan’s first term, his fabled campaign commercial exulted: “It’s morning again in America. And under the leadership of President Reagan, our country is prouder and stronger and better.” In the fourth year of Barack Obama’s first term, America was “back”, and again on top. Collapse was yesterday; today is resurrection. This miraculous turnaround might explain why declinism usually blossoms at the end of an administration—and wilts quickly after the next victory.

Over and over, the handwriting on the wall that spelled decline was, in truth, “a call to arms that galvanized the nation.”

Behind this long history of nightmares of degeneration and dreams of rebirth is Joffe’s ultimate question: Are the current worries about the death of the American century simply the latest in the American cycle of gloom and glee? Or is it possible that the American dream is, finally, used up? In other words, since “at some point, everything comes to an end,” might this be the end for America? Might it be that, as many in Europe now argue, “the United States is a confused and fearful country in 2010”? Is it true that the US is a “hate-filled country” in unavoidable decline?

Joffe is skeptical. Here is one part of his answer:

Will they be proven right in the case of America? Not likely. For heuristic purposes, look at some numbers. At the pinnacle of British power (1870), the country’s GDP was separated from that of its rivals by mere percentages. The United States dwarfs the Rest, even China, by multiples—be it in terms of GDP, nuclear weapons, defense spending, projection forces, R&D outlays or patent applications. Seventeen of the world’s top universities are American; this is where tomorrow’s intellectual capital is being produced. America’s share of global GDP has held steady for forty years, while Europe’s, Japan’s and Russia’s have shrunk. And China’s miraculous growth is slipping, echoing the fates of the earlier Asian dragons (Japan, South Korea, Taiwan) that provided the economic model: high savings, low consumption, “exports first.” China is facing a disastrous demography; the United States, rejuvenated by steady immigration, will be the youngest country of the industrial world (after India).

In short, if America is to decline it will be because America refuses to stay true to its tradition of innovation and reinvention.

As convincing as Joffe is, the present danger that America’s current malaise will persist comes less from economics or from politics than from the extinguishing of the nation’s moral fire. And in this regard, essays such as Joffe’s are symptoms of the problem America faces. Joffe writes from above and specifically from the position of the social scientist. He looks down on America and American history and identifies trends. He cites figures. And he argues that in spite of the worry, all is generally ok. Inequality? Not to worry, it has been worse. Democratic sclerosis? Fret not; think back to the 1880s. Soul-destroying partisanship? Have you read the newspapers of the late 18th century? In short, our problems are nothing new under the sun. Keep it in perspective. There is painfully little urgency in such essays. Indeed, they trade above all in a defense of the status quo.

There is reason to worry though, and much to worry about. Joffe may himself have seen one such worry if he had lingered longer on an essay he cites briefly, but does not discuss. In 1954, Hannah Arendt published “Europe and America: Dream and Nightmare” in Commentary Magazine. In that essay—originally given as part of a series of talks at Princeton University on the relationship between Europe and America—she asked: “What image does Europe have of America?”

Her answer is that Europe has never seen America as an exotic land like the South Sea Islands. Instead, there are two conflicting images of America that matter for Europeans. Politically, America names the very European dream of political liberty. In this sense, America is less the new world than the embodiment of the old world, the land in which European dreams of equality and liberty are made manifest. The political nearness of Europe and America explains their kinship.

European anti-Americanism, however, is lodged in a second myth about America: the economic image of America as the land of plenty. This European image of America’s stupendous wealth may or may not be borne out in reality, but it is a fantasy that drives European opinion:

America, it is true, has been the “land of plenty” almost since the beginning of its history, and the relative well-being of all her inhabitants deeply impressed even early travelers. … It is also true that the feeling was always present that the difference between the two continents was greater than national differences in Europe itself even if the actual figures did not bear this out. Still, at some moment—presumably after America emerged from her long isolation and became once more a central preoccupation of Europe after the First World War—this difference between Europe and America changed its meaning and became qualitative instead of quantitative. It was no longer a question of better, but of altogether different conditions, of a nature which makes understanding well nigh impossible. Like an invisible but very real Chinese wall, the wealth of the United States separates it from all other countries of the globe, just as it separates the individual American tourist from the inhabitants of the countries he visits.

Arendt’s interest in this “Chinese wall” that separates Europe from America is that it lies behind the anti-Americanism of European liberals, even as it inspires the poor. “As a result,” of this myth, Arendt writes, “sympathy for America today can be found, generally speaking, among those people whom Europeans call “reactionary,” whereas an anti-American posture is one of the best ways to prove oneself a liberal.” The same can largely be said today.

The danger in such European anti-Americanism is not only that it will fire a European nationalism, but also that it will cast European nationalism as an ideological opposition to American wealth. “Anti-Americanism, its negative emptiness notwithstanding, threatens to become the content of a European movement.” In other words, European nationalism threatens to take on a negative ideological tone.

That Europe will understand itself primarily in opposition to America as a land of wealth impacts America too, insofar as European opposition hardens Americans in their own mythic sense of themselves as a land of unfettered economic freedom and unlimited wealth. European anti-Americanism thus fosters the kind of free market ideology so rampant in America today.

What is more, when Europe and America emphasize their ideological opposition on an economic level, they deemphasize their political kinship as lands of freedom.

Myths of American decline serve a purpose on both sides of the Atlantic.

In Europe, they help justify Europe’s social democratic welfare states, as well as their highly bureaucratized regulatory apparatus. In America, they underlie attacks on regulation and calls to limit and shrink government. These are all important issues that should be thought through and debated with an eye to reality. The danger is that European emancipation and American exceptionalism threaten to elevate ideology over reality, hardening positions that instead need to remain open to innovation.

Joffe’s essay on the Canard of Decline is a welcome spur to rethinking the gloom and the glee of our present moment. It is your weekend read.

-RB

23Sep/13

The False Culture of Utility

“Culture is being threatened when all worldly objects and things, produced by the present or the past, are treated as mere functions for the life process of society, as though they are there only to fulfill some need, and for this functionalization it is almost irrelevant whether the needs in question are of a high or a low order.”

--Hannah Arendt, “The Crisis in Culture”

Hannah Arendt defines the cultural as that which gives testimony to the past and in preserving the past helps constitute our common world. A cultural object embodies the human goal of achieving “immortality,” which as Arendt explains in The Human Condition is not the same as eternal life or the biological propagation of the species. Immortality concerns the life of a people and is ultimately political. It refers to the particular type of transcendence afforded by political action. In “The Crisis in Culture,” Arendt shows how culture has a political role insofar as it creates durable and lasting objects that contribute to the immortality of a people.

The danger Arendt confronts in “The Crisis in Culture” is that mass culture makes art disposable and thus threatens the political ability of cultural life to produce lasting and immortal objects.  The source of her worry is not an invasion of culture by the low and the base, but a sort of cannibalization of culture by itself.  The problem is that mass culture swallows culture and subsumes it under the rubric of need.  The immortal is degraded to a biological necessity, to be endlessly consumed and reproduced. Durable cultural objects that constitute a meaningful political world are thereby consumed, eroding the common world that is the place of politics.

Arendt’s point is first that mass culture—like all culture under the sway of society— is too often confused with status, self-fulfillment, or entertainment. In the name of status or entertainment, cultural achievements are stripped down and repackaged as something to be consumed in the life process.  She would argue that this happens every time Hamlet is made into a movie or the Iliad is condensed into a children’s edition. By making culture accessible for those who would use it to improve themselves, the mass-culture industry makes it less and less likely that we will ever confront the great works of our past in their most challenging form.  Eventually, the watering down of once immortal works can make it difficult or impossible to perceive the importance of culture and cultural education for humanity and our common world.

However, Arendt does not offer simply a banal critique of reality television as cultural fast food. We might recognize a more insidious form of the risks she describes in the new intellectualism that marks the politics, or anti-politics, of the tech milieu. What has been termed Silicon Valley’s anti-intellectualism should instead be understood as a forced colonization of the space potentially inhabited by the public intellectual.

The prophets of the tech world see themselves as fulfilling a social and political duty through enterprise.  They unselfconsciously describe their creations as sources of liberation, democracy, and revolution.  And yet they eschew politics. Their abnegation of overt political activity is comprehensible in that, for them, ‘politics’ is always already contained in the project of saving the world through technological progress.

We see such exemplars of technological cultural salvation all around us. Scholars and cultural figures are invited to lecture at the “campuses” of Apple and Google, and their ideas get digested into the business model or spit back out in the form of TED talks. Even Burning Man, originally a ‘counter-cultural’ annual desert festival with utopian pretensions, has been sucked into the vortex, such that Stanford Professor Fred Turner could give a PowerPoint lecture titled, “Burning Man at Google: A cultural infrastructure for new media production.” The abstract for his article in New Media & Society is even more suggestive: “…this article explores the ways in which Burning Man’s bohemian ethos supports new forms of production emerging in Silicon Valley and especially at Google. It shows how elements of the Burning Man world – including the building of a sociotechnical commons, participation in project-based artistic labor and the fusion of social and professional interaction – help to shape and legitimate the collaborative manufacturing processes driving the growth of Google and other firms.” Turner’s conclusion virtually replicates Arendt’s differentiation between nineteenth century philistinism and the omniphagic nature of mass culture:

In the 19th century, at the height of the industrial era, the celebration of art provided an occasion for the display of wealth. In the 21st century, under conditions of commons-based peer production, it has become an occasion for its [i.e., wealth’s] creation.

The instrumentalization of culture within polite society has given way to the digestion and reconstitution of culture in the form of gadgets meant to increase convenience.  Would-be cultural objects become rungs on the hamster wheel of life’s progress. Progress as the ultimate goal of technological cultural innovation is a vague concept because it is taken for granted due to the self-contained and self-enclosed nature of the industry.  Where it is defined, it is demonstrated through examples, such as the implementation of the smart parking meter or the use of cloud networking in order to better administer services to San Francisco’s homeless population.

In a recent New Yorker article on the tech revolutionaries, George Packer writes, “A favorite word in tech circles is ‘frictionless.’ It captures the pleasures of an app so beautifully designed that using it is intuitive, and it evokes a fantasy in which all inefficiencies, annoyances, and grievances have been smoothed out of existence—that is, an apolitical world.” Progress here is the increasingly efficient administration of life.

When tech does leave its insular environment and direct its energies outward, its engagements reflect both its solipsism and focus on utility, which for Arendt go together.  The Gates Foundation’s substantial investments in higher education impose the quantitatively verifiable standard of degree completion as the sole or main objective, which seems odd in itself, given Gates’ notoriety as a Harvard drop-out.  The efforts of the Foundation aim less at placing Shakespeare in the hands of every fast-food worker, and more towards redirecting all of cultural education toward the development of a cheap version of utilitarian aptitude.  Such tech intellectualism will ask, “What is the point of slaving over the so-called classics?” The claim is that the liberal arts vision of university education is inseparable from elitist designs, based on an exclusive definition of what ‘culture’ should be.

“What is the use?” is the wrong question, though, and it is tinged by the solipsistic mentality of a tech elite that dare not speak its name. The tech intellectual presents the culture of Silicon Valley as inherently egalitarian, despite the fact that capital gains in the sector bear a large burden of the blame for this country’s soaring rate of inequality. This false sense of equality fosters a naïve view of political and social issues. It also fuels tech’s hubristic desire to remake the world in its own image: Life is about frictionless success and efficient progress, and these can be realized via the technological fix. “It worked for us, what’s the matter with you?”

For Arendt, culture is not meant to be useful for employment or even the lofty purpose of self-cultivation; our relationship to culture nurtures our ability to make judgments.  Kant’s discussion of taste and “common sense” informs her notion of the faculty of judgment in art and politics.  In matters of taste, judging rests on the human ability to enlarge one’s mind and think with reference to an “anticipated communication with others” and “potential agreement.”  Common sense, as she uses it, “discloses to us the nature of the world insofar as it is a common world.”  Culture and politics are linked in that both can only exist in a world that is shared.  She writes:

Culture and politics, then, belong together because it is not knowledge or truth which is at stake, but rather judgment and decision, the judicious exchange of opinion about the sphere of public life and the common world, and the decision what manner of action is to be taken, as well as to how it is to look henceforth, what kind of things are to appear in it.

That culture and politics are about enacting judgments, rather than truth or technique for the advancement of biological life, is a point that is clearly missed by the tech intellectuals.  The establishment of utility as the sole goal of higher education represents only one section of a general lens through which the world appears only as a series of practical problems to be figured out.  In this paradoxical utopia of mass accessibility, insulation, and narrow-mindedness, applied knowledge threatens to occupy and pervert culture at the expense of political action and care for our common world.

-Jennifer Hudson

26Aug/13

Machine-man and man-machines in the last stage of the laboring society

“The last stage of the laboring society, the society of job holders, demands of its members a sheer automatic functioning, as though individual life had actually been submerged in the over-all life process of the species and the only active decision still required of the individual were to let go, so to speak, to abandon his individuality, the still individually sensed pain and trouble of living, and acquiesce in a dazed, ‘tranquilized’, functional type of behavior”.

--Hannah Arendt, The Human Condition

About fifty years ago, Hannah Arendt diagnosed the “last stage of the laboring society.” Human beings can only live as “job holders,” without access to the realm of freedom in the sense of the classical ideal of political action. For Arendt this state of affairs is the result of the development process of modernity. The ‘social,’ the life of the species, became the central interest of the public sphere. There is no margin for self-realization unless it falls within the limits of an adaptation to the needs of the collective life process. Even a passive freedom of “sensing pain and trouble of living” is no longer permitted. Human beings not only have to function automatically, they have to “bow with joy” to their condition. This ideological aspect of the contemporary conditio humana is perhaps the one that outrages Arendt the most: the anesthesia of the mind in modern society. Individuals have to “acquiesce in a dazed, ‘tranquilized’, functional type of behavior.”

Through her diagnosis Arendt addresses the development of the “machine-man” in the laboring society. Beneath the process of the modern liberation of individuality, which reaches a pinnacle in the Universal Declaration of Human Rights, the private sphere of the ancient household as a place of labor is extended to the whole of society. At the end of the day individuals have to conform to the needs of the life production process in a way that makes it impossible for them even to look after their rights. This is the age of the machine-man. “Functionality” becomes the grounding element of human behavior. Positivistic faith in progress represents its civil religion: if every aspect of society could be traced back to its proper functioning, there would be no limits to the perfection of life.

With this result, to speak with Max Weber, a specific idea achieved an overwhelming impact on societal transformation. Descartes’ separation of res cogitans and res extensa produced the idea of an animal-machine without a soul, which could be completely reduced to the functional needs of rationalistic world domination. Some hundred years later La Mettrie completed the reflection with the idea of the homme-machine. Without knowing its sources in cultural history, industrialization translated the idea radically into action: by being reduced to machine-men, individuals had to fulfill the needs of a mechanized production system. In order to face the anthropological consequences of the industrial development of modernity, Marx and Engels provided the plot for the political redemption of the machine-men: the only way to escape alienation is to attain the complete automation of the factory, and thus the substitution of job holders by intelligent machines. In 1921 Karel Čapek reversed this utopia into a dystopia. He coined the word “robot” for his theater piece “Rossum’s Universal Robots,” using the Slavic word robota, which traditionally means the work period (corvée) a serf had to give for his lord. By reviving the theme of the Jewish legend of the Golem, Čapek put the religious prohibition against recreating human beings at the forefront of the debate. There could be no liberation of machine-men by constructing man-machines without provoking a rebellion of the latter against their creators: this has been the subject of all science fiction literature and film about man-machines ever since.

A sociologically based intercultural survey of the current development of robotics shows that both the scientific utopia of creating man-machines and the public’s fears about their potential danger are present in the reflections of European and American engineers. Japanese roboticists, on the other hand, think that the introduction of man-machines into social interaction does not provoke any dystopian consequences. In an age of an increasing crisis of labor as the central category of modernity, technology research tries to develop substitutes for the missing animal laborans. Its leading idea is that an aging society needs support and care for humans who live long after they have ceased to be job holders. Instead of thinking about a different organization of society, decision-makers and stakeholders aim at substituting the absent young job holders with machines that have all the characteristics of functionality pointed out in Arendt’s diagnosis of the last stage of the laboring society’s members. The machine-man reproduces himself as a man-machine.

But the empirical surveys also show that the utopia stalls at the implementation of the man-machine. Technically, it is very hard to build robots that can effectively substitute for working humans in a real-world environment. Societally, there is a very low level of acceptance of man-machines, not least because of deep ethical concerns about human–robot interaction. Legal issues pose an even greater problem: neither the European, American, nor Japanese legal system provides proper legal instruments to allow robots to enter real-world settings.


This background strongly influences the further development of technological research. It is thus interesting to observe how developers worldwide are slowly abandoning the plan of realizing a substitute for the animal laborans as an autonomous entity. Following the design guidelines of “Ambient Assisted Living,” single parts of its body are disaggregated and distributed into the environment of the pensioned job holders. The man-machine survives only as an executor (Europe) or as a communication tool (Japan) for an overall ambient intelligence. Robots thereby become an interface for the “rule of nobody” of a superior control instance within the private life of the discharged job holders. No advent of autonomous robots is therefore to be expected, except perhaps as a result of undercover research in military robotics, which plans for their introduction in the extra-legal domain of war.

Machine-men hesitate to realize the utopia of man-machines. They seem to have abandoned the idea of making man-machines full members of the public sphere, as seen, e.g., in the film adaptation of Asimov’s I, Robot. This current stage of the laboring society poses the question of its critical assessment. It would be interesting to know what Hannah Arendt would have said about it.

-Gregor Fitzi

University of Potsdam, Germany

13May/130

Death and the Public Realm


"There is perhaps no clearer testimony to the loss of the public realm in the modern age than the almost complete loss of authentic concern with immortality, a loss somewhat overshadowed by the simultaneous loss of the metaphysical concern with eternity."

-Hannah Arendt, The Human Condition

Hannah Arendt was one of the first to remark upon the loss of the public realm, or what Jürgen Habermas called the public sphere.  As indicated by the terms realm and sphere, along with related phrases such as public space and public sector, we are referring here to a kind of environment, or as Arendt puts it, "the world itself, in so far as it is common to all of us and distinguished from our privately owned place in it" (p. 52). The private realm, the subject of a previous post of mine (The Deprivations of Privacy), is defined in relation (and opposition) to the public, but both are differentiated from the natural environment, according to Arendt.  Both are human artifacts, human inventions:

To live together in the world means essentially that a world of things is between those who have it in common, as a table is located between those who sit around it: the world, like every in-between, relates and separates men at the same time. (p. 52)

The table is an apt metaphor, as it has the connotation of civilized discourse, and a willingness to sit down for peaceful negotiation. Indeed, it is much more than a metaphor, as the table does create a shared space for individuals, a medium, if you will, around which they can communicate. But the table also keeps individuals separate from one another, establishing a buffer zone that allows for a sense of safety in the company of individuals who might otherwise be threatening.  Sitting at a table restricts the possibilities of sudden movement, providing some assurance that the person seated across from you will not suddenly spring at you with sword or knife in hand, especially if both parties keep their hands visible on the table top. No wonder, then, that as the practice of sitting around a table for a meal emerges in the Middle Ages, it becomes the focal point for what Norbert Elias refers to as the civilizing process.


The table is a medium, an in-between, as Arendt puts it, and each medium in its own way serves as a means by which individuals connect and relate to one another, and also are separated and kept apart from one another.  In Understanding Media, Marshall McLuhan expressed the same idea in saying that all media, meaning all technologies and human innovations, are extensions of some aspect of individuals, but at the same time are amputations.  As I have explained elsewhere, the medium that extends us into the world comes between us and the world, and in doing so becomes our world. Or as I like to put it, with apologies to McLuhan, the medium is the membrane.

The public realm then is a shared human environment, a media environment. As Arendt explains,

everything that appears in public can be seen and heard by everybody and has the widest possible publicity. For us, appearance—something that is being seen and heard by others as well as by ourselves—constitutes reality. (p. 50)

Paul Watzlawick has argued that our reality is constructed through our communication, rather than merely reflected or represented by our messages. And this means that our reality is shaped by our means of communication, our media.  It is through publicity that we create the public realm.  And for the public realm to exist, there must also be the possibility for some communication to take place privately, in a context where it cannot be seen and heard by everybody, where there are barriers to people's perception and their access to information, what Erving Goffman referred to as the back region.

The public realm is not a media environment we typically associate with tribal societies, where the distinction between public and private is, for the most part, non-existent.  Rather, it is strongly tied to the city as a human environment (and a medium of communication in its own right).  Lewis Mumford insightfully observed that cities are a type of container technology, indeed the container of containers, and what they contain includes great concentrations of population.  As settlements evolved into the first urban centers in the ancient world, they gave rise to the first true crowds and mobs, and also to audiences made up of people who do not necessarily know one another, or have strong social ties to each other.

These new kinds of audiences required a new form of communication:  public address.  They required new kinds of physical environments:  the agora, the forum, the marketplace.  And they required new forms of education:  the art of rhetoric.

The invention of writing is intimately bound up in all of these developments.  Without reasonably well-developed systems of notation, human societies were not able to handle the complexity of large populations. In tribal societies, as population increases, groups split up in order to keep their affairs manageable.  Writing, as a container for language, whose primary form is the spoken word, develops side by side with the city as container, and allows for the control and coordination of large populations and diverse activities.  And writing, in allowing language to be viewed and reviewed, made it possible to refine the art of public address, to study rhetoric and instruct others in the techniques of oratory, as did the Sophists in ancient Greece.  It is no accident that the introduction of the Greek alphabet was followed by the first forms of study, including rhetoric and grammar, and by the first forms of democracy.

Writing also has the peculiar effect of introducing the idea of the individual, of breaking people apart from their tribal, group identity. The ability to take one's thoughts, write them down, and observe them from the outside, made it possible to separate the knower from the known, as Eric Havelock put it, which also separated individuals from their traditions.


Written law, beginning with Hammurabi and Moses, took judicial matters out of the concrete realm of proverbs, parables, and reasoning by analogy, and opened the door to the view that everyone is equal, as an individual, before the law.  The fact that literacy also facilitated increasingly abstract modes of thought was likewise of great importance, but the simple act of reading and writing alone, in isolation, had much to do with the genesis of individualism.

The origin of the public realm is closely tied to the medium of the written word, in highly significant but limited ways. Script gave us the civic public, rooted in rhetoric, but it was the printing revolution in early modern Europe that made the public into a national, mass phenomenon. As McLuhan noted in his preface to The Gutenberg Galaxy,

Printing from movable types created a quite unexpected new environment—it created the PUBLIC.  Manuscript technology did not have the intensity or power of extension necessary to create publics on a national scale.  What we have called "nations" in recent centuries did not, and could not, precede the advent of Gutenberg technology any more than they can survive the advent of electric circuitry with its power of totally involving all people in all other people. (p. ii)

A reading public is quite different from a listening public, as readers are separated in time and space from one another, and this form of mediation also had the effect of making individualism a ruling ideology.  And yes, Habermas did place a great deal of emphasis on people gathering in public places like coffee shops to discuss and debate the issues of the day, but they did so based on what they read in print media such as newspapers, pamphlets, and the like. Moreover, as historian Elizabeth Eisenstein explained in The Printing Press as an Agent of Change, the printers' shops were the first places that people gathered for such intellectual exchanges, long before they gravitated to the coffee shops and taverns.  The point is that the content of these discussions was based on typographic media, the mindset of the discussants was shaped by print literacy, and both were situated within the print media environment.  Within such an environment, a population of individuals could gain common access to ideas and opinions through print media, which in turn could provide the basis for political action; in this way publics came into being.

Publics were formed by publicity, and publicity was achieved through publication.  As much as books, pamphlets, catalogs, calendars, periodicals, and all manner of ephemera were the products of the printing press, so too, as McLuhan observed, was the reading public.  Print technology gave us our first form of mass communication, characterized by wide and relatively rapid dissemination of multiple, identical copies of the same text, a democratizing process, as Walter Benjamin observed.

But printing also created a new sense of immortality, of the author's words living on through the ages, and of posterity as the ultimate judge.  Elizabeth Eisenstein explains that the very multiplication of texts, however perishable any single copy might be, established what she referred to as the preservative powers of print far beyond anything previously known.  This idea of immortality goes hand in hand with the rise of a new kind of historical consciousness, which also emerged out of print culture.

Eternity, by way of contrast, is situated outside of historical time, within what Mircea Eliade calls sacred time. It is a time that looks back towards the moment of creation or a golden age. Through ritual, we can step out of the profane time of everyday life, and in enacting the myth of eternal return enter the sacred time that intersects with all of history—in this sense always a part of it and yet at the same time apart from it.

Traditional cultures look backward to creation or the golden age as a time superior to the present, a time they strive to reclaim.  Oral cultures are particularly associated with a cyclical understanding of time.  The invention of writing makes possible first chronology, then historical narrative, and this opens the door to the idea of progress. The shift begins with the biblical narrative in ancient Israel, and the secular history writing of ancient Greece and Rome.  But a complete reversal in orientation from looking to the past as the ideal towards anticipating the future as a continual process of getting better, perhaps culminating in utopia, is closely associated with the printing revolution and the modern world it gave rise to.  This is, in turn, superseded by a present-centered orientation brought on by the electronic media, as I have discussed in On the Binding Biases of Time.  The instantaneity and immediacy of electronic communication not only moves our focus from history and futurity to the present moment, but it translates the remembered past and the anticipated future into the present tense, the now of the computer program and digital simulation.

Arendt's insight that the loss of a concern with immortality is intimately bound up with the loss of the public realm implies a common denominator, specifically the electronic media environment that has superseded the typographic media environment. If literacy and print go hand in hand with citizenship, civics, and the public realm, what happens when these media are overshadowed by electronic technologies, from the telegraph and wireless to radio and television now to the internet and mobile technology?


We still use the word public of course, but we have seen a great blurring of the boundaries between public and private, the continuing erosion of privacy but also a loss of the expectation that dress, behavior, and communication ought to be different when we are in a public place, and that there are rules and obligations that go along with being a part of a public.  And we have experienced a loss of our longstanding sense of individualism, replaced by an emphasis on personalization; a loss of citizenship based on equality, replaced by group identity based on grievance and all manner of neo-tribalism; a loss of traditional notions of character and personal integrity, replaced by various forms of identity construction via online profiles, avatars, and the like; the loss of separate public and private selves, replaced by affiliations with different lifestyles and media preferences.

As consumers, members of audiences, and participants in the online world, we live for the moment, and we do so with disastrous results, economically, ethically, and ecologically.  Arendt suggests that, "under modern conditions, it is indeed so unlikely that anybody should earnestly aspire to an earthly immortality that we probably are justified in thinking it is nothing but vanity" (p. 56).  Along the same lines, Daniel Boorstin in The Image argued that the hero, characterized by greatness, has been replaced by the celebrity, characterized by publicity, famous for appearing on the media rather than for any accomplishments of historical significance.  Heroes were immortal. Celebrities become famous seemingly overnight, and then just as quickly fade from collective consciousness. Heroes, as Boorstin describes them, were known through print media; celebrities make up the content of our audiovisual and electronic media.  These are the role models that people pattern their lives after.

Arendt explains that a public realm "cannot be erected for one generation and planned for the living only; it must transcend the life span of mortal men" (p. 55). And she goes on to explain,

It is the publicity of the public realm which can absorb and make shine through the centuries whatever men may want to save from the natural ruin of time. Through many ages before us—but now not any more—men entered the public realm because they wanted something of their own or something they had in common with others to be more permanent than their earthly lives. (p. 55)

Without this concern with a public realm that extends across history from the past into the future, what becomes of political action based on the common good, rather than private interests?

With the loss of any concern with immortality, have we witnessed not merely the erosion, but the irrevocable death of the public realm?

And perhaps most importantly of all, without the existence of a public, can there still exist, in something more than name only, a republic?

-Lance Strate

6May/132

Amor Mundi 5/5/13



Muting the Words for the Book's Sake

On the occasion of the publication of All That Is, James Salter's latest novel, the author is interviewed by Jonathan Lee. Lee notes that Salter seems to have toned down his sentences for the new book, which, it turns out, was a deliberate stylistic choice. Salter elaborates: "I suppose the truth is I became a little self-conscious about people telling me how much they loved my sentences. They'd come up and say, 'You know what, I've memorized lines from Light Years.' At book signings you'd see them with the corners of pages turned down, particular pages they'd loved and sentences they'd underlined. It's flattering, but it seemed to me that this love of sentences was in some sense getting in the way of the book itself."

The Inevitable, Unstoppable, and Coming Utopia

David Rieff writes in Foreign Policy about the unbelievable optimism of techno-utopianism. Rieff is biting and also thoughtful as he marshals enormous resources to show how uniform and repetitive the claims about our coming perfection are. "To me, though, what is most striking about the claims made by techno-utopians (though most, including Kurzweil and Zuckerman, reject the label) is the way assertions about the inevitability of unstoppable, exponential technological progress are combined with claims that human beings can, for the first time in history, take their fate into their own hands -- or even defy mortality itself. As Morozov remarks tartly, 'Silicon Valley is guilty of many sins, but lack of ambition is not one of them.'"

Think Like a Machine

Nicholas Carr worries about the effect our growing use of machines has on how we think about thinking:  "I think we begin to believe that thinking is always just a matter of a kind of rapid problem-solving and exchanging information in a very utilitarian conception of how we should use our mind. And what gets devalued is those kind of more contemplative, more solitary modes of thought that in the past anyway, were considered central to the experience of life, to the life of the mind certainly, and even to our social lives."

Debating Drones

Over at Lawfare, Benjamin Wittes writes about his experience debating Jeremy Waldron about drones at the Oxford Union. Wittes summarizes the sides: "Our side interpreted the resolution as a debate over the propriety of using drones in warfare - that is, as asking whether the use of drones is ethical and effective relative to alternative weapons systems, given that one has decided to employ military force. This is actually an easy question, in my opinion, since drones clearly enable more discriminating and deliberative targeting than do alternative weapon systems. Our opponents, by contrast, saw the resolution as implicating the wider question of whether the United States should be resorting to force at all in countries like Pakistan and Yemen. In other words, they saw the question not merely as one of choice of weapon but as about whether the particular weapon enables military actions the United States would not otherwise take and of which one should disapprove either on ethical grounds, as counterproductive strategically, or both."

Behind the Book

Claire Barliant examines the book bloc, a D.I.Y. defensive shield utilized during and after the Occupy Wall Street protests. Barliant finds resonances between the blocs and the declining states of both the book in general and of higher education; several of them, which had been on exhibit at Interference Archive in Gowanus, were supposed to appear at the May Day protest of Cooper Union's decision to start charging its students fees in 2014.

Featured Upcoming Event

The Official US Opening of the biopic Hannah Arendt in NYC

May 29, 2013 at Film Forum, 209 W. Houston St., NYC at 7:45 PM
Film followed by a discussion with the director, Margarethe von Trotta; the screenwriter, Pam Katz; and Barbara Sukowa and Janet McTeer (who play Hannah Arendt and Mary McCarthy).

 Buy tickets and learn more here.

From the Hannah Arendt Center Blog

The Arendt Center hosted the Hudson Valley premiere of Margarethe von Trotta's new movie Hannah Arendt, which Natan Sznaider reviewed. Lyndsey Stonebridge explored the role of Shakespeare's Richard III in Arendt's thinking on thinking. And Roger Berkowitz looked at the brewing feud between the faculty and the MOOCs.

4Dec/122

The Irony of Sincerity

A few weeks ago, Christy Wampole, a professor of French at Princeton, took to the New York Times to point to what she sees as a pandemic of irony, the symptom of a malignant hipster culture which has metastasized, spreading out from college campuses and hip neighborhoods and into the population at large. Last week, author R. Jay Magill responded to Wampole, noting that the professor was a very late entry into an analysis of irony that stretches back to the last gasps of the 20th century, and that even that discourse fits into a much longer conversation about sincerity and irony that has been going on at least since Diogenes.

Of course, this wasn’t Magill’s first visit to this particular arena; his own entry, entitled Sincerity: How a Moral Ideal Born Five Hundred Years Ago Inspired Religious Wars, Modern Art, Hipster Chic, and the Curious Notion That We All Have Something to Say (No Matter How Dull), came out in July. Magill very effectively recapitulates the main point of his book in his article for the Atlantic, but if you were to read this new summary alone, you would deny yourself some of the pleasures of Magill’s research and prose, while also sparing yourself some of his less convincing arguments, arguments which, incidentally, supply the thrust of his recent article.

The most interesting chapters of Magill’s book deal with the early history of the rise of sincerity, which he traces back to the Reformation. In Magill’s telling, the word “sincere” enters the record of English in 1533, when an English reformer named John Frith writes, to Sir Thomas More, that John Wycliffe “had lived ‘a very sincere life.’” Before that use, in its origin in Latin and French, the word “sincere” had only been used to describe objects and, now, Frith was using it not only for the first time in English but also to describe a particular individual as unusually true and pure to his self, set in opposition to the various hypocrisies that had taken root within the Catholic Church. Magill sums this up quite elegantly: “to be sincere” he writes “was to be reformed.”

Now, this would have been revolutionary enough, since it suggested that a relationship with God required internal confirmation rather than external acclamation—in the words of St. Paul, a fidelity to the spirit of the law and not just the letter. And yet reformed sincerity was not simply a return to the Gospel. In order to be true to one’s self, there must be a self to accord with, an internal to look towards. Indeed, Magill’s history of the idea of sincerity succeeds when it describes the development of the self, and, in particular, that development as variably determined by the internal or the external.

Image by Shirin Rezaee

It gets more complicated, however, or perhaps more interesting, when Magill turns towards deceptive presentations of the self, that is, when he begins to talk about insincerity. He begins this conversation with Montaigne, who “comes to sense a definite split between his public and private selves and is the first author obsessed with portraying himself as he really is.” The most interesting appearance of this conversation is an excellent chapter on Jean-Jacques Rousseau, who suggested that people should aspire to self-sameness, should do their best to “reconcile” themselves to themselves, a demand for authenticity that would come to be fully expressed in Immanuel Kant’s moral law, the command that I must set myself as a law for myself.

Sincerity, the moral ideal first put forth by John Frith, started as the Reformation’s response to the inability of the Catholic Church to enact that particular principle, in other words, its hypocrisy. This follows for each of the movements that Magill writes about, each responding to the hypocrisy of their own moment in a specific way. On this matter he has a very good teacher, Hannah Arendt, an inheritor of Kant, who was himself a reader of Rousseau. Arendt writes, in Crisis of the Republic, what might serve as a good summation of one of Magill’s more convincing arguments: “if we inquire historically into the causes likely to transform engagés into enragés, it is not injustice that ranks first, but hypocrisy.”

Still, while what makes the sincerity of Frith (who was burned at the stake) or Wycliffe (whose body was exhumed a half century after his death so that it, too, could be burned) compelling is the turn inwards, it is Rousseau’s substitution of the turn back for that turn inward that appears to interest Magill, who decries “the Enlightenment understanding of the world” that “would entirely dominate the West, relegating Rousseau to that breed of reactionary artistic and political minds who stood against the progress of technology, commerce, and modernization and pined for utopia.”

The whole point is moot; Rousseau was himself a hypocrite, often either unable or unwilling to enact the principles he set out in his writings. As Magill moves forward, though, it becomes clear that he values the turn back as a manifestation of sincerity, as a sort of honest self-expression. The last few hundred years in the development of sincerity, it seems, are finding new iterations of the past in the self. He writes that the Romantics, a group he seems to favor as more sincere than most, “harbored a desire to escape forward-moving, rational civilization by worshipping nature, emotion, love, the nostalgic past, the bucolic idyll, violence, the grotesque, the mystical, the outcast and, failing these, suicide.” In turn, in his last chapter, Magill writes that hipster culture serves a vital cultural purpose: its “sincere remembrance of things past, however commodified or cheesy or kitschy or campy or embarrassing, remains real and small and beautiful because otherwise these old things are about to be discarded by a culture that bulldozes content once it has its economic utility.”

The hipster, for Magill, is not the cold affectation of an unculture, as Wampole wants to claim, but is instead the inheritor “of the entire history of the Protestant-Romantic-rebellious ethos that has aimed for five hundred years to jam a stick into the endlessly turning spokes of time, culture and consumption and yell, ‘Stop! I want to get off!’”

There’s the rub. What Magill offers doesn’t necessarily strike me as a move towards sincerity, but it is definitely a nod to nostalgia. Consider how he recapitulates his argument in the article:

One need really only look at what counts as inventive new music, film, or art. Much of it is stripped down, bare, devoid of over-production, or aware of its production—that is, an irony that produces sincerity. Sure, pop music and Jeff Koons alike retain huge pull (read: $$$), but lately there has been a return to artistic and musical genres that existed prior to the irony-debunking of 9/11: early punk, disco, rap, New Wave—with a winking nod to sparse Casio keyboard sounds, drum machines, naïve drawing, fake digital-look drawings, and jangly, Clash-like guitars. Bands like Arcade Fire, Metric, Scissor Sisters, CSS, Chairlift, and the Temper Trap all go in for heavy nostalgia and an acknowledgement of a less self-conscious, more D.I.Y. time in music.

Here, Magill is very selectively parsing the recent history of “indie music,” ignoring a particularly striking embrace of artificial pop music that happened alongside the rise of the “sincere” genres, like new folk, that he favors. There’s no reason to assume that Jeff Koons’s blown-up balloon animals or Andy Warhol’s Brillo Boxes are any less sincere than the Scissor Sisters’ camp disco, just as there is no reason to assume that a desire to return to nature is any less sincere than the move into the city. Although Magill makes a good argument for the hipster’s cultural purpose, that purpose is not itself evidence that the hipster is expressing what’s truly inside himself, just as there’s no way for you to be sure that I am sincerely expressing my feelings about Sincerity. Magill, ultimately, makes the same mistake as Wampole, in that he judges with no evidence; the only person you can accurately identify as sincere is yourself.

-Josh Kopin