“The inner I: That I of reflection is the self, a reflection of the appearing human, so mortal, finite, growing old, capable of change, etc. On the other hand, the I of apperception, the thinking I, which does not change and is timeless. (Kafka Parable)”
—Hannah Arendt, Denktagebuch, February 1966
In an age overtaken by the reach of globalization and the virtual expanse of the Internet, Arendt’s notes in her Denktagebuch on a seemingly obscure technical question about the activity of thought in Kant gain new relevance, for they differentiate modes of thinking in their depth and in their relation to time. Her reference to Kafka, and the form of the entry itself, push her profound temporal ideas in the direction of narrative fiction.
"Thinking in its non-cognitive, non-specialized sense as a natural need of human life, the actualization of the difference given in consciousness, is not a prerogative of the few but an everpresent faculty of everybody; by the same token, inability to think is not the “prerogative” of those many who lack brain power but the everpresent possibility for everybody—scientists, scholars, and other specialists in mental enterprises not excluded—to shun that intercourse with oneself whose possibility and importance Socrates first discovered."
—Hannah Arendt, “Thinking and Moral Considerations: A Lecture” (1971)
Published eight years after Eichmann in Jerusalem, “Thinking and Moral Considerations” is Arendt’s elaboration of her argument in that book that Adolf Eichmann’s criminal role in the Holocaust did not originate from any “base motives” or even from any motives at all, but from his “thoughtlessness” or “inability to think.” If, she asks, Eichmann’s crimes, which he committed over the course of years, resulted from the fact that he never paused to think, what exactly does it mean to think, and what is the relation between thinking and morality?
In the above quote, which appears on the penultimate page of the lecture, Arendt defines thinking—or the kind of thinking that she argues is necessary for morality—as “the actualization of the difference given in consciousness,” as “that intercourse with oneself whose possibility and importance Socrates first discovered.” She describes this “non-cognitive, non-specialized” kind of thinking both as “a natural need of human life” and as “an everpresent faculty of everybody.” By contrast, she defines “inability to think” as the everpresent possibility for everybody to shun thinking.
We might wonder at this point why Arendt does not simply speak of an “ability not to think,” an ability to (actively) shun thinking, rather than an “inability to think.” Is this because she wants to maintain a hierarchy between something that is natural and human (thinking) and something that is unnatural and inhuman (not thinking)? What would be the justification for such a hierarchy? Or does she want to suggest that Eichmann has become unable to think (through barbarous “nurture”), losing touch with his (nevertheless everpresent) faculty of thinking, which everybody has from birth (“nature”) or from the moment they learn to speak? Thinking and language are intrinsically connected from the first page of Arendt’s lecture, where the primary evidence of Eichmann’s inability to think is that he speaks in clichés. (Also, the lecture is dedicated to a poet, W.H. Auden.) Finally, how does Arendt’s description of thinking as a “natural need of human life” relate to her suggestion that Socrates did not merely discover the importance but the very possibility of thinking?
Arendt casts Socrates as “a model, (…) an example that, unlike the ‘professional’ thinkers, could be representative for our ‘everybody,’ (…) a man who counted himself neither among the many nor among the few (…).” She takes Socrates not as “a personified abstraction with some allegorical meaning ascribed to it,” but as an “ideal type” who “was chosen out of the crowd of living beings, in the past or the present, because he possessed a representative significance in reality which only needed some purification in order to reveal its full meaning.” What, then, is this representative significance?
Arendt bases her conception of thinking and its relation to morality primarily on two famous propositions that Socrates puts forward in the Gorgias: “It is better to be wronged than to do wrong,” and “It would be better for me that my lyre or a chorus I directed should be out of tune and loud with discord, and that multitudes of men should disagree with me rather than that I, being one, should be out of harmony with myself and contradict me” (Arendt’s emphases). According to Arendt, these propositions are not primarily “cogitations about morality” but “insights of experience,” of the experience of the process of thinking. Arendt claims that Socrates means by the first proposition that it is better to be wronged than to do wrong if one is thinking, because thinking is carrying on a dialogue with oneself, and that dialogue presupposes some friendship between its partners. You would not want to befriend and enter into dialogue with someone who does wrong; and since Socrates holds that the unexamined life is not worth living, doing wrong leads to a life that is not worth living, because examining it in thinking is no longer possible.
Arendt argues that conscience is a “by-product” of consciousness, of the actualization of the difference of me and myself in thinking, because: “What makes a man fear his conscience is the anticipation of the presence of a witness who awaits him only if and when he goes home” (Arendt’s emphasis). However, this formulation suggests that there is no reason to fear your conscience if you never go “home,” that is, if you never engage in the activity of thinking, which, according to Arendt, was precisely Eichmann’s problem. What, then, determines whether someone uses her faculty of thinking or realizes the everpresent possibility of not thinking?
Arendt’s lecture does not contain a strong answer to this question. But although the relation between phenomenological description and normative argument remains somewhat unclear, the lecture does seem to contain a defense of thinking and a “demand” that everybody think, that everybody aspire to some extent to the ideal type represented by Socrates, because only thinking can provide an antidote to the “banality of evil.” Arendt acknowledges that thinking can lead to license, cynicism, and nihilism through the relativizing of existing values, because “all critical examinations must go through a stage of at least hypothetically negating accepted opinions and ‘values’ by finding out their implications and tacit assumptions.” However, Arendt’s anti-elitist suggestion is that the problem of nihilism is never that too many people think or that people think too much, but rather that people do not think enough.
Yet Arendt does not tell us what would promote thinking. She does not propose, for instance, to generalize the teaching of thinking through educational institutions, the way that Adorno proposed to create “mobile educational groups” of volunteers to teach “critical (…) self-reflection” to everybody, in his 1966 radio talk, “Education After Auschwitz.” A Habermasian model where people become critical through participation in democratic politics is unavailable for Arendt given her strong opposition of thinking to politics, which belongs to the realm of action. What Arendt does tell us is what is conducive to actualizing the everpresent possibility of not thinking: “(…) general rules which can be taught and learned until they grow into habits that can be replaced by other habits and rules,” the way that Eichmann, as Arendt argues in Eichmann in Jerusalem, simply substituted the duty to do the Führer’s will for Kant’s categorical imperative.
"The end of the old is not necessarily the beginning of the new."
—Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken-and-egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view of what is signified, or ought to be signified, by the end of the old, meaning the end of the eggshell, insofar as you can't make a chicken without it first breaking out of the egg in which it took form.
So, whether you take the chicken's point of view or adopt the perspective of the omelet, looking backwards and reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative concerned with linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story itself incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. He is sold into slavery by them, winds up in Egypt as a slave, and then is falsely accused and imprisoned
A3. By virtue of his ability to interpret dreams, he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. Their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. The Israelites are freed from bondage and escape from Egypt
C2. The revelation at Sinai fully establishes their covenant with God
C3. After many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because Campbell claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm and experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who, pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead, in an eternal return, to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not only those alive at the time but also the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book, The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor, drawn from subatomic physics, for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art and the essay and novel in literature gave way to Cubism and subsequent forms of modern art, and to stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as consisting entirely of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing patterns is the first step toward escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of the very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Niles Eldredge and Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically (the latter underlined by the recent news concerning the United Nations' latest report on global warming), what we need is an understanding of the concept of change: a way to study the patterns of change, patterns that exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social, what Bateson referred to as metapatterns, the subject of further elaboration by the biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored with John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and, more importantly, a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it is no accident that during the period of initiation in the story of the exodus there is the revelation at Sinai and the gift of divine law, the Torah, chief among whose laws are the Ten Commandments, which include the fourth commandment, the one presented in greatest detail: to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, the jubilee year. Whether it's a Sabbath mandated by religious observance or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
“The shift from the ‘why’ and ‘what’ to the ‘how’ implies that the actual objects of knowledge can no longer be things or eternal motions but must be processes, and that the object of science is no longer nature or the universe but the history, the story of the coming into being, of nature or life or the universe....Nature, because it could be known only in processes which human ingenuity, the ingeniousness of homo faber, could repeat and remake in the experiment, became a process, and all particular natural things derived their significance and meaning solely from their function in the over-all process. In the place of the concept of Being we now find the concept of Process. And whereas it is in the nature of Being to appear and thus disclose itself, it is in the nature of Process to remain invisible, to be something whose existence can only be inferred from the presence of certain phenomena.”
—Hannah Arendt, The Human Condition
Bookending Arendt’s consideration of the human condition “from the vantage point of our newest experiences and our most recent fears” is her invocation of several “events,” which she took to be emblematic of the modern world launched by the atomic explosions of the 1940s and of the threshold of the modern age that preceded it by several centuries. The event she invokes in the opening pages is the launch of Sputnik in 1957; its companion events are named in the last chapter of the book—the discovery of America, the Reformation, and the invention of the telescope and the development of a new science.
Not once mentioned in The Human Condition, but, as Mary Dietz argued so persuasively in her Turning Operations, palpably present as a “felt absence,” is the event of the Shoah, the “hellish experiment” of the SS concentration camps, which is memorialized today, Yom HaShoah. Reading Arendt’s commentaries on the discovery of the Archimedean point and its application in modern science with the palpably present but textually absent event of the Holocaust in mind sheds new light on the significance of her cautionary tale about the worrying implications of the new techno-science of algorithms and quantum physics and its understanding of nature produced through the experiment.
What happens, she seems to be asking, when the meaning of all “particular things” derives solely from “their function in the over-all process”? If nature in all of its aspects is understood as the inter- (or intra-) related aspects of the overall life process of the universe, does then human existence, as part of nature, become merely one part of that larger process, differing perhaps in degree, but not kind, from any other part?
Recently, “new materialist” philosophers have lauded this so-called “posthumanist” conceptualization of existence, arguing that the anthropocentrism anchoring earlier modern philosophies—Arendt implicitly placed among them?—arbitrarily separates humans from the rest of nature and positions them as masters in charge of the world (universe). By contrast, a diverse range of thinkers such as Jane Bennett, Rosi Braidotti, William Connolly, Diana Coole, and Cary Wolfe have drawn on a variety of philosophical and scientific traditions to re-appropriate and “post-modernize” some form of vitalism. The result is a reformulation of an ontology of process—what Connolly calls “a world of becoming”—as the most accurate way to understand matter’s dynamic and eternal self-unfolding. Consequently, it also entails transforming agency from a human capacity of “the will” with its related intentions to a theory of agency of “multiple degrees and sites...flowing from simple natural processes, to human beings and collective social assemblages” with each level and site containing “traces and remnants from the levels from which it evolved,” which “affect [agency’s] operation” (Connolly, A World of Becoming, p. 22, emphasis added). The advantage of a “philosophy/faith of radical immanence or immanent realism,” Connolly argues, is its ability to engage the “human predicament”: “how to negotiate life, without hubris or existential resentment, in a world that is neither providential nor susceptible to consummate mastery. We must explore how to invest existential affirmation in such a world, even as we strive to fend off its worst dangers.”
An implicit ethic of aiming to take better care of the world, “to fold a spirit of presumptive generosity for the diversity of life into your conduct” by not becoming too enamored with human agency, resides in this philosophy/faith. One can discern similar ethical concerns in Jane Bennett’s Vibrant Matter, in the entanglements she explores between human and non-human materiality—a “heterogeneous monism of vibrant bodies.” As she writes, “It seems necessary and impossible to rewrite the default grammar of agency, a grammar that assigns activity to people and passivity to things.” Conceptualizing nature as “an active becoming, a creative not-quite-human force capable of producing the new,” Bennett affirms a “vital materiality [that] congeals into bodies, bodies that seek to persevere or prolong their run” (p. 118, emphasis in the original), where “bodies” connotes all forms of matter. And she contends that this vital materialism can “enhance the prospects for a more sustainability-oriented public.” Yet, without some normative criteria for discerning the ways this new materialism can work toward “sustainability,” it is by no means obvious how either a declaration of faith in the “radical character of the (fractious) kinship between the human and the non-human” or a greater “attentiveness to the indispensable foreignness that we are” would lead to a change in political direction toward more gratitude and away from more destructive patterns of production and consumption. The recognition of our vulnerability could just as easily lead to renewed efforts to truncate or even eradicate the “foreignness” within.
Nonetheless, although these and other accounts call for a reconceptualization of concepts of agency and of causality, none pushes as far toward a productivist/performative account of matter and meaning as does Karen Barad’s theory of “agential realism.” Drawing out the implications of Niels Bohr’s quantum mechanics, Barad develops a theory of how “subjects” and “objects” are produced as apparently separable entities by “specific material configurings of the world” which enact “boundaries, properties, and meanings.” And, in her conceptualization, “meaning is not a human-based notion; rather meaning is an ongoing performance of the world in its differential intelligibility...Intelligibility is not an inherent characteristic of humans but a feature of the world in its differential becoming. The world articulates itself differently...[H]uman concepts or experimental practices are not foundational to the nature of phenomena.” The world is immanently real and matter immanently materializes.
At first glance, this posthumanist understanding of reality seems consistent with Arendt’s own critique of Cartesian dualism and Newtonian physics and her understanding of the implicitly conditioned nature of human existence. “Men are conditioned beings because everything they come into contact with turns immediately into a condition of their existence. The world in which the vita activa spends itself consists of things produced by human activities; but the things that owe their existence exclusively to men nevertheless constantly condition their human makers.” Nonetheless, there is a profound difference between them. For Barad, “world” is not Arendt’s humanly built habitat, the domain of homo faber (which does not necessarily entail mastery of nature, but always involves a certain amount of violence done to nature, even to the point of “degrading nature and the world into mere means, robbing both of their independent dignity” [H.C., p. 156, emphasis added]). “World” is matter, the physical, ever-changing reality of an inherently active, “larger material configuration of the world and its ongoing open-ended articulation.” Or is it?
Since this world is made demonstrably real or determinate only through the design of the right experiment to measure the effects of, or marks on, bodies, or “measuring agencies” (such as a photographic plate) made or produced by “measured objects” (such as electrons), the physical nature of this reality becomes an effect of the experiment itself. Despite the fact that Barad insists that “phenomena do not require cognizing minds for their existence” and that technoscientific practices merely manifest “an expression of the objective existence of particular material phenomena” (p. 361), the importance of the well-crafted scientific experiment to establishing the fact of matter looms large.
Why worry about the experiment as the basis for determining the nature of nature, including so-called “human nature”? For Arendt, the answer was clear: “The world of the experiment seems always capable of becoming a man-made reality, and this, while it may increase man’s power of making and acting, even of creating a world, far beyond what any previous age dared imagine...unfortunately puts man back once more—and now even more forcefully—into the prison of his own mind, into the limitations of patterns he himself has created...[A] universe construed according to the behavior of nature in the experiment and in accordance with the very principles which man can translate technically into a working reality lacks all possible representation...With the disappearance of the sensually given world, the transcendent world disappears as well, and with it the possibility of transcending the material world in concept and thought.”
The transcendence of representationalism does not trouble Barad, who sees “representation” as a process of reflection or mirroring hopelessly entangled with an outmoded “geometrical optics of externality.” But for Arendt, appearance matters, and not in the sense that a subject discloses some inner core of being through her speaking and doing, but in the sense that what is given to the senses of perception—and not just to the sense of vision—is the basis for constructing a world in common. The loss of this “sensually given world” found its monstrous enactment in the world of the extermination camps, which Arendt saw as “special laboratories to carry through its experiment in total domination.”
If there is a residual humanism in Arendt’s theorizing it is not the simplistic anthropocentrism, which takes “man as the measure of all things,” a position she implicitly rejects, especially in her critique of instrumentalism. Rather, she insists that “the modes of human cognition [science among them] applicable to things with ‘natural’ qualities, including ourselves to the limited extent that we are specimens of the most highly developed species of organic life, fail us when we raise the question: And who are we?” (H.C., p. 11, emphasis in the original) And then there is the question of responsibility.
We may be unable to control the effects of the actions we set in motion, or, in Barad’s words, “the various ontological entanglements that materiality entails.”
But no undifferentiated assignation of agency to matter, or material sedimentations of the past “ingrained in the body’s becoming,” can release us humans from the differential burden of consciousness and memory that is attached to something we call the practice of judgment. And no appeal to an “ethical call...written into the very matter of all being and becoming” will settle the question of judgment, of what is to be done. There may be no place to detach ourselves from responsibility, but how to act in the face of it is by no means given by the fact of entanglement itself. What if “everything is possible”?
-Kathleen B. Jones
I was at dinner with a colleague this week—midterm week. Predictably, talk turned to the scourge of all professors: grading essays. There are few tasks in the life of a college professor less fulfilling than grading student essays. Every once in a while a really good essay jolts me to consciousness. I am elated by such encounters. To be honest, however, reading essays is for the most part stultifying. This is not the fault of the students, many of whom are brilliant and exuberant writers. I find it trying to wade through 25 essays discussing the same book, offering varying opinions and theories, while keeping my attention and interest. How many different ways can one ask for a thesis, talk about the importance of transition sentences, and correct grammar? For a time it is fun, in a way. One learns new things and is captivated by comparing how bright young minds see things. But after years, grading essays becomes just the worst part of a great job.
So how might my colleagues and I react to news that EdX—the influential Harvard-MIT led consortium offering online courses—has developed software that will grade college student essays? I imagine it is sort of like how people felt when the dishwasher was invented. You mean we can cook and feast and don’t have to scrub pots and wash dishes? It promises to allow us to focus on teaching well without having to do that part of our job that we truly dread.
The appeal of computer grading is obvious and broad. Not only will many professors and teachers be freed from unwanted tedium, but also it may help our students. One advantage of computer grading is that it is nearly instantaneous. Students can hand in their work and get a grade and feedback seconds later. Too often essays are handed back days or even weeks after they are submitted. By then the students have lost interest in their paper and forgotten the inspiration that breathed life into their writing. To receive immediate feedback will allow students to see what they did wrong and how they could improve while the generative impulse underlying the paper is still fresh. Computer grading might encourage students to turn in numerous drafts of a paper; it may very well help teach students to write better, something that professorial comments delivered after a week rarely accomplish.
Another putative advantage of computer grading is its objectivity and consistency. Every professor knows that it matters when we read essays and in what order. Some essays find us awake and attentive. Others meet my eyes as they struggle to remain open. As much as I try to ignore the names on the top of the page, I can’t deny that my reading and grading is personalized to the students. I teach at a small liberal arts college where I know the students. If I read a particularly difficult sentence by a student I have come to trust, I often make a second effort. My personal attention has advantages but it is of course discriminatory. The computer will not do that, which may be seen by some as more fair. What is more, the computer doesn’t get tired or need caffeine.
Perhaps the most important advantage for administrators considering these programs is the cost savings. If computers relieve professors from the burden of grading, that means professors can teach more. It may also mean that fewer TA’s are necessary in large lecture courses, thus saving money for strapped universities. There may even be a further side benefit to these programs. If universities need fewer TA’s to grade papers, they may admit fewer graduate students to their programs, thus going some way towards alleviating the extraordinary and irresponsible over-production of young professors that is swelling the ranks of unemployable Ph.D.s.
There are, of course, real worries about computer grading of essays. My concern is not that the computers will make mistakes (so do I); or that we lack studies that show that computers can grade as well as human professors—for I doubt professors are on the whole excellent graders. The real issue is elsewhere.
According to the group “Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment,” the problem with computer grading of essays is simple: Machines cannot read. Here is what the group says in a statement:
Let’s face the realities of automatic essay scoring. Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.
What needs to be taken seriously is not that computers can’t grade as well as humans. In many ways they grade better. More consistently. More honestly. With less grade inflation. And more quickly. But computer grading will be different from human grading. It will be less nuanced and will aspire to clearly defined criteria. Are sentences grammatical? Is there a clear statement of the thesis? Are there examples given? Is there a transition between sentences? All of these are important parts of good writing, and the computer can be trained to look for these characteristics in an essay. What this means, however, is that computers will demand the kind of clear, precise, and logical writing that computers can understand and that many professors and administrators demand from students. What this also means, however, is that writing will become more mechanical.
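The kind of criteria-driven checking described above can be sketched in a few lines. This is a toy illustration only, not the actual EdX software: the cue words and checks below are hypothetical stand-ins for the surface features a grading program might be trained to look for.

```python
# Toy surface checks of the sort a grading program can be trained on.
# The cue-word lists here are hypothetical illustrations, not real
# criteria from any deployed grading system.

TRANSITIONS = {"however", "moreover", "therefore", "furthermore", "consequently"}
THESIS_CUES = {"argue", "claim", "contend", "thesis"}

def check_essay(text: str) -> dict:
    """Run crude pass/fail checks on an essay's surface features."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    sentences = [s for s in text.split(".") if s.strip()]
    return {
        "has_thesis_cue": any(w in THESIS_CUES for w in words),
        "uses_transitions": any(w in TRANSITIONS for w in words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

report = check_essay(
    "I argue that machines cannot read. However, they can count. "
    "Therefore, they reward what is countable."
)
```

Note what such checks reward: a “thesis” becomes the presence of a cue word, a “transition” the presence of a connective. The mechanical clarity the criteria demand is exactly what the checks can see.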
There is much to be learned here from an analogy with the rise of computer chess. The great grandmaster Garry Kasparov—who famously lost to Deep Blue—has perceptively argued that machines have changed the ways chess is played and redefined what a good chess move and a well-played chess game look like. As I have written before:
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers. One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic.
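The counting logic described above—reducing each piece to a value and summing—can be shown in a minimal sketch. The piece values are the standard textbook ones; a real engine adds positional factors and searches billions of such evaluations per move.

```python
# Minimal material-count evaluation: each piece is reduced to a number
# and a position's score is just the sum. Uppercase letters are White's
# pieces, lowercase Black's; a positive score favors White.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(pieces: str) -> int:
    """Sum piece values over a position given as a string of letters."""
    score = 0
    for ch in pieces:
        value = PIECE_VALUES.get(ch.upper())
        if value is not None:
            score += value if ch.isupper() else -value
    return score

# White queen, rook, and pawn (15) vs. Black's two rooks and knight (13):
material_edge = evaluate("QRPrrn")  # → 2
```

The sketch makes the point concrete: nothing in the sum cares about style, patterns, or theory; a position is simply better if its number is bigger.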
Much the same might be expected from the increasing use of computers to grade (and eventually to write) essays. Students will learn to write in ways expected from computers, just as they today try to learn to write in ways desired by their professors. The difference is that different professors demand and respond to varying styles. Computers will consistently and logically drive writing towards a more mechanical and logical style. Writing, like chess playing, will likely become more rational, more efficient, and more effective, but also less beautiful, less unique, and less eccentric. In other words, writing will become less human.
It turns out that many secondary school districts already use computers to grade essays. But according to John Markoff in The New York Times, the EdX software promises to bring the technology into college classrooms as well as online courses.
It is quite possible that in the near future, my colleagues and I will no longer have to complain about grading essays. But that is unlikely at Bard. More likely is that such software will be used in large university lecture courses. In such courses with hundreds of students, professors already shorten questions or replace essays with multiple-choice tests. Or they use armies of underpaid graduate students to grade these essays. It is quite likely that software will actually augment the educational value of writing assignments at college in these large lecture halls.
In seminars, however, and in classes at small liberal arts colleges like Bard where I teach, such software will not likely free my colleagues and me from reading essays. The essays I assign are not simple responses to questions in which there are clear criteria for grading. I look for elegance, brevity, insight, and the human spark (please no comments on my writing). Whether or not I am good at evaluating writing or at teaching writing, that is my aspiration. I seek to encourage writing that is thoughtful rather than writing that is simply accurate. When I have time to make meaningful comments on papers, they concern structure, elegance, and depth. It is not only a way to grade an essay, but also a way to connect with my students and help them to see what it means to write and think well.
And yet, I can easily imagine making use of such a computer-grading program. I rarely have time to grade essays as well or as quickly as I would like. I would love to have my students submit drafts of their essays to the EdX computer program.
If they could repeatedly submit their essays and receive such feedback and use the computer to catch not only grammatical errors but also poor sentences, redundancies, repetitions, and whatever other mistakes the computer can be trained to recognize, that would allow them to respond and rework their essays many times before I see them. Used well, I hope, such grading programs might really augment my capacities as a professor and their experiences as students.
I have real fears that grading technology will rarely be used well. Rather, it will too often replace human grading altogether and, in large lectures, high schools, and standardized tests, will impose a new and inhuman standard on the way we write and thus the way we think. We should greet such new technologies both enthusiastically and skeptically. But first, we should try to understand them. Towards that end, it is well worth reading John Markoff’s excellent account of the new EdX computer grading software in The New York Times. It is your weekend read.
“To be alive means to live in a world that preceded one’s own arrival and will survive one’s own departure. On this level of sheer being alive, appearance and disappearance, as they follow upon each other, are the primordial events, which as such mark out time, the time span between birth and death.”
-Hannah Arendt, The Life of the Mind
I credit my undergraduate advisor, the late Kenneth Reshaur, for one of my obsessions: I refer to the crack in the spine, between the Work and the Action chapters, that divides my undergraduate copy of Hannah Arendt’s The Human Condition. That fissure finds sustenance in the passage above, which appears at the very beginning of the “Thinking” volume of Arendt’s The Life of the Mind.
It is a telling quote for many reasons, not least because in it Arendt explicitly echoes Maurice Merleau-Ponty’s treatment of “primordial perception” in some of his late writings on painting, but also because it testifies to Arendt’s relentless commitment to thinking as primordially bound to the phenomenality of life, and especially to the life of politics. Politics is, for Arendt, apparitional in nature. It regards the appearance of things, both human and inhuman. And to appear is also what it means to be alive. To be sure, for Arendt there is the fact of natality that regards a coming into life; but that differs from an appearance. Natality is of the order of the new; but an appearance persists regardless of its newness or oldness. We might say that an appearance is indifferent to qualities like newness or oldness. Hence Arendt’s emphasis on the sensoriality of appearances, their ingression, but also their departure. It is an unavoidable fact for her that peoples, things, events appear and disappear in the way in which the sound of a note or of a voice appears and then fades away; what Arendt appreciates about this primordial condition of sensoriality is that the appearance and disappearance of things marks a domain of sheer aliveness; “sheer” in the sense of not having qualifications or conditions for their bodying forth.
For Arendt, the sheerness of the apparitional world of politics means that appearances are not mere appearances. This fact marks, to my mind, her great friction with some aspects of the Platonic tradition from which she also draws. The aspectual alliteration of “sheer” and “mere” resonates with her emphasis on appearances as being a site of care. To be more precise, Arendt’s elaboration of a politics of appearances bespeaks a commitment to a curatorial disposition to the world that she associates with the ability to trust others to “tend and take care of a world of appearances” (The Crisis in Culture). To consider appearances as “mere” (as opposed to “sheer”) suggests a disregard for life itself, for the way in which, as she goes on to affirm a few paragraphs after the quote, “To be alive means to be possessed by an urge to self-display which answers the fact of one’s own appearingness.” (The Life of the Mind).
To be alive, in this sense, regards an urge to be felt, to be attended to by others. This is what the spectacle asks of the spectator: not so much “pay attention to me”, but “attend to what appears before you.” Such attention is what spurs on judgment, for Arendt, which is the activity sine qua non of “sharing-of-the-world-with-others” (Crisis in Culture). But before judgment may take place, before what captures our attentions can be morphed into thoughtful reflection, there is the sheerness of appearance that strikes at our curatorial dispositions.
And for Arendt, this primordial capacity to strike is disinterested.
What do I mean by this? Simply put, Arendt’s call to attend to the sheer appearance of the world forces us to come to terms with a domain of experience that precedes any and all capacities to formulate judgments, interests, and ideas: This is the primordial world of disinterest. And “disinterest” here does not mean either “indifferent” or “detached”; nor does this amount to a reassignment of the “Archimedean point.” On the contrary, the domain of disinterest is a domain of absorption and immersion in the facticity of lived sensations: it is the domain of the aesthetic that Arendt rightly identifies as the source of Kant’s political thought.
To recall, Kant’s crucial insight in the Critique of Judgment is that there can be no necessary conditions for something to count as beautiful, and hence there can be no rules for the category of the aesthetic. This is an insight that Kant borrows from Hume’s critique of causal necessity; but whereas for Hume, the heterogeneity that arises from the absence of necessity is a part of life, for Kant it is restricted to aesthetic experience as he defines it.
The aesthetic is the source of Kant’s political thought, then, not because the aesthetic provides normative guides to help us make judgments (it can’t), nor because there is anything specifically political about the beautiful (there can’t be because according to Kant aesthetic experience is disinterested in the sense of unqualifiable). Rather, the aesthetic is a source of political thinking, and political life in general, because it is only through aesthetic experience that one encounters a mode of valuing that is non-instrumental and not reducible to its use value. Indeed, aesthetic experience is that experience that annihilates our reliance on a sense of necessity; and it is precisely the annihilation of necessity – necessity being the concept that Arendt likens to the a-political qualities of the private and the social – which makes aesthetics and politics so intimately entangled for her.
Arendt’s politics of appearances, encapsulated in the quote from The Life of the Mind, thus speaks of the possibility of a life devoid of the force of necessity, and of things not having to go on as they have.
This is why she seems so resistant to the privative nature of the private, and the biologism of the social: what binds Arendt’s characterization of these entities (and I think it important to regard her use of these terms as characterizations and not descriptions), is their inexorable reliance on the force of necessity as sovereign.
For me, moreover, Arendt’s aesthetics of politics evokes the possibility of always having at one’s recourse the polemical claim that “this need not be”, that things need not continue in this way, that the continuity of any form of political subjectification is not necessary. This also means that the assembly of things – as they are at any one point in time – is not necessary in the manner in which an instrumental rationality demands that they must be. The possibility to admit of a resistance to necessity regards a curatorial disposition that attends to the sheer fact of appearance—of peoples, things, and events in the world. Such is the nature of Arendt’s politics of appearances.
There is probably no question more debated in the course of the Middle Eastern uprisings than that of the status of human rights. Anyone familiar with the region knows that the status of human rights in the Middle East is at best obscure. The question of why there was not a “revolution” in Lebanon is a very complex one, tied to the fate of Syria and to the turbulent course of Lebanese politics since the end of the civil war, and hence cannot be fully answered here. In a vague sense it can of course be said that Lebanon is the freest Arab country and that as such it bears a distinctively different character.
While at face value, the statement is true, being “more free than” in the Middle East is simply understating a problem. Just to outline the basic issues, Lebanon’s record on human rights has been a matter of concern for international watchdogs on the following counts:
Security forces arbitrarily detain and torture political opponents and dissidents without charge; different groups (political, criminal, terrorist, and often a combination of the three) intimidate civilians throughout a country in which the presence of the state is at best weak; freedom of speech and of the press is severely limited by the government; Palestinian refugees are systematically discriminated against; and homosexual intercourse is still considered a crime.
While these issues remain at the level of the state, in society a number of other issues are prominent: abuse of domestic workers, racism (for example, excluding people of color and maids from the beaches), violence against women, and homophobia, which recently included a homophobic rant in a newspaper of the prestigious American University of Beirut. The list could go on forever.
The question of gay rights in Lebanon remains somewhat paradoxical. On the one hand, article 534 of the Lebanese Penal Code explicitly prohibits homosexual intercourse, since it “contradicts the laws of nature”, and makes it punishable with prison. On the other hand, Beirut – and Lebanon – has for centuries remained, against all odds, a safe haven for many people in the Middle East fleeing persecution or looking for a more tolerant lifestyle.
That of course includes gays and lesbians, and it is not uncommon to hear of gay parties held from time to time in Beirut’s celebrated clubs. At the same time, enforcement of the law is sporadic and, like everything in Lebanon, it might happen and it might not; best to read the horoscope in the morning and pray for good luck. A few pro-LGBT NGOs have been created in the country since the inception of “Hurriyyat Khassa” (Private Liberties) in 2002.
In 2009 the Lebanese LGBT organization Helem launched a ground-breaking report on the legal status of homosexuals in the entire region, which documented, among other things, a Lebanese judge’s ruling against the use of article 534 to prosecute homosexuals.
It is against the background of this turbulent scenario that Samer Daboul’s film “Out Loud” (2011) came to life, putting together an unusual tale about friendship and love set in postwar Lebanon, in which five friends and a girl set out on a perilous journey in order to find their place in the world.
Though the plot of the film seems simple, underneath the surface lurks a challenge to the traditional morals and taboos of Lebanese society – homosexuality, the role of women, the troubled past of the war, delinquency, crime, honor – which marks a turning point for Lebanese cinema.
This wouldn’t be so important in addressing the question of rights and freedoms in Lebanon were it not for a documentary, “Out Loud – The Documentary”, released together with the film, which documents in detail the ordeal the director, actors, and crew had to go through in order to complete the film.
The film was shot in Zahlé, in the mountainous heartland of Lebanon and what the director called “a city and a nation of conservatism and intolerance”. The documentary reports that from the very beginning the cast and crew were met with the same angry mobs, insults, and physical injuries that their film itself so vehemently tried to overcome—a commercial film about family violence, gay lovers, and the boundaries of relationships between men and women; a film not about the Lebanon of fifteen or twenty years ago, but about the Lebanon of here and today.
Daboul writes: “Although I grew up in the city in which “Out Loud” was filmed, even I had no idea how difficult it would be to make a movie in a nation plagued by violence, racism, sexism, corruption and a lack of respect for art and human rights.” The purpose of “Out Loud”, of course, wasn’t only to make a movie but to create a school of life, in which the maker, the actors, and the audience could all have a peaceful chance to re-examine their own history and future.
Until very recently, in lieu of a public space in Lebanon, any conflict was resolved by means of shooting, kidnapping, and blackmailing by armed militias spread throughout the country and acting in the name of the nation.
The wounds have been very slow to heal, as is no doubt visible from the contemporary political panorama. Recently, a conversation with an addiction counselor in Beirut revealed alarming statistics on youth mental illness, alcoholism, and drug addiction across all social classes in Lebanon, to which I will devote a separate article.
Making films in Lebanon is an arduous process that not only receives no support from the state but is also subject to an enormous censorship bureaucracy that wants to make sure that the content of the films does not run counter to the religious and political sensibilities of the state. In the absence of strong state powers, the regulations are often malleable and look after the sensibilities of political blocs and religious leaders rather than state security, if any such exists.
The whole idea of censorship of ideas is intimately intertwined with the reality of freedom and rights and with the severe limitations – both physical and intellectual – placed upon the public space.
In the Middle East, censoring the depiction of a gay relationship is an established practice meant to protect public morality; yet what we hear on the news daily, from theft to murder to kidnapping to abuse to rape to racism, requires no censorship and is consumed by that very same public.
If there is one thing one can learn from Hannah Arendt about freedom of speech, it is, as Roger Berkowitz writes in “Hannah Arendt and Human Rights”:
The only truly human rights, for Arendt, are the rights to act and speak in public. The roots for this Arendtian claim are only fully developed five years later with the publication of The Human Condition. Acting and speaking, she argues, are essential attributes of being human. The human right to speak has, since Aristotle defined man as a being with the capacity to speak and think, been seen to be a “general characteristic of the human condition which no tyrant could take away.”
Similarly, the human right to act in public has been at the essence of human being since Aristotle defined man as a political animal who lives, by definition, in a community with others. It is these rights to speak and act –to be effectual and meaningful in a public world – that, when taken away, threaten the humanity of persons.
While these ideas might seem oversimplified and rather vague in a region “thirsty” for politics, they establish a number of crucial distinctions that must be taken into account in any discussion about human rights. Namely:
1) The failure of human rights is a fundamental fact of the modern age
2) There is a distinction between civil rights and human rights, the latter being what people resort to when the former have failed them
3) It is the fact that we appear in public and speak our minds to our fellow men that ensures that we live our lives in a plurality of opinions and perspectives, and this is the ultimate indicator of a life lived with dignity.
Even if we have a “right” to a house, to an education and to a citizenship (that is, to belonging to a community), if we do not have the right to speak and act in public and express ourselves (as homosexual, woman, dissident and what not), we are not being permitted to become fully human. Regardless of the stability of political institutions and the provision of basic needs and security, there is no such thing as a human world – a human community – in the absence of the possibility of appearing in the world as what we truly are.
“Out Loud” – both the film and the documentary – is a testimony to the degree to which the many elements composing the multi-layered landscape of Lebanese society are at tremendous risk of worldlessness, being subject to an authority that relies on violence in lieu of power. Power and violence could not be more opposed.
Hannah Arendt writes in her journals:
Violence is measurable and calculable and, on the other hand, power is imponderable and incalculable. This is what makes power such a terrible force, but it is there precisely that its eminently human character lies. Power always grows in between men, whereas violence can be possessed by one man alone. If power is seized, power itself is destroyed and only violence is left.
It is always the case in dark times that peoples – and also the intellectuals among them – put their entire faith in politics to solve the conflicts that emerge in the absence of plurality and of the right to have rights, but nothing could be more mistaken. Politics cannot save, cannot redeem, cannot change the world. Just like the human community, it is something entirely contingent, fragile and temporary.
That is why no decision made at the level of government and policy is a replacement for the spontaneity of human action and appearance. It is here that the immense worth of “Out Loud” lies: in enabling a generation that is no longer afraid of hell – for whatever reason – to have a conversation. It is there that the rehabilitation of the public space is at stake, not in building empty parks to museumify a troubled past, as has so often been the case in Beirut. In an open conversation, people will continue contesting the legacy and appropriating the memory not as a distant past, but as their own.
The case of Lebanon remains precarious: the clergy has recently united in a call for more censorship; it was revealed today that the security services summon people for interrogation over what they have posted on their Facebook accounts; and HRW has condemned the performance of homosexuality tests on detainees in Lebanon. The latter at least sparked a debate: a discussion on the topic ensued at the seminar “Test of Shame” held at Université Saint-Joseph in Beirut, and the Lebanese Medical Society held a discussion in which it concluded that those tests are of no scientific value.
In a country like Lebanon, plagued by decades of war and violence, as Samer Daboul has said of his film, people are more often than not engaged in survival and in just that – surviving from one war to another, from one ruler to another, from one abuse to another – and as such, the responses of society to the challenges of the times are of an entirely secondary order. But what he has done in his films is what we, those who still have a little faith in Lebanon, should take as a principle: “It’s time to live. Not to survive.”
In anticipation of Bard’s upcoming fall conference (“Human Being in an Inhuman Age”) and reflecting upon several related threads in recent blogs (regarding “the wonders of man in the age of simulation”), I’ve found myself thinking about Rabbi Joseph Soloveitchik’s observations concerning the profound split in human nature.
It’s a division Soloveitchik traces back to the two creation stories in the Old Testament. In the first creation story (“Genesis I”), we read: “God created man, in the likeness of God made he him.” Created in God’s likeness, the first Adam stands as both the model and champion of humanity’s instrumental mastery over the earth and all that it contains. (“Fill the earth and subdue it, and have dominion over the fish of the sea, over the fowl of the heaven, and over the beasts, and all over the earth.”) Humankind’s mimetic faculty, in other words, correlates to material mastery. In the second creation story, by contrast, we find no reference either to images or to mastery. Instead, we read: “God breathed into his nostrils the breath of life; and man became a living soul.”
The chief variation in this version consists in the gift of life in the form of God’s breath. With the introduction of this immaterial element, the second creation story shifts focus, along with its normative register. Dominion over the material world gives way to a very different purpose. Placing Adam in the Garden of Eden, God instructs him “to dress it and to keep it.” In other words, mastery now yields to solicitude and conservation. If the first Adam is the master of creation, the second Adam is its self-denying caretaker. In short, if our first nature is instrumental, in the service of command and control, our second is responsive, mindful of that which requires care or service.
Today, it is the spirit of mastery that seems to be on the upswing. Whether in the culture of digital gaming, the likes of Kurzweil’s immortal “spiritual machines,” or popular films like The Matrix and Dark City, the message we hear is: “you can have it all!”
Dreams and the will to power, desire and reality, converge. Yet, it is this very convergence that may threaten the human – if we think of the “human” in terms of finitude, suffering, fragility, and the inevitability of uncertainty. This human reality is precisely what the will to material mastery (and dreams of digital immortality) deny. In this respect, Genesis I trumps Genesis II. The impulse to control is displacing our capacity for self-demotion in the service of what is other (beyond control). Otherness precludes mastery. Instead, it invites wonder. Wonder is the way we respond to that which goes beyond rational or instrumental control or mastery. This is the sublime. We experience it in the infinite call of nature (“beauty”) and in the infinite demands of the other who stands before us (“the ethical”). Judgment (of the beautiful and the just) begins in wonder, in the face of the real.
Sherry Turkle writes that digital simulation tends to undermine our fealty to the real. If this is so, authentic judgment may have no place in the domain of digital simulation. That claim looms large when law itself migrates to the screen (e.g., in the form of visual evidence and visual argument in court). This phenomenon has preoccupied me for the last decade or so, initially in my book When Law Goes Pop (Chicago: 2000) and more recently in Visualizing Law in the Age of the Digital Baroque: Arabesques & Entanglements (Routledge: forthcoming 2011). What happens when visual images become the basis for judgment inside the courtroom? How does the image – the amateur documentary, the police surveillance video, the fMRI of brain or heart, or the digital re-enactment of accidents and crimes – affect law’s ongoing quest for fact-based justice? Upon reflection, it becomes plain that judgments based on visual images arise in a different way, with different aesthetic and ethical consequences, than when they rest upon words alone. Nor is visual literacy a given. We need to decode the truth claims of images on the screen carefully, but in order to do that we must first crack the code that constitutes the meaning they provide. And the code changes with the kind of image we see. Regardless, we all tend to be naïve realists when it comes to images. “Seeing is believing.” We tend to look through the screen as if it were a window rather than a construct.
When law lives as an image on the screen, it lives there the way other images do, for good and for ill. Law emulates the cultural constructs of popular entertainment as well as the aesthetics of science. When law lives as an image it, too, takes delight in images of a brain glowing with the beautiful, digitally programmed colors of visual neuroscience. Thus, the images on which legal judgments are based may serve as factual anchors or merely as a source of aesthetic delight, as reliable information or as unmitigated fantasy or illicit desire. So it’s no idle matter to ask: in what reality (if any) does the digital image partake? When fact-based justice rests upon digital simulation, its claim to truth may come from a fantasy.
Like an image, law invites us to forget or deny what lies beyond its mimetic (figurative) aspect. Law’s oscillation between aesthetic form (image, figure, copy, text) and moral authority reenacts humanity’s historic vacillation between the two poles of our nature: mastery (Genesis I) and service (Genesis II). In the endless dance of power and meaning, Adam I and Adam II recapitulate the King’s two bodies, the letter and spirit of the law. Law oscillates between these two poles. Law commands, but it wants its commands to be accepted not simply out of fear of punishment, but also, even more importantly, in the belief that it is just. Without good (non-punitive, moral) reasons to accept its coercive power, law remains merely a gunman writ large.
And so, in a visual age like ours, it becomes incumbent upon all of us – jurists and lay people alike – to discern with great care whether or not the screen images we see are capable of bringing justice to mind.
Lemm's project is part of the now widespread attack on the traditional distinction between humans and animals. While the animality of humans has been a basic axiom of philosophical thinking at least since Aristotle characterized the human being as the animal having logos, the Aristotelian-Kantian elevation of the human as the animal who reasons is under revision. In part, the dissent results from our changing views of animals. But, as Berkowitz writes:
A more important challenge to human distinction originates from the discourse of human rights. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of biopolitics—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the reasoned life, biopolitics rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Lemm's book brings Nietzsche to the aid of those who would oppose the traditional elevation of human over animal. She argues that the seat of freedom and creativity is with animals, not with humans. Berkowitz dissents.
Such an optimistic reading of the rise of the animal is, to my mind, one-sided. Affirming otherness and multiplicity risks forgetting that, as Hannah Arendt has argued, “Human distinctness is not the same as otherness.” While animal life can be multiple, “only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear.”3 Far from outdated, Arendt’s version of human distinction is an effort to remind us that it is the human capacities to act and think, not to reason, that makes us uniquely human. Plurality, Arendt reminds us, is only possible because humans can initiate action.
The great tension of our times is that between a humanism that builds a world, a civilization, and an animalism that rebels against the limits that world represents. Nietzsche’s greatness was to see through the inhumanism of enlightenment humanism and to identify how the perversion of human civilization into a rational world that plans, calculates, and orders dehumanizes humanity. To respond to the degradation of humanist civilization by abandoning humanity to its animality, however, risks pursuing a false path to liberation. The animal freedom and plurality that Lemm’s account of Nietzsche offers is, in Heidegger’s words, the “absence of boundaries and limits, the absence of objects not thought as a lack, but as the originary totality of the actual in which the creature is immediately admitted and thus set free.”4 The freedom of Rilke’s animal, in its rebellion against the rationalism of metaphysics, is the freedom of the “open sea,” a vast, undifferentiated, and yawning freedom of infinite possibility. What such a freedom forgets is that humans live in a world. It is one thing to bring into question the rational foundations of that world. It is another to question the world itself.