“The shift from the ‘why’ and ‘what’ to the ‘how’ implies that the actual objects of knowledge can no longer be things or eternal motions but must be processes, and that the object of science is no longer nature or the universe but the history, the story of the coming into being, of nature or life or the universe....Nature, because it could be known only in processes which human ingenuity, the ingeniousness of homo faber, could repeat and remake in the experiment, became a process, and all particular natural things derived their significance and meaning solely from their function in the over-all process. In the place of the concept of Being we now find the concept of Process. And whereas it is in the nature of Being to appear and thus disclose itself, it is in the nature of Process to remain invisible, to be something whose existence can only be inferred from the presence of certain phenomena.”
-Hannah Arendt, The Human Condition
Bookending Arendt’s consideration of the human condition “from the vantage point of our newest experiences and our most recent fears” is her invocation of several “events,” which she took to be emblematic of the modern world launched by the atomic explosions of the 1940s and of the threshold of the modern age that preceded it by several centuries. The event she invokes in the opening pages is the launch of Sputnik in 1957; its companion events are named in the last chapter of the book--the discovery of America, the Reformation, and the invention of the telescope and the development of a new science.
Not once mentioned in The Human Condition, but, as Mary Dietz argued so persuasively in her Turning Operations, palpably present as a “felt absence,” is the event of the Shoah, the “hellish experiment” of the SS concentration camps, which is memorialized today, Yom HaShoah. Reading Arendt’s commentaries on the discovery of the Archimedean point and its application in modern science with the palpably present but textually absent event of the Holocaust in mind sheds new light on the significance of her cautionary tale about the worrying implications of the new techno-science of algorithms and quantum physics and its understanding of nature produced through the experiment.
What happens, she seems to be asking, when the meaning of all “particular things” derives solely from “their function in the over-all process”? If nature in all of its aspects is understood as the inter- (or intra-) related aspects of the overall life process of the universe, does then human existence, as part of nature, become merely one part of that larger process, differing perhaps in degree, but not kind, from any other part?
Recently, “new materialist” philosophers have lauded this so-called “posthumanist” conceptualization of existence, arguing that the anthropocentrism anchoring earlier modern philosophies—Arendt implicitly placed among them?—arbitrarily separates humans from the rest of nature and positions them as masters in charge of the world (universe). By contrast, a diverse range of thinkers such as Jane Bennett, Rosi Braidotti, William Connolly, Diana Coole, and Cary Wolfe have drawn on a variety of philosophical and scientific traditions to re-appropriate and “post-modernize” some form of vitalism. The result is a reformulation of an ontology of process—what Connolly calls “a world of becoming”—as the most accurate way to understand matter’s dynamic and eternal self-unfolding. And, consequently, it entails transforming agency from a human capacity of “the will,” with its related intentions, into a theory of agency of “multiple degrees and sites...flowing from simple natural processes, to human beings and collective social assemblages” with each level and site containing “traces and remnants from the levels from which it evolved,” which “affect [agency’s] operation.” (Connolly, A World of Becoming, p. 22, emphasis added). The advantage of a “philosophy/faith of radical immanence or immanent realism,” Connolly argues, is its ability to engage the “human predicament”: “how to negotiate life, without hubris or existential resentment, in a world that is neither providential nor susceptible to consummate mastery. We must explore how to invest existential affirmation in such a world, even as we strive to fend off its worst dangers.”
An implicit ethic of aiming to take better care of the world, “to fold a spirit of presumptive generosity for the diversity of life into your conduct” by not becoming too enamored with human agency, resides in this philosophy/faith. One can discern similar ethical concerns in Jane Bennett’s Vibrant Matter, in the entanglements she explores between human and non-human materiality—a “heterogeneous monism of vibrant bodies.” “It seems necessary and impossible to rewrite the default grammar of agency, a grammar that assigns activity to people and passivity to things.” Conceptualizing nature as “an active becoming, a creative not-quite-human force capable of producing the new,” Bennett affirms a “vital materiality [that] congeals into bodies, bodies that seek to persevere or prolong their run” (p. 118, emphasis in the original), where “bodies” connotes all forms of matter. And she contends that this vital materialism can “enhance the prospects for a more sustainability-oriented public.” Yet, without some normative criteria for discerning the ways this new materialism can work toward “sustainability,” it is by no means obvious how either a declaration of faith in the “radical character of the (fractious) kinship between the human and the non-human” or a greater “attentiveness to the indispensable foreignness that we are” would lead to a change in political direction toward more gratitude and away from more destructive patterns of production and consumption. The recognition of our vulnerability could just as easily lead to renewed efforts to truncate or even eradicate the “foreignness” within.
Nonetheless, although these and other accounts call for a reconceptualization of concepts of agency and of causality, none pushes as far toward a productivist/performative account of matter and meaning as does Karen Barad’s theory of “agential realism.” Drawing out the implications of Niels Bohr’s quantum mechanics, Barad develops a theory of how “subjects” and “objects” are produced as apparently separable entities by “specific material configurings of the world” which enact “boundaries, properties, and meanings.” And, in her conceptualization, “meaning is not a human-based notion; rather meaning is an ongoing performance of the world in its differential intelligibility...Intelligibility is not an inherent characteristic of humans but a feature of the world in its differential becoming. The world articulates itself differently...[H]uman concepts or experimental practices are not foundational to the nature of phenomena.” The world is immanently real and matter immanently materializes.
At first glance, this posthumanist understanding of reality seems consistent with Arendt’s own critique of Cartesian dualism and Newtonian physics and her understanding of the implicitly conditioned nature of human existence. “Men are conditioned beings because everything they come into contact with turns immediately into a condition of their existence. The world in which the vita activa spends itself consists of things produced by human activities; but the things that owe their existence exclusively to men nevertheless constantly condition their human makers.” Nonetheless, there is a profound difference between them. For Barad, “world” is not Arendt’s humanly built habitat, the domain of homo faber (which does not necessarily entail mastery of nature, but always involves a certain amount of violence done to nature, even to the point of “degrading nature and the world into mere means, robbing both of their independent dignity” [H.C., p. 156, emphasis added]). “World” is matter, the physical, ever-changing reality of an inherently active, “larger material configuration of the world and its ongoing open-ended articulation.” Or is it?
Since this world is made demonstrably real or determinate only through the design of the right experiment to measure the effects of, or marks on, bodies, or “measuring agencies” (such as a photographic plate) made or produced by “measured objects” (such as electrons), the physical nature of this reality becomes an effect of the experiment itself. Despite the fact that Barad insists that “phenomena do not require cognizing minds for their existence” and that technoscientific practices merely manifest “an expression of the objective existence of particular material phenomena” (p. 361), the importance of the well-crafted scientific experiment to establishing the fact of matter looms large.
Why worry about the experiment as the basis for determining the nature of nature, including so-called “human nature”? For Arendt, the answer was clear: “The world of the experiment seems always capable of becoming a man-made reality, and this, while it may increase man’s power of making and acting, even of creating a world, far beyond what any previous age dared imagine...unfortunately puts man back once more—and now even more forcefully—into the prison of his own mind, into the limitations of patterns he himself has created...[A] universe construed according to the behavior of nature in the experiment and in accordance with the very principles which man can translate technically into a working reality lacks all possible representation...With the disappearance of the sensually given world, the transcendent world disappears as well, and with it the possibility of transcending the material world in concept and thought.”
The transcendence of representationalism does not trouble Barad, who sees “representation” as a process of reflection or mirroring hopelessly entangled with an outmoded “geometrical optics of externality.” But for Arendt, appearance matters, and not in the sense that a subject discloses some inner core of being through her speaking and doing, but in the sense that what is given to the senses of perception—and not just to the sense of vision—is the basis for constructing a world in common. The loss of this “sensually given world” found its monstrous enactment in the world of the extermination camps, which Arendt saw as “special laboratories to carry through its experiment in total domination.”
If there is a residual humanism in Arendt’s theorizing it is not the simplistic anthropocentrism, which takes “man as the measure of all things,” a position she implicitly rejects, especially in her critique of instrumentalism. Rather, she insists that “the modes of human cognition [science among them] applicable to things with ‘natural’ qualities, including ourselves to the limited extent that we are specimens of the most highly developed species of organic life, fail us when we raise the question: And who are we?” (H.C., p. 11, emphasis in the original) And then there is the question of responsibility.
We may be unable to control the effects of the actions we set in motion, or, in Barad’s words, “the various ontological entanglements that materiality entails.”
But no undifferentiated assignation of agency to matter, or material sedimentations of the past “ingrained in the body’s becoming,” can release us humans from the differential burden of consciousness and memory that is attached to something we call the practice of judgment. And no appeal to an “ethical call...written into the very matter of all being and becoming” will settle the question of judgment, of what is to be done. There may be no place to detach ourselves from responsibility, but how to act in the face of it is by no means given by the fact of entanglement itself. What if “everything is possible”?
-Kathleen B. Jones
I was at dinner with a colleague this week—midterm week. Predictably, talk turned to the scourge of all professors: grading essays. There are few tasks in the life of a college professor less fulfilling than grading student essays. Every once in a while a really good essay jolts me to consciousness. I am elated by such encounters. To be honest, however, reading essays is for the most part stultifying. This is not the fault of the students, many of whom are brilliant and exuberant writers. I find it trying to wade through 25 essays on the same book, each offering varying opinions and theories, while keeping up my attention and interest. How many different ways can one ask for a thesis, talk about the importance of transition sentences, and correct grammar? For a while it is fun, in a way. One learns new things and is captivated by comparing how bright young minds see things. But after years, grading essays becomes simply the worst part of a great job.
So how might my colleagues and I react to news that EdX—the influential Harvard-MIT led consortium offering online courses—has developed software that will grade college student essays? I imagine it is sort of like how people felt when the dishwasher was invented. You mean we can cook and feast and don’t have to scrub pots and wash dishes? It promises to allow us to focus on teaching well without having to do that part of our job that we truly dread.
The appeal of computer grading is obvious and broad. Not only will many professors and teachers be freed from unwanted tedium, but also it may help our students. One advantage of computer grading is that it is nearly instantaneous. Students can hand in their work and get a grade and feedback seconds later. Too often essays are handed back days or even weeks after they are submitted. By then the students have lost interest in their paper and forgotten the inspiration that breathed life into their writing. To receive immediate feedback will allow students to see what they did wrong and how they could improve while the generative impulse underlying the paper is still fresh. Computer grading might encourage students to turn in numerous drafts of a paper; it may very well help teach students to write better, something that professorial comments delivered after a week rarely accomplish.
Another putative advantage of computer grading is its objectivity and consistency. Every professor knows that it matters when we read essays and in what order. Some essays find us awake and attentive. Others meet my eyes as they struggle to remain open. As much as I try to ignore the names on the top of the page, I can’t deny that my reading and grading is personalized to the students. I teach at a small liberal arts college where I know the students. If I read a particularly difficult sentence by a student I have come to trust, I often make a second effort. My personal attention has advantages but it is of course discriminatory. The computer will not do that, which may be seen by some as more fair. What is more, the computer doesn’t get tired or need caffeine.
Perhaps the most important advantage for administrators considering these programs is the cost savings. If computers relieve professors from the burden of grading, that means professors can teach more. It may also mean that fewer TA’s are necessary in large lecture courses, thus saving money for strapped universities. There may even be a further side benefit to these programs. If universities need fewer TA’s to grade papers, they may admit fewer graduate students to their programs, thus going some way towards alleviating the extraordinary and irresponsible over-production of young professors that is swelling the ranks of unemployable Ph.D.s.
There are, of course, real worries about computer grading of essays. My concern is not that the computers will make mistakes (so do I); or that we lack studies that show that computers can grade as well as human professors—for I doubt professors are on the whole excellent graders. The real issue is elsewhere.
According to the group “Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment,” the problem with computer grading of essays is simple: Machines cannot read. Here is what the group says in a statement:
Let’s face the realities of automatic essay scoring. Computers cannot ‘read.’ They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.
What needs to be taken seriously is not that computers can’t grade as well as humans. In many ways they grade better. More consistently. More honestly. With less grade inflation. And more quickly. But computer grading will be different from human grading. It will be less nuanced and will aspire to clearly defined criteria. Are sentences grammatical? Is there a clear statement of the thesis? Are there examples given? Is there a transition between sentences? All of these are important parts of good writing, and the computer can be trained to look for these characteristics in an essay. What this means, however, is that computers will demand the kind of clear, precise, and logical writing that computers can understand and that many professors and administrators demand from students. What this also means, however, is that writing will become more mechanical.
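To see just how mechanical such criteria are, consider a toy sketch of feature-based scoring. This is purely illustrative, written for this essay, and in no way EdX’s actual software; it rewards exactly the surface features listed above: a thesis-like opening, transition words, and sentences of manageable length.

```python
import re

# Transition words the toy grader rewards (an arbitrary, illustrative list).
TRANSITIONS = {"however", "moreover", "therefore", "furthermore", "consequently"}

def toy_essay_score(text: str) -> float:
    """A caricature of automated essay scoring: count surface features."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    score = 0.0
    # Reward a thesis-like claim in the opening sentence.
    if re.search(r"\b(argue|claim|show|contend)\b", sentences[0].lower()):
        score += 1.0
    # Reward explicit transition words at the start of later sentences.
    for s in sentences[1:]:
        first_word = s.split()[0].lower().strip(",")
        if first_word in TRANSITIONS:
            score += 0.5
    # Penalize very long sentences as a crude proxy for "clarity."
    for s in sentences:
        if len(s.split()) > 40:
            score -= 0.5
    return score
```

The sketch makes the essay’s point concrete: a sentence earns credit simply for beginning with “However,” regardless of whether the transition is earned, which is precisely the pressure toward mechanical prose described above.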
There is much to be learned here from an analogy with the rise of computer chess. The great grandmaster Garry Kasparov—who famously lost to Deep Blue—has perceptively argued that machines have changed the way chess is played and redefined what a good chess move and a well-played chess game look like. As I have written before:
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers. One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic.
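The reduction the quote describes, translating each piece into a value in order to turn the game into numbers, can be caricatured in a few lines. This is a deliberately crude sketch written for illustration; a real engine adds positional factors and deep search, but the principle is the same.

```python
# Standard material values (the king is not counted; losing it ends the game).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_eval(board: str) -> int:
    """Score a position from White's view.

    `board` is any string of piece letters: uppercase = White,
    lowercase = Black; non-piece characters are ignored.
    """
    score = 0
    for square in board:
        if square.upper() in PIECE_VALUES:
            value = PIECE_VALUES[square.upper()]
            score += value if square.isupper() else -value
    return score
```

Everything that resists quantification, style, pattern, a century of doctrine, simply drops out of the number, which is the analogy to computer-graded prose.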
Much the same might be expected from the increasing use of computers to grade (and eventually to write) essays. Students will learn to write in ways expected by computers, just as they today try to learn to write in ways desired by their professors. The difference is that different professors demand and respond to varying styles. Computers will consistently and logically drive writing towards a more mechanical and logical style. Writing, like chess playing, will likely become more rational, more efficient, and more effective, but also less beautiful, less unique, and less eccentric. In other words, writing will become less human.
It turns out that many secondary school districts already use computers to grade essays. But according to John Markoff in The New York Times, the EdX software promises to bring the technology into college classrooms as well as online courses.
It is quite possible that in the near future, my colleagues and I will no longer have to complain about grading essays. But that is unlikely at Bard. More likely is that such software will be used in large university lecture courses. In such courses with hundreds of students, professors already shorten questions or replace essays with multiple-choice tests. Or they use armies of underpaid graduate students to grade these essays. It is quite likely that software will actually augment the educational value of writing assignments at college in these large lecture halls.
In seminars, however, and in classes at small liberal arts colleges like Bard where I teach, such software will not likely free my colleagues and me from reading essays. The essays I assign are not simple responses to questions in which there are clear criteria for grading. I look for elegance, brevity, insight, and the human spark (please no comments on my writing). Whether or not I am good at evaluating writing or at teaching writing, that is my aspiration. I seek to encourage writing that is thoughtful rather than writing that is simply accurate. When I have time to make meaningful comments on papers, they concern structure, elegance, and depth. It is not only a way to grade an essay, but also a way to connect with my students and help them to see what it means to write and think well.
And yet, I can easily imagine making use of such a computer-grading program. I rarely have time to grade essays as well or as quickly as I would like. I would love to have my students submit drafts of their essays to the EdX computer program.
If they could repeatedly submit their essays and receive such feedback and use the computer to catch not only grammatical errors but also poor sentences, redundancies, repetitions, and whatever other mistakes the computer can be trained to recognize, that would allow them to respond and rework their essays many times before I see them. Used well, I hope, such grading programs might really augment my capacities as a professor and their experiences as students.
I have real fears that grading technology will rarely be used well. Rather, it will too often replace human grading altogether, and in large lectures, in high schools, and on standardized tests it will impose a new and inhuman standard on the way we write and thus the way we think. We should greet such new technologies both enthusiastically and skeptically. But first, we should try to understand them. Towards that end, it is well worth reading John Markoff’s excellent account of the new EdX computer grading software in The New York Times. It is your weekend read.
“To be alive means to live in a world that preceded one’s own arrival and will survive one’s own departure. On this level of sheer being alive, appearance and disappearance, as they follow upon each other, are the primordial events, which as such mark out time, the time span between birth and death.”
-Hannah Arendt, The Life of the Mind
I credit my undergraduate advisor, the late Kenneth Reshaur, for one of my obsessions: the crack in the spine, between the Work and the Action chapters, that divides my undergraduate copy of Hannah Arendt’s The Human Condition. That fissure finds sustenance in the passage above, which appears at the very beginning of Arendt’s “Thinking” volume of The Life of the Mind.
It is a telling quote for many reasons, not least because in it Arendt explicitly echoes Maurice Merleau-Ponty’s treatment of “primordial perception” in some of his late writings on painting, but also because it testifies to Arendt’s relentless commitment to thinking as primordially bound to the phenomenality of life, and especially to the life of politics. Politics is, for Arendt, apparitional in nature. It regards the appearance of things, both human and inhuman. And to appear is also what it means to be alive. To be sure, for Arendt there is the fact of natality that regards a coming into life; but that differs from an appearance. Natality is of the order of the new; but an appearance persists regardless of its newness or oldness. We might say that an appearance is indifferent to qualities like newness or oldness. Hence Arendt’s emphasis on the sensoriality of appearances, their ingression, but also their departure. It is an unavoidable fact for her that peoples, things, events appear and disappear in the way in which the sound of a note or of a voice appears and then fades away; what Arendt appreciates about this primordial condition of sensoriality is that the appearance and disappearance of things marks a domain of sheer aliveness; “sheer” in the sense of not having qualifications or conditions for their bodying forth.
For Arendt, the sheerness of the apparitional world of politics means that appearances are not mere appearances. This fact marks, to my mind, her great friction with some aspects of the Platonic tradition from which she also draws. The aspectual alliteration of “sheer” and “mere” resonates with her emphasis on appearances as being a site of care. To be more precise, Arendt’s elaboration of a politics of appearances bespeaks a commitment to a curatorial disposition to the world that she associates with the ability to trust others to “tend and take care of a world of appearances” (The Crisis in Culture). To consider appearances as “mere” (as opposed to “sheer”) suggests a disregard for life itself, for the way in which, as she goes on to affirm a few paragraphs after the quote, “To be alive means to be possessed by an urge to self-display which answers the fact of one’s own appearingness.” (The Life of the Mind).
To be alive, in this sense, regards an urge to be felt, to be attended to by others. This is what the spectacle asks of the spectator: not so much “pay attention to me”, but “attend to what appears before you.” Such attention is what spurs on judgment, for Arendt, which is the activity sine qua non of “sharing-of-the-world-with-others” (Crisis in Culture). But before judgment may take place, before what captures our attentions can be morphed into thoughtful reflection, there is the sheerness of appearance that strikes at our curatorial dispositions.
And for Arendt, this primordial capacity to strike is disinterested.
What do I mean by this? Simply put, Arendt’s call to attend to the sheer appearance of the world forces us to come to terms with a domain of experience that precedes any and all capacities to formulate judgments, interests, and ideas: This is the primordial world of disinterest. And “disinterest” here does not mean either “indifferent” or “detached”; nor does this amount to a reassignment of the “Archimedean point.” On the contrary, the domain of disinterest is a domain of absorption and immersion in the facticity of lived sensations: it is the domain of the aesthetic that Arendt rightly identifies as the source of Kant’s political thought.
To recall, Kant’s crucial insight in the Critique of Judgment is that there can be no necessary conditions for something to count as beautiful, and hence there can be no rules for the category of the aesthetic. This is an insight that Kant borrows from Hume’s critique of consequentialism; but whereas for Hume, the heterogeneity that arises from the absence of necessity is a part of life, for Kant it is restricted to aesthetic experience as he defines it.
The aesthetic is the source of Kant’s political thought, then, not because the aesthetic provides normative guides to help us make judgments (it can’t), nor because there is anything specifically political about the beautiful (there can’t be because, according to Kant, aesthetic experience is disinterested in the sense of being unqualifiable). Rather, the aesthetic is a source of political thinking, and political life in general, because it is only through aesthetic experience that one encounters a mode of valuing that is non-instrumental and not reducible to its use value. Indeed, aesthetic experience is the experience that annihilates our reliance on a sense of necessity; and it is precisely the annihilation of necessity – necessity being the concept that Arendt likens to the a-political qualities of the private and the social – which makes aesthetics and politics so intimately entangled for her.
Arendt’s politics of appearances, encapsulated in the quote from The Life of the Mind, thus speaks of the possibility of a life devoid of the force of necessity, and of things not having to go on as they have.
This is why she seems so resistant to the privative nature of the private, and the biologism of the social: what binds Arendt’s characterization of these entities (and I think it important to regard her use of these terms as characterizations and not descriptions), is their inexorable reliance on the force of necessity as sovereign.
For me, moreover, Arendt’s aesthetics of politics evokes the possibility of always having at one’s disposal the polemical claim that “this need not be”, that things need not continue in this way, that the continuity of any form of political subjectification is not necessary. This also means that the assembly of things – as they are at any one point in time – is not necessary in the manner in which an instrumental rationality demands that they must be. The possibility of admitting a resistance to necessity regards a curatorial disposition that attends to the sheer fact of appearance—of peoples, things, and events in the world. Such is the nature of Arendt’s politics of appearances.
There is probably no question more debated in the course of the Middle Eastern uprisings than that of the status of human rights. Anyone familiar with the region knows that the status of human rights in the Middle East is at best obscure. The question of why there was not a “revolution” in Lebanon is a very complex one, tied to the fate of Syria and to the turbulent Lebanese politics since the end of the civil war, and hence cannot be fully answered here. In a vague sense it can be said, of course, that Lebanon is the freest Arab country and that as such it bears a distinctively different character.
While at face value the statement is true, being “more free than” the rest of the Middle East simply understates the problem. Just to outline the basic issues, Lebanon’s record on human rights has been a matter of concern for international watchdogs on the following counts:
Security forces arbitrarily detain and torture political opponents and dissidents without charge; various groups (political, criminal, terrorist, and often a combination of the three) intimidate civilians throughout a country in which the presence of the state is at best weak; freedom of speech and of the press is severely limited by the government; Palestinian refugees are systematically discriminated against; and homosexual intercourse is still considered a crime.
While these issues remain at the level of the state, in society a number of other issues are prominent: the abuse of domestic workers; racism (for example, excluding people of color and maids from the beaches); violence against women; and homophobia, which recently extended to a homophobic rant in a newspaper of the prestigious American University of Beirut. The list could go on forever.
The question of gay rights in Lebanon remains somewhat paradoxical. On the one hand, article 534 of the Lebanese Penal Code explicitly prohibits homosexual intercourse, since it “contradicts the laws of nature”, and makes it punishable with prison. On the other hand, Beirut – and Lebanon – has remained, against all odds and for centuries, a safe haven for many people in the Middle East fleeing persecution or looking for a more tolerant lifestyle.
That of course includes gays and lesbians, and it is not uncommon to hear of gay parties held from time to time in Beirut’s celebrated clubs. At the same time, enforcement of the law is sporadic and, like everything in Lebanon, it might happen and it might not; best is to read the horoscope in the morning and pray for good luck. A few pro-LGBT NGOs have been created in the country since the inception of “Hurriyyat Khassa” (Private Liberties) in 2002.
In 2009 the Lebanese LGBT organization Helem launched a groundbreaking report on the legal status of homosexuals in the entire region, which documented a Lebanese judge’s ruling against the use of article 534 to prosecute homosexuals.
It is against the background of this turbulent scenario that Samer Daboul’s film “Out Loud” (2011) came to life, putting together an unusual tale about friendship and love set in postwar Lebanon, in which five friends and a girl set out on a perilous journey in order to find their place in the world.
Though the plot of the film seems simple, underneath the surface lurks a challenge to the traditional morals and taboos of Lebanese society – homosexuality, the role of women, the troubled past of the war, delinquency, crime, honor – a challenge that marks a turning point for Lebanese cinema.
This wouldn’t be so important in addressing the question of rights and freedoms in Lebanon were it not for a documentary, “Out Loud – The Documentary,” released together with the film, which documents in detail the ordeal the director, actors and crew had to go through in order to complete the film.
Shot in Zahlé, in the mountainous heartland of Lebanon – what the director called “a city and a nation of conservatism and intolerance” – the production, as the documentary reports, was met from the very beginning with the very angry mobs, insults, and physical injuries that the film itself so vehemently tried to overcome: a commercial film about family violence, gay lovers, and the boundaries of relationships between men and women. A film not about the Lebanon of fifteen or twenty years ago, but about the Lebanon of here and today.
Daboul writes: “Although I grew up in the city in which ‘Out Loud’ was filmed, even I had no idea how difficult it would be to make a movie in a nation plagued by violence, racism, sexism, corruption and a lack of respect for art and human rights.” The purpose of “Out Loud,” of course, wasn’t only to make a movie but to create a school of life, in which the maker, the actors and the audience could all have a peaceful chance to re-examine their own history and future.
Until very recently, in lieu of a public space in Lebanon, any conflict was resolved by means of shooting, kidnapping and blackmail by armed militias spread throughout the country and acting in the name of the nation.
The wounds have been very slow to heal as is no doubt visible from the contemporary political panorama. Recently, a conversation with an addiction counselor in Beirut revealed the alarming statistics of youth mental illness, alcoholism and drug addiction across all social classes in Lebanon, to which I will devote a different article.
Making films in Lebanon is an arduous process: it not only receives no support from the state but is also subject to an enormous censorship bureaucracy that wants to make sure the content of the films does not run counter to the religious and political sensibilities of the state. In the absence of strong state powers, the regulations are often malleable and look after the sensibilities of political blocs and religious leaders rather than state security, if any such thing exists.
The whole idea of censorship of ideas is intimately intertwined with the reality of freedom and rights and with the severe limitations – both physical and intellectual – placed upon the public space.
In the Middle East, censorship of a gay relationship is an established practice meant to protect public morality; yet the daily news – which runs from theft to murder to kidnapping to abuse to rape to racism – requires no such censorship and is consumed by the very same public.
If there is one thing one can learn from Hannah Arendt about freedom of speech, it is, as Roger Berkowitz writes in “Hannah Arendt and Human Rights”:
The only truly human rights, for Arendt, are the rights to act and speak in public. The roots for this Arendtian claim are only fully developed five years later with the publication of The Human Condition. Acting and speaking, she argues, are essential attributes of being human. The human right to speak has, since Aristotle defined man as a being with the capacity to speak and think, been seen to be a “general characteristic of the human condition which no tyrant could take away.”
Similarly, the human right to act in public has been at the essence of human being since Aristotle defined man as a political animal who lives, by definition, in a community with others. It is these rights to speak and act –to be effectual and meaningful in a public world – that, when taken away, threaten the humanity of persons.
While these ideas might seem oversimplified and rather vague in a region “thirsty” for politics, they establish a number of crucial distinctions that must be taken into account in any discussion about human rights. Namely:
1) The failure of human rights is a fundamental fact of the modern age
2) There is a distinction between civil rights and human rights, the latter being what people resort to when the former have failed them
3) It is the fact that we appear in public and speak our minds to our fellowmen that ensures that we live our lives amid a plurality of opinions and perspectives, and that is the ultimate indicator of a life lived with dignity.
Even if we have a “right” to a house, to an education and to a citizenship (that is, to belonging to a community), if we do not have the right to speak and act in public and express ourselves (as homosexual, woman, dissident and what not), we are not being permitted to become fully human. Regardless of the stability of political institutions and the provision of basic needs and security, there is no such thing as a human world – a human community – in the absence of the possibility of appearing in the world as what we truly are.
“Out Loud” – both the film and the documentary – is a testimony to the degree to which the many elements composing the multi-layered landscape of Lebanese society are at tremendous risk of worldlessness, being subject to an authority that relies on violence in lieu of power. Power and violence couldn’t be more opposite.
Hannah Arendt writes in her journals:
Violence is measurable and calculable and, on the other hand, power is imponderable and incalculable. This is what makes power such a terrible force, but it is there precisely that its eminently human character lies. Power always grows in between men, whereas violence can be possessed by one man alone. If power is seized, power itself is destroyed and only violence is left.
It is always the case in dark times that peoples – and also the intellectuals among them – put their entire faith in politics to solve the conflicts that emerge in the absence of plurality and of the right to have rights, but nothing could be more mistaken. Politics cannot save, cannot redeem, cannot change the world. Just like the human community, it is something entirely contingent, fragile and temporary.
That is why no decisions made at the level of government and policy can replace the spontaneity of human action and appearance. Here lies the immense worth of “Out Loud”: in enabling a generation that is no longer afraid of hell – for whatever reason – to have a conversation. It is there that the rehabilitation of the public space is at stake, and not in building empty parks to museumify a troubled past, as has often been the case in Beirut. In an open conversation, people will continue contesting the legacy and appropriating the memory not as a distant past, but as their own.
The case of Lebanon remains precarious. Lebanon’s clergy has recently united in a call for more censorship; today it was revealed that the security services summon people for interrogation over what they have posted on their Facebook accounts; and HRW has condemned the performance of “homosexuality tests” on detainees in Lebanon. The latter at least sparked a debate: a discussion on the topic ensued at the seminar “Test of Shame,” held at Université Saint-Joseph in Beirut, and the Lebanese Medical Society concluded in its own discussion that such tests have no scientific value.
In a country like Lebanon, plagued by decades of war and violence, as Samer Daboul has said of his film, people are more often than not engaged in survival and just that – surviving from one war to another, from one ruler to another, from one abuse to another – and as such, the responses of society to the challenges of the times are of an entirely secondary order. But what he has done in his films is what we, those who still have a little faith in Lebanon, should take as a principle: “It’s time to live. Not to survive.”
In anticipation of Bard’s upcoming fall conference (“Human Being in an Inhuman Age”) and reflecting upon several related threads in recent blogs (regarding “the wonders of man in the age of simulation”), I’ve found myself thinking about Rabbi Joseph Soloveitchik’s observations concerning the profound split in human nature.
It’s a division Soloveitchik traces back to the two creation stories in the Old Testament. In the first creation story (“Genesis I”), we read: “God created man, in the likeness of God made he him.” Created in God’s likeness, the first Adam stands as both the model and champion of humanity’s instrumental mastery over the earth and all that it contains. (“Fill the earth and subdue it, and have dominion over the fish of the sea, over the fowl of the heaven, and over the beasts, and all over the earth.”) Humankind’s mimetic faculty, in other words, correlates to material mastery. In the second creation story, by contrast, we find no reference either to images or to mastery. Instead, we read: “God breathed into his nostrils the breath of life; and man became a living soul.”
The chief variation in this version consists in the gift of life in the form of God’s breath. With the introduction of this immaterial element, the second creation story shifts focus, along with its normative register. Dominion over the material world gives way to a very different purpose. Placing Adam in the Garden of Eden, God instructs him “to dress it and to keep it.” In other words, mastery now yields to solicitude and conservation. If the first Adam is the master of creation, the second Adam is its self-denying caretaker. In short, if our first nature is instrumental, in the service of command and control, our second is responsive, mindful of that which requires care or service.
Today, it is the spirit of mastery that seems to be on the upswing. Whether in the culture of digital gaming, in the likes of Kurzweil’s immortal “spiritual machines,” or in popular films like The Matrix and Dark City, the message we hear is: “you can have it all!”
Dreams and the will to power, desire and reality, converge. Yet, it is this very convergence that may threaten the human – if we think of the “human” in terms of finitude, suffering, fragility, and the inevitability of uncertainty. This human reality is precisely what the will to material mastery (and dreams of digital immortality) deny. In this respect, Genesis I trumps Genesis II. The impulse to control is displacing our capacity for self-demotion in the service of what is other (beyond control). Otherness precludes mastery. Instead, it invites wonder. Wonder is the way we respond to that which goes beyond rational or instrumental control or mastery. This is the sublime. We experience it in the infinite call of nature (“beauty”) and in the infinite demands of the other who stands before us (“the ethical”). Judgment (of the beautiful and the just) begins in wonder, in the face of the real.
Sherry Turkle writes that digital simulation tends to undermine our fealty to the real. If this is so, authentic judgment may have no place in the domain of digital simulation. That claim looms large when law itself migrates to the screen (e.g., in the form of visual evidence and visual argument in court). This phenomenon has preoccupied my attention over the last decade or so, initially in my book When Law Goes Pop (Chicago: 2000) and more recently in Visualizing Law in the Age of the Digital Baroque: Arabesques & Entanglements (Routledge: forthcoming 2011). What happens when visual images become the basis for judgment inside the courtroom? How does the image – the amateur documentary, the police surveillance video, the fMRI of brain or heart, the digital re-enactment of accidents and crimes – affect law’s ongoing quest for fact-based justice? Upon reflection, it becomes plain that judgments based on visual images arise in a different way, with different aesthetic and ethical consequences, than judgments that rest upon words alone. Nor is visual literacy a given. We need to decode carefully the truth claims of images on the screen, but to do that we must first crack the code that constitutes the meaning they provide. And the code changes with the kind of image we see. Regardless, we all tend to be naïve realists when it comes to images. “Seeing is believing.” We tend to look through the screen as if it were a window rather than a construct.
When law lives as an image on the screen, it lives there the way other images do, for good and for ill. Law emulates the cultural constructs of popular entertainment as well as the aesthetics of science. When law lives as an image it, too, takes delight in images of a brain glowing with the beautiful, digitally programmed colors of visual neuroscience. Thus, the images on which legal judgments are based may serve as factual anchors or merely as a source of aesthetic delight, as reliable information or as unmitigated fantasy or illicit desire. So it’s no idle matter to ask, in what reality (if any) does the digital image partake? When fact-based justice rests upon digital simulation its claim to truth may come from a fantasy.
Like an image, law invites us to forget or deny what lies beyond its mimetic (figurative) aspect. Law’s oscillation between aesthetic form (image, figure, copy, text) and moral authority reenacts humanity’s historic vacillation between the two poles of our nature: mastery (Genesis I) and service (Genesis II). In the endless dance of power and meaning, Adam I and Adam II recapitulate the King’s two bodies, the letter and spirit of the law. Law oscillates between these two poles. Law commands, but it wants its commands to be accepted not simply out of fear of punishment, but also, even more importantly, in the belief that it is just. Without good (non-punitive, moral) reasons to accept its coercive power, law remains merely a gunman writ large.
And so, in a visual age like ours, it becomes incumbent upon all of us – jurists and lay people alike – to discern with great care whether or not the screen images we see are capable of bringing justice to mind.
Benjamin Stevens
Ethical and political thinking means thinking realistically: thinking about how things are actually done, about process or practices, and so about ideas only as they take shape in, and are shaped by, those practices. In other words, it means attending to how intellectual and, as it were, spiritual life are constrained by material conditions.
For thinking realistically today must begin with the fact that thought about something is always a something, a thing, in its own right: that thought is located in thinkers who live in spaces and times, in societies and cultures, and is mediated by their physical beings. In a word, thought is 'embodied'.
What are we to make of this fact, that thinking is something made? That thinking is, literally, a 'fiction'?
In this series, I try to answer that question by thinking realistically about fiction. I focus on those 'popular fictions' thought -- or made -- to have figured precisely the relationships between thinking and material being: fictions that figure what it means to be human (a seemingly 'rational animal' who 'thinks, therefore he (?) is') in a non-human, not to say unthinking, world.
Take Christopher Nolan's science fiction (sf) film Inception (2010). [At the time of this writing, the film is in wide mainstream release, and has been #1 at the box office two weekends running. Earlier versions of portions of this post appeared on facebook; special thanks are due to interlocutors there, especially Matt Emery, Jim Keller, and Deke Sharon, and in real life, especially Clark Frankel, Lucy Schmid, Roland Obedin-Schwartz, and Cameron Ogg.]
Sf films, whether or not they speculate about other technologies, draw special attention to the cinematic technology that makes them possible. In this way superficially resembling older 'cinema of attraction', they are also newly distracting: at least since Star Wars (Lucas 1977), which indissolubly associated them with 'blockbuster moviemaking' of a nostalgic or escapist sort, they can draw attention away from the deeper and grosser sociocultural structures and material conditions that allow for such fine-grained special effects.
(This is all the more true since The Matrix (Wachowski and Wachowski 1999), to whose literal vision, its mise-en-scène, many subsequent films, including Inception, owe a great deal; but whose figurative vision, of the particular dehumanizing effects of particular technology, most such imitators have failed to critique or even recreate. Like them, Inception seems to classify The Matrix more with the superficially brighter tradition begun by Star Wars than with the darker and more investigative tradition represented by Blade Runner (Scott 1982), whose vision of postnational society isn't neutral. What if The Matrix had been surpassed in popularity by Dark City (Proyas 1998)?)
Inception is a case in point, and disappointing. Especially -- intentionally -- astonishing is its quadruplicated 'inception' sequence, in which we're asked to follow four plots, worlds, and overlapping sets of physical laws simultaneously. The sequence is tightly constructed and, from the film's point of view, climactic. But it isn't show-stopping, as it could have been and, as I want to argue, as it should have been. A film from precisely so capable and intelligent a director as Nolan had the opportunity not only to tell its story but also to consider the conditions that make its very storytelling possible: to consider how it is that changing technologies have changed our stories and, alongside them, changed us.
In other words, Inception, like all sf, had the opportunity to self-ironize and therefore to criticize, developing an especially conscious perspective on the human effects of (storytelling) technology. Instead, it is technically accomplished but, conceptually, only clever: 'self-conscious' in only the most pervasively contemporary sense of wearing its love of genre knowledge on its sleeve. Inception is an example of how 'high-concept', high-budget sf risks merely crystallizing faded popular fictions about science and technology instead of critiquing how a technoscientific ideology vividly and consequentially fictionalizes 'human being'.
In that long 'inception' scene, for example, something as modern as nested relativistic physics is squandered in the service of a groaningly old-fashioned visual pun on 'climactic' and 'climatic'. At the high point of drama, the characters are subjected to low temperatures and wintry weather, bundled up indistinguishably to be trundled around an excessively video-gamey "level". The film seems confused by its own pun between "level" as "vertical or hierarchical stage" and "level" as "horizontal or sequential stage", the former allowing for exploration of interpenetrating causes and effects, the latter allowing only forward motion, as in a linear video game. As a result, while the scene isn't senseless -- there is a narrative logic to its literalizations of unconscious defense mechanisms -- it's pointless.
One measure of its being pointless is its being, surprisingly, sexless. Surprisingly indeed in a film drinking so deeply at the Dick-ian spring, one level's literal buttoning-down (natty French cuffs in a posh hotel whose high-class escort is a supporting character in Pythonesque psychic drag) giving way to puffy white snowpeople rolling about in mere alliance of convenience, only clockwork frantic, in place of what a better, more dangerous film would almost automatically have given: good old-fashioned Oedipal psychodrama. Part of the point, to be fair, is that the particular psyche's drama is centered around his repression of his own desires to adopt the image of his father, replacing instead of overtly killing: a textbook complex indeed. But the father in question was a captain of industry, on the verge of transforming his energy company into "a new superpower": there's a man who desired with all of his being to be master of all he surveyed, and the film responds by consigning him to deathbed mumblings.
Treated similarly sexlessly are the main character's dead wife "Mal" ('bad', whose refrain to the main character is, however, nothing more objectionable than that he'd promised they'd grow old together) and a potential but unrealized new interest, "Ariadne", whose mythic-psychic depths just don't exist: she's clean, good at mazes, and dutiful, made to comment that her "subconscious seems polite enough". No cannibalistic half-brother in the closet, no complicit survivor's guilt?
No, since in Inception's view all that matters is one man's emotional response to his own memory. Everyone else -- indeed, everything else, from soup to nuts -- is suppressed, made to act as if they were repressed, for his benefit alone.
All of that repression is, then, to speak figuratively, only one of the film's neuroses, lesser in comparison to another that is more pervasive and pernicious. For while Inception asks us to track the interaction of multiple fictional worlds simultaneously, and so in theory to consider whether different conceptual systems might influence each other so as to affect cognition, in practice it emphatically does not stop the show even to show, much less to critique, the factual machinery that makes that fictional sequence possible: the global technology and industry of film that allows for this local example. With the sequence representing the movie in miniature, the problem is not that the dream relates uncertainly to reality; for such is the film's own glossy enthusiasm, alongside its lack of consideration for other options, that we accept that old sf conceit without question.
The problem, rather, is that the dream is related uncertainly to any dreamer. The mood is repressive and suppressive both. Attention is paid to the drugs, including sedatives, that smooth the science-fictional technology's operation; to the 'projections' -- really: decorative schemes -- supplied by individual dreamers; and to the operating assumption that the dreaming mind, as a way into the preconscious, can have permanent effect on the person as a whole: but none of this takes proper account of dream as something that happens through and to a body. Not that the film doesn't deal with physical interaction; it does, for example in the 'inception' sequence, when physical effects like inertia and contact with water are transmitted analogously from level to nested lower level.
But in thus depicting only the most individual, personal conditions; in insisting however that the dreams are "shared"; and in the admonition that dreams ought not to be built out of memories: in all of this, Inception figures bodies as belonging to individuals, as matter (literally, figuratively) of individual minds, and therefore emphatically not as belonging to systems that make individuals possible, as material shaped by what the film itself depends on but depicts only in first-class passing: an international -- not postnational, not postindustrial -- system of technologies interlocking in ways almost incomprehensibly complex to the individual whose being is shaped by it.
Beyond being surprisingly sexless, then, the film's image of dreams is disembodied to the point of depicting bodies as apolitical. As a result, any questions it might seem to ask us in turn must end up floating free of any serious mooring: without any awareness of how human bodies and therefore minds are made by an international system of interlocking technologies, Inception is appallingly apolitical.
This is the problem of the film, and its moment of greatest missed opportunity for irony and critique: for thinking realistically, for thinking ethically and politically, about how the fact that there can be this sort of fiction must affect us.
The film wants us to wonder whether its plastic dream-logic might apply to our own (only apparently?) waking life.
But how could an answer matter when the question itself is imagined not as a political or ethical imperative but as a personal issue, a question posed not for us all as committed -- like it or not -- altogether to political interaction but for each of us as consumers, imagined as making decisions in response to what we like?
What in the world is at stake in a question that mistakes the world for a personal preference or lifestyle choice?
Looking back, we may notice that blithely disembodied machinery operating from the opening sequence onward. In a word, it is an apolitical postcolonialism, disappointingly toothless and neutered, allowing -- as it shouldn't -- the film to develop a starstruck vision of the world, of the world as it is figured almost exclusively in earlier films, at the expense and to the exclusion of the world as it is beyond such self-congratulatorily clever fare: as it is, precisely, to have made such a film possible.
Treating us, for example, to cameos from Batman's butler (Michael Caine, the British empire never wiser or more charming), a simulacrum of Batman's immortal enemy Ra's al Ghul (Ken Watanabe, Japan reconfigured to defuse incipient superpowers), another Batman enemy -- the one most closely associated with the film's own thing, hallucinatory mental manipulation -- (Cillian Murphy, his cheekboned creep utterly wasted); and to a scene in which the kid from 3rd Rock sneaks just the most glumly chaste kiss one can imagine from "Juno" (winkingly but, as I've noted, inconsequentially renamed Ariadne), Inception would distract us from -- as it has deluded itself about -- the world in which it is set.
It imagines a postnational, information-economic world in which former colonies and imperial competitors are alleged to have accepted American cultural superimposition so peaceably that it is just as if it had been their idea all along. The film flubs its chance to draw to this, its most destabilizing suggestion of pre- or co-conditioning 'inception', even the flimsy sort of psychoanalytical attention it draws to its main character in fiction, much less the truly probing attention such an unethically apolitical vision of global affairs demands.
For what, in the end, is the product of all this global machinery in glossily spectacular motion? No prizes for guessing: a wealthy, even patrician white American man who helps another, even wealthier, even more patrician man find himself and, so, finds himself. Once he does, he gets to live in the exceedingly well-photographed and tastefully furnished world of his fondest dreams, where he'll raise his soft-focused, towheaded children free from any influence of their darkly witchy mother, who paid for her only 'mistake' (viz., wanting to live in the world of her fondest dreams) by being consigned in her husband's mind to the classic category of "batshit crazy". What can she do or, rather, what can she be figured in memory as having done, in her husband's memory, other than to take her own life?
At least that way it's not his fault, you see.
(Nolan didn't let Batman keep his brunette, either -- too idealistic -- although he allowed him to seduce her from a healthier relationship with an actual public official, a person with a political consciousness. Also delimited in this way is Ariadne, whose scattering through this post is one indication of how little the film is interested in her: without a half-brother to betray, she can compromise only her own artistic instincts as she must learn to be, first, less creative -- as she puts it: "reproductive" -- and, then, not to build dreams based on her memories … like Cobb, who, again, is rewarded precisely for having shown no such scruple.)
What the film imagines, then, is a world whose complexly interlocking systems of industrial technological production, obviously but unexaminedly dependent on the labor of thousands, if not millions, and inevitably resulting in the transformation of natural and cultural locales, may -- of course! -- be configured to help one American man feel better about himself.
Worse -- per the film's tedious ending (Was it all a dream? No, it's a film.) -- it doesn't even matter whether any of it is real as long as he gets to feel better.
Far from thinking realistically about, let's say, facts of individual or social responsibility, the film thus focuses on a lesser personal feeling of guilt. It has no awareness of the multi-dimensional problems inherent in the local effects of a global economy: imagining its characters and settings as postnational, it ignores ongoing problems caused by technologies mediating the destabilizing transition from imperialism and colonialism to late capitalism and beyond.
What is to be done?
In my favorite scene, a café owner in Mombasa knows better than the film itself, and tries but tellingly fails to make his concerns understood to the only character, Cobb, whose opinion is allowed to matter. Cobb, on the run from shadowy multinational corporate forces -- later, the international audience is insulted by being asked to question whether such forces, too, are only paranoid delusions --, seeks refuge in a bustling café, seating himself at a table whose other occupants are rightly nonplussed by his graceless arrival. When the owner confronts him -- 'no', 'get out' -- he tries to defuse his gross disruption of the setting by ordering a coffee. The owner refuses and, again, tries to make his concerns understood. Cobb, of course, can't understand him but, more importantly, doesn't want to hear him: he has his own problems, you see. And besides, everything will be fine, we're only able to assume, once the gunfire that follows him has died down and we're off to the next exotic setting.
There is no mention of the local name for 'Mombasa', Kisiwa Cha Mvita, "Island of War".
Inception thus figures, despite its lack of consciousness, how what is still treated as an empire can but isn't allowed to "write back". Outside of a glossy cadre whose facility with imaginary technology is figured as daringly 'underground', even 'revolutionary', but in reality is merely self-congratulatory first-world consumerism, and whose characters are acted by actors famous already for their roles in other glibly nerdgasmic media, nobody has anything meaningful to say -- certainly no African who, lacking even reliable electricity, might conceivably have wanted to consider whether or not he would benefit from the energy "superpower" the heroes of the film are trying to scuttle.
Worse, the film's concluding suggestion that this might all be a dream actually confirms that this is how Cobb sees Africa: an erasure of postcolonial identity, just as if these Mombasan characters are, in the film's own terms, 'projections' of the white man's subconscious mind.
What about a version in which the supporting characters are actual people seeking to protect the integrity of their polity from violent intrusion -- a version in which it is only metaphorically that a white virus is vigorously rejected by the scene's immune system, figured tellingly as 'black blood cells'?
I started by mentioning a similarity between much contemporary sf film and an older 'cinema of attraction'. The similarity is, as I called it, "superficial" because, while the cinema of attraction is famously, even excessively conscious of its novelty, to the point of subordinating or eliminating story, the problem with a more recent sf film like Inception is that, since the time of the cinema of attraction, film, sf included, has proven itself a capable narrative form. As a result, to tell no story must be judged a failure not of technique or of the medium's possibility but of imagination. In an sf film in particular, not to tell a story that is truly about the consequences of technology and technoscientific ideology on human being is to misunderstand the genre.
With its accomplished pastiche of earlier films, Inception parades just that kind of misunderstanding glossily, which is bad enough, and, worse, globally: it hits all the marks of the genre but misses its critical point. There is no ghost to creak its meaningful chains in this well-oiled machine. (A special, contemporary problem may be that the most widely available technologies, e.g., smart phones, are orders of magnitude more difficult to tinker with than the consumer technology of a generation ago.)
For these reasons, a more charitable reading might conclude that Inception is simply not an sf film. But then what is it?
An in-flight magazine, gushing instead of reporting. (The reference to "Lost", the international flight to L.A., is clever, but what does it mean? That show, too, was filmed primarily in a place taken and retained unfairly from its rightful inhabitants.)
A callow glance and wink to oneself in the mirror, scope the frosted tips, eyebrows carefully slicked, ready with the roofie: when only your own memory matters, you can get away with murder.
In future posts in this series, I'll consider counterexamples and other instances of sf as the popular fiction that most repays consideration when thinking realistically about how fictions envision being human in an inhuman age. I'll start with the image used to advertise the Center's upcoming conference.
In the meantime, a suggestion: District 9, in which the embodied individual and local is properly contextualized -- meaning, at this moment, complicated and problematized -- by the impersonal and global, a more realistic image than Inception's fantastic daydream of purely individual will to redemptive power.
Lemm's project is part of the now widespread attack on the traditional distinction between humans and animals. While the animality of humans has been a basic axiom of philosophical thinking at least since Aristotle characterized the human being as the animal having logos, the Aristotelian-Kantian elevation of the human as the animal who reasons is under revision. In part, the dissent results from our changing views of animals. But, as Berkowitz writes:
A more important challenge to human distinction originates from the discourse of human rights. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of biopolitics—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the reasoned life, biopolitics rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Lemm's book brings Nietzsche to the aid of those who would oppose the traditional elevation of human over animal. She argues that the seat of freedom and creativity is with animals, not with humans. Berkowitz dissents.
Such an optimistic reading of the rise of the animal is, to my mind, one-sided. Affirming otherness and multiplicity risks forgetting that, as Hannah Arendt has argued, “Human distinctness is not the same as otherness.” While animal life can be multiple, “only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear.”3 Far from outdated, Arendt’s version of human distinction is an effort to remind us that it is the human capacities to act and think, not to reason, that make us uniquely human. Plurality, Arendt reminds us, is only possible because humans can initiate action.
The great tension of our times is that between a humanism that builds a world, a civilization, and an animalism that rebels against the limits that world represents. Nietzsche’s greatness was to see through the inhumanism of enlightenment humanism and to identify how the perversion of human civilization into a rational world that plans, calculates, and orders dehumanizes humanity. To respond to the degradation of humanist civilization by abandoning humanity to its animality, however, risks pursuing a false path to liberation. The animal freedom and plurality that Lemm’s account of Nietzsche offers are, in Heidegger’s words, the “absence of boundaries and limits, the absence of objects not thought as a lack, but as the originary totality of the actual in which the creature is immediately admitted and thus set free.”4 The freedom of Rilke’s animal, in its rebellion against the rationalism of metaphysics, is the freedom of the “open sea,” a vast, undifferentiated, and yawning freedom of infinite possibility. What such a freedom forgets is that humans live in a world. It is one thing to bring into question the rational foundations of that world. It is another to question the world itself.