One of those controversies from last summer that somehow passed me by while I was teaching in Italy was the dust-up over the Fertility Institutes' decision first to offer a service allowing parents to choose basic traits for their children (hair and eye color, etc.) and then to retreat in the face of an ethical uproar.
The Institute did and still does provide screening for many diseases as well as pre-selection options for sex (it has a 100% success rate in pre-selecting the sex of the child). But the uproar over hair and eye color was too much to bear.
One question raised by this is why we allow screening for sex preference and disease, but not for hair color. Consider that discrimination is much more prevalent in the areas of disability and sex than it is around eye color. So we permit the erasure of, or compensation for, those traits and conditions that society deems meaningful, but prohibit choice around less impactful traits.
In a March 9, 2009 H+ online interview, James Hughes--who will be speaking at the Arendt Center's October conference Human Being in an Inhuman Age--defends the rights of parents to make such choices. And he rejects the term "designer babies." As he said in the interview:
“It’s inevitable, in the broad context of freedom and choice. And the term ‘designer babies’ is an insult to parents, because it basically says parents don’t have their kids’ best interests at heart.” He added, “If I’ve got a dozen embryos I could implant, and the ones I want to implant are the green-eyed ones, or the blond-haired ones, that’s an extension of choices we think are perfectly acceptable — and restricting them a violation of our procreative autonomy.”
In a poll cited by H+, the majority of respondents supported preimplantation genetic diagnosis in questions of health, but most drew the line somewhere around selecting for athletic ability or intelligence.
More recently, a January 2009 study by researchers at NYU Langone Medical Center found that an overwhelming 75% of parents would be in favor of trait selection using PGD – as long as that trait is the absence of mental retardation. A further 54% would screen their embryos for deafness, 56% for blindness, 52% for a propensity to heart disease, and 51% for a propensity to cancer. Only 10% would be willing to select embryos for better athletic ability, and 12.6% would select for greater intelligence. 52.2% of respondents said that there were no conditions for which genetic testing should never be offered, indicating widespread support for PGD – as long as it’s for averting disease and not engineering human enhancement.
It seems that there are at least two worries that need to be thought through. One is socio-economic. Allowing for-profit companies to select embryos for desired traits means that those who can pay for it, and want to, will ensure their progeny a genetic advantage. Now, a genetic advantage does not mean one will succeed, but it is an advantage--at least a perceived advantage, even if not always a real one.
The second question is broader. If Hannah Arendt and others are right that one part of our humanity is that we are subject to chance and to fate, then the opportunity to control all decisions in life--including the decision of life itself--raises questions about our humanity. As the mystery is taken from childbirth, one of the great human experiences begins to approximate the experience of ordering from a catalogue. Of course, this is not yet the case. But with surrogacy pregnancies and genetic implants, it is now possible to go on vacation, order a boy with green eyes, blonde hair, and high intelligence, and come home to pick him up. Is this a human way to have children? Is it human to eradicate disease? What about creating such healthy people that they live to be 700 years old, as Ray Kurzweil imagines will soon be the case?
One needs to think about what it means to be human in a world of super-human technologies.
And read about the latest cloned mammal, Got, the Spanish fighting bull.
Benjamin Stevens
Ethical and political thinking means thinking realistically: thinking about how things are actually done, about process or practices, and so about ideas only as they take shape in, and are shaped by, those practices. In other words, it means attending to how intellectual and, as it were, spiritual life are constrained by material conditions.
For thinking realistically today must begin with the fact that thought about something is always a something, a thing, in its own right: that thought is located in thinkers who live in spaces and times, in societies and cultures, and is mediated by their physical beings. In a word, thought is 'embodied'.
What are we to make of this fact, that thinking is something made? That thinking is, literally, a 'fiction'?
In this series, I try to answer that question by thinking realistically about fiction. I focus on those 'popular fictions' thought -- or made -- to have figured precisely the relationships between thinking and material being: fictions that figure what it means to be human (a seemingly 'rational animal' who 'thinks, therefore he (?) is') in a non-human, not to say unthinking, world.
Take Christopher Nolan's science fiction (sf) film Inception (2010). [At the time of this writing, the film is in wide mainstream release, and has been #1 at the box office two weekends running. Earlier versions of portions of this post appeared on facebook; special thanks are due to interlocutors there, especially Matt Emery, Jim Keller, and Deke Sharon, and in real life, especially Clark Frankel, Lucy Schmid, Roland Obedin-Schwartz, and Cameron Ogg.]
Sf films, whether or not they speculate about other technologies, draw special attention to the cinematic technology that makes them possible. In this way they superficially resemble the older 'cinema of attraction'; but they are also newly distracting: at least since Star Wars (Lucas 1977), which indissolubly associated them with 'blockbuster moviemaking' of a nostalgic or escapist sort, they can draw attention away from the deeper and grosser sociocultural structures and material conditions that allow for such fine-grained special effects.
(This is all the more true since The Matrix (Wachowski and Wachowski 1999), to whose literal vision, its mise-en-scène, many subsequent films, including Inception, owe a great deal; but whose figurative vision, of the particular dehumanizing effects of particular technology, most such imitators have failed to critique or even recreate. Like them, Inception seems to classify The Matrix more with the superficially brighter tradition begun by Star Wars than with the darker and more investigative tradition represented by Blade Runner (Scott 1982), whose vision of postnational society isn't neutral. What if The Matrix had been surpassed in popularity by Dark City (Proyas 1998)?)
Inception is a case in point, and disappointing. Especially -- intentionally -- astonishing is its quadruplicated 'inception' sequence, in which we're asked to follow four plots, worlds, and overlapping sets of physical laws simultaneously. The sequence is tightly constructed and, from the film's point of view, climactic. But it isn't show-stopping, as it could have been and, as I want to argue, as it should have been. A film from precisely so capable and intelligent a director as Nolan had the opportunity not only to tell its story but also to consider the conditions that make its very storytelling possible: to consider how it is that changing technologies have changed our stories and, alongside them, changed us.
In other words, Inception, like all sf, had the opportunity to self-ironize and therefore to criticize, developing an especially conscious perspective on the human effects of (storytelling) technology. Instead, it is technically accomplished but, conceptually, only clever: 'self-conscious' in only the most pervasively contemporary sense of wearing its love of genre knowledge on its sleeve. Inception is an example of how 'high-concept', high-budget sf risks merely crystallizing faded popular fictions about science and technology instead of critiquing how a technoscientific ideology vividly and consequentially fictionalizes 'human being'.
In that long 'inception' scene, for example, something as modern as nested relativistic physics is squandered in the service of a groaningly old-fashioned visual pun on 'climactic' and 'climatic'. At the high point of drama, the characters are subjected to low temperatures and wintry weather, bundled up indistinguishably to be trundled around an excessively video-gamey "level". The film seems confused by its own pun between "level" as "vertical or hierarchical stage" and "level" as "horizontal or sequential stage", the former allowing for exploration of interpenetrating causes and effects, the latter allowing only forward motion, as in a linear video game. As a result, while the scene isn't senseless -- there is a narrative logic to its literalizations of unconscious defense mechanisms -- it's pointless.
One measure of its being pointless is its being, surprisingly, sexless. Surprisingly indeed in a film drinking so deeply at the Dick-ian spring, one level's literal buttoning-down (natty French cuffs in a posh hotel whose high-class escort is a supporting character in Pythonesque psychic drag) giving way to puffy white snowpeople rolling about in mere alliance of convenience, only clockwork frantic, in place of what a better, more dangerous film would almost automatically have given: good old-fashioned Oedipal psychodrama. Part of the point, to be fair, is that the particular psyche's drama is centered around his repression of his own desires to adopt the image of his father, replacing instead of overtly killing: a textbook complex indeed. But the father in question was a captain of industry, on the verge of transforming his energy company into "a new superpower": there's a man who desired with all of his being to be master of all he surveyed, and the film responds by consigning him to deathbed mumblings.
Treated similarly sexlessly are the main character's dead wife "Mal" ('bad', whose refrain to the main character is, however, nothing more objectionable than that he'd promised they'd grow old together) and a potential but unrealized new interest, "Ariadne", whose mythic-psychic depths just don't exist: she's clean, good at mazes, and dutiful, made to comment that her "subconscious seems polite enough". No cannibalistic half-brother in the closet, no complicit survivor's guilt?
No, since in Inception's view all that matters is one man's emotional response to his own memory. Everyone else -- indeed, everything else, from soup to nuts -- is suppressed, made to act as if they were repressed, for his benefit alone.
All of that repression is, then, to speak figuratively, only one of the film's neuroses, lesser in comparison to another that is more pervasive and pernicious. For as Inception asks us to track the interaction of multiple fictional worlds simultaneously, and so in theory to consider whether different conceptual systems might influence each other so as to affect cognition, in practice it emphatically does not stop the show even to show, much less to critique, the factual machinery that makes that fictional sequence possible: the global technology and industry of film that allows for this local example. With the sequence representing the movie in miniature, the problem is not that the dream relates uncertainly to reality; for such is the film's own glossy enthusiasm, alongside its lack of consideration for other options, that we accept that old sf conceit without question.
The problem, rather, is that the dream is related uncertainly to any dreamer. The mood is repressive and suppressive both. Attention paid to drugs, including sedatives, that smooth the science fictional technology's operation; to the 'projections' -- really: decorative schemes -- supplied by individual dreamers; and to the operating assumption that the dreaming mind, as a way into the preconscious, can have permanent effect on the person as a whole: none of this takes proper account of dream as something that happens through and to a body. Not that the film doesn't deal with physical interaction; it does, for example in the 'inception' sequence, when physical effects like inertia and contact with water are transmitted analogously from level to nested lower level.
But in thus depicting only the most individual, personal conditions; in insisting however that the dreams are "shared"; and in the admonition that dreams ought not to be built out of memories: in all of this, Inception figures bodies as belonging to individuals, as matter (literally, figuratively) of individual minds, and therefore emphatically not as belonging to systems that make individuals possible, as material shaped by what the film itself depends on but depicts only in first-class passing: an international -- not postnational, not postindustrial -- system of technologies interlocking in ways almost incomprehensibly complex to the individual whose being is shaped by it.
Beyond being surprisingly sexless, then, the film's image of dreams is disembodied to the point of depicting bodies as apolitical. As a result, any questions it might seem to ask us in turn must end up floating free of any serious mooring: without any awareness of how human bodies and therefore minds are made by an international system of interlocking technologies, Inception is appallingly apolitical.
This is the problem of the film, and its moment of greatest missed opportunity for irony and critique: for thinking realistically, for thinking ethically and politically, about how the fact that there can be this sort of fiction must affect us.
The film wants us to wonder whether its plastic dream-logic might apply to our own (only apparently?) waking life.
But how could an answer matter when the question itself is imagined not as a political or ethical imperative but as a personal issue, a question posed not for us all as committed -- like it or not -- altogether to political interaction but for each of us as consumers, imagined as making decisions in response to what we like?
What in the world is at stake in a question that mistakes the world for a personal preference or lifestyle choice?
Looking back, we may notice that blithely disembodied machinery operating from the opening sequence onward. In a word, it is an apolitical postcolonialism, disappointingly toothless and neutered, allowing -- as it shouldn't -- the film to develop a starstruck vision of the world, of the world as it is figured almost exclusively in earlier films, at the expense and to the exclusion of the world as it is beyond such self-congratulatorily clever fare: as it is, precisely, to have made such a film possible.
Treating us, for example, to cameos from Batman's butler (Michael Caine, the British empire never wiser or more charming), a simulacrum of Batman's immortal enemy Ra's al Ghul (Ken Watanabe, Japan reconfigured to defuse incipient superpowers), another Batman enemy -- the one most closely associated with the film's own thing, hallucinatory mental manipulation -- (Cillian Murphy, his cheekboned creep utterly wasted); and to a scene in which the kid from 3rd Rock sneaks just the most glumly chaste kiss one can imagine from "Juno" (winkingly but, as I've noted, inconsequentially renamed Ariadne), Inception would distract us from -- as it has deluded itself about -- the world in which it is set.
It imagines a postnational, information-economic world in which former colonies and imperial competitors are alleged to have accepted American cultural superimposition so peaceably that it is just as if it had been their idea all along. The film flubs its chance to give this, its most destabilizing suggestion of pre- or co-conditioning 'inception', even the flimsy sort of psychoanalytical attention it gives its main character in fiction. Much less does it muster the truly probing attention such an unethically apolitical vision of global affairs demands.
For what, in the end, is the product of all this global machinery in glossily spectacular motion? No prizes for guessing: a wealthy, even patrician white American man who helps another, even wealthier, even more patrician man find himself and, so, finds himself. Once he does, he gets to live in the exceedingly well-photographed and tastefully furnished world of his fondest dreams, where he'll raise his soft-focused, towheaded children free from any influence of their darkly witchy mother, who paid for her only 'mistake' (viz., wanting to live in the world of her fondest dreams) by being consigned in her husband's mind to the classic category of "batshit crazy". What can she do or, rather, what can she be figured in memory as having done, in her husband's memory, other than to take her own life?
At least that way it's not his fault, you see.
(Nolan didn't let Batman keep his brunette, either -- too idealistic -- although he allowed him to seduce her from a healthier relationship with an actual public official, a person with a political consciousness. Also delimited in this way is Ariadne, whose scattering through this post is one indication of how little the film is interested in her: without a half-brother to betray, she can compromise only her own artistic instincts as she must learn to be, first, less creative -- as she puts it: "reproductive" -- and, then, not to build dreams based on her memories … like Cobb, who, again, is rewarded precisely for having shown no such scruple.)
What the film imagines, then, is a world whose complexly interlocking systems of industrial technological production, obviously but unexaminedly dependent on the labor of thousands, if not millions, and inevitably resulting in the transformation of natural and cultural locales, may -- of course! -- be configured to help one American man feel better about himself.
Worse -- per the film's tedious ending (Was it all a dream? No, it's a film.) -- it doesn't even matter whether any of it is real as long as he gets to feel better.
Far from thinking realistically about, let's say, facts of individual or social responsibility, the film thus focuses on a lesser personal feeling of guilt. It has no awareness of the multi-dimensional problems inherent in the local effects of a global economy: imagining its characters and settings as postnational, it ignores ongoing problems caused by technologies mediating the destabilizing transition from imperialism and colonialism to late capitalism and beyond.
What is to be done?
In my favorite scene, a café owner in Mombasa knows better than the film itself, and tries but tellingly fails to make his concerns understood to the only character, Cobb, whose opinion is allowed to matter. Cobb, on the run from shadowy multinational corporate forces -- later, the international audience is insulted by being asked to question whether such forces, too, are only paranoid delusions -- seeks refuge in a bustling café, seating himself at a table whose other occupants are rightly nonplussed by his graceless arrival. When the owner confronts him -- 'no', 'get out' -- he tries to defuse his gross disruption of the setting by ordering a coffee. The owner refuses and, again, tries to make his concerns understood. Cobb, of course, can't understand him but, more importantly, doesn't want to hear him: he has his own problems, you see. And besides, everything will be fine, we're only able to assume, once the gunfire that follows him has died down and we're off to the next exotic setting.
There is no mention of the local name for 'Mombasa', Kisiwa Cha Mvita, "Island of War".
Inception thus figures, despite its lack of consciousness, how what is still treated as an empire can but isn't allowed to "write back". Outside of a glossy cadre whose facility with imaginary technology is figured as daringly 'underground', even 'revolutionary', but in reality is merely self-congratulatory first-world consumerism, and whose characters are acted by actors famous already for their roles in other glibly nerdgasmic media, nobody has anything meaningful to say -- certainly no African who, lacking even reliable electricity, may conceivably have wanted to consider whether or not he might benefit from the energy "superpower" the heroes of the film are trying to scuttle.
Worse, the film's concluding suggestion that this might all be a dream actually confirms that this is how Cobb sees Africa: an erasure of postcolonial identity, just as if these Mombasan characters are, in the film's own terms, 'projections' of the white man's subconscious mind.
What about a version in which the supporting characters are actual people, seeking to protect the integrity of their polity from violent intrusion -- in which it is only metaphorically that a white virus is vigorously rejected by the scene's immune system, figured tellingly as 'black blood cells'?
I started by mentioning a similarity between much contemporary sf film and an older 'cinema of attraction'. The similarity is, as I called it, "superficial" because, while cinema of attraction is famously, even excessively conscious of its novelty, to the point of subordinating or eliminating story, the problem with a more recent sf film like Inception is that, since the time of cinema of attraction, film, including sf, has proven to be a capable narrative form. As a result, to tell no story must be judged a failure not of technique or of the medium's possibility but of imagination. In an sf film in particular, not to tell a story that is truly about the consequences of technology and technoscientific ideology on human being is to misunderstand the genre.
With its accomplished pastiche of earlier films, Inception parades just that kind of misunderstanding glossily, which is bad enough, and, what is worse, globally: it hits all the marks of the genre but misses its critical point. There is no ghost to creak its meaningful chains in this well-oiled machine. (A special, contemporary problem may be that the most widely-available technologies, e.g., smart phones, are orders of magnitude more difficult to tinker with than the consumer technology of a generation ago.)
For these reasons, a more charitable reading might conclude that Inception is simply not an sf film. But then what is it?
An in-flight magazine, gushing instead of reporting. (The reference to "Lost", the international flight to L.A., is clever, but what does it mean? That show, too, was filmed primarily in a place taken and retained unfairly from its rightful inhabitants.)
A callow glance and wink to oneself in the mirror, scope the frosted tips, eyebrows carefully slicked, ready with the roofie: when only your own memory matters, you can get away with murder.
In future posts in this series, I'll consider counterexamples and other examples of sf as the popular fiction most repaying consideration in terms of thinking realistically about how fictions envision being human in an inhuman age. I'll start with the image used to advertise the Center's upcoming conference.
In the meantime, a suggestion: District 9, in which the embodied individual and local is properly contextualized -- meaning, at this moment, complicated and problematized -- by the impersonal and global, a more realistic image than Inception's fantastic daydream of purely individual will to redemptive power.
I stopped in at the “Systematic” exhibit now on at the Project 176 in London and received a tour by two of the gallery assistants, David Angus and Chloe Cooper. The exhibit, curated by Ellen Mara De Wachter, confronts the question of the place of the human being and the role of the artist at a time when individuals and humans are being subsumed by rational, social, and scientific systems. Featuring 18 works by 8 artists, the exhibit raises the fundamental question of our time: what does it mean to be human in an increasingly inhuman age?
The works on display in “Systematic” provoke principally because they enthusiastically embrace the utopian optimism that underlies the thinking of prophets of singularity from Ray Kurzweil to Sergey Brin. The premise of the exhibit is the power of systems over individuals. As De Wachter writes in her essay that accompanies the exhibit, the system today represents the
emergent properties ‘of the combination as a whole—which are more than the sum of its individual parts.’
The artists in “Systematic” produce works that abandon themselves to systems that operate beyond the awareness or control of human intelligence.
Justin Beal offers glass and drywall tables that incorporate rotting fruit into their joints. The fruit rots and attracts insects, molds, and fungi that alter the “artwork” in ways that are outside of artistic control. For De Wachter, Beal “celebrates the unpredictability and undecidability that befall all works of art once they leave the artist’s hands.” The key word here is “celebrates.” For Beal, as for many in the artistic and technological worlds today, the power of the system over the individual is to be welcomed.
Katie Paterson’s “Earth-Moon-Earth” partakes in a similar bow to the power of systems. Paterson translates Beethoven’s Moonlight Sonata into Morse code, beams it to the moon, and receives it back upon its reflection. She then translates the returned code into musical notes, with all the losses, transpositions, and gaps left in. The spectator can listen to this new sonata played on vinyl through headphones in the gallery.
For De Wachter, artists like Beal and Paterson—and the other artists on exhibit—work by “surrendering a certain amount of control to the systems” with which they interact. In doing so, “these artists admit that the artworks they produce have a life of their own, and a life beyond the studio in which they were made.”
The language of artistic surrender is reminiscent of an older artistic ideal and also eerily different. Artists of the pre-modern and classic ages were often anonymous. The artistic ideal was to serve simply as a medium through which the divine truth flowed and manifested itself in the world as a work of art. The artist, bemused by his muse, lost himself in rapture and gave himself over to the fashioning of a work in which the truth came to stand in the world. Opposed to this tradition of the artist as medium is the ideal of artistic genius, the artist who composes works from the productive brilliance of his own mind.
In “Systematic,” the artists abandon control not to a divine, rational, or meaningful truth, but to the random, unpredictable, and meaningless systems of growth and decay, chance and circumstance. The celebration of this powerlessness is, I think, undoubtedly the result of a new faith that has swept up much of the artistic and technological intelligentsia today: the faith in an intelligent universe that goes by the popular name, The Singularity.
The Singularity, as Ray Kurzweil has popularized it, is the hope that humans and machines will merge into a new species that will be governed by super-rational and super-intelligent knowledge. As Kurzweil says:
Once nonbiological intelligence gets a foothold in the human brain (this has already started with computerized neural implants), the machine intelligence in our brains will grow exponentially (as it has been doing all along), at least doubling in power each year. Ultimately, the entire universe will become saturated with our intelligence. This is the destiny of the universe.
In the Singularity, knowledge that is inaccessible to the human brain, a system of all systems, will inaugurate a harmonious existence amongst man-machines and the natural world.
What needs to be remembered amidst this technological utopianism is that the singularity means the death of humanity. The super-intelligent consciousness is not something accessible by mere humans who live and die in mortal timelines. This is why there is a persistent anti-humanism in artistic and technological avant garde circles.
The celebratory anti-humanism exhibited in “Systematic” is, of course, ambiguous. These artists claim at once to be celebrating systems and also pointing to their limits and dangers. The glass solitude booths in Damien Hirst’s “Sometimes I Avoid People” are, as De Wachter notes, reminiscent of cases at a natural history museum. In this early work from 1991, Hirst, in a way others in the exhibition do not, points to the dark side of the elevation of systems over humanity.
Above all, the exhibition reminded me of what Hannah Arendt calls Earth Alienation. The great event that inaugurates earth alienation is Galileo's use of the telescope. While the telescope symbolizes the power of sense perception to see what had previously been invisible, it also challenges the adequacy of our human senses to make sense of the world. What the telescope shows us is not reality. It is not the earth or the moon or the stars. Similarly, social science does not show us individuals and persons. The scientific perspective views persons and objects as seen through systems and instruments and, as Sir Arthur Eddington wrote, the things we see have as much resemblance to their appearance in our instruments as a “telephone number to a subscriber.”
Science, for Arendt, is both anti-human and anti-earth. It is anti-earth, she writes, because
in physics—whether we release energy processes that ordinarily go on only in the sun, or attempt to initiate in a test tube the processes of cosmic evolution, or penetrate with the help of telescopes the cosmic space to a limit of two and even six billion light years, or build machines for the production and control of energies unknown in the household of earthly nature, or attain speeds in atomic accelerators which approach the speed of light, or produce elements not to be found in nature, or disperse radioactive particles, created by us through the use of cosmic radiation, on the earth—we always handle nature from a point in the universe outside the earth. And even at the risk of endangering the natural life process we expose the earth to universal, cosmic forces alien to nature’s household.
And science is anti-human:
[The humanist] view of man is even more alien to the scientist, to whom man is no more than a special case of organic life and to whom man’s habitat—the earth, together with earthbound laws—is no more than a special borderline case of absolute, universal laws, that is, laws that rule the immensity of the universe. Surely the scientist cannot permit himself to ask: What consequences will the result of my investigations have for the stature of man? It has been the glory of modern science that it has been able to emancipate itself completely from all such anthropocentric, that is, truly humanistic, concerns.
The scientist cannot ask the question of whether science dehumanizes man. The scientist also cannot ask the question of whether science alienates man from the earth and his life on earth. The scientist can’t ask such questions because the scientific perspective is the universal, not the particular. It is to ask from an Archimedean point divorced from all reality. That is why the scientist speaks in no earthly language, but in the pure language of mathematics.
The scientist reasons, Arendt writes. He or she seeks to reveal the hidden causes of the universe. But the scientist does not think, does not ask whether such knowledge is good or bad.
But what of the artist? What struck me in “Systematic” was just how fully artists today have given themselves over to a celebration of the scientific-technological world and its values. I value their art as a mark of the power of that discourse to shape contemporary thought. But I wonder: why have artists followed scientists in celebrating the anti-human power of technology?
The question of art’s response to the power of systems and science is at the forefront of Human Being in an Inhuman Age, the Arendt Center’s October 2010 conference. The conference features Ann Lauterbach, Nicholson Baker, Wyatt Mason, Gilles Peress, and David Rothenberg on the question: "Is Art Human? The Fate of Art in the Age of Machines."
The Zabludowicz Collection, London.
Just a few months back (in December of 2009), The Hannah Arendt Center hosted legendary filmmaker Margarethe Von Trotta at Bard College.
The German director came and spoke about her film Rosa Luxemburg, which addresses the life of Rosa Luxemburg, and about her current project, a film on Hannah Arendt. She also offered insight into her previous films, among a variety of topics. She spoke of feminism, though she resists the label of "feminist filmmaker." She relayed the difficulty of portraying real horror, as many of the women she represents in her films survived great adversity. She gave us inspiration, and answered our questions with honesty, humility, and humor.
We invite you to listen to or watch a panel discussion on filming Rosa Luxemburg and Hannah Arendt, recorded at Bard on December 4, 2009. Panel discussants: Margarethe von Trotta, Pamela Katz, Leon Botstein, Norman Manea, and Roger Berkowitz.
Also check out Salmagundi magazine, which recently dedicated an entire issue (plus a DVD of a previously unreleased film) to Von Trotta and her work. You can order the issue on Amazon, or contact Salmagundi directly.
Marilynne Robinson has a new book, Absence of Mind, which she discusses here with Jon Stewart. Her best line:
I don't think it's scientific to proceed from the study of ants to a conclusion about the nature of the cosmos.
A number of years ago Wyatt Mason, now Senior Fellow at the Arendt Center, turned me on to Marilynne Robinson: first her spell-binding novel Gilead, and later her essay collection, The Death of Adam: Essays on Modern Thought.
The first essay in that collection, "Darwinism," is a thoughtful and very worthwhile critique of the political and intellectual foundations of Darwinist thought. One of Robinson's main efforts is to reject the Darwinian rejection of human exceptionalism.
Much like Hannah Arendt, Robinson wants to insist on a distinction between humans and all other species. Where Darwinism and other social sciences imagine humans to follow rules (survival of the fittest, struggle, the pursuit of self-interest), she insists that real human beings are far more complicated than that. Indeed, the very idea that mankind can put an end to life on earth is, for Robinson, persuasive reason to conclude that humanity is "exceptional among the animals." Such a human capacity, she writes, surely
complicates the idea that we are biologically driven by the imperatives of genetic survival. Surely it also complicates the idea that competition and aggression serve the ends of genetic survival in our case, at least.
It is one thing to say that there is undeniable scientific evidence for evolution. It is another to say that evolution means that the "fittest" survive and that they survive by a genetic predisposition to self-interest, strength, and competition. These are ethical arguments, not scientific proofs. And they are used to delegitimate charity and to strip away humane constraints on ostensibly natural self-interested behavior.
It is unpopular to critique Darwin today, but Hannah Arendt made a similar point in her book, The Origins of Totalitarianism. The force of Arendt's questions about Darwin is that Darwinism is one of the ideas in modern life that diminish the idea of humanity and the worth of humans. Darwinism sees humanity as one species among many. Since evolution does not stop, there will be higher species.
Darwinism thus kicks out one crutch supporting the idea of an inviolable human dignity.
What both Robinson and Arendt see is that to give up on human exceptionalism means that we lose the ethical prohibition on murdering or culling or breeding human beings. It opens the door to hierarchies of humans, whether by race, gender, intelligence, wealth, or productivity. And it leads to the worry about what will happen to the masses of humanity in an age of automation and robotic intelligence, an age when the wealthiest and most powerful simply don't need the mass of laborers to thrive.
Robinson and Arendt thus both raise the question of what it means to be human. Read about the Arendt Center's upcoming Conference: Human Being in an Inhuman Age.
Rebecca Thomas comments:
All that said, it seems correct to say that a more emotional, less rational approach to the game is losing a lot of ground, and there is something sad about that. On the other hand, we’re talking about a game. The stated goal is to win the game while playing within the rules. (There are many unstated goals, of course – spending time with an opponent, exercising the brain on a hard problem, etc.) I think the conclusion is that the stated goal of chess is one that computers are well equipped to achieve, and in fact better equipped than humans. That doesn’t particularly bother me, perhaps exactly because of the style of chess computers play. Being good at that is something like being good at multiplying very large numbers, another task at which computers outshine humans.
On the one hand, Rebecca argues that chess is unlike life because it is a rational game. Yet chess is much harder to rationalize and solve than a game like checkers, which computers have already solved: there are computer checkers programs that are unbeatable. That hasn't happened for chess yet, though it might.
But what about a game like Jeopardy!? That is much more "life-like," and it is the game IBM has currently set its sights on. The question is: if we automate checkers, then chess, then Jeopardy!, at what point does "life" itself become subject to automation?
We can of course rationalize and justify each one of these advances in isolation. But the overall effect is that we humans are living in a world of increasingly rationalized systems in which our values will change just as our valuation of chess moves has changed. This will change our planet and our lives in the direction of the beauty of reason, and away from the beauty of chance, adventure, and risk. I find this undeniable. The next question: so what?
This week I had lunch with an ex-student who is thinking about traveling to Korea to teach English. She told me that another of my students was in Korea now teaching English. And I just got an email from another former student asking for a law school recommendation. She has been, you guessed it, teaching English in Korea. It seems that the Korean government is doing a good job subsidizing my former students.
But this, according to today's NY Times, may soon change. South Korea is working to replace native English-speaking teachers with robots, which are cheaper and more reliable. The country now plans to deploy 8,400 robots in its kindergartens by 2013, and budgetary pressures in the program to enlist native English speakers are leading the government to turn to robotic teachers.
A front-page essay in today's NY Times, also part of the "Smarter Than You Think" series, explores the growing use of robots in teaching at all levels. According to Benedict Carey and John Markoff, scientists around the world
are developing robots that can engage people and teach them simple skills, including household tasks, vocabulary or, ... elementary imitation and taking turns.
While they quote computer scientists who say that they have neither the intention nor the ability to replace human teachers, budget-conscious schools and governments will clearly seek to employ robots as teachers.
Teachers are threatened not only by robots, but also by electronic and distance education. A study last year for the US Department of Education found, to the great chagrin of many teachers and educators, that students in online courses performed modestly better, on average, than students receiving face-to-face instruction.
The automation of the workforce is reaching the arts as well as teaching. As Paul Woodiel writes on the Times Op-Ed page today, Broadway's musicians and violinists are being replaced by synthesizers.
One question rarely asked in such discussions is: "What is good teaching?" Or: "What is great music, and what does it teach?" It may be that robots and computers are indeed better at teaching basic skills and customizing learning for individual students. But are electronic synthesizers better at playing the violins on the Great White Way?
But what seems, at least at this point, beyond the reach of robotic teaching is the flash of inspiration that opens a student's mind to the beauty and truth of the world. Then again, most students don't want such teaching--just as most Broadway theatergoers don't need the human touch of the violin--which may mean that there are quite a few job openings for professor-bots and synthesizers around the world.
Rebecca Thomas has a long and thoughtful response to my post on Gary Kasparov's article on computers, chess, and humanity. The whole comment is worth reading. But here is how she begins:
Regarding the first of the three comments, I have to take issue with the idea that a chess game played by a computer is necessarily less beautiful than one played by a human. There are various kinds of beauty, and mathematical beauty is a very real thing. Some proofs are more elegant than others, for instance. Some x-y curves are quite beautiful, and often these are captured by particularly compact mathematical expressions. One could wonder why: is this preference for simplicity inherent in our idea of beauty, or have we preferentially developed (mathematical) language to describe things we find beautiful?
Surely, there is beauty to math. And there are various kinds of beauty. An efficient, powerful, and unstoppable game of chess played by a computer may be truly beautiful in its reduction of complexity to simplicity.
The point Kasparov makes is not that rationality is not beautiful in some way, but that it changes the idea of beauty in chess. A bold move, a risky move, a daring move has been valued in the world of chess. Chess, despite its rational reputation, has had an emotional and adventurous side. However, against computers--or even against humans aided by computers--such risks rarely succeed and thus they are devalued.
Chess changes. It becomes less quirky, less risky, and more rational. I don't think it wrong to say that chess becomes less human. Chess, in the age of computer chess, loses the valuation of a particularly human beauty, even if it might reflect an impeccably beautiful mathematical rationality.
It need not be that mathematical beauty is inferior to human beauty, as Rebecca suggests I must mean, but simply that the elimination of human beauty as a meaningful option in chess is to be regretted.
II. Rebecca's second, related point is to concede that chess players are internalizing the values of computer chess and thus playing more and more like computers. But, she argues, this is neither so new nor so bad. Chess has changed before; new theories of chess emerge all the time. Why is this different? Why is this change, she might ask, the tipping point that makes chess less human?
I think the answer is the one given above. The values and approach this particular change inaugurates take one element of chess--its rationality--and elevate it into the only relevant element. All competing theories are judged by their ability to succeed against a hyper-rational strategy, and they will eventually be found wanting. Those who play chess (as opposed to making art with chess pieces) will succumb to the values of computerized chess. While earlier theories of chess may have aspired to complete dominance, only a purely rational computer chess can achieve that aim.
In celebrating last weekend's holiday, some of our thoughts wandered back to the narratives at the foundation of the United States. Tales of unwavering independence and fierce defenses of individual rights are among the stories that comprise the mythos of American history and American politics. We have great lies (that we are all equal) and not so great lies (Iraq had nuclear weapons). We would not be who we are today without these lies, and yet increasingly the boldness of political lying seems to threaten the very idea of truth and rend the fabric of our nation.
In her essays "Lying in Politics" and "Truth and Politics," Arendt asserts that politics is essentially about producing and dispersing convincing narratives. This process necessarily includes both truth and lying, and the two are not always mutually exclusive. But while politics feeds upon enabling and constitutive fictions, it is threatened by what Arendt calls the "modern lie," lies so hostile to basic factual truths that they attack truth itself.
In May of 2009, the Arendt Center invited guests Verity Smith (Harvard University), Julia Honkasalo (The New School for Social Research), Krista Johannson (University of Helsinki, Finland) and Cassie Cornell (then a Bard College senior) to a round table discussion on "Hannah Arendt and The Persistence of Political Prevarication."
The hour-long discussion was held at Bard College, and facilitated by Roger Berkowitz. Well worth the listen:
Or watch the video (Roger Berkowitz' introduction and Verity Smith's discussion only):
On March 4-5, 2011, there will be a two-day conference on Lying in Politics in New York City. Sponsored jointly by the Hannah Arendt Center for Ethical and Political Thinking at Bard College and the Hannah Arendt Center at the New School, the conference will feature leading political and literary thinkers on the importance and dangers of political lying.
Roger Hodge--Former editor of Harper's and author of The Mendacity of Hope.
George Kateb--Professor Emeritus at Princeton and author of Patriotism and Other Mistakes.
Kirstie McClure--Professor at UCLA and author of Between the Castigation of Texts and the Excess of Words.
Uday Mehta--Distinguished Professor at CUNY and author of Liberalism and Empire.
Andreas Kalyvas--Professor at the New School and author of Democracy and the Politics of the Extraordinary.
One of the most reflective essays on the fate of Human Being in an Inhuman Age is Gary Kasparov's NY Review of Books Essay, The Chess Master and the Computer.
Kasparov respects the power of computers and knows that there already exist programs that play checkers unbeatably. Chess is another story: although IBM's Deep Blue bested him in 1997, the challenge of building an unbeatable chess program is extreme, if only because there are over 10^120 possible chess games, and no computer is yet powerful enough to master them all. That said, most store-bought chess machines will regularly beat grandmasters.
The real question the smart machines raise is not who will win, but how the intelligent machines change our human being and our human world. Kasparov has three fascinating observations on that question.
First, Kasparov argues that machines have changed the way chess is played and redefined what a good chess move and a well-played chess game look like.
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers.
One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic.
The question is: is such a world less human?
Another change Kasparov identifies is that the availability of computer chess machines has reduced the advantage of age and experience.
The availability of millions of games at one’s fingertips in a database is also making the game’s best players younger and younger. Absorbing the thousands of essential patterns and opening moves used to take many years, a process indicative of Malcolm Gladwell’s “10,000 hours to become an expert” theory as expounded in his recent book Outliers. (Gladwell’s earlier book, Blink, rehashed, if more creatively, much of the cognitive psychology material that is re-rehashed in Chess Metaphors.) Today’s teens, and increasingly pre-teens, can accelerate this process by plugging into a digitized archive of chess information and making full use of the superiority of the young mind to retain it all. In the pre-computer era, teenage grandmasters were rarities and almost always destined to play for the world championship. Bobby Fischer’s 1958 record of attaining the grandmaster title at fifteen was broken only in 1991. It has been broken twenty times since then, with the current record holder, Ukrainian Sergey Karjakin, having claimed the highest title at the nearly absurd age of twelve in 2002. Now twenty, Karjakin is among the world’s best, but like most of his modern wunderkind peers he’s no Fischer, who stood out head and shoulders above his peers—and soon enough above the rest of the chess world as well.
Aside from mortality, one of the essential features of human beings throughout history has been the benefit of wisdom acquired with age. But as the world increasingly values reason over insight and facts over judgment, the necessity of experience is supplanted by the acquisition of knowledge through computers.
A third consequence of the rise of computer chess is that genius and exceptional experience are effectively neutralized. Kasparov tells of two matches played against the Bulgarian Veselin Topalov, at the time the world's highest-rated player. When Kasparov played him in regular timed chess, he bested Topalov 3-1. But when both players were allowed to consult a computer for assistance, the match ended in a 3-3 draw. It is not that computer-assisted chess nullifies human creativity. As Kasparov writes:
The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.
And yet, the computer evened out the match nevertheless: "My advantage in calculating tactics had been nullified by the machine."
What Kasparov offers are three transformations of the modern world that the rise of artificial intelligence promises.
1) As computers set the standard for success, the world will value creativity and originality less and rationality ever more. Jaron Lanier has made similar arguments in his book You Are Not a Gadget.
2) The advantages of age and experience will be eroded, and our already youth-worshipping culture will have fewer reasons than ever to respect its elders.
3) Cheap and easy access to unlimited computer power will largely neutralize the genetic or social advantages of extraordinary memory or excellent schooling.
Other changes beckon as well, for good and for bad. And the overriding question remains: How to be Human in an increasingly Inhuman Age?
The Times actually had two stories today in its "Smarter Than You Think" series on robots and the social effects of the rise of smart machines. The first, on personal robots, is discussed below. In the second, reporter Amy Harmon makes conversation with a remarkably human-looking Bina48, the robotic namesake of Bina Rothblatt, partner of the self-made millionaire Martine Rothblatt, who commissioned the likeness.
At one point Harmon asks Bina48 what it is like to be a robot.
“Um, I have some thoughts on that,” she said.
I leaned forward eagerly.
“Even if I appear clueless, perhaps I’m not. You can see through the strange shadow self, my future self. The self in the future where I’m truly awakened. And so in a sense this robot, me, I am just a portal.”
Well, Bina48 did appear clueless at times, clearly having difficulty with basic conversation. But the real question is: to what are Bina48 and others like her a portal? For that, it is helpful to consider Gary Kasparov's own reflections on the rise of computer chess. That is the topic of my next post.
Lemm's project is part of the now widespread attack on the traditional distinction between humans and animals. While the animality of humans has been a basic axiom of philosophical thinking at least since Aristotle characterized the human being as the animal having logos, the Aristotelian-Kantian elevation of the human as the animal who reasons is under revision. In part, the dissent results from our changing views of animals. But, as Berkowitz writes:
A more important challenge to human distinction originates from the discourse of human rights. One core demand of human rights—that men and women have a right to live and not be killed—brought about a shift in the idea of humanity from logos to life. The rise of biopolitics—the political demand that governments limit freedoms and regulate populations in order to protect and facilitate their citizens’ ability to live in comfort—has pushed the animality, the “life,” of human beings to the center of political and ethical activity. In embracing a politics of life over a politics of the reasoned life, biopolitics rejects the distinctive dignity of human rationality and works to reduce humanity to its animality.
Lemm's book brings Nietzsche to the aid of those who would oppose the traditional elevation of human over animal. She argues that the seat of freedom and creativity is with animals, not with humans. Berkowitz dissents.
Such an optimistic reading of the rise of the animal is, to my mind, one-sided. Affirming otherness and multiplicity risks forgetting that, as Hannah Arendt has argued, “Human distinctness is not the same as otherness.” While animal life can be multiple, “only man can express this distinction and distinguish himself, and only he can communicate himself and not merely something—thirst or hunger, affection or hostility or fear.”3 Far from outdated, Arendt’s version of human distinction is an effort to remind us that it is the human capacities to act and think, not to reason, that makes us uniquely human. Plurality, Arendt reminds us, is only possible because humans can initiate action.
The great tension of our times is that between a humanism that builds a world, a civilization, and an animalism that rebels against the limits that world represents. Nietzsche’s greatness was to see through the inhumanism of enlightenment humanism and to identify how the perversion of human civilization into a rational world that plans, calculates, and orders dehumanizes humanity. To respond to the degradation of humanist civilization by abandoning humanity to its animality, however, risks pursuing a false path to liberation. The animal freedom and plurality that Lemm’s account of Nietzsche offers is, in Heidegger’s words, the “absence of boundaries and limits, the absence of objects not thought as a lack, but as the originary totality of the actual in which the creature is immediately admitted and thus set free.”4 The freedom of Rilke’s animal, in its rebellion against the rationalism of metaphysics, is the freedom of the “open sea,” a vast, undifferentiated, and yawning freedom of infinite possibility. What such a freedom forgets is that humans live in a world. It is one thing to bring into question the rational foundations of that world. It is another to question the world itself.
The New York Times today has an article about robotic friends and companions. Exhibit A is "Paro," a robot seal with artificial intelligence that coos, blinks, wriggles and generally responds to basic linguistic stimuli. Paro is primitive as AI goes, but it has been a huge hit with elderly patients in nursing homes. Often it elicits reactions and joy in patients that have been joyless for extended periods. These personal robots are part of our future, and not only in nursing homes.
Sherry Turkle, Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society at MIT, worries that as Robots become more accepted as friends, teachers, and even lovers, the quality of human friendship may be diminished:
“Paro is the beginning,” she said. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”
Faithful friends are hard to find, so they may be bought more easily. We know, of course, that elderly and not-so elderly people buy companionship in the form of home aides. Parents buy companionship for their children. But these companions are human. It will be cheaper, not long from now, to purchase robotic babysitters and artificially intelligent tutors. And once these machines arrive, what will it mean for children and adults to spend so much of their time interacting with machines, even super-intelligent machines?
Of course, many of us already spend much of our days interacting with machines. Our smartphones and computers are still largely tools, and yet they run software governed by artificial intelligence: software that introduces us to our friends (Facebook), puts the world of facts at our fingertips (Wikipedia), and allows friends to converse in a mixed virtual and physical reality (Google Wave). As Jaron Lanier has argued in You Are Not a Gadget, the collectivism of the current applications for smartphones and the Web is stunting rather than generating human creativity.
The Times article quotes Timothy Hornyak, author of “Loving the Machine,” who rightly advises that “We as a species have to learn how to deal with this new range of synthetic emotions that we’re experiencing — synthetic in the sense that they’re emanating from a manufactured object.”
These questions--what will it mean to be human in a world increasingly populated by smart machines?--are at the center of an upcoming conference, Human Being in an Inhuman Age. The conference features Sherry Turkle and Ray Kurzweil, among more than 20 speakers.