Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Drones are simply one weapon in a large arsenal with which we fight the war on terror. Even targeted killings, the signature drone capability, are nothing new. The U.S. and other countries have targeted and killed individual leaders for decades if not centuries, using snipers, poisons, bombs, and many other technologies. From a historical perspective, drones don’t change much. Nor is the airborne capacity of drones to deliver devastation from afar anything new, having as its predecessors the catapult, the longbow, the bomber, and the cruise missile. And yet there is seemingly something new about the way drones change the feel and reality of warfare. On one side, drones sanitize the battlefield, transforming a space of blood, fear, and heroic fortitude into a video game played on consoles. On the other side, drones dominate life, creating a low-pitched humming sound that reminds inhabitants that at any moment a missile might pierce their daily routines. The two sides of this phenomenology of drones are the topic of an essay by Nasser Hussain in The Boston Review: “In order to widen our vision, I provide a phenomenology of drone strikes, examining both how the world appears through the lens of a drone camera and the experience of the people on the ground. What is it like to watch a drone’s footage, or to wait below for it to strike? What does the drone’s camera capture, and what does it occlude?” You can also read Roger Berkowitz’s weekend read on seeing through drones.
Marilynne Robinson, speaking to the American Conservative about her faith, elaborates on what she sees as the central flaws in contemporary American Christianity: "Something I find regrettable in contemporary Christianity is the degree to which it has abandoned its own heritage, in thought and art and literature. It was at the center of learning in the West for centuries—because it deserved to be. Now there seems to be actual hostility on the part of many Christians to what, historically, was called Christian thought, as if the whole point were to get a few things right and then stand pat. I believe very strongly that this world, these billions of companions on earth that we know are God’s images, are to be loved, not only in their sins, but especially in all that is wonderful about them. And as God is God of the living, that means we ought to be open to the wonderful in all generations. These are my reasons for writing about Christian figures of the past. At present there is much praying on street corners. There are many loud declarations of personal piety, which my reading of the Gospels forbids me to take at face value. The media are drawn by noise, so it is difficult to get a sense of the actual state of things in American religious culture."
Is poetry going the way of the Dodo bird? Vanessa Place makes this argument in a recent essay, “Poetry is Dead. I Killed It,” on the Poetry Foundation website. And Kenneth Goldsmith, in the New Yorker, asks whether Place is right. The internet, he suggests, has killed poetry, or at least so thoroughly rethought it that it may be unrecognizable. "Quality is beside the point—this type of content is about the quantity of language that surrounds us, and about how difficult it is to render meaning from such excesses. In the past decade, writers have been culling the Internet for material, making books that are more focussed [sic] on collecting than on reading. These ways of writing—word processing, databasing, recycling, appropriating, intentionally plagiarizing, identity ciphering, and intensive programming, to name just a few—have traditionally been considered outside the scope of literary practice."
In a rare interview, famously reclusive Calvin and Hobbes cartoonist Bill Watterson prognosticates on the future of the comics: "Personally, I like paper and ink better than glowing pixels, but to each his own. Obviously the role of comics is changing very fast. On the one hand, I don’t think comics have ever been more widely accepted or taken as seriously as they are now. On the other hand, the mass media is disintegrating, and audiences are atomizing. I suspect comics will have less widespread cultural impact and make a lot less money. I’m old enough to find all this unsettling, but the world moves on. All the new media will inevitably change the look, function, and maybe even the purpose of comics, but comics are vibrant and versatile, so I think they’ll continue to find relevance one way or another. But they definitely won’t be the same as what I grew up with."
Cambodian director Rithy Panh's new movie, The Missing Picture, is about the rule of the Khmer Rouge in Cambodia. In making the film, he had to confront the challenge of making a movie about atrocities that are famously without explicit visual records, and he hit upon a unique solution: clay dolls. Although these figures "are necessarily silent, immobile, and therefore devoid of the intensity of those moments in other Panh films where his camera bores in on the face of a witness and lingers there as he remembers what happened, or what he did," Richard Bernstein suggests that they give the movie a unique power.
This week on the blog, Ian Storey revisits George Orwell's prescient essay, "Politics and the English Language." Jeffrey Champlin looks at James Muldoon's essay on Arendt's advocacy of council systems in On Revolution. And your weekend read looks at the cultural impact of drones on the nations and groups that employ them.
The response has been swift and negative to the Rolling Stone magazine cover—a picture of Dzhokhar Tsarnaev, who, with his now-dead brother, planted deadly homemade bombs near the finish line of the Boston Marathon. The cover features a picture Tsarnaev himself posted on his Facebook page before the bombing. It shows him as he wanted to be seen—and that itself has offended many, who ask why he is not pictured as a suspect or convict. In the photo he is young, hip, handsome, and cool. He could be a rock star, and given the context of the Rolling Stone cover, that is how he appears.
The cover is jarring, and that is intended. It is controversial, and that was probably also intended. Hundreds of thousands of comments on Facebook and around the web are critical and angry, asking how Rolling Stone could portray the bomber as a rock star. They overlook or ignore the text accompanying the photo on the cover, which reads: “The Bomber. How a Popular, Promising Student Was Failed by His Family, Fell Into Radical Islam, and Became a Monster.” CVS and other retailers have announced they will not sell the magazine in their stores.
That is unfortunate, for the story written by Janet Reitman is exceptionally good and deserves to be read.
Controversies like this have a perverse effect. Just as the furor over Hannah Arendt’s Eichmann in Jerusalem resulted in the viral dissemination of her claims about the Jewish leaders, so too will this Rolling Stone cover be seen by millions of people who otherwise would never have heard of Rolling Stone. What is more, such publicity makes it ever less likely that the story itself will be read seriously, just as Arendt’s book was criticized by everyone, but read by few.
Reitman’s narrative itself is unexceptional. It is a common story line: a young, normal kid becomes radicalized and does something none of his old friends can believe he could do. This is a now-familiar narrative, one we heard in the wake of the tragedies in Newtown (Adam Lanza was described as a nice, quiet kid) and Columbine (Time’s cover announced “The Monsters Next Door”).
This is also the narrative that Rolling Stone managing editor Will Dana embraced to defend the cover on NPR, arguing it was an “apt image because part of what the story is about is what an incredibly normal kid [Tsarnaev] seemed like to those who knew him best back in Cambridge.” It was echoed, too, by Erin Burnett on CNN, who recently invoked Hannah Arendt’s idea of the “banality of evil.” In the easy frame the story offers, Tsarnaev was a good kid, part of a striving immigrant family, someone who loved multiracial America. And then something went wrong. He found Islam; his family fell apart; and he became a monster.
This story is too simple. And yet within the Rolling Stone story, there is a wealth of information and reporting that does give a nuanced and thoughtful portrayal of Tsarnaev’s journey into the heart of evil.
One fact that is important to note is that Tsarnaev is not Eichmann. Eichmann was a member of the SS, a National Socialist security service engaged in world war and dedicated to wiping certain races of people off the face of the earth. He committed genocide as part of a system of extermination, something both worse than and yet less messy than murder itself. It is Tsarnaev, who had no state apparatus behind him, who became a cold-blooded murderer. The problems that Hannah Arendt thought the court in Jerusalem faced with Eichmann—that he was a new type of criminal—do not apply in Tsarnaev’s case. He is a murderer. To understand him is not to understand a new type of criminal. And yet it is a worthy endeavor to try to understand why more and more young men like Tsarnaev are so easily radicalized and drawn to murdering innocent people in the name of a cause.
Both Eichmann and Tsarnaev were from upwardly striving bourgeois families that struggled with economic setbacks. Eichmann was white and Austrian, Tsarnaev an immigrant in Cambridge, but both were economically disaffected. Tsarnaev wanted to make money and, like his parents, dreamed of a better life.
Tsarnaev’s family had difficulty fitting in with U.S. culture. His father was ill and could not work. His mother sought to earn money. And his older brother, whom he idolized, saw his dreams of Olympic boxing dashed partly because he was not a citizen. The older brother increasingly turned to a radical version of Islam. When Tsarnaev’s parents both returned to Dagestan, he fell increasingly under the influence of his brother.
Like Eichmann, Tsarnaev appears to have adopted an ideology that provided a coherent and meaningful narrative that gave his life significance. One can see this in a number of tweets and statements that are quoted in the article. For example, just before the bombing, he tweeted:
"Evil triumphs when good men do nothing."
"If you have the knowledge and the inspiration all that's left is to take action."
"Most of you are conditioned by the media."
Like Eichmann, Tsarnaev came to see himself as a hero, someone willing to suffer and even die for a noble cause. His cause was different—anti-American jihad instead of anti-Semitic Nazism—but he was an ideological idealist, a joiner, someone who found meaning and importance in belonging to a movement. A smart and talented and by most accounts good young man, he was lost and adrift, searching for someone and something to give his life purpose. He found that someone in his brother and that something in jihad against America, the land that previously he had so embraced. And he became someone who believed that what he was doing was right and necessary, even if he understood also that it was wrong.
We see clearly this ambivalent understanding of right and wrong in the note Tsarnaev apparently scrawled while he was hiding in a boat before he was captured. Here is how Reitman’s article describes what he wrote:
When investigators finally gained access to the boat, they discovered a jihadist screed scrawled on its walls. In it, according to a 30-count indictment handed down in late June, Jihad [Tsarnaev's nickname] appeared to take responsibility for the bombing, though he admitted he did not like killing innocent people. But "the U.S. government is killing our innocent civilians," he wrote, presumably referring to Muslims in Iraq and Afghanistan. "I can't stand to see such evil go unpunished. . . . We Muslims are one body, you hurt one, you hurt us all," he continued, echoing a sentiment that is cited so frequently by Islamic militants that it has become almost cliché. Then he veered slightly from the standard script, writing a statement that left no doubt as to his loyalties: "Fuck America."
Eichmann too spoke of his shock and disapproval of killing innocent Jews, but he justified doing so for the higher Nazi cause. He also said that when he found out about the sufferings of Germans at the hands of the allies, it made it easier for him to justify what he had done, because he saw it as equivalent. The fact that the Germans were aggressors, that they had started the war, and that they were killing and torturing innocent people simply did not register for Eichmann, just as it did not register for Tsarnaev that the people in the Boston marathon were innocent. There are, of course, innocent people in Iraq and Afghanistan who have died at the hands of U.S. bombs. Even for those of us who were against the wars and question their sense and justification, however, there is a difference between death in a war zone and terrorism.
The Rolling Stone article does a good job of chronicling Tsarnaev's slide into a radical jihadist ideology, one mixed with conspiracy theories.
The Prophet Muhammad, he noted on Twitter, was now his role model. "For me to know that I am FREE from HYPOCRISY is more dear to me than the weight of the ENTIRE world in GOLD," he posted, quoting an early Islamic scholar. He began following Islamic Twitter accounts. "Never underestimate the rebel with a cause," he declared.
His rebellious cause was to awaken Americans both to their complicity in the bombing of innocent Muslims and to his belief in the common conspiracy theory that America was behind the 9/11 attacks. In one tweet he wrote: "Idk [I don’t know] why it's hard for many of you to accept that 9/11 was an inside job, I mean I guess fuck the facts y'all are some real #patriots #gethip."
Besides these tweets that offer a provocative insight into Tsarnaev's emergent ideological convictions, the real virtue of the article is its focus on Tsarnaev's friends, his school, and his place in American youth culture. While his friends certainly do not support or condone what Tsarnaev did, many share some of his conspiratorial and anti-American beliefs. Here are two descriptions of the mainstream nature of many of his beliefs:
To be fair, Will and others note, Jahar's perspective on U.S. foreign policy wasn't all that dissimilar from a lot of other people they knew. "In terms of politics, I'd say he's just as anti-American as the next guy in Cambridge," says Theo.
This is not an uncommon belief. Payack, who [was Tsarnaev's wrestling coach and mentor and] also teaches writing at the Berklee College of Music, says that a fair amount of his students, notably those born in other countries, believe 9/11 was an "inside job." Aaronson tells me he's shocked by the number of kids he knows who believe the Jews were behind 9/11. "The problem with this demographic is that they do not know the basic narratives of their histories – or really any narratives," he says. "They're blazed on pot and searching the Internet for any 'factoids' that they believe fit their highly de-historicized and decontextualized ideologies. And the adult world totally misunderstands them and dismisses them – and does so at our collective peril," he adds.
The article presents a sad portrait of youth culture, and not just because all these “normal” kids are smoking “a copious amount of weed.” The jarring realization is that these talented and intelligent young people at a good school in a storied neighborhood come off so disaffected. What is more, their beliefs in conspiracies are accepted by the adults in their lives as commonplaces; their anti-Americanism is simply a noted fact; and their idolization of slacking (Tsarnaev's favorite word, his friends say, was “sherm,” Cambridge slang for “slacker”) is seen as cute. The adults show painfully little will to insist that the young people face facts and confront unserious opinions.
In short, the young people in Tsarnaev's story appear to have been abandoned by adults to their own youthful and quite fanciful views of reality. Youth culture dominates, and adult supervision seems absent. There is seemingly no one who, in Arendt’s language from “The Crisis in Education,” takes responsibility for teaching them to love the world as it is.
The Rolling Stone article and cover do not glorify a monster; but they do play on two dangerous trends in modern culture that Hannah Arendt worried about in her writing: First, the rise of youth culture and the abandonment of adult authority in education; and second, the fascination bourgeois culture has for vice and the short distance that separates an acceptance of vice from an acceptance of monstrosity. If only all the people who are so concerned about a magazine cover today were more concerned about the delusions and fantasies of Tsarnaev, his friends, and others like them.
Taking responsibility for teaching young people to love the world is the very essence of what Arendt understands education to be. It will be the topic of the Hannah Arendt Center's upcoming conference “Failing Fast: The Crisis of the Educated Citizen.” Registration for the conference opened this week. For now, ignore the controversy and read Reitman’s article “Jahar’s World.” It is your weekend read. It is as good an argument for thinking seriously about the failure of our approach to education as one can find.
Thomas Levin of Princeton came to Bard Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years, and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when the obsession with drones in the media is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors by newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization emerges that we are being watched, and this sense pervades the popular consciousness. In looking to these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls a “rhetoric of surveillance.”
In short, we talk and think constantly about the fact that we are, or may be, being watched. This cannot but change the way we behave and act. Levin then poses the question: what is the emerging drone imaginary?
To answer that question it is helpful to revisit an uncannily prescient imagination of the rise of drones in a text written over half a century ago, Ernst Jünger’s The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by science fiction novelist Bruce Sterling, Jünger’s text centers on a job interview between an unnamed former light cavalry officer and Giacomo Zapparoni, the secretive, filthy-rich, and powerful proprietor of The Zapparoni Works, which “manufactured robots for every imaginable purpose.” Zapparoni’s secret, however, is that instead of big and hulking robots, he specialized in Lilliputian robots that gave “the impression of intelligent ants.”
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—that exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination and subjection to technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage by drones and forget that if we had to send in armies to do these tasks the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don’t offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, we will all think twice before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we humans interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestionable love that humans are beginning to prefer spending time with their machines rather than with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with a speed and objectivity that humans can only dream of. In every area of human life where humans once were thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper, turn off YouTube, and instead revisit Ernst Jünger’s classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling’s introduction to The Glass Bees here.
We commonly assume that political acts and claims are shaped by some form of reasoning. How then do we respond to political stands in which arguments are piled atop arguments in contradictory ways, and where the force of the various arguments is less important than victory? We see in political discourse a definite willingness to embrace any argument that helps one win, whether or not it makes sense.
One example of our cynical embrace of bad arguments is the recent controversy over the East Side Gallery in Berlin. The Gallery comprises a series of murals that, over the course of the past two decades, an international cast of artists has painted and re-painted on an approximately one-mile stretch of the Berlin Wall. Indeed, the East Side Gallery occupies the longest existing remnant of the Wall, and it has become a significant landmark not only for those visitors who seek to experience something of the city’s Cold War past, but also for those long-time residents who regard it as an embodiment of the city’s contemporary feel and texture.
The tumult of the past few weeks erupted over the plans of a developer, Maik Uwe Hinkel, to construct luxury apartments and an office complex in the former border zone—now a modest green space—that lies between the East Side Gallery and the Spree River. According to the agreements reached by Hinkel and the local government, these new buildings would entail the creation of an access road and pedestrian bridge to allow passage to pedestrians, bicyclists, and emergency vehicles. The road and bridge, in turn, would require the removal of two stretches of the East Side Gallery and their replacement in the adjacent green space. Local planners had first approved the construction and the alteration to the East Side Gallery back in 2005, and since that time Hinkel’s plans had aroused little concerted opposition.
When workers lifted out one concrete slab from the Gallery on Friday, March 2nd, however, hundreds of demonstrators flocked to the site to prevent any further removals. A group of activists hastily organized a larger demonstration that same weekend, one that ultimately drew a raucous crowd of more than six thousand people. In the face of these surprising protests, Berlin Mayor Klaus Wowereit declared that all further work on the site would be postponed until at least March 18th, when a meeting of the major players would decide its fate. Since then, the developer and the relevant local officials have all declared their eagerness to find a solution that preserves the East Side Gallery in its current state. Even the slab removed earlier this month seems destined to return to its former location.
Yet the apparent success of the protest threatens to overshadow the problematic aspects of the demonstrators’ arguments. On the one hand, many of the organizers and protesters regarded their opposition as a small but significant rejoinder to the insistent tide of commercial development in post-Wall Berlin. To adopt the terms of Sharon Zukin’s recent book Naked City, they saw the East Side Gallery as an embodiment of the city’s distinctive authenticity and rootedness, which they argued should be protected from the homogenizing onslaught of upscale growth and gentrification. To wit, one of the coalitions that spearheaded the protest calls itself “Sink the Media Spree” (Mediaspree Versenken), a name that invokes developers’ recent efforts to transform the area along the river into a headquarters for high-tech communications and media. Its webpage declares that this portion of Berlin should preserve “the neighborhood” as it currently exists and not fall victim to “profit mania” (Kiez statt Profitwahn).
But the East Side Gallery cannot be cast so readily as an incarnation of local authenticity, especially the kind that stands opposed to commerce. First of all, many government actors and city residents were far more eager to see the Wall dismantled in the months and years after November 1989 than to see it preserved, and they condoned if not actively contributed to its wholesale removal. As a result, the survival of the East Side Gallery represents the exception, not the rule, in the city’s engagement with the Wall as a material structure. Second, artists from around the world initially established the East Side Gallery as a celebration of artistic and political liberty, but their murals received support from the local and national governments because they helped to draw tourists to Berlin and added to the city’s cachet as a cultural destination. In the light of this state patronage, I find it rather curious to hear activists pitching the East Side Gallery against the forces of capital and development.
On the other hand, many demonstrators contended that the alteration of the East Side Gallery would amount to an intolerable attack on the city’s historical inheritance. One variation of this position is that the removal of the two sections constitutes a dilution if not erasure of Germany’s traumatic past. According to this argument, the East Side Gallery should be left intact so that residents and visitors can confront the traces of the country’s division. Another, more strident variation insists that the construction plans display a callous disregard for those who suffered under the East German regime and, more specifically, lost their lives while attempting to escape it. In the words of one activist in Der Tagesspiegel: “the most important point is not whether the Wall will be opened. We are against the combination of removing the Wall and building hotels and apartments in death strips.”
Again, the East Side Gallery’s connection with Germany’s fraught past is not nearly as straightforward as the activists and demonstrators have suggested. As Brian Ladd details in his book The Ghosts of Berlin, the murals of the East Side Gallery were not painted until the early 1990s, after the Wall had fallen and East Germany had ceased to exist. In fact, this portion of the Wall could not have been painted before 1989, because it stood in East Berlin, and anyone who attempted to leave a mark on it, or even lingered near it, would have been apprehended by East German police officers or border soldiers. Of course, amateur and professional artists did draw and paint some striking imagery on the Berlin Wall during the Cold War, but they created it on the Wall’s “outer” surface while standing in West Berlin, where they had much less to fear from East German border personnel. The muralists who launched and maintained the East Side Gallery certainly meant to evoke and further this tradition of “Wall art,” but in the process they abstracted it from a prior historical era and relocated it in another part of the city.
I note these objections not because I support the proposed construction or the alteration of the East Side Gallery. In particular, I am not at all convinced that the partial removal of the Wall is really necessary, whether or not Hinkel and the city go ahead with the area’s development. But I am troubled by the protesters’ reluctance to take the ironies and complexities of the current circumstances more fully into account. They are too eager to cast the developer and local officials as the villains in this story, particularly when the city and the federal government have in fact created a substantial memorial landscape related to the Wall. And they are too quick to position themselves on the moral high ground. Given the Wall’s disappearance from virtually every other part of the city, their demands for preserving the East Side Gallery seem more than a little belated.
“All thought arises out of experience, but no thought yields any meaning or even coherence without undergoing the operations of imagining and thinking.”
- Hannah Arendt, Thinking
In the wake of an extraordinarily brutal punctuation to an extraordinarily brutal year of gun violence in the United States and across the continent, the eye of American politics has finally turned back toward something it perhaps ought never to have left: the problem in this country of the private ownership of the means to commit extraordinary brutality.
Perhaps unsurprisingly, public discourse around the problem has descended nearly instantaneously from fractiousness into what could now only generously be termed playground name-calling (to spend millions of dollars to publicly call one’s opponent an “elitist hypocrite” should feel extraordinary, even if it doesn’t). There are many tempting culprits to blame for this fall. The actors, of course, include some powerful players whose opposing ideologies so deeply inflect their understanding of the situation that it is entirely uncertain whether they are in fact seeing the same world, let alone the same problem within it. There is the stage on which the actors play, a largely national media structure whose voracious demands can be fed most easily, if not most effectively, by those who seek the currency of political power in hyperbole and absoluteness of conviction. Finally, there is the problem of constructing the problem itself: is it clear that private ownership of the means of extraordinary violence is so distinct a problem from that of its public ownership and (borderless) use? Can the line of acceptability between means of extraordinary brutality really be settled by types of implements, let alone the number of bullets in a magazine? What are the connections and disconnections between the events – Oak Creek, Chicago, Newtown,… – that have summoned the problem back onto our collective stage, and why had the problem disappeared in the first place when the violence so demonstrably had not? There is something in all of these instincts, but before we rush to decry our national theater (more Mamet than melodrama), it’s worth remembering that the problem is an extraordinary one, and that many of the pathologies of our various reactions to it spring from the same seed as our best resources: the nature of thinking itself.
The rhetoric used in describing the problem of gun violence – formulated so readily and so intractably – coupled with the unavoidable connection of the problem with intense emotion makes it tempting to suspect one’s political opponents in this arena of ceasing to think altogether. I will admit to sometimes being convinced that there was no thought at all behind some of the words being splayed across television screens and RSS feeds (not, it should be said, entirely without reason). Arendt, in Thinking, describes thinking and feeling as inherently mutually antagonistic, and whether or not that is true it certainly seems that the tenor and pitch of the vitriol make thinking, let alone conversing, difficult. But that may point to a reality still more sobering than the perennially (and maybe banally) true observation that a great deal of what passes for public discourse did not require serious thought in its formulation: that when we deal with certain kinds of events, and try to engage in the process of translating them and reconstructing them into the form of a problem, we are running up against dimensions of the human experience so extraordinary that they shove us flatly against the limits of what we are able to do in thought. Perhaps the struggle now is less against a chronic inability to think, and more with recognizing the ways in which the limits of how we can feel and see and know – and then think – have created limits not just to how we can understand the problem, but to how we can understand each other’s responses to it.
One permanent refrain in this debate is the culpability of violent media in generating cultures in which, it is said, such extraordinary brutality becomes possible (ignoring, it might be objected, that humankind has shown a rather vibrant aptitude for brutality for quite some time). The newest variation on this theme, which in structure has changed little since its revival by Tipper Gore and Susan Baker in the 1980s, is that violent video games, by wedding the sensation of the rapid pleasures of accomplishment unique to video games with a sense of agency in apparent violence, have created a generation desensitized not just to images of extraordinary violence, but to the prospect of committing it oneself. A friend of mine who has good reason to be sensitive was so infuriated at the NRA’s release of a mobile app promoting “responsible gun use” one month to the day after the Newtown shootings that he couldn’t eat for several days.
If it is possible to set aside questions of titanically poor taste and worse (and it’s not clear that we should), there is something about this way of thinking about the problem of violent imaginaries that reflects what I am suggesting is an issue of pathologies arising from mental necessities.
There is little use denying that being intensively immersed in gaming environments (any gaming environments, and not just violent electronic ones) for extended periods of time can seriously, if usually temporarily, alter a person’s phenomenal experience of their own agency and the realness of the world around them (I confess this as a recovering Sid Meier enthusiast myself). But the concept of de-sensitization is a difficult one in particular because, as Arendt points out, de-sensitization is precisely what thought does, and must do to carry out its work. Nowhere is this more clear than in those cases in which we are confronted with events that seriously strain the possibility of thinking about them at all.
Thinking about tragedies involves a twin process that need not, and should not, lessen the experience of their terribleness…but it always can. That twin process, as Arendt describes it, is one of de-sensation and re-sensation. When we try to think about what has occurred, we have to call it up; we reproduce it “by repeating in [our] imagination, we de-sense whatever had been given to our senses.” In remembering, we convert the data of our senses, including our common sense, into objects of thought. We do that in order to make them fit for the preoccupation of thought, our “quest for meaning”; in other words, re-sensation, the process of translation into narrative and metaphor by which facts become truths.
It’s not difficult to see how extraordinary brutality challenges this double operation to the point of impossibility. On the one hand, this model of de-sensation by the reproductive imagination presumes a kind of voluntarism to the recollection, when often, and most especially in the cases like those of immediate victims where the stakes are highest, recollection comes unbidden, and far from de-sensing involves the cruel and incessant reiteration of sense that is renewed in all of its thought-destroying power. On the other hand, extraordinary brutality by its very nature resists re-sensation in proportion to its extraordinariness: to read the trial of Anders Breivik, for example, is to watch a play of the utter failure of not only the killer’s own efforts at narrative, but those of every single speaking person involved. It is not a surprise that these trials test the law’s own limited strictures of re-sensation to the breaking point, which often comes as nothing more than quiet acquittal (as with Mathieu Ngudjolo Chui, in whose case international law was forced to confess the inadequacy of its categories).
What’s more difficult to see is how that terrible challenge presented by extraordinary brutality to our very capacity to think is simultaneously a challenge to our politics, one perhaps graver still for our hope, as Arendt puts it in her Denktagebuch, to share a world with those with whom we must live. Extraordinary brutality makes a shambles of our narrating powers, and the failures of others to make sense of it which incite our scorn – as when, I will admit, even as someone who grew up in a gun culture, I literally cannot make sense of the suggestion that high-capacity magazines would be better combated by their increased prevalence in the school environment itself – are no less replicated by our own attempts, whether or not we can see and admit it. Imagination’s other function, its most political function for Arendt, is to put ourselves in the place of others in order to more fully see the political world that confronts us. If this is true, then it is not our capacity to put ourselves in the place of a killer that most threatens our political capacity to respond, whatever the prevalence of this problem in popular discourse. This may often be an impossibility, but the stakes are much lower than that of the impossibility of putting ourselves in the places of others who are also trying – and like us mostly failing – to respond. In trying and failing to renarrate tragedy in order to construct political problems and solutions, we come up against the limits of our imaginations, limits that are themselves defined by the bounds of our prior experiences and our thought itself. When it comes to the world of the gun (and here, I can only urge a look at the truly remarkable The Language of the Gun), we are running up against the reality that contemporary American polity covers experiences of the world divergent to such an extreme – how much, in terms of sensory experience in their personal history, do David Keene and Alan Padilla share, really? – that answers truly are being constructed from worlds which, in the senses that matter to policymaking, don’t overlap. And in an environment where that is true, the first, most critical order must be the one that is neglected most: not to analyze why our competing solutions are right or wrong, but to understand why the solutions we are proposing arise from the experiences of the world we have had, including our experiences of the tragedies we cannot re-sense.
Responses cannot be crafted out of worlds that are not shared, and tending to the former requires a kind of tending to the latter that we see vanishingly rarely, though the torch is still carried by a few radio producers and documentary filmmakers. Absent that kind of dedicated world-making – and perhaps that process requires a time and restraint that too is threatened by extraordinary brutality – we will simply be left with what we have: an issue politics without common sense, because the only sense that is common, the event, is insensible. When they respond in ways we cannot abide, understanding our political others is an almost impossibly difficult task. It is also one that a polity cannot possibly do without.
San Jose State University is experimenting with a program where students pay a reduced fee for online courses run by the private firm Udacity. Teachers and their unions are in retreat across the nation. And groups like Uncollege insist that schools and universities are unnecessary. At a time when teachers are everywhere on the defensive, it is great to read this opening salvo from Leon Wieseltier:
When I look back at my education, I am struck not by how much I learned but by how much I was taught. I am the progeny of teachers; I swoon over teachers. Even what I learned on my own I owed to them, because they guided me in my sense of what is significant.
I share Wieseltier’s reverence for educators. Eric Rothschild and Werner Feig lit fires in my brain while I was in high school. Austin Sarat taught me to teach myself in college. Laurent Mayali introduced me to the wonders of history. Marianne Constable pushed me to be a rigorous reader. Drucilla Cornell fired my idealism for justice. And Philippe Nonet showed me how much I still had to know and inspired me to read and think ruthlessly in graduate school. Like Wieseltier, I can trace my life’s path through the lens of my teachers.
The occasion for such a welcome love letter to teachers is Wieseltier’s scathing rejection of homeschooling and unschooling, two movements that he argues denigrate teachers. As sympathetic as I am to his paean to pedagogues, Wieseltier’s rejection of all alternatives to conventional education today is overly defensive.
For all their many ills, homeschooling and unschooling are two movements that seek to personalize and intensify the often conventional and factory-like educational experience of our nation’s high schools and colleges. According to Wieseltier, these alternatives are possessed of the “demented idea that children can be competently taught by people whose only qualifications for teaching them are love and a desire to keep them from the world.” These movements believe that young people can “reject college and become ‘self-directed learners.’” For Wieseltier, the claim that people can teach themselves is both an “insult to the great profession of pedagogy” and a romantic over-estimation of the “untutored ‘self.’”
The romance of the untutored self is strong, but hardly dangerous. While today educators like Will Richardson and entrepreneurs like Dale Stephens celebrate the abundance of the internet and argue that anyone can teach themselves with nothing more than an internet connection, that dream has a history. Consider this endorsement of autodidactic learning from Ray Bradbury from long before the internet:
Yes, I am. I’m completely library educated. I’ve never been to college. I went down to the library when I was in grade school in Waukegan, and in high school in Los Angeles, and spent long days every summer in the library. I used to steal magazines from a store on Genesee Street, in Waukegan, and read them and then steal them back on the racks again. That way I took the print off with my eyeballs and stayed honest. I didn’t want to be a permanent thief, and I was very careful to wash my hands before I read them. But with the library, it’s like catnip, I suppose: you begin to run in circles because there’s so much to look at and read. And it’s far more fun than going to school, simply because you make up your own list and you don’t have to listen to anyone. When I would see some of the books my kids were forced to bring home and read by some of their teachers, and were graded on—well, what if you don’t like those books?
In this interview in the Paris Review, Bradbury not only celebrates the freedom of the untutored self, but also dismisses college along much the same lines as Dale Stephens of Uncollege does. Here is Bradbury again:
You can’t learn to write in college. It’s a very bad place for writers because the teachers always think they know more than you do—and they don’t. They have prejudices. They may like Henry James, but what if you don’t want to write like Henry James? They may like John Irving, for instance, who’s the bore of all time. A lot of the people whose work they’ve taught in the schools for the last thirty years, I can’t understand why people read them and why they are taught. The library, on the other hand, has no biases. The information is all there for you to interpret. You don’t have someone telling you what to think. You discover it for yourself.
What the library and the internet offer is unfiltered information. For the autodidact, that is all that is needed. Education is a self-driven exploration of the database of the world.
Of course such arguments are elitist. Not everyone is a Ray Bradbury or a Gottfried Wilhelm Leibniz, who taught himself Latin in a few days. Hannah Arendt refused to go to her high school Greek class because it was offered at 8 am—too early an hour for her mind to wake up, she claimed. She learned Greek on her own. For such people self-learning is an option. But even Arendt needed teachers, which is why she went to Freiburg to study with Martin Heidegger. She had heard, she later wrote, that thinking was happening there. And she wanted to learn to think.
What is it that teachers teach when they are teaching? To answer “thinking” or “critical reasoning” or “self-reflection” is simply to open more questions. And yet these are the crucial questions we need to ask. At a time when education is increasingly confused with information delivery, we need to articulate and promote the dignity of teaching.
What is most provocative in Wieseltier’s essay is his civic argument for a liberal arts education. Education, he writes, is the salvation of both the person and the citizen. Indeed it is the bulwark of a democratic politics:
Surely the primary objectives of education are the formation of the self and the formation of the citizen. A political order based on the expression of opinion imposes an intellectual obligation upon the individual, who cannot acquit himself of his democratic duty without an ability to reason, a familiarity with argument, a historical memory. An ignorant citizen is a traitor to an open society. The demagoguery of the media, which is covertly structural when it is not overtly ideological, demands a countervailing force of knowledgeable reflection.
That education is the answer to our political ills is an argument heard widely. During the recent presidential election, the candidates frequently appealed to education as the panacea for everything from our flagging economy to our sclerotic political system. Wieseltier trades in a similar argument: A good liberal arts education will yield critical thinkers who will thus be able to parse the obfuscation inherent in the media and vote for responsible and excellent candidates.
I am skeptical of arguments that imagine education as a panacea for politics. Behind such arguments is usually the unspoken assumption: “If X were educated and knew what they were talking about, they would see the truth and agree with me.” There is a confidence here in a kind of rational speech situation (of the kind imagined by Jürgen Habermas) that holds that when the conditions are propitious, everyone will come to agree on a rational solution. But that is not the way human nature or politics works. Politics involves plurality and the amazing thing about human beings is that educated or not, we embrace an extraordinary variety of strongly held, intelligent, and conscientious opinions. I am a firm believer in education. But I hold out little hope that education will make people see eye to eye, end our political paralysis, or usher in a more rational polity.
What then is the value of education? And why is it that we so deeply need great teachers? Hannah Arendt saw education as “the point at which we decide whether we love the world enough to assume responsibility for it.” The educator must love the world and believe in it if he or she is to introduce young people to that world as something noble and worthy of respect. In this sense education is conservative, insofar as it conserves the world as it has been given. But education is also revolutionary, insofar as the teacher must realize that it is the young who will change the world as it is. Teachers simply teach what is, Arendt argued; they leave to the students the chance to transform it.
To teach the world as it is, one must love the world—what Arendt comes to call amor mundi. A teacher must not despise the world or see it as oppressive, evil, and deceitful. Yes, the teacher can recognize the limitations of the world and see its faults. But he or she must nevertheless love the world with its faults and thus lead the student into the world as something inspired and beautiful. To teach Plato, you must love Plato. To teach geology, you must love rocks. While critical thinking is an important skill, what teachers teach is rather enthusiasm and love of learning. The great teachers are the lovers of learning. What they teach, above all, is the experience of discovery. And they do so by learning themselves.
Education is to be distinguished from knowledge transmission. It must also be distinguished from credentialing. And finally, education is not the same as indoctrinating students with values or beliefs. Education is about opening students to the fact of what is. Teaching them about the world as it is. It is then up to the student, the young, to judge whether the world that they have inherited is loveable and worthy of retention, or whether it must be changed. The teacher is not responsible for changing the world; rather the teacher nurtures new citizens who are capable of judging the world on their own.
Arendt thus affirms Ralph Waldo Emerson’s view that “He only who is able to stand alone is qualified for society.” Emerson’s imperative, to take up the divine idea allotted to each one of us, resonates with Arendt’s Socratic imperative, to be true to oneself. Education, Arendt insists, must risk allowing people their unique and personal viewpoints, eschewing political education and seeking, simply, to nurture independent minds. Education prepares the youth for politics by bringing them into a common world as independent and unique individuals. From this perspective, the progeny of teachers is the educated citizen, someone who is both self-reliant in an Emersonian sense and also part of a common world.
We face a challenge of leadership; there is a void in our body politic that remains to be filled. First, expectations of the president need to be re-evaluated. The public’s perception of the president is unrealistic and inflated. A CBS News/New York Times poll in March 2012 reported that 54% of people believe the president “can do a lot” about gas prices.
Our economic recession adds another dimension to the public’s bloated expectations. In the wake of the 2008 economic recession, all eyes turned to what the President-elect would do once in office. People believed, and still do, that the President had the ability to fix the global economic meltdown. The public expected the President to solve our economic problem without understanding that in the globalized neo-liberal regime markets are highly connected. It is no longer possible for a single country to ameliorate the effects of an economic meltdown.
The president will only matter in this century if we first address how we perceive him. He is neither a deity nor a dictator. His actions in an increasingly filibuster-happy Congress are limited. The public’s expectations must be re-evaluated and shaped to accept reality. The president cannot solve all our problems; the very fabric of the American Constitution prohibits the president from securing more powers. The justified fear of an autocrat prohibits action. This tradeoff was accepted by the founding fathers, and it must now be accepted again.
Once expectations are adjusted, how then does the president matter? The president will matter as long as he can engage citizens in our democratic process. The pervasive idea that democracy is simply voting has filled the minds of millions. Our civic and democratic institutions lie asleep in times when the market prevails. People have given up on government; they see it as an artifact to be studied in history books. The president must see his role as protector of our democracy; he must be its biggest champion. This cannot be done through rhetoric alone. The president must help foster an engaged citizenry that actively participates in our democracy.
The danger to our politics does not come from terrorists; it comes from a citizenry that is not informed, does not participate, and could not care less. When the media suggests the president must rise above politics, the only way that can be done is to address the inherent problems in our current political system. It is to remind citizens of the price paid by their forefathers for political rights. The president must become the chief persuader, thereby helping bring citizens into the political fold. The only way for the president to matter in this century is for people to see him as a protector of this great experiment and not merely as a passerby.
These leaders will come from the left and the right alike; engaging citizens should not be a partisan issue. They must also come with a historical understanding of our democracy and American institutions. This does not mean they will rise from academia, but that their understanding must be informed not by current political debates but by history. New political leaders must accept a non-politicized history that seeks truth.
Facts have become politicized, each side molding them to its own advantage. Objective truths are irrelevant because each side has been allowed to massage them. On August 28, 2012, New Jersey Governor Chris Christie lied by omission. He gave the keynote address at the Republican National Convention, claiming that there has been a New Jersey comeback: that his policies have worked and that all it takes is serious leaders to tackle our problems. He claims he cut the state deficit while decreasing taxes. The governor forgot to mention that he also cut pensions, teachers, firefighters, and many others. What is more glaring is New Jersey’s unemployment rate of over 9%. The myth is created, allowing Governor Christie to become a hero in the Republican Party. The truth does not lie with either party. A new leader must inform citizens of the reality rather than try to score political points. This may be impossible, but it is the only way that the president will matter.
People are tired of the partisan bickering; Obama’s unemployment rate is just as bad as Governor Christie’s, and yet both sides claim victory. A president will not matter until he can acknowledge the fundamental problems at hand. For a leader to matter, he must stand for something greater than his own party. He must stand for citizen participation and access to information. A leader would not claim victory but would relate to citizens the problems we face and the solutions he believes will solve them. He must acknowledge when those solutions do not work. It is a pragmatic president who will matter in this century, one who is willing to suffer the consequences of failed policies for democracy’s sake.
The Millennial generation will inherit a troubled world by the year 2040. Their ability to lead will prove extremely important. They will be the heirs to the American dilemma. The hope is that they rise and fill the leadership void not as past generations have done, but as new leaders different and emboldened by a fight for a vibrant participatory democracy. It is John Dewey that should inform what a new president needs to fight for. “[T]he task of democracy is forever that of creation of a freer and more humane experience in which all share and to which all contribute.”
John Dewey, “Creative Democracy—The Task Before Us”
Have you not watched Newt Gingrich's take down of CNN's John King at the opening of the Republican debate last night? You should.
Gingrich's supremely confident critique of the media's obsession with personal issues certainly put the Republican contest back in play and may have set him on the road to the nomination. It is also fascinating in the widely divergent reactions it has spawned.
The Republicans attending the debate gave Gingrich two standing ovations within three minutes. Most commentators have concluded that Gingrich won the debate in the first five minutes. But reaction on the left has been contemptuous.
Andrew Sullivan has great coverage and collects the responses.
John Marshall marvels at his hubris: "Shameless, hubris, chutzpah, whatever. It was pitch perfect for his intended audience. He took control of the debate and drew down all the tension about when the debate would turn to the open marriage stuff."
Andrew Sprung writes of an "astounding display of the Audacity of Hubris."
PM Carpenter shouts that it was "the most despicable display of grotesque demagoguery I have ever witnessed."
Tim Stanley (hat tip to Andrew Sullivan) has the best characterization of the rhetorical power of Gingrich's answer.
To understand the full power of Gingrich’s answer, you really have to watch him give it. The former Speaker has three standard expressions: charmed bemusement (“Why are you asking me that, you fool?”), indignant (“Why are you asking me that, you swine?”) and supreme confidence (“That’s not the question I would have asked, you moron”). Each comes with its own number of chins. For his stunning “No, but I will”, Newt employed the full dozen. He looked straight down them, with half moon goblin eyes. “I think the destructive, vicious, negative nature of much of the news media makes it harder to govern this country, harder to attract decent people to run for public office. And I am appalled that you would begin a presidential debate on a topic like that.” By the time his chins unfolded, Gingrich was in total command of the debate.
The interesting question is: was Gingrich wrong to react the way he did? Did his angry and forceful response show hubris and contempt? Or is it the confident and powerful response of a true leader?
For years, liberals and conservatives alike have kvetched unceasingly about how the media cares more about scandal than substance.
What was John King thinking, starting off the last Republican debate before a crucial primary with a question about marital infidelity from decades ago? One can of course argue that infidelity goes to character, and maybe it could have been asked about in some way. But is it really the most important issue of the debate? There are plenty of questions about Gingrich's character that are more pertinent to his ability to be President. Whether he once asked his wife to allow him to keep a mistress is not what disqualifies him from being President.
The reason Gingrich is still in this contest is because he has a supreme confidence in himself. He believes that he is the only candidate with big ideas, the only one willing to really buck the status quo. He styles himself a leader, and the strengths and weaknesses of his idea of leadership were on display in his answer to John King.
The strengths are clear. He elevated himself far above his questioner. He assumed a leadership position and pushed through without any self-doubt or self-criticism. Can you imagine someone like President Obama acting with such assurance? It is almost inconceivable. I can't imagine watching Gingrich and not feeling something like: Finally! Someone has the courage to say what they believe and tell the media to get over their titillations and focus on the fact that this is the most important Presidential election in a generation.
Gingrich's weaknesses are clear as well. The man is imperious. He lives at times in a fantasy world of his own, one in which he is the philosopher king straining to keep calm and save the rest of us before he explodes at our idiocy. Nothing is more indicative of his hubris than his contempt for the Congressional Budget Office, the non-partisan body that Gingrich regularly assails and wants to abolish. Why that has never been asked about in the debates is a travesty, and in many ways supports Gingrich's tirade. In any case, it speaks much more to the question of character and leadership than his marital problems.