Julia Frakes, a student of political science and peace & justice studies, recently sent us this image of her personal Arendt library.
Here is what she has to say about the image:
I posted this photo on Instagram a few months ago, knee-deep in research and awestricken with how much our contemporary scholarship owes to Arendtian moral and action theories articulated in Eichmann in Jerusalem. Judith Butler’s conceptualization of terrorism and the movements that sweep up youthful sympathies owes much to Arendt’s most striking and novel insight—that there is an intrinsic link between our ability (or inability) to think and evil itself—especially as our society contends with pressing questions about civil rights, the normative value of capitalism, state-sponsored violence, crimes against humanity, the spectacle of the 24/7 media cycle, global revolutions, violent swings toward nationalism, an eerie “unthaw” of the Cold War, exercises of totalitarian power structures and surveillance, and racial and ethnic crises in inner cities and the Middle East which challenge easy and en vogue applications of Arendt’s totalitarianism thesis and demand that we veer from disastrous impassivity. To properly honor Hannah Arendt’s genius and wisdom, we must honestly tackle the ties between (not) thinking and evil (Villa 2000: 279).
“Men who no longer can make sure of the reality which they feel and experience through talking about it and sharing it with their fellow-men, live in the same nightmare of loneliness and uncertainty which, in a normal world, is the terrible fate of insanity.”
-Hannah Arendt, “Ideology and Propaganda”
Who could forget the story of the Emperor’s new clothes? A pair of scheming tailors promise the ruler of a rich kingdom a suit of clothes woven with a magical property: only truly worthy individuals can see them. The Emperor accepts, but when the enchanted robes arrive he finds he cannot see them. Neither can his advisers. Neither can any member of his court.
Secretly ashamed, the Emperor and his retinue proceed to parade through the streets. The many subjects who have assembled there in order to catch a glimpse of the robes find their own proud hopes embarrassed. The farce reaches its peak when an uninhibited youth finally points out the obvious: the Emperor is wearing nothing.
Hans Christian Andersen’s fable skewers social pretension, not political domination. It cautions against face-saving falsehoods, not forced untruths. Nevertheless, the story of the Emperor’s new clothes highlights political hazards quite similar to those discussed by Hannah Arendt in her 1950 lecture, “Ideology and Propaganda.” In particular, both texts focus attention on the threat of pluralistic ignorance that arises whenever free public discourse is throttled by convention, or prohibited by law.
Pluralistic ignorance is a particular kind of popular delusion. It occurs when the various members of a group or population (1) do not know some fact or accept some principle, (2) do not know that their peers do not know that fact or accept that principle, and (3) act in such ways as to avoid revealing their lack of knowledge or acceptance to their peers. In the story of the Emperor’s new clothes, as Cristina Bicchieri has pointed out, the condition of pluralistic ignorance explains why, though neither the Emperor nor his subjects can see the magic robes, all act as if they can. Many may doubt the reality of those robes, but fear of public degradation prevents any from airing these doubts before the bold child speaks out.
States of pluralistic ignorance can be sustained by sterner forces than fear of public disgrace, as Hannah Arendt’s 1950 lecture explains. The basic subject of Arendt’s talk is familiar from movies like The Lives of Others and books like 1984. She is concerned with the straitened states of mind that systematic surveillance and severe curtailments of freedom of expression can produce. Arendt’s analysis of the “nightmare of loneliness and uncertainty” induced by totalitarian forms of government and social control suggests that the cumulative effect of such repressive policies is to uncouple belief from judgment, conviction from action. But this is just what characterizes the condition of pluralistic ignorance.
Arendt did not think that loneliness was exclusively a product of totalitarian modes of government. She believed this estranged state of mind could also be non-coercively induced by long exposure to commercial standards and patterns of life in liberal societies. To understand this view, we should distinguish loneliness from a similar concept to which Arendt assigned a very different meaning, namely, solitude.
Loneliness, on Arendt’s view, is the condition of persons whose beliefs, formed by active or passive processes, remain largely privately held, and are rarely submitted to the scrutiny of others in the form of judgments, or tested more rigorously still in the form of action. Loneliness can result from formal prohibitions on expression or action, as seen in totalitarian societies; but it can also result from informal standards and patterns of life which disvalue political – and overvalue social or commercial – interactions.
Against loneliness, Arendt opposed the condition of solitude. This is the condition of isolation that thinking persons temporarily enter in order to review their beliefs or principles undistracted by the tumult of social and political life. Solitude is distinguished from loneliness insofar as the beliefs or commitments formed in this condition of temporary retreat are expressly intended for eventual exhibition in the political sphere in the form of judgments and actions.
If loneliness aligns with pluralistic ignorance by signifying a gap between belief and action, solitude provides a check on pluralistic ignorance by enabling individuals to revise their beliefs and prepare their judgments in isolation from forces that might repress or distort them. But solitude only fulfills this purpose when the isolated individual returns to political life and expresses a judgment or performs an action in which the connection between private belief and public undertaking is manifest. In this way the forces that sustain pluralistic ignorance are undermined, or overcome.
We might ask what the predominant causes of loneliness and pluralistic ignorance are today, more than six decades after Arendt’s lecture. Recent revelations concerning the operations of the National Security Agency show that active, systematic surveillance of citizens’ personal communications is no Cold War relic, but rather a present reality. Within some communities, at least, awareness or suspicion of direct government surveillance will likely inhibit free expression, and bar open discourse.
At the same time, developments in technology and the rise of mass participation in social networks may also contribute to the growth or persistence of loneliness, in Arendt’s sense. Such technologies certainly make it more difficult to achieve the kind of solitude recommended by Arendt as the condition for effective contemplation – as anyone who owns a smartphone knows. Additionally, the tendency of participants in digital communications to cluster amongst like-minded peers, and to expose themselves only to opinions likely to match their own, limits the chances of encountering checks or dissents from one’s judgments that could effectively alter one’s beliefs, or expose a gap between conviction and action. In light of such facts, we might alter Arendt’s phrase to speak of loneliness and certainty as the states of mind characteristic of our present age.
It would be a mistake to end on such a gloomy note, however. Digital technologies have also created powerful new means of expressing judgments, or organizing actions that are truly political, in the sense that their conclusion is not pre-determined, their progress not fixed in any one direction. Although the ‘Twitter revolutions’ of the last several years have disappointed the hopes of many of their proponents, their unanticipated paths of development have helped to make vivid the risks imposed by action, and the radical openness of politics. These are topics worthy of contemplation; they are also topics that demand debate.
Indeed my opinion now is that evil is never “radical,” that it is only extreme, and that it possesses neither depth nor any demonic dimension. It can overgrow and lay waste the whole world precisely because it spreads like a fungus over the surface. It is ‘thought-defying,’ as I said, because thought tries to reach some depth, to go to the roots, and the moment it concerns itself with evil, it is frustrated because there is nothing.
-Hannah Arendt, letter to Gershom Scholem
Recent commentators have marked the 50th anniversary of Stanley Kubrick’s bleak nuclear satire, Dr. Strangelove, by noting that the film contained quite a bit more reality than we had thought. While national security and military officials at the time scoffed at the film’s farfetched depictions of a nuclear holocaust set off by a crazed general, we now know that such an unthinkable event would have been, at least theoretically, entirely possible. Yet there is another, deeper sense in which Kubrick’s satire puts us in touch with a reality that could not be readily depicted through other means.
The film tells the story of a rogue general who, at the height of the Cold War arms race, launches a nuclear attack that cannot be recalled, leading to the destruction of most of humanity. These are events that we would conventionally describe as “tragic,” but the film is no tragedy. Why not? One answer, of course, is the comic, satirical touch with which Kubrick treated the material, his use of Peter Sellers to play three different characters, and his method of actually tricking his actors into playing their roles more ridiculously than they would have otherwise. But in a deeper sense, Strangelove is about the loss of a capacity for the tragic. The characters, absorbed in utter banalities as they hurtle toward collective catastrophe, display no real grasp of the moral reality of their actions, because they’ve lost contact with the moral reality of the world they share. Dr. Strangelove, then, is a satire about the impossibility of tragedy.
In order to think about what this might mean, it’s helpful to turn to the idea, famously invoked by Hannah Arendt at the end of Eichmann in Jerusalem, of the banality of evil. As Arendt stressed in a later essay, the banality of evil is not a theory or a doctrine “but something quite factual, the phenomenon of evil deeds, committed on a gigantic scale, which could not be traced to any particularity of wickedness, pathology, or ideological conviction in the doer, whose only personal distinction was perhaps extraordinary shallowness.” Eichmann was no villainous monster or demon; rather, he was “terrifyingly normal,” and his chief characteristic was “not stupidity but a curious, quite authentic inability to think.” The inability to think has nothing to do with the capacity for strategizing, performing instrumental calculations, or “reckoning with consequences,” as Hobbes put it. Rather, thinking has to do with awakening the inner dialogue involved in all consciousness, the questioning of the self by the self, which Arendt says dissolves all certainties and examines anew all accepted dogmas and values.
According to Arendt, the socially recognized function of “clichés, stock phrases, adherence to conventional, standardized codes of expression and conduct” is to “protect us against reality”; their function is to protect us against the claim that reality makes on our thinking. This claim, which awakens the dissolving powers of thought, can be so destabilizing that we all must inure ourselves to some degree against it, so that ordinary life can go on at all. What characterized Eichmann is that “he clearly knew of no such claim at all.” Eichmann’s absorption in instrumental and strategic problem solving, on the one hand, and clichés and empty platitudes on the other, was total. The absence of thought, and with it the absence of judgment, ensured a total lack of contact with the moral reality of his actions. Hence the “banality” of his evil resides not in the enormity of the consequences of his actions, but in the depthless opacity of the perpetrator.
The characters in Dr. Strangelove are banal in precisely this sense. All of them—the affable, hapless president, the red-blooded general, the vodka-swilling diplomat, the self-interested advisors, and Dr. Strangelove himself—are silly cardboard cutouts, superficial stereotypes that lack any depth, self-reflection, or capacity for communicating anything other than empty clichés. They are missing what Arendt called “the activity of thinking as such, the habit of examining and reflecting upon whatever happens to come to pass, regardless of specific content and quite independent of results…” They also lack any contact with the moral reality of their activity. All of their action takes place in an increasingly claustrophobic series of confined spaces carefully sealed off by design: the war room, the military base, the bomber cockpit. The world—Arendt’s common world of appearances that constitutes the possibility of narrative and storytelling—never appears at all; reality cannot break through.
The presence of some of Arendt’s core themes in Kubrick’s film should not come as a surprise. Although she dedicated very little attention in her published works to the problem of nuclear war, in an early draft of a text that would later become The Human Condition, Arendt claimed that two experiences of the 20th century, “totalitarianism and the atomic bomb – ignite the question about the meaning of politics in our time. They are fundamental experiences of our age, and if we ignore them it is as if we never lived in the world that is our world.” Moreover, the culmination of strategic statecraft in social scientific doctrines mandating the nuclear arms race reflects one of the core themes Arendt identified with political modernity: the emergence of a conception of politics as a strategic use of violence for the purpose of protecting society.
Niccolò Machiavelli, a thinker Arendt greatly admired, helped inaugurate this modern adventure of strategic statecraft by reframing politics as l’arte dello stato – the art of the state – which, unlike the internal civic space of the republic, always finds itself intervening within an instrumental economy of violence. For Machiavelli the prince, shedding the persona of Ciceronian humanism, must be willing to become beastly, animal-like, to discover the virtues of the vir virtutis in the animal nature of the lion and the fox. If political modernity is inaugurated by Machiavelli’s image of the centaur, the Prince-becoming-beastly, Strangelove closes with a suitable 20th-century corollary to the career of modern statecraft. It is the image of the amiable, good-natured “pilot” who never steers the machines he occupies but is himself steered by them, finally straddling and literally transforming himself into the Bomb. It is an image that, in our own age of remote drone warfare and the possible dawning of a new, not yet fully conceivable epoch of post-human violence, has not lost its power to provoke reflection.
The secret of American exceptionalism may very well be the uniquely American susceptibility to narratives of decline. From the American defeat in Vietnam and the Soviet launch of Sputnik to the quagmire in Afghanistan and the current financial crisis, naysayers proclaim the end of the American century. And yet the prophecies of decline are nearly always, in a uniquely American spirit, followed by calls for rejuvenation. Americans are neither pessimists nor optimists. Instead, they are darkened by despair and fired by hope.
Decline, writes Josef Joffe in a recent essay in The American Interest, “is as American as apple pie.” The tales of decline that populate American cultural myths have many morals, but one shared theme: renewal.
“Decline Time in America” is never just a disinterested tally of trends and numbers. It is not about truth, but about consequences—as in any morality tale. Declinism tells a story to shape belief and change behavior; it is a narrative that is impervious to empirical validation, whose purpose is to bring comforting coherence to the flow of events. The universal technique of mythic morality tales is dramatization and hyperbole. Since good news is no news, bad news is best in the marketplace of ideas. The winning vendor is not Pollyanna but Henny Penny, also known as Chicken Little, who always sees the sky falling. But why does alarmism work so well, be it on the pulpit or on the hustings—whatever the inconvenient facts?
Joffe, the editor of the German weekly Die Zeit, writes from the lofty perch of an all-knowing cultural critic. Declinism is, when looked at from above, little more than a marketing pitch:
Since biblical times, prophets have never gone to town on rosy oratory, and politicos only rarely. Fire and brimstone are usually the best USP, “unique selling proposition” in marketing-speak.
The origins of modern declinism, according to Joffe, are found in “the serial massacre that was World War I,” the rapacious carnage that revealed “the evil face of technology triumphant.” WWI deflated Enlightenment optimism about reason and science, showing instead the destructive impact of those very same progressive ideals.
The knowledge that raised the Eiffel Tower also birthed the machine gun, allowing one man to mow down a hundred without having to slow down for reloading. Nineteenth-century chemistry revolutionized industry, churning out those blessings from petroleum to plastics and pharmacology that made the modern world. But the same labs also invented poison gas. The hand that delivered good also enabled evil. Worse, freedom’s march was not only stopped but reversed. Democracy was flattened by the utopia-seeking totalitarians of the 20th century. Their utopia was the universe of the gulag and the death camp. Their road to salvation led to a war that claimed 55 million lives and then to a Cold War that imperiled hundreds of millions more.
America, the land of progress in Joffe’s telling, now exists in a productive tension with the anti-scientific tale of the “death of progress.”
Technology and plenty, the critics of the Enlightenment argued, would not liberate the common man, but enslave him in the prison of “false consciousness” built by the ruling elites. The new despair of the former torchbearers of progress may well be the reason that declinism flourishes on both Left and Right. This new ideological kinship does not by itself explain any of the five waves of American declinism, but it has certainly broadened declinism’s appeal over time.
Joffe stands above both extremes of the declinism pendulum. Instead of embracing or rejecting the tale of decline, he names decline and its redemptive flipside the driving force of American exceptionalism. Myths of decline are necessary in order to fuel the exceptional calls for sacrifice, work, and innovation that have for centuries turned the tide of American elections and American culture.
[D]awn always follows doom—as when Kennedy called out in his Inaugural Address: “Let the word go forth that the torch has been passed to a new generation of Americans.” Gone was the Soviet bear who had grown to monstrous size in the 1950s. And so again twenty years later. At the end of Ronald Reagan’s first term, his fabled campaign commercial exulted: “It’s morning again in America. And under the leadership of President Reagan, our country is prouder and stronger and better.” In the fourth year of Barack Obama’s first term, America was “back”, and again on top. Collapse was yesterday; today is resurrection. This miraculous turnaround might explain why declinism usually blossoms at the end of an administration—and wilts quickly after the next victory.
Over and over, the handwriting on the wall that spelled decline was, in truth, “a call to arms that galvanized the nation.”
Behind this long history of nightmares of degeneration and dreams of rebirth is Joffe’s ultimate question: Are the current worries about the death of the American century simply the latest in the American cycle of gloom and glee? Or is it possible that the American dream is, finally, used up? In other words, is it true that, since “at some point, everything comes to an end,” this may be the end for America? Might it be that, as many in Europe now argue, “the United States is a confused and fearful country in 2010”? Is it true that the US is a “hate-filled country” in unavoidable decline?
Joffe is skeptical. Here is one part of his answer:
Will they be proven right in the case of America? Not likely. For heuristic purposes, look at some numbers. At the pinnacle of British power (1870), the country’s GDP was separated from that of its rivals by mere percentages. The United States dwarfs the Rest, even China, by multiples—be it in terms of GDP, nuclear weapons, defense spending, projection forces, R&D outlays or patent applications. Seventeen of the world’s top universities are American; this is where tomorrow’s intellectual capital is being produced. America’s share of global GDP has held steady for forty years, while Europe’s, Japan’s and Russia’s have shrunk. And China’s miraculous growth is slipping, echoing the fates of the earlier Asian dragons (Japan, South Korea, Taiwan) that provided the economic model: high savings, low consumption, “exports first.” China is facing a disastrous demography; the United States, rejuvenated by steady immigration, will be the youngest country of the industrial world (after India).
In short, if America is to decline it will be because America refuses to stay true to its tradition of innovation and reinvention.
As convincing as Joffe is, the present danger that America’s current malaise will persist comes less from economics or from politics than from the extinguishing of the nation’s moral fire. And in this regard, essays such as Joffe’s are symptoms of the problem America faces. Joffe writes from above and specifically from the position of the social scientist. He looks down on America and American history and identifies trends. He cites figures. And he argues that in spite of the worry, all is generally ok. Inequality? Not to worry, it has been worse. Democratic sclerosis? Fret not; think back to the 1880s. Soul-destroying partisanship? Have you read the newspapers of the late 18th century? In short, our problems are nothing new under the sun. Keep it in perspective. There is painfully little urgency in such essays. Indeed, they trade above all in a defense of the status quo.
There is reason to worry though, and much to worry about. Joffe might himself have seen one such worry had he lingered longer over an essay he cites briefly but does not discuss. In 1954, Hannah Arendt published “Europe and America: Dream and Nightmare” in Commentary Magazine. In that essay—originally given as part of a series of talks at Princeton University on the relationship between Europe and America—she asked: “What image does Europe have of America?”
Her answer is that Europe has never seen America as an exotic land like the South Sea Islands. Instead, there are two conflicting images of America that matter for Europeans. Politically, America names the very European dream of political liberty. In this sense, America is less the new world than the embodiment of the old world, the land in which European dreams of equality and liberty are made manifest. The political nearness of Europe and America explains their kinship.
European anti-Americanism, however, is lodged in a second myth about America: the economic image of America as the land of plenty. This European image of America’s stupendous wealth may or may not be borne out in reality, but it is a fantasy that drives European opinion:
America, it is true, has been the “land of plenty” almost since the beginning of its history, and the relative well-being of all her inhabitants deeply impressed even early travelers. … It is also true that the feeling was always present that the difference between the two continents was greater than national differences in Europe itself even if the actual figures did not bear this out. Still, at some moment—presumably after America emerged from her long isolation and became once more a central preoccupation of Europe after the First World War—this difference between Europe and America changed its meaning and became qualitative instead of quantitative. It was no longer a question of better, but of altogether different conditions, of a nature which makes understanding well nigh impossible. Like an invisible but very real Chinese wall, the wealth of the United States separates it from all other countries of the globe, just as it separates the individual American tourist from the inhabitants of the countries he visits.
Arendt’s interest in this “Chinese wall” that separates Europe from America is that it lies behind the anti-Americanism of European liberals, even as it inspires the poor. “As a result” of this myth, Arendt writes, “sympathy for America today can be found, generally speaking, among those people whom Europeans call ‘reactionary,’ whereas an anti-American posture is one of the best ways to prove oneself a liberal.” The same can largely be said today.
The danger in such European anti-Americanism is not only that it will fire a European nationalism, but also that it will cast European nationalism as an ideological opposition to American wealth. “Anti-Americanism, its negative emptiness notwithstanding, threatens to become the content of a European movement.” In other words, European nationalism threatens to take on a negative ideological tone.
That Europe will understand itself primarily in opposition to America as a land of wealth impacts America too, insofar as European opposition hardens Americans in their own mythic sense of themselves as a land of unfettered economic freedom and unlimited wealth. European anti-Americanism thus fosters the kind of free market ideology so rampant in America today.
What is more, when Europe and America emphasize their ideological opposition on an economic level, they deemphasize their political kinship as lands of freedom.
Myths of American decline serve a purpose on both sides of the Atlantic.
In Europe, they help justify Europe’s social democratic welfare states, as well as their highly bureaucratized regulatory regimes. In America, they underlie attacks on regulation and calls to limit and shrink government. These are all important issues that should be thought through and debated with an eye to reality. The danger is that European emancipation and American exceptionalism threaten to elevate ideology over reality, hardening positions that need rather to be open for innovation.
Joffe’s essay on the Canard of Decline is a welcome spur to rethinking the gloom and the glee of our present moment. It is your weekend read.
We commonly assume that political acts and claims are shaped by some form of reasoning. How then do we respond to political stands in which arguments are piled atop arguments in contradictory ways, and where the force of the various arguments is less important than victory? We see in political discourse a definite willingness to embrace any argument that helps one win, whether or not it makes sense.
One example of our cynical embrace of bad arguments is the recent controversy over the East Side Gallery in Berlin. The Gallery consists of a series of murals that, over the course of the past two decades, an international cast of artists has painted and re-painted on an approximately one-mile stretch of the Berlin Wall. Indeed, the East Side Gallery occupies the longest existing remnant of the Wall, and it has become a significant landmark not only for those visitors who seek to experience something of the city’s Cold War past, but also for those long-time residents who regard it as an embodiment of the city’s contemporary feel and texture.
The tumult of the past few weeks erupted over the plans of a developer, Maik Uwe Hinkel, to construct luxury apartments and an office complex in the former border zone—now a modest green space—that lies between the East Side Gallery and the Spree River. According to the agreements reached by Hinkel and the local government, these new buildings would entail the creation of an access road and pedestrian bridge to allow passage to pedestrians, bicyclists, and emergency vehicles. The road and bridge, in turn, would require the removal of two stretches of the East Side Gallery and their replacement in the adjacent green space. Local planners had first approved the construction and the alteration to the East Side Gallery back in 2005, and since that time Hinkel’s plans had aroused little concerted opposition.
When workers lifted out one concrete slab from the Gallery on Friday, March 1st, however, hundreds of demonstrators flocked to the site to prevent any further removals. A group of activists hastily organized a larger demonstration that same weekend, one that ultimately drew a raucous crowd of more than six thousand people. In the face of these surprising protests, Berlin Mayor Klaus Wowereit declared that all further work on the site would be postponed until at least March 18th, when a meeting of the major players would decide its fate. Since then, the developer and the relevant local officials have all declared their eagerness to find a solution that preserves the East Side Gallery in its current state. Even the slab removed earlier this month seems destined to return to its former location.
Yet the apparent success of the protest threatens to overshadow the problematic aspects of the demonstrators’ arguments. On the one hand, many of the organizers and protesters regarded their opposition as a small but significant rejoinder to the insistent tide of commercial development in post-Wall Berlin. To adopt the terms of Sharon Zukin’s recent book Naked City, they saw the East Side Gallery as an embodiment of the city’s distinctive authenticity and rootedness, which they argued should be protected from the homogenizing onslaught of upscale growth and gentrification. To wit, one of the coalitions that spearheaded the protest calls itself “Sink the Media Spree” (Mediaspree Versenken), a name that invokes developers’ recent efforts to transform the area along the river into a headquarters for high-tech communications and media. Its webpage declares that this portion of Berlin should preserve “the neighborhood” as it currently exists and not fall victim to “profit mania” (Kiez statt Profitwahn).
But the East Side Gallery cannot be cast so readily as an incarnation of local authenticity, especially the kind that stands opposed to commerce. First of all, many government actors and city residents were far more eager to see the Wall dismantled in the months and years after November 1989 than to see it preserved, and they condoned, if not actively contributed to, its wholesale removal. As a result, the survival of the East Side Gallery represents the exception, not the rule, in the city’s engagement with the Wall as a material structure. Second, artists from around the world initially established the East Side Gallery as a celebration of artistic and political liberty, but their murals received support from the local and national governments because they helped to draw tourists to Berlin and added to the city’s cachet as a cultural destination. In light of this state patronage, I find it rather curious to hear activists pitching the East Side Gallery against the forces of capital and development.
On the other hand, many demonstrators contended that the alteration of the East Side Gallery would amount to an intolerable attack on the city’s historical inheritance. One variation of this position is that the removal of the two sections constitutes a dilution if not erasure of Germany’s traumatic past. According to this argument, the East Side Gallery should be left intact so that residents and visitors can confront the traces of the country’s division. Another, more strident variation insists that the construction plans display a callous disregard for those who suffered under the East German regime and, more specifically, lost their lives while attempting to escape it. In the words of one activist in Der Tagesspiegel: “the most important point is not whether the Wall will be opened. We are against the combination of removing the Wall and building hotels and apartments in death strips.”
Again, the East Side Gallery’s connection with Germany’s fraught past is not nearly as straightforward as the activists and demonstrators have suggested. As Brian Ladd details in his book The Ghosts of Berlin, the murals of the East Side Gallery were not painted until the early 1990s, after the Wall had fallen and East Germany had ceased to exist. In fact, this portion of the Wall could not have been painted before 1989, because it stood in East Berlin, and anyone who attempted to leave a mark on it, or even lingered near it, would have been apprehended by East German police officers or border soldiers. Of course, amateur and professional artists did draw and paint some striking imagery on the Berlin Wall during the Cold War, but they created it on the Wall’s “outer” surface while standing in West Berlin, where they had much less to fear from East German border personnel. The muralists who launched and maintained the East Side Gallery certainly meant to evoke and further this tradition of “Wall art,” but in the process they abstracted it from a prior historical era and relocated it in another part of the city.
I note these objections not because I support the proposed construction or the alteration of the East Side Gallery. In particular, I am not at all convinced that the partial removal of the Wall is really necessary, whether or not Hinkel and the city go ahead with the area’s development. But I am troubled by the protesters’ reluctance to take the ironies and complexities of the current circumstances more fully into account. They are too eager to cast the developer and local officials as the villains in this story, particularly when the city and the federal government have in fact created a substantial memorial landscape related to the Wall. And they are too quick to position themselves on the moral high ground. Given the Wall’s disappearance from virtually every other part of the city, their demands for preserving the East Side Gallery seem more than a little belated.
The gap between our citizens and our Government has never been so wide. The people are looking for honest answers, not easy answers; clear leadership, not false claims and evasiveness and politics as usual.
-Jimmy Carter, July 15, 1979
Contemporary observers of secondary education have appropriately decried the startling lack of understanding most students possess of the American presidency. This critique should not be surprising. In textbooks and classrooms across the country, curriculum writers and teachers offer an abundance of disconnected facts about the nation’s distinct presidencies—the personalities, idiosyncrasies, and unique time-bound crises that give character and a simple narrative arc to each individual president. Some of these descriptions contain vital historical knowledge. Students should learn, for example, how a conflicted Lyndon Johnson pushed Congress for sweeping domestic programs against the backdrop of Vietnam or how a charismatic and effective communicator like Ronald Reagan found Cold War collaboration with Margaret Thatcher and Mikhail Gorbachev.
But what might it mean to ask high school students to look across these and other presidencies to encourage more sophisticated forms of historical thinking? More specifically, what might teachers begin to do to promote thoughtful writing and reflection that goes beyond the respective presidencies and questions the nature of the executive office itself? And how might one teach the presidency, in Arendtian fashion, encouraging open dialogue around common texts, acknowledging the necessary uncertainty in any evolving classroom interpretation of the past, and encouraging flexibility of thought for an unpredictable future? By provocatively asking whether the president “matters,” the 2012 Hannah Arendt Conference provided an ideal setting for New York secondary teachers to explore this central pedagogical challenge in teaching the presidency.
Participants in this special writing workshop, scheduled concurrently with the conference, attended conference panels and also retreated to consider innovative and focused approaches to teaching the presidency.
Conference panels promoted a broader examination of the presidency than typically found in secondary curricula. A diverse and notable group of scholars urged us to consider the events and historical trends, across multiple presidencies, constraining or empowering any particular chief executive. These ideas, explored more thoroughly in the intervening writing workshops, provoked productive argument on what characteristics might define the modern American presidency. In ways both explicit and implicit, sessions pointed participants to numerous and complicated ways Congress, the judiciary, mass media, U.S. citizens, and the president relate to one another.
This sweeping view of the presidency contains pedagogical potency and has a place in secondary classrooms. Thoughtful history educators should ask big questions, encourage open student inquiry, and promote civic discourse around the nature of power and the purposes of human institutions. But as educators, we also know that the aim and value of our discipline resides in place- and time-bound particulars that beg for our interpretation and ultimately build an evolving understanding of the past. Good history teaching combines big ambitious questions with careful attention to events, people, and specific contingencies. Such specifics are the building blocks of storytelling and shape the analogies students need to think through an uncertain future.
Jimmy Carter’s Oval Office speech of July 15, 1979, describing a national “crisis of confidence,” presented a unique case study for thinking about the interaction between American presidents and the population the office is constitutionally obliged to serve. Workshop participants prepared for the conference by watching the video footage from this address and reading parts of Kevin Mattson’s history of the speech. In what quickly became known as the “Malaise Speech,” Carter attempted a more direct and personal appeal to the American people, calling for personal sacrifice and soul searching, while warning of dire consequences if the nation did not own up to its energy dependencies. After Vietnam and Watergate, Carter believed, America needed a revival that went beyond policy recommendations. His television address, after a mysterious 10-day sequestration at Camp David, took viewers through Carter’s own spiritual journey and promoted the conclusions he drew from it.
Today, the Malaise Speech has come to symbolize a failed Carter presidency. He has been lampooned, for example, on The Simpsons as our most sympathetically honest and humorously ineffectual former president. In one episode, residents of Springfield cheer the unveiling of his presidential statue, emblazoned with “Malaise Forever” on the pedestal. Schools give the historical Carter even less respect. Standardized tests such as the NY Regents exam ask little if anything about his presidency. The Malaise Speech is rarely mentioned in classrooms—at either the secondary or post-secondary level. Similarly, few historians identify Carter as particularly influential, especially when compared to the leaders elected before and after him. Observers who mention his 1979 speeches are most likely footnoting a transitional narrative for an America still recovering from a turbulent Sixties and heading into a decisive conservative reaction.
Indeed, workshop participants used writing to question and debate Carter’s place in history and the limited impact of the speech. But we also identified, through primary sources on the 1976 election and documents around the speech, ways for students to think expansively about the evolving relationship between a president and the people. A quick analysis of the electoral map that brought Carter into office reminded us that Carter was attempting to convince a nation that looked and behaved quite differently than it does today. The vast swaths of blue throughout the South and red coastal counties in New York and California are striking. Carter’s victory map can resemble an electoral photo negative of what has now become a familiar and predictable image of specific regional alignments in the Bush/Obama era. The president who was elected in 1976, thanks in large part to an electorate still largely undefined by the later rise of the Christian Right, remains an historical enigma. As an Evangelical Democrat from Georgia, with roots in both farming and nuclear physics, comfortable admitting his sins in both Sunday School and Playboy, and neither energized by nor defensive about abortion or school prayer, Carter is as difficult to imagine today as the audience he addressed in 1979.
It is similarly difficult for us to imagine the Malaise Speech ever finding a positive reception. However, this is precisely what Mattson argues. Post-speech weekend polls gave Carter’s modest popularity rating a surprisingly respectable 11-point bump. Similarly, in a year when most of the president’s earlier speeches were ignored, the White House found itself flooded with phone calls and letters, almost universally positive. The national press was mixed, and several prominent columnists praised the speech. This reaction to such an unconventional address, Mattson goes on to argue, suggests that the presidency can matter.
Workshop participants who attended later sessions heard Walter Russell Mead reference the ways presidents can be seen as either transformative or transactional. In many ways, the “malaise moment” could be viewed as a late-term attempt by a transactional president to forge a transformational presidency. In the days leading up to the speech, Carter went into self-imposed exile, summoning spiritual advisors to his side and encouraging administration-wide soul searching. Such an approach to leadership, admirable to some and an act of desperation to others, defies conventions and presents an odd image of presidential behavior (an idea elaborated on by conference presenter Wyatt Mason). “Malaise” was never mentioned in Carter’s speech. But his transformational aspirations are hard to miss.
In a nation that was proud of hard work, strong families, close-knit communities, and our faith in God, too many of us now tend to worship self-indulgence and consumption. Human identity is no longer defined by what one does, but by what one owns. But we've discovered that owning things and consuming things does not satisfy our longing for meaning. We've learned that piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.
It is this process—the intellectual act of interpreting Carter and his [in]famous speech as aberrant presidential behavior—that allows teachers and their students to explore together the larger question of defining the modern presidency. And it is precisely this purposeful use of a small number of primary sources that forces students to rethink, through writing and reflection, the parameters that shape how presidents relate to their electorate. In our workshop we saw how case studies, in-depth explorations of the particulars of history, precede productive debate on whether the presidency matters.
The forgotten Carter presidency can play a disproportionately large pedagogical role for teachers interested in exploring the modern presidency. As any high school teacher knows, students rarely bring an open interpretive lens to Clinton, Bush, or Obama. Ronald Reagan, as the first political memory for many of their parents, remains a polarizing figure. However, few students or their parents hold strong politically consequential opinions about Carter. Most Americans, at best, continue to view him as a likable, honest, ethical man who is much more effective as an ex-president than he was as president.
Workshop participants learned that the initial support Carter received after the Malaise Speech faded quickly. Mattson and some members of the administration now argue that the President lacked a plan to follow up on the goodwill he received from a nation desiring leadership. Reading Ezra Klein, we also considered the possibility that, despite all the attention educators give to presidential speeches (as primary sources that quickly encapsulate presidential visions), there is little empirical evidence that any public address really makes much of a difference. In either case, Carter’s loss 16 months later suggests failures of leadership both transformational and transactional.
Did Carter’s speech matter? The teachers in the workshop concluded their participation by attempting to answer this question, working collaboratively to draft a brief historical account contextualizing the 1979 malaise moment. In doing so, we engaged in precisely the type of activity missing in too many secondary school classrooms today: interrogating sources, corroborating evidence, debating conflicting interpretations, paying close attention to language, and doing our best to examine our underlying assumptions about the human condition. These efforts produced some clarity, but also added complexity to our understanding of the past and led to many additional questions, both pedagogical and historical. In short, our writing and thinking during the Arendt Conference produced greater uncertainty. And that reality alone suggests that study of the presidency does indeed matter.
Stephen Mucher is assistant professor of history education in the Master of Arts in Teaching Program at Bard College.
The workshop, Teaching the American Presidency, facilitated by Teresa Vilardi and Stephen Mucher and sponsored by the Institute for Writing and Thinking and the Master of Arts in Teaching Program in collaboration with the Hannah Arendt Center at Bard College, was offered as part of the Center’s 2012 conference, “Does the President Matter? American Politics in an Age of Disrepair.”