Hannah Arendt Center for Politics and Humanities
2 October 2012

Malaise in the Classroom: Teaching Secondary Students About the Presidency

The gap between our citizens and our Government has never been so wide. The people are looking for honest answers, not easy answers; clear leadership, not false claims and evasiveness and politics as usual.

-Jimmy Carter,  July 15, 1979

Contemporary observers of secondary education have appropriately decried the startling lack of understanding most students possess of the American presidency.  This critique should not be surprising.  In textbooks and classrooms across the country, curriculum writers and teachers offer an abundance of disconnected facts about the nation’s distinct presidencies—the personalities, idiosyncrasies, and unique time-bound crises that give character and a simple narrative arc to each individual president.  Some of these descriptions contain vital historical knowledge.  Students should learn, for example, how a conflicted Lyndon Johnson pushed Congress for sweeping domestic programs against the backdrop of Vietnam or how a charismatic and effective communicator like Ronald Reagan found Cold War collaboration with Margaret Thatcher and Mikhail Gorbachev.

But what might it mean to ask high school students to look across these and other presidencies to encourage more sophisticated forms of historical thinking?  More specifically, what might teachers begin to do to promote thoughtful writing and reflection that goes beyond the respective presidencies and questions the nature of the executive office itself?  And how might one teach the presidency, in Arendtian fashion, encouraging open dialogue around common texts, acknowledging the necessary uncertainty in any evolving classroom interpretation of the past, and encouraging flexibility of thought for an unpredictable future?  By provocatively asking whether the president “matters,” the 2012 Hannah Arendt Conference provided an ideal setting for New York secondary teachers to explore this central pedagogical challenge in teaching the presidency.

Participants in this special writing workshop, scheduled concurrently with the conference, attended conference panels and also retreated to consider innovative and focused approaches to teaching the presidency.

Conference panels promoted a broader examination of the presidency than typically found in secondary curricula. A diverse and notable group of scholars urged us to consider the events and historical trends, across multiple presidencies, constraining or empowering any particular chief executive.  These ideas, explored more thoroughly in the intervening writing workshops, provoked productive argument on what characteristics might define the modern American presidency.  In ways both explicit and implicit, sessions pointed participants to numerous and complicated ways Congress, the judiciary, mass media, U.S. citizens, and the president relate to one another.

This sweeping view of the presidency contains pedagogical potency and has a place in secondary classrooms.  Thoughtful history educators should ask big questions, encourage open student inquiry, and promote civic discourse around the nature of power and the purposes of human institutions. But as educators, we also know that the aim and value of our discipline resides in place- and time-bound particulars that beg for our interpretation and ultimately build an evolving understanding of the past.  Good history teaching combines big ambitious questions with careful attention to events, people, and specific contingencies.  Such specifics are the building blocks of storytelling and shape the analogies students need to think through an uncertain future.

Jimmy Carter’s Oval Office speech on July 15, 1979, describing a national “crisis of confidence,” presented a unique case study for thinking about the interaction between American presidents and the populations the office is constitutionally obliged to serve. Workshop participants prepared for the conference by watching the video footage from this address and reading parts of Kevin Mattson’s history of the speech.  In what quickly became known as the “Malaise Speech,” Carter attempted a more direct and personal appeal to the American people, calling for personal sacrifice and soul searching, while warning of dire consequences if the nation did not own up to its energy dependencies.  After Vietnam and Watergate, Carter believed, America needed a revival that went beyond policy recommendations.  His television address, after a mysterious 10-day sequestration at Camp David, took viewers through Carter’s own spiritual journey and promoted the conclusions he drew from it.

Today, the Malaise Speech has come to symbolize a failed Carter presidency.  He has been lampooned, for example, on The Simpsons as our most sympathetically honest and humorously ineffectual former president.  In one episode, residents of Springfield cheer the unveiling of his presidential statue, emblazoned with “Malaise Forever” on the pedestal.  Schools give the historical Carter even less respect.  Standardized tests such as the NY Regents exam ask little, if anything, about his presidency.  The Malaise Speech is rarely mentioned in classrooms—at either the secondary or post-secondary levels.  Similarly, few historians identify Carter as particularly influential, especially when compared to the leaders elected before and after him.  Observers who mention his 1979 speeches are most likely footnoting a transitional narrative for an America still recovering from a turbulent Sixties and heading into a decisive conservative reaction.

Indeed, workshop participants used writing to question and debate Carter’s place in history and the limited impact of the speech.  But we also identified, through primary sources on the 1976 election and documents around the speech, ways for students to think expansively about the evolving relationship between a president and the people.  A quick analysis of the electoral map that brought Carter into office reminded us that Carter was attempting to convince a nation that looked and behaved quite differently than it does today.  The vast swaths of blue throughout the South and red coastal counties in New York and California are striking. Carter’s victory map can resemble an electoral photo negative of what has now become a familiar and predictable image of specific regional alignments in the Bush/Obama era.  The president who was elected in 1976, thanks in large part to an electorate still largely undefined by the later rise of the Christian Right, remains an historical enigma.  As an Evangelical Democrat from Georgia, with roots in both farming and nuclear physics, comfortable admitting his sins in both Sunday School and Playboy, and neither energized by nor defensive about abortion or school prayer, Carter is as difficult to imagine today as the audience he addressed in 1979.

It is similarly difficult for us to imagine the Malaise Speech ever finding a positive reception.  However, this is precisely what Mattson argues. Post-speech weekend polls gave Carter’s modest popularity rating a surprisingly respectable 11-point bump.  Similarly, in a year when most of the president’s earlier speeches were ignored, the White House found itself flooded with phone calls and letters, almost universally positive.  The national press was mixed and several prominent columnists praised the speech. This reaction to such an unconventional address, Mattson goes on to argue, suggests that the presidency can matter.

Workshop participants who attended later sessions heard Walter Russell Mead reference the ways presidents can be seen as either transformative or transactional.  In many ways, the “malaise moment” could be viewed as a late-term attempt by a transactional president to forge a transformational presidency.  In the days leading up to the speech, Carter went into self-imposed exile, summoning spiritual advisors to his side and encouraging administration-wide soul searching.  Such an approach to leadership, admirable to some and an act of desperation to others, defies conventions and presents an odd image of presidential behavior (an idea elaborated on by conference presenter Wyatt Mason).  “Malaise” was never mentioned in Carter’s speech.  But his transformational aspirations are hard to miss.

In a nation that was proud of hard work, strong families, close-knit communities, and our faith in God, too many of us now tend to worship self-indulgence and consumption. Human identity is no longer defined by what one does, but by what one owns. But we've discovered that owning things and consuming things does not satisfy our longing for meaning. We've learned that piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.

It is this process—the intellectual act of interpreting Carter and his [in]famous speech as aberrant presidential behavior—that allows teachers and their students to explore together the larger question of defining the modern presidency. And it is precisely this purposeful use of a small number of primary sources that forces students to rethink, through writing and reflection, the parameters that shape how presidents relate to their electorate.  In our workshop we saw how case studies, in-depth explorations of the particulars of history, precede productive debate on whether the presidency matters.

The forgotten Carter presidency can play a disproportionately impactful pedagogical role for teachers interested in exploring the modern presidency.  As any high school teacher knows, students rarely bring an open interpretive lens to Clinton, Bush, or Obama. Ronald Reagan, as the first political memory for many of their parents, remains a polarizing figure.  However, few students or their parents hold strong, politically consequential opinions about Carter.  Most Americans, at best, continue to view him as a likable, honest, ethical man who is much more effective as an ex-president than he was as president.

Workshop participants learned that the initial support Carter received after the Malaise Speech faded quickly.  Mattson and some members of the administration now argue that the President lacked a plan to follow up on the goodwill he received from a nation desiring leadership.  Reading Ezra Klein, we also considered the possibility that, despite all the attention educators give to presidential speeches (as primary sources that quickly encapsulate presidential visions), there is little empirical evidence that any public address really makes much of a difference.  In either case, Carter’s loss 16 months later suggests that his failures of leadership were both transformational and transactional.

Did Carter’s speech matter?  The teachers in the workshop concluded their participation by attempting to answer this question, working collaboratively to draft a brief historical account contextualizing the 1979 malaise moment.  In doing so, we engaged in precisely the type of activity missing in too many secondary school classrooms today: interrogating sources, corroborating evidence, debating conflicting interpretations, paying close attention to language, and doing our best to examine our underlying assumptions about the human condition.  These efforts produced some clarity, but also added complexity to our understanding of the past and led to many additional questions, both pedagogical and historical.   In short, our writing and thinking during the Arendt Conference produced greater uncertainty. And that reality alone suggests that study of the presidency does indeed matter.

-Stephen Mucher

Stephen Mucher is assistant professor of history education in the Master of Arts in Teaching Program at Bard College.

The workshop, Teaching the American Presidency, facilitated by Teresa Vilardi and Stephen Mucher and sponsored by the Institute for Writing and Thinking and the Master of Arts in Teaching Program in collaboration with the Hannah Arendt Center at Bard College, was offered as part of the Center’s 2012 conference, “Does the President Matter? American Politics in an Age of Disrepair.”

 

9 February 2012

A Brief History of Campaign Finance

The NY Times penned one of those editorials Wednesday that makes one wonder who is home. The Times takes President Obama to task for forming a Super PAC--or for having someone form a Super PAC for him, because we know there is no coordination between the Super PAC and the Super PAC's beneficiary.  As cynical as the current Super PAC frenzy is, and as disheartening as the crush of money being spent by the Republican Super PACs and hoarded by Karl Rove's Super PAC is, what would be served by President Obama refusing to feed at the trough? Recall, he is the first Presidential candidate since 1974 to opt out of the public matching funds system. The idea that he might run as an anti-big-money candidate is hard to imagine, so how could he meaningfully run a campaign claiming on principle to be opposed to the influence of big money, as the Times editorial suggests?

I am in Berlin, where on Monday I gave a keynote talk to open the State of the World Week, sponsored by the European College of Liberal Arts of Bard. My talk was on the Citizens United court case, the case that opened the door to Super PACs. I'll be blogging more about Campaign Finance Reform as the election progresses. But for now, here is a short excerpt of one part of my talk that offered a condensed history of Campaign Finance and Campaign Finance Reform in the United States.

We can divide the history of campaign finance in the U.S. into seven stages.

     1. The first stage is the pre-history involving the 1787 Constitutional Convention. As Zephyr Teachout has shown, "Corruption was discussed more often in the Constitutional Convention than factions, violence, or instability. It was a topic of concern on almost a quarter of the days that the members convened." Teachout and Lawrence Lessig have argued that there was a strong sense among the founding fathers that the great threat to the new Constitution was corruption. And they have pointed to a number of practical responses to that threat in the Constitution itself. These include Article I, Section 6, Clause 2, which prevents members of Congress from holding civil office while serving as legislators, or from being appointed to offices that had been created—or in which the compensation was increased—during their tenure.  The point was to prevent members of Congress from using their posts to enrich themselves and their friends.

Another innovation aimed to prevent corruption was the decision to have those in the House of Representatives serve only for two years. According to Teachout and Lessig, this was designed to counter the formation of bonds between legislators and the President. By turning over the members of the House on a regular basis, it would be less likely that the Representatives would form strong alliances with members of the Executive branch, thus helping to maintain their independence. The founding fathers would surely be astounded by the incumbent advantages apparent today.

     2. The Second stage of American campaign finance history runs from the passage of the Constitution until the election of Andrew Jackson in 1828. In early U.S. elections, most campaign expenses were paid directly by the candidates using their own money. Such expenses were relatively minimal, going toward an occasional campaign pamphlet and, sometimes, for food and drink at rallies. As Bradley Smith writes, "Though free from the "corrupting" effects of money, elections in this early period were generally contested by candidates representing aristocratic factions standing for election before a relatively small, homogeneous electorate of propertied white men."

     3. The financing of American political campaigns begins to become interesting in 1828, with the election of Andrew Jackson. Jackson's presidency is rightly seen as the true beginning of modern American democracy. And Jackson's campaign for President was the first presidential campaign that appealed directly to the voters and not simply to party elites. Jackson's campaign was organized by Martin van Buren (who later served as his Vice President and thereafter as President). Van Buren was one of the original machine politicians from New York who created the machine concept Boss William Tweed would perfect later in the century at Tammany Hall.  What Van Buren did for Jackson was to organize a campaign aimed at the people. This cost money. And what he and Jackson did was to raise money from those who were seeking jobs in the government.  This was the beginning of the spoils system, whereby political campaigns were funded by current and prospective government employees; these employees in turn expected to be rewarded with jobs once their candidate won the election.

     4. The spoils system lasted until the passage of the Pendleton Act in 1883, which inaugurates the fourth stage of the development of campaign finance. The Pendleton Act professionalized the Federal Civil Service, instituting an exam for entry into the service and outlawing the spoils system. The result was that campaign funds from federal officeholders dried up, and politicians needed new sources of funds. The obvious sources were wealthy individuals and corporations. And oh boy did corporations jump into the breach. By the late 19th century, the government was giving grants of land and cash to corporations, and in return the corporations were generously funding political campaigns. In 1888, 40% of Republican national campaign funds came from Pennsylvania manufacturing and business interests. By 1904, 73% of Teddy Roosevelt's presidential campaign funds were raised from corporate contributions. (I take these numbers from Bradley Smith). The age of corporate-funded campaigns was here, and it has never left.

     5. Once he was elected, Teddy Roosevelt made it a priority to reform the broken campaign financing system that he had exploited so well. With his support, Congress passed the Tillman Act in 1907, which made illegal all campaign contributions from corporations. The Tillman Act opens the Fifth stage of the development of Campaign Finance Reform in the United States.

While the Tillman Act carried penalties for its violation, it instituted no enforcement mechanism. The result was that not much changed. To take only one legendary example, in 1968 and 1972 W. Clement Stone contributed up to $10 million to Richard Nixon's presidential campaigns. Stone's contributions caused a scandal that, together with the outrage over Watergate, led Congress to finally institute a serious attempt at campaign finance reform.

     6. The key moment of modern campaign finance reform is the passage of the Federal Election Campaign Act (FECA) in 1974, and the Supreme Court's partial upholding and partial overturning of that law in Buckley v. Valeo in 1976. In the wake of Watergate and the loss of trust in government, Congress passed FECA, which: limited individual contributions to individual candidates to $1,000; limited the amount candidates could spend on a campaign; established a system of public financing of campaigns that required a voluntary limit on campaign expenditures; required that candidates, parties, PACs, and groups engaging in express advocacy disclose their fund-raising and spending; and created the Federal Election Commission to regulate and enforce the new rules.

In a landmark decision that still controls all legal approaches to the regulation of campaign financing, the Supreme Court in Buckley v. Valeo upheld the disclosure requirement and the limits on individual contributions. It also upheld the limits on campaign spending when those limits were voluntary and in conjunction with the decision to accept public financing. But the Court struck down compulsory limits on spending both by individual candidates and by PACs and other groups. While the Court recognized that limits on campaign spending were a kind of censorship that limited the rights of people and corporations to speak about the most central political issues of the day, it also acknowledged that "large contributions threaten the integrity of our system of representative democracy." Because large contributions, especially to individual candidates, at the very least appear to suggest a kind of quid pro quo corruption, the Court accepted that Congress has the right to censor such expressions of support. More general expenditures not given to or coordinated with a specific candidate were, the Court argued, not examples of the kind of corruption that would allow Congress to override the fundamental free speech interests of individuals and corporations who would want to influence the political debate. Thus, post-Buckley, the rule was: the Constitution limits censorship of political activity, political speech, and political spending on campaigns. Any limit is censorship that violates the First Amendment. And yet the Court carved out one narrow exception: speech or activity that either is or gives the appearance of quid pro quo corruption could be regulated and banned.

In the aftermath of Buckley v. Valeo, money continued to pour into politics. Candidates and their supporters made use of "soft money," money given to political parties and other groups and thus not subject to the limits imposed on individual contributions to individual candidates. PACs began to bundle large sums of money that, while not individual contributions to candidates, nevertheless carried the taint of influence peddling. In 1993-94, the Democratic Party received $45 million in "soft money" and the Republican Party received $59 million. By 1999-2000, the numbers were $92 million and $244 million respectively. In 2001-2002, the Democratic Party took in $200 million and the Republicans $421 million.

     7. The failure of FECA to stem the tsunami of money in elections led Congress to try again, and in 2002 it passed the Bipartisan Campaign Reform Act (BCRA), also known as the McCain-Feingold Act—the seventh and, until now, final stage of the effort to regulate campaign finance in the United States.  The main innovations of BCRA were to prohibit unlimited soft money contributions and to bar corporations and unions from funding electioneering communications in the weeks before an election. And it was this latter provision that was held to be unconstitutional by the Supreme Court in the now infamous case of Citizens United v. FEC.

The core of the Citizens United ruling was Justice Anthony Kennedy's argument that "If the First Amendment has any force, it prohibits Congress from fining or jailing citizens, or associations of citizens, for simply engaging in political speech." For Kennedy, "The censorship we now confront is vast in its reach." What he means is that the law bans all those corporations—including large multinationals and also small mom and pop stores and even non-profit corporations—from expressing their views about political candidates for either 30 or 60 days leading up to an election.

In Kennedy's telling, corporations are part of the country and, what is more, an important part of the country. The Government has “muffle[d] the voices that best represent the most significant segments of the economy." Here Kennedy channels Felix Frankfurter, who, in the 1941 case of U.S. v. Congress of Industrial Organizations, wrote:

To say that labor unions as such have nothing of value to contribute to that process and no vital or legitimate interest in it is to ignore the obvious facts of political and economic life and of their increasing interrelationship in modern society.

U.S. v. C.I.O. dealt with the anti-union Smith Act, which forbade unions and corporations from using treasury funds to pay for politicking. In this regard, the Smith Act was very much like the 2002 Bipartisan Campaign Reform Act. While the majority of the Court refused to consider the constitutional question and decided the case on narrow grounds, Frankfurter did.  In his telling, the Court must take seriously the evil that Congress sought to address: namely, the corruption of elections and federal officials by the expenditure of large masses of aggregated wealth. And yet, Frankfurter saw that "the claimed evil is not one unmixed with good."  The expression of corporate or union speech in elections is, he writes, a good thing! "The expression of bloc sentiment has always been an integral part of our democratic and legislative processes."  Replace "labor unions" with "corporations." That is what Kennedy did.

 -RB

4 October 2011

Whistle-Blowers as Truth-Tellers - Jacqueline Bao

The New York Review of Books first published Hannah Arendt’s Lying in Politics: Reflections on the Pentagon Papers on November 18, 1971, after Daniel Ellsberg had leaked 47 sections of the document to the New York Times.  Originally commissioned in 1967 by then Secretary of Defense Robert McNamara, the Pentagon Papers was an effort to produce an “encyclopedic and objective”[1] report on the Vietnam War.  The report was an authenticated document proving that the American government had engaged in more than a decade of deception and secrecy aimed at the public—the president had lied, the secretary of defense had lied—no one was telling the truth, and even worse, truth just wasn’t accessible to anyone but the insiders.

The term “whistleblower” acquired its contemporary meaning in the early 1970s as “‘one who blows the whistle’ on a person or activity, especially from within an organization.” The colloquial saying ‘to blow the whistle’ is derived from the literal act, as when a referee blows the whistle on a foul play or a police officer blows the whistle to expose and halt a crime in a crowded street.  According to the OED, the noun whistle-blower was first used in this contemporary, more figurative sense in 1970.  In 1972, one year after the Pentagon Papers were leaked, an onslaught of critical whistleblowers followed Ellsberg and Russo’s lead: Peter Buxtun blew the whistle on the Tuskegee Syphilis Experiment, ending a four-decade syphilis “study” conducted on 400 poor black Alabama men.  That same year, Ralph Nader hosted the first organized conference on “professional responsibility,” later credited with rooting the beginnings of corporate whistleblowing culture.  And most famously, W. Mark Felt (“Deep Throat”) leaked information about the break-in at the Democratic National Committee headquarters in the Watergate complex, propelling the Watergate investigations.  Whistleblowing would rise to become one of the prominent modes of truth-telling in an increasingly withdrawn public sphere.

Forty years after the release of the Pentagon Papers, WikiLeaks released “Collateral Murder,” a classified United States military video depicting soldiers firing indiscriminately at civilian targets, including two Reuters journalists. Private Bradley Manning was later arrested and charged with leaking half a million reports from the Iraq War, including the video “Collateral Murder.”  Unlike the Vietnam War, during which photographers and journalists had relative independence in reporting, beginning with the Clinton administration and continuing through the Bush and Obama administrations, severe restrictions were placed on media coverage of wars.  Truly authentic images of the war, as opposed to staged photo-press “opportunities,” surfaced mainly through insiders, amateurs, and whistleblowers.  For instance, the amateur images of Abu Ghraib exposed by Joe Darby became the most publicized photographs to come out of the Iraq War because of their unquestionable authenticity.  The images proved and documented various incidents of torture and, what is more, that torture was being inflicted on prisoners solely as a pastime to relieve the soldiers’ boredom.

Hannah Arendt writes that secrecy has been part and parcel of politics since the beginning of recorded time: “truthfulness has never been counted among the political virtues, and lies have always been regarded as justifiable tools in political dealings” (LP 4).  But within the last half-century, lying became so prolific within politics that it became an “adequate weapon against truth” (TP 232).  The fabric of our common reality, what Arendt defines as factual truth, began to erode.  Unlike rational truth, or the truths of the mind—those mathematical, scientific, and provable axioms and theories—factual truths are dependent on discourse between men and remembrance in history for their survival.  With the popularization and sharpening of organized lying, truth, exceedingly fragile in current affairs yet resilient in time, was lost in man’s present world.  Without factual truth, “we should never find our bearings in an ever-changing world” (TP 261).  Though the falsehoods of organized lying would never come to substitute for facts, Arendt’s greater concern is that to live in a world without bearings means that men increasingly cannot move, act, and judge in the public realm; men lose touch with the world.

In the ‘ever-changing’ world where information cycles constantly—the 24-hour news cycle, Twitter posts, and online media—more information often equates only to an increasingly defactualized world.  In a recent New York Times article on the global mass protests occurring independently on the streets of India, Israel, Spain, Greece, and even Wall Street, young protesters indicated “they were so distrustful [of] their country’s political class and its pandering to established interest groups that they feel only an assault on the system itself can bring about real change.”[2] Indeed, whistleblowing has always sought to ‘assault the system’ by exposing organized secrets to the public in order to bring forth real change.  But apart from the media outlets in which they are inevitably dispersed, do leaked documents occupy a privileged position with respect to truth in our distrusting and cynical world? Or are these leaks just more organized lies atop a sea of deception?  The remainder of this paper is dedicated to examining, albeit briefly, the specific whistleblowing cases of the Pentagon Papers, WikiLeaks and Julian Assange, and the photographs from Abu Ghraib, and how each is received into the factual world.

Daniel Ellsberg was the first figure to be called a whistle-blower in American popular culture. The legitimacy of the Pentagon Papers derived not from the innumerable facts of a 7,000-page report, which of course few can say they have read in full even after it was officially declassified this summer, but from the fact that, because it was officially mandated, it revealed decades of deception by the executive branch aimed at both the public and Congress.  In alignment with Henry David Thoreau’s famous essay “On the Duty of Civil Disobedience,” Ellsberg’s actions affirmed that it is a civic responsibility to disobey an unjust government and let truth be heard. What made Ellsberg a compelling truth-teller were the risks he took—to his career and to his freedom—because they proved there could be no self-interest in the story; the interest really was simply telling the truth.

The ability to decipher what is real and what is a lie is continually being uprooted in our digital age, as it becomes harder and harder to determine which documents are authentic.  And yet the digital age has introduced new cyber spaces that are opening up for action. Last year, with the publication of Collateral Murder,[3] it seemed WikiLeaks and its spokesperson/founder Julian Assange affirmed Arendt’s optimism in human natality, as the small non-profit reimagined the possibilities of political activism and created a new and aggressive approach to inserting itself into politics.  Assange boasted that the organization has provided ‘the world with more classified documents than the rest of the world’s media combined.’[4] However, one year later, Assange faces allegations of sexual misconduct, is in the process of being extradited to Sweden, and is filing suit over the unauthorized publication of his autobiography.  WikiLeaks and Assange now appear in the media far less for whistleblowing leaks than for damaging lawsuits.  In a world where image is so pervasive, the defamation of Assange has led to a discrediting of the information provided by WikiLeaks, regardless of its inherent truth or falsity.

The Abu Ghraib prison photographs, released by Joe Darby to the Criminal Investigation Division (CID) and exposed to the public by Seymour M. Hersh in a New Yorker article, evidenced what by standard definitions is ‘torture’ occurring in the Iraqi prison.  The photographs of torture at Abu Ghraib are perhaps the most important images to come out of the Iraq War because, unlike Collateral Murder, whose content was edited, the photographs of Abu Ghraib spoke in totality.  The premier intellectual debate to arise from the images was not one that questioned the photographs’ legitimacy, but one that questioned what the images depicted: namely, was it “torture”?  To this, Susan Sontag, the great cynic of photography, responded: “to refuse to call what took place in Abu Ghraib—and what has taken place elsewhere in Iraq and in Afghanistan and at Guantanamo Bay—by its true name, torture, is as outrageous as the refusal to call the Rwandan genocide a genocide.”[5] Of course, the alteration of words—from torture to abuse—seeks to shape and defactualize the truth of the image and to alleviate the gravity of the crime. But no word chosen to replace the truth could ever undermine the common-sense reaction the photographs elicited: that it was wrong, that it should be stopped, and that it was hard for us all to witness.

In Regarding the Pain of Others, Sontag ends her essay: “it seems that one picture is worth a thousand words. And even if our leaders choose not to look at them, there will be thousands more snapshots and videos.  Unstoppable.”  The tenacity of images and their multiplication in the digital world mirror the stubbornness of facts, as Arendt affirms: “their fragility is oddly combined with great resiliency” (TP 259).  Though it is impossible to quantify the impact of each of the three whistleblowing cases, whistle-blowers force the public to undertake the hard task of bearing witness.  ‘Bear’ derives from the PIE root bher, meaning to ‘give birth’ or to ‘carry the burden.’[6]  In bearing witness, we carry the burden of unpleasant truths just as we give life to the permanence of the world by establishing a common reality: “what is at stake is survival, the perseverance of existence, and no human world destined to outlast the short life span of mortals within it will ever be able to survive without men willing to do what Herodotus was the first to undertake consciously—namely, to say what is” (Truth and Politics, 229).



[1] Hannah Arendt, Lying in Politics

[2] New York Times, http://www.nytimes.com/2011/09/28/world/as-scorn-for-vote-grows-protests-surge-around-globe.html?pagewanted=1&_r=3&ref=world

[3] It should be noted that Collateral Murder was not

[4] TED Talks, Julian Assange interview

[5] Sontag, Regarding the Pain of Others

[6] http://www.etymonline.com/index.php?term=bear