Hannah Arendt Center for Politics and Humanities
7Apr/14

Amor Mundi 4/6/14


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

Oligarchs, Inc.

Over at SCOTUSblog, Burt Neuborne writes that “American democracy is now a wholly owned subsidiary of Oligarchs, Inc.” The good news, Neuborne reminds us, is that “this too shall pass.” After a fluid and trenchant review of the case and the recent decision declaring limits on aggregate giving to political campaigns to be unconstitutional, Neuborne writes: “Perhaps most importantly, McCutcheon illustrates two competing visions of the First Amendment in action. Chief Justice Roberts’s opinion turning American democracy over to the tender mercies of the very rich insists that whether aggregate contribution limits are good or bad for American democracy is not the Supreme Court’s problem. He tears seven words out of the forty-five words that constitute Madison’s First Amendment – “Congress shall make no law abridging . . . speech”; ignores the crucial limiting phrase “the freedom of,” and reads the artificially isolated text fragment as an iron deregulatory command that disables government from regulating campaign financing, even when deregulation results in an appalling vision of government of the oligarchs, by the oligarchs, and for the oligarchs that would make Madison (and Lincoln) weep. Justice Breyer’s dissent, seeking to retain some limit on the power of the very rich to exercise undue influence over American democracy, views the First Amendment, not as a simplistic deregulatory command, but as an aspirational ideal seeking to advance the Founders’ effort to establish a government of the people, by the people, and for the people for the first time in human history. For Justice Breyer, therefore, the question of what kind of democracy the Supreme Court’s decision will produce is at the center of the First Amendment analysis. For Chief Justice Roberts, it is completely beside the point. I wonder which approach Madison would have chosen. As a nation, we’ve weathered bad constitutional law before. Once upon a time, the Supreme Court protected slavery. Once upon a time the Supreme Court blocked minimum-wage and maximum-hour legislation. Once upon a time, the Supreme Court endorsed racial segregation, denied equality to women, and jailed people for their thoughts and associations. This, too, shall pass. The real tragedy would be for people to give up on taking our democracy back from the oligarchs. Fixing the loopholes in disclosure laws, and public financing of elections are now more important than ever. Moreover, the legal walls of the airless room are paper-thin. Money isn’t speech at obscenely high levels. Protecting political equality is a compelling interest justifying limits on uncontrolled spending by the very rich. And preventing corruption means far more than stopping quid pro quo bribery. It means the preservation of a democracy where the governed can expect their representatives to decide issues independently, free from economic serfdom to their paymasters. The road to 2016 starts here. The stakes are the preservation of democracy itself.” It is important to remember that the issue is not really partisan: both parties are corrupted by the influx of huge amounts of money. Democracy is in danger not because one party will buy the election, but because the oligarchs on both sides are crowding out grassroots participation. This is an essay you should read in full. For a plain English review of the decision, read this from SCOTUSblog. And for a Brief History of Campaign Finance, check out this from the Arendt Center Archives.

Saving Democracy

Zephyr Teachout, the most original and important thinker about the constitutional response to political corruption, has an op-ed in the Washington Post: “We should take this McCutcheon moment to build a better democracy. The plans are there. Rep. John Sarbanes (D-Md.) has proposed something that would do more than fix flaws. H.R. 20, which he introduced in February, is designed around a belief that federal political campaigns should be directly funded by millions of passionate, but not wealthy, supporters. A proposal in New York would do a similar thing at the state level.” Teachout spoke at the Arendt Center two years ago after the Citizens United case. Afterwards, Roger Berkowitz wrote: “It is important to see that Teachout is really pointing out a shift between two alternate political theories. First, she argues that for the founders and for the United States up until the mid-20th century, the foundational value that legitimates our democracy is the confidence that our political system is free from corruption. Laws that restrict lobbying or penalize bribery are uncontroversial and constitutional, because they recognize core—if not the core—constitutional values. Second, Teachout sees that increasingly free speech has replaced anti-corruption as the foundational constitutional value in the United States. Beginning in the 20th century and culminating in the Court's decision in Citizens United, the Court gradually accepted the argument that the only way to guarantee a legitimate democracy is to give unlimited protection to the marketplace of ideas. Put simply, truth is nothing else but the product of free debate and any limits on debate, especially political debate, will delegitimize our politics.” Read the entirety of his commentary here. Watch a recording of Teachout’s speech here.

The Forensic Gaze

A new exhibition opened two weeks ago at the Haus der Kulturen der Welt in Berlin that examines the changing ways in which states police and govern their subjects through forensics, and how certain aesthetic-political practices have also been used to challenge or expose states. Curated by Anselm Franke and Eyal Weizman, Forensis “raises fundamental questions about the conditions under which spatial and material evidence is recorded and presented, and tests the potential of new types of evidence to expand our juridical imagination, open up forums for political dispute and practice, and articulate new claims for justice.” Harry Burke and Lucy Chien review the exhibition on Rhizome: “The exhibition argues that forensics is a political practice primarily at the point of interpretation. Yet if the exhibition is its own kind of forensic practice, then it is the point of the viewer's engagement where the exhibition becomes significant. The underlying argument in Forensis is that the object of forensics should be as much the looker and the act of looking as the looked-upon.” If you want to read more, we suggest Mengele’s Skull: The Advent of a Forensic Aesthetics.

Empathy's Mess


In an interview, Leslie Jamison, author of the very recently published The Empathy Exams, offers up a counterintuitive defense of empathy: “I’m interested in everything that might be flawed or messy about empathy — how imagining other lives can constitute a kind of tyranny, or artificially absolve our sense of guilt or responsibility; how feeling empathy can make us feel we’ve done something good when we actually haven’t. Zizek talks about how 'feeling good' has become a kind of commodity we purchase for ourselves when we buy socially responsible products; there’s some version of this inoculation logic — or danger — that’s possible with empathy as well: we start to like the feeling of feeling bad for others; it can make us feel good about ourselves. So there’s a lot of danger attached to empathy: it might be self-serving or self-absorbed; it might lead our moral reasoning astray, or supplant moral reasoning entirely. But do I want to defend it, despite acknowledging this mess? More like: I want to defend it by acknowledging this mess. Saying: Yes. Of course. But yet. Anyway.”

What the Language Does

In a review of Romanian writer Herta Muller's recently translated collection Christina and Her Double, Costica Bradatan points to what changing language can do, what it can't do, and how those who attempt to manipulate it may also underestimate its power: “Behind all these efforts was the belief that language can change the real world. If religious terms are removed from language, people will stop having religious feelings; if the vocabulary of death is properly engineered, people will stop being afraid of dying. We may smile today, but in the long run such policies did produce a change, if not the intended one. The change was not in people’s attitudes toward death or the afterworld, but in their ability to make sense of what was going on. Since language plays such an important part in the construction of the self, when the state subjects you to constant acts of linguistic aggression, whether you realize it or not, your sense of who you are and of your place in the world are seriously affected. Your language is not just something you use, but an essential part of what you are. For this reason any political disruption of the way language is normally used can in the long run cripple you mentally, socially, and existentially. When you are unable to think clearly you cannot act coherently. Such an outcome is precisely what a totalitarian system wants: a population perpetually caught in a state of civic paralysis.”

Humanities and Human Life

Scott Samuelson, author of "The Deepest Human Life: An Introduction to Philosophy for Everyone," has this paean to the humanities in the Wall Street Journal: “I once had a student, a factory worker, who read all of Schopenhauer just to find a few lines that I quoted in class. An ex-con wrote a searing essay for me about the injustice of mandatory minimum sentencing, arguing that it fails miserably to live up to either the retributive or utilitarian standards that he had studied in Introduction to Ethics. I watched a preschool music teacher light up at Plato's "Republic," a recovering alcoholic become obsessed by Stoicism, and a wayward vet fall in love with logic (he's now finishing law school at Berkeley). A Sudanese refugee asked me, trembling, if we could study arguments concerning religious freedom. Never more has John Locke—or, for that matter, the liberal arts—seemed so vital to me.”

Caritas and Felicitas

Arthur C. Brooks makes the case that charitable giving makes us happier and even more successful: “In 2003, while working on a book about charitable giving, I stumbled across a strange pattern in my data. Paradoxically, I was finding that donors ended up with more income after making their gifts. This was more than correlation; I found solid evidence that giving stimulated prosperity…. Why? Charitable giving improves what psychologists call “self-efficacy,” one’s belief that one is capable of handling a situation and bringing about a desired outcome. When people give their time or money to a cause they believe in, they become problem solvers. Problem solvers are happier than bystanders and victims of circumstance.” Do yourself a favor, then, and become a member of the Arendt Center.

Featured Events

The Black Notebooks (1931-1941):

What Heidegger's Denktagebuch reveals about his thinking during the Nazi regime.

April 8, 2014

Goethe Institut, NYC

Learn more here.


"My Name is Ruth."

An Evening with Bard Big Read and Marilynne Robinson's Housekeeping

Excerpts will be read by Neil Gaiman, Nicole Quinn, & Mary Caponegro

April 23, 2014

Richard B. Fisher Center, Bard College

Learn more here.


From the Hannah Arendt Center Blog

This week on the blog, our Quote of the Week comes from Martin Wager, who views Arendt's idea of world alienation through the lens of modern-day travel. Josh Kopin looks at the Stanford Literary Lab's idea of using computers and data as tools for literary criticism. In the Weekend Read, Roger Berkowitz ponders the slippery slope of using the First Amendment as the basis for campaign finance reform.

2Apr/14

Reading With Your Computer


Franco Moretti, a literature professor and founder of the Stanford Literary Lab, believes in something called "computational criticism": the use of computers to aid in the understanding of literature. Joshua Rothman's recent profile of Moretti has provoked a lot of response, most of it defending traditional literary criticism from the digital barbarians at the gates. Moretti's defenders argue, however, that his critics have failed to understand a crucial difference between his work and what they're worried it might supplant: "The basic idea in Moretti’s work is that, if you really want to understand literature, you can’t just read a few books or poems over and over (“Hamlet,” “Anna Karenina,” “The Waste Land”). Instead, you have to work with hundreds or even thousands of texts at a time. By turning those books into data, and analyzing that data, you can discover facts about literature in general—facts that are true not just about a small number of canonized works but about what the critic Margaret Cohen has called the 'Great Unread.'"
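To make the "books into data" idea concrete, consider what the first step of such an analysis typically looks like. The sketch below is a minimal, hypothetical illustration in Python, not Moretti's actual method (the corpus directory and the bare word-counting approach are our own assumptions): it tallies word frequencies across a folder of plain-text novels, the kind of raw aggregate from which corpus-scale patterns are then drawn.

```python
# A minimal sketch of "distant reading": aggregate word counts across a
# corpus of plain-text files. Illustrative only; real computational
# criticism uses far richer features (syntax, topics, publication metadata).
import re
from collections import Counter
from pathlib import Path

def corpus_word_counts(corpus_dir: str) -> Counter:
    """Return combined word frequencies for every .txt file in corpus_dir."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z']+", text))
    return counts

if __name__ == "__main__":
    # "novels/" is a hypothetical directory; any folder of .txt files works.
    for word, n in corpus_word_counts("novels/").most_common(20):
        print(f"{word}\t{n}")
```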


The truth Moretti is after, however, has nothing to do with literature itself, with the blood-curdling insights of tragedy or the personal insights of the novel's hero. What Moretti seeks is a better understanding of all the other texts, of the entirety of texts and the overarching literariness of a period or of history as a whole. One could say that rather than supplanting the traditional literary critic, Moretti's work will aid the literary historian, if only by giving a potentially comprehensive picture of any given zeitgeist. That is true, so far as it goes. But as the already declining number of literature students is in part siphoned off into alternative studies of literature that ignore and even disdain the surprising and irreducible momentary shock of insight, the decline of the literary sensibility will only accelerate. This is hardly to condemn Moretti and his data-oriented approach to literature as a reservoir of information about mass society; we ought, nevertheless, to find in the popularity of such trends a provocation to remind ourselves why literature is meant to be read by humans rather than machines.

RB h/t Josh Kopin

24Mar/14

Amor Mundi 3/23/14

Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

What Silver Knows

Data journalist Nate Silver reopened his FiveThirtyEight blog this past week, after leaving the New York Times last year. Although the website launched with a full slate of articles, the opening salvo is a manifesto he calls "What The Fox Knows," referencing the maxim from the poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” For Silver, this means, “We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.” What separates FiveThirtyEight is its focus on big data, the long trail of information left by everything we do in a digital world. From big data, Silver believes he can predict outcomes more accurately than traditional journalism, and that he will also be better able to explain and predict human behavior. “Indeed, as more human behaviors are being measured, the line between the quantitative and the qualitative has blurred. I admire Brian Burke, who led the U.S. men’s hockey team on an Olympic run in 2010 and who has been an outspoken advocate for gay-rights causes in sports. But Burke said something on the hockey analytics panel at the MIT Sloan Sports Analytics Conference last month that I took issue with. He expressed concern that statistics couldn’t measure a hockey player’s perseverance. For instance, he asked, would one of his forwards retain control of the puck when Zdeno Chara, the Boston Bruins’ intimidating 6’9″ defenseman, was bearing down on him? The thing is, this is something you could measure. You could watch video of all Bruins games and record how often different forwards kept control of the puck. Soon, the NHL may install motion-tracking cameras in its arenas, as other sports leagues have done, creating a record of each player’s x- and y-coordinates throughout the game and making this data collection process much easier.” As the availability of data increases beyond comprehension, humans will necessarily turn the effort of analysis over to machines running algorithms. Predictions and simulations will abound and human actions—whether voting for a president or holding on to a hockey puck—will increasingly appear to be predictable behavior. The fact that actions are never fully predictable is already fading from view; we have become accustomed to knowing how things will end before they begin. At the very least, Nate Silver and his team at FiveThirtyEight will try to “critique incautious uses of statistics when they arise elsewhere in news coverage.”
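Silver's rejoinder that perseverance "is something you could measure" reduces, once the events are logged, to a very small computation. The following sketch is a hypothetical illustration (the event records are invented, and no real NHL data feed is assumed to look like this): given a log of puck battles, it computes each forward's retention rate under pressure.

```python
# Hypothetical sketch: puck-retention rates from logged events.
# Each record pairs a forward with whether he kept control of the puck
# while being pressured; the data here is invented for illustration.
from collections import defaultdict

events = [
    ("Forward A", True), ("Forward A", False), ("Forward A", True),
    ("Forward B", False), ("Forward B", True), ("Forward B", False),
]

kept = defaultdict(int)    # pressured touches where the puck was retained
total = defaultdict(int)   # all pressured touches per player
for player, retained in events:
    total[player] += 1
    kept[player] += int(retained)

for player in sorted(total):
    rate = kept[player] / total[player]
    print(f"{player}: retained the puck {rate:.0%} of {total[player]} pressured touches")
```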

All in All, Another Tweet in the Wall

Author Teju Cole recently composed and released an essay called “A Piece of The Wall” exclusively on Twitter. In an interview, along with details about the technical aspects of putting together what's more like a piece of radio journalism than a piece of print journalism, Cole notes that there may be a connection between readership and change: "I’m not getting my hopes up, but the point of writing about these things, and hoping they reach a big audience, has nothing to do with “innovation” or with “writing.” It’s about the hope that more and more people will have their conscience moved about the plight of other human beings. In the case of drones, for example, I think that all the writing and sorrow about it has led to a scaling back of operations: It continues, it’s still awful, but the rate has been scaled back, and this has been in specific response to public criticism. I continue to believe the emperor has a soul."

A Religious Age?

Peter Berger has a thoughtful critique of Charles Taylor’s A Secular Age, one that accepts Taylor’s philosophical premise but denies its sociological reality. “I think that Taylor’s magnum opus makes a very significant contribution, though I disagree with its central proposition: We don’t live in a “secular age”; rather in most of the world we live in a turbulently religious age (with the exception of a few places, like university philosophy departments in Canada and football clubs in Britain). (Has Taylor been recently in Nepal? Or for that matter in central Texas?) Taylor is a very sophisticated philosopher, not an empirically oriented sociologist of religion. It so happens that we now have a sizable body of empirical data from much of the world (including America and Europe) on what ordinary religious people actually believe and how they relate their faith to various secular definitions of reality. Let me just mention the rich work of Robert Wuthnow, Nancy Ammerman and Tanya Luhrmann in the US, and Grace Davie, Linda Woodhead and Daniele Hervieu-Leger in Europe. There is a phrase that sociology students learn in the first year of graduate study—frequency distribution: It is important for me to understand just what X is; it is even more important for me to know how much X there is at a given time in a given place. The phrase is to be recommended to all inclined to make a priori statements about anything. In this case, I think that Taylor has made a very useful contribution in his careful description of what he calls “the immanent frame” (he also calls it “exclusive humanism”)—a sense of reality that excludes all references to transcendence or anything beyond mundane human experience. Taylor also traced the historical development of this definition of reality.” Maybe the disagreement is more subtle: Religion continues in the secular age, but it is more personal. Quite simply, churches were once the tallest and most central buildings, representing the center of public and civic life. That is no longer the case in Europe; nor in Nepal.

Looking Under the Skin

Anthony Lane in The New Yorker asks the question, “Why should we watch Scarlett Johansson with any more attention than we pay to other actors?” His answer concerns Johansson’s role and performance in her new movie “Under the Skin.” Lane is nearly obsessed with Johansson’s ability to reveal nothing and everything with a look—what he calls the “Johansson look, already potent and unnerving. She was starting to poke under the skin.” He continues describing Johansson in a photo shoot: ““Give me nothing,” Dukovic said, and Johansson wiped the expression from her face, saying, “I’ll just pretend to be a model.” Pause. “I rarely have anything inside me.” Then came the laugh: dry and dirty, as if this were a drama class and her task was to play a Martini. Invited to simulate a Renaissance picture, she immediately slipped into a sixteenth-century persona, pretending to hold a pose for a painter and kvetching about it: “How long do I have to sit here for? My sciatica is killing me.” You could not wish for a more plausible insight into the mind-set of the Mona Lisa. A small table and a stool were provided, and Johansson sat down with her arms folded in front of her. “I want to look Presidential,” she declared. “I want this to be my Mt. Rushmore portrait.” Once more, Dukovic told her what to show: “Absolutely nothing.” Not long after, he and his team began to pack up. The whole shoot had taken seventeen minutes. She had given him absolutely everything. We should not be surprised by this. After all, film stars are those unlikely beings who seem more alive, not less, when images are made of them; who unfurl and reach toward the light, instead of seizing up, when confronted by a camera; and who, by some miracle or trick, become enriched versions of themselves, even as they ramify into other selves on cue. Clarence Sinclair Bull, the great stills photographer at M-G-M, said of Greta Garbo that “she seems to feel the emotion for each pose as part of her personality.” From the late nineteen-twenties, he held a near-monopoly on pictures of Garbo, so uncanny was their rapport. “All I did was to light the face and wait. And watch,” he said. Why should we watch Johansson with any more attention than we pay to other actors?”

Fantasizing About Being Lost

Geoffrey Gray suggests a reason why we've become obsessed with the missing plane: "Wherever the Malaysia Airlines plane is, it found a hiding place. And the longer it takes investigators to discover where it is and what went wrong, the longer we have to indulge in the fantasy that we too might be able to elude the computers tracking our clicks, text messages, and even our movements. Hidden from the rest of the world, if only for an imagined moment, we feel what the passengers of Flight 370 most likely don't: safe."


This Week on the Hannah Arendt Center Blog

This week on the blog, learn more about the Program Associate position now available at the Arendt Center. In the Quote of the Week, Ian Zuckerman looks at the role some of Arendt's core themes play in Kubrick's famed nuclear satire, "Dr. Strangelove." And HannahArendt.net issues a call for papers for its upcoming 'Justice and Law' edition, to be released in August of this year.

10Mar/14

Amor Mundi Newsletter 3/9/14


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

Why the Jews?

Anthony Grafton calls David Nirenberg’s Anti-Judaism “one of the saddest stories, and one of the most learned, I have ever read.” Grafton knows that Anti-Judaism “is certainly not the first effort to survey the long grim history of the charges that have been brought against the Jews by their long gray line of self-appointed prosecutors.” What makes this account of the long history of hatred of the Jews so compelling is that Nirenberg asks the big question: Why the Jews? “[Nirenberg] wants to know why: why have so many cultures and so many intellectuals had so much to say about the Jews? More particularly, he wants to know why so many of them generated their descriptions and explanations of Jewishness not out of personal knowledge or scholarly research, but out of thin air—and from assumptions, some inherited and others newly minted, that the Jews could be wholly known even to those who knew no Jews.” The question recalls the famous joke told during the Holocaust, especially amongst Jews in concentration camps. Here is one formulation of the joke from Antisemitism, the first book in the trilogy that comprises Hannah Arendt’s magnum opus, The Origins of Totalitarianism: “An antisemite claimed that the Jews had caused the war; the reply was: Yes, the Jews and the bicyclists. Why the bicyclists? asks the one. Why the Jews? asks the other.” Read more on the Arendt Center blog.

The SAT is Part Hoax, Part Fraud

News that the SAT is about to undergo a makeover leaves Bard College President Leon Botstein unimpressed: “The changes recently announced by the College Board to its SAT college entrance exam bring to mind the familiar phrase “too little, too late.” The alleged improvements are motivated not by any serious soul searching about the SAT but by the competition the College Board has experienced from its arch rival, the ACT, the other major purveyor of standardized college entrance exams. But the problems that plague the SAT also plague the ACT. The SAT needs to be abandoned and replaced. The SAT has a status as a reliable measure of college readiness it does not deserve. The College Board has successfully marketed its exams to parents, students, colleges and universities as arbiters of educational standards. The nation actually needs fewer such exam schemes; they damage the high school curriculum and terrify both students and parents. The blunt fact is that the SAT has never been a good predictor of academic achievement in college. High school grades adjusted to account for the curriculum and academic programs in the high school from which a student graduates are. The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading) put forward by faceless test designers who are rarely eminent experts. No scientist, engineer, writer, psychologist, artist, or physician—and certainly no scholar, and therefore no serious university faculty member—pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.”

What Does the West Have to Prove?

Foreign policy types are up in arms—not over Russia’s pending annexation of Crimea, but over the response in the West. By yelling loudly but doing nothing in Syria and now in Ukraine, America and Europe are losing all credibility. The insinuation is clear. If we don’t draw the line at Crimea, we will embolden Putin in Poland. Much as in the 1930s, the current NATO alliance seems unwilling to stand up for anything on principle if the costs are more than a few photo opportunities and some angry tweets. According to The American Interest, “Putin believes the West is decadent, weak, and divided. The West needs to prove him wrong.” And in Politico, Ben Judah writes: “Russia’s rulers have been buying up Europe for years. They have mansions and luxury flats from London’s West End to France’s Côte d’Azur. Their children are safe at British boarding and Swiss finishing schools. And their money is squirrelled away in Austrian banks and British tax havens. Putin’s inner circle no longer fear the European establishment. They once imagined them all in MI6. Now they know better. They have seen firsthand how obsequious Western aristocrats and corporate tycoons suddenly turn when their billions come into play. They now view them as hypocrites—the same European elites who help them hide their fortunes.”

Fiction is Not a Means

In The New York Times Magazine, Siddhartha Deb profiles Arundhati Roy, the Indian writer best known in the West for her 1997 novel The God of Small Things. Though the book made Roy into a national icon, her political essays – in which she has addressed, among other issues, India’s occupation of Kashmir, the “lunacy” of India’s nuclear programme, and the paramilitary operations in central India against the ultraleft guerrillas and indigenous populations – have angered many nationalist and upper-class Indians for their fierce critiques. Roy’s most recent work, The Doctor and the Saint, is an introduction to Dr. B.R. Ambedkar’s famous 1936 essay “The Annihilation of Caste” that is likely to spark controversy over her rebuke of Gandhi, who wanted to abolish untouchability but not caste. How does Roy see her fiction in relation to her politics? “I’m not a person who likes to use fiction as a means,” she says. “I think it’s an irreducible thing, fiction. It’s itself. It’s not a movie, it’s not a political tract, it’s not a slogan. The ways in which I have thought politically, the proteins of that have to be broken down and forgotten about, until it comes out as the sweat on your skin.” You can read Deb’s profile of Roy here, and an excerpt from The Doctor and the Saint here.

Whither the MOOC Participant

Comparing the MOOC and the GED, Michael Guerreiro wonders whether participants approach both programs with the same sense of purpose. The answer, he suspects, is no: "The data tells us that very few of the students who enroll in a MOOC will ever reach its end. In the ivy, brick, and mortar world from which MOOCs were spun, that would be damning enough. Sticking around is important there; credentials and connections reign, starting with the high-school transcript and continuing through graduate degrees. But students may go into an online course knowing that a completion certificate, even offered under the imprimatur of Harvard or UPenn, doesn’t have the same worth. A recent study by a team of researchers from Coursera found that, for many MOOC students, the credential isn’t the goal at all. Students may treat the MOOC as a resource or a text rather than as a course, jumping in to learn new code or view an enticing lecture and back out whenever they want, just as they would while skimming the wider Web. For many, MOOCs may be just one more Internet tool or diversion; in the Coursera study, the retention rate among committed students for a typical class was shown to be roughly on par with that of a mobile app. And the London Times reported last week that, when given the option to get course credit for their MOOC (for a fee), none of the thousand or so students who enrolled in a British online class did.” A potent reminder that while MOOCs may indeed succeed and may even replace university education for many people, they are not so much about education as about a combination of entertainment, credential, and manual. These are each important activities, but they are not what liberal arts colleges should be about. The hope in the rise of MOOCs, as we’ve written before, is that they help return college to its mission: to teach critical thinking and expose students to the life of the mind.

The Afterlife of the American University

Noam Chomsky, speaking to the Adjunct Faculty Association of the United Steelworkers, takes issue with the idea that the American university was once living and is now undead, and seeks a way forward: "First of all, we should put aside any idea that there was once a “golden age.” Things were different and in some ways better in the past, but far from perfect. The traditional universities were, for example, extremely hierarchical, with very little democratic participation in decision-making. One part of the activism of the 1960s was to try to democratize the universities, to bring in, say, student representatives to faculty committees, to bring in staff to participate. These efforts were carried forward under student initiatives, with some degree of success. Most universities now have some degree of student participation in faculty decisions. And I think those are the kinds of things we should be moving towards: a democratic institution, in which the people involved in the institution, whoever they may be (faculty, students, staff), participate in determining the nature of the institution and how it runs; and the same should go for a factory. These are not radical ideas."

From the Hannah Arendt Center Blog

This week on the blog, Anna Metcalfe examines the multi-dimensional idea of action which Arendt discusses in The Human Condition. And in the Weekend Read, entitled 'Why the Jews?', Roger Berkowitz delves into anti-Judaism and its deep-seated roots in Western civilization.

Featured Events


Bard Big Read

Featuring Housekeeping by Marilynne Robinson.

Bard College partners with five local libraries for six weeks of activities, performances, and discussions scheduled throughout the Hudson Valley.

Learn more here.

'What Europe? Ideals to Fight for Today'

The HAC co-sponsors the second annual conference with Bard College in Berlin

March 27-28, 2014

ICI Berlin

 

Learn more here.

17Feb/14

Amor Mundi 2/16/14


Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.

Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.

The Young and Unexceptional

According to Rich Lowry and Ramesh Ponnuru, “The survival of American exceptionalism as we have known it is at the heart of the debate over Obama’s program. It is why that debate is so charged.” Mitt Romney repeated this same line during his failed bid to unseat the President, arguing that President Obama “doesn't have the same feelings about American exceptionalism that we do.” American exceptionalism—long a sociological concept used to describe qualities that distinguished American cultural and political institutions—has become a political truncheon. Now comes Peter Beinart, writing in the National Journal, to argue that the conservatives are half correct. It is true that American exceptionalism is threatened and in decline. But the cause is not President Obama. Beinart argues that the real cause of the decline of exceptionalist feeling in the United States is conservatism itself. Here is Beinart on one way the current younger generation is an exception to the tradition of American exceptionalism: “For centuries, observers have seen America as an exception to the European assumption that modernity brings secularism. “There is no country in the world where the Christian religion retains a greater influence over the souls of men than in America,” de Tocqueville wrote. In his 1996 book, American Exceptionalism: A Double-Edged Sword, Seymour Martin Lipset quoted Karl Marx as calling America “preeminently the country of religiosity,” and then argued that Marx was still correct. America, wrote Lipset, remained “the most religious country in Christendom.” But in important ways, the exceptional American religiosity that Gingrich wants to defend is an artifact of the past. The share of Americans who refuse any religious affiliation has risen from one in 20 in 1972 to one in five today. Among Americans under 30, it's one in three. According to the Pew Research Center, millennials—Americans born after 1980—are more than 30 percentage points less likely than seniors to say that "religious faith and values are very important to America's success." And young Americans don't merely attend church far less frequently than their elders. They also attend far less than young people did in the past. "Americans," Pew notes, "do not generally become more [religiously] affiliated as they move through the life cycle"—which means it's unlikely that America's decline in religious affiliation will reverse itself simply as millennials age. In 1970, according to the World Religion Database, Europeans were over 16 percentage points more likely than Americans to eschew any religious identification. By 2010, the gap was less than half of 1 percentage point. According to Pew, while Americans are today more likely to affirm a religious affiliation than people in Germany or France, they are actually less likely to do so than Italians and Danes.” Read more on Beinart and American exceptionalism in the Weekend Read.

Humans and the Technium

In this interview, Kevin Kelly, one of the founders of Wired magazine, explains his concept of the “technium,” or the whole system of technology that has developed over time and which, he argues, has its own biases and tendencies “inherently outside of what humans like us want.” One thing technology wants is to watch us and to track us. Kelly writes: “How can we have a world in which we are all watching each other, and everybody feels happy? I don't see any counter force to the forces of surveillance and self-tracking, so I'm trying to listen to what the technology wants, and the technology is suggesting that it wants to be watched. What the Internet does is track, just like what the Internet does is to copy, and you can't stop copying. You have to go with the copies flowing, and I think the same thing about this technology. It's suggesting that it wants to monitor, it wants to track, and that you really can't stop the tracking. So maybe what we have to do is work with this tracking—try to bring symmetry or have areas where there's no tracking in a temporary basis. I don't know, but this is the question I'm asking myself: how are we going to live in a world of ubiquitous tracking?” Asking such questions is where humans fit into the technium world. “In a certain sense,” he says, “what becomes really valuable in a world running under Google's reign are great questions, and that’s something that for a long time humans will be better at than machines. Machines are for answers; humans are for questions.”

Literature Against Consumer Culture 

Taking issue with a commentator's claim that The Paris Review's use of the word "crepuscular" (adj., resembling twilight) was elitist, Eleanor Catton suggests that the anti-critical attitude of contemporary readers arises out of consumer culture: "The reader who is outraged by being “forced” to look up an unfamiliar word — characterising the writer as a tyrant, a torturer — is a consumer outraged by inconvenience and false advertising. Advertising relies on the fiction that the personal happiness of the consumer is valued above all other things; we are reassured in every way imaginable that we, the customers, are always right." Literature, she says, resists this attitude, and, in fact cannot be elitist at all: "A book cannot be selective of its readership; nor can it insist upon the conditions under which it is read or received. The degree to which a book is successful depends only on the degree to which it is loved. All a starred review amounts to is an expression of brand loyalty, an assertion of personal preference for one brand of literature above another. It is as hopelessly beside the point as giving four stars to your mother, three stars to your childhood, or two stars to your cat."

Global Corruption

Vladislav Inozemtsev reviews Laurence Cockcroft’s book Global Corruption. “The book’s central argument is that corruption has political roots, which Cockcroft identifies as the “merging of elites.” Surveying the mechanisms of top-level decision-making from Russia to Brazil, to Peru and India, as well as in many other countries, he discerns a pattern: Politicians today often act as entrepreneurs, surround themselves with sycophants and deputies, and so navigate the entire political process as they would any commercial business. The hallmarks of a corrupt society are the widespread leveraging of wealth to secure public office; the leveraging of such authority to secure various kinds of privileges; and the interplay of both to make even bigger money. Simply put, corruption is a transformation of public service into a specific kind of entrepreneurship.”

Amazon's Bait and Switch

George Packer takes a look at Amazon's role in the book business, noting that its founder, Jeff Bezos, knew from the start that book sales were only the lure; Amazon's real business was Big Data, a big deal in an industry that speaks to people's hearts and minds as well as their wallets. Still, "Amazon remains intimately tangled up in books. Few notice if Amazon prices an electronics store out of business (except its staff); but, in the influential, self-conscious world of people who care about reading, Amazon’s unparalleled power generates endless discussion, along with paranoia, resentment, confusion, and yearning. For its part, Amazon continues to expend considerable effort both to dominate this small, fragile market and to win the hearts and minds of readers. To many book professionals, Amazon is a ruthless predator. The company claims to want a more literate world—and it came along when the book world was in distress, offering a vital new source of sales. But then it started asking a lot of personal questions, and it created dependency and harshly exploited its leverage; eventually, the book world realized that Amazon had its house keys and its bank-account number, and wondered if that had been the intention all along."

Ready or Not

Ta-Nehisi Coates, in the wake of NFL prospect Michael Sam's announcement that he is gay, considers how the concept of readiness is backwards: "The question which we so often have been offered—is the NFL ready for a gay player?—is backwards. Powerful interests are rarely “ready” for change, so much as they are assaulted by it. We refer to barriers being "broken" for a reason. The reason is not because great powers generally like to unbar the gates and hold a picnic in the honor of the previously excluded. The NFL has no moral right to be "ready" for a gay player, which is to say it has no right to discriminate against gay men at its leisure which anyone is bound to respect.”

Counter Reformation

This week, the magazine Jacobin released Class Action, a handbook for activist teachers, set against school reform and financed using the Kickstarter crowdfunding platform. One of the many essays contained within is Dean Baker's "Unremedial Education," which contains one of the handbook's major theses, an important reminder for those who are interested in education as a route to both the life of the mind and the success of the person: "Education is tremendously valuable for reasons unrelated to work and income. Literacy, basic numeracy skills, and critical thinking are an essential part of a fulfilling life. Insofar as we have children going through school without developing these skills, it is an enormous failing of society. Any just society would place a top priority on ensuring that all children learn such basic skills before leaving school. However, it clearly is not the case that plausible increases in education quality and attainment will have a substantial impact on inequality."

From the Hannah Arendt Center Blog

This week on the blog, Roger Berkowitz asks "Why Think?". And in the Weekend Read, Berkowitz reflects on the loss of American exceptionalism.

16Dec/13

The Laboratory as Anti-Environment


"Seen from the perspective of the "real" world, the laboratory is the anticipation of a changed environment."

-Hannah Arendt, The Life of the Mind

I find this quote intriguing in that its reference to environments and environmental change speaks to the fact that Arendt's philosophy was essentially an ecological one, indeed one that is profoundly media ecological. The quote appears in a section of The Life of the Mind entitled "Science and Common Sense," in which Arendt argues that the practice of science is quite distinct from thinking as a philosophical activity.


As she explains:

Thinking, no doubt, plays an enormous role in every scientific enterprise, but it is a role of a means to an end; the end is determined by a decision about what is worthwhile knowing, and this decision cannot be scientific.

Here Arendt invokes a variation on Gödel's incompleteness theorem in mathematics, noting that science cannot justify itself on scientific grounds, but rather must somehow depend on something outside of and beyond itself. Perhaps more to the point, science, especially as associated with empiricism, cannot be divorced from concrete reality, and does not function only in the abstract realm of ideas that Plato insisted was the only true reality.

The transformation of truth into mere verity results primarily from the fact that the scientist remains bound to the common sense by which we find our bearings in a world of appearances. Thinking withdraws radically and for its own sake from this world and its evidential nature, whereas science profits from a possible withdrawal for the sake of specific results.

It is certainly the case that scientific truth is always contingent, tentative, open to refutation, as Karl Popper explained.  Scientific truth is never absolute, never anything more than a map of some other territory, a map that needs to be continually tested and reviewed, updated and revised, as Alfred Korzybski explained by way of establishing his discipline of general semantics. Even the so-called laws of nature and physics need not be considered immutable, but may be subject to change and evolution, as Lee Smolin argues in his insightful book, Time Reborn.

Scientists are engaged in the process of abstracting, insofar as they take the data gained by empirical investigation and make generalizations in the form of theories and hypotheses, but this process of induction cannot be divorced from concrete reality, from the world of appearances. Science may be used to test, challenge, and displace common sense, but it operates on the same level, as a distilled form of common sense, rather than something qualitatively different, a status Arendt reserves for the special activity of thinking associated with philosophy.

Arendt goes on to argue that both common sense and scientific speculation lack "the safeguards inherent in sheer thinking, namely thinking's critical capacity." Among these safeguards is the capacity for moral judgment, whose absence became horrifically evident in the ways Nazi Germany used science to justify its genocidal policies and actions. Auschwitz did not represent a retrieval of tribal violence, but one of the ultimate expressions of the scientific enterprise in action. And the same might be said of Hiroshima and Nagasaki, holding aside whatever might be said to justify the use of the atomic bomb to bring the Second World War to a speedy conclusion. In remaining close to the human lifeworld, science abandons the very capacity that makes us human, that makes human life and human consciousness unique.

The story of modern science is in fact a story of shifting alliances. Science begins as a branch of philosophy, as natural philosophy. Indeed, philosophy itself is generally understood to begin with the pre-Socratics sometimes referred to as Ionian physicists, i.e., Thales, Anaximander, Heraclitus, who first posited the concept of elements and atoms. Both science and philosophy therefore coalesce during the first century that followed the introduction of the Greek alphabet and the emergence of a literate culture in the ancient Greek colonies in Asia Minor.

And just as ancient science is alphabetic in its origins, modern science begins with typography, as the historian Elizabeth Eisenstein explains in her exhaustive study, The Printing Press as an Agent of Change in Early Modern Europe. Simply by making the writings of natural philosophers easily available through the distribution of printed books, scholars were able to compare and contrast what different philosophers had to say about the natural world, and uncover their differences of opinion and contradictions. And this in turn spurred them on to find out for themselves which of various competing explanations are correct, where the truth lies, so that more reading led to even more empirical research, which in turn would have to be published, that is, made public, via printing, for the purposes of testing and confirmation. And publication encouraged the formation of a scientific republic of letters, a typographically mediated virtual community.


Eisenstein notes that during the first century following Gutenberg, printed books gave Copernicus access to centuries of recorded observations of the movements of celestial objects, access not easily available to his predecessors. What is remarkable to consider is that the telescope was not invented in his lifetime, that the Polish astronomer arrived at his heliocentric view based only on what could be observed by the naked eye, by gazing up at the heavens, and down at the printed page. The typographic revolution that began in the 15th century was the necessary technological precondition for the Copernican revolution of the 16th century.  The telescope as a tool to extend vision beyond its natural capabilities had not yet been invented, and was not required, although soon after its introduction Galileo was able to confirm the theory that Copernicus had put forth a century earlier.

In the restricted literate culture of medieval Europe, the idea took hold that there are two books to be studied in an effort to discern the divine will and mind: the book of scripture and the book of nature. Both books were seen as sources of knowledge that can be unlocked by a process of reading and interpretation. It was grammar, the ancient study of language and one third of the trivium, the foundational curriculum of the medieval university, that became the basis of modern science, and not dialectic or logic, that is, pure thinking, which is the source of the philosophic tradition, as Marshall McLuhan noted in The Classical Trivium. The medieval schoolmen of course placed scripture in the primary position, whereas modern science situates truth in the book of nature alone.

The publication of Francis Bacon's Novum Organum in 1620 first formalized the separation of science from philosophy within print culture, but the divorce was finalized during the 19th century, coinciding with the industrial revolution, as researchers became known as scientists rather than natural philosophers. In place of the alliance with philosophy, science came to be associated with technology; before this time, technology and engineering, often referred to as mechanics, represented an entirely different line of inquiry, utterly practical, often intuitive rather than systematic. Mechanics was part of the world of work rather than that of action, to use the terms Arendt introduced in The Human Condition, which is to say that it was seen as the work of the hand rather than the mind. By the end of the 19th century, scientific discovery emerged as the main source of major technological breakthroughs, rather than innovation springing fully formed from the tinkering of inventors, and it became necessary to distinguish between applied science and theoretical science, the latter nonetheless still tied to the world of appearances.

Today, the acronym STEM, which stands for science, technology, engineering, and mathematics, has become a major buzzword in education, a major emphasis in particular for higher education, and a major concern in regards to economic competitiveness. We might well take note of how recent this combination of fields and disciplines really is, insofar as mathematics represents pure logic and highly abstract forms of thought, and science once was a purely philosophical enterprise, both aspects of the life of the mind. Technology and engineering, on the other hand, for most of our history took the form of arts and crafts, part of the world of appearances.

The convergence of science and technology also had much to do with scientists' increasing reliance on scientific instruments for their investigations, a trend increasingly prevalent following the introduction of both the telescope and the microscope in the early 17th century, a trend even more apparent from the 19th century on. The laboratory is in fact another such instrument, a technology whose function is to provide precisely controlled conditions, beyond its role as a facility for the storage and use of other scientific instruments. Scientific instruments are media that extend our senses and allow us to see the world in new ways, therefore altering our experience of our environment, while the discoveries they lead to provide us with the means of altering our environments physically. And the laboratory is an instrument that provides us with a total environment, enclosed, controlled, isolated from the world to become in effect the world. It is a micro-environment where experimental changes can be made that anticipate changes that can be made to the macro-environment we regularly inhabit.

The split between science and philosophy can also be characterized as a division between the eye and the ear. Modern science, as intimately bound up in typography, is associated with visualism, the idea that seeing is believing, that truth is based on vision, that knowledge can be displayed visually as an organized set of facts, rather than as the product of ongoing dialogue and debate. McLuhan noted the importance of the fixed point of view as a by-product of training the eye to read, and Walter Ong studied the paradigm shift in education attributed to Peter Ramus, who introduced pedagogical methods we would today associate with textbooks, outlining, and the visual display of information. Philosophy has not been immune to this influence, but retains a connection to the oral-aural mode through the method of Socratic dialogue, and by way of an understanding of the history of ideas as an ongoing conversation. Arendt, in The Human Condition, explained action, the realm of words, as a social phenomenon, one based on dialogic exchanges of ideas and opinions, not a solitary matter of looking things up. And thinking, which she elevates above the scientific enterprise in The Life of the Mind, is mostly a matter of an inner dialogue, or monologue if you prefer, of hearing oneself think, of silent speech, and not of a mental form of writing out words or imaginary reading. We talk things out, to others and/or to ourselves.

Science, on the other hand, is all about visible representations, as words, numbers, illustrations, tables, graphs, charts, diagrams, etc. And it is the investigation of visible phenomena, or otherwise of phenomena that can be rendered visible through scientific instruments. Acoustic phenomena can only be dealt with scientifically by being turned into a visual measurement, either of numbers or of lines going up and down to depict sound waves. The same is true for the other senses; smell, taste, and touch can only be dealt with scientifically through visual representation. Science cannot deal with any sense other than sight on its own terms, but always requires an act of translation into visual form. Thus, Arendt notes that modern science, being so intimately bound up in the world of appearances, is often concerned with making the invisible visible:

That modern science, always hunting for manifestations of the invisible—atoms, molecules, particles, cells, genes—should have added to the world a spectacular, unprecedented quantity of new perceptible things is only seemingly paradoxical.

Arendt might well have noted the continuity between the modern activity of making the invisible visible as an act of translation, and the medieval alchemist's search for methods of achieving material transformation, the translation of one substance into another. She does note that the use of scientific instruments is a means of extending natural functions, paralleling McLuhan's characterization of media as extensions of body and biology:

In order to prove or disprove its hypotheses… and to discover what makes things work, it [modern science] began to imitate the working processes of nature. For that purpose it produced the countless and enormously complex implements with which to force the non-appearing to appear (if only as an instrument-reading in the laboratory), as that was the sole means the scientist had to persuade himself of its reality. Modern technology was born in the laboratory, but this was not because scientists wanted to produce appliances or change the world. No matter how far their theories leave common-sense experience and common-sense reasoning behind, they must finally come back to some form of it or lose all sense of realness in the object of their investigation.

Note here that our conception of reality, what lends something the aura of authenticity, as Walter Benjamin would put it, is dependent on the visual sense, on the phenomenon being translated into the world of appearances (the aura as opposed to the aural). It is no accident then that there is a close connection in biblical literature and the Hebrew language between the words for spirit and soul, and the words for invisible but audible phenomena such as wind and breath, breath in turn being the basis of speech (and this is not unique to Hebraic culture or vocabulary). It is at this point that Arendt resumes her commentary on the function of the controlled environment:

And this return is possible only via the man-made, artificial world of the laboratory, where that which does not appear of its own accord is forced to appear and to disclose itself. Technology, the "plumber's" work held in some contempt by the scientist, who sees practical applicability as a mere by-product of his own efforts, introduces scientific findings, made in "unparalleled insulation… from the demands of the laity and of everyday life," into the everyday world of appearances and renders them accessible to common-sense experience; but this is possible only because the scientists themselves are ultimately dependent on that experience.

We now reach the point in the text where the quote I began this essay with appears, as Arendt writes:

Seen from the perspective of the "real" world, the laboratory is the anticipation of a changed environment; and the cognitive processes using the human abilities of thinking and fabricating as means to their end are indeed the most refined modes of common-sense reasoning. The activity of knowing is no less related to our sense of reality and no less a world-building activity than the building of houses.

Again, for Arendt, both science and common sense are distinct in this way from the activity of pure thinking, which can provide a sorely needed critical function. But her insight into the function of the laboratory as an environment in which the invisible is made visible is important, for it helps us to understand that the laboratory is, in fact, what McLuhan referred to as a counter-environment or anti-environment.

In our everyday environment, the environment itself tends to be invisible, if not literally so, then functionally insofar as whatever fades into the background tends to fall out of our perceptual awareness or is otherwise ignored. Anything that becomes part of our routine falls into this category, becoming environmental, and therefore subliminal. And this includes our media, technology, and symbol systems, insofar as they are part of our everyday world. We do pay attention to them when they are brand new and unfamiliar, but once their novelty wears off they become part of the background, unless they malfunction or break down. In the absence of such conditions, we need an anti-environment to provide a contrast through which we can recognize the things we take for granted in our world, to provide a place to stand from which we can observe our situation from the outside in, from a relatively objective stance. We are, in effect, sleepwalkers in our everyday environment, and entering into an anti-environment is a way to wake us up, to enhance awareness and consciousness of our surroundings. This occurs, in a haphazard way, when we return home after spending time experiencing another culture, as for a brief time much of what was once routinized about our own culture suddenly seems strange and arbitrary to us. The effect wears off relatively quickly, however, although the after-effects of broadening our minds in this way can be significant.


The controlled environment of the laboratory helps to focus our attention on phenomena that are otherwise invisible to us, either because they are taken for granted, or because they require specialized instrumentation to be rendered visible. It is not just that such phenomena are brought into the world of appearances, however, but also that they are made into objects of concerted study, to be recorded, described, measured, experimented upon, etc.

McLuhan emphasized the role of art as an anti-environment. The art museum, for example, is a controlled environment, and the painting that we encounter there has the potential to make us see things we had never seen before, by which I mean not just objects depicted that are unfamiliar to us, but familiar objects depicted in unfamiliar ways. In this way, works of art are instruments that can help us to see the world in new and different ways, to use our senses and perceive anew. McLuhan believed that artists served as a kind of distant early warning system, borrowing cold war terminology to refer to their ability to anticipate changes occurring in the present that most others are not aware of. He was fond of the Ezra Pound quote that the artist is the antenna of the race, and Kurt Vonnegut expressed a similar sentiment in describing the writer as a canary in a coal mine. We may further consider the art museum or gallery or library as a controlled environment, a laboratory of sorts, and note the parallel in the idea of art as the anticipation of a changed environment.

There are other anti-environments as well. Houses of worship function in this way, often because they are based on earlier eras and different cultures, and otherwise are constructed to remove us from our everyday environment and help us to see the world in a different light. They are in some way dedicated to making the invisible world of the spirit visible to us through the use of sacred symbols and objects, even for religions whose concept of God is one that is entirely outside of the world of appearances. Sanctuaries might therefore be considered laboratories used for moral, ethical, and sacred discovery, experimentation, and development, and places where changed environments are also anticipated, in the form of spiritual enlightenment and the pursuit of social justice. This also suggests that the scientific laboratory might be viewed, in a certain sense, as a sacred space, along the lines that Mircea Eliade discusses in The Sacred and the Profane.

The school and the classroom are also anti-environments, or at least ought to be, as Neil Postman argued in Teaching as a Conserving Activity. Students are sequestered away from the everyday environment, into a controlled situation where the world they live in can be studied and understood, and phenomena that are taken for granted can be brought into conscious awareness. It is indeed a place where the invisible can be made visible. In this sense, the school and the classroom are laboratories for learning, although the metaphor can be problematic when it is used to imply that the school is only about the world of appearances, and that all that is needed is to let students discover that world for themselves. Exploration is indeed essential, and discovery is an important component of learning. But the school is also a place where we may engage in the critical activity of pure thinking, of critical reasoning, of dialogue and disputation.

The classroom is more than a laboratory, or at least it must become more than a laboratory, or the educational enterprise will be incomplete. The school ought to be an anti-environment, not only in regard to the everyday world of appearances and common sense, but also to that special world dominated by STEM, by science, technology, engineering, and math. We need the classroom to be an anti-environment for a world subject to a flood of entertainment and information; we need it to be a language-based anti-environment for a world increasingly overwhelmed by images and numbers. We need an anti-environment where words can take precedence, where reading and writing can be balanced by speech and conversation, where reason, thinking, and thinking about thinking can allow for critical evaluation of common sense and common science alike. Only then can schools be engaged in something more than just adjusting students to take their place in a changed and changing environment, integrating them within the technological system, as components of that system, as Jacques Ellul observed in The Technological Society. Only then can schools help students to change the environment itself, not just through scientific and technological innovation, but through the exercise of values other than the technological imperative of efficiency, to make things better, more human, more life-affirming.

The anti-environment that we so desperately need is what Hannah Arendt might well have called a laboratory of the mind.

-Lance Strate

7Jun/130

In the Age of Big Data, Should We Live in Awe of Machines?


In 1949, The New York Times asked Norbert Wiener, author of Cybernetics, to write an essay for the paper that expressed his ideas in simple form. For editorial and other reasons, Wiener’s essay never appeared and was lost. Recently, a draft of the never-published essay was found in the MIT archives. Written now 64 years ago, the essay remains deeply topical. The Times recently printed excerpts. Here is the first paragraph:

By this time the public is well aware that a new age of machines is upon us based on the computing machine, and not on the power machine. The tendency of these new machines is to replace human judgment on all levels but a fairly high one, rather than to replace human energy and power by machine energy and power. It is already clear that this new replacement will have a profound influence upon our lives, but it is not clear to the man of the street what this influence will be.

Wiener draws a core distinction between machines and computing machines, a distinction that is founded upon the ability of machines to mimic and replace not only human labor, but also human judgment. In 1949, when Wiener wrote, most Americans worried about automation replacing factory workers. What Wiener saw was a different danger: that intelligent machines could be created that would “replace human judgment on all levels but a fairly high one.”

Today, of course, Wiener’s prophecy is finally coming true. The IBM supercomputer Watson is being trained to make diagnoses with such accuracy, speed, and efficiency that it will largely replace the need for doctors to be trained in diagnostics.


Google is developing a self-driving car that will obviate the need for humans to judge how fast and near to others they will drive, just as GPS systems already render moot the human sense of direction. MOOCs are automating the process of education and grading so that fewer human decisions need to be made at every level. Facebook is automating the acquisition of friends, lawyers are employing computers to read and analyze documents, and on Wall Street computer trading is automating the buying and selling of stocks. Surveillance drones, of course, are being given increasing autonomy to sift through data and decide which persons to follow or investigate. Finally, in the scandal of the day, the National Security Agency is using computer algorithms to mine data about our phone calls looking for abnormalities and suspicious patterns that would suggest potential dangers. In all these cases, the turn to machines to supplement or even replace human judgment has a simple reason: Even if machines cannot think, they can be programmed to do traditionally human tasks in ways that are faster, more reliable, and less expensive than can be done by human beings. In ways big and small, human judgment is being replaced by computers and machines.

It is important to recognize that Wiener is not arguing that we will create artificial human beings. The claim is not that humans are simply fancy machines or that machines can become human. Rather, the point is that machines can be made to mimic human judgment with such precision and subtlety so that their judgments, while not human, are considered either equal to human judgment or even better. The result, Wiener writes, is that “Machines much more closely analogous to the human organism are well understood, and are now on the verge of being built. They will control entire industrial processes and will even make possible the factory substantially without employees.”

Wiener saw this new machine age as dangerous on at least two grounds. First, economically, the rise of machines carries the potential to upend basic structures of civilization. He writes:

These new machines have a great capacity for upsetting the present basis of industry, and of reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price. If we combine our machine-potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an industrial revolution of unmitigated cruelty.

The dangers Wiener sees from our increased reliance on computing machines are not limited to economic dislocation. The real threat that computing machines pose is that as we cede more and more power to machines in our daily lives, we will, he writes, gradually forfeit our freedom and independence:

[I]f we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.

In short, it is only a humanity which is capable of awe, which will also be capable of controlling the new potentials which we are opening for ourselves. We can be humble and live a good life with the aid of the machines, or we can be arrogant and die.

For Wiener, our eventual servitude to machines is both acceptable and a fait accompli, something we must learn to live with. If we insist on arrogantly maintaining our independence and freedom, we will die. I gather the point is not that machines will rise up and kill their creators, but rather that we ourselves will program our machines to eliminate, imprison, immobilize, or re-program those humans who refuse to comply with the paternalistic and well-meaning directives of the machine systems we create in order to provide ourselves with security and plenty.

Wiener counsels that instead of self-important resistance, we must learn to be in awe of our machines. Our machines will improve our lives. They will ensure better medical care, safer streets, more efficient production, better education, more reliable childcare, and more humane warfare. Machines offer the promise of a cybernetic civilization in which an entire human and natural world is regulated and driven towards a common good with super-human intelligence and calculative power. In the face of such utopian possibility, we must accept our new status as the lucky beneficiaries of the regulatory systems we have created and humble ourselves as beings meant to live well rather than to live free.


Recent revelations about the U.S. government’s use of powerful computers to mine and analyze enormous amounts of data collected via subpoenas from U.S. telecom companies are simply one example of the kind of tradeoff Wiener suggests we will and we should make. If I understand the conclusions of Glenn Greenwald’s typically excellent investigative reporting, the NSA uses computer algorithms to scan the totality of phone calls and internet traffic in and out of the United States. The NSA needs all of this data—all of our private data—in order to understand the normal patterns of telephony and web traffic and thus to notice, as well, those exceptional patterns of calling, chatting, and surfing. The civil libertarian challenges of such a program are clear: the construction of a database of normal behavior allows the government to attend to those whose activities are outside the norm. Those outliers can be terrorists or pedophiles; they may be Branch Davidians or members of Occupy Wall Street; they may be Heideggerians or Arendtians. Whoever they are, once those who exist and act in patterns outside the norm are identified, it is up to the government whether to act on that information and what to do with it. We are put in the position of having to trust our government to use that information wisely, with pitifully little oversight. Yet the temptation will always be there for the government to make use of private information once it has it.
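To make the mechanics of such pattern analysis concrete, here is a minimal sketch, in Python, of what flagging “exceptional patterns” in call metadata might look like in principle. Everything in it is an assumption made for illustration: the invented records, the use of contact counts as a behavioral profile, the z-score test, and the threshold. It is a toy, not a description of the NSA’s actual systems.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical call-metadata records: (caller, callee, duration in seconds).
# All names and numbers are invented for illustration.
calls = [
    ("alice", "bob", 120), ("alice", "carol", 95), ("bob", "carol", 30),
    ("dave", "x1", 5), ("dave", "x2", 4), ("dave", "x3", 6),
    ("dave", "x4", 5), ("dave", "x5", 7),
]

# Step 1: profile "normal" behavior; here, simply the number of
# distinct contacts per caller.
contacts = defaultdict(set)
for caller, callee, _duration in calls:
    contacts[caller].add(callee)
counts = {caller: len(c) for caller, c in contacts.items()}

# Step 2: flag callers who deviate sharply from the population
# baseline, using a crude z-score test.
values = list(counts.values())
mu, sigma = mean(values), stdev(values)
for caller, n in counts.items():
    z = (n - mu) / sigma if sigma else 0.0
    if abs(z) > 1.0:  # threshold chosen arbitrarily for the demo
        print(f"flagged {caller}: {n} distinct contacts (z = {z:.2f})")
```

Even this toy makes the civil libertarian point vivid: to compute a baseline of “normal,” the system must first ingest everyone’s records, not merely those of suspects.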

In the face of the rise of machines and the present NSA action, we have, Wiener writes, a choice. We can arrogantly thump our chests and insist that our privacy be protected from snooping machines and governmental bureaucracies, or we can sit back and stare in awe of the power of these machines to keep us safe from terrorists and criminals at such a slight cost to our happiness and quality of life. We already allow the healthcare bureaucracy to know the most intimate details of our lives, the banking system to penetrate into the most minute details of our finances, and the advertising system to know the most embarrassing details of our surfing and purchasing histories; why, Wiener pushes us to ask, should we shy away from allowing the security apparatus to make use of our communications?

If there is a convincing answer to this hypothetical question, and if we are to decide to resist the humbling loss of human freedom and human dignity that Wiener welcomes, we need to articulate the dangers Wiener recognizes and then rationalizes in a much more provocative and profound way. Towards that end, there are few books more worth reading than Hannah Arendt’s The Human Condition. Wiener is not mentioned in Hannah Arendt’s 1958 book; and yet, her concern and her theme, if not her response, speak directly to the threat that cybernetic scientific and computational thinking poses for the future of human beings.

In her prologue to The Human Condition, Arendt writes that two threatening events define the modern age. The first was the launch of Sputnik. The threat of Sputnik had nothing to do with the cold war or the Russian lead in the race for space. Rather, Sputnik signifies for Arendt the fact that we humans are finally capable of realizing the age-old dream of altering the basic conditions of human life, above all that we are earth-bound creatures subject to fate. What Sputnik meant is that we were then, in the 1950s, for the first time in a position to humanly control and transform our human condition, and that we were doing so thoughtlessly, without politically and thoughtfully considering what that would mean. I have written much about this elsewhere and given a TEDx talk about it here.

The second “equally decisive” and “no less threatening event” is “the advent of automation.”  In the 1950s, automation of factories threatened to “liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” Laboring, Arendt writes, has for thousands of years been one essential part of what it means to be a human being. Along with work and action, labor comprises those activities engaged in by all humans. To be human has meant to labor and support oneself; to be human has for thousands of years meant that we produce things—houses, tables, stories, and artworks—that provide a common humanly built world in which we live together; and to be human has meant to have the ability to act and speak in such a way as to surprise others so that your action will be seen and talked about and reacted to with a force that will alter the course and direction of the human world. Together these activities comprise the dignity of man, our freedom to build, influence, and change our given world—within limits.

But all three of these activities, what Arendt calls the vita activa, are now threatened, if not with extinction, then at least with increasing rarity and public irrelevance. As automation replaces human laborers, the human condition of laboring for our necessary preservation is diminished, and we come to rely more and more on the altruism of a state enriched by the productivity of machine labor. Laboring, part of what it has meant to be human for thousands of years, threatens to become ever less necessary and to occupy an ever smaller demand on our existence. As the things we make, the houses we live in, and the art we produce become ever more consumable, fleeting, and temporary, the common world in which we live comes to seem ever more fluid; we move houses and abandon friends with greater ease than previous ages would dispose of a pair of pants. Our collective focus turns toward our present material needs rather than towards the building of common spiritual and ethical worlds. Finally, as human action is seen as the statistically predictable and understandable outcome of human behavior rather than the surprising and free action of human beings, our human dignity is sacrificed to our rational control and steering of life to secure safety and plenty. The threat to labor, work, and action that Arendt engages emerges from the rise of science—what she calls earth and world alienation—and the insistence that all things, including human beings, are comprehensible and predictable by scientific laws.

Arendt’s response to these collective threats to the human condition is that we must “think what we are doing.” She writes at the end of her prologue:

What I propose in the following is a reconsideration of the human condition from the vantage point of our newest experiences and our most recent fears. This, obviously, is a matter of thought, and thoughtlessness—the heedless recklessness or hopeless confusion or complacent repetition of “truths” which have become trivial and empty—seems to me among the outstanding characteristics of our time. What I propose, therefore, is very simple: it is nothing more than to think what we are doing.

Years before Arendt traveled to Jerusalem and witnessed what she saw as the thoughtlessness of Adolf Eichmann, she saw the impending thoughtlessness of our age as the great danger of our time. Only by thinking what we are doing—and in thinking also resisting the behaviorism and materialism of our calculating time—can we humans hope to resist the impulse to be in awe of our machines and, instead, retain our reverence for the human being that is the foundation of our humanity. Thinking—that dark, irrational, and deeply human activity—is the one meaningful response Arendt finds to both the thoughtlessness of scientific behaviorism and the thoughtlessness of the bureaucratic administration of mass murder.


There will be great examples of chest-thumping about the loss of privacy and the violation of constitutional liberties over the next few days. This is as it should be. There will also be sober warnings about the need to secure ourselves from terrorists and enemies. This is also necessary. What is needed beyond both these predictable postures, however, is serious thinking about the tradeoffs demanded by our need for reliable and affordable security, along with honest discussion of what we today mean by human freedom. To begin such a discussion, it is well worth revisiting Norbert Wiener’s essay. It is your weekend read.

If you are interested in pursuing Arendt’s own response to the crisis of humanism, you can find a series of essays and public lectures on that theme here.

-RB

24May/130

Looking Beyond A Digital Harvard


Graduation is upon us. Saturday I will be in full academic regalia mixing with the motley colors of my colleagues as we send forth yet another class of graduates onto the rest of their lives. I advised three senior projects this year. One student is headed to East Jerusalem, where she will be a fellow at the Bard Honors College at Al Quds University. Another is staying at Bard where he will co-direct Bard’s new Center for the Study of the Drone. The third is returning to the United Kingdom where he will be the fourth person in a new technology-driven public relations start-up. A former student just completed Bard’s Masters in Teaching and will begin a career as a high school teacher. Another recent grad is returning from Pakistan to New York where she will earn a Masters in interactive technology at the Tisch School for the Arts at NYU. These are just a few of the extraordinary opportunities that young graduates are finding or making for themselves.


The absolute best part of being a college professor is the immersion in optimism from being around exceptional young people. Students remind us that no matter how badly we screw things up, they keep on dreaming and working to reinvent the world as a better and more meaningful place. I sometimes wonder how people who don’t have children or don’t teach can possibly keep their sanity. I count my lucky stars to be able to live and work around such amazing students.

I write this at a time, however, in which the future of physical colleges where students and professors congregate in small classrooms to read and think together is at a crossroads. In The New Yorker, Nathan Heller has perhaps the most illuminating essay on MOOCs yet to be written. His focus is on Harvard University, which brings a different perspective from that of most such articles. Heller asks how MOOCs will change not only our wholesale educational delivery at state and community colleges across the country, but also how the rush to transfer physical courses into online courses will transform elite education as well. He writes: “Elite educators used to be obsessed with ‘faculty-to-student ratio’; now schools like Harvard aim to be broadcast networks.”

By focusing on Harvard, Heller shifts the traditional discourse surrounding MOOCs, one that usually concentrates on economics. When San Jose State or the California State University system adopts MOOCs, the rationale is typically said to be savings for an overburdened state budget. While many studies show that students actually do better in electronic online courses than they do in physical lectures, a combination of cynicism and hope leads professors to be suspicious of such claims. The replacement of faculty by machines is thought to be a coldly economic calculation.

But at Harvard, which is wealthier than most oil sheikdoms, the warp-speed push into online education is not simply driven by money (although there is a desire to corner a market in the future). For many of the professors Heller interviews in his essay, the attraction of MOOCs is that they will actually improve the elite educational experience.

Take for example Gregory Nagy, professor of classics, and one of the most popular professors at Harvard. Nagy is one of Harvard’s elite professors flinging himself headlong into the world of online education. He is dividing his usual hour-long lectures into short videos of about six minutes each—people get distracted watching lectures on their iPhones at home or on the bus. He imagines “each segment as a short film” and says that “crumbling up the course like this forced him to study his own teaching more than he had at the lectern.” For Nagy, the online experience is actually forcing him to be clearer; it allows for spot-checking the participants’ comprehension of the lecture through repeated multiple-choice quizzes that must be passed before students can continue on to the next lecture. Dividing the course into digestible bits that can be swallowed whole in small meals throughout the day is, Nagy argues, not cynical, but progress. “Our ambition is actually to make the Harvard experience now closer to the MOOC experience.”
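As a rough illustration of the gated structure Nagy describes, short segments each unlocked only by a passed multiple-choice quiz, here is a minimal Python sketch. The class names, fields, and quiz logic are my own inventions for illustration, not the actual code of Harvard, edX, or any MOOC platform.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One short video segment plus the quiz that gates the next one."""
    title: str
    quiz_answer: str      # the correct multiple-choice option
    passed: bool = False

@dataclass
class Course:
    segments: list = field(default_factory=list)
    position: int = 0     # index of the segment the student may watch

    def submit_answer(self, choice: str) -> bool:
        """Advance to the next segment only on a correct answer."""
        current = self.segments[self.position]
        if choice == current.quiz_answer:
            current.passed = True
            self.position = min(self.position + 1, len(self.segments) - 1)
            return True
        return False      # the student must retry before continuing

course = Course([
    Segment("The hero in Homer", quiz_answer="b"),
    Segment("Achilles and kleos", quiz_answer="a"),
])
course.submit_answer("b")   # correct: unlocks the second segment
print(course.position)      # -> 1
```

The design point worth noticing is that the quiz is a gate, not merely an assessment: a student cannot reach the second segment without answering the first correctly, which is what lets the platform spot-check comprehension at every step.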


It is worth noting that the Harvard experience of Nagy’s real-world class is not actually very personal or physical. Nagy’s class is called “Concepts of the Hero in Classical Greek Civilization.” Students call it “Heroes for Zeroes” because it has a “soft grading curve” and it typically attracts hundreds of students. When you strip away Nagy’s undeniable brilliance, his physical course is a massive lecture course constrained only by the size of Harvard’s physical plant. Those of us who have been on both sides of the lectern know that such lectures can be entertaining and informative. But we also know that students are anonymous, often sleepy, rarely prepared, and none too engaged with their professors. Not much learning goes on in such lectures that can’t be simply replicated on a TV screen. And in this context, Nagy is correct. When one compares a large lecture course with a well-designed online course, it may very well be that the online course is a superior educational venture—even at Harvard.

As I have written here before, the value of MOOCs is to finally put the college lecture course out of its misery. There is no reason to be nostalgic for the lecture course. It was never a very good idea. Aside from a few exceptional lecturers—in my world I can think of the reputations of Hegel, his student Eduard Gans, Martin Heidegger, and, of course, Hannah Arendt—college lectures are largely an economical way to allow masses of students to acquire basic introductory knowledge in a field. If the masses are now more massive and the lectures more accessible, I’ll accept that as progress.

The real problem MOOCs pose is not that they threaten to replace lecture courses, but that they intensify our already considerable confusion regarding what education is. Elite educational institutions, as Heller writes, no longer compete only against one another. He talks with Gary King, University Professor of Quantitative Social Science, and Drew Gilpin Faust, Harvard’s President, who see Harvard’s biggest threat as not Yale or Amherst but the University of Phoenix, the for-profit university. The future of online education, King argues, will be driven by understanding education as a “data-gathering resource.” Here is his argument:

Traditionally, it has been hard to assess and compare how well different teaching approaches work. King explained that this could change online through “large-scale measurement and analysis,” often known as big data. He said, “We could do this at Harvard. We could not only innovate in our own classes—which is what we are doing—but we could instrument every student, every classroom, every administrative office, every house, every recreational activity, every security officer, everything. We could basically get the information about everything that goes on here, and we could use it for the students.” A giant, detailed data pool of all activities on the campus of a school like Harvard, he said, might help students resolve a lot of ambiguities in college life.

At stake in the battle over MOOCs is not merely a few faculty jobs. It is a question of how we educate our young people. Will they be, as they increasingly are, seen as bits of data to be analyzed, explained, and guided by algorithmic regularities, or are they human beings learning to be at home in a world of ambiguity?

Most of the opposition to MOOCs continues to be economically tinged. But the real danger MOOCs pose is their threat to human dignity. Just imagine that after journalists and professors and teachers, the next industry to be replaced by machines is babysitters. The advantages are obvious. Robotic babysitters are more reliable than 18-year-olds, less prone to be distracted by text messages or Twitter. They won’t be exhausted and will have access to the highest quality first aid databases. Of course they will eventually also be much cheaper. But do we want our children raised by machines?

That Harvard is so committed to a digital future is a sign of things to come. The behemoths of elite universities have their sights set on educating the masses and then importing that technology back into the ivy quadrangles to study their own students and create the perfectly digitized educational curriculum.

And yet it is unlikely that Harvard will ever abandon personalized education. Professors like Peter J. Burgard, who teaches German at Harvard, will remain, at least for the near future.

Burgard insists that teaching requires “sitting in a classroom with students, and preferably with few enough students that you can have real interaction, and really digging into and exploring a knotty topic—a difficult image, a fascinating text, whatever. That’s what’s exciting. There’s a chemistry to it that simply cannot be replicated online.”


Burgard is right. And at Harvard, with its endowment, professors will continue to teach intimate and passionate seminars. Such personalized and intense education is what small liberal arts colleges such as Bard offer, without the lectures and with a fraction of the administrative overhead that weighs down larger universities. But at less privileged universities around the land, courses like Burgard’s will likely become ever more rare. Students who want such an experience will look elsewhere. And here I return to my optimism around graduation.

Dale Stephens of UnCollege is experimenting with educational alternatives to college that foster learning and thinking in small groups outside the college environment. The Saxifrage School in Pittsburgh and the Brooklyn Institute for Social Research are offering college courses at a fraction of the usual cost, betting that students will happily use public libraries and local gyms in return for a cheaper and still inspiring educational experience. I tell my students who want to go to graduate school that the teaching jobs of the future may not be at universities and likely won’t involve tenure. I don’t know where the students of tomorrow will go to learn and to think, but I know that they will go somewhere. And I am sure some of my students will be teaching them. And that gives me hope.

As graduates around the country spring forth, take the time to read Nathan Heller’s essay, Laptop U. It is your weekend read.

You can also read our past posts on education and on the challenge of MOOCs here.

-RB

1May/130

The Re-Germanization of “Hannah Arendt”


I must confess, I am no Roger Ebert. I don’t write movie reviews for a living. I love movies, and watch lots of them, and often have strong opinions, like most of us. More than that I cannot claim.

But I have been deeply engaged in the life and thought of Hannah Arendt, having recently finished a book on her. And one thing I can tell you is that at her core she was Jewish and also very American. The problem of Jewish identity was something she wrestled with her whole life, and in a very advanced way. She looked for data everywhere, even among Nazis, and she pulled ideas from everywhere, seeking to invent something new. By identity, I don’t mean just personal identity. I mean the collective identity upon which personal identities stand, and the politics that surround them. The problem for her was how an ethnic identity could be anchored in political institutions, and fostered, and protected, and yet avoid the close-mindedness and intellectual rigidity that seem inherent in nationalism. Thus too much is constantly made of her apparent "non-love" for the Jewish people, something she confessed in a letter to Gershom Scholem after the publication of Eichmann in Jerusalem, and which is also a key scene in this movie. Against the backdrop of her own life, however, the idea that only friends mattered sounded just a bit ironic. Arendt was not exactly a "cultivator of her garden." She spent all her time wrapped up in national and international and cultural politics. Jewish politics was a big part of her life.

So as a fan of both movies and Arendt, you can imagine how much I was looking forward to this movie. Unfortunately, I came out deeply disappointed. It’s not simply that this portrait of Arendt is frozen in amber, and celebrates the misunderstandings of 50 years ago, when Eichmann in Jerusalem had just come out. It’s not simply that it ignores the last 15 years of modern scholarship, which re-excavated her Jewishness in order to make sense of the many things in her writings and actions that otherwise don’t. It’s that it turns her story inside out. She becomes a German woman saving the Jews.


I first saw this film in Germany, and I can testify that Germans love the story when told this way. It also seems a story the director loves to tell. After seeing Arendt twice (once in Munich and once in Tel Aviv), I remembered von Trotta’s 2003 movie Rosenstrasse, and was stunned to realize it’s pretty much the same story: German women saving Jewish men. Rosenstrasse, an interesting footnote in Holocaust and legal history, ends in a triumphal march with the women bringing their men home, seeming as if they’d risked life and limb. In Hannah Arendt, a similar scene is her big speech at the New School, where the evil administrators (all very Jewish-looking) are shamed into submission by her brilliance, while young students (all pretty and Aryan-looking) applaud enthusiastically. Both are archetypal Hollywood “the world is good again” scenes. And both are fundamental distortions of reality, German fantasies being taken for history.

Perhaps that is the key. Perhaps in this age of Tarantino and Spielberg you are free to do what you like. The projection of historical fantasies is now a subgenre. So shouldn’t the Germans be free to enjoy their fantasies about the Jews, about Israel, about German-Jewish relations, about the meaning of German-Jewish reconciliation, you name it? Sure. But, as I’m sure you have noticed, along with passionate fans, these sorts of films always attract large measures of stinging criticism from (a) scholars peeved at gross inaccuracies, and (b) people who hate this fantasy and want a different one. Since for this film I fall into both groups, you should treat my reactions accordingly.

Hollywood conventions may be most visible in the “right with the world” scenes, but they appear throughout the film. The most Hollywood thing about it is that this is a film lionizing thinkers that doesn’t have any thinking in it. We are supposed to know from the camera and the music and the reaction shots that they are having big thoughts and that everyone is awed by them. But if you actually listen to what is supposed to be passing as big thought, Oy. Hannah Arendt and Mary McCarthy: frivolous advice about men. Martin Heidegger, who hovers over the movie like a Black Forest deity, appears via flashbacks, pronouncing things like “We think because we are thinking beings.” Young Hannah Arendt looks up, clearly smitten by such banalities. Under Heidegger’s cloud, Hannah Arendt is not only Germanized, but turned into a sentimental fool. Which is the last description anyone has ever reached for who had ever met her.

As for the Eichmann trial that frames and forms the core of the film, all I can say is don’t get me started. Arendt’s New Yorker articles and the book that came out of them were the source of endless misunderstanding, both at the time and still today. This movie not only adds to it, it builds on it. For von Trotta, “the banality of evil” is a way of normalizing the crimes of the Holocaust: anyone could have done them. Eichmann is no antisemite. Banality is thus the deepest insight, the final dismissal of charges. And it’s the Jews who miss it, and the German-speaking woman who has to tell them, for their own good, to give up on this grudge business and, with it, to recognize their own guilt in the destruction of the Jews.

So far, so normal. Every day Eichmann in Jerusalem is being misinterpreted like this in classrooms around the world. But there is one thing I can’t forgive, which gives the film its final conclusion, and that is the completely fabricated scene at the end where she is threatened by the Mossad. It is nonsensical for several reasons, but worse is how it is composed. It is a “walking my lonely road” scene that chimes with the very first scene of the movie, when Eichmann is walking along in Argentina just before he is grabbed. There, the Mossad men overpower him completely; he is helpless and held up to scorn. Here, she stands up to them and tells them off; they slink away, grumbling, impotent before the truth. The arc is completed. The Israelis, wrong from the beginning, have finally been cowed by The Truth About How Wrong They Were, by the German-speaking Athena. And for good measure she throws in a sneering crack about how the Jewish nation must have too much money if it sent four of them.

Tarantino never made up anything more inverted.


-Natan Sznaider

**Natan Sznaider is a Professor at the Academic College of Tel Aviv-Yaffo. Among his several books are Jewish Memory and the Cosmopolitan Order: Hannah Arendt and the Jewish Condition and two books on the sociology of the Holocaust. He was born and grew up in Germany, and is a regular commentator in the German press. He lives in Tel Aviv.


27Feb/120

The Sandstorm of Repression

"If this practice [of totalitarianism] is compared with that of tyranny, it seems as if a way had been found to set the desert itself in motion, to let loose a sand storm that could cover all parts of the inhabited earth. The conditions under which we exist today in the field of politics are indeed threatened by these devastating sand storms."

-Hannah Arendt, The Origins of Totalitarianism

Arendt's concluding image in The Origins of Totalitarianism leaves us with a bleak sense of how the mass of lonely and isolated individuals in modern society – the desert – can readily be swept into the "sandstorm" of totalitarianism.

The book sketches the forces that drive this new form of frenetic political motion, as well as the traditional resources that prevented it earlier: the loneliness of the frightened person as opposed to the solitude of the reflective individual; the community of discussion instead of the techniques of "mass movements."

Of the specific principles of totalitarianism that Arendt raises, few are as intriguing as her recasting of what is now termed polycracy [Neumann/Hüttenberger/Broszat], the notion that at every level individuals in a totalitarian society found themselves within overlapping systems of bureaucracy and power, unsure which level was of the most import. "All levels of the administrative machine in the Third Reich," she notes as a principal example, "were subject to a curious duplication of offices, with a fantastic thoroughness." At every level of society, individuals were unsure of how to calibrate actions and statements to their context for self-promotion, defense, or simply to be left alone, since there were often two or more overlapping systems of organization defining their position. The average worker, for instance, might be unsure what factor was decisive for their fate: the role of their actual boss, the party connections of colleagues, the intrigues of the secret police, the family connections of acquaintances, or even their role in apparently secondary organizations such as an automobile association. Ultimately, "regular" politics could never pertain, while the politics of the "movement" always had some traction.

The result in Arendt's telling was often a necessary striving beyond prescribed roles to express alignment with the ideals of the party and in the quest for advantage or mere safety. Unlike aspects of her later argument for the "banality of evil," here simply "fitting in" was not enough. In this manner, later scholars have argued, the decisive feature of totalitarian societies was neither the passive "structure" of organization nor the simple "intent" of members, but rather a new middle term created from the unease of masses of individuals.

The "conditions under which we exist today" no doubt still include the original possibilities of totalitarianism, but Arendt would also have us ask what has changed, what new aspects of society facilitate or act as windbreaks, so to speak, for such "sandstorms." Earlier windbreaks passively functioned, as Arendt suggests for even  relatively dystopian societies such as tyrannies, through the exercise of clear lines of authority, the existence of an active private sphere of life, and the possible formation of a discursive community of individuals, even if only in small groups.

At first glance, the new social technologies credited in recent uprisings and protests suggest a strengthening of these windbreaks by giving voice to critiques of incompetent and unjust administration, the solitary opposition of conscience, and, perhaps most decisively, the possibility of organizing coordinated discussion and action on wide scales. Yet apace with these changes there are also darker transformations, allowed by vast increases in computing memory and power, that make tracking information and people on social media progressively easier. In repressive societies this is already occurring through the aggregation of past communications, the profiling of individuals, and guilt by association with members of one's social orbit.
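A minimal sketch can show how readily "guilt by association" becomes an algorithm once communications are aggregated into a social graph. The graph, the names, the decay factor, and the scoring rule below are all invented for illustration; real profiling systems are surely more elaborate, but the principle is the same.

```python
# Hypothetical contact graph; every name and edge is invented.
graph = {
    "anna":   {"boris", "clara"},
    "boris":  {"anna", "dmitri"},
    "clara":  {"anna"},
    "dmitri": {"boris", "erik"},
    "erik":   {"dmitri"},
}

flagged = {"erik"}  # individuals already on some watchlist

def association_scores(graph, flagged, rounds=2, decay=0.5):
    """Propagate suspicion outward: each round, a person inherits a
    decayed fraction of the highest score among their contacts."""
    scores = {person: (1.0 if person in flagged else 0.0) for person in graph}
    for _ in range(rounds):
        updated = dict(scores)
        for person, contacts in graph.items():
            inherited = max((scores[c] for c in contacts), default=0.0) * decay
            updated[person] = max(updated[person], inherited)
        scores = updated
    return scores

print(association_scores(graph, flagged))
# -> {'anna': 0.0, 'boris': 0.25, 'clara': 0.0, 'dmitri': 0.5, 'erik': 1.0}
```

Nothing in the score reflects anyone's conduct; mere adjacency to a flagged individual is enough, and with more rounds the suspicion spreads to anna and clara as well.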

By definition secret, statistical, and yet deeply connected with personal preference and affiliations, the aggregation of data concerning the individual in the context of social networking presents a real potential for abuse by any number of outside powers. Although most apparent in directly repressive societies, this transformation has the potential for abuse in numerous forms of governance. For if not managed properly, the laws and infrastructure of the internet could increasingly give rise to the sense – and reality – that one's words and actions could be interpreted in any number of contexts and by many forms of institutions. The malign influence of polycracy on individual decision, previously a signal feature of totalitarian regimes, might start to appear in any political system whatsoever.

As data about the individual becomes potentially available to a spectrum of interests and parties, ranging from credit agencies and divorce lawyers to political opponents and work rivals, it is easy to imagine individuals (and software) attempting to "mask" or redefine preferences, interests, and affiliations.

The result would be a pervasive self-censorship, but also – in light of this confusing secondary power – a corollary attempt to act out in the manner that seems most open to reward. The ability of individuals to first believe in the honesty of their own choices and speech, and with this the honesty of others, could be profoundly altered, as would the nature of civil society.

Precisely by connecting the private and public spheres in new ways, in groups of friends and in political action, social media has the possibility to atomize anew. Debates over the role of law and infrastructure in shaping these contexts thus take on a new relevance for preserving the basis of the windbreaks of civil society, ranging from recent initiatives such as "do not track," to the separation of terrorist investigations from other forms of surveillance (as occurs in Germany), to larger innovations such as the E.U.'s "right to data privacy" or the planned "right to be forgotten." Although perhaps integral to preventing the "sandstorms" Arendt warned of, these innovations may also prove critical to amplifying the positive features of individual conscience and civil society allowed by social media.

-Greg Moynahan