Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Relevant to the most recent Quote of the Week on the danger of intellectuals is Jan Mieszkowski's review of historian Christian Ingrao's recent book Believe and Destroy: Intellectuals in the SS War Machine. Ingrao's book employs a particular qualitative methodology to explore the role and motives of intellectuals within the Nazi elite - specifically of lawyers, historians, philosophers, and similarly trained professionals who joined the Sicherheitsdienst or SD - the intelligence arm of the SS. According to Mieszkowski, "Believe and Destroy focuses on 'a group of eighty university graduates: economists, lawyers, linguists, philosophers, historians and geographers.' Drawing on a range of archival sources, Ingrao follows their careers from school and university through their participation in the SD and subsequent efforts to defend themselves in postwar trials. (A dozen members of the group were hanged; most of the others received prison sentences.) He is particularly concerned with the transition from the 1930s, when the SD evolved into an immense surveillance and social science research organization operating inside Germany, to the invasion of the Soviet Union in 1941, when these men took the first steps toward putting their theories about the Germanification of foreign lands into practice." Read Roger Berkowitz's further account of Mieszkowski's essay here.
Mikhail Shishkin discusses the way that Russian governance, from the absolutist czars, to the communists, and into today's nominal democracy, has felt that it needed to make a political hero out of Pushkin: "From the times of Pushkin and Nicholas I, it was no longer enough for the earthly czar to be anointed by God; the ruler had also to be sanctified by Russian literature, the second sacred Russian power. That is why Stalin's regime was so concerned with perpetuating the memory of the classic Russian writer. If Orthodox czars based their right to own the bodies and souls of their subjects on heavenly law, the Communists legitimized the dictatorship of the party with 'scientific' theses such as, 'The teachings of Marx are omnipotent because they are the truth.' But the real sacred figures who could sanctify the state were Pushkin and Gogol - the poets and the writers. When the people followed the Communists at the beginning of the twentieth century, they gave up Christ, but they found it impossible, as the revolutionary poets exhorted them, 'to throw Pushkin overboard the steamboat of modernity.' They could not raise their hand against that which is most sacred for the Russian soul. So this prison state built monuments to Pushkin everywhere, trying to seem righteous in the people's eyes."
In the wake of the recent system-wide hunger strike in the California prison system, Andrea Jones considers the role of the free press in connecting prisoners to the outside world. "There are more prisoners than ever, but the emotional distance we have from prisons is also greater than ever," suggests Sarita Alami, a historian at work on a project that employs digital methods like topic modeling and text mining to identify patterns in archived prison periodicals. "Analyzing the volume and content of inmate journalism from 1912 through 1980 - what she calls the 'golden years' - Alami studies intervals of collective unrest and activism in prisons. She has determined that the Great Depression, the early 1950s, and the late 1960s through early 1970s - time periods characterized by widespread riots, lawsuits, and work stoppages - corresponded to upswings in prison journalism, which she posits as a key facilitator of resistance and reform." But in recent decades, "as prison populations ballooned..., inmate-produced media did not experience a parallel upsurge. According to Alami, the penal press was suppressed twofold: by the rise of the prison-industrial complex, and by broad shifts in media consumption. ... the ascension of the Internet, while expanding the scope of information on the outside, served to cut off prisoners from the mediated public sphere of the modern world." She goes on to conclude, convincingly, that prisoners are often punished, particularly with solitary confinement, for trying to write and share their experience of the world.
Discussing her recent essay in Harper's, writer Rebecca Makkai talks about her experience of her grandfather, whom she knew as a yoga instructor living in Hawaii but who was also the principal author of Hungary's Second Jewish Law, which passed in 1939. At one point, she strikes a particularly Arendtian note: "There's also the fact that it's just very difficult, psychologically, to reconcile the face of a real person with one of the darkest moments of the twentieth century. It's not the same as looking at someone who's personally violent, likely to reach out and hit you. This guy is chopping up papaya on his balcony, telling jokes, and I think we have an instinct to forgive, to see just the best in that person, to see him at just that moment. (The irony being that this is what he and his colleagues failed to do - to see humans in front of them.)"
Ruth Franklin, writing about Shirley Jackson's 1948 horror short story "The Lottery," draws attention to a few of the letters that the New Yorker received after the story's publication in its pages: "There were indeed some cancelled subscriptions, as well as a fair share of name-calling - Jackson was said to be 'perverted' and 'gratuitously disagreeable,' with 'incredibly bad taste.' But the vast majority of the letter writers were not angry or abusive but simply confused. More than anything else, they wanted to understand what the story meant."
July 22-July 31, 2013
The Hannah Arendt Center 10 DAY/100 MEMBER CAMPAIGN
October 3-4, 2013
The sixth annual fall conference, "Failing Fast: The Educated Citizen in Crisis"
Olin Hall, Bard College
This week on the blog, Jeff Jurgens considers how Hannah Arendt's Jewish identity contributed to her cosmopolitanism. Roger Berkowitz thinks through Arendt's feelings about intellectuals. Your weekend read explores the role and motives of intellectuals within the Nazi elite. And this week we kicked off a short membership drive; Roger explains what's next for the Center, and why you should consider joining us, here.
“WHO'S AFRAID OF THE INTELLECTUALS?” That is the opening sentence of Jan Mieszkowski's excellent review of Belgian historian Christian Ingrao's recent book Believe and Destroy: Intellectuals in the SS War Machine. I have not yet read the book. But Mieszkowski’s review raises important questions about the role of intellectuals in the systematic administration of evil. They are questions about the danger intellectuals pose in government that, as I wrote earlier this week, were often at the center of Arendt’s concern.
Ingrao’s book employs a particular qualitative methodology to explore the question of the role and motives of intellectuals within the Nazi elite—specifically of lawyers, historians, philosophers, and similarly trained professionals who joined the Sicherheitsdienst or SD—the intelligence arm of the Schutzstaffel or SS, the paramilitary group that was responsible for many of the crimes against humanity during the Holocaust. According to Mieszkowski,
Believe and Destroy focuses on “a group of eighty university graduates: economists, lawyers, linguists, philosophers, historians and geographers.” Drawing on a range of archival sources, Ingrao follows their careers from school and university through their participation in the SD and subsequent efforts to defend themselves in postwar trials. (A dozen members of the group were hanged; most of the others received prison sentences.) He is particularly concerned with the transition from the 1930s, when the SD evolved into an immense surveillance and social science research organization operating inside Germany, to the invasion of the Soviet Union in 1941, when these men took the first steps toward putting their theories about the Germanification of foreign lands into practice.
Eichmann himself—while not an educated professional—worked in the intelligence area of the SD. His role too transformed itself in the late 1930s under the pressures of the Nazi setbacks in the East. His first job at the SD was, as Arendt writes, in the “information department” where he had to “file all information concerning Freemasonry (which in the early Nazi ideological muddle was somehow lumped with Judaism, Catholicism, and Communism) and to help in the establishment of a Freemasonry museum.”
From 1934 to 1938 Eichmann worked for the SD office II-112, responsible for overseeing the activities of Jewish and Zionist organizations. His role was to oversee and administer Jewish relations under the Nuremberg laws, which relegated Jews to the status of second-class citizens but did not deprive them of their citizenship or certain rights. The Nuremberg laws gave many Jews the false security of believing that if they lived separately, they would be left alone. In that capacity, Eichmann became an expert in Jewish administration and emigration.
But his career only took off in March of 1938 when he was sent to Vienna in the wake of the Anschluss where the official German policy switched from voluntary to forced emigration. Eichmann established a Central Office for Jewish Emigration in Vienna, which within one year had deported over 100,000 Austrian Jews – nearly the entire Jewish population that remained – to concentration camps such as Buchenwald, Mauthausen and Auschwitz. Eichmann proved himself a master at working with Jews and Jewish organizations, someone who “was recognized not merely as an expert on ‘the Jewish question,’ but also on ‘the intricacies of Jewish organizations and Zionist parties,’” and someone who was an “’authority’ on emigration and evacuation,” and “a ‘master’ who knew how to make people move.” He was so successful in getting Jews to work with him to organize the evacuations to the East that he “won four promotions” between 1937 and 1941. It was this second stage of his Nazi career, dealing with the forced evacuation of Jews from the German Reich, that set Eichmann up for his central role in the Final Solution which began around 1941.
Early in the review of Ingrao’s book on intellectuals in the SD, Mieszkowski quotes Arendt in order to distinguish "joiners" like Adolf Eichmann from the subjects of Believe and Destroy. Eichmann, he argues, was distinct from the intellectuals who gave the orders that the bureaucrats followed and implemented. This difference, between those who administer intelligently but thoughtlessly and those whose job it is to design and administer the overarching policies, raises the question of whether there is any difference between the highly educated professionals who populated the SD and their less-educated subordinates like Eichmann. This question is, according to Mieszkowski, what propels Believe and Destroy. He writes:
In fact, Arendt was well aware that there was a place for the thinking man in the Third Reich. In Eichmann in Jerusalem, she goes out of her way to observe that the heads of the Einsatzgruppen, the paramilitary death squads of the SS that conducted mass killings on the Eastern front, were members of an intellectual elite. How did these men, who did not, unlike Eichmann, suffer from a “lack of imagination,” become an integral part of a sustained genocidal operation of unparalleled scale? The Belgian historian Christian Ingrao’s Believe and Destroy: Intellectuals in the SS War Machine attempts to answer this question.
According to Mieszkowski, Ingrao is engaged in looking more closely and with nuance at the educated elites of the Nazi SD. Here is how he describes Ingrao’s approach:
Examining the early lives of his 80 subjects, Ingrao relates a familiar story about the collective trauma that beset Germans in the aftermath of the First World War and the ensuing rise of völkisch ideologies. Nazism, he argues, was an eminently flexible system that allowed aspirations for Germany’s restoration and fears of foreign threats to the nation to be coordinated with racial hierarchies. His young SS-officers-to-be became part of precociously radicalized networks of associations, which deployed intense political activity presented as a defensive struggle against a universal and Protean enemy, an enemy which, on the “home front,” took the shape of the Spartacist, the Social Democrat, the separatist and — already — a Jewishness to which they were profoundly hostile.
All this is relatively well known. The tale becomes less familiar when Ingrao demonstrates that the dissertations of these young scholars (completed in the early 1930s) betray not a crass Nazification of scholarly practices but a more subtle politicization of research that began with the erosion of the boundary between intellectual inquiry and activism. The resulting Volkstumswissenschaften (social sciences focused on national character) were a heady mixture of history, geography, sociology, ethnography, and economics that would slowly come to be dominated by fascist doctrines — a disturbing reminder that there is nothing inherently progressive about interdisciplinarity.
The review, as well as Ingrao’s book, holds out the promise of understanding who these intellectuals were, what they did, and how they justified their participation in war crimes. It offers a glimpse of their initial self-image as scholars and consultants entrusted with helping the Nazi Party administer the Jewish question and other related social and economic concerns.
And it traces the blurring of the line between analysis and politics that infused scholarship with racism. Ingrao’s aim, Mieszkowski writes, is “to move beyond vague psychological speculations about how these men were able to stomach their grisly responsibilities.” He wants to show how the intellectuals could ultimately participate in executions and other crimes because
the executions were codified rituals with carefully crafted gestures and procedures, all designed to lend the slaughter a veneer of the inevitable while defusing the taboos associated with firing on unarmed women and children.
Mieszkowski has questions about Ingrao’s conclusions, and argues that “the precise contours of Ingrao’s proposed analysis remain a bit vague, in part because his commitment to it seems halfhearted.” Whatever the final verdict may be on Ingrao’s book, Mieszkowski’s review is essential reading. It is your weekend read.
[T]here are, indeed, few things that are more frightening than the steadily increasing prestige of scientifically minded brain trusters in the councils of government during the last decades. The trouble is not that they are cold-blooded enough to “think the unthinkable,” but that they do not think.
-Hannah Arendt, "On Violence"
Hannah Arendt’s warning about the power of educated elites in government is one of the most counter-intuitive claims made by an irreverently paradoxical thinker. It is also, given her writing about the thoughtlessness of Adolf Eichmann, jarring to see Arendt call Ivy League graduates with Ph.D.s both dangerous and thoughtless. And yet Arendt is clear that one of the great dangers facing our time is the prestige and power accorded to intellectuals in matters of government.
Arendt issues her warning in the introduction to her essay “On Violence.” It comes amidst her discussion of the truth of Lenin’s prediction that the 20th century would be a “century of wars” and a “century of violence.”
And it follows her claim that even though the technical development of weapons has made war unjustifiable, war nevertheless continues for the “simple fact that no substitute for this final arbiter in international affairs has yet appeared on the political scene.” It is “under these circumstances” of extraordinary violence, Arendt writes, that the entry of social scientists and intellectuals into government is so profoundly frightening.
Whereas most political thinkers believe that in violent times we should welcome educated and rational “scientifically minded brain trusters” in government, Arendt is skeptical. Her reasoning is that these social scientists calculate; they do not think. She explains what she means:
“Instead of indulging in such old-fashioned, uncomputerizable activity, [scientifically minded brain trusters] reckon with the consequences of certain hypothetically assumed constellations without, however, being able to test their hypotheses against actual occurrences.”
She has in mind those consultants, talking heads, and commentators in and out of government who create logically convincing hypothetical constructions of future events. This could be the claim, heard so often today, that if Iran gets a nuclear bomb it will use it, or that Al Qaeda and terrorism threaten the existence or freedoms of the United States. For Arendt, such claims always begin the same way, with a hypothesis. They state a possible outcome of a series of events. They then discuss and dismiss alternative possibilities. Finally, this hypothesis turns “immediately, usually after a few paragraphs, into a ‘fact,’ which then gives birth to a whole string of similar non-facts, with the result that the purely speculative character of the whole enterprise is forgotten.” In other words, we move from the speculative possibility that Iran would use nuclear weapons or that terrorism is a meaningful threat to the United States to the conclusion that these outcomes are facts. The danger of intellectuals in politics is that they have a unique facility with ideas and arguments, one quite capable of so enrapturing their own minds with the power of their arguments that they lose sight of reality.
When Arendt speaks about the danger of intellectuals in government she has in mind the example of the Vietnam War. In her essay “Lying and Politics”—a response to the Pentagon Papers—she hammers at the same theme of the danger intellectuals pose to politics. The Pentagon Papers were written by and written about “professional ‘problem solvers,’” who were “drawn into government from the universities and the various think tanks, some of them equipped with game theories and systems analyses, thus prepared, as they thought, to solve all the ‘problems’ of foreign policy.” The John F. Kennedy administration is famous, very much as is the Presidency of Barack Obama, for luring the “best and the brightest” into government service. We need to understand Arendt’s claim about why such problem solvers are dangerous.
These “problem solvers,” she argues, were men of “self-confidence, who ‘seem rarely to doubt their ability to prevail.’” They were “not just intelligent, but prided themselves on being ‘rational,’ and they were indeed to a rather frightening degree above ‘sentimentality’ and in love with ‘theory,’ the world of sheer mental effort.” They were men so familiar with theories and the manipulation of facts to fit logical argumentation that they could massage facts to fit their theories. “They were eager to find formulas, preferably expressed in a pseudo-mathematical language, that would unify the most disparate phenomena with which reality presented them.” They sought to transform the contingency of facts into the logical coherence of a lawful and pseudo-scientific narrative. But since the political world is not like the natural world of science, the temptation to fit reality to their theories meant that they became practiced in self-deception. That is why the “hard and stubborn facts, which so many intelligence analysts were paid so much to collect, were ignored.”
For Arendt, the “best-guarded secret of the Pentagon papers” is the “relation, or, rather, nonrelation, between facts and decision” which was prepared by the intellectual “defactualization” enabled by the problem solvers. “No reality and no common sense,” Arendt writes, “could penetrate the minds of the problem-solvers.”
Arendt’s suspicion of intellectuals in politics long predates her concern about the Vietnam War, and began with her personal experience of German intellectuals in the 1930s. She was shocked by how many of her friends and how many educated and brilliant German professors, lawyers, and bureaucrats—including but not limited to her mentor and lover Martin Heidegger—were able to justify and rationalize their complicity in the administration of the Third Reich, often by the argument that their participation was a lesser evil.
Similarly, she was struck by the reaction to her book Eichmann in Jerusalem, in which intellectuals constructed elaborate critiques of her book and her argument that had nothing at all to do with the facts of what she had written. In both instances, Arendt became aware of the intellectual facility for massaging facts to fit theories and thus the remoteness from reality that can infect those who live too easily in the life of the mind.
The Iraq War under George W. Bush and the war on terrorism waged under Bush and President Barack Obama are, today, clear examples of situations in which two U.S. administrations have convinced themselves of the need for military action and unparalleled surveillance of citizens under indisputably false pretenses. Iraq, contrary to assertions made by a policy elite of brain-trusters, had no connection with the 9/11 attacks and had no nuclear weapons.
Similarly, terrorism today does not pose a threat to the existence or the freedom of the United States. What terrorism threatens is the continued existence of the U.S. as the world superpower. What we are fighting for is not our survival, but our continued predominance and power. Some might argue that the fight for continued world dominance is worth the costs of our privacy and liberty; others may disagree. But we should at the very least be honest about what we are fighting for and what the costs of that fight are.
We see a similar flight from fact to theory in the Trayvon Martin case. Shameless commentators on the right continue to insist that race played no role in the altercation, ignoring the fact of racism and the clear racial profiling in this case. But similarly hysterical leftist commentators insist that Zimmerman killed Martin primarily because of his race. Let’s stipulate that George Zimmerman followed Martin in some part because of his race. But let’s also recognize that he killed Martin—at least according to the weight of the testimony—from below after a struggle. We do not know who started the struggle, but there was a struggle and it is quite likely that the smaller, armed Zimmerman feared for his safety. Yes, race was involved. Yes, racism persists. Yes, we should be angry about these sad facts and should work to change the simply unethical environment in which many impoverished youths are raised and educated. But it is not true that Martin was killed primarily because of his race. It is also likely that the only reason Zimmerman was put on trial for murder was to satisfy the clamor of those advancing their theory, the facts be damned.
If Arendt is justifiably wary of intellectuals in politics, she recognizes their importance as well. The Pentagon papers, which describe the follies of problem-solvers, were written by the very same problem solvers in an unprecedented act of self-criticism. “We should not forget that we owe it to the problem-solvers’ efforts at impartial self-examination, rare among such people, that the actors’ attempts at hiding their role behind a screen of self-protective secrecy were frustrated.” At their best, intellectuals and problem-solvers are also possessed of a “basic integrity” that compels them to admit when their theoretical fantasies have failed. Such admissions frequently come too late, long after the violence and damage have been done. And yet, the fidelity to the facts that fires the best of intellectual and scientific inquiry is, in the end, the only protection we have against the self-same intellectual propensity to self-deception.
“Don’t hold your breath, ‘cause the pretty things are going to hell…”
In the social spheres in which I circulate, both personal and electronic, reactions to the Supreme Court’s twin same-sex marriage rulings Wednesday have tended to fall fairly neatly into one of two categories, each sprinkled liberally with that unique brand of wry humor that long, bitter struggles breed. On one side, the watch-phrase of the day is that it is “the end of an era,” a legal victory so pragmatically important and symbolically immense as to mark a break with a past of marginalization and oppression, a coda or at least a caesura in a national timeline of violence. On the other side, there is a weary gladness that nevertheless casts a wary eye at the map of state-level battles won, and cautions that jubilance be tempered, slightly at least, with the reality that the race is still quite far from run.
You hear relatively few of those somber cynics of the legal system who otherwise are generally keen to point out that, historically, grand Supreme Court victories tend not to turn out very well for civil rights movements in the end. Then again, these tend to be a disagreeable sort to invite to a victory party, anyway.
In that description of my social world, though, lie the seeds both of a kind of beautiful promise and a form of quiet peril in this political moment that is easily lost behind the spectrum of satisfaction and still images of weeping couples. And how and if and what we capture and carry from this moment hinges, a little at least, on whether we can find it in ourselves to tarry on these two things for a time, before resuming our march to where we will have been. These musings should be taken for no more than that: no parades are meant to be rained on here, nor cynics bashed, nor innocence dispelled by piercing insight. Simply a tarrying. A more homemade kind of caesura.
Supreme Court decisions always reveal as much in what they do not settle as in what they do, and the palette of reactions I’ve described does too. In both cases it is the unsettled, the absent which is both silent and intrusive. It didn’t strike me until I began to work on this that I know literally not a single person who supported the Defense of Marriage Act (at least I don’t know that I do, which would simply signal another part of the same difficulty). Not one, and my places of birth generally make the politics of my friends rather diverse (or perhaps more appropriately, in the older sense, queer). And if that or something close to that experience is a fairly widespread one when we tarry long enough to notice – and I think it is, on both sides of the coin – that is deeply troubling, or ought to be deeply troubling as we paint each other pictures over tables and glasses of the road to come.
Some of the bitter fractiousness that marked Washington’s heights has died down a bit in recent months…this morning brought an until recently unthinkable immigration bill through the Senate, and while it faces a bloodier road in the House, that it may yet reach the foot of the road at all is an extraordinary thing, viewed through the eyes of ourselves a year younger. But the at least temporary waning of the sheer, violent ugliness of that divisiveness should not obscure the deeper truth that was revealed in those days of “death panels” and other repeated invocations of cold, dead hands. That the nation is deeply politically divided is facilely true, but also true of nearly all of its short history. But it is possible that we face now something new, or at least a dangerous new incarnation of an old imp from our democracy’s outlands.
One of the reasons some activists will now focus on finding state-level legal cases in which to use the emphasis on dignity in United States v. Windsor’s majority and Kennedy’s quite sweeping description of DOMA’s violation of equal protection is that there is a fear in parts of the movement that, without the power of the court, there remain what might be called “The Unreachables”: a handful of states (or more) in which opposition to non-hetero marriages is so entrenched that they cannot be won politically for the foreseeable future.
The idea of the Unreachable hints at something much deeper than a simple statistical divergence of views. If this were the only problem, then demographic trends are, if too slow for some couples who still wait to marry, at least strongly on the movement’s side. One of Hannah Arendt’s consistent concerns across her writings is the possibility of shared worlds. Underlying the idea of the unreachable state, whether or not it is recognized, is the possibility that the divergences in politics between various parts of this country are only the symptoms of a deeper reality: that individual experiences of the world are so different, share so little in common from which to draw a common weal, that in some politically and socially salient sense they are no longer sharing a world. And in a nation with an ideologically divided media culture, extraordinary and accelerating wealth disparity, and any number of structural mechanisms that favor political extremism over moderation, there is more to this worry than we might be willing to admit. If I try to cast my soul into the shoes of an evangelical preacher – whose experience of consumption may be far different from mine, whose experience of events of the outside world comes to her or him described in terms immensely unlike mine and contains figures barely recognizable to me, whose social frames and urban structure are radically disparate from my own – who is today mourning with all sincerity, and not cheering…in that moment it’s not moral difference that concerns me, as Scalia invoked in his dissent. It is the vanishingly thin fabric of a jointly sensed world that seems at stake, a jointly sensed world from which a nation has to be imagined.
There’s a sense in which the one thing that Supreme Court decisions do not do, ironically, is decide. At least, they do not decide much: they must be interpreted by lower courts and in the process extended or evacuated, they are subject to legislative challenge and circumvention, they have to be enforced and pursued by those outside the legal system. In that sense, at least, a Supreme Court decision is not an end to anything, let alone an era, and this is why proponents of non-hetero marriage have cautioned each other against over-optimism, the piece of truth in the curmudgeonly dismissal of the power of the High Court. But this essential malleability and chimeric strength becomes a particularly acute problem when filtered through the problem of un-shared worlds. There will be some, in those 36 states that have banned non-hetero marriage (a fact which formally at least remains unchallenged by United States v. Windsor) who will be swayed by the rhetorical and symbolic power of Kennedy’s words, that handful that will actually be heard. But those words, such few of them as trickle down the communicative chain, and the content of the decision, will by necessity be received filtered through social worlds both rich and rigidified. And as sociopolitical soil for Kennedy’s words, some of those worlds are very hostile worlds, indeed.
But in another way, that is exactly the promise in moments like Kennedy’s decision. What’s important about decisions, contra their image and verbiage, is precisely that they are never an end to anything. Their more significant function is not their symbolism, but that they begin.
The irony of a legal judgment is that, from the moment it is uttered, it becomes itself the subject of judgment. It is judged by lawyers, it is judged by lawmakers, it is judged by commentators…and it is judged by janitors and welders and artists and firefighters, equally. And as we circle our collective judgments around a mass of words uttered into our national vocabulary, a possibility is born. Certainly, we may simply pat our social selves on each other’s backs, and revel in our joy (or anger) that we know already to be shared. That’s not such a terrible thing itself. But the greater promise of the day, and the institution, is that it begins something that is shared, however thinly, between Farragut, Tennessee and Coolidge, New Mexico. It raises the possibility that our thoughts and judgments, a few at least, through those connections that remain in our worlds across lines built by mobile histories, might find their way into corners of other worlds. It is in those moments, those moments when we are confronted by someone who is a part of our lives, national or personal, for whom the experience of the day is profoundly different, that a thin tissue of sharing an object of judgment is vital. It may lead to the discovery of commonalities, it may lead to violent disagreement along all-too-familiar lines, but either way, a language is being born across worlds. Here, in this issue, that language is a language around what is most intimate to us, the most precious and tumultuous and defining parts of our lives: our lives with intimate others. And if we can share our lives with intimate others across the bounds of un-shared worlds, even in fraction and splice…then that world will not remain so un-shared, and another small bridge has been built from that which joins ourselves and our partners, our friends, our paramours, to each of our impish outlands. And that, that is cause for hope.
In 1949, The New York Times asked Norbert Wiener, author of Cybernetics, to write an essay for the paper that expressed his ideas in simple form. For editorial and other reasons, Wiener’s essay never appeared and was lost. Recently, a draft of the never-published essay was found in the MIT archives. Written now 64 years ago, the essay remains deeply topical. The Times recently printed excerpts. Here is the first paragraph:
By this time the public is well aware that a new age of machines is upon us based on the computing machine, and not on the power machine. The tendency of these new machines is to replace human judgment on all levels but a fairly high one, rather than to replace human energy and power by machine energy and power. It is already clear that this new replacement will have a profound influence upon our lives, but it is not clear to the man of the street what this influence will be.
Wiener draws a core distinction between machines and computing machines, a distinction that is founded upon the ability of machines to mimic and replace not only human labor, but also human judgment. In the 1950s, when Wiener wrote, most Americans worried about automation replacing factory workers. What Wiener saw was a different danger: that intelligent machines could be created that would “replace human judgment on all levels but a fairly high one.”
Today, of course, Wiener’s prophecy is finally coming true. The IBM supercomputer Watson is being trained to make diagnoses with such accuracy, speed, and efficiency that it will largely replace the need for doctors to be trained in diagnostics.
Google is developing a self-driving car that will obviate the need for humans to judge how fast and near to others they will drive, just as GPS systems already render moot the human sense of direction. MOOCs are automating the process of education and grading so that fewer human decisions need to be made at every level. Facebook is automating the acquisition of friends, lawyers are employing computers to read and analyze documents, and on Wall Street computer trading is automating the buying and selling of stocks. Surveillance drones, of course, are being given increasing autonomy to sift through data and decide which persons to follow or investigate. Finally, in the scandal of the day, the National Security Agency is using computer algorithms to mine data about our phone calls looking for abnormalities and suspicious patterns that would suggest potential dangers. In all these cases, the turn to machines to supplement or even replace human judgment has a simple reason: Even if machines cannot think, they can be programmed to do traditionally human tasks in ways that are faster, more reliable, and less expensive than can be done by human beings. In ways big and small, human judgment is being replaced by computers and machines.
It is important to recognize that Wiener is not arguing that we will create artificial human beings. The claim is not that humans are simply fancy machines or that machines can become human. Rather, the point is that machines can be made to mimic human judgment with such precision and subtlety so that their judgments, while not human, are considered either equal to human judgment or even better. The result, Wiener writes, is that “Machines much more closely analogous to the human organism are well understood, and are now on the verge of being built. They will control entire industrial processes and will even make possible the factory substantially without employees.”
Wiener saw this new machine age as dangerous on at least two grounds. First, economically, the rise of machines carries the potential to upend basic structures of civilization. He writes:
These new machines have a great capacity for upsetting the present basis of industry, and of reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price. If we combine our machine-potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an industrial revolution of unmitigated cruelty.
The dangers Wiener sees from our increased reliance on computing machines are not limited to economic dislocation. The real threat that computing machines pose is that as we cede more and more power to machines in our daily lives, we will, he writes, gradually forfeit our freedom and independence:
[I]f we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.
In short, it is only a humanity which is capable of awe, which will also be capable of controlling the new potentials which we are opening for ourselves. We can be humble and live a good life with the aid of the machines, or we can be arrogant and die.
For Wiener, our eventual servitude to machines is both an acceptable result and a fait accompli, one we must learn to accept. If we insist on arrogantly maintaining our independence and freedom, we will die. I gather the point is not that machines will rise up and kill their creators, but rather that we ourselves will program our machines to eliminate, imprison, immobilize, or re-program those humans who refuse to comply with the paternalistic and well-meaning directives of the machine systems we create in order to provide ourselves with security and plenty.
Wiener counsels that instead of self-important resistance, we must learn to be in awe of our machines. Our machines will improve our lives. They will ensure better medical care, safer streets, more efficient production, better education, more reliable childcare, and more humane warfare. Machines offer the promise of a cybernetic civilization in which the entire human and natural world is regulated and driven towards a common good with super-human intelligence and calculative power. In the face of such utopian possibility, we must accept our new status as the lucky beneficiaries of the regulatory systems we have created and humble ourselves as beings meant to live well rather than to live free.
The recent revelation that the U.S. government is using powerful computers to mine and analyze enormous amounts of data collected via subpoenas from U.S. telecom companies is simply one example of the kind of tradeoff Wiener suggests we will and should make. If I understand the conclusions of Glenn Greenwald’s typically excellent investigative reporting, the NSA uses computer algorithms to scan the totality of phone calls and internet traffic in and out of the United States. The NSA needs all of this data—all of our private data—in order to understand the normal patterns of telephony and web traffic and thus to notice, as well, those exceptional patterns of calling, chatting, and surfing. The civil libertarian challenges of such a program are clear: the construction of a database of normal behavior allows the government to attend to those whose activities are outside the norm. Those outliers can be terrorists or pedophiles; they may be Branch Davidians or members of Occupy Wall Street; they may be Heideggerians or Arendtians. Whoever they are, once those who exist and act in patterns outside the norm are identified, it is up to the government whether to act on that information and what to do with it. We are put in the position of having to trust our government to use that information wisely, with pitifully little oversight. Yet the temptation will always be there for the government to make use of private information once it has it.
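The baseline-and-outlier logic described above—learn what "normal" looks like across the whole population, then flag whoever deviates from it—can be made concrete with a toy sketch. This is purely illustrative: it is not any real NSA system, and the caller names, counts, and threshold are invented for the example.

```python
# Toy illustration of baseline-and-outlier detection: flag callers whose
# daily call volume deviates sharply from the population's average.
# Not any actual surveillance algorithm; all data here is invented.
from statistics import mean, stdev

def flag_outliers(call_counts, threshold=3.0):
    """Return the callers whose call count lies more than `threshold`
    standard deviations from the mean across all callers."""
    counts = list(call_counts.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # everyone behaves identically: no outliers
        return []
    return [caller for caller, n in call_counts.items()
            if abs(n - mu) / sigma > threshold]

# Hypothetical data: 100 ordinary callers, plus one extreme outlier.
data = {f"caller_{i}": 5 for i in range(100)}
data["caller_x"] = 500
print(flag_outliers(data))  # prints ['caller_x']
```

Note that the civil-libertarian worry in the passage falls directly out of the code: the baseline (`mu`, `sigma`) can only be computed by collecting everyone's data, and "outlier" is defined purely statistically, with no notion of wrongdoing.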
In the face of the rise of machines and the present NSA action, we have, Wiener writes, a choice. We can arrogantly thump our chests and insist that our privacy be protected from snooping machines and governmental bureaucracies, or we can sit back and stare in awe of the power of these machines to keep us safe from terrorists and criminals at such a slight cost to our happiness and quality of life. We already allow the healthcare bureaucracy to know the most intimate details of our lives, the banking system to penetrate into the most minute details of our finances, and the advertising system to know the most embarrassing details of our surfing and purchasing histories; why, Wiener pushes us to ask, should we shy away from allowing the security apparatus to make use of our communications?
If there is a convincing answer to this hypothetical question and if we are to decide to resist the humbling loss of human freedom and human dignity that Wiener welcomes, we need to articulate the dangers Wiener recognizes and then rationalizes in a much more provocative and profound way. Towards that end, there are few books more worth reading than Hannah Arendt’s The Human Condition. Wiener is not mentioned in Hannah Arendt’s 1958 book; and yet, her concern and her theme, if not her response, are very much in line with the threat that cybernetic scientific and computational thinking pose for the future of human beings.
In her prologue to The Human Condition, Arendt writes that two threatening events define the modern age. The first was the launch of Sputnik. The threat of Sputnik had nothing to do with the cold war or the Russian lead in the race for space. Rather, Sputnik signifies for Arendt the fact that we humans are finally capable of realizing the age-old dream of altering the basic conditions of human life, above all that we are earth-bound creatures subject to fate. What Sputnik meant is that we were then in the 1950s, for the first time, in a position to humanly control and transform our human condition and that we are doing so, thoughtlessly, without politically and thoughtfully considering what that would mean. I have written much about this elsewhere and given a TEDx talk about it here.
The second “equally decisive” and “no less threatening event” is “the advent of automation.” In the 1950s, automation of factories threatened to “liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” Laboring, Arendt writes, has for thousands of years been one essential part of what it means to be a human being. Along with work and action, labor comprises those activities engaged in by all humans. To be human has meant to labor and support oneself; to be human has for thousands of years meant that we produce things—houses, tables, stories, and artworks—that provide a common humanly built world in which we live together; and to be human has meant to have the ability to act and speak in such a way as to surprise others so that your action will be seen and talked about and reacted to with a force that will alter the course and direction of the human world. Together these activities comprise the dignity of man, our freedom to build, influence, and change our given world—within limits.
But all three of these activities of what Arendt calls the vita activa are now threatened, if not with extinction, then at least with increasing rarity and public irrelevance. As automation replaces human laborers, the human condition of laboring for our necessary preservation is diminished, and we come to rely more and more on the altruism of a state enriched by the productivity of machine labor. Laboring, part of what it has meant to be human for thousands of years, threatens to become ever less necessary and to place an ever smaller demand on our existence. As the things we make, the houses we live in, and the art we produce become ever more consumable, fleeting, and temporary, the common world in which we live comes to seem ever more fluid; we move houses and abandon friends with greater ease than previous ages would dispose of a pair of pants. Our collective focus turns toward our present material needs rather than towards the building of common spiritual and ethical worlds. Finally, as human action is seen as the statistically predictable and understandable outcome of human behavior rather than the surprising and free action of human beings, our human dignity is sacrificed to our rational control and steering of life to secure safety and plenty. The threat to labor, work, and action that Arendt engages emerges from the rise of science—what she calls earth and world alienation—and the insistence that all things, including human beings, are comprehensible and predictable by scientific laws.
Arendt’s response to these collective threats to the human condition is that we must “think what we are doing.” She writes at the end of her prologue:
What I propose in the following is a reconsideration of the human condition from the vantage point of our newest experiences and our most recent fears. This, obviously, is a matter of thought, and thoughtlessness—the heedless recklessness or hopeless confusion or complacent repetition of “truths” which have become trivial and empty—seems to me among the outstanding characteristics of our time. What I propose, therefore, is very simple: it is nothing more than to think what we are doing.
Years before Arendt traveled to Jerusalem and witnessed what she saw as the thoughtlessness of Adolf Eichmann, she saw the impending thoughtlessness of our age as the great danger of our time. Only by thinking what we are doing—and in thinking also resisting the behaviorism and materialism of our calculating time—can we humans hope to resist the impulse to be in awe of our machines and, instead, retain our reverence for the human being that is the foundation of our humanity. Thinking—that dark, irrational, and deeply human activity—is the one meaningful response Arendt finds to both the thoughtlessness of scientific behaviorism and the thoughtlessness of the bureaucratic administration of mass murder.
There will be great examples of chest thumping about the loss of privacy and the violation of constitutional liberties over the next few days. This is as it should be. There will also be sober warnings about the need to secure ourselves from terrorists and enemies. This is also necessary. What is needed beyond both these predictable postures, however, is serious thinking about the tradeoff between our need for reliable and affordable security and our commitment to human freedom, along with honest discussion of what we today mean by that freedom. To begin such a discussion, it is well worth revisiting Norbert Wiener’s essay. It is your weekend read.
If you are interested in pursuing Arendt’s own response to the crisis of humanism, you can find a series of essays and public lectures on that theme here.
Does a cross in a courtroom infringe on the religious freedom of non-Christians involved in legal proceedings? Does it violate the principles of a secular state? These questions have recently arisen in Germany thanks to the trial of Beate Zschäpe. Zschäpe is the one surviving member of the National Socialist Underground (NSU), a band of neo-Nazis that allegedly murdered eight people of Turkish descent, one person of Greek descent, and one non-immigrant German police officer in a string of premeditated attacks from 2000 to 2007.
Zschäpe is currently standing trial at the upper court of appeals in Munich, and like other legal chambers in the state of Bavaria, its décor includes a modest wooden cross.
This cross did not evoke comment from the judge and lawyers in the run-up to the trial, and it was not an initial source of concern for the victims’ immediate relatives, who are acting as joint plaintiffs in the case. But it did draw the ire of Mahmut Tanal, a member of the Turkish parliament who attended the first day of the proceedings. Tanal, who is affiliated with the secularist Republican People’s Party, argued that a religious symbol like a cross has no place in the courtroom and should be removed immediately. In his estimation, the cross not only violated the principle of state neutrality in religious affairs, but also constituted a “threat” for the Muslim relatives of the Turkish victims.
Several conservative politicians in Germany responded to his complaints with sharply worded defenses of the cross. Norbert Geis, a parliamentarian for Germany’s Christian Social Union (CSU), announced that “the cross belongs to our culture” and urged Tanal to display more respect for the Christian influence on German life. Günter Krings, a member of parliament for the Christian Democratic Union (CDU), contended that the cross “symbolizes brotherly love and tolerance and is an expression of our Christian-Western roots.” And Günther Beckstein (CSU), Bavaria’s former Minister President, insisted that it was important to make clear, even in a courtroom, that “God stands above the person.”
The matter might have ended there if one of the joint plaintiffs, Talar T., had not agreed with Mahmut Tanal and filed a motion for the cross to be removed. Talar T. insisted that he had a pressing claim “not to be exposed to the influence of a religion—even in the form of a symbol—by the German state.”
Significantly, there is no established legal precedent on this and related matters. The State Court in Saarbrücken ruled in 2001 that a cross must be removed from a courtroom when a concerned party believes that its presence injures her or his right to religious freedom. But it is not clear whether this judgment would apply to courts in Bavaria, especially since Germany’s federal system grants individual states considerable legal and policymaking autonomy. Indeed, it is precisely this system that has allowed Bavaria to hang crosses in its courtrooms when most other German states avoid and even disavow the practice.
We should not place undue emphasis on this aspect of the trial, which is highly charged for reasons that have nothing to do with the presence or absence of a cross. After all, German prosecutors accuse Zschäpe and her NSU compatriots of a string of xenophobic if not racist murders, and they charge that incompetence at the highest levels of German law enforcement allowed many if not all of these murders to occur. Nevertheless, I would argue that the contention and uncertainty surrounding the cross remain significant in their own right, for they speak to important arguments about the nature of secularism as a modern historical phenomenon.
In a series of recent articles and a concluding book, the University of Chicago anthropologist Hussein Agrama has proposed that secularism, contrary to the normative claims advanced in its favor, is not an institutional framework in which religion and politics are clearly separated. Instead, secularism consistently fashions religion as an object of governmental management and intervention, and it therefore expresses the state’s sovereign power to decide “what should count as essentially religious and what scope it can have in social life.” Yet in the act of exercising this power, the secular state repeatedly blurs the very line between religion and politics that it aims to draw. For example: if a state insists that religiosity may only be expressed in the private sphere, what is the nature and extent of that sphere? Does it only include the home? Or does it also encompass communal places of worship, or believers’ choice of clothing and other forms of adornment? Is not the demarcation of a private realm of legitimate religious expression itself a political act?
In the end, Agrama argues that secularism is not a solution that neatly defines religion’s place in contemporary life. Instead, it constitutes a problem-space “wherein the question of where to draw a line between religion and politics continually arises.” Moreover, this question cannot be easily ignored, for it is inextricably bound up with the distribution of liberal rights and freedoms.
In Germany’s case, the state and federal governments, including the one in Bavaria, have adopted the principle that the state is independent of religious institutions and should not invoke or favor one religious tradition over another. The state and federal governments have also affirmed the right of all citizens to express their religious beliefs without undue interference from the state. These commitments are basic elements of German liberal governance, and the presence of the cross in Bavarian courtrooms would appear to complicate if not directly contradict them. To use Agrama’s language, the cross blurs the line between religion and politics, and it raises questions about the substance of the religious freedom that citizens may claim.
As my preceding discussion indicates, proponents of the status quo in Bavaria have tended to finesse these difficulties by insisting that the cross is merely a “symbol.” The cross, they imply, evokes a tradition that has exerted a formative influence on culture and politics in Germany and humanist thinking more broadly, but its presence is ultimately incidental to the legal proceedings and judgments that the state initiates. Moreover, the cross does not “threaten” non-Christians because it does not enshrine Christianity as the state’s religion, and it does not infringe on citizens’ freedom of religious belief or their equality before the law. To an important extent, this logic would seem to deny that the cross, at least in this context, is a “religious” artifact at all.
Of course, we might well wonder whether a symbol that is incidental to legal proceedings really needs to be present in a courtroom in the first place. More importantly, though, we might wish to question the innocence of the cross given the larger context of the case against Beate Zschäpe.
The NSU murders have led many migrants and post-migrants, including those from Muslim-majority countries like Turkey, to doubt their full inclusion in the German nation and polity. Moreover, the climate of lingering distrust surrounding Islam has only sharpened many Muslims’ perception that their faith is not a welcome and integral aspect of German life. Thus, even if the inclusion of a cross is not meant to be a “threatening” gesture, it is hardly a neutral, merely “symbolic” one either.
In the wake of the Arab Spring, many Euro-American commentators have wondered whether the new governments in Egypt and other Middle Eastern countries will be “secular” or “religious.” At least some of them have also maintained that “secular” governments will further the region’s democratization and long-term stability. To my mind, this line of thinking presumes that states in Europe and North America are exemplary polities which have more or less resolved the perplexities of secularism. But if the recent debates over the cross in Germany are any indication, such a judgment is premature if not complacent and self-serving. Even in those polities where secularism seems firmly established, uncertainty and dissension over religion persist. Indeed, such a condition may be the norm that defines secularist structures of power, not their fleeting and aberrant exception.
NOTE: as I was finishing this post, the U.S. Supreme Court announced that it will rule on the constitutional status of prayer in town board meetings, based on a case from Greece, New York. Many of my remarks on the Zschäpe trial are pertinent in this instance as well.
In an era defined by pervasive mass mediation, what role might intimacy—a relation of closeness and familiarity with another person—play in the realm of politics? Most of us are familiar with the town hall meetings, living-room campaign events, and other highly publicized yet simultaneously face-to-face interactions that now play a prominent role in this country’s electoral campaigns. Most of us also recognize the ways that Obama and other recent Presidents have presented the nation with stories of individual tribulation in order to “give a human face” to the sub-prime mortgage crisis, the spiraling costs of health care, or the most recent school shooting. To be sure, cynics might argue that political figures stage such moments of (supposed) intimacy merely to evoke and harness constituents’ sentiments in the service of their policies—or to lend themselves an air of human approachability. There is probably some truth in these claims. But I believe such skepticism also underestimates the varied motives that prompt leaders to mobilize the rhetoric of intimacy. It also skirts the diverse reasons why ordinary citizens might seek out—or refuse—those leaders’ expressions of empathy and solidarity.
I was led to these reflections by last week’s meeting between German Federal President Joachim Gauck and the relatives of the ten people murdered by the National Socialist Underground, or NSU, between 2000 and 2007. The NSU was a small but committed group of violent neo-Nazis that initially coalesced in Jena, a university town in the former East Germany, over the course of the 1990s. In the years that followed, the NSU’s core members ranged across the Federal Republic of Germany to target and fatally shoot nine small business owners of migrant backgrounds (eight of them Turkish, one of them Greek). They also shot two police officers, one of whom later died of her injuries, and injured twenty-three additional people in two separate bombings.
The NSU managed to evade detection and arrest for years due to competition between German law enforcement and security agencies, structural barriers to communication, and the evident negligence of investigators on the local, state, and national levels. Perhaps most egregiously, the Federal Office for the Protection of the Constitution sought to avoid public scrutiny and censure of its actions in late 2011 by deliberately destroying files that documented its long-standing contacts with another right extremist group closely related to the NSU. During these years of bungling, many of those in law enforcement and public security failed to take the possibility of right-wing political violence seriously. Instead, investigators long operated on the premise that the victims had died at the hands of relatives, drug dealers, or other criminals in the underground immigrant milieu. In this manner, law enforcement officials wittingly or unwittingly bought into reductive stereotypes that link people of migrant backgrounds with violence and crime.
On February 18th, Federal President Gauck invited the victims’ families to his offices in Berlin to express his commitment to the full investigation of the murders and the institutional shortcomings that failed to prevent them. This meeting was a particularly delicate one for Gauck since he has struggled to strike a consistent and publicly palatable tone on matters of immigration. In 2010 he awkwardly defended the anti-immigrant populist Thilo Sarrazin and his book Deutschland schafft sich ab, and later that same year he distanced himself from former Federal President Christian Wulff’s claim that “Islam belongs to Germany.” Gauck also appeared distinctly ill at ease during an earlier memorial gathering with the victims’ families in 2012. On the whole, Gauck’s public statements have tended to emphasize the “foreignness” of Muslims and post-war migrants, not their integral and self-evident membership in German national life.
Perhaps in response to his earlier public statements, last week Gauck promised the families that he would invest his full “personal involvement” in the ongoing government-led inquiry. This striking formulation implied a level of engagement on his part that extended beyond the strictly “professional” concern that might be expected of him as Germany’s Federal President. In addition, Gauck insisted ahead of the meeting that it would be small in scale and familiar in tone, and members of the press would not be allowed to attend the proceedings. These conditions would allow him, he contended, to hold personal and private conversations with all of the attendees. In short, Gauck set considerable store in interpersonal intimacy as a means to engage with the surviving family members, acknowledge their loss and mistreatment, and restore at least some of their confidence in German legal and political institutions. He probably also hoped to rehabilitate his own public image to some degree, but I suspect that this motive was not the overriding one during this particular meeting.
Many of the surviving family members accepted Gauck’s invitation, and several of the seventy total attendees contributed in their own fashion to this performance of political intimacy. Ismail Yozgat, father of murder victim Halit Yozgat, arrived with a photograph of his son as a six-year-old hung around his neck, and when he tearfully called on state authorities to “give me my son back,” Gauck wrapped his arm around his shoulders.
But many of the other attendees pointedly refused to speak at length with the Federal President, and some later expressed irritation that Gauck had not lingered with them, but had instead departed promptly at 2 p.m. to attend to other business. Others were troubled by the fact that Gauck had shared his opening remarks with media representatives rather than reserving them for the families alone. And still others had not accepted the Federal President’s invitation at all and remained distant from the gathering. One family worried that the group would actually be too large for Gauck to have a personal conversation with them. A few others, meanwhile, justified their absence on the grounds that the Federal President’s office had not honored their requests for their lawyers to be present. In the words of Angela Wierig, representative for Ayşen Taşköprü: “true empathy would have recognized that my client would have liked me by her side.”
In short, the families of the NSU murder victims were well aware of the intimacy that Gauck had hoped to invest and mobilize in their meeting with him, and they negotiated his framing of the event in diverse ways. While some consented to the terms of Gauck’s invitation, others detected a distinct lack of sincerity in his proclaimed “personal involvement.” Others feared that the gathering would not in fact be intimate enough, while still others refused the offer of intimacy and instead sought to place the meeting on a squarely legalistic, guarded, even confrontational footing.
All of these responses point to the ways that human closeness is not an inevitable or natural state of affairs, but rather a contingent condition that can only be fashioned amid varied and often trying circumstances. Moreover, they suggest that intimacy may not be the only or even the best means to redress moments of pain, injustice, and mistrust.
For no small number of the people affected by the NSU murders, rigorous accountability for law enforcement incompetence—not offers of solidarity from the Federal President—offered the most promising route forward.
We have a higher education bubble. The combination of unsustainable debt loads on young people and the advent of technological alternatives is clearly set to upend the staid and often sclerotic world of higher education.
In this month’s The American Interest, Nathan Hardin—the author of Sex & God at Yale: Porn, Political Correctness, and a Good Education Gone Bad (St. Martin’s, 2012) and editor of The College Fix—tries to quantify the destructive changes coming to higher education. Here is his opening paragraph:
In fifty years, if not much sooner, half of the roughly 4,500 colleges and universities now operating in the United States will have ceased to exist. The technology driving this change is already at work, and nothing can stop it. The future looks like this: Access to college-level education will be free for everyone; the residential college campus will become largely obsolete; tens of thousands of professors will lose their jobs; the bachelor’s degree will become increasingly irrelevant; and ten years from now Harvard will enroll ten million students.
Step back a second. Beware of all prognostications of this sort. Nobody knows what will happen tomorrow, let alone 50 years from now. Even today the NY Times reports that the University of Cincinnati and the University of Arizona are turning to online courses as a way of increasing enrollment at their residential campuses. Whether this will work and how it will transform the very idea of a residential college are not yet clear. Predictions like Hardin’s can be provocative, and thus thought-inducing, but they are rarely accurate and too often simply irresponsible.
Beyond the hyperbole, here is something true. Colleges will exist so long as they can convince students and their parents that the value of education is worth the cost. One reason some colleges are suffering today is clearly the cost. But another reason is the declining perception of value. We should also remember that many colleges—especially the best and most expensive ones—are seeing record demand. If and when the college bubble bursts, not all colleges will be hit equally. Some will thrive and others will likely disappear. Still others will adapt. We should be wary of collapsing all colleges into a single narrative or thinking we can see the future.
Part of the problem is that colleges offer education, something inherently difficult to put a value on. For a long time, the “value” of higher education was intangible. It was the marker of elite status to be a Harvard man or some such thing. One learned Latin and Greek and studied poetry and genetics. But what really was being offered was sophistication, character, erudition, culture, and status, not to mention connections and access.
More recently, college is “sold” in a very different way. It promises earning power. This has brought a whole new generation and many new classes into university education as they seek the magic ticket granting access to an upper middle class lifestyle. As the percentage of college graduates increases, the distinction and thus market value of college education decreases. The problem colleges have is that in their rush to open the doors to all paying customers, they have devalued the product they are offering. The real reason colleges are threatened now—if they indeed are threatened—is less financial than it is intellectual and moral. Quite simply, many of our colleges have progressively abandoned their intangible mission to educate students and embraced the market-driven task of credentialing students for employment. When for-profit or internet-based colleges can do this more cheaply and more efficiently, it is only logical that they will succeed.
For many professors and graduate students, the predicted demise of the residential college will be a hard shock. Professors who thought they had earned lifetime security with tenure will be fired as their departments are shuttered or their entire universities closed down. Just as reporters, booksellers, and now lawyers around the country have seen their jobs evaporate amid the disruption of the internet, so too will professors be replaced by technological efficiencies. And this may well happen fast.
Gregory Ferenstein, who describes himself as a writer and educator and writes for Techcrunch and the Huffington Post, has gone so far as to offer a proposed timeline of the disappearance of most colleges as we know them. Here is his outline, which begins with the recently announced pilot program that will see basic courses at San Jose State University replaced by online courses administered by the private company Udacity:
- [The] Pilot [program in which Udacity is offering online courses for the largest university system in the world, the California State University System] succeeds, expands to more universities and classes
- Part-time faculty get laid off, more community colleges are shuttered, extracurricular college services are closed, and humanities and arts departments are dissolved for lack of enrollment (science enrollment increases–yay!?)
- Graduate programs dry up, once master’s and PhD students realize there are no teaching jobs. Fewer graduate students means fewer teaching assistants and, therefore, fewer classes
- Competency-based measures begin to find the online students perform on par with, if not better than, campus-based students. Major accredited state college systems offer fully online university degrees, then shutter more and more college campuses
- A few Ivy League universities begin to control most of the online content, as universities all over the world converge toward the classes that produce the highest success rates
- In the near future, learning on a college campus returns to its elite roots, where a much smaller percentage of students are personally mentored by research and expert faculty
I put little faith in things working out exactly as Ferenstein predicts, and yet I can’t imagine he is that far off the mark. As long as colleges see themselves in the knowledge-production business and the earnings-power business, they will be vulnerable to cheaper alternatives. Such quantifiable ends can be achieved more cheaply and sometimes better through technology and distance learning. Only education—the leading of students into a common world of tradition, values, and common sense—depends on the residential model of one-to-one in-person learning associated with the liberal arts college. The large university lecture course is clearly an endangered species.
Which is why it is so surprising to read a nearly diametrically opposed position suggesting that we are about to enter a golden age for untenured and adjunct faculty. This is the opinion of Michael Bérubé, the President of the Modern Language Association. Bérubé gave the Presidential Address at the 2013 MLA meetings in Boston earlier this month.
It is helpful and instructive to compare Hardin’s technophilic optimism with Bérubé’s recent remarks. He dedicated much of his speech to a very different optimism, namely that contingent and adjunct faculty would finally get the increased salaries and respect that they deserved. According to Bérubé:
[F]or the first time, MLA recommendations for faculty working conditions [are] being aggressively promoted by way of social media…. After this, I think, it really will be impossible for people to treat contingent faculty as an invisible labor force. What will come of this development I do not know, but I can say that I am ending the year with more optimism for contingent faculty members than I had when I began the year, and that’s certainly not something I thought I would be able to say tonight.
Bérubé’s talk is above all a defense of professionalization in the humanities. He defends graduate training in theory as a way to approach literary texts. He extols the virtues of specialized academic research over and above teaching. He embraces and justifies “careers of study in the humanities” over and against the humanities themselves. Above all, he argues that there are good reasons to “bother with advanced study in the humanities.” In short, Bérubé defends not the humanities, but the specialized study of the humanities by a small group of graduate students and professors.
I understand what Bérubé means. There is a joy in the pursuit of esoteric knowledge, even if he eschews the idea of joy, wanting instead to cast his pursuit as work and professionalized labor. But to think that there is an optimistic future for the thousands of young graduate students and contingent faculty currently hoping to make professional careers in the advanced study of the humanities is lunacy. Yes, advanced study of the humanities is joyful for some. But why should it be a paying job? There is a real blindness in Bérubé’s speech, not only to the technological and economic imperatives of the moment, but also to the idea of the humanities.
As Hannah Arendt wrote 50 years ago in her essay On Violence, humanities scholars today are better served by being learned and erudite than by seeking to do original research by uncovering some new or forgotten scrap. What we need is not professional humanities scholars so much as educated and curious thinkers and readers.
As I have written before:
To say that excessively specialized humanities scholarship today is irrelevant is not to say that the humanities are irrelevant. The humanities are that space in the university system where power does not have the last word, where truth and beauty as well as insight and eccentricity reign supreme and where young people come into contact with the great traditions, writing, and thinking that have made us who we are today. The humanities introduce us to our ancestors and our forebears and acculturate students into their common heritage. It is in the humanities that we learn to judge the good from the bad and thus where we first encounter the basic moral faculty for making judgments. It is because the humanities teach taste and judgment that they are absolutely essential to politics. It is even likely that the decline of politics today is profoundly connected to the corruption of the humanities.
If humanities programs and liberal arts colleges go the way of the duck-billed platypus, it will only partly be because of new technologies and rising debt. It will also be because the over-professionalization of the humanities has led—in some but not all colleges—to a course of study that simply is not seen as valuable by many young people. The changes that Hardin and Ferenstein see coming will certainly shake up the all-too-comfortable world of professional higher education. That is not bad at all. The question is whether educators can adapt and begin to offer courses and learning that are valuable. But that will only happen if we abandon the hyper-professionalized self-image defended by scholars like Michael Bérubé. One model for such a change is, of course, the public intellectual writing and thinking of Hannah Arendt.
On New Year’s Eve President Obama, despite serious reservations, signed into law a bill that has generated much controversy and charged rhetoric over the last few weeks. That annual bill, the National Defense Authorization Act, specifies the budget and expenditures for the Department of Defense. But the 2012 NDAA is not restricted to the technicalities of the budget. Embedded within the act, as carefully broken down by Glenn Greenwald in an article for Salon, are several sections enabling indefinite military detention without trial and expanding the scope of the War on Terror even further.
The section in question, “Subtitle D- Counterterrorism,” codifies the following:
- The indefinite detention, without trial, by the military and under military rather than civilian jurisdiction, of certain classes of persons;
- The persons covered by the Bill include not only persons who are part of terrorist groups, but also those who offer substantial support to terrorist groups, a vague standard that in recent cases has been applied to lawyers and human rights activists;
- U.S. Citizens are, in certain circumstances, also subject to indefinite detention by the military;
- The use of funds to build a facility in the US for Guantánamo detainees and the transferring of those detainees to the US for any reason is prohibited, rendering the closing of Gitmo impossible.
Some argue that the NDAA merely codifies detention authority that the administration and the military have already claimed, both in their actions and in the 2001 Authorization for Use of Military Force. This argument does hold some weight, particularly because the NDAA’s purposeful ambiguity, as well as Obama’s stated concern about “certain provisions that regulate the detention, interrogation, and prosecution of suspected terrorists,” makes it difficult to determine exactly how the provisions will be used.
But even if it is the case that the NDAA does not constitute a major change in policy (which Greenwald and others dispute), the fact that the NDAA’s provisions put the legislative weight of Congress behind such policy is no small tragedy. From the ACLU to Human Rights Watch to Ron Paul (who called one of its earlier versions “a slip into tyranny”), the NDAA has been denounced for its frightening disregard for the rule of law and the US Constitution. ACLU’s executive director, Anthony D. Romero, had particularly strong words about the bill’s signing:
President Obama’s action today is a blight on his legacy because he will forever be known as the president who signed indefinite detention without charge or trial into law…We are incredibly disappointed that President Obama signed this new bill into law even though his administration has already claimed overly broad detention authority in court. Any hope that the Obama administration would roll back the constitutional excesses of George Bush in the war on terror was extinguished today.
Sadly, what is clear is that the NDAA further entrenches what was already unjust and legally questionable policy. The fact that the bill passed both the Senate and the House, as well as avoided a presidential veto, is unfortunately not a testament to its validity or harmlessness, but to just how much compromising of constitutional values we’ve let ourselves swallow.
The NDAA is the latest manifestation of the crisis mentality that has gripped Washington since the so-called War on Terror began. It seems that we are living in a chronic state of emergency, in which a policy as un-American as indefinite detention is justified simply by claiming that it is keeping us safe. As Elaine Scarry writes in her recent book Thinking in an Emergency, there is an unspoken presumption in times of emergency that either one can think or one can act. We bypass deliberation exactly when it becomes most necessary. Arendt agreed, believing that in exceptional times deliberation and judgment must come to the forefront in all political matters. For her, the faculty of judgment is not the ability to know right and wrong abstractly, but the ability to tell right from wrong in a given situation.
In the case of the NDAA, Congress has washed its hands of its responsibility to think, to determine what is right and wrong in this War on Terror. Instead Congress and the President have handed off authority to the military to use as it sees fit, taking refuge in the bill’s ambiguous phrasing to avoid having to take a real stand. In their unwillingness to say, in law, that indefinite detention is wrong, our politicians have made what was an implicit policy an explicit statement: this is what America does.