The Hannah Arendt Center at Bard College seeks an enthusiastic Program Associate to help grow the Center at an exciting time in its history. The Program Associate would be responsible for working with the Director of the Arendt Center to administer and grow the Center, with the mission to provoke engaged thinking that elevates public discussion of the nation's most pressing political and ethical challenges.
In the spirit of Hannah Arendt, the Center's mission is to encourage people to "think what we are doing." The Program Associate should have administrative ability and strong people skills, as well as a passion for building an engaged community around the Arendt Center. Responsibilities include assisting in planning and organizing the Arendt Center conferences and lectures; overseeing the Center's finances and budget; processing invoices, payments, and check requests; working to communicate with, engage, and grow the Arendt Center membership via Constant Contact; administering the search for and processing of Arendt Center Fellows and Visiting Scholars; overseeing the work of the Media Coordinator and interns; and using Facebook and Twitter where required. Conference organization skills include: arranging travel accommodations, online pre-registration, on-site registration, working with multiple departments at Bard to arrange all on-site logistics, and responsibility for ensuring that everything runs smoothly during the two-day event.
To apply, please send a cover letter, resume and the names of three references by email only to email@example.com . Bard College is an equal opportunity employer and we welcome applications from those who contribute to our diversity.
In the most recent NY Review of Books, David Cole wonders if we've reached the point of no return on the issue of privacy:
“Reviewing seven years of the NSA amassing comprehensive records on every American’s every phone call, the board identified only one case in which the program actually identified an unknown terrorist suspect. And that case involved not an act or even an attempted act of terrorism, but merely a young man who was trying to send money to Al-Shabaab, an organization in Somalia. If that’s all the NSA can show for a program that requires all of us to turn over to the government the records of our every phone call, is it really worth it?”
Cole is beyond convincing in listing the dangers to privacy in the new national security state. Like many others in the media, he speaks the language of necessary trade-offs involved in living in a dangerous world, but suggests we are trading away too much and getting back too little in return. He warns that if we are not careful, privacy will disappear. He is right.
What is often forgotten and is absent in Cole’s narrative is that most people—at least in practice—simply don’t care that much about privacy. Whether snoopers promise security or better-targeted advertisements, we are willing to open up our inner worlds for the price of convenience. If we are to save privacy, the first step is articulating what it is about privacy that makes it worth saving.
Cole simply assumes the value of privacy and doesn’t address the benefits of privacy until his final paragraph. When he does come to explaining why privacy is important, he invokes popular culture dystopias to suggest the horror of a world without privacy:
More broadly, all three branches of government—and the American public—need to take up the challenge of how to preserve privacy in the information age. George Orwell’s 1984, Ray Bradbury’s Fahrenheit 451, and Philip K. Dick’s The Minority Report all vividly portrayed worlds without privacy. They are not worlds in which any of us would want to live. The threat is no longer a matter of science fiction. It’s here. And as both reports eloquently attest, unless we adapt our laws to address the ever-advancing technology that increasingly consumes us, it will consume our privacy, too.
There are two problems with such fear mongering in defense of privacy. The first is that these dystopias seem too distant: most of us don't experience the violations of our privacy by the government or by Facebook as intrusions. The second is that, on a daily basis, the fact that my phone knows where I am, and that in a pinch the government could locate me, is pretty convenient. These dystopian visions can appear not so dystopian.
Most writing about privacy simply assumes that privacy is important. We are treated to myriad descriptions of the way privacy is violated. The intent is to shock us. But rarely are people shocked enough to actually respond in ways that protect the privacy they often say that they cherish. We have collectively come to see privacy as a romantic notion, a long-forgotten ideal, exotic and even titillating in its possibilities, but ultimately irrelevant in our lives.
There is, of course, a reason why so many advocates of privacy don’t articulate a meaningful defense of privacy: It is because to defend privacy means to defend a rich and varied sphere of difference and plurality, the right and importance of people actually holding opinions divergent from one’s own. In an age of political correctness and ideological conformism, privacy sounds good in principle but is less welcome in practice when those we disagree with assert privacy rights. Thus many who defend privacy do so only in the abstract.
When it comes to actually allowing individuals to raise their children according to their religious or racial beliefs or when the question is whether people can marry whomever they want, defenders of privacy often turn tail and insist that some opinions and some practices must be prohibited. Over and over today, advocates of privacy show that they value an orderly, safe, and respectful public realm and that they are willing to abandon privacy in the name of security and a broad conception of civility according to which no one should have to encounter opinions and acts that give them offense.
The only major thinker of the last 100 years who insisted fully and consistently on the crucial importance of a rich and vibrant private realm is Hannah Arendt. Privacy, Arendt argues, is essential because it is what allows individuals to emerge as unique persons in the world. The private realm is the realm of “exclusiveness,” it is that realm in which we “choose those with whom we wish to spend our lives, personal friends and those we love.” The private choices we make are guided by nothing objective or knowable, “but strikes, inexplicably and unerringly, at one person in his uniqueness, his unlikeness to all other people we know.” Privacy is controversial because the “rules of uniqueness and exclusiveness are, and always will be, in conflict with the standards of society.” Arendt’s defense of mixed marriages (and by extension gay marriages) proceeds—no less than her defense of the right of parents to educate their children in single-sex or segregated schools—from her conviction that the uniqueness and distinction of private lives need to be respected and protected.
Privacy, for Arendt, is connected to the "sanctity of the hearth" and thus to the idea of private property. Indeed, property itself is respected not on economic grounds, but because "without owning a house a man could not participate in the affairs of the world because he had no location in it which was properly his own." Property guarantees privacy because it enforces a boundary line, "a kind of no man's land between the private and the public, sheltering and protecting both." In private, behind the four walls of house and hearth, the "sacredness of the hidden" protects men from the conformist expectations of the social and political worlds.
In private, shaded from the conformity of societal opinions as well as from the demands of the public world, we can grow in our own way and develop our own idiosyncratic character. Because we are hidden, "man does not know where he comes from when he is born and where he goes when he dies." This essential darkness of privacy gives flight to our uniqueness, our freedom to be different. It is in privacy, in other words, that we become who we are. What this means is that without privacy there can be no meaningful difference. The political importance of privacy is that privacy is what guarantees difference and thus plurality in the public world.
Arendt develops her thinking on privacy most explicitly in her essays on education. Education must perform two seemingly contradictory functions. First, education leads a young person into the public world, introducing him and acclimating him to the traditions, public language, and common sense that precede him. Second, education must also guard the child against the world, caring for the child so that "nothing destructive may happen to him from the world." The child, to be protected against the destructive onslaught of the world, needs the privacy that has its "traditional place" in the family.
Because the child must be protected against the world, his traditional place is in the family, whose adult members return back from the outside world and withdraw into the security of private life within four walls. These four walls, within which people's private family life is lived, constitute a shield against the world and specifically against the public aspect of the world. This holds good not only for the life of childhood but for human life in general…Everything that lives, not vegetative life alone, emerges from darkness and, however strong its natural tendency to thrust itself into the light, it nevertheless needs the security of darkness to grow at all.
The public world is unforgiving. It can be cold and hard. All persons count equally in public, and little if any allowance is made for individual hardships or the bonds of friendship and love. Only in privacy, Arendt argues, can individuals emerge as unique individuals who can then leave the private realm to engage the political sphere as confident, self-thinking, and independent citizens.
The political import of Arendt's defense of privacy is that privacy is what allows for meaningful plurality and differences that prevent one mass movement, one idea, or one opinion from imposing itself throughout society. Just as Arendt valued the constitutional federalism in the American Constitution because it multiplied power sources through the many state and local governments in the United States, so too did she value privacy because it nurtures meaningfully different and even opposed opinions, customs, and faiths. She defends the regional differences in the United States as important and even necessary to preserve the constitutional structure of dispersed power that she saw as the great bulwark of freedom against the tyranny of the majority. In other words, Arendt saw privacy as the foundation not only of private eccentricity, but also of political freedom.
Cole offers a clear-sighted account of the ways that government is impinging on privacy. It is essential reading and it is your weekend read.
Peggy Noonan is worried about the decadence of elite American culture. While the folks over at DailyKos are foaming about the irony of Ronald Reagan’s speechwriter complaining about the excesses of the power elites, Noonan makes an important point about the corrosive effects that irony has on elites and on culture more generally.
The two targets of Noonan's scorn are a "Now This News" video compilation of real congressmen quoting their favorite lines from the Netflix series "House of Cards," and the recent publication of an excerpt from Kevin Roose's new book Young Money. "House of Cards" is about the scheming, power-hungry, and luxurious life of our political elite in Washington. Roose's excerpt provides audio, video, and a description of a recent Kappa Beta Phi meeting, in which Wall Street titans binge on alcohol and engage in skits and speeches making fun of anyone who would question their inalienable right to easy money at the expense of rubes in government and on Main Street.
Noonan’s response to these sets of recordings is bafflement and disappointment. Why is it, she asks, that elites would join in on the jokes made at their expense?
“I don’t understand why members of Congress, the White House and the media become cooperators in videos that sort of show that deep down they all see themselves as ... actors. And good ones! In a phony drama. Meant I suppose to fool the rubes. It’s all supposed to be amusing, supposed to show you’re an insider who sees right through this town.”
Why do elites join in the laughter of a popular TV serial that grills them and shows them to be callow, avaricious, and without public spirit? Why do they delight in demonstrating their ability to view their failings with irony?
““House of Cards” very famously does nothing to enhance Washington’s reputation. It reinforces the idea that the capital has no room for clean people. The earnest, the diligent, the idealistic, they have no place there. Why would powerful members of Congress align themselves with this message? Why do they become part of it? I guess they think they’re showing they’re in on the joke and hip to the culture. I guess they think they’re impressing people with their surprising groovelocity.”
Noonan is right to see this elite reaction of wanting to be in on the joke as meaningful and worrisome. She finds it decadent:
“They are America’s putative great business leaders. They are laughing, singing, drinking, posing in drag and acting out skits. The skits make fun of their greed and cynicism. In doing this they declare and make clear, just in case you had any doubts, that they are greedy and cynical. All of this is supposed to be merry, high-jinksy, unpretentious, wickedly self-spoofing. But it seems more self-exposing, doesn’t it? And all of it feels so decadent.”
It is insufficient, however, to watch the videos on both these sites and conclude only the obvious: that they offer damning evidence of corruption and decadence.
What is more important than the decadence on display is the self-satisfied irony. The elites in Washington and on Wall Street seem not to care about their decadence and even take joy in its revelation. It is as if a burden has been lifted: we in the outside world can now know what they have borne in secret. With the secret out, they can enjoy themselves without guilt.
This embrace of the revelation of decadence recalls the cultural milieu of Weimar Germany, and especially the reception of Bertolt Brecht's classic satire the "Threepenny Opera." Here is how Hannah Arendt describes the arrival and reception of Brecht's play:
“The play presented gangsters as respectable businessmen and respectable businessmen as gangsters. The irony was somewhat lost when respectable businessmen in the audience considered this a deep insight into the ways of the world and when the mob welcomed it as an artistic sanction of gangsterism. The theme song in the play, “Erst kommt das Fressen, dann kommt die Moral” [First comes the animal-like satisfaction of one’s hungers, then comes morality], was greeted with frantic applause by exactly everybody, though for different reasons. The mob applauded because it took the statement literally; the bourgeoisie applauded because it had been fooled by its own hypocrisy for so long that it had grown tired of the tension and found deep wisdom in the expression of the banality by which it lived; the elite applauded because the unveiling of hypocrisy was such superior, wonderful fun.”
Brecht hoped to shock not only with his portrayal of corruption and the breakdown of morality, but by his gleeful presentation of Weimar decadence; but the effect of “Threepenny Opera” was exactly the opposite, since all groups in society reacted to Brecht’s satire with joy instead of repulsion.
Arendt has little hope for the mob or the bourgeoisie, but she is clearly cut to the quick by the ease with which the elite felt “genuine delight” in watching the bourgeoisie and the mob “destroy respectability.” As Arendt explained, the “members of the elite did not object at all to paying a price, the destruction of civilization, for the fun of seeing how those who had been excluded unjustly in the past forced their way into it.” Because the elite had largely rejected their belief in the justice and meaningfulness of the moral and common values that had supported the edifice of civilization, they found more joy in the ironic skewering of those values than they felt fear at what the loss of common values might come to mean.
There is no greater thinker of decadence than Friedrich Nietzsche. This is how Nietzsche defines decadence in The Case of Wagner as a “question of style”:
“I dwell this time only on the question of style–What is the sign of every literary decadence? That life no longer dwells in the whole. Word becomes sovereign and leaps out of the sentence, the sentence reaches out and obscures the meaning of the page, the page gains life at the expense of the whole–the whole is no longer a whole. But this is the simile of every style of decadence: every time, the anarchy of atoms, the disgregation of the will, “freedom of the individual,” to use moral terms–expanded into a political theory, “equal rights for all.” Life, equal vitality, the vibration and exuberance of life pushed back into the smallest forms; the rest, poor in life. Everywhere paralysis, hardship, torpidity, or hostility, and chaos: both more and more obvious the higher one ascends in forms of organization. The whole no longer lives at all: it is composite, calculated, artificial, and artifact.”
As Andrew Huddleston has recently written, Nietzsche understands that “decadence is literally a kind of disorder – that is, a lack of cohesive order – within the individual or the culture.” It is a sickness by which individuals and groups think only of themselves and lose sight of their belonging to a common world or a meaningful order.
The disordering forces of decadence are not always disadvantageous. Throughout American history, centrifugal forces have allowed an understanding of power that permits different states and plural groups to pursue their own interests while nevertheless holding fast to the common idea of constitutional republican democracy and government by the people. What we see in the irony of the elites—let alone the decadence of the bourgeoisie and the power brokers—is the superior feeling of freedom that proceeds from the belief in the comic dissolution of the moral, political, and economic values that have for two centuries animated the American imagination of itself as an exceptional experiment in free and democratic self-government.
Noonan is right to call out this ironic pose of the elite. She is right to worry that “No one wants to be the earnest outsider now, no one wants to play the sober steward, no one wants to be the grind, the guy carrying around a cross of dignity. No one wants to be accused of being staid. No one wants to say, “This isn’t good for the country, and it isn’t good for our profession.”” Her essay is your weekend read. Don’t forget to watch the videos. See if you catch yourself smiling.
On October 27, 2013, Walter Russell Mead and Roger Berkowitz sat down with Jay Rosen and Megan Garber as part of the "Blogging and the New Public Intellectual" series. The series engages in ongoing discussion with the nation’s leading bloggers in politics, history, art, and culture.
Jay Rosen is a media critic, a writer, and a professor of journalism at New York University. You can visit his blog, "Pressthink" here. Megan Garber is a staff writer at The Atlantic. She was formerly an assistant editor at the Nieman Journalism Lab, where she wrote about innovations in the media. Read her work from The Atlantic here.
Roger Berkowitz started the evening by asking: Should journalists be objective or should they be political actors?
Jay Rosen answered: "Journalists have to do more than just flood us with facts." Rosen thinks of the journalist "as a heightened form of an informed citizen." The panel discussed the idea of the journalist versus the citizen and the myriad ways in which the two overlap, as well as the role the Internet plays in creating an informed public through the sharing of information.
Megan Garber added, "I'm not interested in getting my ideas out, I'm interested in exploring things publicly...There is value in convening people together to talk about one thing."
Watch the video of the discussion here.
The next event in the "Blogging and the New Public Intellectual" series will take place March 9 and features a discussion with Tom Goldstein, publisher of and regular contributor to SCOTUSblog.
Learn more about the event here and RSVP to firstname.lastname@example.org.
"Seen from the perspective of the "real" world, the laboratory is the anticipation of a changed environment."
-Hannah Arendt, The Life of the Mind
I find this quote intriguing in that its reference to environments and environmental change speaks to the fact that Arendt's philosophy was essentially an ecological one, indeed one that is profoundly media ecological. The quote appears in a section of The Life of the Mind entitled "Science and Common Sense," in which Arendt argues that the practice of science is quite distinct from thinking as a philosophical activity.
As she explains:
Thinking, no doubt, plays an enormous role in every scientific enterprise, but it is a role of a means to an end; the end is determined by a decision about what is worthwhile knowing, and this decision cannot be scientific.
Here Arendt invokes a variation on Gödel's incompleteness theorem in mathematics, noting that science cannot justify itself on scientific grounds, but rather must somehow depend on something outside of and beyond itself. Perhaps more to the point, science, especially as associated with empiricism, cannot be divorced from concrete reality, and does not function only in the abstract realm of ideas that Plato insisted was the only true reality.
The transformation of truth into mere verity results primarily from the fact that the scientist remains bound to the common sense by which we find our bearings in a world of appearances. Thinking withdraws radically and for its own sake from this world and its evidential nature, whereas science profits from a possible withdrawal for the sake of specific results.
It is certainly the case that scientific truth is always contingent, tentative, open to refutation, as Karl Popper explained. Scientific truth is never absolute, never anything more than a map of some other territory, a map that needs to be continually tested and reviewed, updated and revised, as Alfred Korzybski explained by way of establishing his discipline of general semantics. Even the so-called laws of nature and physics need not be considered immutable, but may be subject to change and evolution, as Lee Smolin argues in his insightful book, Time Reborn.
Scientists are engaged in the process of abstracting, insofar as they take the data gained by empirical investigation and make generalizations in the form of theories and hypotheses, but this process of induction cannot be divorced from concrete reality, from the world of appearances. Science may be used to test, challenge, and displace common sense, but it operates on the same level, as a distilled form of common sense, rather than something qualitatively different, a status Arendt reserves for the special activity of thinking associated with philosophy.
Arendt goes on to argue that both common sense and scientific speculation lack "the safeguards inherent in sheer thinking, namely thinking's critical capacity." This includes the capacity for moral judgment, which became horrifically evident by the ways in which Nazi Germany used science to justify its genocidal policies and actions. Auschwitz did not represent a retrieval of tribal violence, but one of the ultimate expressions of the scientific enterprise in action. And the same might be said of Hiroshima and Nagasaki, holding aside whatever might be said to justify the use of the atomic bomb to bring the Second World War to a speedy conclusion. In remaining close to the human lifeworld, science abandons the very capacity that makes us human, that makes human life and human consciousness unique.
The story of modern science is in fact a story of shifting alliances. Science begins as a branch of philosophy, as natural philosophy. Indeed, philosophy itself is generally understood to begin with the pre-Socratics sometimes referred to as Ionian physicists, i.e., Thales, Anaximander, Heraclitus, who first posited the concept of elements and atoms. Both science and philosophy therefore coalesce during the first century that followed the introduction of the Greek alphabet and the emergence of a literate culture in the ancient Greek colonies in Asia Minor.
And just as ancient science is alphabetic in its origins, modern science begins with typography, as the historian Elizabeth Eisenstein explains in her exhaustive study, The Printing Press as an Agent of Change in Early Modern Europe. Simply by making the writings of natural philosophers easily available through the distribution of printed books, scholars were able to compare and contrast what different philosophers had to say about the natural world, and uncover their differences of opinion and contradictions. And this in turn spurred them on to find out for themselves which of various competing explanations are correct, where the truth lies, so that more reading led to even more empirical research, which in turn would have to be published, that is made public, via printing, for the purposes of testing and confirmation. And publication encouraged the formation of a scientific republic of letters, a typographically mediated virtual community.
Eisenstein notes that during the first century following Gutenberg, printed books gave Copernicus access to centuries of recorded observations of the movements of celestial objects, access not easily available to his predecessors. What is remarkable to consider is that the telescope was not invented in his lifetime, that the Polish astronomer arrived at his heliocentric view based only on what could be observed by the naked eye, by gazing up at the heavens, and down at the printed page. The typographic revolution that began in the 15th century was the necessary technological precondition for the Copernican revolution of the 16th century. The telescope as a tool to extend vision beyond its natural capabilities had not yet been invented, and was not required, although soon after its introduction Galileo was able to confirm the theory that Copernicus had put forth a century earlier.
In the restricted literate culture of medieval Europe, the idea took hold that there are two books to be studied in an effort to discern the divine will and mind: the book of scripture and the book of nature. Both books were seen as sources of knowledge that can be unlocked by a process of reading and interpretation. As Marshall McLuhan noted in The Classical Trivium, it was grammar, the ancient study of language and one third of the trivium (the foundational curriculum of the medieval university), that became the basis of modern science, and not dialectic or logic, that is, pure thinking, which is the source of the philosophic tradition. The medieval schoolmen of course placed scripture in the primary position, whereas modern science situates truth in the book of nature alone.
The publication of Francis Bacon's Novum Organum in 1620 first formalized the separation of science from philosophy within print culture, but the divorce was finalized during the 19th century, coinciding with the industrial revolution, as researchers became known as scientists rather than natural philosophers. In place of the alliance with philosophy, science came to be associated with technology; before this time, technology and engineering, often referred to as mechanics, represented entirely different lines of inquiry, utterly practical, often intuitive rather than systematic. Mechanics was part of the world of work rather than that of action, to use the terms Arendt introduced in The Human Condition, which is to say that it was seen as the work of the hand rather than the mind. By the end of the 19th century, scientific discovery emerged as the main source of major technological breakthroughs, rather than innovation springing fully formed from the tinkering of inventors, and it became necessary to distinguish between applied science and theoretical science, the latter nonetheless still tied to the world of appearances.
Today, the acronym STEM, which stands for science, technology, engineering, and mathematics, has become a major buzzword in education, a major emphasis in particular for higher education, and a major concern in regards to economic competitiveness. We might well take note of how recent this combination of fields and disciplines really is, insofar as mathematics represents pure logic and highly abstract forms of thought, and science once was a purely philosophical enterprise, both aspects of the life of the mind. Technology and engineering, on the other hand, for most of our history took the form of arts and crafts, part of the world of appearances.
The convergence of science and technology also had much to do with scientists' increasing reliance on scientific instruments for their investigations, a trend increasingly prevalent following the introduction of both the telescope and the microscope in the early 17th century, a trend even more apparent from the 19th century on. The laboratory is in fact another such instrument, a technology whose function is to provide precisely controlled conditions, beyond its role as a facility for the storage and use of other scientific instruments. Scientific instruments are media that extend our senses and allow us to see the world in new ways, therefore altering our experience of our environment, while the discoveries they lead to provide us with the means of altering our environments physically. And the laboratory is an instrument that provides us with a total environment, enclosed, controlled, isolated from the world to become in effect the world. It is a micro-environment where experimental changes can be made that anticipate changes that can be made to the macro-environment we regularly inhabit.
The split between science and philosophy can also be characterized as a division between the eye and the ear. Modern science, as intimately bound up in typography, is associated with visualism, the idea that seeing is believing, that truth is based on vision, that knowledge can be displayed visually as an organized set of facts, rather than the product of ongoing dialogue and debate. McLuhan noted the importance of the fixed point of view as a by-product of training the eye to read, and Walter Ong studied the paradigm shift in education attributed to Peter Ramus, who introduced pedagogical methods we would today associate with textbooks, outlining, and the visual display of information. Philosophy has not been immune to this influence, but retains a connection to the oral-aural mode through the method of Socratic dialogue, and by way of an understanding of the history of ideas as an ongoing conversation. Arendt, in The Human Condition, explained action, the realm of words, as a social phenomenon, one based on dialogic exchanges of ideas and opinions, not a solitary matter of looking things up. And thinking, which she elevates above the scientific enterprise in The Life of the Mind, is mostly a matter of an inner dialogue, or monologue if you prefer, of hearing oneself think, of silent speech, and not of a mental form of writing out words or imaginary reading. We talk things out, to others and/or to ourselves.
Science, on the other hand, is all about visible representations, as words, numbers, illustrations, tables, graphs, charts, diagrams, etc. And it is the investigation of visible phenomena, or otherwise of phenomena that can be rendered visible through scientific instruments. Acoustic phenomena can only be dealt with scientifically by being turned into a visual measurement, either of numbers or of lines going up and down to depict sound waves. The same is true for the other senses; smell, taste, and touch can only be dealt with scientifically through visual representation. Science cannot deal with any sense other than sight on its own terms, but always requires an act of translation into visual form. Thus, Arendt notes that modern science, being so intimately bound up in the world of appearances, is often concerned with making the invisible visible:
That modern science, always hunting for manifestations of the invisible—atoms, molecules, particles, cells, genes—should have added to the world a spectacular, unprecedented quantity of new perceptible things is only seemingly paradoxical.
Arendt might well have noted the continuity between the modern activity of making the invisible visible as an act of translation, and the medieval alchemist's search for methods of achieving material transformation, the translation of one substance into another. She does note that the use of scientific instruments is a means of extending natural functions, paralleling McLuhan's characterization of media as extensions of body and biology:
In order to prove or disprove its hypotheses… and to discover what makes things work, it [modern science] began to imitate the working processes of nature. For that purpose it produced the countless and enormously complex implements with which to force the non-appearing to appear (if only as an instrument-reading in the laboratory), as that was the sole means the scientist had to persuade himself of its reality. Modern technology was born in the laboratory, but this was not because scientists wanted to produce appliances or change the world. No matter how far their theories leave common-sense experience and common-sense reasoning behind, they must finally come back to some form of it or lose all sense of realness in the object of their investigation.
Note here that our conception of reality, and what lends something the aura of authenticity, as Walter Benjamin would put it, is dependent on the visual sense, on the phenomenon being translated into the world of appearances (the aura as opposed to the aural). It is no accident, then, that there is a close connection in biblical literature and the Hebrew language between the words for spirit and soul and the words for invisible but audible phenomena such as wind and breath, breath in turn being the basis of speech (and this is not unique to Hebraic culture or vocabulary). It is at this point that Arendt resumes her commentary on the function of the controlled environment:
And this return is possible only via the man-made, artificial world of the laboratory, where that which does not appear of its own accord is forced to appear and to disclose itself. Technology, the "plumber's" work held in some contempt by the scientist, who sees practical applicability as a mere by-product of his own efforts, introduces scientific findings, made in "unparalleled insulation… from the demands of the laity and of everyday life," into the everyday world of appearances and renders them accessible to common-sense experience; but this is possible only because the scientists themselves are ultimately dependent on that experience.
We now reach the point in the text where the quote I began this essay with appears, as Arendt writes:
Seen from the perspective of the "real" world, the laboratory is the anticipation of a changed environment; and the cognitive processes using the human abilities of thinking and fabricating as means to their end are indeed the most refined modes of common-sense reasoning. The activity of knowing is no less related to our sense of reality and no less a world-building activity than the building of houses.
Again, for Arendt, science and common sense both are distinct in this way from the activity of pure thinking, which can provide a sorely needed critical function. But her insight as to the function of the laboratory as an environment in which the invisible is made visible is important in that this helps us to understand that the laboratory is, in fact, what McLuhan referred to as a counter-environment or anti-environment.
In our everyday environment, the environment itself tends to be invisible, if not literally so, then functionally insofar as whatever fades into the background tends to fall out of our perceptual awareness or is otherwise ignored. Anything that becomes part of our routine falls into this category, becoming environmental, and therefore subliminal. And this includes our media, technology, and symbol systems, insofar as they are part of our everyday world. We do pay attention to them when they are brand new and unfamiliar, but once their novelty wears off they become part of the background, unless they malfunction or break down. In the absence of such conditions, we need an anti-environment to provide a contrast through which we can recognize the things we take for granted in our world, to provide a place to stand from which we can observe our situation from the outside in, from a relatively objective stance. We are, in effect, sleepwalkers in our everyday environment, and entering into an anti-environment is a way to wake us up, to enhance awareness and consciousness of our surroundings. This occurs, in a haphazard way, when we return home after spending time experiencing another culture, as for a brief time much of what was once routinized about our own culture suddenly seems strange and arbitrary to us. The effect wears off relatively quickly, however, although the after-effects of broadening our minds in this way can be significant.
The controlled environment of the laboratory helps to focus our attention on phenomena that are otherwise invisible to us, either because they are taken for granted, or because they require specialized instrumentation to be rendered visible. It is not just that such phenomena are brought into the world of appearances, however, but also that they are made into objects of concerted study, to be recorded, described, measured, experimented upon, etc.
McLuhan emphasized the role of art as an anti-environment. The art museum, for example, is a controlled environment, and the painting that we encounter there has the potential to make us see things we had never seen before, by which I mean not just objects depicted that are unfamiliar to us, but familiar objects depicted in unfamiliar ways. In this way, works of art are instruments that can help us to see the world, to use our senses and perceive, in new and different ways. McLuhan believed that artists served as a kind of distant early warning system, borrowing cold war terminology to refer to their ability to anticipate changes occurring in the present that most others are not aware of. He was fond of the Ezra Pound quote that the artist is the antenna of the race, and Kurt Vonnegut expressed a similar sentiment in describing the writer as a canary in a coal mine. We may further consider the art museum or gallery or library as a controlled environment, a laboratory of sorts, and note the parallel in the idea of art as the anticipation of a changed environment.
There are other anti-environments as well. Houses of worship function in this way, often because they are based on earlier eras and different cultures, and otherwise are constructed to remove us from our everyday environment, and help us to see the world in a different light. They are in some way dedicated to making the invisible world of the spirit visible to us through the use of sacred symbols and objects, even for religions whose concept of God is one that is entirely outside of the world of appearances. Sanctuaries might therefore be considered laboratories used for moral, ethical, and sacred discovery, experimentation, and development, and places where changed environments are also anticipated, in the form of spiritual enlightenment and the pursuit of social justice. This also suggests that the scientific laboratory might be viewed, in a certain sense, as a sacred space, along the lines that Mircea Eliade discusses in The Sacred and the Profane.
The school and the classroom are also anti-environments, or at least ought to be, as Neil Postman argued in Teaching as a Conserving Activity. Students are sequestered away from the everyday environment, into a controlled situation where the world they live in can be studied and understood, and phenomena that are taken for granted can be brought into conscious awareness. It is indeed a place where the invisible can be made visible. In this sense, the school and the classroom are laboratories for learning, although the metaphor can be problematic when it is used to imply that the school is only about the world of appearances, and all that is needed is to let students discover that world for themselves. Exploration is indeed essential, and discovery is an important component of learning. But the school is also a place where we may engage in the critical activity of pure thinking, of critical reasoning, of dialogue and disputation.
The classroom is more than a laboratory, or at least it must become more than a laboratory, or the educational enterprise will be incomplete. The school ought to be an anti-environment, not only in regard to the everyday world of appearances and common sense, but also to that special world dominated by STEM, by science, technology, engineering and math. We need the classroom to be an anti-environment for a world subject to a flood of entertainment and information, we need it to be a language-based anti-environment for a world increasingly overwhelmed by images and numbers. We need an anti-environment where words can take precedence, where reading and writing can be balanced by speech and conversation, where reason, thinking, and thinking about thinking can allow for critical evaluation of common sense and common science alike. Only then can schools be engaged in something more than just adjusting students to take their place in a changed and changing environment, integrating them within the technological system, as components of that system, as Jacques Ellul observed in The Technological Society. Only then can schools help students to change the environment itself, not just through scientific and technological innovation, but through the exercise of values other than the technological imperative of efficiency, to make things better, more human, more life-affirming.
The anti-environment that we so desperately need is what Hannah Arendt might well have called a laboratory of the mind.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Drones are simply one weapon in a large arsenal with which we fight the war on terror. Even targeted killings, the signature drone capability, are nothing new. The U.S. and other countries have targeted and killed individual leaders for decades if not centuries, using snipers, poisons, bombs, and many other technologies. To take a historical perspective, drones don't change much. Nor is the airborne capacity of drones to deliver devastation from afar anything new, having as its predecessors the catapult, the longbow, the bomber, and the cruise missile. And yet, there is seemingly something new about the way drones change the feel and reality of warfare. On one side, drones sanitize the battlefield from a space of blood, fear, and heroic fortitude into a video game played on consoles. On the other side, drones dominate life, creating a low-pitched humming sound that reminds inhabitants that at any moment a missile might pierce their daily routines. The two sides of this phenomenology of drones are the topic of an essay by Nasser Hussain in The Boston Review: "In order to widen our vision, I provide a phenomenology of drone strikes, examining both how the world appears through the lens of a drone camera and the experience of the people on the ground. What is it like to watch a drone's footage, or to wait below for it to strike? What does the drone's camera capture, and what does it occlude?" You can also read Roger Berkowitz's weekend read on seeing through drones.
Marilynne Robinson, speaking to the American Conservative about her faith, elaborates on what she sees as the central flaws in contemporary American Christianity: "Something I find regrettable in contemporary Christianity is the degree to which it has abandoned its own heritage, in thought and art and literature. It was at the center of learning in the West for centuries—because it deserved to be. Now there seems to be actual hostility on the part of many Christians to what, historically, was called Christian thought, as if the whole point were to get a few things right and then stand pat. I believe very strongly that this world, these billions of companions on earth that we know are God’s images, are to be loved, not only in their sins, but especially in all that is wonderful about them. And as God is God of the living, that means we ought to be open to the wonderful in all generations. These are my reasons for writing about Christian figures of the past. At present there is much praying on street corners. There are many loud declarations of personal piety, which my reading of the Gospels forbids me to take at face value. The media are drawn by noise, so it is difficult to get a sense of the actual state of things in American religious culture."
Is poetry going the way of the Dodo bird? Vanessa Place makes this argument in a recent essay, "Poetry is Dead. I Killed It," on the Poetry Foundation website. And Kenneth Goldsmith, in the New Yorker, asks whether Place is right. The internet, he suggests, has killed poetry, or at least so transformed it that it may be unrecognizable. "Quality is beside the point—this type of content is about the quantity of language that surrounds us, and about how difficult it is to render meaning from such excesses. In the past decade, writers have been culling the Internet for material, making books that are more focussed [sic] on collecting than on reading. These ways of writing—word processing, databasing, recycling, appropriating, intentionally plagiarizing, identity ciphering, and intensive programming, to name just a few—have traditionally been considered outside the scope of literary practice."
In a rare interview, famously reclusive Calvin and Hobbes cartoonist Bill Watterson prognosticates on the future of the comics: "Personally, I like paper and ink better than glowing pixels, but to each his own. Obviously the role of comics is changing very fast. On the one hand, I don’t think comics have ever been more widely accepted or taken as seriously as they are now. On the other hand, the mass media is disintegrating, and audiences are atomizing. I suspect comics will have less widespread cultural impact and make a lot less money. I’m old enough to find all this unsettling, but the world moves on. All the new media will inevitably change the look, function, and maybe even the purpose of comics, but comics are vibrant and versatile, so I think they’ll continue to find relevance one way or another. But they definitely won’t be the same as what I grew up with."
Cambodian director Rithy Panh's new movie, The Missing Picture is about the rule of the Khmer Rouge in Cambodia. In making the film, he had to confront the challenge of making a movie about atrocities that are famously without explicit visual records, and he hit upon a unique solution: clay dolls. Although these figures "are necessarily silent, immobile, and therefore devoid of the intensity of those moments in other Panh films where his camera bores in on the face of a witness and lingers there as he remembers what happened, or what he did," Richard Bernstein suggests that they give the movie a unique power.
This week on the blog, Ian Storey revisits George Orwell's prescient essay, "Politics and the English Language." Jeffrey Champlin looks at James Muldoon's essay about Arendt's writings on the advocacy of council systems in On Revolution. And your weekend read looks at the cultural impact of drones on the nations and groups that are employing them.
The response has been swift and negative to the Rolling Stone Magazine cover—a picture of Dzhokhar Tsarnaev, who, with his now dead brother, planted deadly homemade bombs near the finish line of the Boston Marathon. The cover features a picture Tsarnaev himself posted on his Facebook page before the bombing. It shows him as he wanted himself to be seen—that itself has offended many, who ask why he is not pictured as a suspect or convict. In the photo he is young, hip, handsome, and cool. He could be a rock star, and given the context of the Rolling Stone cover, that is how he appears.
The cover is jarring, and that is intended. It is controversial, and that was probably also intended. Hundreds of thousands of comments on Facebook and around the web are critical and angry, asking how Rolling Stone could portray the bomber as a rock-star. They overlook or ignore the text accompanying the photo on the cover, which reads: “The Bomber. How a Popular, Promising Student Was Failed by His Family, Fell Into Radical Islam, and Became a Monster.” CVS and other retailers have announced they will not sell the magazine in their stores.
That is unfortunate, for the story written by Janet Reitman is exceptionally good and deserves to be read.
Controversies like this have a perverse effect. Just as the furor over Hannah Arendt’s Eichmann in Jerusalem resulted in the viral dissemination of her claims about the Jewish leaders, so too will this Rolling Stone cover be seen by millions of people who otherwise would never have heard of Rolling Stone. What is more, such publicity makes it ever less likely that the story itself will be read seriously, just as Arendt’s book was criticized by everyone, but read by few.
Reitman’s narrative itself is unexceptional. It is a common story line: young, normal kid becomes radicalized and does something none of his old friends can believe he could do. This is a now familiar narrative that we hear in the wake of the tragedies in Newtown (Adam Lanza was described as a nice quiet kid) and Columbine (Time’s cover announced “The Monsters Next Door.”)
This is also the narrative that Rolling Stone managing editor Will Dana embraced to defend the cover on NPR, arguing it was an "apt image because part of what the story is about is what an incredibly normal kid [Tsarnaev] seemed like to those who knew him best back in Cambridge." It was echoed too by Erin Burnett, on CNN, who recently invoked Hannah Arendt's idea of the "banality of evil." In the easy frame the story offers, Tsarnaev was a good kid, part of a striving immigrant family, someone who loved multi-racial America. And then something went wrong. He found Islam; his family fell apart; and he became a monster.
This story is too simple. And yet within the Rolling Stone story, there is a wealth of information and reporting that does give a nuanced and thoughtful portrayal of Tsarnaev’s journey into the heart of evil.
One fact that is important to note is that Tsarnaev is not Eichmann. Eichmann was a member of the SS, a National Socialist security service engaged in world war and dedicated to wiping certain races of peoples off the face of the earth. He committed genocide as part of a system of extermination, something both worse than and yet less messy than murder itself. It is Tsarnaev, who had no state apparatus behind him, who became a cold-blooded murderer. The problems that Hannah Arendt thought that the court in Jerusalem faced with Eichmann—that he was a new type of criminal—do not apply in Tsarnaev's case. He is a murderer. To understand him is not to understand a new type of criminal. And yet it is a worthy endeavor to try to understand why more and more young men like Tsarnaev are so easily radicalized and drawn to murdering innocent people in the name of a cause.
Both Eichmann and Tsarnaev were from upwardly striving bourgeois families that struggled with economic setbacks. Eichmann was white and Austrian, Tsarnaev an immigrant in Cambridge, but both were economically disaffected. Tsarnaev wanted to make money and, like his parents, dreamed of a better life.
Tsarnaev’s family had difficulty fitting in with U.S. culture. His father was ill and could not work. His mother sought to earn money. And his older brother, whom he idolized, saw his dreams of Olympic boxing dashed partly because he was not a citizen. He increasingly turned to a radical version of Islam. When Tsarnaev’s parents both returned to Dagestan, he fell increasingly under the influence of his older brother.
Like Eichmann, Tsarnaev appears to have adopted an ideology that provided a coherent and meaningful narrative that gave his life significance. One can see this in a number of tweets and statements that are quoted in the article. For example, just before the bombing, he tweeted:
"Evil triumphs when good men do nothing."
"If you have the knowledge and the inspiration all that's left is to take action."
"Most of you are conditioned by the media."
Like Eichmann, Tsarnaev came to see himself as a hero, someone willing to suffer and even die for a noble cause. His cause was different—anti-American jihad instead of anti-Semitic Nazism—but he was an ideological idealist, a joiner, someone who found meaning and importance in belonging to a movement. A smart and talented and by most accounts good young man, he was lost and adrift, searching for someone and something to give his life purpose. He found that someone in his brother and that something in jihad against America, the land that previously he had so embraced. And he became someone who believed that what he was doing was right and necessary, even if he understood also that it was wrong.
We see clearly this ambivalent understanding of right and wrong in the note Tsarnaev apparently scrawled while he was hiding in a boat before he was captured. Here is how Reitman’s article describes what he wrote:
When investigators finally gained access to the boat, they discovered a jihadist screed scrawled on its walls. In it, according to a 30-count indictment handed down in late June, Jihad [Tsarnaev's nickname] appeared to take responsibility for the bombing, though he admitted he did not like killing innocent people. But "the U.S. government is killing our innocent civilians," he wrote, presumably referring to Muslims in Iraq and Afghanistan. "I can't stand to see such evil go unpunished. . . . We Muslims are one body, you hurt one, you hurt us all," he continued, echoing a sentiment that is cited so frequently by Islamic militants that it has become almost cliché. Then he veered slightly from the standard script, writing a statement that left no doubt as to his loyalties: "Fuck America."
Eichmann too spoke of his shock and disapproval of killing innocent Jews, but he justified doing so for the higher Nazi cause. He also said that when he found out about the sufferings of Germans at the hands of the allies, it made it easier for him to justify what he had done, because he saw it as equivalent. The fact that the Germans were aggressors, that they had started the war, and that they were killing and torturing innocent people simply did not register for Eichmann, just as it did not register for Tsarnaev that the people in the Boston marathon were innocent. There are, of course, innocent people in Iraq and Afghanistan who have died at the hands of U.S. bombs. Even for those of us who were against the wars and question their sense and justification, however, there is a difference between death in a war zone and terrorism.
The Rolling Stone article does a good job of chronicling Tsarnaev's slide into a radical jihadist ideology, one mixed with conspiracy theories.
The Prophet Muhammad, he noted on Twitter, was now his role model. "For me to know that I am FREE from HYPOCRISY is more dear to me than the weight of the ENTIRE world in GOLD," he posted, quoting an early Islamic scholar. He began following Islamic Twitter accounts. "Never underestimate the rebel with a cause," he declared.
His rebellious cause was to awaken Americans to their complicity both in the bombing of innocent Muslims and also to his belief in the common conspiracy theory that America was behind the 9/11 attacks. In one Tweet he wrote: "Idk [I don’t know] why it's hard for many of you to accept that 9/11 was an inside job, I mean I guess fuck the facts y'all are some real #patriots #gethip."
Besides these tweets that offer a provocative insight into Tsarnaev's emergent ideological convictions, the real virtue of the article is its focus on Tsarnaev's friends, his school, and his place in American youth culture. While his friends certainly do not support or condone what Tsarnaev did, many share some of his conspiratorial and anti-American beliefs. Here are two descriptions of the mainstream nature of many of his beliefs:
To be fair, Will and others note, Jahar's perspective on U.S. foreign policy wasn't all that dissimilar from a lot of other people they knew. "In terms of politics, I'd say he's just as anti-American as the next guy in Cambridge," says Theo.
This is not an uncommon belief. Payack, who [was Tsarnaev's wrestling coach and mentor and] also teaches writing at the Berklee College of Music, says that a fair amount of his students, notably those born in other countries, believe 9/11 was an "inside job." Aaronson tells me he's shocked by the number of kids he knows who believe the Jews were behind 9/11. "The problem with this demographic is that they do not know the basic narratives of their histories – or really any narratives," he says. "They're blazed on pot and searching the Internet for any 'factoids' that they believe fit their highly de-historicized and decontextualized ideologies. And the adult world totally misunderstands them and dismisses them – and does so at our collective peril," he adds.
The article presents a sad portrait of youth culture, and not just because all these "normal" kids are smoking "a copious amount of weed." The jarring realization is that these talented and intelligent young people at a good school in a storied neighborhood come off so disaffected. What is more, their beliefs in conspiracies are accepted by the adults in their lives as commonplaces; their anti-Americanism is simply a noted fact; and their idolization of slacking (Tsarnaev's favorite word, his friends say, was "sherm," Cambridge slang for "slacker") is seen as cute. There is painfully little concern by adults to insist that the young people face facts and confront unserious opinions.
In short, the young people in Tsarnaev's story appear to be abandoned by adults to their own youthful and quite fanciful views of reality. Youth culture dominates, and adult supervision seems absent. There is seemingly no one who, in Arendt’s language from “The Crisis in Education”, takes responsibility for teaching them to love the world as it is.
The Rolling Stone article and cover do not glorify a monster; but they do play on two dangerous trends in modern culture that Hannah Arendt worried about in her writing: First, the rise of youth culture and the abandonment of adult authority in education; and second, the fascination bourgeois culture has for vice and the short distance that separates an acceptance of vice from an acceptance of monstrosity. If only all the people who are so concerned about a magazine cover today were more concerned about the delusions and fantasies of Tsarnaev, his friends, and others like them.
Taking responsibility for teaching young people to love the world is the very essence of what Arendt understands education to be. It will be the topic of the Hannah Arendt Center upcoming conference “Failing Fast: The Crisis of the Educated Citizen.” Registration for the conference opened this week. For now, ignore the controversy and read Reitman’s article “Jahar’s World.” It is your weekend read. It is as good an argument for thinking seriously about the failure of our approach to education as one can find.
Thomas Levin of Princeton came to Bard Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years, and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when the obsession with drones in the media is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors with newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization emerges that we are being watched, and this sense pervades the popular consciousness. In looking to these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls a "rhetoric of surveillance."
In short, we talk and think constantly about the fact that we are or may be being watched. This cannot but change the way we behave and act. Levin then poses this question: what is the emerging drone imaginary?
To answer that question it is helpful to revisit an uncannily prescient imagination of the rise of drones in a text written over half a century ago, Ernst Jünger's The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by science fiction novelist Bruce Sterling, Jünger's text centers around a job interview between an unnamed former light cavalry officer and Giacomo Zapparoni, the secretive, filthy rich, and powerful proprietor of The Zapparoni Works, which "manufactured robots for every imaginable purpose." Zapparoni's secret, however, is that instead of big and hulking robots, he specialized in Lilliputian robots that gave "the impression of intelligent ants."
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—that exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination with, and subjection to, technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage by drones and forget that if we had to send in armies to do these tasks the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don’t offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, we will all think twice before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we humans interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestioning love that humans are beginning to prefer spending time with their machines rather than with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with the speed and objectivity that humans can only dream of. In every area of human life where humans once were thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper and turn off YouTube and, instead, revisit Ernst Jünger’s classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling’s introduction to The Glass Bees here.
We commonly assume that political acts and claims are shaped by some form of reasoning. How then do we respond to political stands in which arguments are piled atop arguments in contradictory ways, and where the force of the various arguments is less important than victory? We see in political discourse a definite willingness to embrace any argument that helps one win, whether or not it makes sense.
One example of our cynical embrace of bad arguments is the recent controversy over the East Side Gallery in Berlin. The Gallery is comprised of a series of murals that, over the course of the past two decades, an international cast of artists has painted and re-painted on an approximately one-mile stretch of the Berlin Wall. Indeed, the East Side Gallery occupies the longest existing remnant of the Wall, and it has become a significant landmark not only for those visitors who seek to experience something of the city’s Cold War past, but also for those long-time residents who regard it as an embodiment of the city’s contemporary feel and texture.
The tumult of the past few weeks erupted over the plans of a developer, Maik Uwe Hinkel, to construct luxury apartments and an office complex in the former border zone—now a modest green space—that lies between the East Side Gallery and the Spree River. According to the agreements reached by Hinkel and the local government, these new buildings would entail the creation of an access road and pedestrian bridge to allow passage to pedestrians, bicyclists, and emergency vehicles. The road and bridge, in turn, would require the removal of two stretches of the East Side Gallery and their replacement in the adjacent green space. Local planners had first approved the construction and the alteration to the East Side Gallery back in 2005, and since that time Hinkel’s plans had aroused little concerted opposition.
When workers lifted out one concrete slab from the Gallery on Friday, March 2nd, however, hundreds of demonstrators flocked to the site to prevent any further removals. A group of activists hastily organized a larger demonstration that same weekend, one that ultimately drew a raucous crowd of more than six thousand people. In the face of these surprising protests, Berlin Mayor Klaus Wowereit declared that all further work on the site would be postponed until at least March 18th, when a meeting of the major players would decide its fate. Since then, the developer and the relevant local officials have all declared their eagerness to find a solution that preserves the East Side Gallery in its current state. Even the slab removed earlier this month seems destined to return to its former location.
Yet the apparent success of the protest threatens to overshadow the problematic aspects of the demonstrators’ arguments. On the one hand, many of the organizers and protesters regarded their opposition as a small but significant rejoinder to the insistent tide of commercial development in post-Wall Berlin. To adopt the terms of Sharon Zukin’s recent book Naked City, they saw the East Side Gallery as an embodiment of the city’s distinctive authenticity and rootedness, which they argued should be protected from the homogenizing onslaught of upscale growth and gentrification. To wit, one of the coalitions that spearheaded the protest calls itself “Sink the Media Spree” (Mediaspree Versenken), a name that invokes developers’ recent efforts to transform the area along the river into a headquarters for high-tech communications and media. Its webpage declares that this portion of Berlin should preserve “the neighborhood” as it currently exists and not fall victim to “profit mania” (Kiez statt Profitwahn).
But the East Side Gallery cannot be cast so readily as an incarnation of local authenticity, especially the kind that stands opposed to commerce. First of all, many government actors and city residents were far more eager to see the Wall dismantled in the months and years after November 1989 than to see it preserved, and they condoned if not actively contributed to its wholesale removal. As a result, the survival of the East Side Gallery represents the exception, not the rule, in the city’s engagement with the Wall as a material structure. Second, artists from around the world initially established the East Side Gallery as a celebration of artistic and political liberty, but their murals received support from the local and national governments because they helped to draw tourists to Berlin and added to the city’s cachet as a cultural destination. In the light of this state patronage, I find it rather curious to hear activists pitching the East Side Gallery against the forces of capital and development.
On the other hand, many demonstrators contended that the alteration of the East Side Gallery would amount to an intolerable attack on the city’s historical inheritance. One variation of this position is that the removal of the two sections constitutes a dilution if not erasure of Germany’s traumatic past. According to this argument, the East Side Gallery should be left intact so that residents and visitors can confront the traces of the country’s division. Another, more strident variation insists that the construction plans display a callous disregard for those who suffered under the East German regime and, more specifically, lost their lives while attempting to escape it. In the words of one activist in Der Tagesspiegel: “the most important point is not whether the Wall will be opened. We are against the combination of removing the Wall and building hotels and apartments in death strips.”
Again, the East Side Gallery’s connection with Germany’s fraught past is not nearly as straightforward as the activists and demonstrators have suggested. As Brian Ladd details in his book The Ghosts of Berlin, the murals of the East Side Gallery were not painted until the early 1990s, after the Wall had fallen and East Germany had ceased to exist. In fact, this portion of the Wall could not have been painted before 1989, because it stood in East Berlin, and anyone who attempted to leave a mark on it, or even lingered near it, would have been apprehended by East German police officers or border soldiers. Of course, amateur and professional artists did draw and paint some striking imagery on the Berlin Wall during the Cold War, but they created it on the Wall’s “outer” surface while standing in West Berlin, where they had much less to fear from East German border personnel. The muralists who launched and maintained the East Side Gallery certainly meant to evoke and further this tradition of “Wall art,” but in the process they abstracted it from a prior historical era and relocated it in another part of the city.
I note these objections not because I support the proposed construction or the alteration of the East Side Gallery. In particular, I am not at all convinced that the partial removal of the Wall is really necessary, whether or not Hinkel and the city go ahead with the area’s development. But I am troubled by the protesters’ reluctance to take the ironies and complexities of the current circumstances more fully into account. They are too eager to cast the developer and local officials as the villains in this story, particularly when the city and the federal government have in fact created a substantial memorial landscape related to the Wall. And they are too quick to position themselves on the moral high ground. Given the Wall’s disappearance from virtually every other part of the city, their demands for preserving the East Side Gallery seem more than a little belated.
“All thought arises out of experience, but no thought yields any meaning or even coherence without undergoing the operations of imagining and thinking.”
- Hannah Arendt, Thinking
In the wake of an extraordinarily brutal punctuation to an extraordinarily brutal year of gun violence in the United States and across the continent, the eye of American politics has finally turned back toward something it perhaps ought never have left, the problem in this country of the private ownership of the means to commit extraordinary brutality.
Perhaps unsurprisingly, public discourse around the problem has descended nearly instantaneously from fractiousness into what could now only generously be termed playground name-calling (to spend millions of dollars to publicly call one’s opponent an “elitist hypocrite” should feel extraordinary, even if it doesn’t). There are many tempting culprits to blame for this fall. The actors, of course, include some powerful players whose opposing ideologies so deeply inflect their understanding of the situation that it is entirely uncertain whether they are in fact seeing the same world, let alone the same problem within it. There is the stage on which the actors play, a largely national media structure whose voracious demands can be fed most easily, if not most effectively, by those who seek the currency of political power in hyperbole and absoluteness of conviction. Finally, there is the problem of constructing the problem itself: is it clear that private ownership of the means of extraordinary violence is so distinct a problem from that of its public ownership and (borderless) use? Can the line of acceptability between means of extraordinary brutality really be settled by types of implements, let alone the number of bullets in a magazine? What are the connections and disconnections between the events – Oak Creek, Chicago, Newtown,… – that have summoned the problem back onto our collective stage, and why had the problem disappeared in the first place when the violence so demonstrably had not? There is something in all of these instincts, but before we rush to decry our national theater (more Mamet than melodrama), it’s worth remembering that the problem is an extraordinary one, and that many of the pathologies of our various reactions to it spring from the same seed as our best resources: the nature of thinking itself.
The rhetoric used in describing the problem of gun violence – formulated so readily and so intractably – coupled with the unavoidable connection of the problem with intense emotion makes it tempting to suspect one’s political opponents in this arena of ceasing to think altogether. I will admit to sometimes being convinced that there was no thought at all behind some of the words being splayed across television screens and RSS feeds (not, it should be said, entirely without reason). Arendt, in Thinking, describes thinking and feeling as inherently mutually antagonistic, and whether or not that is true it certainly seems that the tenor and pitch of the vitriol make thinking, let alone conversing, difficult. But that may point to a reality still more sobering than the perennially (and maybe banally) true observation that a great deal of what passes for public discourse did not require serious thought in its formulation: that when we deal with certain kinds of events, and try to engage in the process of translating them and reconstructing them into the form of a problem, we are running up against dimensions of the human experience so extraordinary that they shove us flatly against the limits of what we are able to do in thought. Perhaps the struggle now is less against a chronic inability to think, and more with recognizing the ways in which the limits of how we can feel and see and know – and then think – have created limits not just to how we can understand the problem, but to how we can understand each other’s responses to it.
One permanent refrain in this debate is the culpability of violent media in generating cultures in which, it is said, such extraordinary brutality becomes possible (ignoring, it might be objected, that humankind has shown a rather vibrant aptitude for brutality for quite some time). The newest variation on this theme, which in structure has changed little since its revival by Tipper Gore and Susan Baker in the 1980s, is that violent video games, by wedding the rapid pleasures of accomplishment unique to video games with a sense of agency in apparent violence, have created a generation desensitized not just to images of extraordinary violence, but to the prospect of committing it oneself. A friend of mine who has good reason to be sensitive was so infuriated at the NRA’s release of a mobile app promoting “responsible gun use” one month to the day after the Newtown shootings that he couldn’t eat for several days.
If it is possible to set aside questions of titanically poor taste and worse (and it’s not clear that we should), there is something about this way of thinking about the problem of violent imaginaries that reflects what I am suggesting is an issue of pathologies arising from mental necessities.
There is little use denying that being intensively immersed in gaming environments (any gaming environments, and not just violent electronic ones) for extended periods of time can seriously, if usually temporarily, alter a person’s phenomenal experience of their own agency and the realness of the world around them (I confess this as a recovering Sid Meier enthusiast myself). But the concept of de-sensitization is a difficult one in particular because, as Arendt points out, de-sensitization is precisely what thought does, and must do to carry out its work. Nowhere is this more clear than in those cases in which we are confronted with events that seriously strain the possibility of thinking about them at all.
Thinking about tragedies involves a twin process that need not, and should not, lessen the experience of their terribleness…but it always can. That twin process, as Arendt describes it, is one of de-sensation and re-sensation. When we try to think about what has occurred, we have to call it up; we reproduce it “by repeating in [our] imagination, we de-sense whatever had been given to our senses.” In remembering, we convert the data of our senses, including our common sense, into objects of thought. We do that in order to make them fit for the preoccupation of thought, our “quest for meaning;” in other words, re-sensation, the process of translation into narrative and metaphor by which facts become truths.
It’s not difficult to see how extraordinary brutality challenges this double operation to the point of impossibility. On the one hand, this model of de-sensation by the reproductive imagination presumes a kind of voluntarism to the recollection, when often, and most especially in the cases like those of immediate victims where the stakes are highest, recollection comes unbidden, and far from de-sensing involves the cruel and incessant reiteration of sense that is renewed in all of its thought-destroying power. On the other hand, extraordinary brutality by its very nature resists re-sensation in proportion to its extraordinariness: to read the trial of Anders Breivik, for example, is to watch a play of the utter failure of not only the killer’s own efforts at narrative, but those of every single speaking person involved. It is not a surprise that these trials test the law’s own limited strictures of re-sensation to the breaking point, which often comes as nothing more than quiet acquittal (as with Mathieu Ngudjolo Chui, in whose case international law was forced to confess the inadequacy of its categories).
What’s more difficult to see is how that terrible challenge presented by extraordinary brutality to our very capacity to think is simultaneously a challenge to our politics, one perhaps graver still for our hope, as Arendt puts it in her Denktagebuch, to share a world with those with whom we must live. Extraordinary brutality makes a shambles of our narrating powers, and the failures of others to make sense of it which incite our scorn – as when, I will admit, even as someone who grew up in a gun culture, I literally cannot make sense of the suggestion that high-capacity magazines would be better combated by their increased prevalence in the school environment itself – are no less replicated by our own attempts, whether or not we can see and admit it. Imagination’s other function, its most political function for Arendt, is to put ourselves in the place of others in order to more fully see the political world that confronts us. If this is true, then it is not our capacity to put ourselves in the place of a killer that most threatens our political capacity to respond, whatever the prevalence of this problem in popular discourse. This may often be an impossibility, but the stakes are much lower than that of the impossibility of putting ourselves in the places of others who are also trying – and like us mostly failing – to respond. In trying and failing to renarrate tragedy in order to construct political problems and solutions, we come up against the limits of our imaginations, limits that are themselves defined by the bounds of our prior experiences and our thought itself. When it comes to the world of the gun (and here, I can only urge a look at the truly remarkable The Language of the Gun), we are running up against the reality that contemporary American polity covers experiences of the world divergent to such an extreme – how much, in terms of sensory experience in their personal history, do David Keene and Alan Padilla share, really?
– that answers truly are being constructed from worlds which, in the senses that matter to policymaking, don’t overlap. And in an environment where that is true, the first, most critical order must be the one that is neglected most: not to analyze why our competing solutions are right or wrong, but to understand why the solutions we are proposing arise from the experiences of the world we have had, including our experiences of the tragedies we cannot re-sense.
Responses cannot be crafted out of worlds that are not shared, and tending to the former requires a kind of tending to the latter that we see vanishingly rarely, though the torch is still carried by a few radio producers and documentary filmmakers. Absent that kind of dedicated world-making – and perhaps that process requires a time and restraint that too is threatened by extraordinary brutality – we will simply be left with what we have, an issue politics without common sense because the only sense that is common, the event, is insensible. When they respond in ways we cannot abide, understanding our political others is an almost impossibly difficult task. It is also one that a polity cannot possibly do without.
San Jose State University is experimenting with a program where students pay a reduced fee for online courses run by the private firm Udacity. Teachers and their unions are in retreat across the nation. And groups like Uncollege insist that schools and universities are unnecessary. At a time when teachers are everywhere on the defensive, it is great to read this opening salvo from Leon Wieseltier:
When I look back at my education, I am struck not by how much I learned but by how much I was taught. I am the progeny of teachers; I swoon over teachers. Even what I learned on my own I owed to them, because they guided me in my sense of what is significant.
I share Wieseltier’s reverence for educators. Eric Rothschild and Werner Feig lit fires in my brain while I was in high school. Austin Sarat taught me to teach myself in college. Laurent Mayali introduced me to the wonders of history. Marianne Constable pushed me to be a rigorous reader. Drucilla Cornell fired my idealism for justice. And Philippe Nonet showed me how much I still had to know and inspired me to read and think ruthlessly in graduate school. Like Wieseltier, I can trace my life’s path through the lens of my teachers.
The occasion for such a welcome love letter to teachers is Wieseltier’s rapacious rejection of homeschooling and unschooling, two movements that he argues denigrate teachers. As sympathetic as I am to his paean to pedagogues, Wieseltier’s rejection of all alternatives to conventional education today is overly defensive.
For all their many ills, homeschooling and unschooling are two movements that seek to personalize and intensify the often conventional and factory-like educational experience of our nation’s high schools and colleges. According to Wieseltier, these alternatives are possessed of the “demented idea that children can be competently taught by people whose only qualifications for teaching them are love and a desire to keep them from the world.” These movements believe that young people can “reject college and become ‘self-directed learners.’” For Wieseltier, the claim that people can teach themselves is both an “insult to the great profession of pedagogy” and a romantic over-estimation of the “untutored ‘self’.”
The romance of the untutored self is strong, but hardly dangerous. While today educators like Will Richardson and entrepreneurs like Dale Stephens celebrate the abundance of the internet and argue that anyone can teach themselves with nothing more than an internet connection, that dream has a history. Consider this endorsement of autodidactic learning from Ray Bradbury from long before the internet:
Yes, I am. I’m completely library educated. I’ve never been to college. I went down to the library when I was in grade school in Waukegan, and in high school in Los Angeles, and spent long days every summer in the library. I used to steal magazines from a store on Genesee Street, in Waukegan, and read them and then steal them back on the racks again. That way I took the print off with my eyeballs and stayed honest. I didn’t want to be a permanent thief, and I was very careful to wash my hands before I read them. But with the library, it’s like catnip, I suppose: you begin to run in circles because there’s so much to look at and read. And it’s far more fun than going to school, simply because you make up your own list and you don’t have to listen to anyone. When I would see some of the books my kids were forced to bring home and read by some of their teachers, and were graded on—well, what if you don’t like those books?
In this interview in the Paris Review, Bradbury not only celebrates the freedom of the untutored self, but also dismisses college along much the same lines as Dale Stephens of Uncollege does. Here is Bradbury again:
You can’t learn to write in college. It’s a very bad place for writers because the teachers always think they know more than you do—and they don’t. They have prejudices. They may like Henry James, but what if you don’t want to write like Henry James? They may like John Irving, for instance, who’s the bore of all time. A lot of the people whose work they’ve taught in the schools for the last thirty years, I can’t understand why people read them and why they are taught. The library, on the other hand, has no biases. The information is all there for you to interpret. You don’t have someone telling you what to think. You discover it for yourself.
What the library and the internet offer is unfiltered information. For the autodidact, that is all that is needed. Education is a self-driven exploration of the database of the world.
Of course such arguments are elitist. Not everyone is a Ray Bradbury or a Gottfried Wilhelm Leibniz, who taught himself Latin in a few days. Hannah Arendt refused to go to her high school Greek class because it was offered at 8 am—too early an hour for her mind to wake up, she claimed. She learned Greek on her own. For such people self-learning is an option. But even Arendt needed teachers, which is why she went to Marburg to study with Martin Heidegger. She had heard, she later wrote, that thinking was happening there. And she wanted to learn to think.
What is it that teachers teach when they are teaching? To answer “thinking” or “critical reasoning” or “self-reflection” is simply to open more questions. And yet these are the crucial questions we need to ask. At a time when education is increasingly confused with information delivery, we need to articulate and promote the dignity of teaching.
What is most provocative in Wieseltier’s essay is his civic argument for a liberal arts education. Education, he writes, is the salvation of both the person and the citizen. Indeed it is the bulwark of a democratic politics:
Surely the primary objectives of education are the formation of the self and the formation of the citizen. A political order based on the expression of opinion imposes an intellectual obligation upon the individual, who cannot acquit himself of his democratic duty without an ability to reason, a familiarity with argument, a historical memory. An ignorant citizen is a traitor to an open society. The demagoguery of the media, which is covertly structural when it is not overtly ideological, demands a countervailing force of knowledgeable reflection.
That education is the answer to our political ills is an argument heard widely. During the recent presidential election, the candidates frequently appealed to education as the panacea for everything from our flagging economy to our sclerotic political system. Wieseltier trades in a similar argument: A good liberal arts education will yield critical thinkers who will thus be able to parse the obfuscation inherent in the media and vote for responsible and excellent candidates.
I am skeptical of arguments that imagine education as a panacea for politics. Behind such arguments is usually the unspoken assumption: “If X were educated and knew what they were talking about, they would see the truth and agree with me.” There is a confidence here in a kind of rational speech situation (of the kind imagined by Jürgen Habermas) that holds that when the conditions are propitious, everyone will come to agree on a rational solution. But that is not the way human nature or politics works. Politics involves plurality and the amazing thing about human beings is that educated or not, we embrace an extraordinary variety of strongly held, intelligent, and conscientious opinions. I am a firm believer in education. But I hold out little hope that education will make people see eye to eye, end our political paralysis, or usher in a more rational polity.
What then is the value of education? And why is it that we so deeply need great teachers? Hannah Arendt saw education as “the point at which we decide whether we love the world enough to assume responsibility for it.” The educator must love the world and believe in it if he or she is to introduce young people to that world as something noble and worthy of respect. In this sense education is conservative, insofar as it conserves the world as it has been given. But education is also revolutionary, insofar as the teacher must realize that it is the young, brought into the world as it is, who will change that world. Teachers simply teach what is, Arendt argued; they leave to the students the chance to transform it.
To teach the world as it is, one must love the world—what Arendt comes to call amor mundi. A teacher must not despise the world or see it as oppressive, evil, and deceitful. Yes, the teacher can recognize the limitations of the world and see its faults. But he or she must nevertheless love the world with its faults and thus lead the student into the world as something inspired and beautiful. To teach Plato, you must love Plato. To teach geology, you must love rocks. While critical thinking is an important skill, what teachers teach is rather enthusiasm and love of learning. The great teachers are the lovers of learning. What they teach, above all, is the experience of discovery. And they do so by learning themselves.
Education is to be distinguished from knowledge transmission. It must also be distinguished from credentialing. And finally, education is not the same as indoctrinating students with values or beliefs. Education is about opening students to the fact of what is, teaching them about the world as it is. It is then up to the student, the young, to judge whether the world that they have inherited is loveable and worthy of retention, or whether it must be changed. The teacher is not responsible for changing the world; rather the teacher nurtures new citizens who are capable of judging the world on their own.
Arendt thus affirms Ralph Waldo Emerson's view that “He only who is able to stand alone is qualified for society.” Emerson’s imperative, to take up the divine idea allotted to each one of us, resonates with Arendt’s Socratic imperative, to be true to oneself. Education, Arendt insists, must risk allowing people their unique and personal viewpoints, eschewing political education and seeking, simply, to nurture independent minds. Education prepares the youth for politics by bringing them into a common world as independent and unique individuals. From this perspective, the progeny of teachers is the educated citizen, someone who is both self-reliant in an Emersonian sense and also part of a common world.
We face a challenge of leadership; there is a void in our body politic that remains to be filled. First, expectations of the president need to be re-evaluated. The public’s perception of the president is unrealistic and inflated. A CBS News/New York Times poll in March 2012 reported that 54% of people believe the president “can do a lot” about gas prices.
Our economic recession adds another dimension to the public’s inflated expectations. In the wake of the 2008 economic recession, all eyes turned to what the President-elect would do once in office. People believed, and still do, that the President had the ability to fix the global economic meltdown. The public expected the President to solve our economic problems without understanding that in the globalized neo-liberal regime, markets are highly interconnected. It is no longer possible for a single country to ameliorate the effects of an economic meltdown on its own.
The president will only matter in this century if we first address how we perceive him. He is neither a deity nor a dictator. His actions in an increasingly filibuster-happy Congress are limited. The public’s expectations must be re-evaluated and shaped to accept reality. The president cannot solve all our problems; the very fabric of the American Constitution prohibits the president from securing more powers. The justified fear of an autocrat prohibits action. This tradeoff was accepted by the founding fathers, and it must now be accepted again.
Once expectations are adjusted, how then does the president matter? The president will matter as long as he can engage citizens in our democratic process. The pervasive idea that democracy is simply voting has filled the minds of millions. Civic and democratic institutions lie asleep in times when the market prevails. People have given up on government; they see it as an artifact to be studied in history books. The president must see his role as protector of our democracy; he must be its biggest champion. This cannot be done through rhetoric alone. The president must help foster an engaged citizenry that actively participates in our democracy.
The danger to our politics does not come from terrorists; it comes from a citizenry that is not informed, does not participate, and could not care less. When the media suggests the president must rise above politics, the only way that can be done is to address the inherent problems in our current political system. It is to remind citizens of the price paid by their forefathers for political rights. The president must become the chief persuader, thereby helping bring citizens into the political fold. The only way for the president to matter in this century is for people to see him as a protector of this great experiment and not merely as a passerby.
These leaders will come from the left and the right alike; engaging citizens should not be a partisan issue. They must also come with a historical understanding of our democracy and American institutions. This does not mean they will rise from academia, but their understanding cannot be informed by current political debates; it must be informed by history. New political leaders must accept a non-politicized history that seeks truth.
Facts have become politicized, each side molding them to its own advantage. Objective truths are irrelevant because each side has been allowed to massage them. On August 28, 2012, New Jersey Governor Chris Christie lied by omission. In his keynote address at the Republican National Convention, he claimed that there has been a New Jersey comeback, that his policies have worked, and that all it takes is serious leaders to tackle our problems. He claimed he cut the state deficit while decreasing taxes, but he forgot to mention that he also cut pensions, teachers, firefighters, and much else. More glaring still is New Jersey’s unemployment rate of over 9%. The myth thus created allows Governor Christie to become a hero in the Republican Party. The truth does not lie with either party. A new leader must inform citizens of reality rather than try to score political points. This may be impossible, but it is the only way that the president will matter.
People are tired of the partisan bickering; Obama’s unemployment rate is just as bad as Governor Christie’s, and yet both sides claim victory. A president will not matter until he can acknowledge the fundamental problems at hand. For a leader to matter, he must stand for something greater than his own party. He must stand for citizen participation and access to information. Such a leader would not claim victory but would relate to citizens the problems we face and the solutions he believes will solve them, and he must acknowledge when those solutions do not work. It is a pragmatic president that will matter in this century, one who is willing to suffer the consequences of failed policies for democracy’s sake.
The Millennial generation will inherit a troubled world by the year 2040. Their ability to lead will prove extremely important. They will be the heirs to the American dilemma. The hope is that they rise and fill the leadership void not as past generations have done, but as new leaders, different and emboldened by a fight for a vibrant participatory democracy. It is John Dewey who should inform what a new president needs to fight for: “[T]he task of democracy is forever that of creation of a freer and more humane experience in which all share and to which all contribute.”
John Dewey, “Creative Democracy—The Task Before Us”
Have you not watched Newt Gingrich's takedown of CNN's John King at the opening of the Republican debate last night? You should.
Gingrich's supremely confident critique of the media's obsession with personal issues certainly put the Republican contest back in play and may have set him on the road to the nomination. It is also fascinating for the widely divergent reactions it has spawned.
The Republicans attending the debate gave Gingrich two standing ovations within three minutes. Most commentators have concluded that Gingrich won the debate in the first five minutes. But reaction on the left has been contemptuous.
Andrew Sullivan has great coverage and collects the responses.
Josh Marshall marvels at his hubris: "Shameless, hubris, chutzpah, whatever. It was pitch perfect for his intended audience. He took control of the debate and drew down all the tension about when the debate would turn to the open marriage stuff."
Andrew Sprung writes of an "astounding display of the Audacity of Hubris."
PM Carpenter shouts that it was "the most despicable display of grotesque demagoguery I have ever witnessed."
Tim Stanley (hat tip to Andrew Sullivan) has the best characterization of the rhetorical power of Gingrich's answer.
To understand the full power of Gingrich’s answer, you really have to watch him give it. The former Speaker has three standard expressions: charmed bemusement (“Why are you asking me that, you fool?”), indignant (“Why are you asking me that, you swine?”) and supreme confidence (“That’s not the question I would have asked, you moron”). Each comes with its own number of chins. For his stunning “No, but I will”, Newt employed the full dozen. He looked straight down them, with half moon goblin eyes. “I think the destructive, vicious, negative nature of much of the news media makes it harder to govern this country, harder to attract decent people to run for public office. And I am appalled that you would begin a presidential debate on a topic like that.” By the time his chins unfolded, Gingrich was in total command of the debate.
The interesting question is: was Gingrich wrong to react the way he did? Did his angry and forceful response show hubris and contempt? Or is it the confident and powerful response of a true leader?
For years, liberals and conservatives alike have kvetched unceasingly about how the media cares more about scandal than substance.
What was John King thinking, starting off the last Republican debate before a crucial primary with a question about marital infidelity from decades ago? One can of course argue that infidelity goes to character, and maybe it could have been asked about in some way. But is it really the most important issue of the debate? There are plenty of questions about Gingrich's character that are more pertinent to his ability to be President. Whether he once asked his wife to allow him to keep a mistress is not what disqualifies him from being President.
The reason Gingrich is still in this contest is because he has a supreme confidence in himself. He believes that he is the only candidate with big ideas, the only one willing to really buck the status quo. He styles himself a leader, and the strengths and weaknesses of his idea of leadership were on display in his answer to John King.
The strengths are clear. He elevated himself far above his questioner. He assumed a leadership position and pushed through without any self-doubt or self-criticism. Can you imagine someone like President Obama acting with such assurance? It is almost inconceivable. I can't imagine watching Gingrich and not feeling something like: Finally! Someone has the courage to say what he believes and tell the media to get over their titillations and focus on the fact that this is the most important Presidential election in a generation.
Gingrich's weaknesses are clear as well. The man is imperious. He lives at times in a fantasy world of his own, one in which he is the philosopher king straining to keep calm and save the rest of us before he explodes at our idiocy. Nothing is more indicative of his hubris than his contempt for the Congressional Budget Office, the non-partisan body that Gingrich regularly assails and wants to abolish. That this has never been asked about in the debates is a travesty, and in many ways it supports Gingrich's tirade. In any case, it speaks much more to the question of character and leadership than his marital problems do.