In 1949, The New York Times asked Norbert Wiener, author of Cybernetics, to write an essay for the paper that expressed his ideas in simple form. For editorial and other reasons, Wiener’s essay never appeared and was lost. Recently, a draft of the never-published essay was found in the MIT archives. Written 64 years ago, the essay remains deeply topical. The Times recently printed excerpts. Here is the first paragraph:
By this time the public is well aware that a new age of machines is upon us based on the computing machine, and not on the power machine. The tendency of these new machines is to replace human judgment on all levels but a fairly high one, rather than to replace human energy and power by machine energy and power. It is already clear that this new replacement will have a profound influence upon our lives, but it is not clear to the man of the street what this influence will be.
Wiener draws a core distinction between machines and computing machines, a distinction that is founded upon the ability of machines to mimic and replace not only human labor, but also human judgment. In the 1950s, when Wiener wrote, most Americans worried about automation replacing factory workers. What Wiener saw was a different danger: that intelligent machines could be created that would “replace human judgment on all levels but a fairly high one.”
Today, of course, Wiener’s prophecy is finally coming true. The IBM supercomputer Watson is being trained to make diagnoses with such accuracy, speed, and efficiency that it will largely replace the need for doctors to be trained in diagnostics.
Google is developing a self-driving car that will obviate the need for humans to judge how fast and how close to others they drive, just as GPS systems already render moot the human sense of direction. MOOCs are automating the process of education and grading so that fewer human decisions need to be made at every level. Facebook is automating the acquisition of friends, lawyers are employing computers to read and analyze documents, and on Wall Street computer trading is automating the buying and selling of stocks. Surveillance drones, of course, are being given increasing autonomy to sift through data and decide which persons to follow or investigate. Finally, in the scandal of the day, the National Security Agency is using computer algorithms to mine data about our phone calls, looking for abnormalities and suspicious patterns that would suggest potential dangers. In all these cases, the turn to machines to supplement or even replace human judgment has a simple reason: even if machines cannot think, they can be programmed to do traditionally human tasks in ways that are faster, more reliable, and less expensive than human beings can. In ways big and small, human judgment is being replaced by computers and machines.
It is important to recognize that Wiener is not arguing that we will create artificial human beings. The claim is not that humans are simply fancy machines or that machines can become human. Rather, the point is that machines can be made to mimic human judgment with such precision and subtlety that their judgments, while not human, are considered equal to or even better than human judgment. The result, Wiener writes, is that “Machines much more closely analogous to the human organism are well understood, and are now on the verge of being built. They will control entire industrial processes and will even make possible the factory substantially without employees.”
Wiener saw this new machine age as dangerous on at least two grounds. First, economically, the rise of machines carries the potential to upend basic structures of civilization. He writes:
These new machines have a great capacity for upsetting the present basis of industry, and of reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price. If we combine our machine-potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an industrial revolution of unmitigated cruelty.
The dangers Wiener sees from our increased reliance on computing machines are not limited to economic dislocation. The real threat that computing machines pose is that as we cede more and more power to machines in our daily lives, we will, he writes, gradually forfeit our freedom and independence:
[I]f we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.
In short, it is only a humanity which is capable of awe, which will also be capable of controlling the new potentials which we are opening for ourselves. We can be humble and live a good life with the aid of the machines, or we can be arrogant and die.
For Wiener, our eventual servitude to machines is both acceptable and a fait accompli, one we must learn to accept. If we insist on arrogantly maintaining our independence and freedom, we will die. I gather the point is not that machines will rise up and kill their creators, but rather that we ourselves will program our machines to eliminate, imprison, immobilize, or re-program those humans who refuse to comply with the paternalistic and well-meaning directives of the machine systems we create in order to provide ourselves with security and plenty.
Wiener counsels that instead of self-important resistance, we must learn to be in awe of our machines. Our machines will improve our lives. They will ensure better medical care, safer streets, more efficient production, better education, more reliable childcare and more humane warfare. Machines offer the promise of a cybernetic civilization in which an entire human and natural world is regulated and driven towards a common good with super-human intelligence and calculative power. In the face of such utopian possibility, we must accept our new status as the lucky beneficiaries of the regulatory systems we have created and humble ourselves as beings meant to live well rather than to live free.
Recent revelations about the U.S. government’s use of powerful computers to mine and analyze enormous amounts of data collected via subpoenas from U.S. telecom companies are simply one example of the kind of tradeoff Wiener suggests we will and should make. If I understand the conclusions of Glenn Greenwald’s typically excellent investigative reporting, the NSA uses computer algorithms to scan the totality of phone calls and internet traffic in and out of the United States. The NSA needs all of this data—all of our private data—in order to understand the normal patterns of telephony and web traffic and thus to notice, as well, those exceptional patterns of calling, chatting, and surfing. The civil libertarian challenges of such a program are clear: the construction of a database of normal behavior allows the government to attend to those whose activities are outside the norm. Those outliers can be terrorists or pedophiles; they may be Branch Davidians or members of Occupy Wall Street; they may be Heideggerians or Arendtians. Whoever they are, once those who exist and act in patterns outside the norm are identified, it is up to the government whether to act on that information and what to do with it. We are put in the position of having to trust our government to use that information wisely, with pitifully little oversight. Yet the temptation will always be there for the government to make use of private information once it has it.
In the face of the rise of machines and the present NSA action, we have, Wiener writes, a choice. We can arrogantly thump our chests and insist that our privacy be protected from snooping machines and governmental bureaucracies, or we can sit back and stare in awe at the power of these machines to keep us safe from terrorists and criminals at such a slight cost to our happiness and quality of life. We already allow the healthcare bureaucracy to know the most intimate details of our lives and the banking system to penetrate into the most minute details of our finances and the advertising system to know the most embarrassing details of our surfing and purchasing histories; why, Wiener pushes us to ask, should we shy away from allowing the security apparatus to make use of our communications?
If there is a convincing answer to this hypothetical question and if we are to decide to resist the humbling loss of human freedom and human dignity that Wiener welcomes, we need to articulate the dangers Wiener recognizes and then rationalizes in a much more provocative and profound way. Towards that end, there are few books more worth reading than Hannah Arendt’s The Human Condition. Wiener is not mentioned in Hannah Arendt’s 1958 book; and yet, her concern and her theme, if not her response, are very much in line with the threat that cybernetic scientific and computational thinking pose for the future of human beings.
In her prologue to The Human Condition, Arendt writes that two threatening events define the modern age. The first was the launch of Sputnik. The threat of Sputnik had nothing to do with the cold war or the Russian lead in the race for space. Rather, Sputnik signifies for Arendt the fact that we humans are finally capable of realizing the age-old dream of altering the basic conditions of human life, above all that we are earth-bound creatures subject to fate. What Sputnik meant is that we were then in the 1950s, for the first time, in a position to humanly control and transform our human condition, and that we were doing so thoughtlessly, without politically and thoughtfully considering what that would mean. I have written much about this elsewhere and given a TEDx talk about it here.
The second “equally decisive” and “no less threatening event” is “the advent of automation.” In the 1950s, automation of factories threatened to “liberate mankind from its oldest and most natural burden, the burden of laboring and the bondage to necessity.” Laboring, Arendt writes, has for thousands of years been one essential part of what it means to be a human being. Along with work and action, labor comprises those activities engaged in by all humans. To be human has meant to labor and support oneself; to be human has for thousands of years meant that we produce things—houses, tables, stories, and artworks—that provide a common humanly built world in which we live together; and to be human has meant to have the ability to act and speak in such a way as to surprise others so that your action will be seen and talked about and reacted to with a force that will alter the course and direction of the human world. Together these activities comprise the dignity of man, our freedom to build, influence, and change our given world—within limits.
But all three of these activities of what Arendt calls the vita activa are now threatened, if not with extinction, then at least with increasing rarity and public irrelevance. As automation replaces human laborers, the human condition of laboring for our necessary preservation is diminished, and we come to rely more and more on the altruism of a state enriched by the productivity of machine labor. Laboring, part of what it has meant to be human for thousands of years, threatens to become ever less necessary and to occupy an ever smaller demand on our existence. As the things we make, the houses we live in, and the art we produce become ever more consumable, fleeting, and temporary, the common world in which we live comes to seem ever more fluid; we move houses and abandon friends with greater ease than previous ages would dispose of a pair of pants. Our collective focus turns toward our present material needs rather than towards the building of common spiritual and ethical worlds. Finally, as human action is seen as the statistically predictable and understandable outcome of human behavior rather than the surprising and free action of human beings, our human dignity is sacrificed to our rational control and steering of life to secure safety and plenty. The threat to labor, work, and action that Arendt engages emerges from the rise of science—what she calls earth and world alienation—and the insistence that all things, including human beings, are comprehensible and predictable by scientific laws.
Arendt’s response to these collective threats to the human condition is that we must “think what we are doing.” She writes at the end of her prologue:
What I propose in the following is a reconsideration of the human condition from the vantage point of our newest experiences and our most recent fears. This, obviously, is a matter of thought, and thoughtlessness—the heedless recklessness or hopeless confusion or complacent repetition of “truths” which have become trivial and empty—seems to me among the outstanding characteristics of our time. What I propose, therefore, is very simple: it is nothing more than to think what we are doing.
Years before Arendt traveled to Jerusalem and witnessed what she saw as the thoughtlessness of Adolf Eichmann, she saw the impending thoughtlessness of our age as the great danger of our time. Only by thinking what we are doing—and in thinking also resisting the behaviorism and materialism of our calculating time—can we humans hope to resist the impulse to be in awe of our machines and, instead, retain the reverence for the human being that is the foundation of our humanity. Thinking—that dark, irrational, and deeply human activity—is the one meaningful response Arendt finds to both the thoughtlessness of scientific behaviorism and the thoughtlessness of the bureaucratic administration of mass murder.
There will be great examples of chest thumping about the loss of privacy and the violation of constitutional liberties over the next few days. This is as it should be. There will also be sober warnings about the need to secure ourselves from terrorists and enemies. This is also necessary. What is needed beyond both these predictable postures, however, is serious thinking about the tradeoffs between our need for reliable and affordable security and our commitment to freedom, along with honest discussion of what we today mean by human freedom. To begin such a discussion, it is well worth revisiting Norbert Wiener’s essay. It is your weekend read.
If you are interested in pursuing Arendt’s own response to the crisis of humanism, you can find a series of essays and public lectures on that theme here.
Graduation is upon us. Saturday I will be in full academic regalia mixing with the motley colors of my colleagues as we send forth yet another class of graduates onto the rest of their lives. I advised three senior projects this year. One student is headed to East Jerusalem, where she will be a fellow at the Bard Honors College at Al Quds University. Another is staying at Bard, where he will co-direct Bard’s new Center for the Study of the Drone. The third is returning to the United Kingdom, where he will be the fourth person in a new technology-driven public relations start-up. A former student just completed Bard’s Master’s in Teaching and will begin a career as a high school teacher. Another recent grad is returning from Pakistan to New York, where she will earn a master’s in interactive technology at the Tisch School for the Arts at NYU. These are just a few of the extraordinary opportunities that young graduates are finding or making for themselves.
The absolute best part of being a college professor is the immersion in optimism from being around exceptional young people. Students remind us that no matter how badly we screw things up, they keep on dreaming and working to reinvent the world as a better and more meaningful place. I sometimes wonder how people who don’t have children or don’t teach can possibly keep their sanity. I count my lucky stars to be able to live and work around such amazing students.
I write this at a time, however, in which the future of physical colleges where students and professors congregate in small classrooms to read and think together is at a crossroads. In The New Yorker, Nathan Heller has perhaps the most illuminating essay on MOOCs yet written. His focus is on Harvard University, which brings a different perspective than most such articles. Heller asks how MOOCs will change not only our wholesale educational delivery at state and community colleges across the country, but also how the rush to transfer physical courses into online courses will transform elite education as well. He writes: “Elite educators used to be obsessed with ‘faculty-to-student ratio’; now schools like Harvard aim to be broadcast networks.”
By focusing on Harvard, Heller shifts the traditional discourse surrounding MOOCs, one that usually concentrates on economics. When San Jose State or the California State University system adopts MOOCs, the rationale is typically said to be savings for an overburdened state budget. While many studies show that students actually do better in online courses than they do in physical lectures, a combination of cynicism and hope leads professors to be suspicious of such claims. The replacement of faculty by machines is thought to be a coldly economic calculation.
But at Harvard, which is wealthier than most oil sheikdoms, the warp speed push into online education is not simply driven by money (although there is a desire to corner a market in the future). For many of the professors Heller interviews in his essay, the attraction of MOOCs is that they will actually improve the elite educational experience.
Take for example Gregory Nagy, professor of classics and one of the most popular professors at Harvard. Nagy is one of Harvard’s elite professors flinging himself headlong into the world of online education. He is dividing his usual hour-long lectures into short videos of about 6 minutes each—people get distracted watching lectures on their iPhones at home or on the bus. He imagines “each segment as a short film” and says that “crumbling up the course like this forced him to study his own teaching more than he had at the lectern.” For Nagy, the online experience is actually forcing him to be clearer; it allows for spot-checking participants’ comprehension of the lecture through repeated multiple-choice quizzes that must be passed before students can continue on to the next lecture. Dividing the course into digestible bits that can be swallowed whole in small meals throughout the day is, Nagy argues, not cynical, but progress. “Our ambition is actually to make the Harvard experience now closer to the MOOC experience.”
It is worth noting that the Harvard experience of Nagy’s real-world class is not actually very personal or physical. Nagy’s class is called “Concepts of the Hero in Classical Greek Civilization.” Students call it “Heroes for Zeroes” because it has a “soft grading curve” and it typically attracts hundreds of students. When you strip away Nagy’s undeniable brilliance, his physical course is a massive lecture course constrained only by the size of Harvard’s physical plant. Those of us who have been on both sides of the lectern know such lectures can be entertaining and informative. But we also know that students are anonymous, often sleepy, rarely prepared, and none too engaged with their professors. Not much learning goes on in such lectures that can’t be simply replicated on a TV screen. And in this context, Nagy is correct. When one compares a large lecture course with a well-designed online course, it may very well be that the online course is a superior educational venture—even at Harvard.
As I have written here before, the value of MOOCs is to finally put the college lecture course out of its misery. There is no reason to be nostalgic for the lecture course. It was never a very good idea. Aside from a few exceptional lecturers—in my world I can think of the reputations of Hegel, his student Eduard Gans, Martin Heidegger, and, of course, Hannah Arendt—college lectures are largely an economical way to allow masses of students to acquire basic introductory knowledge in a field. If the masses are now more massive and the lectures more accessible, I’ll accept that as progress.
The real problem MOOCs pose is not that they threaten to replace lecture courses, but that they intensify our already considerable confusion regarding what education is. Elite educational institutions, as Heller writes, no longer compete only among themselves. He talks with Gary King, University Professor of Quantitative Social Science, and Drew Gilpin Faust, Harvard’s president, who see Harvard’s biggest threat not as Yale or Amherst but as the University of Phoenix, the for-profit university. The future of online education, King argues, will be driven by understanding education as a “data-gathering resource.” Here is his argument:
Traditionally, it has been hard to assess and compare how well different teaching approaches work. King explained that this could change online through “large-scale measurement and analysis,” often known as big data. He said, “We could do this at Harvard. We could not only innovate in our own classes—which is what we are doing—but we could instrument every student, every classroom, every administrative office, every house, every recreational activity, every security officer, everything. We could basically get the information about everything that goes on here, and we could use it for the students.” A giant, detailed data pool of all activities on the campus of a school like Harvard, he said, might help students resolve a lot of ambiguities in college life.
At stake in the battle over MOOCs is not merely a few faculty jobs. It is a question of how we educate our young people. Will they be, as they increasingly are, seen as bits of data to be analyzed, explained, and guided by algorithmic regularities, or are they human beings learning to be at home in a world of ambiguity?
Most of the opposition to MOOCs continues to be economically tinged. But the real danger MOOCs pose is their threat to human dignity. Just imagine that after journalists and professors and teachers, the next industry to be replaced by machines is babysitters. The advantages are obvious. Robotic babysitters are more reliable than 18-year-olds, less prone to be distracted by text messages or Twitter. They won’t be exhausted and will have access to the highest quality first aid databases. Of course they will eventually also be much cheaper. But do we want our children raised by machines?
That Harvard is so committed to a digital future is a sign of things to come. The behemoths of elite universities have their sights set on educating the masses and then importing that technology back into the ivy quadrangles to study their own students and create the perfectly digitized educational curriculum.
And yet it is unlikely that Harvard will ever abandon personalized education. Professors like Peter J. Burgard, who teaches German at Harvard, will remain, at least for the near future.
Burgard insists that teaching requires “sitting in a classroom with students, and preferably with few enough students that you can have real interaction, and really digging into and exploring a knotty topic—a difficult image, a fascinating text, whatever. That’s what’s exciting. There’s a chemistry to it that simply cannot be replicated online.”
Burgard is right. And at Harvard, with its endowment, professors will continue to teach intimate and passionate seminars. Such personalized and intense education is what small liberal arts colleges such as Bard offer, without the lectures and with a fraction of the administrative overhead that weighs down larger universities. But at less privileged universities around the land, courses like Burgard’s will likely become ever more rare. Students who want such an experience will look elsewhere. And here I return to my optimism around graduation.
Dale Stephens of Uncollege is experimenting with educational alternatives to college that foster learning and thinking in small groups outside the college environment. The Saxifrage School in Pittsburgh and the Brooklyn Institute of Social Science are offering college courses at a fraction of the usual cost, betting that students will happily use public libraries and local gyms in return for a cheaper and still inspiring educational experience. I tell my students who want to go to graduate school that the teaching jobs of the future may not be at universities and likely won’t involve tenure. I don’t know where the students of tomorrow will go to learn and to think, but I know that they will go somewhere. And I am sure some of my students will be teaching them. And that gives me hope.
As graduates around the country spring forth, take the time to read Nathan Heller’s essay, Laptop U. It is your weekend read.
You can also read our past posts on education and on the challenge of MOOCs here.
I must confess, I am no Roger Ebert. I don’t write movie reviews for a living. I love movies, and watch lots of them, and often have strong opinions, like most of us. More than that I cannot claim.
But I have been deeply engaged in the life and thought of Hannah Arendt, having recently finished a book on her. And one thing I can tell you is that at her core she was Jewish and also very American. The problem of Jewish identity was something she wrestled with her whole life, and in a very advanced way. She looked for data everywhere, even among Nazis, and she pulled ideas from everywhere, seeking to invent something new. By identity, I don’t mean just personal identity. I mean the collective identity upon which personal identities stand, and the politics that surround them. The problem for her was how an ethnic identity could be anchored in political institutions, and fostered, and protected, and yet avoid the close-mindedness and intellectual rigidity that seem inherent in nationalism. Thus too much is constantly made of her apparent "non-Love" for the Jewish people, something she wrote to Gershom Scholem after the publication of Eichmann in Jerusalem, an exchange that is also a key scene in this movie. Against the backdrop of her own life, however, the idea that only friends mattered sounded just a bit ironic. Arendt was not exactly a "cultivator of her garden." She spent all her time wrapped up in national and international and cultural politics. Jewish politics was a big part of her life.
So as a fan of both movies and Arendt, you can imagine how much I was looking forward to this movie. Unfortunately, I came out deeply disappointed. It’s not simply that this portrait of Arendt is frozen in amber, and celebrates the misunderstandings of 50 years ago, when Eichmann in Jerusalem had just come out. It’s not simply that it ignores the last 15 years of modern scholarship, which re-excavated her Jewishness in order to make sense of the many things in her writings and actions that otherwise don’t. It’s that it turns her story inside out. She becomes a German woman saving the Jews.
I first saw this film in Germany, and I can testify that Germans love the story when told this way. It also seems a story the director loves to tell. After seeing Arendt twice (once in Munich and once in Tel Aviv), I remembered von Trotta’s 2003 movie Rosenstrasse, and was stunned to realize it’s pretty much the same story: German women saving Jewish men. Rosenstrasse, an interesting footnote in Holocaust and legal history, ends in a triumphal march with the women bringing their men home, seeming as if they’d risked life and limb. In Hannah Arendt, a similar scene is her big speech at the New School, where the evil administrators (all very Jewish looking) are shamed into submission by her brilliance, while young students (all pretty and Aryan-looking) applaud enthusiastically. Both are archetypal Hollywood “the world is good again” scenes. And both are fundamental distortions of reality, German fantasies being taken for history.
Perhaps that is the key. Perhaps in this age of Tarantino and Spielberg you are free to do what you like. The projection of historical fantasies is now a subgenre. So shouldn’t the Germans be free to enjoy their fantasies about the Jews, about Israel, about German-Jewish relations, about the meaning of German-Jewish reconciliation, you name it? Sure. But, as I’m sure you have noticed, along with passionate fans, these sorts of films always attract large measures of stinging criticism from (a) scholars peeved at gross inaccuracies, and (b) people who hate this fantasy and want a different one. Since for this film I fall into both groups, you should treat my reactions accordingly.
Hollywood conventions may be most visible in the “right with the world” scenes, but they appear throughout the film. The most Hollywood thing about it is that this is a film lionizing thinkers that doesn’t have any thinking in it. We are supposed to know from the camera and the music and the reaction shots that they are having big thoughts and that everyone is awed by them. But if you actually listen to what is supposed to be passing as big thought, Oy. Hannah Arendt and Mary McCarthy: frivolous advice about men. Martin Heidegger, who hovers over the movie like a Black Forest deity, appears via flashbacks, pronouncing things like “We think because we are thinking beings.” Young Hannah Arendt looks up, clearly smitten by such banalities. Under Heidegger’s cloud, Hannah Arendt is not only Germanized, but turned into a sentimental fool. Which is the last description anyone who ever met her would reach for.
As for the Eichmann trial that frames and forms the core of the film, all I can say is don’t get me started. Arendt’s New Yorker articles and the book that came out of them were the source of endless misunderstanding, both at the time and still today. This movie not only adds to it, it builds on it. For von Trotta, “the banality of evil” is a way of normalizing the crimes of the Holocaust: anyone could have done them. Eichmann is no antisemite. Banality is thus the deepest insight, the final dismissal of charges. And it’s the Jews who miss it, and the German-speaking woman who has to tell them, for their own good, to give up on this grudge business and, with it, to recognize their own guilt in the destruction of the Jews.
So far, so normal. Every day, Eichmann in Jerusalem is misinterpreted like this in classrooms around the world. But there is one thing I can’t forgive, which gives the film its final conclusion, and that is the completely fabricated scene at the end where she is threatened by the Mossad. It is nonsensical for several reasons, but worse is how it is composed. It is a “walking my lonely road” scene that chimes with the very first scene of the movie, when Eichmann is walking along in Argentina just before he is grabbed. There, the Mossad men overpower him completely; he is helpless and held up to scorn. Here, she stands up to them and tells them off; they slink away grumbling, impotent before the truth. The arc is completed. The Israelis, wrong from the beginning, have finally been cowed by The Truth About How Wrong They Were, by the German-speaking Athena. And for good measure she throws in a sneering crack about how the Jewish nation must have too much money if it sent four of them.
Tarantino never made up anything more inverted.
Natan Sznaider is a Professor at the Academic College of Tel Aviv-Yaffo. Among his several books are Jewish Memory and the Cosmopolitan Order: Hannah Arendt and the Jewish Condition and two books on the sociology of the Holocaust. He was born and grew up in Germany and is a regular commentator in the German press. He lives in Tel Aviv.
"If this practice [of totalitarianism] is compared with that of tyranny, it seems as if a way had been found to set the desert itself in motion, to let loose a sand storm that could cover all parts of the inhabited earth. The conditions under which we exist today in the field of politics are indeed threatened by these devastating sand storms."
-Hannah Arendt, The Origins of Totalitarianism
Arendt's concluding image in The Origins of Totalitarianism leaves us with a bleak sense of how the mass of lonely and isolated individuals in modern society – the desert – can readily be swept into the "sandstorm" of totalitarianism.
The book sketches the forces that drive this new form of frenetic political motion, as well as the traditional resources that prevented it earlier: the loneliness of the frightened person as opposed to the solitude of the reflective individual; the community of discussion instead of the techniques of "mass movements."
Of the specific principles of totalitarianism that Arendt raises, few are as intriguing as her recasting of what is now termed polycracy [Neumann/Hüttenberger/Broszat], the notion that at every level individuals in a totalitarian society found themselves within overlapping systems of bureaucracy and power, unsure which level was of the most import. "All levels of the administrative machine in the Third Reich," she notes as a principal example, "were subject to a curious duplication of offices, with a fantastic thoroughness." At every level of society, individuals were unsure of how to calibrate actions and statements to their context for self-promotion, defense, or simply to be left alone, since there were often two or more overlapping systems of organization defining their position. The average worker, for instance, might be unsure what factor was decisive for their fate: the role of their actual boss, the party connections of colleagues, the intrigues of the secret police, the family connections of acquaintances, or even their role in apparently secondary organizations such as an automobile association. Ultimately, "regular" politics could never pertain, while the politics of the "movement" always had some traction.
The result in Arendt's telling was often a necessary striving beyond prescribed roles to express alignment with the ideals of the party, in the quest for advantage or mere safety. Unlike aspects of her later argument for the "banality of evil," here simply "fitting in" was not enough. In this manner, later scholars have argued, the decisive feature of totalitarian societies was neither the passive "structure" of organization nor the simple "intent" of members, but rather a new middle term created from the unease of masses of individuals.
The "conditions under which we exist today" no doubt still include the original possibilities of totalitarianism, but Arendt would also have us ask what has changed, what new aspects of society facilitate or act as windbreaks, so to speak, against such "sandstorms." Earlier windbreaks functioned passively, as Arendt suggests even for relatively dystopian societies such as tyrannies, through the exercise of clear lines of authority, the existence of an active private sphere of life, and the possible formation of a discursive community of individuals, even if only in small groups.
At first glance, the new social technologies credited in recent uprisings and protests suggest a strengthening of these windbreaks by giving voice to critiques of incompetent and unjust administration, the solitary opposition of conscience, and, perhaps most decisively, the possibility of organizing coordinated discussion and action on wide scales. Yet, apace with these changes there are also darker transformations allowed by vast increases in computing memory and power that make tracking information and people on social media progressively easier. In repressive societies this is already occurring through the aggregation of past communications, the profiling of individuals, and guilt by association with members of one's social orbit.
By definition secret, statistical, and yet deeply connected with personal preference and affiliations, the aggregation of data concerning the individual in the context of social networking presents a real potential for abuse by any number of outside powers. Although most apparent in directly repressive societies, this transformation has the potential for abuse in numerous forms of governance. For if not managed properly, the laws and infrastructure of the internet could increasingly give rise to the sense – and reality – that one's words and actions could be interpreted in any number of contexts and by many forms of institutions. The malign influence of polycracy on individual decision, previously a signal feature of totalitarian regimes, might start to appear in any political system whatsoever.
As data about the individual becomes potentially available to a spectrum of interests and parties, ranging from credit agencies and divorce lawyers to political opponents and work rivals, it is easy to imagine individuals (and software) attempting to "mask" or redefine preferences, interests, and affiliations.
The result would be a pervasive self-censorship, but also – in light of this confusing secondary power – a corollary attempt to act out in the manner that seems most open to reward. The ability of individuals to first believe in the honesty of their own choices and speech, and with this the honesty of others, could be profoundly altered, as would the nature of civil society.
Precisely by connecting the private and public spheres in new ways, in groups of friends and in political action, social media has the possibility to atomize anew. Debates over the role of law and infrastructure in shaping these contexts take on a new relevance for preserving the basis of the windbreaks of civil society, ranging from recent initiatives such as "do not track" to the separation of terrorist investigations from other forms of surveillance (such as occurs in Germany), to larger innovations such as the E.U.'s "right to data privacy" or plan for the "right to be forgotten." Although integral perhaps to preventing the "sandstorms" Arendt warned of, these innovations may also prove critical to amplifying the positive features of individual conscience and civil society allowed by social media.