Franco Moretti is a literature professor, and founder of the Stanford Literary Lab, who believes in something called "computational criticism," that is, the ability of computers to aid in the understanding of literature. Joshua Rothman's recent profile of Moretti has provoked a lot of response, most of it defending traditional literary criticism from the digital barbarians at the gates. Moretti's defenders argue, however, that his critics have failed to understand a crucial difference between his work and what they're worried it might supplant: "The basic idea in Moretti’s work is that, if you really want to understand literature, you can’t just read a few books or poems over and over (“Hamlet,” “Anna Karenina,” “The Waste Land”). Instead, you have to work with hundreds or even thousands of texts at a time. By turning those books into data, and analyzing that data, you can discover facts about literature in general—facts that are true not just about a small number of canonized works but about what the critic Margaret Cohen has called the 'Great Unread.'"
The truth Moretti is after, however, has nothing to do with literature, with the blood-curdling insights of tragedy or the personal insights of the novel's hero. What Moretti seeks is a better understanding of all the other texts, of the entirety of texts and the overarching literariness of a period or of history as a whole. One could say that rather than supplanting the traditional literary critic, Moretti's work will aid the literary historian, if only by giving a potentially comprehensive picture of any given zeitgeist. That is true, so far as it goes. But as the already declining number of literature students is now in part siphoned off into alternative studies of literature that ignore and even disdain the surprising and irreducible quality of the momentary shock of insight, the decline of the literary sensibility will only accelerate. This is hardly to condemn Moretti and his data-oriented approach to literature as a reservoir of information about mass society; we ought, nevertheless, to find in the popularity of such trends a provocation to remind ourselves why literature is meant to be read by humans rather than machines.
RB h/t Josh Kopin
Magnus Carlsen—just 22 years old—beat Viswanathan Anand (the reigning world chess champion) this week at the World Chess Championship in Chennai, India. There has been much excitement about Carlsen’s victory, and not simply because of his youth. As Joe Weisenthal writes, Carlsen’s win signifies the emergence of a new kind of chess. We can profitably speak of at least three eras.
First, what is often called the Romantic era of chess. Here is how Weisenthal describes it:
In the old days, high-level chess was a swashbuckling game filled with daring piece sacrifices and head-spinning multi-move combinations where the winner would pull off wins seemingly out of nowhere.
Beginning in the middle of the 20th century, Weisenthal explains, chess became more methodical. New champions would still take chances, but they were studied risks, more considered, and often pre-tested in preparation games. Players would study all of their opponents' past games, analyzed with the help of computers. This meant that the spontaneous move was more often than not beaten back by the prepared answer.
As the study of chess became more rigorous, these wild games became more and more rare at the highest level, as daring (but theoretically weak) combinations became more easy to repel…. Modern chess champions have won by building crushing, airtight, positional superiorities against their opponents, grinding them down and forcing a resignation. The chess is amazing, although frequently less of a high-wire act.
The third era of recent chess might be called the computer age. It began, for better or worse, when IBM’s Deep Blue supercomputer beat the world chess champion Garry Kasparov in 1997. The current generation of players, Carlsen among them, was raised playing chess against computers. This has changed the way the game is played.
In an essay published a while back in The New York Review of Books, Kasparov reflected on what the rise of chess-playing computers has meant.
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers.
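The evaluation Kasparov describes, translating each piece into a value and "counting them up," can be illustrated with a minimal sketch. The piece values and the toy position below are illustrative assumptions only, not the weights of any real engine, which also scores thousands of positional factors:

```python
# A toy version of the "count up the values" evaluation Kasparov describes.
# Piece values and the sample position are illustrative assumptions only;
# real engines also weigh mobility, king safety, pawn structure, and more.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # pawn = 1 unit

def material_score(position):
    """Sum piece values: positive favors White, negative favors Black.

    `position` maps squares to pieces; uppercase is White, lowercase is Black.
    """
    score = 0
    for piece in position.values():
        value = PIECE_VALUES.get(piece.upper(), 0)  # kings are not scored
        score += value if piece.isupper() else -value
    return score

# White: rook + pawn (6 units); Black: knight + two pawns (5 units).
position = {"a1": "R", "b2": "P", "g8": "n", "f7": "p", "g7": "p"}
print(material_score(position))  # prints 1: a slight edge for White
```

An engine repeats a calculation like this, plus positional terms, across billions of candidate positions, which is the "free of prejudice and doctrine" arithmetic Kasparov has in mind.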
One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic. The romantic era of elegant and swashbuckling chess is over. But so too is the rational, calculated, grinding chess that Weisenthal describes as the style of the late 20th century. Since all players are trained by the logical rigidity of playing against computers, playing by pure logic will rarely give one side the ultimate advantage.
Which brings us to Carlsen and the buzz about his victory at the World Chess Championship. Behind Carlsen’s victories is what is being called his “nettlesomeness,” a concept apparently developed by the computer science professor Ken Regan. The idea has been described recently by Tyler Cowen:
Carlsen is demonstrating one of his most feared qualities, namely his “nettlesomeness,” to use a term coined for this purpose by Ken Regan. Using computer analysis, you can measure which players do the most to cause their opponents to make mistakes. Carlsen has the highest nettlesomeness score by this metric, because his creative moves pressure the other player and open up a lot of room for mistakes. In contrast, a player such as Kramnik plays a high percentage of very accurate moves, and of course he is very strong, but those moves are in some way calmer and they are less likely to induce mistakes in response.
For Weisenthal, the rise of “nettlesomeness” signifies the "new era of post-modern chess. It's not about uncorking crazy, romantic brilliancies. And it's not about achieving crushing, positional victories. It's about being as cool as a computer while your opponent does things that are, well, human."
I am not sure Weisenthal gives full credit to Carlsen’s nettlesomeness. Yes, Carlsen engages in a bit of emotional warfare—getting up from the table, trying to throw off his opponent. But his nettlesomeness also consists in “creative moves [that] pressure the other player and open up a lot of room for mistakes.” This is important.
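The measurement Regan and Cowen describe, scoring players by how much error their moves induce in opponents' replies, might be sketched roughly as follows. The function and all of the numbers are invented for illustration; Regan's actual statistical model is far more sophisticated:

```python
# A hedged sketch of the "nettlesomeness" idea: for each of a player's moves,
# record how much the opponent's reply lost relative to an engine's preferred
# reply (in centipawns), then average. All data here is invented; Regan's
# real model is far more elaborate.

def nettlesomeness(induced_errors):
    """Average centipawn loss a player induces in opponents' replies."""
    return sum(induced_errors) / len(induced_errors)

# Hypothetical induced errors for two styles of play.
players = {
    "creative_player": [40, 10, 75, 30, 55],  # pressuring moves, big mistakes
    "accurate_player": [10, 5, 15, 10, 10],   # calm, accurate, small errors
}

for name, errors in players.items():
    print(name, nettlesomeness(errors))
# creative_player 42.0
# accurate_player 10.0
```

On a metric of this shape, the "calmer" accurate player scores lower even though each of his own moves may be objectively stronger, which is precisely Cowen's point about Kramnik.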
In his essay, Kasparov also describes his experience of two matches played against the Bulgarian grandmaster Veselin Topalov, at the time one of the world's highest-rated players. When Kasparov played him in regular timed chess, he bested Topalov 3–1. But when they played a match in which both were allowed to consult a computer for assistance, the match ended in a 3–3 draw. The lesson Kasparov drew from this is that computer-assisted chess magnifies the importance of human creativity:
The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.
One may, however, question Kasparov’s conclusion. The computers did even out the match. As he admits, “My advantage in calculating tactics had been nullified by the machine.” More often than not, the result of computer-assisted chess is a draw.
What Carlsen’s victory may show, however, is that at a time when most players learn against machines and become technical wizards, it is those players who rise above the calculating game and are adept at finding the surprising or at least unsettling moves that will, at the very top of the sport, prove victorious. That is what Regan and Cowen mean by nettlesomeness. All of which suggests that, at least for the top chess player in the world, chess remains a human endeavor in which creativity can be enlisted to discombobulate human opponents playing increasingly like machines.
For your weekend read, take a long gander at Weisenthal’s essay. It includes simulated chess games to illustrate his point! Happy reading and playing.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
How does the rise of a secret, inscrutable, and unaccountable security bureaucracy in the United States impact law-abiding citizens? This is a crucial question as many of us struggle to understand the domestic spying programs unveiled by Edward Snowden. In one such program, XKeyscore, low-level NSA analysts are permitted to “mine enormous agency databases by filling in a simple on-screen form giving only a broad justification for the search. The request is not reviewed by a court or any NSA personnel before it is processed.” It is arguably true that the government needs to be able to act in extraordinary ways to protect the country at a time of world terrorism. It is equally true, however, that once such information is available to and held by the government, it is likely to be abused. Information is easily transferred. If the government collects and holds data on citizens, that data will eventually be misused, whether by the government or others. One case in point is Laura Poitras. In Peter Maass’ must-read cover story in last week’s New York Times Magazine, he tells how, since 2006, Poitras has been on government watch lists because of rumors falsely spread about her. While winning awards and producing lauded documentaries, she was repeatedly detained, met by armed guards, and had her computers and notes taken, searched, and held for weeks—all because of secret and ultimately false rumors. And all before she got involved with Edward Snowden. Now Poitras—who has helped to bring Snowden’s revelations about the illegal excesses of government surveillance to light in a responsible manner—may never be able to enter the United States again without being harassed and arrested. It is important to balance the need for security against the rights of citizens and the essential American right of free speech and meaningful dissent.
But how did it happen that the Attorney General of the United States of America had to write to the President of Russia assuring him that if Snowden were extradited to the U.S. he would not be tortured? As Daniel Ellsberg has pointed out, when he turned himself in after publishing the Pentagon Papers, he was freed on bond pending trial. Would the Obama administration’s Justice Department have treated Snowden that way? There is, in the end, a fine line separating the surveillance of terrorists and the harassment of citizens. Maass’ article sheds light on the surveillance state through the personal story of one woman. Wherever you come down on the question of national security surveillance, it is an essay that you should read.
Laura Miller reviews Jesse Walker's new short history of American conspiracy theories. For Walker, the conspiracy theory is a kind of national pastime, with some conspiracy or another widely discussed within many disparate demographics. Miller delves into why this might be: "As Walker sees it, our brains are predisposed to see patterns in random data and to apply stories to explain them, which is why conspiracy theory can be so contagious. Although conspiracies do exist, we need to be vigilant against our propensity to find them whether they are there or not. The most sensible outlook would appear to be that of Robert Anton Wilson, who concluded that “powerful people” could well be “engaged in criminal plots” but who found it unlikely that “the conspirators were capable of carrying out those plots competently.” Or, I would add, of covering them up effectively."
President Obama gave a speech this week promising to take on university tuition. It is a worthy goal at a time of skyrocketing student debt. But the devil is in the details, and here the details include a universal assessment board that will rank how well schools prepare students for employment. The idea is to allow students and parents to know which schools offer the best return on their investment and to shame colleges and universities into cutting costs and focusing more on preparing students for gainful employment. There are many questions that could be asked, including whether we are better served by spending money to make college more affordable or by actually turning high school—which is already free and mandatory—into a meaningful experience that prepares students for work and citizenship. But philosophical questions aside, does such assessment work? Not according to Colin Macilwain, writing in the scientific journal Nature. Discussing “Snowball,” a system designed to assess British universities, Macilwain writes: “A major problem with metrics is the well-charted tendency for people to distort their own behaviour to optimize whatever is being measured (such as publications in highly cited journals) at the expense of what is not (such as careful teaching). Snowball is supposed to get around that by measuring many different things at once. Yet it cannot quantify the attributes that society values most in a university researcher — originality of thinking and the ability to nurture students. Which is not the same as scoring highly in increasingly ubiquitous student questionnaires.” As assessments become a way of life, it is important to recall their unintended ill effects.
In an essay about the ways that Iran's regime has used the deaths of "martyrs" to political advantage in the past and how opponents of the regime used that same rhetoric to push the opposite way following the death of Neda Agha-Soltan in 2009, Mehdi Okasi describes his own youthful pushback as an Iranian-American visiting Tehran as a teenager: "I ignored my family’s warnings, and carried my copy of The Satanic Verses with me throughout Tehran: to coffee shops, internet cafes, even the park. I held it in my hand as I walked around the city, placed it on tables as I ordered in restaurants, or on the counter at the local bakery where my sweet tooth was placated daily by cream pastries layered with jam and rolled in crushed pistachios. I even made a point of opening it in view of police and soldiers. But to my disappointment, no one paid me any attention. When I visited the many bookstores around Engelob Square, I asked booksellers if they had a copy squirreled away. My question didn’t inspire rage or offense. They didn’t gasp in disbelief or chase me out the store with a broom. Instead, in a rather bored tone, they informed me that the book wasn’t available in Iran. When they learned that I was visiting from America, they added that I could probably find a copy at so-and-so’s bookstore. Like anything else that was forbidden, you only had to know where to look and how to ask for it."
Ta-Nehisi Coates has spent part of the summer learning French in Paris. His continuing education in a foreign tongue, and his decision to pursue that education in a place where that language is spoken, has revealed to him the arrogance of native speakers of English; Coates tells his friends that he wishes more Americans were multilingual, and "they can't understand. They tell me English is the international language. Why would an American need to know anything else?" For his own part, Coates seems to have been disabused of that particular notion simply by venturing into the world outside of his door; humility and empathy have been his prizes. "You come to this place," he says, "and find yourself disarmed. You see that it has its own culture, its own ages and venerable traditions, that the people do not tremble before you. And then you understand that there is not just intelligent life in outer space, but life so graceful that it shames you into silence."
The sixth annual fall conference, "Failing Fast: The Crisis of the Educated Citizen"
Olin Hall, Bard College
Learn more here.
Thomas Levin of Princeton came to Bard on Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years, and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when the media's obsession with drones is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors by newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization that we are being watched emerges and pervades the popular consciousness. In these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls a “rhetoric of surveillance.”
In short, we talk and think constantly about the fact that we are or may be being watched. This cannot but change the way we behave and act. Levin poses this question. What, he asks, is the emerging drone imaginary?
To answer that question, it is helpful to revisit an uncannily prescient imagining of the rise of drones in a text written over half a century ago, Ernst Jünger’s The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by the science fiction novelist Bruce Sterling, Jünger’s text centers on a job interview between an unnamed former light-cavalry officer and Giacomo Zapparoni, the secretive, filthy rich, and powerful proprietor of the Zapparoni Works, which “manufactured robots for every imaginable purpose.” Zapparoni’s secret, however, is that instead of big, hulking robots, he specialized in Lilliputian robots that gave “the impression of intelligent ants.”
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—that exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination and subjection to technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New-York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage from drones and forget that if we had to send in armies to do these tasks, the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don't offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, all will think harder before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestioning love that humans are beginning to prefer spending time with their machines to spending it with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with a speed and objectivity that humans can only dream of. In every area of human life where humans were once thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do better than humans, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper, turn off YouTube, and instead revisit Ernst Jünger’s classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling’s introduction to The Glass Bees here.
One of the great documents of American history is the Constitution of the Commonwealth of Massachusetts, written in 1779 by John Adams.
In Section Two of Chapter Six, Adams offers one of the most eloquent testaments to the political virtues of education. He writes:
Wisdom and knowledge, as well as virtue, diffused generally among the body of the people, being necessary for the preservation of their rights and liberties; and as these depend on spreading the opportunities and advantages of education in the various parts of the country, and among the different orders of the people, it shall be the duty of legislatures and magistrates, in all future periods of this commonwealth, to cherish the interests of literature and the sciences, and all seminaries of them; especially the university at Cambridge, public schools, and grammar-schools in the towns; to encourage private societies and public institutions, rewards and immunities, for the promotion of agriculture, arts, sciences, commerce, trades, manufactures, and a natural history of the country; to countenance and inculcate the principles of humanity and general benevolence, public and private charity, industry and frugality, honesty and punctuality in their dealings; sincerity, and good humor, and all social affections and generous sentiments, among the people.
Adams felt deeply the connection between virtue and republican government. Like Montesquieu, whose writings are the foundation on which Adams’ constitutionalism is built, Adams knew that a democratic republic could only survive amidst people of virtue. That is why his Constitution also held that the “happiness of a people and the good order and preservation of civil government essentially depend upon piety, religion, and morality.”
For Adams, piety and morality depend upon religion. The Constitution he wrote thus holds that a democratic government must promote the “public worship of God and the public instructions in piety, religion, and morality.” One of the great questions of our time is whether a democratic community can promote and nourish the virtue necessary for civil government in an irreligious age. Is it possible, in other words, to maintain a citizenry oriented to the common sense and common good of the nation absent the religious bonds and beliefs that have traditionally taught awe and respect for those higher goods beyond the interests of individuals?
Hannah Arendt saw the ferocity of this question with clear eyes. Totalitarianism was, for her, proof of the political victory of nihilism, the devaluation of the highest values, proof that we now live in a world in which anything is possible and human beings can no longer claim to be meaningfully different from ants or bees. Absent the religious grounding for human dignity, and in the wake of the loss of the Kantian faith in the dignity of human reason, what was left, Arendt asked, upon which to build the world of common meaning that would elevate human groups from their bestial impulses to the human pursuit of good and glory?
The question of civic education is paramount today, especially for those of us charged with educating our youth. We need to ask, as Lee Shulman recently has: “What are the essential elements of moral and civic character for Americans? How can higher education contribute to developing these qualities in sustained and effective ways?” In short, we need to insist that our institutions aim to live up to the task Adams claimed for them: “to countenance and inculcate the principles of humanity and general benevolence, public and private charity, industry and frugality, honesty and punctuality in their dealings; sincerity, and good humor, and all social affections and generous sentiments, among the people.”
Everywhere we look, higher education is being dismissed as overly costly and irrelevant. In many, many cases, this is wrong and irresponsible. There is a reason that applications continue to increase at the best colleges around the country, and it is not simply because these colleges guarantee economic success. What distinguishes the elite educational institutions in the U.S. is not their ability to prepare students for technical careers. On the contrary, the liberal arts tradition offers a useless education. But parents and students understand—explicitly or implicitly—that such useless education is powerfully useful. The great discoveries in physics come from useless basic research that then powers satellites and computers. New brands emerge from late-night reveries over the human psyche. And those who learn to conduct an orchestra or direct a play will, years on, have little difficulty managing a company. What students learn may be presently useless; but it builds the character and forms the intellect in ways that will have unintended and unimaginable consequences over lives and generations.
The theoretical justifications for the liberal arts are easy to mouth but difficult to put into practice. Especially today, defenses of higher education ignore the fact that colleges are not doing a great job of preparing students for democratic citizenship. Large lectures produce the mechanical digestion of information. Hyper-specialized seminars forget that our charge is to teach a liberal tradition. The fetishizing of research that no one reads exemplifies the rewarding of personal advancement at the expense of a common project. And, above all, the loss of any meaningful sense of a core curriculum reflects the abandonment of our responsibility to instruct students about making judgments about what is important. At faculties around the country, the desire to teach what one wants is seen as “liberal” and progressive, but it means in practice that students are advised that any knowledge is as good as any other knowledge.
To call for collective judgment about what students should learn is not to insist on a return to a Western canon. It is to say that if we as faculties cannot agree on what is important, then we abdicate our responsibility as educators: to lead students into a common world as independent and engaged citizens who can, and will, then act to remake and re-imagine that world.
John Adams was one of Hannah Arendt’s favorite thinkers, and he was so because he understood the deep connection between virtue and republicanism. Few documents are more worth revisiting today than the 1780 Constitution of the Commonwealth of Massachusetts. It is your weekend read.
The Wall Street Journal ran an interview this week with Luke Muehlhauser, the Executive Director of the Singularity Institute. The Journal asked: Will Artificial Intelligence Make Us Obsolete? Muehlhauser's answer was, well, yes. In his words:
Cognitive science has discovered that everything the human mind does is done by information processing and machines can do information processing too.
The first claim is clearly false, or at least depends on a strangely mixed-up idea of "information processing." The old determinist canard that humans are simply complex machines has not been proven or discovered by cognitive science. And even if humans do process billions upon billions of bits of information, it is not at all clear that such a humanly fallible process is reproducible. That is not a claim that cognitive science can make.
But cognitive science can claim that machines can be built that act in ways so like humans as to be nearly indistinguishable from them. They can even be better than humans at many quintessentially human tasks. So machines can not only beat humans at chess, they can make moves that seem like moves only a human could have made, as Garry Kasparov learned to his dismay in the second game of his rematch with Deep Blue. Machines can create paintings that appear to be fully creative, as does AARON, the painting machine created by artist and computer scientist Harold Cohen. And machines can increasingly make ethical decisions in warfare, as the robo-ethicist Ron Arkin has argued—decisions that are more humane than those made by human warriors.
Too much of the debate over artificial intelligence is caught up in the technical and really irrelevant question of whether machines can fully replicate human beings. The point is that if machines act "as if" they are human, or if they are capable of doing what humans do better than humans, we will gradually and continually allow machines to take over more and more of the basic human activities that make up our world. Already computers make most of the trades on Wall Street and computers are increasingly used in making medical diagnoses. Computers are being used to educate our children and write news stories. Caregivers for the elderly are being replaced by robotic companions. And David Levy, artificial intelligence researcher at the University of Maastricht in the Netherlands, argues that we will be marrying robots in the near future. It is not that these robotic lovers or artificial artists are human, but that they love and paint in ways that do or will soon pass the Turing test: they will be impossible to distinguish from human works.
Undoubtedly one reason machines are acting more human is that humans themselves are acting less so. As we interact more and more with machines, we begin to act more predictably, more repetitively, and less surprisingly. There is a convergence afoot, and it is the dehumanization of human beings as much as the humanization of robots that should be worrying us.
There is probably no presidential speech more quoted in academic circles than Dwight D. Eisenhower's 1961 farewell address, delivered three days before the end of his presidency. It was in that speech that Eisenhower warned of the danger of a military-industrial complex.
The need for a permanent army and a permanent arms industry creates, he warned, a gargantuan defense establishment that would wield an irresistible economic, political, and spiritual influence. In the face of this military-industrial complex, we as a nation must remain vigilant:
In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.
Eisenhower's speech was prescient. Academics in particular love to point to it to criticize bloated defense spending and to insist on the need to resist military demands for more weapons and more soldiers. They are undoubtedly right to do so.
This is true even as today the military may be the one significant institution in American life where top leaders are arguing that America's world preeminence is not sustainable. In his excellent new book Time to Start Thinking, Edward Luce describes how military leaders are convinced that the U.S. "should sharply reduce its 'global footprint' by winding up all wars, notably in Afghanistan, and by closing peacetime military bases in Germany, South Korea, the UK, and elsewhere." The military leaders Luce spoke to also said that the U.S. must learn to live with a nuclear Iran and "stop spending so much time and resources on the war against Al-Qaeda." Military leaders, Luce reports, are upset that "In this country 'shared sacrifice' means putting a yellow ribbon around the oak tree and then going shopping." Many military people seem to share Admiral Michael Mullen's view that the U.S. national debt is the "country's number one threat—greater than that posed by terrorism, by weapons of mass destruction, and by global warming." One must think hard about the fact that military leaders see the need for "shared sacrifice" that will shrink the military-industrial complex while Americans and their elected leaders still speak about tax cuts and stimulus.
Too frequently forgotten, or simply overlooked, is the fact that Eisenhower follows his discussion of the military-industrial complex with a similar warning about the dangers of a "revolution in the conduct of research." Parallel to the military-industrial complex is the danger of a university-government complex. (Hat tip: Tom Billings; see comments.) Eisenhower said:
Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades. In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.
Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.
Just as modern warfare demands a huge and constant arms industry, so too does the technological revolution demand a huge and constant army of researchers and scientists. This army can only be organized and funded by government largesse. There is a danger, Eisenhower warns, that the university-government complex will take on a life of its own, manufacturing unreal needs (e.g., requiring a Bachelor of Arts degree to manage an assembly line) and liberally funding research with little regard for quality, meaning, or need. While the university-government complex is not nearly as expensive or dangerous as the military-industrial complex, there is little doubt that it exists.
Eisenhower warns of a double threat from this university-government complex. First, that the nation's scholars could be dominated by Federal employment and gear their research to fit governmental mandates. And second, the opposite danger: that "public policy could itself become the captive of a scientific-technological elite."
The existence and power of just such a scientific-technological elite is undeniable today. On the one side are the free-market ideologues, those acolytes of Friedman, Hayek, and Coase, who insist that policy be geared toward rational, self-regulating, economic actors. That real people do not conform to theories of rational behavior is a problem with the people, not the theories.
On the other side are the welfare-state adherents, who insist on governmental support not only for the poor, but also for the working classes, the bankers, and corporations. The sad fact that 50 years of anti-poverty programs have not alleviated poverty, or that record spending on education has seen educational attainment decrease rather than increase, is taken as no argument against technocratic-governmental solutions. It just means more money and more technical know-how are needed.
It is simply amazing that people in academia can actually defend the current system that we are part of. Of course there are good schools and fine teachers and serious students. But we all know the system is a failure. Graduate students are without prospects; faculty spend their time publishing articles and books that no one reads; administrators earn ever more, sometimes twelve times as much as full professors, and come more and more to serve as the lifeblood of universities; and it is the rare student who, amidst the large classes, absent faculty, and social and financial pressures, somehow makes college an intellectual experience.
The idea and practice of college needs to be re-imagined and re-thought. Entrenched interests will oppose this. But at this point the system is so broken that it simply cannot survive. On a financial level, large numbers of universities are being kept afloat by the largesse of federal student loans. If those loans were to dry up, many colleges would close or at the least shrink greatly. This should not happen. And yet, putting our young people $1 trillion in debt is not an answer. For too long we have been paying for our lifestyles with borrowed money. We are now used to our inflated lifestyles and unwilling to give them up. Something will have to give.
The current cost of a college education is unsustainable except at the very top schools, which attract the very richest students who then fund endowments that allow those schools to subsidize economic, national, and racial diversity. For schools that cannot attract the wealthiest or do not have endowments that protect them from market forces, change will have to come. In many instances this will mean that faculty salaries will decrease and costs will come down. At other colleges, costs will rise and a university education will become ever less accessible. Either way, the conviction that everyone needs a liberal arts degree will probably be revised.
I have no crystal ball showing where this will all lead. But there are better and worse ways the change can come, and I for one hope that if we begin thinking honestly about it now, the future will be more palatable. This is the debate we need to have.
Rebecca Thomas has a long and thoughtful response to my post on Garry Kasparov's article on computers, chess, and humanity. The whole comment is worth reading. But here is how she begins:
Regarding the first of the three comments, I have to take issue with the idea that a chess game played by a computer is necessarily less beautiful than one played by a human. There are various kinds of beauty, and mathematical beauty is a very real thing. Some proofs are more elegant than others, for instance. Some x-y curves are quite beautiful, and often these are captured by particularly compact mathematical expressions. One could wonder why: is this preference for simplicity inherent in our idea of beauty, or have we preferentially developed (mathematical) language to describe things we find beautiful?
Surely, there is beauty to math. And there are various kinds of beauty. An efficient, powerful, and unstoppable game of chess played by a computer may be truly beautiful in its reduction of complexity to simplicity.
The point Kasparov makes is not that rationality cannot be beautiful, but that it changes the idea of beauty in chess. Bold, risky, daring moves have always been valued in the world of chess. Chess, despite its rational reputation, has had an emotional and adventurous side. Against computers, however, or even against humans aided by computers, such risks rarely succeed, and thus they are devalued.
Chess changes. It becomes less quirky, less risky, and more rational. I don't think it wrong to say that chess becomes less human. Chess, in the age of computer chess, loses the valuation of a particularly human beauty, even if it might reflect an impeccably beautiful mathematical rationality.
It need not be that mathematical beauty is inferior to human beauty, as Rebecca suggests I must mean, but simply that the elimination of human beauty as a meaningful option in chess is to be regretted.
Rebecca's second, related point concedes that chess players are internalizing the values of computer chess and thus playing more and more like computers. But, she argues, this is not so new or so bad. Chess has changed before. New theories of chess emerge all the time. Why is this different? Why is this change, she might ask, the tipping point that makes chess less human?
I think the answer is the one given above. The values and approach to chess this particular change inaugurates take one element of chess--its rationality--and elevate it into the only relevant element of chess. All competing theories are judged by their ability to succeed against a hyper-rational strategy, and they will eventually be found wanting. Those who play chess (as opposed to making art with chess pieces) will succumb to the values of computerized chess. While earlier theories of chess may have aspired to complete dominance, only a purely rational computer chess can achieve that aim.