Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Peter Ludlow in The Stone remarks on the generational divide in attitudes towards whistleblowers, leakers, and hackers. According to Time Magazine, “70 percent of those age 18 to 34 sampled in a poll said they believed that Snowden “did a good thing” in leaking the news of the National Security Agency’s surveillance program.” Ludlow agrees and cites Hannah Arendt’s portrait of Adolf Eichmann for support: “In “Eichmann in Jerusalem,” one of the most poignant and important works of 20th-century philosophy, Hannah Arendt made an observation about what she called “the banality of evil.” One interpretation of this holds that it was not an observation about what a regular guy Adolf Eichmann seemed to be, but rather a statement about what happens when people play their “proper” roles within a system, following prescribed conduct with respect to that system, while remaining blind to the moral consequences of what the system was doing — or at least compartmentalizing and ignoring those consequences.” Against those who argue that it is hubris for leakers to make the moral decision to expose wrongdoing, Ludlow insists: “For the leaker and whistleblower the answer to Bolton is that there can be no expectation that the system will act morally of its own accord. Systems are optimized for their own survival and preventing the system from doing evil may well require breaking with organizational niceties, protocols or laws. It requires stepping outside of one’s assigned organizational role.” Roger Berkowitz judges Ludlow’s use of Arendt in the Weekend Read.
Two years on, Rebecca Solnit reflects on the failure of Occupy Wall Street. It is difficult to deny that failure. Yet "change," Solnit writes, "is rarely as simple as dominos. Sometimes, it’s as complex as chaos theory and as slow as evolution. Even things that seem to happen suddenly turn out to be flowers that emerge from plants with deep roots in the past or sometimes from long-dormant seeds." Solnit is not so sure that Occupy will prove to be as unsuccessful as it has seemed so far. It may be that the experience of acting and speaking in public left the occupiers with a feeling for the empowering nature of speech. And it may be that these newly empowered speakers have simply moved on to other movements. Or maybe, as did the Woodstock generation, they will grow up, move on, and retreat into their private lives. The protestors are gone. Zuccotti Park sits unoccupied. But the experience of public action and the sense of injustice in the face of unprecedented income inequality live on, which means that Occupy is still a story without an end. It has not failed—at least not yet.
In a wide-ranging interview conducted by a former student of hers, Marilynne Robinson opens up about what she finds dangerous in contemporary thinking: "I think there are limits to how safe a progressive society can be when its conception of the individual seems to be shrinking and shrinking. It’s very hard to respect the rights of someone you do not respect. I think that we have almost taught ourselves to have a cynical view of other people. So much of the scientism that I complain about is this reductionist notion that people are really very small and simple. That their motives, if you were truly aware of them, would not bring them any credit. That’s so ugly. And so inimical to the best of everything we’ve tried to do as a civilization and so consistent with the worst of everything we’ve ever done as a civilization." There are few writers today who speak so forcefully and so insightfully.
In an interview, Norwegian writer Karl Ove Knausgård, author of the suggestively titled six-book autobiographical series My Struggle, talks about the recent evolution of shame and the role he thinks it plays in writing: "It’s constructed for social purposes, to protect us and make us behave well to others. But for me, the shame has become a bit extreme. However, if you take for example my mother, you’ll see that she’s driven by moral values – meaning that you should behave and shouldn’t behave in certain ways, and not trespass any limits. If you go back further, to my grandmother, you’ll see that she’s even more like that: driven by shame and the thought that you shouldn’t think you’re someone special… but now, society has become almost shameless. That’s actually good since it gives a kind of freedom. We consider the old, functionless shame destructive. Today, if you have a strong sense of shame you also have a strong desire to overcome it. And that’s when you can write."
Drones, Killer Robots and Push-Button Wars
A Conversation with Roger Berkowitz and Peter Asaro
Learn more here.
The sixth annual fall conference, "Failing Fast: The Educated Citizen in Crisis"
Olin Hall, Bard College
Learn more here.
Hannah Arendt: Film Screening, Lecture, and Discussion with Roger Berkowitz
One Day University
Learn more here.
Thomas Levin of Princeton came to Bard Tuesday to give a lecture to the Drones Seminar, a weekly class I am participating in, led by my colleague Thomas Keenan and conceived by two of our students, Arthur Holland and Dan Gettinger. Levin has studied surveillance techniques for years and he came to think with us about how the present obsession with drones will transform our landscape and our imaginations. At a time when media coverage of drones is focused on their offensive capacities, it is important to recall that drones were originally developed as a surveillance technology. If drones are to become omnipresent in our lives, what will that mean?
Levin began by reminding us of the embrace of other surveillance devices in mass culture, like recording devices at the turn of the 20th century. He offered old postcards and cartoons in which unsuspecting servants or children were caught goofing off or insulting their superiors with newfangled recording devices like the cylinder phonograph and, later, hidden cameras and spy satellites. The realization emerges that we are being watched, and this sense pervades the popular consciousness. In looking to these representations from mass culture of the fear, awareness, and even expectation that we will be watched and listened to, Levin finds the emergence of what he calls “rhetoric of surveillance.”
In short, we talk and think constantly about the fact that we are, or may be, being watched. This cannot but change the way we behave and act. Levin thus poses a question: what, he asks, is the emerging drone imaginary?
To answer that question it is helpful to revisit an uncannily prescient imagination of the rise of drones in a text written over half a century ago, Ernst Jünger’s The Glass Bees. Originally published in 1957 and recently reissued in translation with an introduction by science fiction novelist Bruce Sterling, Jünger’s text centers on a job interview between an unnamed former light cavalry officer and Giacomo Zapparoni, the secretive, filthy rich, and powerful proprietor of The Zapparoni Works that “manufactured robots for every imaginable purpose.” Zapparoni’s secret, however, is that instead of big and hulking robots, he specialized in Lilliputian robots that gave “the impression of intelligent ants.”
The robots were not powerful in themselves, but they worked together. Like drone bees and drone ants—which exist only for procreation and then die—the small robots, or drones, serve specific purposes in industry or business. Zapparoni’s tiny robots “could count, weigh, sort gems or paper money….” Their power came from their coordination.
The robots “worked in dangerous locations, handling explosives, dangerous viruses, and even radioactive materials. Swarms of selectors could not only detect the faintest smell of smoke but could also extinguish a fire at an early stage; others repaired defective wiring, and still others fed upon filth and became indispensable in all jobs where cleanliness was essential.” Dispensable and efficient, Zapparoni’s little robots could do the most dangerous and least desirable tasks.
In The Glass Bees, we are introduced to Zapparoni’s latest invention: flying glass bees that can pollinate flowers much more efficiently and quickly than natural bees. The bees “were about the size of a walnut still encased in its green shell.” They were completely transparent and they were an improvement upon nature, at least insofar as the pollination of flowers was concerned. If a true or natural bee “sucked first on the calyx, at least a dessert remained.” But Zapparoni’s glass bees “proceeded more economically; that is, they drained the flower more thoroughly.” What is more, the bees were a marvel of agility and skill: “Given the flying speed, the fact that no collisions occurred during these flights back and forth was a masterly feat.” According to the cavalry officer, “It was evident that the natural procedure had been simplified, cut short, and standardized.”
Before our hero is introduced to Zapparoni’s bees, he is given a warning: “Beware of the bees!” And yet he forgets this warning. Watching the glass bees, the cavalry officer is fascinated. He felt himself “come under the spell of the deeper domain of techniques,” which like a spectacle “both enthralled and mesmerized.” His mind, he writes, went to sleep and he “forgot time” and “also entirely forgot the possibility of danger.”
Jünger’s book tells, in part, the story of our fascination and subjection to technologies of surveillance. On Facebook or Words with Friends, or even using our smart phones or GPS systems, we allow our fascination with technology to dull our sense of its danger. As Jünger writes: “Technical perfection strives toward the calculable, human perfection toward the incalculable. Perfect mechanisms—around which, therefore, stands an uncanny but fascinating halo of brilliance—evoke both fear and a titanic pride which will be humbled not by insight but only by catastrophe.”
The protagonist of The Glass Bees, a former member of the Light Cavalry and later a tank inspector, had once been fascinated by the “succession of ever new models becoming obsolete at an ever increasing speed, this cunning question-and-answer game between overbred brains.” What he came to see is that “the struggle for power had reached a new stage; it was fought with scientific formulas. The weapons vanished in the abyss like fleeting images, like pictures one throws into the fire. New ones were produced in protean succession.” Victory ceased to be about physical battle; it became, instead, a contest of technical mastery and knowledge.
The danger drones pose is not necessarily military. As General Stanley McChrystal rightly said when I asked him about this last week at the New York Historical Society, drones are simply another military tool that can be used for good or ill. Many fret today about collateral damage by drones and forget that if we had to send in armies to do these tasks the collateral damage would be much greater. Others worry about assassination, but drones are simply the tool, not the person pulling the trigger. It may be true that having drones when others don’t offers an enormous military advantage and makes the decision to kill easier, but when both sides have drones, we will all think twice before beginning a cycle of illegal assassinations.
Rather, the danger of drones is how they change us as humans. As we humans interact more regularly with drones and machines and computers, we will inevitably come to expect ourselves and our friends and our colleagues and our lovers to act with the efficiency and selflessness of drones. Sherry Turkle worries that mechanical companions offer such fascination and unquestionable love that humans are beginning to prefer spending time with their machines rather than with other humans—who make demands, get tired, act cranky, and disappoint us. Ron Arkin has argued that robot soldiers will be more humane at war than human soldiers, who often act rashly out of exhaustion, anger, or revenge. Doctors are learning to rely on Watson and artificially intelligent medical machines, which can bring databases of knowledge to bear on diagnoses with the speed and objectivity that humans can only dream of. In every area of human life where humans once were thought to be necessary, drones and machines are proving more reliable, more capable, and more desirable.
The danger drones represent is not what they do better than humans, but that they do it better than humans. They are a further step in the human dream of self-improvement—the desire to overcome our shame at our all-too-human limitations.
The incredible popularity of drones today is partly a result of their freeing us to fight wars with ever-reduced human and economic costs. But drones are popular also because they appeal to the human desire for perfection. The question is, however, how perfect we humans can be before we begin to lose our humanity. That is, of course, the force of Jünger’s warning: Beware of the bees!
As drones appear everywhere around us, you would do well to put down the newspaper and turn off YouTube and, instead, revisit Ernst Jünger’s classic tale of drones. The Glass Bees is your weekend read. You can read Bruce Sterling’s introduction to The Glass Bees here.
China has embraced the idea of a Western college education in a big way. As the NY Times reported recently, the country is making a $250 billion-a-year investment designed to give millions of young Chinese citizens a college education. “Just as the United States helped build a white-collar middle class in the late 1940s and early 1950s by using the G.I. Bill to help educate millions of World War II veterans, the Chinese government is using large subsidies to educate tens of millions of young people as they move from farms to cities.”
But for most of these newly minted college graduates, jobs are scarce. One reason is that these graduates often have few marketable skills and they refuse to take the jobs that actually exist. What China needs are people to work in factories. But for college graduates, factory work has little or even no allure.
Consider the case of Wang Zengsong.
Wang Zengsong is desperate for a steady job. He has been unemployed for most of the three years since he graduated from a community college here after growing up on a rice farm. Mr. Wang, 25, has worked only several months at a time in low-paying jobs, once as a shopping mall guard, another time as a restaurant waiter and most recently as an office building security guard.
But he will not consider applying for a full-time factory job because Mr. Wang, as a college graduate, thinks that is beneath him. Instead, he searches every day for an office job, which would initially pay as little as a third of factory wages.
“I have never and will never consider a factory job — what’s the point of sitting there hour after hour, doing repetitive work?” he asked.
This story is actually not unique to China. In the United States too, we hear repeatedly that small businesses are unable to expand because they cannot find qualified workers. The usual refrain is that high school graduates don’t have the skills. Rarely asked is why college graduates don’t apply. I assume the reason is the same as in China. College graduates see production work as beneath them.
Plenty of college graduates, many with debt, are interning for free or working odd jobs that pay little; yet they do not even consider learning a skill and taking a job that would require them to build something. Just like their comrades in China, these young people identify as knowledge workers, not as fabricators. For them, a job making things is seen as a step down. Something that is beneath them.
Disdain for manual labor combined with respect for cognitive work is the theme of Matthew B. Crawford’s book Shop Class as Soulcraft, based on his article by the same name that appeared in 2006 in The New Atlantis. Crawford’s writing is rich and his thinking profound. But boiled down, I took three main points from his book and article.
First, there is a meaningful and thoughtful component to manual labor. To make something is not thoughtless, but requires both skill and intelligence. This is true if you are building a table, where you must think about the shape, functionality, and aesthetics of a table. But even in factory work, there is the challenge of figuring out how to do something better. And in the modern factory, labor demands technical skill, problem solving, and creativity. Whether you are building a house or making a battery, making things requires thought. What is more, it is good for the soul. Here is how Crawford writes about the soul benefits of craft:
Hobbyists will tell you that making one’s own furniture is hard to justify economically. And yet they persist. Shared memories attach to the material souvenirs of our lives, and producing them is a kind of communion, with others and with the future. Finding myself at loose ends one summer in Berkeley, I built a mahogany coffee table on which I spared no expense of effort. At that time I had no immediate prospect of becoming a father, yet I imagined a child who would form indelible impressions of this table and know that it was his father’s work. I imagined the table fading into the background of a future life, the defects in its execution as well as inevitable stains and scars becoming a surface textured enough that memory and sentiment might cling to it, in unnoticed accretions. More fundamentally, the durable objects of use produced by men “give rise to the familiarity of the world, its customs and habits of intercourse between men and things as well as between men and men,” as Hannah Arendt says. “The reality and reliability of the human world rest primarily on the fact that we are surrounded by things more permanent than the activity by which they were produced, and potentially even more permanent than the lives of their authors.”
Arendt values those who make things, especially things that last, because lasting objects give permanence to our world. And such workers who make things are above all thinkers in her understanding. Work is the process of transfiguring the idea of something into a real and reliable object.
But even laborers who make consumable goods are, for Arendt, engaged in a deeply human activity. To be human has meant, from time immemorial, also to labor, to produce the goods one needs to live. A life without labor is impoverished and “the blessing of labor is that effort and gratification follow each other as closely as producing and consuming the means of subsistence.” Granted, in repetitive factory labor these blessings may seem obscure, but then again, Dilbert has taught us much about the supposed blessings of office work as well.
Second, Crawford tells the story of how schools in the U.S. have done away with shop classes, home economics, and auto-repair, all classes I and many others took in junior high and high school. In the pursuit of college preparation, education has ceased to value the blessings of labor and work.
Third, Crawford argues that in a global economy it will be work with our hands and not just work with our brains that pays well. When legal analysis can be outsourced or replaced by robots as easily as phone operators, the one kind of job that will remain necessary for humans is repair work, fixing things, and building things. Such work requires a combination of mental and physical dexterity that machines are unlikely to match for a very long time. Thus, Crawford argues that by emptying our schools of training in handwork, we are not only intellectually impoverishing our students, but also failing to train them for the kinds of jobs that will actually exist in the future.
Many of my students might now agree. I have former students who wrote excellent senior theses on Emerson and Heidegger and are now working on organic farms or learning the trade of gourmet cheese production. Others are making specialty furniture. One is even making a new custom-built conference table for the Hannah Arendt Center here at Bard. These students love what they do and are making good livings doing it. They are enriching the world with meaningful objects and memories that they are producing, things they can share as gifts and sell with pride.
Many of the best jobs out there now are in the specialty craft areas. These jobs require thought and creativity, but also experience with craftsmanship and labor. Crawford does not argue against training people well in the liberal arts, but he does raise important questions about our valuation of intellectual over manual labor. We here in the U.S. as well as our friends in China should pay attention. Perhaps we need to rethink our intellectual aversion to production. Maybe we should even begin again to teach crafts and skills in school.
Crawford will be speaking at the next Hannah Arendt Center Conference “The Educated Citizen” on Oct. 3-4, at Bard College. We invite you to join us. Until then, I commend to you his book or at least his essay; Shop Class as Soulcraft is your weekend read.
The Wall Street Journal ran an interview this week with Luke Muehlhauser, the Executive Director of the Singularity Institute. The Journal asked: Will Artificial Intelligence Make Us Obsolete? Muehlhauser's answer was, well, yes. In his words:
Cognitive science has discovered that everything the human mind does is done by information processing and machines can do information processing too.
The first statement is clearly false, or at least depends on a strangely mixed up idea of "information processing." The old determinist canard that humans are simply complex machines has not been proven or discovered by cognitive science. And even if humans do process billions upon billions of bits of information it is not at all clear that such a humanly fallible process is reproducible. That is not the claim that cognitive science can make.
But cognitive science can claim that machines can be built that act in ways so like humans as to be nearly indistinguishable from them. Or, they can even be better than humans at many quintessentially human tasks. So machines can not only beat humans at chess, they can make moves that seem like moves only a human could have made, as Gary Kasparov learned to his dismay in the second game of his rematch with Deep Blue. Machines can create paintings that appear to be fully creative, as does Aaron, the painting machine created by artist and computer scientist Harold Cohen. And machines can increasingly make ethical decisions in warfare, as the robo-ethicist Ron Arkin has argued—decisions that are more humane than those made by human warriors.
Too much of the debate over artificial intelligence is caught up in the technical and really irrelevant question of whether machines can fully replicate human beings. The point is that if machines act "as if" they are human, or if they are capable of doing what humans do better than humans, we will gradually and continually allow machines to take over more and more of the basic human activities that make up our world. Already computers make most of the trades on Wall Street and computers are increasingly used in making medical diagnoses. Computers are being used to educate our children and write news stories. Caregivers for the elderly are being replaced by robotic companions. And David Levy, artificial intelligence researcher at the University of Maastricht in the Netherlands, argues that we will be marrying robots in the near future. It is not that these robotic lovers or artificial artists are human, but that they love and paint in ways that do or will soon pass the Turing test: they will be impossible to distinguish from human works.
Undoubtedly one reason machines are acting more human is that humans themselves are acting less so. As we interact more and more with machines, we begin to act predictably, repetitively, and less surprisingly. There is a convergence afoot, and it is the dehumanization of human beings as much as the humanization of robots that should be worrying us.
Read Hannah Arendt's seminal 1963 essay, "The Conquest of Space and the Stature of Man".
According to Ron Arkin, a Professor of Computer Technology at Georgia Tech and Director of the Georgia Tech Mobile Robot Lab, "It is fairly easy to make robots that behave more ethically and humanely than humans on the battlefield."
What does it mean to be human? According to Roger Berkowitz, "Being human is the free and thoughtful effort to make common judgments together."
Watch Roger Berkowitz’s July 4th TED Talk in East Hampton in which he discusses the growing desire to have technology oversee what once belonged exclusively to the province of the individual mind: man’s capacity to judge. Berkowitz addresses the increasing and alarming reliance on robotic decision makers in the fields of medicine and military strategy. How, he asks, does our desire to eliminate human fallibility disturb the fundamental landscape of personal and political life?
Roger Berkowitz writes and speaks frequently on the challenge that technological utopianism poses to humanity.
Human Being in an Inhuman Age. (Published in the inaugural edition of HA, the Journal of the Hannah Arendt Center at Bard College).
Exploring the Human Condition. (Video)
John Markoff has a new installment in The New York Times Smarter Than You Think series today, The Boss Is Robotic, and Rolling Up Behind You. After an earlier article looking at the use of robots in the classroom, here Markoff looks at the use of robots to enhance and expand the reach of those in higher-level management positions. The importance of these articles is that the robots Markoff is investigating are not for low-level menial tasks like factory work or giving solace to elderly patients. The great change coming to our economy and our lives is that the automation of handwork that has hollowed out the lives of so many lower-class laborers is now coming to the professions usually thought immune to the threats of automation. As robots get smarter and more mobile, the human advantages of thinking and walking are being whittled away.
With the help of the RP-7i, a robot from InTouch Health, Dr. Alan Shatzel can sit at home and roll into a patient's room at any hospital where an RP-7i is stationed.
The advantage of the RP-7i is that the doctor can "be in the room," not only hearing and seeing as if on a teleconference call, but being present via what is referred to as "telepresence." The doctor can speak with the patient, zoom in on the monitors, and note the way the patient uses his hands or curls his lips. As Dr. John Whapham, who also uses an RP-7i, says of the experience:
You're live, and you can walk around, examine, image, zoom in and out. I do it all the time.
Markoff explores a number of these new telepresence robots and notes that these robots offer the promise of enhancing the work of doctors as well as other professionals. These professionals will be freed from their physical offices even more so than they are currently.
In addition, they will be able to work in many locations at once. From an economic perspective, one can easily imagine a hospital or chain of hospitals reducing the number of chief surgeons from, say, 10 to 5, as those five now sit in a control room monitoring different groups of patients via different telepresences in different hospitals. Whereas for centuries automation was largely seen as a threat to lower and menial workers, advances in technology are now threatening to transform the work of the most highly educated elite.
Finally, these telepresence robots are not mere cost-cutting devices, although they are that as well.
For now, most of the mobile robots, sometimes called telepresence robots, are little more than ventriloquists' dummies with long, invisible strings. But some models have artificial intelligence that lets them do some things on their own, and they will inevitably grow smarter and more agile. They will not only represent the human users, they will augment them.
Soon these robots will, as Markoff writes, include artificial intelligence features that will enhance the surgeon's own human capacities. The robots will have vast data-storing capacities to access records of past procedures and scan a patient's entire medical history. There is little doubt that as these machines progress quickly, they will be second-guessing and advising the doctors who control them.
So what does it mean that the robots will augment their human users?
1. Economically, the world will have use for far fewer highly trained doctors. I have written about how robots are replacing teachers as well. This is part of the more general threat that computers and robots pose to the middle and even upper-middle classes in the next few decades. As my colleague Walter Russell Mead writes in his recent blog post:
The upper middle class benefited over the last generation from a rising difference between the living standards of professional and blue collar American workers. This is likely to change; from civil service jobs in government to university professors, lawyers, health care personnel, middle and upper middle management in the private sector, the upper-middle class is going to face a much harsher environment going forward. Automation, outsourcing and unremitting pressures to control costs are going to squeeze upper middle class incomes. What blue collar workers faced in the last thirty years is coming to the white collar workforce now.
2. Medical care will change as doctors work alongside artificial intelligence robots. Just as computer assisted chess players make fewer mistakes and take fewer chances so that more games end in draws, computer-assisted medicine will become more careful and proficient.
Those familiar with Hannah Arendt's work will recall her own certainty that the rise of automation would soon have an extraordinary impact on our world. Her worry was that humans today are simply not prepared for a life in which most of us will not have jobs because there will not be much left for humans to do that computers and robots cannot. Thus at the very time when automation promises to realize the ancient dream of freeing us from the necessity to labor, we humans don't know what to do with our time outside of our work. The threat of automation, she writes, is political as much as it is economic. But more on this later.
Read more of Markoff's article here.
This week I had lunch with an ex-student who is thinking about traveling to Korea to teach English. She told me that another of my students was in Korea now teaching English. And I just got an email from another former student asking for a law school recommendation. She has been, you guessed it, teaching English in Korea. It seems that the Korean government is doing a good job subsidizing my former students.
But this, according to today's NY Times, may soon change. South Korea is working to replace native English-speaking teachers with robots, which are cheaper and more reliable. South Korea now plans to deploy 8,400 robots in the nation's kindergartens by 2013. And budgetary pressures in the program to enlist native English speakers are leading the government to turn to robotic teachers.
A front page essay from the Smarter than You Think series also in today's NY Times explores the growing uses of robots in teaching at all levels. According to Benedict Carey and John Markoff, scientists around the world
are developing robots that can engage people and teach them simple skills, including household tasks, vocabulary or, ... elementary imitation and taking turns.
While they quote computer scientists who say that they have neither the intention nor the ability to replace human teachers, clearly budget-conscious schools and governments will seek to employ robots as teachers.
Teachers are threatened not only by robots, but also by electronic and distance education. A study last year for the US Department of Education found, to the great chagrin of many teachers and educators, that students in online learning conditions performed modestly better, on average, than students receiving face-to-face instruction.
The automation of the workforce is attacking the arts as well as teaching. As Paul Woodiel writes on the Times Op-Ed page today, Broadway's musicians and violinists are being replaced by synthesizers.
One question rarely asked in such discussions is: "What is good teaching?" Or: "What is great music, and what does it teach?" It may be that robots and computers are indeed better at teaching basic skills and customizing learning for individual students. But are electronic synthesizers better at playing the violins on the Great White Way?
But what seems, at least at this point, beyond the reach of robotic teaching is the flash of inspiration that opens a student's mind to the beauty and truth of the world. Then again, most students don't want such teaching--just as most Broadway theatergoers don't need the human touch of the violin--which may mean that there are quite a few job openings for professor bots and synthesizers around the world.