“Most of the mistakes in thinking are inadequacies of perception rather than mistakes of logic.”
— Edward de Bono
(Featured Image: Edward de Bono; Source: Pensamiento Lateral)
Edward de Bono's Biography
Dr. Edward de Bono is the world's leading authority on conceptual thinking as the driver of organizational innovation, strategic leadership, individual creativity, and problem solving. Since 1970 his exclusive tools and methods have brought astonishing results to organizations large and small worldwide and to individuals from a wide range of cultures, educational backgrounds, occupations, and age groups. Dr. de Bono delivers the advanced training solutions that are greatly needed for success in these challenging times.
(Sourced from de Bono Thinking Systems)
For more information about de Bono's life and work, please click here.
"The end of the old is not necessarily the beginning of the new."
— Hannah Arendt, The Life of the Mind
This is a simple enough statement, and yet it masks a profound truth, one that we often overlook out of the very human tendency to seek consistency and connection, to make order out of the chaos of reality, and to ignore the anomalous nature of that which lies in between whatever phenomena we are attending to.
Perhaps the clearest example of this has been what proved to be the unfounded optimism that greeted the overthrow of autocratic regimes through American intervention in Afghanistan and Iraq, and the native-born movements known collectively as the Arab Spring. It is one thing to disrupt the status quo, to overthrow an unpopular and undemocratic regime. But that end does not necessarily lead to the establishment of a new, beneficent and participatory political structure. We see this time and time again, now in Putin's Russia, a century ago with the Russian Revolution, and over two centuries ago with the French Revolution.
Of course, it has long been understood that oftentimes, to begin something new, we first have to put an end to something old. The popular saying that you can't make an omelet without breaking a few eggs reflects this understanding, although it is certainly not the case that breaking eggs will inevitably and automatically lead to the creation of an omelet. Breaking eggs is a necessary but not sufficient cause of omelets, and while this is not an example of the classic chicken and egg problem, I think we can imagine that the chicken might have something to say on the matter of breaking eggs. Certainly, the chicken would have a different view on what is signified, or ought to be signified, by the end of the old, meaning the end of the egg shell, insofar as you can't make a chicken without it first breaking out of the egg within which it took form.
So, whether you take the chicken's point of view, or adopt the perspective of the omelet, looking backwards, reverse engineering the current situation, it is only natural to view the beginning of the new as an effect brought into being by the end of the old, to assume or make an inference based on sequencing in time, to posit a causal relationship and commit the logical fallacy of post hoc ergo propter hoc, if for no other reason than the force of narrative logic that compels us to create a coherent storyline. In this respect, Arendt points to the foundation tales of ancient Israel and Rome:
We have the Biblical story of the exodus of Israeli tribes from Egypt, which preceded the Mosaic legislation constituting the Hebrew people, and Virgil's story of the wanderings of Aeneas, which led to the foundation of Rome—"dum conderet urbem," as Virgil defines the content of his great poem even in its first lines. Both legends begin with an act of liberation, the flight from oppression and slavery in Egypt and the flight from burning Troy (that is, from annihilation); and in both instances this act is told from the perspective of a new freedom, the conquest of a new "promised land" that offers more than Egypt's fleshpots and the foundation of a new City that is prepared for by a war destined to undo the Trojan war, so that the order of events as laid down by Homer could be reversed.
Fast forward to the American Revolution, and we find that the founders of the republic, mindful of the uniqueness of their undertaking, searched for archetypes in the ancient world. And what they found in the narratives of Exodus and the Aeneid was that the act of liberation and the establishment of a new freedom are two events, not one, in effect subject to Alfred Korzybski's non-Aristotelian Principle of Non-Identity. The success of the formation of the American republic can be attributed to the founders' awareness of the chasm that exists between the closing of one era and the opening of a new age, of their separation in time and space:
No doubt if we read these legends as tales, there is a world of difference between the aimless desperate wanderings of the Israeli tribes in the desert after the Exodus and the marvelously colorful tales of the adventures of Aeneas and his fellow Trojans; but to the men of action of later generations who ransacked the archives of antiquity for paradigms to guide their own intentions, this was not decisive. What was decisive was that there was a hiatus between disaster and salvation, between liberation from the old order and the new freedom, embodied in a novus ordo saeclorum, a "new world order of the ages" with whose rise the world had structurally changed.
I find Arendt's use of the term hiatus interesting, given that in contemporary American culture it has largely been appropriated by the television industry to refer to a series that has been taken off the air for a period of time, but not cancelled. The typical phrase is on hiatus, meaning on a break or on vacation. But Arendt reminds us that such connotations only scratch the surface of the word's broader meanings. The Latin word hiatus refers to an opening or rupture, a physical break or missing part or link in a concrete material object. As such, it becomes a spatial metaphor when applied to an interruption or break in time, a usage introduced in the 17th century. Interestingly, this coincides with the period in English history known as the Interregnum, which began in 1649 with the execution of King Charles I, led to Oliver Cromwell's installation as Lord Protector, and ended after Cromwell's death with the Restoration of the monarchy under Charles II, son of Charles I. While in some ways anticipating the American Revolution, the English Civil War followed an older pattern, one that Mircea Eliade referred to as the myth of eternal return, a circular movement rather than the linear progression of history and cause-effect relations.
The idea of moving forward, of progress, requires a future-orientation that only comes into being in the modern age, by which I mean the era that followed the printing revolution associated with Johannes Gutenberg (I discuss this in my book, On the Binding Biases of Time and Other Essays on General Semantics and Media Ecology). But that same print culture also gave rise to modern science, and with it the monopoly granted to efficient causality, cause-effect relations, to the exclusion in particular of final and formal cause (see Marshall and Eric McLuhan's Media and Formal Cause). This is the basis of the Newtonian universe in which every action has an equal and opposite reaction, and every effect can be linked back in a causal chain to another event that preceded it and brought it into being. The view of time as continuous and connected can be traced back to the introduction of the mechanical clock in the 13th century, but was solidified through the printing of calendars and time lines, and the same effect was created in spatial terms by the reproduction of maps, and the use of spatial grids, e.g., the Mercator projection.
And while the invention of history, as a written narrative of linear progression over time, can be traced back to the ancient Israelites and the story of the exodus, that story incorporates the idea of a hiatus in overlapping structures:
A1. Joseph is the golden boy, the son favored by his father Jacob, earning him the enmity of his brothers
A2. he is sold into slavery by them, winds up in Egypt as a slave and then is falsely accused and imprisoned
A3. by virtue of his ability to interpret dreams he gains his freedom and rises to the position of Pharaoh's prime minister
B1. Joseph welcomes his brothers and father, and the House of Israel goes down to Egypt to sojourn due to famine in the land of Canaan
B2. their descendants are enslaved, oppressed, and persecuted
B3. Moses is chosen to confront Pharaoh, liberate the Israelites, and lead them on their journey through the desert
C1. the Israelites are freed from bondage and escape from Egypt
C2. the revelation at Sinai fully establishes their covenant with God
C3. after many trials, they return to the Promised Land
It can be clearly seen in these narrative structures that the role of the hiatus, in ritual terms, is that of the rite of passage, the initiation period that marks, in symbolic fashion, the change in status, the transformation from one social role or state of being to another (e.g., child to adult, outsider to member of the group). This is not to discount the role that actual trials, tests, and other hardships may play in the transition, as they serve to establish or reinforce, psychologically and sometimes physically, the value and reality of the transformation.
In mythic terms, this structure has become known as the hero's journey or hero's adventure, made famous by Joseph Campbell in The Hero with a Thousand Faces, and also known as the monomyth, because he claimed that the same basic structure is universal to all cultures. The basic structure he identified consists of three main elements: separation (e.g., the hero leaves home), initiation (e.g., the hero enters another realm, experiences tests and trials, leading to the bestowing of gifts, abilities, and/or a new status), and return (the hero returns to utilize what he has gained from the initiation and save the day, restoring the status quo or establishing a new status quo).
Understanding the mythic, non-rational element of initiation is the key to recognizing the role of the hiatus, and in the modern era this meant using rationality to realize the limits of rationality. With this in mind, let me return to the quote I began this essay with, but now provide the larger context of the entire paragraph:
The legendary hiatus between a no-more and a not-yet clearly indicated that freedom would not be the automatic result of liberation, that the end of the old is not necessarily the beginning of the new, that the notion of an all-powerful time continuum is an illusion. Tales of a transitory period—from bondage to freedom, from disaster to salvation—were all the more appealing because the legends chiefly concerned the deeds of great leaders, persons of world-historic significance who appeared on the stage of history precisely during such gaps of historical time. All those who pressed by exterior circumstances or motivated by radical utopian thought-trains, were not satisfied to change the world by the gradual reform of an old order (and this rejection of the gradual was precisely what transformed the men of action of the eighteenth century, the first century of a fully secularized intellectual elite, into the men of the revolutions) were almost logically forced to accept the possibility of a hiatus in the continuous flow of temporal sequence.
Note the concept of gaps in historical time, which brings to mind Eliade's distinction between the sacred and the profane. Historical time is a form of profane time, and sacred time represents a gap or break in that linear progression, one that takes us outside of history, connecting us instead in an eternal return to the time associated with a moment of creation or foundation. The revelation at Sinai is an example of such a time, and accordingly Deuteronomy states that all of the members of the House of Israel were present at that event, not just those alive at that time, but also the generations of the future. This statement is included in the liturgy of the Passover Seder, which is a ritual reenactment of the exodus and revelation, which in turn becomes part of the reenactment of the Passion in Christianity, one of the primary examples of Campbell's monomyth.
Arendt's hiatus, then, represents a rupture between two different states or stages, an interruption, a disruption linked to an eruption. In the parlance of chaos and complexity theory, it is a bifurcation point. Arendt's contemporary, Peter Drucker, a philosopher who pioneered the scholarly study of business and management, characterized the contemporary zeitgeist in the title of his 1969 book: The Age of Discontinuity. It is an age in which Newtonian physics was replaced by Einstein's relativity and Heisenberg's uncertainty, the phrase quantum leap becoming a metaphor drawn from subatomic physics for all forms of discontinuity. It is an age in which the fixed point of view that yielded perspective in art and the essay and novel in literature yielded to Cubism and subsequent forms of modern art, and stream of consciousness in writing.
Beginning in the 19th century, photography gave us the frozen, discontinuous moment, and the technique of montage in the motion picture gave us a series of shots and scenes whose connections have to be filled in by the audience. Telegraphy gave us the instantaneous transmission of messages that took them out of their natural context, the subject of the famous comment by Henry David Thoreau that connecting Maine and Texas to one another will not guarantee that they have anything sensible to share with each other. The wire services gave us the nonlinear, inverted pyramid style of newspaper reporting, which also was associated with the nonlinear look of the newspaper front page, a form that Marshall McLuhan referred to as a mosaic. Neil Postman criticized television's role in decontextualizing public discourse in Amusing Ourselves to Death, where he used the phrase, "in the context of no context," and I discuss this as well in my recently published follow-up to his work, Amazing Ourselves to Death.
The concept of the hiatus comes naturally to the premodern mind, schooled by myth and ritual within the context of oral culture. That same concept is repressed, in turn, by the modern mind, shaped by the linearity and rationality of literacy and typography. As the modern mind yields to a new, postmodern alternative, one that emerges out of the electronic media environment, we see the return of the repressed in the idea of the jump cut writ large.
There is psychological satisfaction in the deterministic view of history as the inevitable result of cause-effect relations in the Newtonian sense, as this provides a sense of closure and coherence consistent with the typographic mindset. And there is similar satisfaction in the view of history as entirely consisting of human decisions that are the product of free will, of human agency unfettered by outside constraints, which is also consistent with the individualism that emerges out of the literate mindset and print culture, and with a social rather than physical version of efficient causality. What we are only beginning to come to terms with is the understanding of formal causality, as discussed by Marshall and Eric McLuhan in Media and Formal Cause. What formal causality suggests is that history has a tendency to follow certain patterns, patterns that connect one state or stage to another, patterns that repeat again and again over time. This is the notion that history repeats itself, meaning that historical events tend to fall into certain patterns (repetition being the precondition for the existence of patterns), and that the goal, as McLuhan articulated in Understanding Media, is pattern recognition. This helps to clarify the famous remark by George Santayana, "Those who cannot remember the past are condemned to repeat it." In other words, those who are blind to patterns will find it difficult to break out of them.
Campbell engages in pattern recognition in his identification of the heroic monomyth, as Arendt does in her discussion of the historical hiatus. Recognizing these patterns is the first step in escaping them, and may even allow for the possibility of taking control and influencing them. This also means understanding that the tendency for phenomena to fall into patterns is a powerful one. It is a force akin to entropy, and perhaps a result of that very statistical tendency that is expressed by the Second Law of Thermodynamics, as Terrence Deacon argues in Incomplete Nature. It follows that there are only certain points in history, certain moments, certain bifurcation points, when it is possible to make a difference, or to make a difference that makes a difference, to use Gregory Bateson's formulation, and change the course of history. The moment of transition, of initiation, the hiatus, represents such a moment.
McLuhan's concept of medium goes far beyond the ordinary sense of the word, as he relates it to the idea of gaps and intervals, the ground that surrounds the figure, and explains that his philosophy of media is not about transportation (of information), but transformation. The medium is the hiatus.
The particular pattern that has come to the fore in our time is that of the network, whether it's the decentralized computer network and the internet as the network of networks, or the highly centralized and hierarchical broadcast network, or the interpersonal network associated with Stanley Milgram's research (popularly known as six degrees of separation), or the neural networks that define brain structure and function, or social networking sites such as Facebook and Twitter, etc. And it is not the nodes, which may be considered the content of the network, that define the network, but the links that connect them, which function as the network medium, and which, in the systems view favored by Bateson, provide the structure for the network system, the interaction or relationship between the nodes. What matters is not the nodes, it's the modes.
Hiatus and link may seem like polar opposites, the break and the bridge, but they are two sides of the same coin, the medium that goes between, simultaneously separating and connecting. The boundary divides the system from its environment, allowing the system to maintain its identity as separate and distinct from the environment, keeping it from being absorbed by the environment. But the membrane also serves as a filter, engaged in the process of abstracting, to use Korzybski's favored term, letting through or bringing material, energy, and information from the environment into the system so that the system can maintain itself and survive. The boundary keeps the system in touch with its situation, keeps it contextualized within its environment.
The systems view emphasizes space over time, as does ecology, but the concept of the hiatus as a temporal interruption suggests an association with evolution as well. Darwin's view of evolution as continuous was consistent with Newtonian physics. The more recent modification of evolutionary theory put forth by Stephen Jay Gould, known as punctuated equilibrium, suggests that evolution occurs in fits and starts, in relatively rare and isolated periods of major change, surrounded by long periods of relative stability and stasis. Not surprisingly, this particular conception of discontinuity was introduced during the television era, in the early 1970s, just a few years after the publication of Peter Drucker's The Age of Discontinuity.
When you consider the extraordinary changes that we are experiencing in our time, technologically and ecologically, the latter underlined by the recent news concerning the United Nations' latest report on global warming, it becomes clear that what we need is an understanding of the concept of change, a way to study the patterns of change. Such patterns exist and persist across different levels, the micro and the macro, the physical, chemical, biological, psychological, and social; Bateson referred to them as metapatterns, a subject further elaborated by biologist Tyler Volk in his book on the subject. Paul Watzlawick argued for the need to study change in and of itself in a little book co-authored with John H. Weakland and Richard Fisch, entitled Change: Principles of Problem Formation and Problem Resolution, which considers the problem from the point of view of psychotherapy. Arendt gives us a philosophical entrée into the problem by introducing the pattern of the hiatus, the moment of discontinuity that leads to change, and possibly a moment in which we, as human agents, can have an influence on the direction of that change.
To have such an influence, we do need to have that break, to find a space and more importantly a time to pause and reflect, to evaluate and formulate. Arendt famously emphasizes the importance of thinking in and of itself, the importance not of the content of thought alone, but of the act of thinking, the medium of thinking, which requires an opening, a time out, a respite from the onslaught of 24/7/365. This underscores the value of sacred time, and it follows that it is no accident that during that period of initiation in the story of the exodus, there is the revelation at Sinai and the gift of divine law, the Torah, chief among its commandments the Ten Commandments, including the fourth commandment, the one presented in greatest detail: to observe the Sabbath day. This premodern ritual requires us to make the hiatus a regular part of our lives, to break the continuity of profane time on a weekly basis. From that foundation, other commandments establish the idea of the sabbatical year, and the sabbatical of sabbaticals, or jubilee year. Whether it's a Sabbath mandated by religious observance, or a new movement to engage in a Technology Sabbath, the hiatus functions as the response to the homogenization of time that was associated with efficient causality and literate linearity, and that continues to intensify in conjunction with the technological imperative of efficiency über alles.
To return one last time to the quote that I began with, the end of the old is not necessarily the beginning of the new because there may not be a new beginning at all, there may not be anything new to take the place of the old. The end of the old may be just that, the end, period, the end of it all. The presence of a hiatus to follow the end of the old serves as a promise that something new will begin to take its place after the hiatus is over. And the presence of a hiatus in our lives, individually and collectively, may also serve as a promise that we will not inevitably rush towards an end of the old that will also be an end of it all, that we will be able to find the opening to begin something new, that we will be able to make the transition to something better, that both survival and progress are possible, through an understanding of the processes of continuity and change.
Hannah Arendt considered calling her magnum opus Amor Mundi: Love of the World. Instead, she settled upon The Human Condition. What is most difficult, Arendt writes, is to love the world as it is, with all the evil and suffering in it. And yet she came to do just that. Loving the world means neither uncritical acceptance nor contemptuous rejection. Above all it means the unwavering facing up to and comprehension of that which is.
Every Sunday, The Hannah Arendt Center Amor Mundi Weekly Newsletter will offer our favorite essays and blog posts from around the web. These essays will help you comprehend the world. And learn to love it.
Over at SCOTUSblog, Burt Neuborne writes that “American democracy is now a wholly owned subsidiary of Oligarchs, Inc.” The good news, Neuborne reminds, is that “this too shall pass.” After a fluid and trenchant review of the case and the recent decision declaring limits on aggregate giving to political campaigns to be unconstitutional, Neuborne writes: “Perhaps most importantly, McCutcheon illustrates two competing visions of the First Amendment in action. Chief Justice Roberts’s opinion turning American democracy over to the tender mercies of the very rich insists that whether aggregate contribution limits are good or bad for American democracy is not the Supreme Court’s problem. He tears seven words out of the forty-five words that constitute Madison’s First Amendment – “Congress shall make no law abridging . . . speech”; ignores the crucial limiting phrase “the freedom of,” and reads the artificially isolated text fragment as an iron deregulatory command that disables government from regulating campaign financing, even when deregulation results in an appalling vision of government of the oligarchs, by the oligarchs, and for the oligarchs that would make Madison (and Lincoln) weep. Justice Breyer’s dissent, seeking to retain some limit on the power of the very rich to exercise undue influence over American democracy, views the First Amendment, not as a simplistic deregulatory command, but as an aspirational ideal seeking to advance the Founders’ effort to establish a government of the people, by the people, and for the people for the first time in human history. For Justice Breyer, therefore, the question of what kind of democracy the Supreme Court’s decision will produce is at the center of the First Amendment analysis. For Chief Justice Roberts, it is completely beside the point. I wonder which approach Madison would have chosen. As a nation, we’ve weathered bad constitutional law before. Once upon a time, the Supreme Court protected slavery. 
Once upon a time the Supreme Court blocked minimum-wage and maximum-hour legislation. Once upon a time, the Supreme Court endorsed racial segregation, denied equality to women, and jailed people for their thoughts and associations. This, too, shall pass. The real tragedy would be for people to give up on taking our democracy back from the oligarchs. Fixing the loopholes in disclosure laws, and public financing of elections are now more important than ever. Moreover, the legal walls of the airless room are paper-thin. Money isn’t speech at obscenely high levels. Protecting political equality is a compelling interest justifying limits on uncontrolled spending by the very rich. And preventing corruption means far more than stopping quid pro quo bribery. It means the preservation of a democracy where the governed can expect their representatives to decide issues independently, free from economic serfdom to their paymasters. The road to 2016 starts here. The stakes are the preservation of democracy itself.” It is important to remember that the issue is not really partisan, but that both parties are corrupted by the influx of huge amounts of money. Democracy is in danger not because one party will buy the election, but because the oligarchs on both sides are crowding out grassroots participation. This is an essay you should read in full. For a plain English review of the decision, read this from SCOTUSblog. And for a Brief History of Campaign Finance, check out this from the Arendt Center Archives.
Zephyr Teachout, the most original and important thinker about the constitutional response to political corruption, has an op-ed in the Washington Post: “We should take this McCutcheon moment to build a better democracy. The plans are there. Rep. John Sarbanes (D-Md.) has proposed something that would do more than fix flaws. H.R. 20, which he introduced in February, is designed around a belief that federal political campaigns should be directly funded by millions of passionate, but not wealthy, supporters. A proposal in New York would do a similar thing at the state level.” Teachout spoke at the Arendt Center two years ago after the Citizens United case. Afterwards, Roger Berkowitz wrote: “It is important to see that Teachout is really pointing out a shift between two alternate political theories. First, she argues that for the founders and for the United States up until the mid-20th century, the foundational value that legitimates our democracy is the confidence that our political system is free from corruption. Laws that restrict lobbying or penalize bribery are uncontroversial and constitutional, because they recognize core—if not the core—constitutional values. Second, Teachout sees that increasingly free speech has replaced anti-corruption as the foundational constitutional value in the United States. Beginning in the 20th century and culminating in the Court's decision in Citizens United, the Court gradually accepted the argument that the only way to guarantee a legitimate democracy is to give unlimited protection to the marketplace of ideas. Put simply, truth is nothing else but the product of free debate and any limits on debate, especially political debate, will delegitimize our politics.” Read the entirety of his commentary here. Watch a recording of Teachout’s speech here.
A new exhibition opened two weeks ago at the Haus der Kulturen der Welt in Berlin that examines the changing ways in which states police and govern their subjects through forensics, and how certain aesthetic-political practices have also been used to challenge or expose states. Curated by Anselm Franke and Eyal Weizman, Forensis “raises fundamental questions about the conditions under which spatial and material evidence is recorded and presented, and tests the potential of new types of evidence to expand our juridical imagination, open up forums for political dispute and practice, and articulate new claims for justice.” Harry Burke and Lucy Chien review the exhibition on Rhizome: “The exhibition argues that forensics is a political practice primarily at the point of interpretation. Yet if the exhibition is its own kind of forensic practice, then it is the point of the viewer's engagement where the exhibition becomes significant. The underlying argument in Forensis is that the object of forensics should be as much the looker and the act of looking as the looked-upon.” If you want to read more, we suggest Mengele’s Skull: The Advent of a Forensic Aesthetics.
In an interview, Leslie Jamison, author of the very recently published The Empathy Exams, offers up a counterintuitive defense of empathy: “I’m interested in everything that might be flawed or messy about empathy — how imagining other lives can constitute a kind of tyranny, or artificially absolve our sense of guilt or responsibility; how feeling empathy can make us feel we’ve done something good when we actually haven’t. Zizek talks about how 'feeling good' has become a kind of commodity we purchase for ourselves when we buy socially responsible products; there’s some version of this inoculation logic — or danger — that’s possible with empathy as well: we start to like the feeling of feeling bad for others; it can make us feel good about ourselves. So there’s a lot of danger attached to empathy: it might be self-serving or self-absorbed; it might lead our moral reasoning astray, or supplant moral reasoning entirely. But do I want to defend it, despite acknowledging this mess? More like: I want to defend it by acknowledging this mess. Saying: Yes. Of course. But yet. Anyway.”
In a review of Romanian writer Herta Muller's recently translated collection Christina and Her Double, Costica Bradatan points to what changing language can do, what it can't do, and how those who attempt to manipulate it may also underestimate its power: “Behind all these efforts was the belief that language can change the real world. If religious terms are removed from language, people will stop having religious feelings; if the vocabulary of death is properly engineered, people will stop being afraid of dying. We may smile today, but in the long run such polices did produce a change, if not the intended one. The change was not in people’s attitudes toward death or the afterworld, but in their ability to make sense of what was going on. Since language plays such an important part in the construction of the self, when the state subjects you to constant acts of linguistic aggression, whether you realize it or not, your sense of who you are and of your place in the world are seriously affected. Your language is not just something you use, but an essential part of what you are. For this reason any political disruption of the way language is normally used can in the long run cripple you mentally, socially, and existentially. When you are unable to think clearly you cannot act coherently. Such an outcome is precisely what a totalitarian system wants: a population perpetually caught in a state of civic paralysis.”
Scott Samuelson, author of "The Deepest Human Life: An Introduction to Philosophy for Everyone," has this paean to the humanities in the Wall Street Journal: “I once had a student, a factory worker, who read all of Schopenhauer just to find a few lines that I quoted in class. An ex-con wrote a searing essay for me about the injustice of mandatory minimum sentencing, arguing that it fails miserably to live up to either the retributive or utilitarian standards that he had studied in Introduction to Ethics. I watched a preschool music teacher light up at Plato's "Republic," a recovering alcoholic become obsessed by Stoicism, and a wayward vet fall in love with logic (he's now finishing law school at Berkeley). A Sudanese refugee asked me, trembling, if we could study arguments concerning religious freedom. Never more has John Locke—or, for that matter, the liberal arts—seemed so vital to me.”
Arthur C. Brooks makes the case that charitable giving makes us happier and even more successful: “In 2003, while working on a book about charitable giving, I stumbled across a strange pattern in my data. Paradoxically, I was finding that donors ended up with more income after making their gifts. This was more than correlation; I found solid evidence that giving stimulated prosperity…. Why? Charitable giving improves what psychologists call “self-efficacy,” one’s belief that one is capable of handling a situation and bringing about a desired outcome. When people give their time or money to a cause they believe in, they become problem solvers. Problem solvers are happier than bystanders and victims of circumstance.” Do yourself a favor, then, and become a member of the Arendt Center.
What Heidegger's Denktagebuch reveals about his thinking during the Nazi regime.
April 8, 2014
Goethe Institut, NYC
Learn more here.
"My Name is Ruth."
An Evening with Bard Big Read and Marilynne Robinson's Housekeeping
Excerpts will be read by Neil Gaiman, Nicole Quinn, & Mary Caponegro
April 23, 2014
Richard B. Fisher Center, Bard College
Learn more here.
This week on the blog, our Quote of the Week comes from Martin Wager, who views Arendt's idea of world alienation through the lens of modern-day travel. Josh Kopin looks at the Stanford Literary Lab's idea of using computers and data as tools for literary criticism. In the Weekend Read, Roger Berkowitz ponders the slippery slope of using the First Amendment as the basis for campaign finance reform.
Magnus Carlsen—just 22 years old—beat Viswanathan Anand (the reigning world chess champion) this week at the World Chess Championships in Chennai, India. There has been much excitement about Carlsen’s victory, and not simply because of his youth. As Joe Weisenthal writes, Carlsen’s win signifies the emergence of a new kind of chess. We can profitably speak of at least three eras.
First, what is often called the Romantic era of chess. Here is how Weisenthal describes it:
In the old days, high-level chess was a swashbuckling game filled with daring piece sacrifices and head-spinning multi-move combinations where the winner would pull off wins seemingly out of nowhere.
Beginning in the middle of the 20th century, Weisenthal explains, chess became more methodical. New champions would still take chances, but they were studied risks, more considered, and often pre-tested in preparation games. Players would study all of their opponents' past games, analyzing them with computers. This meant that the spontaneous move was more often than not beaten back by the prepared answer.
As the study of chess became more rigorous, these wild games became more and more rare at the highest level, as daring (but theoretically weak) combinations became more easy to repel…. Modern chess champions have won by building crushing, airtight, positional superiorities against their opponents, grinding them down and forcing a resignation. The chess is amazing, although frequently less of a high-wire act.
The third era of recent chess might be called the computer age. It began, for better or worse, when IBM's Deep Blue supercomputer beat the great chess champion Garry Kasparov in 1997. The current generation of players (like Carlsen) was raised playing chess against computers. This has changed the way the game is played.
In an essay a while back in the NYRB, Kasparov reflected on what the rise of chess-playing computers meant.
The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers.
One way to put this is that as we rely on computers and begin to value what computers value and think like computers think, our world becomes more rational, more efficient, and more powerful, but also less beautiful, less unique, and less exotic. The romantic era of elegant and swashbuckling chess is over. But so too is the rational, calculated, grinding chess that Weisenthal describes as the style of the late 20th century. Since all players are trained by the logical rigidity of playing against computers, playing by pure logic will rarely give one side the ultimate advantage.
Which brings us to Carlsen and the buzz about his victory at the World Chess Championships. Behind Carlsen’s victories is what is being called his “nettlesomeness,” a concept apparently developed by the computer science professor Ken Regan. The idea has been described recently by Tyler Cowen:
Carlsen is demonstrating one of his most feared qualities, namely his “nettlesomeness,” to use a term coined for this purpose by Ken Regan. Using computer analysis, you can measure which players do the most to cause their opponents to make mistakes. Carlsen has the highest nettlesomeness score by this metric, because his creative moves pressure the other player and open up a lot of room for mistakes. In contrast, a player such as Kramnik plays a high percentage of very accurate moves, and of course he is very strong, but those moves are in some way calmer and they are less likely to induce mistakes in response.
For Weisenthal, the rise of “nettlesomeness” signifies the "new era of post-modern chess. It's not about uncorking crazy, romantic brilliancies. And it's not about achieving crushing, positional victories. It's about being as cool as a computer while your opponent does things that are, well, human."
I am not sure Weisenthal gives full credit to Carlsen's nettlesomeness. Yes, Carlsen does engage in a bit of emotional warfare—the getting up from the table, trying to throw off one's opponent. But his nettlesomeness also involves making “creative moves [that] pressure the other player and open up a lot of room for mistakes.” This is important.
In his earlier essay, Kasparov also describes his experience of two matches played against the Bulgarian Veselin Topalov, at the time the world's highest-ranked player. When Kasparov played him in regular timed chess, he bested Topalov 3-1. But when he played him in a match in which both were allowed to consult a computer for assistance, the match ended in a 3-3 draw. The lesson Kasparov drew from this is that computer-assisted chess magnifies the importance of human creativity:
The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.
One may, however, question Kasparov’s conclusion. The computers did even out the match. As he admits, “My advantage in calculating tactics had been nullified by the machine.” More often than not, the result of computer-assisted chess is a draw.
What Carlsen’s victory may show, however, is that at a time when most players learn against machines and become technical wizards, it is those players who rise above the calculating game and are adept at finding the surprising or at least unsettling moves that will, at the very top of the sport, prove victorious. That is what Regan and Cowen mean by nettlesomeness. All of which suggests that, at least for the top chess player in the world, chess remains a human endeavor in which creativity can be enlisted to discombobulate human opponents playing increasingly like machines.
For your weekend read, take a long gander at Weisenthal’s essay. It includes simulated chess games to illustrate his point! Happy reading and playing.
"Thinking is skilled work. It is not true that we are naturally endowed with the ability to think clearly and logically - without learning how and without practicing."
One of my favorite images in Arendt's writings comes not from Arendt herself but from her citation of the poem "Magic" by Rainer Maria Rilke. Rilke's poem reads (in an approximate translation):
From indescribable transformation originate
Amazing shapes. Feel! Trust!
We suffer often: To ashes turn our flames;
Yet art can set on fire the dust.
Magic is here. In the realm of enchantment
The ordinary word appears elevated
But sounds as real as if the dove called
To seek its invisible mate.
Arendt cites Rilke's poem in the final section of the chapter on work in The Human Condition. It is part of her discussion of art and her claim that "the immediate source of the art work is the human capacity for thought."
Art, Arendt writes, has its foundation in thinking. Works of art are "thought things." They are thingifications of thoughts or, to use a word that is so often abused, reifications of thoughts: the making of thoughts into things. It is this process of transformation and transfiguration that Rilke captures in "Magic": to "set on fire the dust" and bring beauty and truth to the real world. That is what art does.
My mind turned to Rilke's poem as I watched the great South African artist William Kentridge deliver the first of his 2012 Norton Lectures at Harvard University.
Kentridge spoke in praise of shadows, situating his talk within a reading of Plato's allegory of the cave in Book VII of the Republic. The story of the cave begins with prisoners, shackled and immovable, who see shadows projected along a wall by a fire. Then one prisoner sets himself free, climbs out into the light of the sun, and slowly, painfully comes to recognize that the shadows were indeed shadows, untrue. The parable illustrates the deceptiveness of sensible things and is one part of Plato's illustration of his theory of ideas. The ideas, supersensible truths of reason and logic, do not deceive and change like the shadowy things of the world. Only what lasts eternally is true; all that is sensible and fleeting is false.
Kentridge tells the story of Plato's cave to explain why he sees art, and especially his art, in opposition to the Platonic idea of truth. If Plato celebrates the primacy of the eternally true over the shadows, Kentridge argues that art elevates the image above the truth. For this reason, at least in part, Kentridge's art works with shadows: shadow figures and shadow puppets.
Kentridge lauds shadows. In the very limitations of the shadows, in the gaps that inspire in us leaps to complete an image, that is where we think and learn. The leanness of the illusion pushes us to complete the recognition. It is in shadows that we find our agency in apprehending the world.
Shadow art is, for Kentridge, political. Plato's politics depends on a truth known and understood by the few and then imposed on the many. In this sense philosophy is, in Arendt's words, opposed to politics, and the philosopher must either seek merely to be left alone by the people (which is difficult, because philosophers are dangerous) or seek to dominate and tyrannize the polity with reason. Arendt's lifelong battle was to free politics from the certainty of rational and philosophical truth, to open us to a politics of opinion and openness.
Knowledge is power and there is, in Kentridge's words, a relation between knowledge and violence. Kentridge embraces shadows and silhouettes to oppose the philosophical and Platonic tyranny of reason. He writes elsewhere:
I am interested in a political art, that is to say an art of ambiguity, contradiction, uncompleted gestures and uncertain ending - an art (and a politics) in which optimism is kept in check, and nihilism at bay.
Optimism must be kept in check since any certainty about the destination can underwrite the need for violence to bring others to that end. For Kentridge, "There is no destination. All destinations, all bright lights, arouse our mistrust."
Kentridge offers us an image of the artist. He speaks from the studio and from his notebook to emphasize the source of artistic truth in the thought image rather than the logical word. An artist thinks. He sees. He makes art. He makes things that reflect not truth and certainty but gaps, misgivings, and questions. Kentridge gives reality to the questionability of the world in his shadow art. In this way his art reminds us of the magic of Rilke's fire that transfigures dust into flame.
Few modern artists work magic like William Kentridge. His Norton Lectures are a great introduction to his art and the thinking behind his art. If you are not graduating this weekend, take the time to hear and look at what Kentridge says and makes.
You can view Kentridge's First Norton Lecture here. Consider it your visual weekend read.