Grown Up Digital: The Net Gen as Learners and Teachers



By Don Tapscott

Is the Internet changing our brains? We know what our brains look like on drugs—but do we know what our brains look like on the web? Don Tapscott, one of the experts in the realm of Internet communication, says that our minds have been improved by unlikely mechanisms, such as video games and the much-scorned Wikipedia. Even though it is hard to imagine World of Warcraft as a builder of intellectual prowess and a facilitator of social skills, today's children and teenagers, the sons and daughters of Dungeons and Dragons players, are smarter than their parents. For some educators, the news that their students have sharper, better-developed minds than they do will come as a bit of a surprise. However, Tapscott insists,

…what we are seeing is the first case of a generation that is growing up with brains that are wired differently from those of the previous generation. Evidence is mounting that Net Geners process information and behave differently because they have indeed developed brains that are functionally different from those of their parents. They’re quicker, for example, to process fast-moving images…

What does it all mean? What are the implications for the future? Tapscott’s book is an informative and insightful journey into the way the twenty-somethings—the Net Generation—think. Despite the scientific data suggesting that the brain of a person who has been web-trained his or her entire life differs from that of the book generation, Tapscott’s main thesis is not so much brain change as power change. He posits the Net Gen as the “Lap Generation,” the first generation to lap, or pass, their parents by possessing an authority their elders do not understand: how to use electronic technology. The result of the younger generation’s apparently natural mastery of all things tech, Tapscott thinks, is the end of hierarchies and the abolition of centralized authority. The author focuses on four areas: family, education, business, and politics. All of these entities are being confronted by the Lap Generation and its egalitarian mindset.


The youth of today are better informed, more adept at technology, and savvier in the ways and means of the twenty-first century than the adults who are still in charge of education, businesses, and governments. What Tapscott’s book points to is a huge generation gap, a chasm as wide as the famous “generation gap” Margaret Mead described. For the Baby Boomers, their parents’ pre-war knowledge and experiences were irrelevant and useless, making what the author refers to as the authoritarian family structure of the era extremely frustrating for the Boomers. The fathers, who acted like CEOs, as Tapscott calls them, pontificated, but they had little of use to share and were unwilling to learn from their children. After years of having to endure lectures on topics that were alien to teenagers in the Sixties, the Boomers escaped the home front, never to return to the clutches of authority.

In contrast, today’s parents, who are the Boomers grown up, are more open to listening and to allowing their children to show them how to log onto the Internet. The relationship between parent and child is more open and more nurturing. Parents and children are close, so close that an entirely new kind of parent has emerged: the “helicopter parent.” As an educator, I am familiar with that kind of ever-hovering parent, but I did not know that these same parents continue to hover after college, accompanying their children on job interviews and even confronting the boss if their child is not well treated. How are the parents so well informed about the office politics surrounding their child? The Lap Generation—the “boomerang” generation—making a strictly economic decision, likes to live at home. There are no hierarchies, only equality, in this new family.

After reading Tapscott’s observation about the new family, it occurred to me that this new arrangement bodes well for the distant future, when the Boomer parents are elderly. For the first time in generations, it may be possible that the children will care for the parents. The Boomers ran away from home and abandoned their parents, and many today are facing the conundrum of what to do about an elderly parent or two. It is not uncommon for the Boomers’ elderly parents to be abandoned—again—in a facility where they will live out the last of their golden years, unvisited, and will die, unmourned. But the Boomers who have been respectful and kind to their children should expect better care in return. What else could this new kind of anti-authoritarian family offer to the future?


Tapscott warns:

Educators should take note. The current model of pedagogy is teacher focused, one-way, one size fits all. It isolates the student in the learning process…. [Net Geners] will respond to the new model of education that is beginning to surface—student-focused and multiway, which is customized and collaborative.

Tapscott states that the Net Gen carries two sets of expectations when these students enter schools and colleges. First, they are shaped by their experience with the Internet, which demands that they interact with technology, search for content, and socialize with their peers at long distance. Second, they expect to shape and participate in their own education. Rather than passively accepting intoned truths delivered from behind the lectern on high, this generation wants to participate and collaborate in what they expect to be a joint enterprise. The author characterizes current education as a one-way model: one person talks and another listens. It occurs to me that, in fact, the educational system reflects the technology. Gutenberg technology, based upon the printing press, is a one-way form of communication: the author writes and the reader reads. Radio repeated this pattern of speaking and listening that reflected the print technology. Then television came along and replicated the Gutenberg method once again. Education is based upon the premise that an educated person, i.e., the teacher, is also a reader who has read and who is, therefore, qualified to redeliver the written messages in oral form, again repeating the model of one-way communication.

Following my line of thinking, the real challenge to today’s educational model is the Internet, which is a two-way mode of communication. In contrast to the traditional Sermon on the Mount, the Web is participatory, non-authoritarian communication, a call-and-response format that is ignored and discredited by the authorities until they feel threatened by the sound of other voices. The call-and-response nature of the Internet—this new technology—means that education must become more participatory for the Net Gen students. Tapscott writes that Net Gen students expect interactive teaching and learning; if they cannot actively collaborate, they will tune out and get bored with traditional methods of lecturing. Although Tapscott does not get into the weeds of pedagogy, I suspect that, contrary to their current teachers, this is a generation that would accept and welcome distance learning. Today’s students are used to learning from the computer, an instrument that many of today’s educators view with suspicion. On the one hand, the computer is a convenient tool; on the other hand, it challenges the authority of the teacher who wants to be the sole source of knowledge.

Tapscott describes the elders of the Net Gen, the Gen Xers, as “aggressive communicators who are extremely media centered.” But unlike Gen X, the Net Gen grew up using the “programmable web.” “And every time you use it, you change it.” The author continues later, “On the Net, the children have had to search for, rather than simply look at, information. This forces them to develop thinking and investigative skills—they must become critics. Which Web sites are good?” Tapscott rightly calls the model of education we currently use—teacher lecturing and student listening—Industrial, but I think he may be off by a few centuries. The model is more that of a pre-Gutenberg culture, before the printing press made it possible for people to read what they wanted. I would agree with Jeffrey Bannister, quoted in Tapscott’s book, who uses the term “pre-Gutenberg”:

We’ve got a bunch of professors reading from handwritten notes, writing on blackboards and the students are writing down what they say. This is a pre-Gutenberg model.

I might point out to Bannister, in passing, that in attempting to accommodate multiple kinds of learners, it is considered good practice to write on the board for the students who learn by reading, not hearing. Indeed, Tapscott also states that

Students are individuals who have individual ways of learning and absorbing information. Some are visual learners; others learn by listening. Still others learn by physically manipulating something.

As early as 1967, Marshall McLuhan, also quoted by Tapscott, said,

Today’s child is bewildered when he enters the nineteenth-century environment that still characterizes the educational establishment, where information is scarce but ordered and structured by fragmented, classified patterns, subjects, and schedules.

The New Learning must be customized for each student’s needs. Tapscott also quotes Howard Gardner, who calls today’s educational model mass production, a reflection of the industrial economy, which created assembly lines and the Taylorism that forced human beings to work in tandem with machines. According to Gardner, school is also mass production. “You teach the same thing to students in the same way and assess them all in the same way,” he says. True, but this is how No Child Left Behind teaches, as it must, for the standardized test. Even the best secondary schools teach toward the entrance exams so that the students can get the highest scores, not necessarily the best critical thinking skills. The test becomes the teacher. How are the Net Geners going to respond to a mechanism as crude and arbitrary as an SAT test? Note that these standardized tests do not take into account the way that the test-takers, the Net Gen, actually think. Change takes place at a glacial pace, especially when the entire educational system rests upon a foundation of magical thinking: if the speaker says it, it is so. Education equals authority—unquestioned authority. How did this strange combination of information without questions come about? And how did such a procedure become labeled “education”?

When Gutenberg invented the printing press, the Church was against this new instrument, because the sacred words, once intoned only from the pulpit and delivered by the voice of authority, would now be distributed to the great unwashed. The Church feared, rightly, that the power of the printed word and of reading would allow the people to challenge the priesthood. The authority of the Church was unquestioned and was based upon a far older form of disseminating information: an oral culture of storytelling. A culture of storytelling is a logocentric culture, backed by the presence of the speaker, who is the source of the story, the information, and the truth. God spoke to Noah, to the Prophets, and so on, and the word of God was transcribed. It was the task of religion to tell those congregated the words of the Lord. The Church inherited a largely illiterate society—even kings and queens often could neither read nor write—that had to be preached to. Through years of standing for six to eight hours in cathedrals, hearing mysterious Latin, listening to sermons, and “reading” the sculptural programs and the frescoes, the uneducated people under the care of the clergy were socially conditioned to listen to one voice (God’s) and one source of authority (the Church). The Protestant movement was proof that once the common person could read the words of the Bible, those people would take unto themselves the power to interpret God himself.

There are historically close ties between the Church and the university. The first universities, the Sorbonne and Oxford, were affiliated with religion, and, with the clergy the only educated group, the priests became the first faculties. The traces of this history are clearly visible any graduation day in the procession of professors marching down the center aisle of the school auditorium, as the clergy file down the nave, in full “regalia,” wearing long black robes, very monk-like. Further traces of the Church lie in the very practice of lecturing: the teacher stands at the head of the class and speaks alone. The students speak only to ask questions and are expected to subside into obedient silence. Just as the priests re-spoke the Word of God, academics re-speak the words of their precursors. The very form of academic and scholarly phraseology mimics the sacred scriptures: “As — tells us,” “As — famously said,” and so on. Logos is handed down from authority figure to authority figure. Academics depend upon the logocentric tradition and upon the mystical belief that the speaker is backed by the fullness of authority. It is as if Moses descended from the mountain, bearing tablets written in stone—not to be altered—after communing with the Almighty.

The assumption of a plenitude of knowledge, like that of the completeness of presence, is a false one, but authority must be protected at all costs. Another prevailing characteristic of education, inherited from the Church, is, paradoxically, secrecy. Knowledge is guarded by the initiated, those who are learned in the ways of scholarship; knowledge is not to be given out freely, especially insider secrets. As in the Greek temples, where only the priests were allowed inside the inner sanctum, only those inside the circle of the select are allowed to “speak” or be “present”—that is, to publish, to “re-speak” the already spoken. The Internet has changed all that. The Net Geners are not readers; they are not listeners; they are iconographers. As Tapscott notes,

Net Geners who have grown up digital have learned how to read images…. they may be more visual than their parents are…. [They] tend to ignore lengthy instructions for their homework assignments…

Tapscott points out that students of today learn better through images. Indeed, this generation has invented a series of new hieroglyphs that function as signs, such as :) for happy and :( for sad.

Today’s students, Tapscott points out, will want to customize their education. He mentions that “tinkering” has made a comeback. Indeed it has. The time of the mash-up has come. In higher intellectual circles, we call the mash-up, or sampling, bricolage, that is, taking the existing culture and making something else with it. This is postmodern thinking: reclaim, reuse, remake, recycle. Yet the very same teachers who teach postmodern theories are those who insist upon “original” work from students who are what I call the Mash-Up Generation. The professors who eagerly and enthusiastically teach Postmodernism, or the questioning of the “metanarrative” of Modernism, will reject cutting and pasting and demand that the student cite “sources,” or the validating voices of authority. The same professors find it hard to accept that a student has ideas of his or her own, attitudes that stem naturally from the student’s own generation; for although the Boomers may have resisted authority, they knew it existed.

If my generation got into trouble for questioning authority, this generation gets into trouble for leveling sources. To the Net Gen, every voice, every bit of cultural material has equal value and can be freely borrowed and re-used. The Net Gen seeks convenience and speed over venerated voices, who are often unwilling to make themselves available on the web. Even more threatening to the traditional authority of educators is the declining value of scholarly knowledge, which is being bypassed and ignored by the mainstream undergraduate. Every teacher knows that students think Google is a database. Students routinely ignore the expensive databases, paid for by student tuition, made available through library websites. Getting into the databases is a clumsy, cumbersome, and often unrewarding enterprise, because the technology of these databases is antediluvian. Naturally the student goes to Google’s fast and functional search engine to find information. Like the Net Gener who gets a job and finds, to his horror, that the technology is twenty years behind the times, the student will not tolerate the ritual of multiple clicks and passwords and all the other paraphernalia that work to make knowledge inaccessible. Even when forced to read a credible source, students accustomed to all-purpose Net-speak rebel at the insider jargon, written by scholars for scholars.

Net Geners want to be informed, not talked at. They like to take materials they find helpful or interesting and remake them. Rather than always referring back to the authorities, the Net Gen likes to write its own material and to create its own content. Tapscott indicates that the Web actually encourages creativity and productivity because it gives inventors easy access. From their habits of playing video games or participating in the virtual reality of Second Life, the Net Geners learn how to play their own game. Speaking of video games, Tapscott says,

“This kind of play is deeply creative. It involves trial and error, learning by experiment, role playing, failure, and many other aspects of creative thinking.”

None of this kind of creativity is allowed in education. Play is forbidden and failure is mocked. In contrast, the author discusses a thirteen-year-old writer who contributes stories to a website where they are read by thousands of readers. “Isn’t that better than writing on paper and hoping that some day it might get published?” Tapscott asks. For today’s teachers and professors, Web 2.0 is something Roland Barthes would have loved: this new Web is called the “read-write” web—we read it and we write it.

Although there are many teachers who are eager and willing to try more experimental, student-centered ways of making learning a collaborative enterprise between mentor and apprentice, they are constrained by a system that demands command and control. Distance learning still attempts to replicate a now-obsolete classroom format by demanding assignments at set due dates, chat room appearances at set times, and so on. This is hardly learning the way the student needs it: customized, available when the student can devote time to it, at a pace that facilitates learning. Even distance learning classes end after a set number of weeks. Traditional classroom education is ruled by the physics of time and space: one teacher to a classroom, a certain number of students in a space, taught a common-denominator course that must fit into a larger curriculum at a specific time. In such a system, “student-centered” education is reduced to allowing students to speak more or to participate in class discussion. There is no time for the teacher to waste; s/he has a set amount of material that must be covered.

Students are increasingly unwilling to learn in the traditional manner, because they assume all knowledge is available on the Internet. Why learn math when one has a calculator? Why not teach how to use the calculator to find the answer? Why plow through many books when Wikipedia tells you anything you want to know and, even better, you too can write the content? Tapscott tells an amusing story about interviewing a young man named Joe O’Shea, who stated that he never reads books—why should he? All the information he needs is on the Internet.

“I don’t read books per se,” he told the erudite and now somewhat stunned crowd. “I go to Google and I can absorb relevant information quickly. Some of this comes from books. But sitting down and going through a book from cover to cover doesn’t make sense. It’s not a good use of my time as I can get all of the information I need faster through the web. You need to know how to do it—to be a skilled hunter.”

Before you educators out there jump to your feet to explain the difference between “information” and “knowledge,” know that the punch line was that the young man had just been awarded a Rhodes scholarship.


Tapscott describes a new world in which the consumers remake the product, just as they are remaking education. Education, he suggests, should think like a business and respond to its consumers, but Tapscott also points out that businesses that do not respond with agility to the demands of the Net Gen can get into trouble. The Net Gen, rightly in my view, regards businesses and corporations with suspicion. Tapscott points to the empowerment of the Net Geners, who like to be “prosumers,” that is, proactive consumers who customize their products. Young people have been prosumers for generations, but no one named their practices until recently. Little girls have always treated their Barbies to new hairdos, and teenage boys have always modified their cars with after-market products and custom decoration. This desire to contribute to mass-produced and mass-marketed products has only recently been harnessed by companies such as Apple, where “there’s an app for that.”

The users of Apple have often been referred to as a “cult” because of their devotion to the product. The term “cult” is derogatory and comes from those who simply don’t understand how the Net Gen thinks. Apple is thought of by the techies as an honorable company that strives to produce a product that is beautifully designed and user-friendly. In addition, the company works closely with its user base, from the Bleeding Edgers to the novice customer, asking the tech-savvy to participate in improving the function and design of the product and watching for the difficulties of the blunderer so that Apple can make functions more straightforward. The reason the flap over the iPhone 4 and its broken antenna was so minor to Apple users is that those customers knew the company would fix the problem with the next iteration of the phone. The Apple user is invariably an Early Adopter who expects such glitches and enjoys participating in the fix. This kind of audience participation is the Apple business model, and it has won the company a devoted following. And we now have the iPhone 6 and are test-driving the Apple Watch.

But not all companies are so accommodating to their customer base. Witness the hostile relationship between music lovers and the music industry, between publishers and those who write and read books, between the car companies (Toyota) and those who drive. The new generation of consumers wants to customize its experience with the product, Tapscott declares, but the corporate mind thinks in terms of profit, not prosumers. To the Net Gen, music and art and literature and knowledge, like information, should belong to no one and everyone. Downloading “illegal” music is common practice, done without shame or remorse. How can anyone own music? Doesn’t art belong to everyone? The Net Gen is forcing companies that want to survive to be transparent and participatory, Tapscott writes. Older corporations do not want to interact with their customers. Like the traditional media, the corporate mind insists upon one-way communication: top down. As Tapscott says,

…the industry has built a business model around suing its customers. And the industry that brought us the Beatles is now hated by its customers and is collapsing. Sadly, obsession with control, privacy, and proprietary standards on the part of large industry players has only served to further alienate and anger music listeners…

Tapscott states that the Net Gen prefers flexible hours and “want[s] to choose when and where they want to work.” Not only that, these young people want their work to be “meaningful.” “They’re not loyal to an employer; they’re loyal to their career path,” he remarks. Imagine the surprise of business types when the Net Gen shows up to “work.” The Net Gen wants to play. The Net Gen employee comes to a company for one reason—no, not a job—to learn. Once the Net Gen worker learns what s/he needs, s/he will move on to the next learning experience. It is pointless to expect the Net Gener to be “loyal” to the company. The concept of loyalty that his grandfather may have enjoyed was broken when companies began sending jobs overseas in the Seventies. Companies still expect the employee to commit to being a permanent fixture, while refusing to guarantee lifetime employment, much less health care. For the average corporation, human beings are a financial liability, but the Net Gener comes to play with the idea of contributing creatively.

Companies tend to create what Tapscott calls a “generational firewall,” which separates the newbies from the old-timers. This strange way of not utilizing recruited talent is not unfamiliar to me. I have often asked: why hire someone who is then suppressed and underutilized? Business runs on a hierarchical basis; those at the top give orders, and the orders roll downhill, where the underlings carry out the dictates. Net Gen employees, according to Tapscott, do not accept hierarchy and assume that they were hired for their talents. If they cannot and are not allowed to participate as equals, the most talented will simply move on. Their attitude, quite properly, is: if you won’t listen to me, why should I stay? The Net Gen wants to contribute and needs to contribute to something meaningful. As the parents of the Net Geners changed the model of parenting, education needs to change its traditional assignments and business needs to change its traditional models. Show the Net Geners what’s in it for them.


That same attitude—what’s in it for me?—appears in politics. Today there are two common questions in popular culture: “What would Jesus do?” and “What’s in it for me?” We assume that Jesus would not say, “What’s in it for me?” We like to think he would say, “What can I do for you?” “What’s in it for me?” is a business question, and the answer has to be “profits.” “Profits” is a business answer. So when a politician promises to run the government like a business, that implies that the government will not be in the service of the people but in the service of profit-making entities, like corporations. Imagine if government were run like a business—like, say, an oil company or a music company. Tapscott is convinced that the Net Geners have a better way. The Net Gen voter is an active participant who, unlike her grandparents, is a volunteer or a community activist, Tapscott says. Some of the Boomers joined the Peace Corps, some marched for civil rights, and some protested against the Vietnam War. Others marched for women’s rights and demanded gay rights. The Boomers’ children are the Net Roots, who became activated by the prospect of being allowed to participate in the election of Barack Obama.

Tapscott discusses the Internet-based campaign at length, and reading these passages now, two years into the Obama administration, is enlightening. Much of what Tapscott writes is insightful and informative, and I learned a lot from reading his book; however, I do think he is too sunny, too hopeful, too optimistic. Politics is a case in point, as the enthusiasm for Obama has waned quickly. The Net Gen expected results. When Obama promised “transparency,” they thought that the President meant the open, artless, and fearless sharing that takes place on Facebook. The web is totally open and uncontrolled as a source of energy and information. The web is a place where things happen. That is why so many people (like me) devote their time to contributing to it. But the Net Gen quickly learned its lesson. As Tapscott writes,

Most Net Geners believe that the mechanics of power and policy making are controlled by self-interested politicians and organized lobby groups…The Net Generation does not put much trust in politicians and political institutions—not because they are uninterested, but rather because political systems have failed to engage them in a manner that fits their digital and ethical upbringing.

The Net Gen’s experience as Internet users has taught them that if they coalesce around a cause, they can make changes. The Net Gen volunteers for Obama were so excited because they were “natural” Democrats; that is, they shared a cultural attitude that the government should work for the people, and that they—the (young) people—could shape the outcome through their participation. According to Tapscott, the Net Geners are not conservative but more open to change and new ways of thinking than any other generation. But a Democratic victory did not bring the change they expected, and now the Net Gen has turned its back on the administration. Why? The problem is that the government is controlled by a group of middle-aged people who will not let go of power. Just look at Congress on C-SPAN. All old White Men. No one under forty. No poor people. Few People of Color. Some women here and there. No collaboration, no participation from half of the members of Congress, who appear to have abdicated their governing responsibility in the pursuit of political power. This strategy of not participating is utterly alien to the Net Gen way of thinking.

Things only get worse when one turns on the news programs. The gap in age is shocking. Although there are some networks or news programs I do not watch, I record at least four hours of news a day on TV (to which I listen while I am writing) and read three newspapers a day. There are no young faces, no young writers (and therefore no young readers), no young voices, no young way of thinking. Only the Hill reporter Luke Russert, the bright son of the late Tim Russert, stands out as someone under thirty. An entire generation is being left out of the conversation. The elders reflect back on their days with President Carter or President Clinton, prehistoric eras for the Net Gen, and discuss and debate raging political quarrels that are non-issues for the younger generation.

People—usually men—well beyond their childbearing years decide abortion policy. People—increasingly women as well—who are too old to fight send the younger generation off to war for their own political ends or their lobbyists’ needs. People with lifetime jobs in Congress decide how much money the unemployed will or will not get. People with guaranteed government health care decide that others cannot have those same privileges and see no hypocrisy in their positions. Those who are heterosexual (they say) decide the personal lives of homosexuals. And so on.

Would the results be different if the younger generation made itself heard? As Tapscott points out, this generation is far more tolerant than their parents or grandparents. It is the grandparents who are concerned about racial and gender equality, interracial marriage, “illegal” immigration, gay marriage, and other hot-button issues. For their grandparents, global warming is debatable; for this generation, raised on green values, a devastated planet is their inheritance. If you asked a Net Gener which problem worried him more, the budget deficit or global warming, he would say, “global warming.” Always the optimist, Tapscott writes,

I’m convinced we’re in the early days of something unprecedented. Young people, and with them the entire world, are beginning to collaborate—for the first time ever—around a single idea: changing the weather.

For the Net Gener, it is discouraging to see who is in power and to watch how they behave. Partisan bickering and political game-playing instead of a collaborative game, negation instead of affirmation, blocking change instead of accepting it—all of this is alien to the younger generation. Those in the government and those elected to office are one-way communicators, out of touch and out of date. They allow the public to “speak” every two years at the ballot box. And these are the people to whom the question of net neutrality will be turned over. The corporations want to segment the Internet so that they can maximize profits from what has been a free good, available to everyone. The question of whether or not the net will remain the great equalizer will probably be decided by the Supreme Court, presided over by a Chief Justice who does not understand e-mail.

Not a wonk, I am probably better informed than most, and I value facts over ideology. So does the Net Gen. For us it is not Democrat or Republican, liberal or conservative; it is integrity, honor, and the desire to tell the truth. For Washington D. C., it is sound bites and talking points. By selling the “War on Terror,” the “War for Weapons of Mass Destruction,” and the need to Bail Out the Big Banks to a credulous public, the government has created what a Bush appointee called a “post-truth” society. How true. For the Net Gen, truth matters. The trust of the public in its leaders has been shattered, leaving a vacuum for the bloggers and talkers to fill. Another authority has to be appointed and anointed. For the older generation, still willing to accept one-way communication, sound bites stand for wisdom, tweets become knowledge, and talking points are the truth. The Net Gen finds it astounding when politicians change their stories and refuse accountability, even when they are caught changing their positions or lying or fabricating stories. The Net Gen is used to trawling the Internet and finding the facts and cannot understand how their elders can lie, get caught, pay no consequences, lie again, and so on. No wonder they are disillusioned by politics.

The Future

Tapscott does not entirely ignore the real problems brought by the Internet revolution. He points to the gap between the have-nots of technology and those who are active users. His main examples are the poor of the third world, but there are other have-nots, closer to home: the elderly, the close-minded, the technophobes who are falling further and further behind. Then there are the bad effects of the Web. One of the odd and underreported facts of technology is that the Bleeding Edge is usually made up of illegal or questionable practices that become outlets for pathologies: on-line gaming, Wall Street derivatives, pornography, pedophilia, on-line bullying. It is these Early Adopters who advance the Web by using it and creating new pathways, which means that these nebulous people are always one step ahead of the forces of law and order. Parents protest perfectly legal video games, such as the horrible Grand Theft Auto (which has awesome artwork), but forget that they watch and enjoy violent adult films such as Pulp Fiction. That said, the dangers of the Internet are real but, in the name of freedom, the Net Gen will defend the right of anyone and anything to prowl there. One can only hope that the same Supreme Court that granted freedom of speech to corporations will see fit to allow the Net to remain open to all comers.

Tapscott believes that “Net Geners are quick to recognize that the best way to achieve power and control is through people, not over people.” Good lesson. The Net Gen is intelligent enough to know that Obama cannot change Washington D. C. There are too many entrenched interests. The question has become not “what can I do for you?” but “what’s in it for me?” All that hard work, all that dedication, all that Hope, and no payoff, no results. People go into politics to get things done, to make things happen, and when nothing changes, you turn away. It’s like your last job: you learned something new and then moved on. How sad. The problem for the Net Gen is that the fifty- and sixty-something generation of Baby Boomers has no intention of changing or of letting go of power. They are impervious to the Net Gen. “They,” being the Big Banks and the Big Corporations like Big Oil, are so powerful and have such a stranglehold on America that “They” answer to no one. Big Business cares not about the Net Gen, neither as employees nor as consumers. By the time the Net Gen has its turn to come into power, its members too will be in their fifties, fully thirty years from now. The Baby Boomers joined the Tea Party in their maturity. What will the Net Gen do with their golden years? Tapscott concludes his book,

The big remaining question for older generations is whether that power will be shared with gratitude—or whether we will stall until a new generation grabs it from us. Will we have the wisdom and courage to accept them, their culture, and their media? Will we be effective in offering our experience to help them manage the dark side? Will we grant them the opportunity to fulfill their destiny? I think this will be a better world if we do.

Suggested readings from Don Tapscott’s Bibliography:

Beck, John C., and Mitchell Wade, Got Game: How the Gamer Generation is Changing the Workplace, 2004

Benkler, Yochai, The Wealth of Networks: How Social Production Transforms Markets and Freedom, 2006

Carlson, Scott, “The Net Generation Goes to College,” Chronicle of Higher Education, Oct. 7, 2005

Gee, James Paul, What Video Games Have to Teach Us about Learning and Literacy, 2003

Howe, Neil, and William Strauss, Millennials Go to College: Strategies for a New Generation on Campus, 2003

——Millennials Rising: The Next Great Generation, 2000

Keen, Andrew, The Cult of the Amateur: How Today’s Internet is Killing Our Culture, 2007

Moglen, Eben, “Anarchism Triumphant: Free Software and the Death of Copyright,” First Monday, August 1999

Prensky, Marc, Digital Game-Based Learning, 2000

Roos, Dave, “How Net Generation Students Learn and Work,” May 5, 2008

Tapscott, Don, and Anthony D. Williams, Wikinomics: Harnessing the Power of Mass Collaboration, 2006

Weinberger, David, Everything is Miscellaneous: The Power of the New Digital Disorder, 2007

Mentioned in his book but not included in his bibliography:

Carr, Nicholas, “Is Google Making Us Stupid?” Atlantic Monthly, July/August 2008

California’s Little Red Schoolhouse: Higher E-Education on Line



Note: I wrote this article in September of 2010, long before there was the word or concept called MOOC.

In the future—soon to be available for your grandchildren—there will be no classrooms. The era of the Little Red Schoolhouse will be over. As we watch the Budget Masters of the Educational Universe scramble for funds, we see them raise tuition and cut back on enrollment—a truly antediluvian solution, for the flood has already occurred. This Flood, our Deluge, is called the “Depression,” characterized by a lack of jobs, a lack of homes, and a subsequent lack of taxes to support public education at the college level. This regressive action of raising tuition and lowering enrollment on the part of California and other states can only be a stopgap measure. In the future, what the state will cut back and eliminate is the real prize: not the students but the expensive luxury of having a faculty, fully laden with bennies—health and retirement and a bad attitude. Do the math: faculty costs money, students bring in money. If this were your budget, which item would you eliminate? An expense or a source of income? Strangely, the state has eliminated both the expense and the income, and the students are being shortchanged.

Why are students, who really need to get out into the work force, being forced to compete for classes? Why are students asked to wait five or six years to graduate? College classes are being cut, and inquiring minds want to know why. Because the faculty, even the part-time teachers and graduate students, are expensive, the simple short-term solution is to eliminate people. It is not the classes the university system in California is cutting; it is the faculty who are being eliminated, and the effect of slashing the faculty is a cutback in the number of classes. Although the goal was to save money, the result is a self-fulfilling prophecy: cut the faculty, cut the classes, cut the students, cut the income. Impossible as it seems, budget cuts can also result in a cut in income.

Is there a solution to this impossible problem the state has created for itself? But wait: is what we see as a problem, cutting classes, really a solution in disguise? Is the state putting into action a long-range plan of getting rid of faculty on a permanent basis? In the September 5th issue of The New York Times Book Review, Christopher Shea wrote about “The End of Tenure.” Shea discussed two recent books, Higher Education: How Colleges are Wasting Our Money and Failing Our Kids—and What We Can Do about It by Andrew Hacker and Claudia C. Dreifus and Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities by Mark C. Taylor, whose writings on this subject I have been following. Both authors question the efficacy of tenured professors and what Taylor calls the “education bubble.” The fact is that university education is morally unsustainable. It is simply immoral to ask either the students or the faculty to support or to countenance a system of tenure that privileges the few at the expense of the many. It is untenable to put forward an ideal of education open to all, on one hand, while sustaining within the same system a hierarchical pyramid of exploitation of junior teachers. Building a structure upon the disrespect of the “haves,” the tenured members of the faculty, for the have-nots, the “part-timers,” has unfortunate ethical consequences, for as Shea remarked, “The labor system…is clearly unjust.”

But it is unlikely that the state of California cares whether or not young talent and new ideas are being crushed beneath the chariot wheels of the privileged faculty who, after years of expensive research paid for by taxpayers, will produce a book read by a dozen people (what Taylor calls “overspecialized research”). The state is interested in eliminating an expensive luxury, and that would be faculty, however privileged or exploited. So here is another question: how can it make any financial sense for every community college in the state of California to teach (re-teach) the same course in different classrooms at different times throughout the state? Why should every California (University or State) campus offer the same requirements in endless multiples, semester after semester, year after year? Visually, the result of such needless repetition is not unlike a mise-en-abyme: looking into an endless corridor of duplication of nearly identical courses. In what universe does it make monetary sense to duplicate the efforts of many, many faculty members, to duplicate many, many classrooms, to build many, many physical plants, called campuses, to feed, house, shelter, and support thousands of people for two, four, even ten years, counting graduate school, when all these students could be taught in the realm beyond the campus—cyberspace? Why contribute to pollution by building expensive physical plants for classrooms, which must be heated and cooled? Why encourage students to drive to school, clogging freeways and expelling pollution? In the past the answer would have been that students have to come to school, emphasis on “come” as in “get to” as in “arrive” as in “be there.” But no more. Technology has changed the tradition of “going away to college.”

The shift is already underway. For at least a decade distance learning has been offered as an alternative or a substitute for on-campus learning. Indeed, some professional schools are already all e-based. Of course, these for-profit colleges have, in the eyes of academic snobs, given distance learning a bad name. Tenured faculty in the University of California are solidly against the notion of teaching on line, but for all the wrong reasons. It is true that e-education is the solution to the expense of huge campuses, the bloated salaries of faculty and administration, the exploitation of “lesser” teaching staff, and the damage to the environment caused by commuting. It is also true that, whether they like it or not, the State will bring about learning via computers, slowly but surely. The recent cutbacks in faculty, like the last round of cutbacks in the 1990s, will be permanent. With little fuss, more and more classes will be put on line. It is well known that many professors who desperately need the work have been developing on-line classes that then become the property of the institutions that hired them. The professors get a (very) small salary for their services, and the colleges get the money for as long as the class runs. It should give the elite teachers some pause to realize that the classes of the future are being written by those they consider too “inferior” for tenure.

The objections of established faculty to distance learning are well taken, but for all the wrong reasons. In an effort to reproduce the effect of a real classroom and a real teacher, the set-up for on-line classes is still that of the Little Red Schoolhouse, complete with the student, the textbook, and the teacher, lacking only the Little Red Apple. Distance is the only difference. The software for course management has tended to replicate the ideal or traditional classroom experience, valuing “class discussions” and “student participation,” recreating a “group of learners” who must make an on-line appearance at a stipulated time. The demand for student “presence” is intended to make sure that the students are actually “attending” the class. Ironically, there is no way of knowing if the “actual” student who is enrolled is “present,” or if a paid substitute is “taking” the entire class for a fee while the “real” student is out having fun. The “teacher” is present, making scheduled appearances, guiding and leading and teaching unseen students, but as those of us who have taught these courses know, the time and effort expended by the virtual teacher explodes exponentially, to the point that a cost-benefit analysis reveals that the cost in the teacher’s time greatly exceeds any monetary benefit to the instructor.

In the past, such an investment in teaching would pay off for the beginning educator in a full-time job. But these jobs are in the process of being eliminated in favor of asking part-time people to put in many more hours than they would in a “real” classroom. The result is that many veteran teachers simply opt out of these rudimentary and sentimental cyber Little Red Schoolhouse classrooms, leaving the field to those willing or inexperienced enough to be unable to say “no.” Because these cyber classrooms and course management systems are modeled on a web replication of a real classroom experience, their scope is deliberately limited to only what the individual teacher can handle. In other words, the hours put in by the teacher are expanded, but not his or her pay, not the number of students, and not the amount of money coming in to the school.

Already the teachers working on line are lowering the amount of education the students get for the sake of their own survival. “Lectures,” for example, mandatory in a “real” classroom, have been eliminated on line. It is impossible to replicate the sheer amount of information given in a classroom lecture in an on-line situation. In the virtual world, the students need only read the text, answer questions, engage in virtual discussions, and take tests based upon the book’s content. Even without attempting to provide lectures as posts for the students to read as supplements or explanations of the textbook, the burden of caring for individual students, instead of presiding over a group, is overwhelming to the teacher. Little is gained by the student, the teacher, or the school through continuing these old-fashioned methods in a format that is antithetical to the Little Red Schoolhouse. One of the great virtues of distance education could be the sheer lack of the classroom. In cyberspace, the student can progress at his or her own speed and finish a course in, say, a week and be done with it. And indeed, this is exactly how the for-profit colleges allow students to work. But in the traditional colleges, the experience is drawn out over a semester because of sentimentality and nostalgia. The professor who used to be able to leave the classroom and leave the students behind is now “on call,” like a country doctor, at all times of the day, every day.

The current course management practices in distance learning insist upon in-class or on-campus methods of teaching that prevent a serious examination of the possibilities of cyber learning. The only element provided by distance learning is distance, an alternative to attending a class on campus. But Blackboard, Moodle, Elluminate—all of these course management systems, no matter how nostalgically they are constructed—have the technological seeds for expansion in scope. Indeed, the only factor holding back the capacity of the virtual classroom and its student enrollment is the lack of faculty willing, qualified, and trained for distance learning. The only way to increase the number of students in a virtual classroom is to have the class taught by a team of collaborating teachers, a rather clumsy solution. At this point, we are stuck with two problems: the limitations of the teachers and the limitations of the time, a semester or a quarter, in which the courses are taught. How are these problems to be solved?

Let’s start by scuttling the old model of the Little Red Schoolhouse. We shall see that when the limitations of the Little Red Schoolhouse are eliminated, all of its traditional elements will be wiped away—except for the one determining reason for the Schoolhouse: the students. If you eliminate the limitations of the virtual “classroom,” you can have unlimited students. Once you expand the scope of learning, not teaching, the Little Red Schoolhouse dissolves. Since “teaching” such a course, to thousands and thousands of students, will be impossible for any one human being, the professor will also have to be dispensed with. The result is the replacement of the faculty with course management systems and of the campus with cyberspace: a good financial trade-off for the state, a vast increase in the number of students served, and a consequent flood of pure profit. We are imagining Life After Faculty.

What would education look like? Let us begin by eliminating “education.” After all, we just eliminated the faculty. We must rename “education.” How would such a transformation work? If this theoretically unlimited classroom is where distance learning is headed, then Step One will be the development of canned courses. That is, the many duplicated courses in, say, Survey of Western Art I throughout the state will morph into THE COURSE, THE STANDARD COURSE for a particular subject. The personalized course, “brought to life” by an inspirational teacher who sparks the dullest pupil’s brain, will vanish. Traditional courses taught by individual teachers in their own way and with their individual expertise will be replaced by THE COURSE, developed by a team of educators and experts. Because the aging tenured faculty will not want to be left out of the inevitable process, the “educators” will be the soon-to-be-retired specialists in a field, such as art history, who will, as a committee, write the course content, assignments, requirements, and tests. The “experts” will be technical advisors, who will set up the course materials, making them suitable for computer learning.

Cyber learning will necessarily be different and will take into account—unlike vestigial courses offered in today’s vestigial classrooms—the fact that the students are NOT in a classroom, are NOT learning through listening or through teacher demonstration or, in the case of art history, pointing at the object. The students will NOT be limited in time by a traditional semester or quarter system, which will also be eliminated. The students will NOT be in contact with a teacher or with one another. They will not be “on campus” at all. The Second Step will obviously be the complete elimination of the faculty. Thousands of individuals in all academic fields will be, as the British say, “redundant.” No longer necessary. It is possible that the elimination will take place through attrition: the Old Ones will be the educators on THE COURSE committee, and the Young Ones will simply fall out of graduate school, as young seedlings fall on barren ground. The Young Ones will have neither a course nor a campus to sink their roots into. More on the fate of the Young Ones later. As the Old Ones are retired, willingly or unwillingly, all of their particular courses will be replaced by STANDARD CANNED COURSES, virtually provided, without individual teachers guiding and directing discussions and learning. Gone will be “real” classrooms with their uncomfortable desks, their chalkboards and whiteboards, their PowerPoint presentations, their Blue Books, their hierarchies of the Smart and the Dumb, and the presence of the all-knowing authority figure.

If on-campus classrooms are eliminated and the students stay home, the impact upon the university and college campuses will be enormous. Campuses will shrink to labs and administrative buildings, and even these buildings will be few in number, serviced by a small parking garage and perhaps a nice cafeteria. The rest of the campus might become a verdant park, including playing fields for college sports. The Administration of Higher Learning, now mainly computerized registration, will become increasingly centralized, and Deans, Chairs, Provosts, and the like will become unnecessary relics. Along with the faculty, they will be discarded. Administration will consist mostly of financial officers and tech personnel, for “staff” will shrink in numbers, although unlike the professors, the staff, like Cher, will not disappear.

Let us return to the impact of course management systems and of the disappearance of the teacher upon education. Step Three will be the redefinition of “education.” Many sentimental and nostalgic people have already recoiled from this picture of the future in instinctive horror, picturing the end of the college campuses with their academic groves, the swath of green, the quad crisscrossed by connective paths, the brick buildings, ivy climbing up the elderly walls, the book-laden students walking in clusters, scurrying to class, talking to friends, making connections, mating for life, with autumn leaves drifting down in anticipation of the first football weekend, leading to a solemn graduation ceremony, a rite of passage that requires medieval robes, complete with cowls and mortar boards, perched jauntily upon heads old and young…. How could we let all this tradition go?

The answer is: very easily and very quickly, in the face of a faster and cheaper and more efficient alternative. In the case of the automobile it was the people who made the choice: we gave away our horse, we turned our barn into a garage, the blacksmith became the auto mechanic, and we all learned how to drive motorized vehicles. In the case of what we sentimentally call “education,” we will have little choice; we will not make the decisions. The Budget Masters can and will make the decisions for us. Finances and demographics will dictate the future. Once software suitable for mass education without teachers is developed, there will be no turning back. Why maintain thousands of teachers and thousands of classrooms when all of these expensive physical entities can be eliminated? Why maintain the verdant campuses and ivy-covered halls if no one is at home? Campuses with students will no longer make financial sense—in a very few years.

So what will cyber-education look like? Without discussing the fate of college football and other sports, education will become mass dissemination of units of linked information. So “education” will be replaced by “dissemination,” and “knowledge” will become “information.” Thinking—one of the traditional academic goals and by-products of education—will become a SKILL SET to be learned or, shall we say, consumed and applied. Courses traditionally have combined content and critical thinking and developmental and evaluative practices of reason. Cyber courses will be split between the disseminating of information, which must be mastered, and the instruction of analytical skills, which must be learned. Students will not be encouraged to critique, say, the economic system, but will learn of a variety of economic systems throughout time and will receive training in critical evaluation in an unconnected course. The student may or may not apply any of the analytical or critical skill sets to any of the information gained. What use the student makes of the courses taken is up to the student and his or her needs and inclinations.

Let us imagine the California college experience of the future. The Community College system, now a centralized entity, will provide basic foundational two-year classes. The California State University system, similarly constructed, will provide the third and fourth year required courses. The University of California system will provide specialized high-level courses for the various majors. In fact, over time, these levels could simply become all one University System, eliminating the now unnecessary separations. To the extent that separate campuses retain their names or still exist, these greatly reduced local sites will be used solely for the majors that need lab work—such as the arts, music and dance, and the sciences and sports. Because most sciences can be done on line, we can envision campus life consisting of two dominant groups, the artists and the jocks. Everyone else will stay home.

Where will “home” be? Anywhere and everywhere. Anyone can take these courses from any location. All you have to do is pay. Gone is the admissions process, except for the jocks, who need to try out and be selected for aptitude and athletic talent. Admissions to a specific university have traditionally served two purposes: one frankly elitist and the other practical. Elitist hierarchies have been created: certain University of California campuses are considered “better” than others because the students are “better” because their entry grades are higher. The University of California campuses are, in turn, valued over their Cinderella sisters, the California State system, for the same reasons. The Community Colleges are used by all and scorned by everyone. Practical limitations of campus space have resulted in limitations on enrollment, leading to selective admissions of more or less qualified students: the “best” go to Cal Berkeley and the “worst” go to a community college. The professors are paid accordingly, rewarded accordingly, and worked accordingly.

A professor at a UC school will teach three or four classes a year and will be paid three times more—at least—than a Cal State counterpart who must teach eight classes a year. A community college teacher will also have four classes a semester, but unlike his or her higher-up counterparts s/he will have no graduate assistant to help with research or grading. As one goes down the hierarchy, the workload and the inequality increase and the salary decreases, based upon the assumption that some professors are “better” than others and must therefore be treated in a more privileged manner and that some students are “worse” than others and deserve a supposedly lower-quality education. All of us who have been through the UC system, taking the occasional Community College class, know that one can have an amazing teacher in the “lower depths” and a simply terrible teacher at the University.

But because the vestiges of that unjust hierarchy will undoubtedly remain, it will be the university professors who will probably survive as the “educators,” inventing THE COURSE and putting the other teachers out of a job. That said, the students would benefit the most from the elimination of this ancient architecture of privilege. With admissions based upon campus space no longer necessary, the game will change. The goal is no longer to pass on privilege from family to family, from social class to social class. The idea is now to educate the whole population. Everyone starts at the same level and everyone finishes at the same level. Excellence is now based solely upon how well one does in the courses. There will be no hierarchies among campuses; there will be only one degree from one university. Everyone else simply buys a course—pays for the information—on the open market. The trick is that the purchaser “owns” the course only when the course is completed. Initially what you pay for is the right to “inhabit” the class. Think of buying a home: you provide a down payment, but you do not “own” the house you inhabit until you complete your obligations, that is, make your payments in full, paying off your mortgage.

Students will be allowed to “inhabit” a course for a limited period of time, say two years. If the requirements are not completed in two years, then the class is “foreclosed” and the student needs to repurchase it or move on to a more suitable course. The student may get out of the course at any time, but no money will be refunded after a certain length of time. Because students must pay monthly “rent,” the course will cost more for those who take longer to finish. Think in terms of insurance payments on your home or fees on your condominium. The course can be as cheap or as expensive as the student allows or can manage. Certainly the smarter and more prepared students will finish faster and cheaper than those who have less aptitude or time, but the former group has always been advantaged over the latter. Some buyers may never put together a degree; some may purchase particular courses for specific purposes; others will obtain a university degree. The revenue stream coming to the State will be large—because the student pool has enormously increased—and will be continuous—because humans, by their very nature, will procrastinate on their courses and pay “rent” for months or even years.
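The economics of this “course as mortgage” scheme can be sketched in a toy calculation. All figures here (the down payment, the monthly rent, the dollar amounts) are hypothetical illustrations; only the two-year foreclosure window comes from the scenario above.

```python
# Toy model of the "course as mortgage" pricing scheme sketched above.
# The dollar figures are invented for illustration; only the two-year
# limit on "inhabiting" a course comes from the scenario in the text.

FORECLOSURE_MONTHS = 24  # after two years the course is "foreclosed"

def course_cost(months_to_finish, down_payment=100, monthly_rent=25):
    """Total paid for a course taken over `months_to_finish` months.

    Returns (total cost, foreclosed?). Rent stops accruing at the
    foreclosure cutoff; exceeding it means the student must repurchase.
    """
    months_billed = min(months_to_finish, FORECLOSURE_MONTHS)
    cost = down_payment + monthly_rent * months_billed
    foreclosed = months_to_finish > FORECLOSURE_MONTHS
    return cost, foreclosed

# The fast student pays far less than the procrastinator:
fast = course_cost(3)    # finishes in a quarter of a year
slow = course_cost(20)   # drags the course out
lost = course_cost(30)   # foreclosed: pays two years of rent, keeps nothing
```

The point of the sketch is simply that revenue scales with procrastination, which is why the essay predicts a continuous income stream for the State.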

Without any admission requirements, the student body, with the aid of translator widgets, will be international. So what are the students buying? The students are buying not education but access to information. Unlike “education,” now a quaint practice in quotation marks, information will not come from textbooks written by authority figures and will not be personal, ideological, or value-based. Information will be disseminated at a low literary level—almost like bullet points. But the lines of basic facts will be laced with links to documents of all kinds, from primary sources to commentary, all available and ever hyper-expanding for the students to peruse. One of the arguments, made for decades, as to why women and people of color are excluded from courses such as history and literature is that a full and complete portrayal of the role of African Americans in the history of the United States would take up too much time in the traditional three-hour class in a traditional semester.

There are only so many classroom hours available, and the need to teach the accomplishments, however dubious, of the white male must take precedence. A single semester or a single year is insufficient to include Virginia Woolf or Georgia O’Keeffe in a course in literature or art. Although the demand for the inclusion of women and people of color has resulted in the insertion of tokens here and there, American education has traditionally been Eurocentric, white, and male. One of the problems, a very real one, is the training of individual teachers, who are forced to (over)specialize. A professor of English literature will have concentrated on Chaucer, will be required by his or her university to publish or perish in a specific and narrow area of the field, and will be discouraged from developing other fields of concentration, such as contemporary Anglo-Indian authors. In cyberspace there are no such limitations—not the teacher’s time, not the teacher’s knowledge. In the cyber world there are only links that propel the student into the endless pleasures of hyperspace.

Students will be required to learn the history of the United States as the histories—plural—of the genders and ethnicities of America. The result will be “histories” written by experts found through links to articles or books: no one teacher is expected to attempt to cover all the materials. The students will be given the benefit of many scholars in the field. Information will be theoretically limitless. There will be no professor in the classroom explaining why Langston Hughes cannot be taught because Ted Hughes is more important than Sylvia Plath and so on. Authority is gone, guidance is extinct, mentors are absent, as are the idiosyncratic and unqualified and abusive professors who try to impose their wills upon helpless students. The student is the “activated learning agent” who browses and chooses which lines of information to follow, evaluate, and develop. Assignments and tests provide the only direction.

Somewhere in the cyber background are computerized evaluations of the students’ homework materials, or perhaps vestiges of professors, now nameless survivors of the college and university system, who are given the tasks of writing assignments and tests and making sure the computer programs take note of the correct “key words.” Of course, one can do an assignment over and over until the desired grade is obtained, and, ideally, one can learn through re-doing. Some few of these students will be attracted to the possibility of endless learning and limitless information gathering. Those will be the future scholars who may actually come into personal contact with others of their kind in a specialized area of the university system called “graduate school,” but there is no need to enter a campus. Graduate school can be as “virtual” as undergraduate “education.” For those who remember the tyrannical and politicized and competitive atmosphere of graduate school, the simple pleasure of pursuing a train of thought in solitary splendor in cyber archives will be quite sufficient. Indeed, graduate school will shrink back to its original dimensions: a place for professionals, such as lawyers, doctors, and architects, and a place for those with the income and leisure time to concentrate on a field of study for a decade or more.

Once there are no jobs at universities at the end of graduate school, students will move on to other professions, leaving behind only the dedicated learners, the ones who truly care about what we used to call “knowledge.” They, the solitary, the few, will be entrusted with the task of creating new information by synthesizing a vast array of floating data and documentation. But their task will be fundamentally different from that of current scholars, who maintain the pretense of “originality.” These postmodern writers will be the true bricoleurs, or, should we say, bricoleurs who will admit to the fact: they do not create; they assemble units of usable information for the students to consume and assimilate. Instead of being limited to “publication” in “peer reviewed” professional “journals” or university supported “presses,” the new cyber scholar simply posts his or her writing on his or her website. Interested readers can find this work and can make contact with the writer to ask for additional information, to exchange thoughts and resources, and so on.

People who wish to be this new kind of scholar can develop their own specializations within scholarly territories that are now “unguarded” and open because “gatekeepers” can no longer function. Information is now everywhere, free for the taking. True, we all have memories of that special professor who mentored and encouraged us, but mentors will continue to exist in cyberspace. In cyberspace, no close-minded professor can tell the cowed graduate student what s/he should or should not read, what s/he should or should not believe. Authority has almost no meaning online. The information “market” determines what it needs and takes it. Just as “education” has been redefined and the professor eliminated, the “student body” also becomes a sentimental artifact of the past.

This elimination of one of the major means of socialization of young people (and old people) will probably be only an extension of what will be happening in the workplace, with more and more people working from home. The trade-off is losing an incompetent or tyrannical professor or boss and gaining autonomy and independence and success based upon merit rather than favoritism or looks or privilege. The losses must be considered and constitute a real problem: money is saved, revenues are increased, the population is more efficiently informed and trained, but human contact is drastically altered. Perhaps the germ of human socialization in the future is already upon us: dating sites provide hook-ups, and there is no reason why there could not be similar sites for college students who will meet online and create social groups. Although Facebook was set up so that linked students could study for an art history exam, this social network is not specifically directed towards students. Perhaps one can envision, as a positive possibility, the expansion of one’s circle of acquaintances to include people anywhere in the world. People are resourceful in their desire to be together. They will find a way to create new kinds of communities. We can call this phenomenon “global info-cation.”

If you have found this material useful, please give credit to

Dr. Jeanne S. M. Willette and Art History Unstuffed. Thank you.

[email protected]

Climate Denial and Al Gore’s The Assault on Reason (2007)


One of the great “what ifs” in American history is “what if Al Gore had become president in 2000?” Notice I did not say, “What if Al Gore had won the 2000 election?” For some, George W. Bush did not defeat Al Gore; instead, the Supreme Court, in what many left-wing thinkers consider a coup d’état, handed him the presidency. Who knows who really won? The counting of the votes, hanging chads, butterfly ballots and all that, was never completed but was halted by the Court. The Republican response to the Democratic dismay was to “suck it up” and accept the loss. While this transfer of the presidency to George W. Bush has never left the consciousness of the Democrats, and while we will never know who actually won the most votes in Florida, one thing we do know for certain is what would not have occurred if Gore had become president.


Imagine what we would not have had

  • No war in Iraq
  • No “discretionary” wars
  • No Patriot Act
  • No torture, no torture memos,
  • No wholesale spying on the American people
  • No Guantanamo Bay
  • No Abu Ghraib
  • No flouting of the Geneva Convention
  • No privatization of the military
  • No Halliburton, no KBR
  • No wars fought on credit cards
  • No unfunded prescription drug programs
  • No government lying
  • No outing of CIA agents
  • No inaction on Katrina
  • Job outsourcing offset by jobs at home
  • No Great Recession
  • No Bush Tax Cuts to the Wealthy
  • No massive debts
  • No union-busting governors
  • No Defense of Marriage Act
  • No polarization between political parties
  • No John Roberts
  • No Samuel Alito
  • No Citizens United Decision
  • No Tea Party
  • No Sarah Palin
  • No Michele Bachmann
  • No Barack Obama

What we would have had:

  • A Short War in Afghanistan
  • A Green Economy
  • Green Jobs in America
  • Smaller Wall Street Crash
  • Illegal Immigrants made legal tax-paying citizens
  • The Protection of Reproductive Rights
  • The Protection of Voting Rights
  • Well-funded Social Security and Medicare and Medicaid
  • Compromise and Negotiation
  • A Respect for Truth and for Reality

In a book that was little noted and will probably not be long remembered, Al Gore, the would-be president, reviewed the work of the Bush administration and took the measure of how America changed under George W. Bush. Whether one admires Bush or not–and sadly there are few who do these days–one cannot argue that his tenure in office did not have an impact upon the nation. Each president teaches the nation a series of lessons, some of them with lasting repercussions, some good and some bad. Lyndon Johnson taught us that presidents lie. Richard Nixon taught us that government is not to be trusted. Ronald Reagan taught us that greed was good. George H. W. Bush taught us to use racist lies as a campaign strategy. Bill Clinton taught us that presidents have sex while in office. George W. Bush taught us that it was just fine to spend money we do not have and have no way of paying back. Barack Obama taught us that resistance is futile. Al Gore taught us how to lose gracefully. Al Gore also taught retired public servants how to make the most of their retirement and how to maximize their experience for the public good. Of all the ex-politicians, Al Gore has contributed to the globe perhaps the most admirably, warning the world of the coming catastrophe of Global Warming or Climate Change or whatever you want to call it. Only Jimmy Carter and Bill Clinton have equaled Gore in public service after serving in elected office. We are still waiting to see what the Bushes, Senior and Junior, will do to show that they deserved the faith their voters put in them to serve the people.

We know what happened under George W. Bush. But what if Gore had been president? What are the arguments that things would have been better as the result of a Gore presidency? First, Gore would have retained the surplus accrued under Clinton. There would have been no tax cuts for the rich. So how would all that extra money have been spent? Undoubtedly, the deficit would have been paid down over time. But there are always rainy days and the unexpected, and we would have been financially prepared for eventualities. During the first decade of the twenty-first century, there were two events that could not have been planned for. This raises the second point: would there have been a September 11th?

While it is doubtful that the terrible, insane plan to turn planes into weapons could have been detected, there would have been much more awareness of the dangers of Islamic terrorism in the Gore administration than in the Bush administration. The Bush State Department was fully briefed by the outgoing administration on the threats from Al Qaeda and chose, famously, to ignore the information. Third, while we can assume that, regardless of increased vigilance, 9/11 would have happened anyway, we also know that there would have been no war in Iraq. Certainly after September 11th, America would have fought what probably would have been a short and sharp war in Afghanistan. How short, we cannot know, but certainly not the ten-plus years we are now witnessing of what we call “forever wars.”


Another cost of the Bush wars was the very expensive privatization of the military. Once the military took care of itself, from cooking to cleaning to fighting. Under the Bush administration, the basic cost of running a war was enormously increased by outsourcing what had been standard military tasks to private companies, which proceeded to overcharge the government. It has long been known that the Defense Department had always been the target of enrichment scams on the part of civilian businesses and there were attempts, however feeble, to keep the outrageous overcharges under control. Under the Bush administration, the ceding of the military to private enterprise exploded the cost of the war beyond what it would have normally been.

And none of the increased costs were paid for. During the Second World War, the military was self-sufficient and the citizens paid the costs, one day at a time, through the sale of war bonds. Under Bush, instead, no-bid contracts were handed out to everyone from electricians to caterers to commandos, effectively doubling the personnel and causing costs to spiral out of control. It is doubtful that, under President Al Gore, the wars would have been either plural or privatized. Without the wars, there would have been no Patriot Act, no wholesale spying on the American people, no Guantanamo Bay, no Abu Ghraib, no torture memos, no flouting of the Geneva Convention, no decline in American credibility, and no loss of American honor.

Fourth, this war would have been paid for. The two Bush wars–Afghanistan and Iraq–were the first in American history to be waged without a tax increase and fought totally on borrowed money. Fifth, it is unlikely that going into two wars on credit cards would have been coupled with another charge on the card, the unfunded and unpaid-for prescription drug plan for Medicare. Although it would be safe to assume that none of the budget-busting events that happened under Bush—two wars, a tax cut, and a prescription drug deal, all unpaid for—would have occurred under a Gore administration, it would not be safe to assume that there would have been no financial meltdown. The second unforeseeable eventuality, the crisis of 2008, could well have come about regardless of who was in charge. The only real question is how bad it would have been.

The Gore administration would have, in all probability, continued the deregulation of the financial lending industries undertaken by the Clinton economic team. What is unclear is the extent of the financial excesses. During the Bush years, Wall Street came to resemble Las Vegas even more than usual. The stock market and its minions take their cues from political leadership, and the market clearly followed the lead of the Bush administration and adopted the philosophy of short-term goals and short-term gains, to borrow and spend with no thought to the consequences. The market will always take advantage of the slightest permissive loophole and even invent a few more but, under Bush, there was clear permission to binge.

Recall that after 9/11, the president urged the nation to shop. Credit cards were flashed and homes were used as the proverbial piggy bank and, thanks to “liar’s loans,” value was extracted from what was the homeowner’s major financial asset. The market may always be counted on to behave badly and selfishly but, under Bush, the basic fabric of responsibility and morality and ethical behavior became openly unraveled. The bills finally came due and the entire structure, built on fantasy, came crashing down. Would the Gore administration have bailed out Wall Street?

It is possible that, given the precedents, such as the Savings and Loan debacle, the answer would have been “yes.” But it is probably safe to assume the crash would have been much less severe and the money would have been there to pump into the economy. Not only that, but the economy would have been in much better shape and could have better absorbed such a blow. Under the Bush administration, there was no job creation and no rise in middle class income. Jobs were going out the door and traveling to other nations with cheap labor. Tax incentives were created to encourage outsourcing and corporations were allowed to not pay taxes. Of course with the high cost of labor and the stringency of regulations in America, all the businesses that could do so shipped their jobs overseas.

This practice was nothing new and had been going on since the 1970s. Outsourcing is not a bad thing in and of itself. American consumers have certainly enjoyed affordable commodities, from television sets to automobiles, and it makes sense to allow certain societies to specialize in manufacturing if the advantage exists. The problem is that, under Bush, these lost jobs were never replaced. Real wages went down and, when taxes were cut, especially on the people who continued to experience a rise in income, revenues fell sharply. With not enough coming in and with huge, unprecedented amounts of money going out, a deficit rapidly replaced the surplus and America went into a deep financial hole.

With the Afghanistan War over, with the rich paying their fair share, with no unpaid-for prescription drug plan, with no war in Iraq, and with a healthy economy, the Gore administration would have been ready for the Wall Street Crash of ’08. The Bush administration encouraged jobs to leave America and did nothing to encourage job creation at home. Eighth, here is where there would have been an enormous difference between Bush and Gore. Environmentally conscious, Gore would have started green industries in America, creating green jobs. Green jobs are the kind of jobs that cannot be outsourced, and the range of these kinds of jobs is enormous, offering opportunities to men and women with a wide range of skills and education. In addition, green jobs would have been located everywhere, eliminating the pockets of joblessness and limiting the dependence on federal spending seen in the southern part of the United States, for example. People could have actually afforded their homes, paid their bills, and, who knows, maybe there would have been no total meltdown that impacted homeowners. Maybe Wall Street would have had to suffer for its own excesses. Who knows?

Given the aging Baby Boomers, would there have been, under Gore, an upswing in socialized medicine and health care? Or, to put it another way, would Social Security and Medicare and Medicaid have been in financial trouble? The crisis in these government guarantees of public health is due to the lack of taxes to support them. With normal tax revenues, there is no problem for the future of any of these programs. It is even possible that, under Gore, Americans would have been allowed to buy drugs on a competitive market, even allowed to buy drugs in Canada, bringing down the cost of health care. But there is something more to consider. Under Gore, would there have been a Democratic push to legalize illegal immigrants? Given the rewards, why not?

Legal citizens pay taxes, instead of sending the surplus to Mexico, because they now have a stake in their new nation. The influx of income would be felt immediately in local and state and federal governments. People of Latino descent are a fast-growing and young demographic, more than filling the spaces left by the Baby Boomers, who will very shortly stop paying taxes and start drawing out the contributions made toward their retirement. The current budget “crisis” could be solved simply by ending the Bush tax cuts and by making illegal aliens legal. Legal citizens can vote and, in gratitude, they would vote for the party that had given them citizenship.

Republicans know this fact of life and will continue to obstruct Democratic efforts to solve the “immigration problem,” which, like many of the so-called “problems” we are told we have, is of Bush’s making, because they know that the Republican base is a small one. The idea of a permanent Democratic majority is simply unthinkable to the Republicans; even Bush knew that, but his own party blew the opportunity he gave them. The Republicans can offset their smaller numbers with larger campaign spending, which is now anonymous and unlimited, thanks to the Supreme Court’s infamous Citizens United decision. And that decision brings up another major difference between the administrations of Bush and Gore. Under Gore, there would have been no John Roberts and no Samuel Alito and no rightward turn of the Supreme Court. Instead, Gore would have nominated two more liberal or neutral justices to the Court, and there would have been no rollback of civil liberties and no decisions that favored corporations over citizens such as we have seen over the past decade.

Finally, the last thing that we do know is that without Bush and the rightward drift of his administration, there would have been no Barack Obama. Obama, a conservative Reagan Democrat, was able to position himself left of Bush only because of the extreme right-leaning positions taken by that administration. Obama’s mild Republican health care policies, which seek to shield American citizens from predatory health care companies, were a shock due to the strong contrast with Bush’s laissez-faire attitude towards the poor and the middle class. Without a right-wing Bush administration, there would also have been no Sarah Palin. The Bush administration prepared the ground for an extreme Republican agenda and for extreme Republican candidates who do not read newspapers and who want to pray the gay away.

At the end of a Gore administration, the next president could have been a moderate Republican, like Mitt Romney, or another environmentally conscious Democrat. Most importantly, it is possible that without the foil of the disasters of the Republican Party in 2008, Barack Obama would never have been a viable candidate. But he became plausible and possible as a future president because he did two things: he opposed the now-hated Iraq war and offered the nation hope. It is doubtful that, whoever had been president in 2008, there would have been the latest upsurge of the John Birch Society, the Tea Party. The Tea Party emerged, as did Sarah Palin, on the fertile soil of the Wall Street Bailout. With a good economy, there would have been no need for a faux “tax revolt.” Strangely, the fallout from the Wall Street meltdown ultimately benefitted the very party that was responsible for the debacle. Today, when nothing substantial gets done in Washington, it is hard to imagine what might have been. As unimaginable as it now seems, the Democrats and the Republicans would be talking to each other today.

Just as Ronald Reagan allowed greed to emerge unchecked in America, George Bush allowed and encouraged a take-no-prisoners approach to politics. Taking a page from the book of his father’s late, unlamented advisor, Lee Atwater, the campaigns of the younger Bush held that no trick was too dirty, no lie was too extreme, as long as it worked politically. The result was the birth of scorn for “reality-based” narratives, and the door was opened to stories that had no basis in fact. It was fine to lie about the weapons of mass destruction, it was OK to reveal the identity of an officer of the CIA, just as it was perfectly acceptable to torture and to hold people indefinitely without charge or trial. If one side believes in an untenable scenario and castigates anyone who wants to tell the truth, then compromise is impossible. Once facts become meaningless, the party that believes in non-facts can neither see nor agree to other points of view. When the Bush administration showed its willingness to buy into improbable versions of actual reality, the way was cleared for political gridlock. Without an agreement on basic facts and basic truths, no actions could ever be taken.


What the Bush administration taught us is that there was no accountability. Would the Wall Street Robber Barons have been allowed to go free in a post-Gore administration? Probably not, but Obama, following a regime without penalties, threatened the bankers with only Elizabeth Warren. But there is such a thing as accountability, and we, the middle-class American citizens, are still paying for old sins that we did not commit. In his best-selling 2007 book, The Assault on Reason, Al Gore does not mention any of the might-have-beens listed above. He simply outlines, in clear, precise language, the failings of the Bush administration. Writing before the Wall Street Crash, Gore is concerned with civil liberties lost and the campaign of misinformation that passed for “news” during the first decade of the twenty-first century. He is especially concerned about the spread of false information by a mass media that is controlled by corporations and political interests. Gore quotes Edmund Muskie, a former presidential contender later brought low by media manipulation, speaking in 1970,

“There are only two kinds of politics. They’re not radical and reactionary or conservative or liberal or even Democrat and Republican. There are only the politics of fear and the politics of trust.”

We all know that the next famous quote was “I am not a crook,” uttered by Richard Nixon. The president engineered his own demise by turning a small political misdemeanor into a massive, cancerous cover-up, bringing the term “Watergate” and all things “gate” into being to designate scandals that could not be overcome. Watergate, like the McCarthy Hearings, was played out on television to a fascinated audience dazzled by the cast of luminaries brought low. Watergate was a rare case of the truth coming out, of that truth having consequences, and of those responsible being held accountable. It would be the last time such a public political punishment would occur. Watergate was a story broken by a great newspaper, The Washington Post, and what lodged in the public psyche was the idea that newspapers, the print media, were the last resort of truth. Since Watergate, the public has spent more and more time passively consuming television in a one-way, no-exchange experience. As Gore points out, today television is the public’s main source of political news on government business, and newspapers are folding one by one.

Not only are newspapers dying while television ratings soar, but television viewing has become more and more of a niche experience. Unlike newspapers, where a range of news and opinions co-exist, television programs appeal to the fears and prejudices of the audience. Television exists to entertain and to make money for the owners, not to seek and find the truth. Furthermore, competition has greatly lessened among media outlets since the 1970s, and a few vast conglomerates control everything. Monopoly capitalism has captured the news, turning it into a source of revenue. In such an atmosphere, reason has no place.

Gore’s main thesis is that reason has been replaced by “dogma and blind faith.” The result is “a new kind of power” that is arbitrary because the public is not informed and cannot consent from an informed position. Gore also states that this power comes from “deep poisoned wells of racism, ultranationalism, religious strife, tribalism, anti-Semitism, sexism, and homophobia…” In such an atmosphere, the ugliness that always underlies any body politic is allowed and even encouraged to emerge. Real problems can be ignored while non-problems and fake crises distract the American people. The result is a replacement of our system of checks and balances with unchecked power and influence thanks to a “coalition” that serves their own interests, not that of the public.

Gore used the Iraq War and the systematic lies that led to it as a prime example of the techniques of distraction. It is now known that the Bush administration came into office with the goal of deposing Saddam Hussein and the administration’s spin machine diverted attention away from Osama bin Laden to phantom weapons of mass destruction. Anyone who disagreed with President Bush or brought facts to bear was dismissed as “unpatriotic.” Ideology replaced facts, faith replaced information, fantasy replaced history, and dogmatism replaced reason so that Bush could “benefit friends and supporters.”

The coalition, or the “friends with benefits,” that Gore describes is made up of a number of groups, or what Bush called “my base.” He lists “the economic royalists” who want only to eliminate taxation and regulation, an “ideology” which has an “almost religious fervor.” The public interest does not exist for these people. Indeed, in this ideology, any government program that aids the people is a disincentive that keeps them from working hard for low wages. The interests of the “wealthy and large corporations” have the highest priority for Republican ideology. The infallibility of this ideological position is buttressed by what Gore lists as “well funded foundations, think tanks, action committees, media companies, and front groups capable of simulating grassroots activism and mounting a sustained assault on any reasoning process that threatens their economic goals.”

True, Republicans have been trying to dismantle the New Deal and the prosperity of the middle class for eighty years, but Gore asserts “this is different: the absolute dominance of the politics of wealth is something new.” He traces the long struggle in America to create an equal society, which is also a struggle against monopoly power and corporate interference with the workings of government. Once, regulations made sure that there were many choices of media outlets in order to ensure competition. Under Reagan, Gore points out, media competition was ended when regulations were lifted, allowing vast corporations to gather many television and radio stations and newspapers into one bundle that spoke with a single mind, devoted to preserving the wealth of the wealthy. Any information that gets in the way of ideology is promptly distorted for the cause or spun in a favorable direction. Gore says of the Bush administration,

“I cannot remember any administration adopting this kind of persistent, systematic abuse of the truth and the institutionalization of dishonesty as a routine part of the policy process.” Gore states that the result of administration tactics was “to introduce a new level of viciousness in partisan politics.”

The Supreme Court, always compliant with right-wing agendas, helped President Bush gather unprecedented and unchecked power into the executive branch. The Bush doctrine became that whatever the president did was legal, a stance taken unsuccessfully by President Nixon. Bush was allowed to flout the American legal system and to disdain international laws. All Supreme Court decisions were made in favor of corporations and their powers and against the people, leaving the individual with no recourse, not even the right to a trial by jury. Bush was less interested in social issues than the later Republicans would be. He was far more interested in amassing the power to do what he wanted, whether it was warrantless wiretapping, searches without search warrants, or the “right” to put an unprecedented number of innocent citizens under surveillance for no particular reason. The public was not allowed to assemble freely, and any protestors were removed far away from the President and corralled in special sections so that Bush’s day would not be ruined by any sign of dissent.


Gore ends his description of the illegal and unconstitutional abuses of the Bush administration by stating what it would take to create the “well-informed citizenry” that democracy requires. He does not have much faith in television and puts his faith instead in the Internet. Gore warns that there are powers, corporate powers, which want to control the Internet by giving the content the rich and famous approve the green light of high speed and forcing the dissenters into the slow lane of endless downloads. This compartmentalization of the Internet into fast and slow ideologically structured lanes is a real and present danger. One can only hope that the True Believers and the Bloggers will keep protecting the last bastion of true participatory democracy. This book was published before the Bush presidency ended and does not account for the last days of the Bush Bonfire, when Wall Street burned. Reading The Assault on Reason three years into the Obama presidency is to recognize how totally the Bush administration ruined the very promising situation it inherited from the Clinton-Gore administration. One realizes that this is a group of politicians who were discredited to a man and woman but were never held accountable. They just got out of town and left the government in a shambles.

What was gained? What was the Bush Administration all about? Reading Gore’s book helps us understand that what was gained by the monied interests was a significant weakening of regulations of all kinds, a shrinking of taxes on the rich, an enlargement of subsidies even for the wealthiest corporations, and a lack of meaningful consequences when oil spills or chemicals leak or coal mines cave in and people die. Wall Street banks can demand money from taxpayers and then refuse to help the very same citizens refinance their mortgages while giving themselves record bonuses. Global Warming is now a hoax, and every time it snows, the right wing throws verbal snowballs at Al Gore. Every time there is a tornado or a flood or a drought, the same people call the federal government. Labor unions, especially teachers’ unions, are now the villains, and these groups are under assault so that more tax breaks can be given to the wealthy. States’ rights have made a comeback, and even Obama, a black man who should know better, says that states should decide whether or not to “allow” gay marriage. “Compromise” and “negotiation” are bad words for a person whose election promise is to destroy government as we know it. Washington is in gridlock. The media have rewritten lived history: the deficit was caused by Obama, who was not born in the United States and who wants us all to become “European,” whatever that means.

Gore has been largely silent about these events that have unfolded since his book was published, but he cannot be surprised by the trend of today’s events. He has not been outspoken like Bill Clinton, nor has he overtly supported Obama. He has put forward the facts of Global Warming, won his Nobel Prize, and he will watch to see all his prophecies come true in forest fires, tornadoes, floods, droughts, melting ice caps, the extinction of polar bears, the widening of the hole in the ozone layer, endless winters, rising sea levels—Gore watches it all. Some Americans look away from the dust storms and cry “Hoax.” Other Americans have lost hope, and no wonder. The game, we learned, was rigged for the rich and not for the public interest. The land will be raped for the profits of the few and we the many will pay for the destruction. Meanwhile, we watch television and see good-hearted well-meaning Americans demonstrating in Revolutionary War costumes to preserve tax cuts for the Wall Street bankers. The media they watch has convinced them to dismantle all the social programs they enjoy and use. These good people have been gathered together by powerful corporate interests who can bend them to their will. Reason has no place in politics. Nor do facts. Nor does reality. Spin rules. Slogans speak. If Al Gore is right, the last refuge of the honest broker is the Internet…while it lasts.

If you have found this material useful, please give credit to

Dr. Jeanne S. M. Willette and Art History Unstuffed. Thank you.

[email protected]

The French, the Holocaust and Sarah’s Key (2011)


The deportation of French Jews to their deaths in Nazi concentration camps raises questions similar to those asked of the Germans—how could such supposedly “civilized” peoples enter into a cold-blooded program of mass extermination? Sarah’s Key puts the question squarely to the people of France who took decades to acknowledge their complicity and participation in the roundup of French citizens during the German occupation of France. In May of 2010, the British magazine The Economist summed up the rather sorry record,

The French have tended to confront their record under Nazi occupation with a mixture of denial, silence and myth. The second world war was not on the school curriculum until 1962. Textbooks scarcely mentioned the Holocaust. No French leader from de Gaulle to Mitterrand acknowledged the state’s part in deporting Jews to Nazi death camps. It was not until Jacques Chirac became president in 1995 that the French state accepted its official complicity, prompting much soul-searching over collaboration, memory and guilt.

As the film shows, some of the French participated with gusto while others were reluctant and even defiant heroes who tried to help the Jews. Despite individual acts of mercy or heroism, it is clear that without the passivity of the majority of the French, the deportations could not have happened. Denmark sheltered and protected its Jewish population, but the French did not. Trains full of French Jews bound for death left for concentration camps year after year, up to three days before the Allies marched into a liberated Paris. The maniacal determination to continue to slaughter up to the last minute, even when it was clear that the Germans had lost the war, was unprecedented—even soldiers surrender when they are defeated.

Sarah’s Key, based on Tatiana de Rosnay’s 2007 book, is the story of the infamous Raid or Rafle on French Jews, who were then deposited in the Vélodrome d’hiver (Winter Velodrome). This sports arena, once the site of bicycle races, was the holding pen for these tragic people, mostly women and children. After five days without food or water or sanitation, the Jews were sent to interim camps in France. There, mothers were separated from their children and sent on to their final destination, concentration camps in Poland. The children spent weeks in camps such as the one at Drancy before they too were shipped to the gas chambers.

The French demolished the bicycle stadium after the war, and this site of such suffering, along with other sites of infamy, has been thoroughly obliterated. Under contract from the Gestapo, French moving companies would follow a Nazi sweep through a Jewish neighborhood, gather up the contents of vacated Jewish flats, and take clothing, furniture and personal items to sorting sites all over Paris. These buildings for the “appropriations” have all disappeared, and the sites now have new identities—an advertising agency, a haute couture fashion house, a construction site.

A few memorials for the victims exist, but it was not until 1995 that the French finally came to grips with their role in the extermination, when President Chirac gave a speech that pleased no one but began the process of healing a long-festering wound. “These dark hours will stain our history forever, and are an insult to our history and tradition. Yes, the criminal insanity of the occupier was seconded by the French, by the French state,” Chirac said.

In 2010, despite the recent release of a French film about the Vél’ d’hiv’, The Round Up (La Rafle), which focused on the fate of the Jewish children left behind, President Sarkozy refused to add anything to these original comments. Indeed, The Sorrow and the Pity (1969), directed by Marcel Ophüls, was an early and isolated effort; it was followed nearly two decades later by Claude Lanzmann’s Shoah (1985) and Louis Malle’s Au revoir les enfants (1987). These remain among the earliest and most powerful tellings of the Holocaust by French artists. Slowly, books have emerged on this traumatic period of history that the French want to forget. A quick glance at the publications makes it clear that there was silence until a new generation began to re-write French history in the 1990s, a full decade after the Germans began to take serious steps toward atonement. Sarah’s Key is the story of how history has an uncomfortable way of not dying.

Starring Kristin Scott Thomas as an investigative journalist, Sarah’s Key is a fiction that is also an allegory of guilt and shame. “Julia Jarmond” works for a not-so-well-known news magazine and snags the assignment of doing a substantive story on the roundup at the “Vél’ d’hiv’,” as the French refer to this racial crime. “Julia” is an American married to a successful French businessman, “Bertrand Tézac” (Frédéric Pierrot), who takes over an apartment in the Marais that has belonged to his family for a long time. When the couple and their young teenage daughter decide to remodel and move in, “Julia” is in the midst of working up her story on the Vél’ d’hiv’, and the film proceeds to tell two stories, one of the contemporary investigation into the Deportation and the other of the original Jewish inhabitants of the flat. Shortly after the Vél’ d’hiv’, the Tézac family acquired the flat—and generations of guilt by complicity. The movie is a study of how evil lives long and thrives, spreading out to ensnare innocent people who become stained, if only through association.

The first family who lived in the apartment, the Jewish family, were the ideal family: two parents, two children—a boy and a girl—and a cat. As soon as the German occupation of France began in 1940, Jews were marked and forced to wear the dreaded yellow star. And then the French launched the poetically named “Operation Spring Wind,” the roundup of 12,800 Jews on July 16, 1942. When the French police arrive at the door of the Starzynski family to take the mother and her children away, the quick-witted “Sarah” (Mélusine Mayance) locks her little brother, “Michel” (Paul Mercier), in a large closet and carries the key with her to the Vél’ d’hiv’. It is here that the father reunites with his wife and child, but he blames the little girl for leaving her brother behind. The child who sensed the danger could have no way of comprehending the true fate awaiting her—she assumed she would return to her brother. There is no way to give the key to anyone; there are no kind souls to trust, and the Starzynski family is shipped to Beaune-la-Rolande, where the parents are taken away and “Sarah,” ill and feverish, is left behind on her own.

When Sarah recovers, she manages to escape with a friend through the kindness of a French guard. Haunted by the driving desire to unlock Michel from the closet, she and the other little girl run for their lives and escape certain death at Auschwitz. They find refuge with a kindly couple (Niels Arestrup and Dominique Frot) in a small town, but the other little girl dies of diphtheria. The couple hides Sarah from the Nazis and disguises her as a young boy. This masquerade allows the trio to travel to Paris, and here is where the Tézac family encounters the Starzynski family, or what’s left of it. The Tézac father and son are the only ones present in the flat when Sarah bursts into the apartment on a hot August day and unlocks the door to free her brother. Of course Michel is dead. The Tézac males now have a secret which they keep to themselves: a dead Jewish child who obeyed his sister and waited for her to come home and let him out.

It is into this Tézac family that “Julia” has married. Of course, it is a bit of a coincidence that an investigative journalist would be married to the grandson of the man who took over an empty apartment “abandoned” by a Jewish family; but the film is really an allegory of loss and memory and the determination not to look back. Sarah’s Key is not only about the personal memory of the traumatic discovery of the body of a child, whose presence was known only through its death scent in the Paris summer; it is also about the will of an entire nation to forget and to put a twin humiliation behind it: the humiliation of occupation and the humiliation of liberation. True, the French hated the Germans, but they also cohabited and collaborated with them for four years. All over France, there are clear signs of denial of the complicity and participation of the ordinary French people in the persecution of the Jews. (For a more complete discussion of French ambiguity, read Holocaust Monuments and National Memory Cultures in Germany and France by Peter Carrier, published in 2005.)

The Deportation Memorial, of which I have written elsewhere, appears in this film as a background for “Julia’s” growing knowledge about the legacy of the deportations. Designed by Georges-Henri Pingusson in 1962, the memorial consists of white walls carved with the names of 200,000 French deported by the Nazis. The effect is to include the names of the 76,000 Jews without admitting that the French themselves were in charge of the operation, and thus to obscure the French participation in the Holocaust. The memorial alphabetizes the victims, unlike the Vietnam Veterans Memorial, designed twenty years later by Maya Lin, who organized the names of the dead chronologically. If this chronological arrangement had been followed by Pingusson, the day of July 16 would have been an entire section of Jewish names, overwhelming all the other non-Jewish names and indicating a systematic roundup of Jews. As in Germany, it was not just what Daniel Goldhagen called “ordinary Germans,” but the ordinary French and the corporations and businesses who were also culpable. (One of the best books written about the process of “coming to terms with the past” is Coming to Terms with the Nazi Past by Philipp Gassert and Alan E. Steinweis, published in 2006.)

SNCF, the French national railway, so adept at building and operating a superb rail system, was also adept at keeping its silence over its role in transporting 76,000 Jews to concentration camps. Then SNCF bid for a now-defunct (killed by a Republican governor) high-speed rail system between Tampa and Orlando and ran into the wrath of Holocaust Survivors in Florida. After seventy years of silence, SNCF finally apologized in 2010…sort of…to the victims, fewer than 3,000 of whom returned home.

The SNCF had long maintained that it was “owned” and controlled by the Germans and that the company and the employees were “under orders.” In addition, the railway did not, it claimed, profit from the deportation “business.” Historians have refuted each of these claims, but the SNCF outlines its familiar self-defense on the English-language website put up by the company in the fall of 2010. During the Deportations, each train carried over 2,000 Jewish souls, and the casualty rate was usually around 500 people on the way to Auschwitz. The employees of the SNCF dutifully cleaned out the cars and returned to France for their next “cargo.”

The American President, Franklin Delano Roosevelt, was well aware of the active participation of the SNCF, and he stated in 1944, “All who knowingly take part in deportation of Jews to their death in Poland or Norwegians and French to their death in Germany are equally guilty with the executioner. All who share the guilt shall share the punishment.” In their 1981 book, Vichy France and the Jews (one of the earlier books on the topic), Michael Robert Marrus and Robert O. Paxton write of the constant demands the Nazis made upon the railroad that complex deportation schedules be kept or else, Eichmann warned, the French would be “denied the privilege of participating in the Final Solution.” SNCF quickly fell in line.

Until the corporation met the Survivors of Florida, SNCF managed to escape responsibility; but with an eye to contracts for high-speed rail systems in Florida and California, both states with large Jewish populations, SNCF apologized. “In the name of the S.N.C.F., I bow down before the victims, the survivors, the children of those deported, and before the suffering that still lives,” said Guillaume Pepy, the chairman of the corporation. Denying any connection to its desire to secure lucrative contracts in America, the SNCF is donating a train station at Bobigny as a memorial to the victims of the deportations of Jews from that site between 1943 and 1944.

Sarah’s Key tells a small but painful story, not of heroic resistance, but of coming to terms with indifference and blindness on the part of one French family who belatedly tried to do the right thing. Sarah and her family symbolize all the innocent lives snuffed out by one of the purest examples of evil that has ever existed. Sarah makes that evil become achingly real. The Tézac family never acknowledges Sarah’s rightful ownership of the flat or her right to be compensated for their uncontested occupancy: their guilt will never take them that far. But the film does not condemn all the French. The family who rescued Sarah raised her, sheltered her and loved her, but they knew that she would never be whole, that she would never get over her guilt at the death of her brother. These “righteous Gentiles” were wise enough to let Sarah go, and the young woman vanishes from France. The journalist tracks down the fate of Sarah and her brother, and in the course of her journey into the past, she parts with her husband. “Julia” is trying to put things right, but although she can bring some comfort to the Tézac family, who learn that the head of the family had faithfully sent money to Sarah’s new family, that is not enough recompense for her husband. But there are some graves that should not be disturbed.

Eventually, “Julia” follows the trail of Sarah to New York, where she married an American named “Richard Rainsferd.” But Sarah, having committed suicide, is long dead, leaving behind her husband and a son, played by Aidan Quinn. Even in New York, the truth is obscured. It was not uncommon for a Holocaust Survivor to die of Survivor’s Guilt, and the Rainsferd family closes the door on Sarah’s sad past and moves toward the future. When “Julia” finds “William Rainsferd,” he is unaware that his mother was Jewish, because she insisted on protecting him by baptizing him a Christian. It is with “William” that the circle closes as he rediscovers his mother’s past and is finally able to understand and to grieve over her death, the death of his uncle in a locked closet, and the death of his grandparents in Auschwitz.

In the end, with the investigation over and her long article published, “Julia” leaves Paris and returns to her homeland, New York, where she raises her unexpected child, Sarah, all alone. The film ends with “Julia” and “William” and little “Sarah,” in a New York restaurant. The final moments of the film show “Julia” and “William” going over Sarah’s memorabilia and finding a peace with a past that is and is not theirs. The message is not uplifting but a heartfelt, “never again.” This is a wonderful movie, far and away one of the best films of 2011. See it.

For my readers who would like to learn more about this historical period, the most recent book on the subject is And the Show Went On: Cultural Life in Nazi-Occupied Paris by Alan Riding, published in 2011.

Dr. Jeanne S. M. Willette

The Arts Blogger




Werner Herzog’s Cave of Forgotten Dreams (2010)



Thirty thousand years ago. This is when art began. Chauvet Cave. This is where art began. Southern France, near the Pont d’Arc formation. This is where the first art was made. This is the oldest and the best art. Art never got any better than this.

Chauvet Cave Wall

And the German film director, Werner Herzog, was given special permission to visit this spectacular cave with a small film crew to photograph the marks on the walls made by prehistoric artists. Unfortunately, this film will be shown in only a few art houses, almost none of which are equipped to show the movie as it was shot, in 3D. The loss of dimensionality is a genuine one in this case, for the artists made use of the convex swellings and the concave niches, the natural contours of the walls.

Older but less well known than the caves of Altamira and Lascaux, Chauvet is significant because of the great age of its paintings. Imagine drawings so old that, when the lines were drawn, Homo sapiens coexisted with Neanderthals. But Neanderthals do not draw. Neanderthals do not make art. With elegant strokes depicting the animals familiar to the Ice Age inhabitants, the two species would be divided between the human and the not-quite human. Such is the power of art.

“They were here!” Éliette Brunel shouted when she and Jean-Marie Chauvet and Christian Hillaire discovered Chauvet Cave in 1994. Although Brunel was the first to see the paintings on the walls, it was her colleague, Jean-Marie Chauvet, the leader of the expedition, who would give his name to this unusually large cave. The cave was immediately sealed to the public and only scientific teams were allowed inside. The cave has been mapped with lasers, which are able to draw a three-dimensional picture of a long and irregularly shaped opening in the limestone cliffs above the Ardèche River.

An iron door has closed the opening originally made by the explorers who sensed the faint whiff of cave air wafting from a slight crack in the cliff face. A narrow metal pathway wends its way along the cave floor, carefully skirting animal bones and the fragile footprints of a child and a wolf and bears.

“Papa, look, oxen.” Like the caves in the Pyrenees, Chauvet Cave had been kept sealed and safe by a rockslide, which covered the original opening where the earliest artists entered. Interestingly enough, Chauvet was the first cave with prehistoric art to be discovered by an adult. The cave of Altamira was discovered in 1879 by Marcelino Sanz de Sautuola, a nobleman and amateur archaeologist (the only kind of archaeologist at that time), who was excavating near the mouth of the cave when his eight-year-old daughter, Maria, went deeper into the cavern to take a look. She is reported to have called out to her father when she saw what she thought were drawings of oxen, which we now know were an extinct species called “aurochs.” Like her father, many people assumed that prehistoric people were primitive brutes, incapable of making art, and the cave paintings were presumed to be forgeries or modern-day graffiti. It would take seventy years before the paintings would be proved authentic.

Altamira Ceiling

A little girl, then, was the first human to set eyes upon these paintings made 17,500 years ago, but the next cave paintings were discovered by a boy and his dog. In 1940, Marcel Ravidat and his dog, Robot, found a narrow opening, and he returned with his friends, Jacques Marsal, Georges Agnel, and Simon Coencas, to explore. This cave had been sealed up to the extent that it was accessible only by small curious boys, and, this time, there could be no doubt that the paintings they discovered were authentic. These paintings were about the same age as those of Altamira, perhaps a bit younger. But the way in which the artists painted these caves was quite different from the manner of the Chauvet Cave, whose paintings are, incredibly, twice as old. The artists of Altamira and Lascaux used more color—ochers, burnt sienna, black and red. In contrast to the more austere monochromes of the Chauvet artists, they filled in the shapes of the animals with these natural colors, which enhanced the naturalistic effects.

Bulls of Lascaux

Lascaux and Altamira are both closed to the public, and both sites have created virtual recreations of the caves. Lascaux offers a personal tour: one can visit Lascaux via a video, which takes you inside the cave, providing an idea of the ruggedness of the surfaces of the walls. Altamira has a doppelgänger, an exact replica that can be visited by the public, whose moist, humid breath would otherwise cause the mildew and mold that threaten the irreplaceable paintings of both original caves. Chauvet has the Herzog film, a remarkable accomplishment for the director and his colleagues and those who are the keepers of the cave.

Werner Herzog and His Crew

These Ur-artists entered Chauvet via a frontal opening in the cliff wall, but at some point thousands of years ago, the overhang above the cave collapsed and buried the mouth. As at Lascaux, the contemporary entrance is a narrow side passage with a vertical drop into the cavern. Once inside, deep in the darkness, one encounters not so much the art as the sheer effort expended by humans to make art. Clearly, art was so important to the tribes of southern France that individuals were willing to go deep underground with torches to draw on the walls. In Lascaux and Altamira, the artists used small primitive lamps, but, in Chauvet, fires on the floor or torches held aloft had to light the impenetrable blackness. As a torch burned down, the person who held it scraped the tip against the wall to lop off the charred end so that the torch could be reignited. Carbon dating suggests that these black streaks are some twenty thousand years old.

Cave Bear

Although the carbon dating has been controversial, it seems that some thirty-two thousand years ago, the artists scraped the wall surfaces to provide themselves with a blanched wall, a clean ground to work upon. They drew the animals they knew—long-extinct cave bears (whose bones and skulls are everywhere), maneless lions, leopards, rhinoceroses, reindeer, horses, deer, ibexes, even owls and, yes, the aurochs. At the beginning of the cave, we see the first sign of what would prove to be the very sign of humanity—the urge to put marks on the wall—a series of red palm prints arranged in a circle by a single artist. This artist appeared to be concerned with making a personal history in the cave, for, further down the passageway, his distinctive hand, with its crooked little finger, reappears. His is one of the few bursts of color in an otherwise cool palette that is enhanced by the diamond-like sparkles of the mineral deposits on the cave floor.

Hand of the Artist

The artists made cunning use of the undulating shapes along the surface of the cave wall to mimic the bulging bodies of the animals, their thighs, their bellies. Oddly—and I have seen this effect in no other cave paintings—these artists gave their animals multiple legs, as though they were running. The cinematic illusion is similar to Giacomo Balla’s famous Dynamism of a Dog on a Leash (1912).


As in most of the caves of prehistoric times, there are few humans, and in Chauvet, there is only one, a partial torso of a woman drawn on a hanging pendulous rock. The filmmakers were not allowed to approach on foot to examine the drawing, much less to view the far side of the rock and the rest of the drawing. Herzog’s crew put a camera on a pole and was able to get a shot of the entire torso. The emphasis on the vulva of the female is reminiscent of the numerous “Venuses” found as small figurines all over northern Europe. Like the animals, this piece of a woman is a line drawing, free of color.

Chauvet Venus

The narrator contended that the drawing was a combination of a woman and a bull, or a union of a woman and a bull. I say “contended” only because the drawing is very difficult to read, but I take him at his word. What interests me is that this theme of the woman and the bull floats through history to emerge mysteriously in the Minoan art of Crete and in the myths of ancient Greece: the story of the bestial coupling of the wife of King Minos with a bull, resulting in a monstrous Minotaur. Although he was long dead when this cave drawing was discovered, the art historian Aby Warburg, who wrote of how the deep psychology of humanity moved like an undertext or subtext throughout the history of art, would have been enthralled.

The contributors to Cave of Forgotten Dreams had what I consider to be a problematic tendency to speculate about why the drawings were done. Most assume that “religion” had something to do with the intention of the artists. Although I can only respect the expertise of these scholars, I feel that such speculation can be anachronistic and that the truth of the art can only be far more mysterious than anything we can imagine. It is impossible to put ourselves into the minds of our ancient prehistoric forebears. All we can ever know is what we see.

These drawings are strange to us in deep and powerful ways. The approach of the artists remains the same over thousands of years. The idea of the “new” or the “novel,” of the avant-garde or of rebellion against what were obviously deeply rooted traditions, simply does not exist. Fifteen thousand years separate Chauvet from Lascaux, and yet both caves are instantly identifiable as “prehistoric,” as “cave art.” The consistency of the aesthetics of the drawings and paintings suggests that art-making may have been connected to ritual, making the “style” impervious to change. But we do not know if the art is ritualistic.


There is some indication that the artists put certain animals together in what we would call “narration.” Two lions, a male and a female, seem to hunt side by side. Two rhinos clash in combat, tangling their long curved horns, probably to win a mate. A group of horses run together as a herd, one with its mouth open as though it is breathing, panting or neighing. But we cannot always read the overlapping as an attempt to link animals with each other, for one overlapping was clearly a superimposition. Remarkably, this over-drawing was done five thousand years after the original rendering.

Claw Marks Left by a Bear

Superimpositions are common in other caves. So are the handprints; so are the “dots.” But we have no idea what these marks mean. Are the superimpositions a form of tagging, a sign of ownership, a record of a changing of generations? What are the drawings? Reportage of a hunt? Prayers for a kill? Worship of the beasts? We will never know the answers, but we do know that these drawings are stunning in their blunt simplicity, amazing in their elegance of line. A lion was drawn with a single stroke measuring six feet long. Imagine the confidence of the artist to make such an elegant, assured gesture. What are we seeing? “Natural” talent? Frequent practice? An apprenticeship with a “Master/Mistress” artist? The “style,” if one could use such a word, is comparable to a supremely arrogant Picasso or the deft hand of Matisse. The Chauvet drawings are so basic, so primal, so primary and so complete that we have been struggling ever since to return to our atavistic selves, to redeem ourselves as artists.


Werner Herzog and his remarkable movie have allowed us a privileged look at some of the greatest art in the world. He takes us to a place we can never go. We are enchanted witnesses to his journey into the bowels of the earth where the art is secreted. At some point in time, Chauvet will probably be closed in by the innumerable stalagmites and stalactites that are forming, even as I write, from the relentless drip, drip, drip of water leaking into the cave. The formations seem to take the place of the living, breathing humans who once visited here, compelled by the inexplicable need to make art. Rearing from the floor like sentinels, hanging from the ceiling like hovering guardians, these pale shapes are the ghosts of artists past, transfixed, like Lot’s wife, into pillars, watching over the art.

Chauvet Valley, Pont d’arc




James Higginson’s Willful Blindness (2012)


A film by James Higginson

Unlike all other art forms invented out of modern technology, film has remained stubbornly entrenched in its pre-industrial heritage. Even though the technology of “moving images” allowed for a wide range of artistic experimentation, early “movies” re-presented the theatrical experience and borrowed from painting the gestures, postures and poses that make up the vocabulary of visual communication. Trained on the familiar, movie audiences expect to have their disbelief suspended, and that suspension rests upon the ability of directors and actors to create a new reality. Given that making movies is a business, those demands have shaped the history of film, preventing the kind of growth and development that has changed other art forms. The “movies” have been mired in the late nineteenth century; it is now the beginning of the twenty-first century, and still mainstream film stays the same. If film is to “progress” or change, any experimentation must take place outside of the commercial world, and any advance of film as an art form rests in the hands of artists.

Crafted by Berlin-based photographer and filmmaker James Higginson, Willful Blindness is part of the sub-culture of “art films,” where the “consumer” does not exist and where the art audience wants change and innovation. Higginson comes out of a history of experimental art films in the tradition of Bruce Conner’s A Movie and Andy Warhol’s Empire. Conner started with the idea that a strip of film is a row of frames, square pictorial units, each of which contains a single image. But Conner challenged the assumption that these strips had to flow seamlessly from one segment to another; he took the concept of montage, or editing, and spliced together found footage to subvert and disrupt the need of movie audiences to have a “story.” Warhol, conversely, eschewed editing altogether in Empire by reducing “filming” to its most basic essence—pointing the camera at an object, in this case the Empire State Building, and turning the camera on. For eight hours the camera hummed, the sun traversed the skies, weather arrived and departed, and the building remained unmoved. Like Conner, Warhol was also playing with attention span and the process of looking, seeing and watching, in an attempt to reinvent or de-invent “film.” This de-invention, or deconstruction of film, means to strip the moving image of its overgrowth of “movie” conventions.

Like these artistic pioneers, Higginson starts with the premise that the medium of recording movement has its own inherent (but changing) properties and that the “movies” have ignored the possibilities of what can be done with camera and film. One of the tropes of “going to the movies” is the dream. When entering the theater, we leave the real world of sunlight behind and enter a cave where flickering images are projected onto a screen. Frozen in place, we sit and gaze raptly, as if watching our own private dream. Afterwards, we wake up, walk out of the dark, and reemerge into the ordinary, which announces itself as a place of light. An award-winning film, Willful Blindness moves back and forth between dream and reality, between the present and the past, by borrowing the semiotics of light and dark—that which is well-lit is the outside of the Real, and that which is dark is the inside of Desire.

A canny and aware filmmaker, James Higginson deploys his film tools with the mastery of a mature artist. While Connor and Warhol used black and white film in their classic experiments, Higginson works with color, but his color pays homage to the black and white history of movie making, with bleached and grayed-out tones intercut with slashes of jarring red. These are the main contrivances that Higginson wields—the unparalleled ability of the camera to stare, the post-filming intervention of montage (cutting and pasting), and the historical role of color. In using color as mood and atmosphere, Higginson evokes other film artists who somehow ventured into the mainstream using color artistically, such as Todd Haynes in his homage film, Far From Heaven (2002).

To concentrate on the plot of Willful Blindness is to miss the point of this film. The story and the action are really a conceptual play with the properties of film. Higginson plays with two elements of filmmaking, both often overlooked: the fact that one looks at a movie and, conversely, the fact that the film conceals as much as it reveals. Willful Blindness begins with an act of enforced watching, deliberately suggestive of the determined ennui of Empire, except that something is actually happening, unfolding in successive waves. The viewer is brought to earth, forced to the pavement as the camera drags along the ground. Someone—male or female—is crawling, putting one hand in front of another, dragging an unseen body along behind. All we see are the hands, reaching outward for purchase.

Here, Higginson takes up one of the single most overlooked characteristics of the movies—the ellipsis, that which is left out and not seen. Usually the ellipsis is used to move the story forward: rather than showing the character walking from one place to another, the director will end the scene and begin a new one. The significance of this lack or empty space in the action is that the viewer mentally fills in the gap. When the viewer sees the grasping, reaching hands, s/he enters empathetically into the action, even inhabiting the invisible body of the actor, who is an obvious victim of some terrible event. Higginson takes the notion of “economy” in art to extremes, showing a difficult and complex set of actions—dragging oneself along a city sidewalk—with only the barest of suggestions.

Conveying extreme effort, Higginson works against the forward movement, however labored and difficult, not by looping the film but by seeming to overlap the progress: one step forward, two steps back. The great effort of the crawler is repeatedly impeded but not prevented, layering frustration onto the viewer. Higginson makes the watcher watch. There is no way to intervene or help. He makes the viewer suffer along with the wounded protagonist; the film deliberately drags, mimicking the painful scraping of the hands on the rough pavement. The irritation at this prolonged scene counters the way in which mainstream movies quickly “establish” the first act for the impatient audience.

Playing with the conventions of slow motion and the undeniable advance of a strip of film through the sprocket, Higginson considers the very concept of “pace” in a movie. In contrast to the slow sequence are the recurring brisk and rapid actions of a woman walking in bright red very high heels—pace personified. Once again we are on the ground; once again we cannot see the body, only the feet and those shoes, moving fast with purpose. And these red shoes—baleful and malevolent, intimating violence—are the mirror images of the victim’s slow, hurt hands. These are perpetrator shoes, quickening the processional pace of the film, reassuring the viewer that a story has a beginning, a middle, and an end, that it moves forward and comes to a terminus. The engine of the film is the determined red heels, but where are we going?

Early on, Higginson warns the viewer: he will give color and he will take it away. Color, for this filmmaker, conveys both life and death. Full of vibrancy, the red heels are full of life, but they are as red as blood, and they predict and forebode. The hands are drained of color and the environment is emptied of life as if by a vampire. Willful Blindness is a dark and black film without daylight, without bright color. Often the viewer is effectively blind—it is difficult to see—thwarting the very purpose of the movies: watching and looking. The movie lights turn on only when the red heels appear. But Higginson not only keeps the viewer in the dark, so to speak, but also refuses to bow to the main demand of movie making—explain to the viewer what is going on. He keeps us willfully blind and pertinaciously mires us in the dark, as if to trap us in a nightmare.

The red heels are the parentheses of Willful Blindness, the film’s alpha and omega—its beginning and its end. They belong to a traveling woman. At its heart, Willful Blindness is a canonical road movie in which the main character travels. This journey into darkness is punctuated by a series of incidents that occur along the way, perhaps connected or perhaps not. In between, Higginson investigates the most compelling aspect of the camera vision: voyeurism. Movie-making essentially splits between what society allows us to see, what is deemed desirable, and what society thinks we should be sheltered from, that which is forbidden. People come to the movies to see the forbidden—sex and violence, which always hover on the edge of pornography and unbridled bloodthirstiness. We enter into an imaginative place to give way to our most unsocial instincts, which are also our most basic and which, therefore, must be the most rigorously suppressed.

Higginson serves up hints of pornography and unsavory sex, but his real theme, resonating throughout his photographic work, is violence. Violence, in Willful Blindness, is private, closed and secretive, taking place in some sort of twisted domestic setting. Willful Blindness is an excruciating journey into extremity, filling the viewer with dread. Along the journey, Higginson picks up and discards the old dead languages of traditional film—the German Expressionist style, the film noir of the crime story, pornography and gratuitous violence—as if searching for the right way to detonate an act of retribution. His reanimation of these old allegories is where the practical practice of editing—cutting unwanted or unnecessary scenes—becomes an act of slashing and hacking, and the film reaches its denouement.

The editing style, which deservedly won a prize—the cropping of fragments, the slicing into slivers of film—mimics Hitchcock’s famous shower scene in Psycho with its eighty-odd cuts. Higginson has moved beyond the literal metaphors of the master and dwells in the conceptual: he cuts the film, rapidly and repeatedly, implying and indicating terrible acts of violence. Suddenly color bleeds into the film, drenching it. For the viewer, dragged hand over hand into a nightmare composed of a web of images both beautiful and dreadful, this explosion of horror is a cathartic relief. We leave the cave of sublimated Desire, our need for revenge satiated.

Higginson was not content with deconstructing the givens of filmmaking; he rethought the role of sound as well. Sound, in a visual medium, is by definition an invasion of an alien other. In fact, when “talkies” took the place of silent movies, the purists objected. The technology of sound—talking, ambient noise and music—totally changed the way in which movies functioned. The broad gestures inherited from painting disappeared and pantomime was replaced by dialogue. Interestingly, early silent movies were much more oriented towards action and activity compared to the films of the thirties and forties, which relied much more on actors talking to each other to move the plot along. Dialogue, along with sound effects, is “natural,” lifelike, an enhancement of the “reality effect.” But music is inherently unnatural.

It is with the music and the editing of sound that the viewer, who has been intensely interacting with the fabula, becomes most aware of Higginson as the orchestrator of the syuzhet. Suddenly, one is jolted into realizing that, contrary to mainstream film, there is no dialogue, no voice-over, not even subtitles. But not no sound. Once again the artist has pushed filmmaking back in time, to an era when the images had to stand on their own and the music stood in for human speech. All silent films were, in fact, not “silent” but were designed to have music accompany them. If the theater venue could afford it, an entire orchestra would do the accompaniment; if the theater were in a small town, then a simple piano player pounding out the film score would suffice.

Although the sound design is by Higginson himself, working under the alias “Roberto Pelligrini,” with his assistant Maik Wolf, the music for Willful Blindness is a totally original score by Roland Hackl. Hackl is part of the European tradition of contemporary film music, for like his colleagues and predecessors, Daft Punk and Tangerine Dream, he comes out of the techno music scene. Once on the fringes of the music scene, techno is now mainstream, but it is far more flexible in format and sound than established forms of popular music, such as rock ‘n’ roll and blues. Techno has no history; it comes from machines that are also without history; its electronically generated artificial sounds are mimicries of a new kind of “music.” Hackl has skillfully explored the in-between-ness of techno/music, its split personality, and its greatly expanded ability to evoke emotions within the audience and to intervene in the diegesis. In the hands of Hackl, the absence of the naturalizing effects of dialogue becomes an asset to be exploited, and music retakes its traditional, original role in the film as a stand-alone experience, quick-marching the viewer to the determined denouement.

At the end is a reentry into the light of reality, and the woman in the red heels strides purposefully towards her appointed task—something must be buried. Bizarrely, the world ignores all this activity, suggesting that, contrary to what we believed, we are still trapped in a bad dream. James Higginson takes the concept of film to its final limits—it is not the camera that is the projector, it is us, our minds, reaching out of the depths of repressed impulses, streaming our darkest fears onto a helpless blank white screen. The screen is the world itself, the passive recipient of what the ancient Greeks feared most—the beast within all of us. We sleep, we eat, we mate and we kill; there is nothing else.


Dr. Jeanne S. M. Willette

The Arts Blogger


“The Persistence of the Color Line” by Randall Kennedy



Randall Kennedy’s new book, The Persistence of the Color Line: Racial Politics and the Obama Presidency, was published (2011) a bit too soon and needs a sequel. The incompleteness of this book is not the fault of Kennedy, a professor at the Harvard Law School, but the result of the continuing evidence of ongoing and unrelenting racism displayed in disguise by a variety of political groups. From the Birthers to the Congress to the Tea Party, the election of a black man as President has brought out the worst of America. Kennedy’s book barely gets past the first year of a term in office that was complicated by the simple fact that Barack Obama is only half white. And half is not enough. Kennedy’s main point is that Obama is trapped in his (half) blackness and cannot act with the privileged latitude that comes automatically to any and all white Presidents. This trap of skin color has shaped and will shape this unique Presidency.

Kennedy is certainly correct that it is institutional racism that restricts Obama in what he can do, what he can say, who he can champion, what he can support, which laws he can put forward, which policies he can enact. Despite his high office, in his own country (more than in any other nation) Obama is defined by his race. Kennedy opens his book with the assertion:

The terms under which Barack Obama won the presidency, the conditions under which he governs, and the circumstances under which he seeks reelection all display the haunting persistence of the color line. Many prophesied or prayed that his election heralded a postracial America. But everything about Obama is widely, insistently, almost unavoidably interpreted through the prism of race…

Sadly, despite the hopes to the contrary that America was now “postracial,” it is now clear that America is still a racist society. If we define racism in its largest sense: that racism is a “consciousness” of race, then Americans are intensely conscious of Obama as a man of color. For some, this “color”—black—is the color of redemption, for others, the color is a threat and a retribution. Whether positively or negatively, the entire nation is in thrall to the notion that our President is a black man.

One could wonder whether, had the election of a woman or a man of color as President happened a few decades later, say in the 2030s, more Americans would have been accepting and fewer people would have cared about race; but instead Barack Obama was elected in 2008. Early twenty-first-century people had parents and grandparents who had (fond) memories of segregation, and for many Americans, particularly those in Middle America, the sight of people of color is still rare. The reaction of these white Americans was defensive on the one hand—a regression into segregationist attitudes—and offensive on the other—an instinctive rejection of someone so unfamiliar, so dark, so cool.

One could also wonder how much the fate of Obama would have changed if his own white family had survived: if his white grandparents had lived to see his election, if his white mother could have lived in the White House along with Michelle Obama’s black mother. The whiteness of Obama could have been on full display on the campaign trail, at the Inauguration, and during policy debates. But with neither that white half nor the black half present, a “blackness” born of racism was projected onto Obama. The result was, to borrow Randall Kennedy’s term, to “blacken” Obama and to make him seem alien. However, far from being an “alien,” Obama is the mixed-race future of a more tolerant America to which we might aspire.

It is interesting to note that the President grew up in a white and multicultural society. Obama is the product of the “Melting Pot” so hated and so dreaded by the Nativists and the Know Nothings of the previous century. Obama is the future they fought to avoid. In a very typical fashion, he was raised by a single mother and her parents, all of whom were white and all of whom loved him. He grew up in multicultural Hawaii and went to white-identified schools and colleges, Occidental, Columbia and Harvard, and dated white women and had white friends. Obama chose to be “black” in the sense that he had to seek and learn about “blackness.”

But these subtleties of choice are lost on those who object to Obama solely because he is black—they don’t care about his decisions, or about the distinctions between black skin and black culture; they care only about the skin and refuse to accept him in the office of the Presidency. Kennedy reports on the ugly fact that there is a

substantial number of Americans who simply refuse to acknowledge Obama’s political legitimacy (for example, the allegation believed by tens of millions that he was born abroad), the open contempt displayed by antagonists not only on the airwaves of right-wing talk radio but also in the inner sanctum of Congress (for example, Joe Wilson’s infamous shout of “You lie!”), and the stark polarization that characterizes the racial demographics of support for and opposition to Obama. That the opposition is overwhelmingly white is a fact that no one can reasonably dispute.

Then Kennedy asserts, “What is disputed, however, is that racial sentiment is an important ingredient in the opposition.” This statement is interesting; what the author is working through is the fact that Obama won an overwhelming victory, and that while he did not win the majority of the white vote, he won enough to carry the day. And as Kennedy points out, there are “plenty of reasons” to dismiss Obama without even mentioning race—he is too liberal, he is too conservative, and so on. No president is going to please everyone all the time; but, that said, Obama will always be judged according to different standards, and this judgment will always be tempered by race, and those attitudes are, in and of themselves, a form of racism. The very fact that Americans were (momentarily) proud of themselves is tinged by a history of slavery and segregation. As Kennedy says,

An inflated sense of accomplishment is part of the racial predicament in which Americans find themselves. Electing a black American as president is treated as remarkable. In a sense it is—but only against the backdrop of a long-standing betrayal of democratic principles…

…That Obama has had to work so hard to make himself and his family acceptable to white America and that he has had to continue to work so persistently to overcome the perceived burden of his blackness is a sobering lesson.

I suppose we Americans hoped that we would rise to our own optimistic standards, and, as Kennedy lays out, the campaign was remarkably free of racism; but there was a sizable segment of the nation that would never accept Barack Obama as President. One could argue over which incident by which public official first marked Obama as “black” and unacceptable, but barely into his first term it became clear that this was a marked man. A conservative discourse was woven, full of symbolic racist “dog whistles” aimed at a certain group and therefore skirting overt racism. Kennedy writes that “…the prejudice has been sublimated and expressed via a code that provides a cover of plausible deniability: ‘He’s not one of ours’; ‘He’s not like us’; ‘He’s alien’; ‘He’s a Muslim’; ‘He’s a socialist.’”

Ironically, because he is black, Kennedy argues, Obama cannot appear to favor peoples of color and therefore can do less for his “own people,” who truly need the special help, than a white President can freely provide. On the other hand, also ironically because he is black, Obama was in the cultural position to assist other Others, the LGBT community and the Latino community. Although Obama has, as Kennedy points out, elevated many black people to high places in his administration, he has arguably done more—in a more specific way—for the gay and lesbian community and Mexican Americans than for blacks. Thanks to Obama, gays can now serve openly in the military and Latino young people who were brought to America as children can now move freely in society without fear. The next steps, thanks to Obama, will be that gay people may be able to marry legally and that young immigrants can become citizens. This willingness to act in a moral fashion towards those who inhabit this country is real progress towards civil rights for all Americans.

Then there is the dark side of this Presidency. Because of the color of his skin, because of his race, and mostly because of the consciousness of his race, oppositional criticism of Obama falls into the zone of racism but these racist (de)evaluations are delivered in code. Once racist sentiments were uttered openly without restraint and were part of the broader culture, but as Kennedy writes, during the 1960s the language of racism in politics changed:

The Civil Rights Revolution stigmatized the open appeal to racial animus. By the late 1960s, politicians were no longer able to blatantly incite racial prejudice to their advantage at little or no political cost. To tap into racial resentments openly meant falling afoul of newly ascendant norms of racial etiquette and thus attracting punishing censure. So open appeals to racist animus gave way to implicit appeals. To avoid being branded as racist while nonetheless trafficking in racial prejudice, some politicians began to use code words to say covertly what they could no longer safely say overtly.

Today, three years into Obama’s presidency, we see these codes fully developed, unfurled and proudly flying out of the mouths of political opponents. Add up these wordy criticisms and they all say the same thing: Obama is incapable of being President because he is black: “he is in over his head,” “he is incompetent,” and so on. All blame for all ills can be laid at the door of a black man, a sin eater of white transgressions. Therefore, the white men who created huge budget deficits are not at fault, the white men who started but did not finish two wars are not at fault, the white men who let Osama bin Laden slip through their fingers are not at fault, and Obama’s bold deeds cannot be celebrated, because, as Mitt Romney claimed, killing Osama when the opportunity presented itself was a “no brainer.”

All of Obama’s accomplishments are discounted—he was an affirmative action admission to exclusive Ivy League schools, the stimulus did not work, he is wrong to attempt to bring peace to the Middle East, and on and on. Nothing he does is right and everything he does is wrong, not because any of these Codes are true but because the endless assertions of failure are necessary to allow whites to feel superior to this intelligent and intellectual and gifted and exceptional black man.

The idea that a black President might do a better job than a white one—even George Bush—is insupportable to racist white Americans. Kennedy goes through a number of racially tinted incidents that happened before or early in the Presidency of Obama: the very real embarrassment of the Reverend Wright, the clash between the Harvard scholar Henry Louis Gates, Jr. and a Cambridge police officer, the embarrassing incident involving Shirley Sherrod, the confirmation of Sotomayor, and so on. Kennedy does an excellent job of explaining the culture of Reverend Jeremiah Wright and gives an informative account of black patriotism, or why black people love America. But the incident that opened the dam of racism, in my opinion, was the famous “You Lie” outburst of Joe Wilson, Congressman from South Carolina.

The occasion was a solemn one, the health care address on a major policy proposal by Obama, marred by a loud Southern voice screaming “You Lie!”—clearly something that would never happen to a white president. As Maureen Dowd wrote in the fall of 2009,

I’ve been loath to admit that the shrieking lunacy of the summer — the frantic efforts to paint our first black president as the Other, a foreigner, socialist, fascist, Marxist, racist, Commie, Nazi; a cad who would snuff old people; a snake who would indoctrinate kids — had much to do with race…But Wilson’s shocking disrespect for the office of the president — no Democrat ever shouted “liar” at W. when he was hawking a fake case for war in Iraq — convinced me: Some people just can’t believe a black man is president and will never accept it…Barry Obama of the post-’60s Hawaiian ’hood did not live through the major racial struggles in American history. Maybe he had a problem relating to his white basketball coach or catching a cab in New York, but he never got beaten up for being black. Now he’s at the center of a period of racial turbulence sparked by his ascension. Even if he and the coterie of white male advisers around him don’t choose to openly acknowledge it, this president is the ultimate civil rights figure — a black man whose legitimacy is constantly challenged by a loco fringe. For two centuries, the South has feared a takeover by blacks or the feds. In Obama, they have both.

Dowd concluded by quoting Congressman Jim Clyburn, a senior member of the South Carolina delegation, who “had a warning for Obama advisers who want to forgive Wilson, ignore the ignorant outbursts and move on: ‘They’re going to have to develop ways in this White House to deal with things and not let them fester out there. Otherwise, they’ll see numbers moving in the wrong direction.’” I believe that Dowd and Clyburn were correct. The Wilson event, during a speech by Obama on health care, was a turning point. The Congressman both apologized and then raised campaign money on the strength of his racist outburst:

“This evening I let my emotions get the best of me when listening to the president’s remarks regarding the coverage of illegal immigrants in the health care bill. While I disagree with the president’s statement, my comments were inappropriate and regrettable. I extend sincere apologies to the president for this lack of civility.”

Wilson was censured by the House, along party lines (the Republicans taking no responsibility), but the damage was done. This outburst, which Wilson claimed to be “spontaneous,” received only a mild rebuke from his colleagues, and Obama accepted the “apology.” Wilson took advantage of the natural paralysis that happens when civilized people are confronted by outrageous barbarism. There is simply no acceptable reply to an act of such contempt. It is asking a great deal of any human being, startled by an unwarranted and untrue accusation, to react in an effective fashion. One either ignores the outburst—Obama’s approach—or stops the proceedings—a major policy address—and politely asks the offender to leave. The Congressman should have been expelled from the room and expelled from Congress.

But confrontation is not in the makeup of Obama. He is a child of consensus and negotiation, an offspring of the postracial society. It is possible that Obama thought that Wilson was having a nervous breakdown, a fit or a meltdown of some sort. Obama wants peace and, at that time, during that summer of 2009, he probably genuinely thought that the Republicans could be brought into the fold. He did not want to offend the other side; but, by accepting what was a facile and meaningless apology from Wilson, Obama suggested to those who were watching, the people that he did not yet see as his enemies, that he was weak.

After Wilson was let off the hook, it was as if a dam had burst, and the Birthers came out of the woodwork with their absurd claims that the Presidency of Obama was the result of an impossibly complex conspiracy to place a Manchurian candidate in the White House—for what purpose, it is never clear. Also out in the open were charges of Socialist, Food Stamp President, “European,” “Muslim,” and on and on, all of which were codes for un-American and also not white, because “real” Americans are white and Obama is black. Obama, in trying to govern from a “bipartisan” philosophy of “compromise,” looked foolish and naïve.

Obama, quite rightly, has taken seriously his charge as President to govern all Americans fairly, regardless of age, race, gender or party affiliation. This position of equity is far more Presidential and fair than that of most Presidents. As Kennedy writes, Obama refuses to govern from a position of race and insists upon taking his positions on the basis of morality. For Obama, Kennedy states, it…

…isn’t a matter of black and white. It’s a matter of right and wrong.” Sticking to his strategy of deracialization, Obama sought as much as he could to avoid dirtying himself with the racial messiness of the dispute without alienating his African-American base. He saw deep engagement in the controversy as a losing proposition, a racial quagmire that, for many white voters, would only blacken him…”

If Obama is “blackened,” then all people of color are “colored” in even more intense hues. If it is acceptable to emit racist codes when referring to Obama, then the attacks on Others, those who are not white and male, are suddenly acceptable. Since “You Lie!” we have heard supposedly reputable or apparently sane politicians call for an electrified fence on the border of Mexico and we have seen literally hundreds of laws passed to restrict the rights of female citizens.

We know now that on the night of the Inauguration, certain Republicans met in private (secret) and made a pact—to obstruct every single proposal Obama made, regardless of its merit, regardless of whether or not the policies were originally Republican, regardless of the impact upon the nation. This pact or agreement was nothing short of un-American, unpatriotic and unprecedented. The Republicans have held firm and have voted en masse against every proposal, every policy, every law put forward by Obama. These actions are tantamount to a conspiracy and devalue the office of the Presidency. Already there is ample evidence that the Republicans will have no respect for any President, even their own.

Randall Kennedy shows us the early straws in the wind, one racist event after another, incidents that would have passed unnoticed under a white president or events that would not have happened under a white president. Kennedy points out that in each and every case he presents, Obama is damned if he speaks out or wades in, and damned if he stays silent and stays away. Kennedy is right to stress the fact that Obama is trapped in his blackness, in his innate civility and in his heartfelt belief in the good will of all people. I believe that Obama had no idea of how deep and how wide and how old racism is in America. I don’t think he was prepared for the wall of refusal that he faced, and, for years, Obama has had no effective response to the visceral rejection of his presidency.

But Obama is a learner and he is a proud man. The question is what this nation is facing—a return to the blithe and blunt racism of the 1950s, or the last spewings of an ugly racist bile out of the body politic? Is this Presidency a Sacrifice Presidency, a period that forces a stained country to redeem its shameful past, or is this Presidency a Reversion Presidency, the occasion upon which we revert to the old ways: the rule of the white male? We have a presidential campaign for 2012 that is entirely based upon the charge that Obama must be removed from office because he is “incompetent,” or, in other words, “black.”

News commentators continue to tiptoe around this bigoted rhetoric, gingerly calling this dark prejudice “tribal,” and are forced to call attention to the “codes” used. And as the discourse continues to grow and become more extensive, the media are forced, more and more, to confront the constant racism that has been inspired by this Presidency. But the media—whether left or right—are merely reactive. This Presidency is not just any Presidency: it is an occasion, and it is up to Obama to take advantage of his historic election. The speech on Reverend Wright and modern racism was a start, but now, three years later, this brave address is revealed as sadly insufficient for today’s dark world. Obama must take the high moral ground, be another Martin Luther King, and demand an end to racism at long last. Kennedy ends his interesting book on a hopeful note,

Among colored folk, his ascendancy has raised expectations of what is possible for them to achieve in a “white” Western modern democracy. It has also affected the expectations of white folk, habituating them, like nothing before, to the prospect of people of color exercising power at the highest levels. There are many who still chafe at this turnabout—witness the racial component of the denial, resentment, and anger that has fueled reaction against the Obama administration. The racial backlash, however, is eclipsed by the lesson being daily and pervasively absorbed—the message that a person of color can responsibly govern.

On the eve of an election campaign that is mired in open and belligerent racism, Randall Kennedy’s book, though now out of date, is an instructive account of how a black man teaches white men (and women) that race should be irrelevant. Only when we all learn what Obama is trying to show us will we achieve the transcendence of a postracial society.

Dr. Jeanne S. M. Willette

The Arts Blogger



The Betrayal of the American Dream

The Betrayal of the American Dream

by Donald L. Bartlett and James B. Steele


The New Enclosure Movement

Essentially “the American Dream” has always been a middle class dream. Thanks to carefully targeted government policy, the middle class has been systematically privileged and advantaged, while the lower classes lived under surveillance and were kept under control. Even in the Gilded Age, those glorious years before the hated personal income tax was ratified as a Constitutional amendment in 1913, aspirational Americans dreamed of owning their own farms or starting their own businesses or of finding a good job. Like “Liberty” and “Justice,” those dreams were The American Way.

But hiding behind those aspirations and fine words were government measures that worked in favor of the rich, making a mockery of sacred American words such as “equality” and “fairness.” It is the thesis of Donald Bartlett and James Steele’s latest book, The Betrayal of the American Dream, that it is not just the American Dream that has been “betrayed” but all of the Americans who are not rich. And even worse, these Americans have been betrayed by their fellow citizens, the very rich and the very powerful, who have essentially thrown them and their dreams under the bus…or the stretch limousine.

Indeed, the first chapter of The Betrayal of the American Dream is entitled “Assault on the Middle Class,” and the account of the “assault” begins with a real person, Barbara Joy Whitehouse, one of the many people left behind in the stampede of the wealthy, who, in their urgency to help themselves, trample over the rights and dignity of the ordinary person. One could say, “What else is new?” Or one could say, “This sounds familiar.” Or one could repeat the old adage, “The rich get richer and the poor get poorer.” But this attitude of selfishness, which rips the fabric of society apart, is new, and the disregard of the rich for the communal and historic social compact is relatively recent. Before entering into the weeds of this angry and informative book—how the American dream was betrayed—a couple of charts might be in order.

The Contemporary Middle Class

First is the now famous chart of the “flatlining” incomes of the middle class since 1970, juxtaposed with the rising incomes of the upper class. The blue line is the income of the middle class and the red line represents the wealth of the upper class. The blue line stretches evenly across four decades, staying consistent and flat, even while prices on everything rose. The red line rises like a star, soaring to the skies of unbelievable wealth, charting an upward-bound path towards more money than any one human being could ever spend. The source is CNN Money:

The next chart is the equally famous “Parfait Chart,” which shows different colored layers, demonstrating the “thickness” of the layer entitled “Tax Cuts for the Rich.” The much derided Recovery Plan, also known as “The Stimulus,” from the Obama administration is a tiny pale layer squashed under the Bush Gift to his “Base,” as he called his wealthy supporters. This chart is courtesy of Center on Budget and Policy Priorities:

These charts are classic illustrations of ideology, or how the government favors the interests of the dominant class. The rich prosper and everyone else pays for their gains; or, to be more precise, the 99% hand over their hard-earned money to the 1% in order to encourage these individuals to, at some unspecified point in time, trickle something down upon the poor. If the Bush tax cuts for the wealthy, now twelve years old, are either increased or allowed to continue as the Republicans wish, the four-year recovery from the Wall Street Crash will be crushed and the debt will continue to rise—along with the incomes of the rich. This parfait chart is instructive because it shows how marginal the Bush Wars (on the credit card) were compared to the Bush Giveaway to the most wealthy and the least needy in America. I present these charts for a reason: these bright colors bring to mind another kind of economic map, literally a map that shows what happens when the rich use government to take away from the lower classes. In the eighteenth century, this seizure of resources was called the Closing of the Commons or the Enclosure Movement.

The First Closing of the Commons

The chart above is actually a map of the Commons of an English village called Kibworth-Beauchamp, featured in the recent series The Story of England, hosted by the incomparable Michael Wood. The Commons is land held in common by the people. The actual owner of the terrain is the squire or lord of the manor who, in an act of noblesse oblige, allows the people or the tenants who work the estate—the small farmers and the peasants—to have their own plots of farmland. The farmers planted and harvested as they wished and were allowed to keep the bounty for themselves. In the old days, this obligation to one’s tenants, inherited from the Feudal era, was a responsibility that came with wealth and privilege. The Lord and Lady took care of their own. As virtuous as it sounded, noblesse oblige was also smart public policy: it is easier to control contented workers than it is to quell discontented peasants. If both sides understand that the social and economic bargain is a two-way street, then the network of obligations and responsibilities becomes the warp and woof of social relations.

In a time of unlimited power for monarchs and aristocracy, this historical equalizing of the economic scales acted as a way to repay the peasants for their service, while at the same time tying these people to the land upon which they labored. According to Wood, these strips had been worked by the same families for generations. Each strip had an individual name; each strip had its own level of fertility. Some strips were less fertile or harder to work than others, while some were fertile and easy to farm. These strips were parceled out equally, so that no one family could benefit at the expense of others. Thus a rough equality of responsibility (if not income) was created, one that somewhat offset the imbalance of power. This age-old balance enabled the rich to placate the poor, gave hope to the nascent middle class, and, in England, staved off discontent and revolution. But this social agreement, this belief that everyone had obligations under the social compact between the two classes, came to a close during the eighteenth century.

In his article, “The Second Enclosure Movement and the Construction of Public Domain,” James Boyle presented an old poem that raged against the Closing of the Commons.

The law locks up the man or woman
Who steals the goose from off the common
But leaves the greater villain loose
Who steals the common from off the goose.

The law demands that we atone
When we take things we do not own
But leaves the lords and ladies fine
Who take things that are yours and mine.

The poor and wretched don’t escape
If they conspire the law to break;
This must be so but they endure
Those who conspire to make the law.

The law locks up the man or woman
Who steals the goose from off the common
And geese will still a common lack
Till they go and steal it back.


The Closing of the Commons or the Enclosure Movement ended, rather abruptly, a centuries-old set of legal and social customs pertaining to the balance between privilege and powerlessness. Nowhere is this “shock of the new” better illustrated than in Thomas Gainsborough’s portrait of Mr. and Mrs. Robert Andrews (1748). In her iconic description of this painting, art historian Ann Bermingham alludes to “agrarian change.” On one hand we see the accouterments of privilege: the pretty blue silk dress and dainty pink shoes of Frances Andrews and the flintlock rifle and dead game displayed by Robert Andrews. She does not have to labor and he has the inalienable right to hunt on his own property. But the background of the painting, the landscape view that made all of these attributes possible, tells a new story: the Closing of the Commons.

We see the Enclosure Movement stretched out behind the newly married couple. The absence of labor, of the workers who serve the estate, is palpable. The wide open Commons are fenced in, walled in, making enclosures for sheep. The reasons for Enclosure over this hundred-year period were complex and varied over time and place. In her article “Jane Austen and the Enclosure Movement: the Sense and Sensibility of Land Reform,” Celia Easton pointed out that

Owners of large estates began enclosing their land when the market and transportation infrastructure made an acre of land devoted to raising sheep more valuable than an acre of land devoted to raising barley. Sheep herding had immediate advantages over farming: lower labor costs, less dependency on weather, and easier land management. Extreme climatic events and disease did threaten the main capital investment—the sheep themselves—but large landowners were less affected by these threats than small landowners, since their sheep had access to larger pasturage and shelter from inclement conditions. None of the decisions to enclose land to raise sheep would have been made, however, without a market for wool and the roads on which to transport it.

What we are seeing in Mr. and Mrs. Robert Andrews is that the wool trade had become more economically profitable and that centuries of farming the same strips of the Commons had exhausted the land. As Easton stated, for centuries the English government had restricted Enclosure, the upper classes’ pursuit of greater profit, in order to protect the lower classes; but by the eighteenth century, profit motives overtook moral obligations and social concerns, and the Commons were Closed, either by parliamentary means or by unilateral action on the part of the landowner. In 1748, Mr. and Mrs. Robert Andrews were on the cutting edge of Enclosure, slicing and dicing their lands and pushing the villagers off their ancestral lands. In other words, the land was outsourced to the sheep.

The Contemporary Closing of the Commons

The Closing of the Commons and the ultimate “betrayal” of the common people in the eighteenth century is similar to the “betrayal” Bartlett and Steele describe in their book. In our time, the post-World War II period, a series of government policies was designed to raise the middle class, from the G.I. Bill to government projects, such as infrastructure to make interstate commerce more efficient—all of which elevated lower class white males (and their families) to the middle class. As seen in the photograph of Levittown above, it is true that these post-war laws were explicitly directed towards the white male population as a reward for their services in the War. Women and people of color were consciously left out of the post-war benefits boom and their wartime service was expressly not recognized. Both groups, together the majority of the American population, were thus placed under the curse of “redlining,” denied loans for homes and entry into certain neighborhoods and access to certain jobs and schools.

But post-war government policy had a large and positive impact, creating an extended middle class with rising consumer power and rising incomes that allowed men and women to purchase the post-war avalanche of new commodities. By 1970, however, a mere twenty years later, the party was over as the outsourcing of good manufacturing jobs began, slowly at first, a trickle here and there, gradually widening into a stream, predicting the flood of jobs gushing towards Asia. Low- and high-skill manufacturing jobs (the usual domain of the white male) were shipped overseas, where desperate workers did the same jobs at a fraction of the wages. The American worker and the middle class professional were left high and dry while the wealthy took advantage of laws and tax policies they had helped fashion to enrich themselves through outsourcing.

Ever since the Enclosure Movement, sociologists and economists have argued over whether the Closing of the Commons was theft from the people or whether, in the long run, the result was positive. Of course, as John Maynard Keynes pointed out, “in the long run, we are all dead,” and the long-term benefits have proven to benefit one group, the rich, over the other, those who work to make the rich richer. As in the eighteenth century, those in power have sloughed off the sense of responsibility while retaining the idea of privilege. Just as there was a refusal to accept age-old obligations two hundred years ago, today there are no thoughts of citizenship and no concern with giving back or paying forward for the greater good or the future of the nation. As the authors of The Betrayal of the American Dream point out,

In our 1992 book America: What Went Wrong? we told the stories of people who were victims of an epidemic of corporate takeovers and buyouts in the 1980s. We warned that by squeezing the middle class, the nation was heading toward a two-class society dramatically imbalanced in favor of the wealthy. At the time, the plight of middle-class Americans victimized by corporate excess was dismissed by economists as nothing more than the result of a dynamic market economy in which some people lose jobs while others move into new jobs—“creative destruction…”

The issue now, as it was in the two centuries of the Enclosure Movement, is not the “creation” of new ways of making wealth but the “destruction” of the old ways and the impact of the “betrayal.” Most importantly, when it is asked who benefits from these economic changes, it becomes clear that the so-called “creativity” which benefits certain individuals also results in the destruction of the lives of the masses, who cannot live long enough to benefit from future largesse. The result of the Enclosure Movement was a disconnect between the people and the land—Bermingham calls the effect “alienation.” The landowners severed the ancient obligations of the squire, and the peasants were separated from the land that they had long regarded as “theirs,” to the extent that they had named their plots.

Globalism and the Abandonment of the Land

Today, Globalization has become the new Enclosure Movement. In the process of moving towards a new international economy—and this is a point that Bartlett and Steele did not emphasize—rich Americans, like American corporations, have less and less connection to their own nation: their wealth is global and consequently their interests and their fealties are international. The result is a waning of patriotism, of any connection to the land (America) and the people who live in it. It has been said by many political commentators, such as Matt Taibbi (Griftopia and The Great Derangement) and Chrystia Freeland (Plutocrats), that the new wealthy class is not American; they are citizens of the globe who merely happen to live in America. As global citizens, these mega-rich people have no obligation to America and therefore have no compunction about “betraying the American dream.”

Today, money (whether virtual or real) has replaced land as the major source of wealth. During the nineteenth and twentieth centuries, wealth came from ownership of businesses or corporations that were local and that depended upon a symbiotic relationship between the communities and the laborers. Henry Ford understood that his workers needed to earn enough money in his factories to buy the cars they made. In the twenty-first century, this common-sense understanding that labor and management had needs in common and that their relationship was reciprocal has dissolved. In fact, an aerial photograph of homes in the Hamptons looks remarkably like the Enclosure Movement in action. The coast and the sea are all privately owned, controlled, and enclosed.

Breaking the Social Bonds

But the sources of money in our century are global and not local. The global workers are speechless and powerless citizens of totalitarian nations which are in league with American corporations. Management does not manage workers; managers manage the income or the wealth of the company. American workers have been fired, outsourced and disenfranchised, losing their jobs, their futures and their governmental representation. As Bartlett and Steele write,

At a time when the federal government should be supporting its citizens by providing them with the tools to survive in a global economy, the government has abandoned them. It is exactly what members of the ruling class want. The last thing they want is an activist government—a government that behaves, let’s say, the way China’s does. Their attitude is “let the market sort it out.” The market has been sorting, and it has tossed millions out of good-paying jobs. Now that same ruling class and its cheerleaders in Congress are pushing mightily for a balanced budget at any cost. If it happens, it will be secured mostly by taking more out of the pockets of working people, driving yet another nail into the middle-class coffin. The economic elite have accomplished this by relentlessly pressing their advantage, an advantage that exists for the simplest of reasons: the rich buy influence.

The goals of a corporation are short term: make money now and don’t worry about the future. Or, to put it another way, the corporations are no longer linked to a nation, so they don’t have any stake in the people of any country. In other words, the relative ability of the American middle class to buy corporate products or commodities is irrelevant to the international business. The only relevancy is profit. To corporations, there is no higher good than higher profits. Hiring American workers is expensive: American wages are higher than in most Asian countries and, unlike European countries, American businesses are expected to provide health care benefits and manage retirement accounts. No sane profit-minded corporation would hire American workers when Asian workers could be hired at a fraction of the cost. The free market is free of responsibility and of allegiance to one’s flag. As the authors point out,

Corporate executives contend that they are forced to relocate their operations to low-wage havens to remain competitive. In other words, their domestic workers earn too much. Never mind that manufacturing wages are lower in the United States than in a dozen other developed countries.

But Bartlett and Steele are also interested in telling the story of how the wealthy have been able not only to remove the sources of their income from American shores but also to protect their wealth. It is not just that the very rich and powerful have moved the jobs out of reach of the worker; they have also moved their money out of the reach of the government. And the government, or the politicians, have allowed the rich to strip America of the money the nation has earned for them. As Bartlett and Steele charge, the wealthy “lack a moral or civic compass” and are “without a purpose beyond its own perpetuation with no mission except to wall in the money within its ranks.” A case in point would be a Birkin bag that was auctioned off in 2011 for over $200,000: the cost of a modest middle class home in a modest Midwestern state, or the amount of four middle class incomes.

That the purse costs as much as a home–and that home is probably in the hands of a bank that has foreclosed and refuses to refinance–raises the question of how much money is “enough?” Is the opportunity to own such an object so important that the possession overrides morality or common sense or American values? The authors assert that America has ceased to be a democracy and has, over time, devolved into a “plutocracy” in which the common people are not so much ruled by the rich as they are exploited by the rich. The rich can’t be bothered to be part of the government; it is easier to buy politicians to enact laws and rules that benefit their one driving desire—to accumulate money, more money, and then even more money.

Ironically, it was Wall Street that disclosed the emergence of the American plutocracy. As early as 2005, a global strategist at Citigroup, Ajay Kapur, and his colleagues coined the word “plutonomy.” They used it in an internal report to describe any country with massive income and wealth inequality. Among those countries qualifying for the title: the United States. At the time, the top 1 percent of U.S. households controlled more than $16 trillion in wealth—more than all the wealth controlled by the bottom 90 percent of the households. In their view, there really was no “average consumer,” just “the rich” and everyone else. Their thesis: “capitalists benefit disproportionately from globalization and the productivity boom, at the relative expense of labor,” a conviction later confirmed by America’s biggest crash since the Great Depression. The very rich recovered quite nicely. Most everyone else is still in the hole.

Indeed, we of the middle class are more than likely to stay in “the hole.” Bartlett and Steele made the case that,

Only once before in American history, the nineteenth-century era of the robber barons, has the financial aristocracy so dominated policy and finance. Only once before has there been such an astonishing concentration of wealth and power in an American oligarchy. This time it will be much harder to pull the country back from the brink. What is happening to America’s middle class is not inevitable. It’s the direct result of government policy, and it can be changed by government action.

It is important to realize to what extent the moneyed class has become the equivalent of the absentee landlords of the eighteenth century. The middle class is simply unimportant to them, their plans, their goals.

Despite obligatory comments about the importance of the middle class and why it should be helped, America’s ruling class doesn’t really care. They’ve moved on, having successfully created through globalization a world where the middle classes in China and India offer them far more opportunities to get rich.

In addition, Bartlett and Steele map out the thinking of corporate America. The “job creators” understand that there is a trade-off between providing jobs for Americans and providing them for Indians, and they piously decide that it is good and righteous to elevate the inhabitants of Madras instead. The name of the game is “creative destruction,” as jobs are created in China and destroyed in America.

The result is a huge transfer of wealth from the middle class to the wealthy in this country, as well as to workers in China, India, and other developing nations. No one wants to deny people in those countries the right to improve their lot, but the price of uplifting them has been borne almost entirely by American workers, while in this country the benefits have flowed almost exclusively to a wealthy super-elite. Globalization was peddled on the basis that it would benefit everyone in this country. It hasn’t, and it won’t as long as current policies prevail.

The phrase “has been borne almost entirely by” used by Bartlett and Steele is one that can also be applied to the tax code: it is the middle class that pays the price of globalization and it is the middle class that pays the taxes that pay for America. And it is not just rich individuals who refuse to pay their fair share; the corporations similarly refuse to pay their taxes.

One explanation for the tax burden on middle America is that for years U.S. multinational corporations have refused to bring home billions of dollars they’ve earned on overseas sales because they don’t want to pay taxes on those profits. Sitting in banks in the Cayman Islands, the Bahamas, Switzerland, Luxembourg, Singapore, and other tax-friendly jurisdictions is a staggering amount of money—an estimated $2 trillion, a sum equal to all the money spent by all the states combined every year, or more than half the size of the annual federal budget.

The Un-Freedom of the “Free Market”

We are told by the ruling class–or their mouthpieces, the politicians–that the “free market” is at work, that no laws have been broken, and that any regulation of the free market would be a disaster. However, what is not said is that the market is not a level playing field. The market is not free; it is fixed, a rigged game. The market is Vegas, where the house always wins and the weekend punters always lose.

Ultimately, the rule-makers in Washington determine who, among the principal players in the U.S. economy, is most favored, who is simply ignored, and who is penalized. In the last few decades, the rules have been nearly universally weighted against working Americans. That a huge wealth gap exists in this country is now so widely recognized and accepted as fact that most people have lost track of how it happened. One of the purposes of this book is to show how the gap became so huge and to explain why it was no accident. Over the last four decades, the elite have systematically rewritten the rules to take care of themselves at everyone else’s expense.

The myth of the Free Market is just that—a myth. As the authors point out, Germany, Japan, and European countries such as France protect their citizens against the ravages of the market. In America we decry “protectionism” in the name of American corporations who want to sell American products abroad. The middle class wants, we are told, the ability to purchase “cheap” televisions from South Korea, but as Bartlett and Steele point out, the trade between America and its trading partners is not free: their workers are protected; ours are not. The result is that American cars are a luxury in China, costing around $100,000. Europe and Asia are simply not big markets for American cars, which, at home, must compete with Toyotas, et al.

Unfair competition that benefits the rich and forces the workers and the poor to take the hit has been going on ever since travel and technology made globalization possible.

What is different today is that a company can go under or “fail” regardless of competition or profitability. All it takes is for the company to be swooped down upon by a corporate raider intent on a “hostile takeover.” Indeed, in their description of what a private equity company like Bain does to a business, the authors state that the vulture-like investors argue that the elimination of companies and jobs forces a greater efficiency and thus benefits the “economy.” Bad CEOs are removed and unproductive workers are sent away, they argue; everyone benefits and the nation as a whole is served. But Sensata, a company with record profits, was suddenly swallowed up and closed down by Bain Capital, and its jobs and equipment are being shipped to China—all in the name of a greater profit. So we ask: who benefits? Which economy? Theirs or ours? While using the word “economy,” the corporate executives seem to imply the American economy, but what they really mean is that their personal economic positions are improved on the global stage.

The managers of the largest equity and hedge funds have become immensely wealthy—many are billionaires—even though some of the companies they bought and sold later foundered. In addition to the rich fees they harvest, private equity fund managers rake in millions more courtesy of U.S. taxpayers. Thanks to Congress, a portion of their annual income is taxed at 15 percent (rather than 35 percent) under an obscure provision called “carried interest.” This puts that income in the same tax bracket occupied by the janitors who clean their buildings. Using the proceeds from their deals and the money they save on taxes, private equity and hedge fund managers have lavish lifestyles featuring multiple residences, private planes, and ostentatious parties.

As David Stockman described in The Great Deformation, meanwhile, the companies seized by Bain-like firms, loaded down with debt, gutted, and left for dead, cannot be more “efficient” because the investors/looters have pocketed all the money. Stockman, once Ronald Reagan’s budget guru, pointed out that not only does wealth not trickle down, but the kind of wealth won by investment capital is not a win-win proposition: the investor wins by destroying a healthy company, displacing thousands of American workers, and gutting hundreds of American towns. The wealthy, the authors write, are able to buy not just Congress and other key members of the government but also so-called “experts,” academics in supposedly intellectual “think tanks,” who are well paid for their so-called “reports” on the economy. Writing of the fabulously rich Koch Brothers, who fund any number of right-wing causes, Bartlett and Steele said,

The Kochs have contributed $12.7 million to candidates (91 percent Republican) since 1990 and spent more than $60 million on lobbying Washington in the last decade. But their greatest impact is the millions they have poured into foundations, think tanks, and front groups to mold public opinion in their favor by promoting positions that in almost every case benefit the few. The rise of these conservative think tanks and foundations directly coincides with the economic decline of the middle class. Among the more prominent of these organizations are the Cato Institute, which Charles cofounded in 1974, and Americans for Prosperity, which David launched in 2004 as a successor to a similar group that he had helped found earlier called Citizens for a Sound Economy. Dozens of other groups receive Koch money at the national or regional level. In early 2012, a rift developed between the Kochs and Cato, sparking litigation by the Kochs and charges by Cato president Ed Crane that Charles Koch was trying to gain full control of the think tank to advance his “partisan agenda.” The environmental group Greenpeace, which in 2010 examined just one issue on the Kochs’ agenda—their efforts to discredit scientific data about global warming—identified forty organizations to which the Koch foundations had contributed $24.9 million from 2005 to 2008 to fund what Greenpeace called a “climate denial machine.”

In fact, after the release of the documentary Inside Job, the outcry against economists clearly caught in conflict-of-interest situations was so loud that the profession briefly flirted with setting ethics standards for itself. Embarrassed, the American Economic Association scheduled a session on ethics at its 2011 meetings in Denver. As The Economist pointed out,

You might assume that economists already disclose their links to organisations. But when economists write articles for the opinion pages of newspapers and magazines, appear on television to discuss matters of economic policy or testify before parliamentary committees, the audience is often unaware of their non-academic affiliations. A study by Gerald Epstein and Jessica Carrick-Hagenbarth of the University of Massachusetts, Amherst, looked at how 19 prominent academic financial economists who were part of advocacy groups promoting particular financial-reform packages in America described themselves when they wrote articles in the press. Most had served as consultants to private financial firms, sat on their boards, or been trustees or advisers to them. But in articles written between 2005 and 2009 many never mentioned these affiliations, and most of the rest did so only sporadically and selectively. Readers may have assumed they had more distance from the industry than was in fact the case.

Can This Country Be Saved?

The authors of The Betrayal of the American Dream, who have watched the American economy for years, end their book with a plan to remedy the current situation.

Over the last four decades, public policies driven by the economic elite have moved the nation even further away from the broad programs that helped create the world’s largest middle class, to the point that much of that middle class is now imperiled. The economic system that once attempted to help the majority of its citizens has become one that favors the few. Not everyone in the middle class who pursued the American dream expected to get rich. But there was a bedrock sense of optimism. Most people felt that life was good and might get better, that their years of dedication to a job would be followed by a livable, if not comfortable, retirement, and that the prospects for their children and the generations to follow would be better than their own.

The writers lay out a series of reforms that they think are necessary to save the middle class. From reforming the tax code, which has been written to favor the wealthy, to policing the financial markets, to providing Keynesian stimulus to rebuild the infrastructure—all of these suggestions are common sense, and all are doomed to failure unless the voters demand otherwise. Bartlett and Steele suggest that

Middle-class Americans, still the largest group of voters, must put their own economic survival above partisan loyalties and ask four simple questions of any candidate who wishes to represent them: 1. Will you support tax reform that restores fairness to personal and corporate tax rates? 2. Will you support U.S. manufacturing and other sectors of the economy by working for a more balanced trade policy? 3. Will you support government investment in essential infrastructure that helps business and creates jobs? 4. Will you help keep the benefits of U.S. innovation within the United States and work to prevent those benefits from being outsourced? The choices we make in the candidates we elect and the programs and policies we support will set the direction of the country.

It will be difficult for Americans to put country before party and to look past ideology to find facts, for as Thomas Frank pointed out in his 2004 book, What’s the Matter with Kansas? How Conservatives Won the Heart of America, Americans can be counted on to vote against their own best interests. His argument, hotly contested by some writers, is that class interests, i.e., money, have been replaced by ethnic interests, i.e., race. Lower- and middle-class white people have been persuaded that their interests are aligned with those of the upper classes, who will—in their own good time—“trickle” their gains “down” to the deserving few. Someday, they are assured, “the job creators” will return the jobs they have shipped overseas. Sadly, those jobs are not coming back, and the middle class must start standing up for itself. As The Betrayal of the American Dream concludes,

What’s at stake is not only the middle class, but the country itself. As the late U.S. Supreme Court justice Louis Brandeis once put it: “We can have concentrated wealth in the hands of a few or we can have democracy. But we cannot have both.”

One thing is sure: only the middle class can help itself; no one else will.

Dr. Jeanne S. M. Willette

The Arts Blogger









“Drift” by Rachel Maddow

Drift: The Unmooring of American Military Power (2012)


The heart of the question of what Rachel Maddow calls Drift is: how do we wage war in the twenty-first century? What is the purpose of war in the contemporary era? And who fights these wars? Or, to twist the title of a famous film series from World War II, why do we fight?

The answer is: because the President wants us to fight.

American history has been based on the premise that Americans fight for our rights to be free and to live in a democratic society. We imagine ourselves to be valiant warriors—citizen soldiers, as Stephen Ambrose named those who fought in the last “good war.” Maddow quoted future President Thomas Jefferson as saying in 1792, “One of my favorite ideas is, never to keep an unnecessary soldier,” noting that once the “necessary” war is fought, the “necessary” soldiers fade back into civilian life. But that image of Jefferson’s Yeoman Farmer, who could be counted on to spring to the country’s defense when needed, is a highly idealized one.

By the middle of the nineteenth century, Jefferson’s concerns about the dangers of keeping a standing army had melted in the heat and fire of expansionism and the Mexican-American War (1846-1848). Maddow quickly skips over a sizable chunk of American history: the decades of Manifest Destiny and an extended campaign of genocide against Native Americans. This slide into Empire was capped with the Spanish-American War, when America finally controlled the maximum territory possible. But making and maintaining an American Empire required a standing army—how else do you wage a war of conquest from one end of the continent to the other?

I mention this seventy-year slice of history not to criticize Maddow for not covering it, but to make the point that the desire to keep an army, a strong military force, under close command and control of the executive branch has always been present in American culture, no matter how much the national mythology denies this history. Certainly the Great War was a rude interruption in a self-satisfied isolationism, and Americans were dragged with great reluctance into it and into the Second World War. Maddow emphasizes how quickly the military was demobilized after these two great wars. However, to paraphrase Karl Marx, the insistence that America was, at heart, a peace-loving nation was a discourse pregnant with its opposite. The ability of a strong President to wage war on command was always present and had been practiced for the bulk of the nineteenth century—war disguised as Manifest Destiny.

That said, the importance of the model or the paradigm of the Second World War cannot be overstated. It was not just the “Last Good War,” as has often been noted; it was also the last conventional war, because it was the last war America fought with Europeans. A shared culture of combat enabled the armies of World War II to fight on the basis of shared assumptions. Japan, having become “modern” by first copying the West and then by beating the West, for the most part complied. Here and there, like Germany, Japan broke the laws of “civilized warfare,” surely a contradiction in terms, but for the most part the basic “rules” were followed. Armies faced and fought one another; navies faced and fought one another. The goal was for one side to defeat the enemy, invade the territory, seize the capital, and force a formal surrender.

The new enemies did not share these cultural expectations and proceeded to ignore the European forms of fighting. The fact that this rather stilted and formal mode of thrust and parry had a long history, stretching back to Medieval times, did not impress the Vietnamese or the tribes of the Middle East. After a brief foray into South Korea, America fought a European-style war only once again—an even briefer visit to Iraq. What followed would be a continuation of the Viet Nam-style quagmire, a series of non-wars that could not be won, only endured until exhaustion intervened. Despite these unpalatable facts, or because of them, the “Dream War” was a re-run of the Second World War, the Good War, the Winnable War, where words like “victory” and “win” had some meaning.

The Proxy War

After the Second World War, America somehow entered into a continuous state of total war, though these were mostly undeclared “wars,” called interventions or some other nomenclature. It seems that after four years of national militarization, it was hard to break the habit of defensive belligerence. The new enemy was the Soviet Union, and the Cold War began. There is, apparently, something comforting, in an ordering, logical sort of way, about having a known enemy. The “enemy” sorts the world neatly into two halves: good and evil, simple dualities. We know how hard it has been to let go of a good Foe. Once the Berlin Wall fell and the Soviet Union imploded, America continued to seek another Opponent. As Maddow comments in her section on Ronald Reagan,

We’d got in the habit of being at war, and not against some economic crisis, but real war—big, small, hot, cold, air, sea, or ground—and against real enemies. Sometimes they’d attacked us, and sometimes we’d gone out of our way to find them.

But the post-war and the post-Cold War world is not so neat and tidy, and the new enemies were not schooled in eighteenth-century military tactics of opposing lines protecting important strategic sites. Herein lies the trouble with contemporary war, and this is where Maddow begins to make her point about “drifting” away from traditional formal ways of waging war through declaration and mobilization. Maddow writes of the standing army after 1945,

We had 150,000 troops in the Far East, 125,000 in Western Europe, and a smattering in such diverse and far-flung locations as Panama, Cuba, Guatemala, Morocco, Eritrea, Libya, Saudi Arabia, Samoa, and Indochina. Wary as never before of the Communist threat—now a constant “speck of war visible in our horizon”—America had come to see Jefferson’s preoccupation with standing armies and threats from inside our own power structure as a bit moldy. We were, after all, the only country still capable of keeping the planet safe for democracy.

The Cold War set a precedent for war with a goal but no foreseeable ending. Most people thought that the Cold War would never end, precisely because it was cold. Until the twenty-first century, Americans had not considered the possibility that a Hot War would have not only no foreseeable ending but also no articulated purpose. Maddow takes the reader on a Long March from the Viet Nam War into Iraq and Afghanistan, but her purpose is not to refight these endless wars but to discuss why we are fighting them in the first place. The answer seems to be a particularly male need—on the part of the President and the military—to feel manly, and a rather frightening willingness on the part of a temporary leader, i.e., the President, to be solely responsible for the spending of blood and treasure.

In laying out how Maddow made her case, I want, first, to move directly past the Viet Nam War into the peculiar non-wars of Ronald Reagan and, second, to use the “Reagan Wars” as examples of the lingering Viet Nam Syndrome. The reason for skipping over the conduct of the war in Viet Nam is that it was an inherited war, with long, long roots back to the French Empire. After the Second World War, the tiny Asian nation wanted to be independent of the French who, after surrendering to Germany, were driven to retrieve their dignity by reclaiming parts of their “empire,” such as Viet Nam. The French dragged America into this dubious enterprise through blackmail: if we gave them military and monetary assistance, they would join NATO. And then the French were defeated at Dien Bien Phu in the spring of 1954. They withdrew and left America holding the bag, so to speak.

Viet Nam became an “American” war by circumstance, doubly damning the conflict as having nothing to do with “our” vital interests. Even though all Viet Nam wanted was national self-determination, as promised by American President Woodrow Wilson, the American government decided that this was the ground where it would fight a proxy war against Communism. From 1959 to 1975 America fought a war that was never declared. Maddow recounts that in an ill-considered desire to carry out the supposed wishes of the deceased President John F. Kennedy, President Lyndon Johnson slip-slided into war sideways through a draft of marginal young men. Privileged young men, such as future President George W. Bush, future Vice-President Dick Cheney, and future Presidential candidate Mitt Romney, could receive draft “deferments.”

The point that Maddow makes, in laying out her argument concerning wars ordered at the whim of the Executive Branch, is that by the 1960s, in the midst of the post-war boom, it was unwise both to wage war and to mobilize the population for war. People did not want another war, not the kind of war that involved the entire population. In order to fight this new war, President Johnson sought recruits from the sons of citizens who had no political clout.

So from the first 3,500 combat Marines Johnson sent ashore near Da Nang on March 8, 1965, to support the first sustained bombing of North Vietnam to the 535,000 American troops who were in Vietnam at the end of his presidency, something like 1 percent would be Guard and Reserves. The active-duty armed forces shouldered the burdens of Johnson’s land war in Asia—fleshed out by draftees, chosen at random from among the ranks of young American men who were unable or unwilling to get themselves out of it.

A dangerous step had been taken—fighting a wrong war with the wrong, or unwilling, people, all in the name of an abstraction: the Cold War. Unfortunately for President Johnson, television had been invented, and Americans indicated strongly that they did not want to send their children off to foreign wars, nor did they wish to see nightly battles on television. So for future presidents the problem would be compounded: how to go to war with the minimum number of soldiers—no need to call attention to the fact that wars are fought by real people—and with as few witnesses as possible, all the while achieving maximum glory. And here is where Ronald Reagan rode to the rescue with the solution to the problems Lyndon Johnson had left behind.

The Viet Nam War ended in a humiliating defeat for America. The greatest nation in the world had to withdraw ignominiously from an inglorious conflict that had been fought to make a political point against an opponent who was never present. The “manhood” of America had been damaged by a guerrilla force impervious to traditional warfare and offended by the occupation and division of its nation by colonial masters. Instead of studying the experience of the war and coming to the realization that the myth of American isolationism could make an excellent reality, President Ronald Reagan wanted to help America “man up.”

The Reagan Solution

However, Reagan was thwarted by a belated law passed by a chastened Congress, a law to curtail adventurous presidents and to limit their War Powers. As Maddow describes it,

The War Powers Resolution of 1973 was an imperfect law. But by passing it, the legislative branch was putting the executive on notice—it no longer would settle for being a backbencher on vital questions of war and peace. If the president wanted to execute a military operation (any military operation), he had to petition Congress for the authority to do so within thirty days; if Congress didn’t grant explicit authorization, that operation would have to end after sixty days by law. The Oval Office would no longer have open-ended war-making powers.

Rather than putting an end to the unfortunate “foreign entanglements” that George Washington warned of, the War Powers Resolution became an obstacle for annoyed Presidents to overcome. At this point, Maddow begins to describe how one president after another strove to wage war by other means. Reagan’s answer to the Resolution was to order strange little “interventions,” tiny wars waged on defenseless territories. Reagan had been considerably boosted in his Presidential aspirations by his contention that America should reclaim the Panama Canal. The fact that his jingoism, as Maddow puts it, struck a nerve with many Americans suggests that the post-Viet Nam War syndrome—the shame of defeat—was, years later, a national mood.

Once he became President, Reagan immediately began building up the military. To the end of his Presidency, he dreamed of a fantastical mirage of the conquest of space with a weapon called “Star Wars.” Indeed, there was always a strange and surreal aspect to Reagan’s military adventures: he ran when attacked and attacked when there could be no reply. As Maddow explains, Reagan seemed to lack the ability to separate rhetoric from reality; it appears that he actually believed that America had “lost” the Panama Canal, that it was necessary to invade Grenada, and then that the government of Nicaragua had to be overthrown with secret shipments of arms to the contras. War under Reagan became a curious mixture of secrecy and public relations.

Maddow lays out how the Reagan administration worked very hard to write a metanarrative that was both teflon and atomic: it was an untouchable story, and it would have a long half-life. The untouchable narrative was that America had to be Number One and that it had enemies everywhere. Therefore, regardless of facts to the contrary, or regardless of the lack of facts, America was ringed with enemies and in constant danger. From today’s vantage point, the paranoia of the Reagan years seems predictive: a Republican administration frightens the American people with a threat that does not exist, calls those who dare to bring facts to the table “Communist stooges” and what have you, and ignores the impact on those outside of America who are observing these antics. As Maddow writes,

The Soviets put their own intelligence services on high alert, watching for any and every sign of American military movement. And their ambassador to the United States, Anatoly Dobrynin, who spent much of his adult life in Washington, was gently passing the word to his bosses in the Kremlin that Reagan really did believe what he was saying. Dobrynin later wrote in his memoir that “considering the continuous political and military rivalry and tension between the two superpowers, and an adventurous president such as Reagan, there was no lack of concern in Moscow that American bellicosity and simple human miscalculation could combine with fatal results.” In 1983, when fear at the Kremlin was at an all-time high, the Reagan administration was more or less oblivious to it.

The dangers of this story with a long half-life, and of this myopic inward vision, are apparent. Clearly, Reagan believed everything he was told (he apparently neither read daily briefings nor spent much time in the Oval Office), and clearly he was playing to a local audience for political purposes. Otherwise, why, out of all the nations in the world, invade Grenada? Maddow writes in an ironic, sprightly style that, in certain contexts, can be somewhat disconcerting, but here, in her description of the Battle of Grenada—excuse me, Operation Urgent Fury—the amused, detached tone of near-parody is perfect. The trick the Reagan Administration needed to pull off was to keep the Operation a secret while convincing the nation that a small group of American medical students was being threatened by an evil Latino dictator.

The story of Operation Urgent Fury reads like a script from the Keystone Cops. It would be a funny story, except for an earlier event that would prove to be prophetic:

On the morning of October 23, 1983, a suicide bomber drove a truck containing six tons of explosives and a variety of highly flammable gases into the US Marine barracks at the airport in Beirut, Lebanon, killing 241 soldiers there on a don’t-shoot peacekeeping mission. Fourteen months into the deployment, and after an earlier suicide bombing at the US embassy in Beirut, Reagan was still unable to make clear to the American people exactly why US Marines were there.

The answer to an unanswerable attack in Lebanon was to invade Grenada and save the medical students from Fidel Castro. Except that, according to Maddow, “Fidel Castro knew about the invasion well before the Speaker of the United States House of Representatives.” Not only had the rescue teams not bothered to locate the students, who were scattered in various locations, but also, in Maddow’s words, “The chancellor of the medical school had already been telling reporters that their students hadn’t needed rescuing.” Indeed, some students were left behind, never to be “rescued.” But never mind: America was getting its macho back, and the public’s attention was diverted from the 241 deaths in Lebanon. The Administration had taken its eye off a very significant ball, the Middle East, to gaze southward to Latin nations, where Communism was supposedly brewing at America’s very doorstep.

Although Congress was not pleased with Reagan and slapped his (now popular) hand, these unilateral actions continued under Reagan’s not always certain management. Maddow quotes the Speaker of the House, Tip O’Neill:

“He only works three and a half hours a day. He doesn’t do his homework. He doesn’t read his briefing papers. It’s sinful that this man is President of the United States. He lacks the knowledge that he should have, on every sphere, whether it’s the domestic or whether it’s the international sphere.”

The Iran-Contra experience is now a matter of history, and it is still unclear who was in charge, or whether or not Reagan was in the grip of Alzheimer’s. What is certain is that the “victory” in Grenada gave the President a sense of entitlement, and he was determined to have another war, in Nicaragua. As Maddow states,

Reagan was convinced that a president needed unconstrained authority on national security. He was also convinced that he knew best (after all, he was the only person getting that daily secret intelligence briefing). These twin certainties led him into two unpopular and illegal foreign policy adventures that became a single hyphenated mega-scandal that nearly scuttled his second term and his legacy, and created a crisis from which we still have not recovered. In his scramble to save himself from that scandal, Reagan’s after-the-fact justification for his illegal and secret operations left a nasty residue of official radicalism on the subjects of executive power and how America cooks up its wars.

In order to have his war and eat it too, Reagan and his sidekick, Oliver North, privatized this little war, which was funded through wealthy (Republican) donors and the Saudis. This unlikely enterprise—too strange to unwind here—came undone, and the clear illegalities were exposed to withering investigations. As Maddow summed up the misadventures of Ronald Reagan,

Even before all the indictments and the convictions of senior administration officials, Reagan’s new way—the president can do anything so long as the president thinks it’s okay—looked like toast. In fact, Reagan looked like toast. Whatever his presidency had meant up until that point, Iran-Contra was such an embarrassment, such a toxic combination of illegality and sheer stupidity, that even the conservatives of his own party were disgusted. “He will never again be the Reagan that he was before he blew it,” said a little-known Republican congressman from Georgia by the name of Newt Gingrich. “He is not going to regain our trust and our faith easily.” The president had been caught red-handed.

However, due to the wondrous alchemy of Republican spin, “Reagan could be reimagined and reinvented by conservatives as an executive who had done no wrong: the gold standard of Republican presidents.” Maddow goes on to recount the further adventures of the Presidents who came after Reagan. Reagan laid down not just a gauntlet to a meddling Congress but also a path to Executive Power to use the military. The key was not to wage war but to send out the troops. The problem was that the Draft had been eliminated, and the President had to use a professional or volunteer army plus the National Guard or the Reserves. It is interesting to note that the liability of not having a large standing army was now an asset. A small but flexible force, especially when combined with an international force, as in the Balkans and in the Gulf War, enabled the President to send out a focused force without “waging war” and without declaring war.

Once Reagan had established the (specious) “legal” precedent that the military was the President’s tool, there was no check to balance this power. As Maddow states,

Congress has never since effectively asserted itself to stop a president with a bead on war. It was true of George Herbert Walker Bush. It was true of Bill Clinton. And by September 11, 2001, even if there had been real resistance to Vice President Cheney and President George W. Bush starting the next war (or two), there were no institutional barriers strong enough to have realistically stopped them. By 9/11, the war-making authority in the United States had become, for all intents and purposes, uncontested and unilateral: one man’s decision to make. It wasn’t supposed to be like this.

I have been moving through Maddow’s book, or drifting through her arguments, by trying to set up, step by step, the trajectory toward waging small but satisfying wars somewhere else, with a tiny number of military personnel, at low psychological cost to the public, and with high pay-offs in bragging rights. I think that Maddow is correct to put the starting point of the rise of executive power over war at the Cold War and its ambiguities. That said, during the nineteenth century there was also a long history of expansion and empire via military campaigns that were informal “wars.” The lack of large and formally declared wars led to the misleading myth of America rousing itself only when necessary, while overwriting a longer and more complete story that was actually laced with combat.

The Two Wars of the Bushes

In order to solve the pesky “Viet Nam Syndrome,” or the reluctance on the part of Congress to venture into pointless and costly wars, Reagan had solved one problem by seizing the power to put troops in the field and solved the problem of cost by financing the action with a deficit: fight now, pay later. But Reagan’s wars, in and of themselves, were dubious and unsatisfying. What America needed was a “real” war, something that would wipe out the stain of defeat in Viet Nam, and when Saddam Hussein invaded the very small and very rich nation of Kuwait, the opportunity to re-masculinize presented itself. After a long and winding wrangle with a recalcitrant Congress, President George H. W. Bush put together an international coalition to drive Saddam out of Kuwait.

Thanks to Reagan, Bush felt that he could call up an army without consulting Congress. While Congress complained, Bush and the Chairman of the Joint Chiefs of Staff, Colin Powell, planned. Powell, a veteran of the Viet Nam fiasco, had his own theory of how to fight a war: with deep preparation and with overwhelming force. As Maddow explains,

Powell wanted an overwhelming, decisive use of force to meet American military objectives clearly and quickly. The whole Powell Doctrine of disproportionate force, clear goals, a clear exit strategy, and public support was designed to create a kind of quagmire-free war zone. He was unequivocal—he and his commander on the ground, Norman Schwarzkopf, had agreed: two hundred thousand more troops was what it would take. And they’d already made sure the president understood the numbers would go up if he decided he wanted not only to eject Saddam from Kuwait but to destroy his army, or to depose him. The mission objectives would have to be clearly defined before H-Hour. In any case, Powell and Schwarzkopf wanted five, maybe six, aircraft carrier task forces deployed to the Persian Gulf, which would leave naval power dangerously thin in the rest of the world. By the time the offensive capability was in place, about two months down the road, there would be something in the neighborhood of 500,000 American troops in the Middle East—nearly as many as at the high-water mark in Vietnam. Two-thirds of the combat units in the Marine Corps would be deployed in the Gulf. There would be no more talk of rotating troops home after six months. Soldiers had to understand they were in the Gulf until the job was done, however long that took.

This was the famous “Powell Doctrine,” designed to guarantee success. And it worked magnificently in the Gulf War, resulting in a great victory over an inept foe in a truly stupid war that ended in a graceless slaughter along the Highway of Death. Only after a long and protracted fight did Congress agree to go to war. According to Maddow, Congress objected to fighting a war in which American interests were not directly involved, but Congress was also disinclined to accept the consequences of not saving Kuwait. The Bush Administration fought a successful war, and Kuwait, a nation that circumcised its women, was restored to its (male) owners, but there were hidden costs for the future. The jumping-off point into Kuwait was Saudi Arabia, and that meant that, to one very indignant man, infidels were on sacred soil. Osama bin Laden would wait a decade to take his revenge.

Since the “good” Gulf War was fought with Reserves, it was fortunate that the engagement was, thanks to Colin Powell, a short one. But in this short amount of time, certain rules of engagement were laid down—not for the enemy but for fellow Americans. The Viet Nam War had run into trouble as much at home as in the field because it was the first war since the Civil War that was uncensored. The military would not make that mistake again. The Gulf War was stage-managed, information was controlled and doled out, and press and public were placated with video-game footage of “smart bombs” over Baghdad. As Maddow said,

Our military dazzled. The First Gulf War was all Powell could have hoped for: a clear mission, explicit public support, and an overwhelming show of force. It was fast—the ground assault lasted just a hundred hours, the troops were home less than five months later. It was relatively bloodless for the away team—fewer than two hundred American soldiers were killed in action. It was cost-effective—happy allies reimbursed the United States for all but $8 billion spent. And it was, withal, a riveting display of our military capability, almost like it was designed for TV. Americans, and much of the world, watched a Technicolor air-strike extravaganza every night. The skeptics were forced to stand down; our military had proved beyond doubt or discussion that we were the Last Superpower Still Standing.

But for longer missions, the Reserves and the video games would not be enough to placate the public. Thanks to Reagan, no serious thought was given to balancing a budget, and the military was given whatever it needed or wanted or desired. But aside from boys and their toys, supporting an adequately sized volunteer army was proving to be a very expensive proposition. The military had always supported itself: a young man could enlist or be drafted and find himself not fighting but doing laundry, providing food, or doing mechanical work. For every combat fighter, there were a dozen or so soldiers working in the support systems, as engineers or office workers.

Once the military Draft was ended in 1973 under Richard Nixon, the armed forces became all “volunteer.” At the time, those who were opposed to the Draft complained of “opportunity costs,” or the economic losses incurred by middle-class white males, now likely to have the prospect of high salaries during the post-war boom. Once the white males moved out of the way, males of color could raise themselves socially and economically by volunteering for the military, where new “opportunities” could be found. Those who were opposed to the end of the Draft felt that the ethnic and social mixing that occurred in the military knitted America into a whole nation, instead of a divided country. There was some discussion of patriotism and service to the Flag, but the urgent voices of disgruntled white males had to be heard.

Twenty years later, the all-volunteer army was an excellent career choice, but only certain demographic groups took advantage of what the government was offering: young men and women of color and young men and women from the South. The rest of the nation’s youth were not interested. The result of these very different life paths would have consequences that would take another twenty years to play out. In the short run, there was the sheer unexpected cost of maintaining a large, long-term military full of careerists and their families. Unlike the draftees, these “volunteers” did not cycle out after a couple of years; they stayed, got married, and raised families. Each soldier could easily have three or more dependents living on the base and needing care and feeding.

Maddow brings up a very interesting point about the sheer financial scale of the obligations the government takes on when it commits to a Volunteer Army. The cost of maintaining soldiers and their spouses and children and all the attendant services was huge. As Maddow explained,

In the ten years after 1985, the procurement budget had dropped from $126 billion to $39 billion and represented a paltry 18 percent of total defense expenditures. Sure, the active-duty force had been pared by nearly 30 percent and a few bases had been closed, but that didn’t come close to solving the problem. How were we supposed to ensure our Last-Superpower-on-Earth superiority when just the overhead cost of keeping our standing army milling around was swallowing between 40 and 50 percent of the Pentagon’s annual cash allotment?

The problem was solved by a now familiar term, “outsourcing.” On one hand, it is more expensive to privatize, and the corruption when private companies take the place of military personnel is vast, unchecked, and continues today unabated. On the other hand, outsourcing can be a very good thing, as Martha Stewart would say, because one can outsource actual soldiers. If one outsources soldiers, and not just food services, then the President who deploys the mercenaries is undeterred by such nuisances as Congressional approval. A corporation, such as Xe, assumes the risks and the expenses of the mercenaries, who are not eligible for Veterans’ benefits—hospitalization, education, legal protection—but who are paid accordingly with very high salaries that, unlike benefits, have end points. The government is off the hook, and the mercenaries can be charged with all kinds of illegal and dishonorable tasks, off the books.

Outsourcing began in earnest in the 1990s. President Bill Clinton was wise enough not to fight wars but to participate in peace-keeping missions, such as the one in the Balkans, where some kind of military presence needed to be in place for years. By the 1990s, the problem of going to war was solved: it was now easy to avoid the skepticism of Congress, the suspicions of the American people, and the high cost of casualties. As Maddow explains,

President Clinton never really expended much effort on the politically costly task of convincing the American public of the need to arm the Bosnians or Croatians, or the need to unleash American air power on Miloševic and the Serbs, or the need to put US boots on the ground. Instead, he found a way to do something without the necessity of making any vigorous public argument for it, and without much involving his own balky Pentagon…So it was soon after the peace accords were signed that those twenty thousand American peacekeepers—who would be joined by twenty thousand private citizens under contract to provide support services—arrived in Bosnia and Croatia as part of an international force to keep Miloševic and his Serbian military under heel. And did Clinton have a hard time selling that manpower commitment to the American people? He did not. He was helped greatly by—what else? Outsourcing.

The civil war and the genocide in the former Yugoslavia needed to be quelled, and then order had to be restored, a process that took years. Private Contractors, as these mercenaries were then called, made their first appearances in the Balkans. The consequences of the decision to privatize were disastrous. As Maddow says,

…the acute and lasting problem was that they cut that mooring line tying our wars to our politics, the line that tied the decision to go to war to public debate about that decision. The idea of the Abrams Doctrine—and Jefferson’s citizen-soldiers—was to make it so we can’t make war without causing a big civilian hullabaloo. Privatization made it all easy, and quiet.

By the time President Barack Obama inherited two wars, one in Afghanistan and one in Iraq, the private contractor was a fixture in the American military. During the second Iraq War, under the second President Bush, the ratio of Reserves on active duty to Private Contractors/Mercenaries was one to one. When the American public is told how many men and women are on active duty in these two war zones, that number should be doubled: in terms of troops in the field, the actual force is twice as large as we are told. Unfortunately, both the troops and the mercenaries are unsuited to the task of “nation building,” or modernizing and westernizing a Medieval culture that has no history of democracy or equality.

Into the Cauldron

By the twenty-first century, reasonably good excuses had to be given for rounding up the Reserves, and one had to attend to public relations: “nation building” or “bringing democracy” to benighted places seemed to be worthy causes. The invasion of Afghanistan, a barren land suitable only for the breeding of war and poppies, should have been short-lived once the objective had been obtained—to drive Al Qaeda out of Afghanistan and to kill or capture the architects of the “attack on America” on September 11, 2001. The conceptual problem was that this objective, or goal, was not a “victory,” and the second Bush administration cast about for an alternative war on better terrain, where a good old-fashioned war could be fought.

Perhaps in the distant future, psycho-historians will explain the psychology of launching a “preemptive war,” also known as the “Bush Doctrine.” The invasion and occupation of Iraq was a strange and surreal event, too familiar to be retold here, but one element remains intriguing—the willingness not just to lie but to create an alternative reality. In contrast to the Cold War, which has been deemed a simulacrum of a war, the Iraq War was a real war fought for fictitious reasons in the fevered mindset of a neo-con fantasy. As with the Reagan administration, it is unclear whether the major players actually believed their own rhetoric, whether they actually inhabited the alternative universe they created out of whole cloth, or whether, for unknown reasons, they simply wanted to send men and women off to kill other men and women on a whim.

Experience suggests that it is futile to argue with alternative universes, and no manner of proof to the contrary will convince the perpetrators otherwise. But what the Iraq War does demonstrate is another step towards executive capriciousness. The second Bush Administration proved incapable of governing, as the energy of the government was wholly swallowed up in dreams of glory. Maddow suggests that we have now reached the point where the Executive Branch is nearly unchecked and where, thanks to generous Republican (deficit-fueled) spending on defense, the military has taken on a life of its own, regardless of need or of real conditions on the ground.

A fact that’s underappreciated in the civilian world but very well appreciated in our military is that the US Armed Forces right now are absolutely stunning in their lethality. Deploy, deploy, deploy … practice, practice, practice. The US military was the best and best-equipped fighting force on earth even before 9/11. Now, after a solid decade of war, they’re almost unrecognizably better. Early worries such as how much gear we were burning through in Iraq were solved the way we always solve problems like that now: we doubled the military’s procurement budget between 2000 and 2010.

Obama Country

The new President, Barack Obama, won the office partly on “hope and change” and partly because he was against “dumb wars.” He inherited two dumb wars and virtually unchecked Executive Power to go to war. Obama is no cowboy. A thoughtful man, he is an intellectual with an analytic mind, and it seems that somewhere along the line he has gently and silently slipped the nation into the new century. As the Obama administration is demonstrating daily, the way in which President George H. W. Bush waged war was old-fashioned and outmoded, a nineteenth-century idea of fighting with twentieth-century weapons.

To return to a point I made earlier, if the starting point is the “good war,” the Second World War, then the post-war dream is already an outmoded one, a dream of “victory” and “glory” and “winning.” These terms, in the twenty-first century, are without definitions. Even the Powell Doctrine, invading with maximum force, only gets you so far—into the territory—but does nothing for a long occupation and is a hindrance when it is time to get out. And the Powell Doctrine was totally disregarded when the Bush Administration decided to invade Afghanistan and Iraq.

The Iraq War was a horribly expensive war, fought on the cheap in terms of the number of troops deployed. While bending to public disapproval of the unnecessary war in search of Weapons of Mass Destruction, the Pentagon kept the number of Reserves low but augmented them with Contractors. Iraq is a huge territory that did not want to be invaded or occupied, and the shoestring forces could not control the reluctant population. The major objective when waging an unpopular war, justified in a variety of confusing and conflicting ways, is to win it. But to do so, the Powell Doctrine must be put into play, an impossibility if the war is a “War of Choice.”

Maddow does not spend much time on the fiasco of the Iraq War, already ably covered by other incredulous historians, but she notes that

By 2001, the ability of a president to start and wage military operations without (or even in spite of) Congress was established precedent. By 2001, even the peacetime US military budget was well over half the size of all other military budgets in the world combined. By 2001, the spirit of the Abrams Doctrine—that the disruption of civilian life is the price of admission for war—was pretty much kaput. By 2001, we’d freed ourselves of all those hassles, all those restraints tying us down.

Iraq and Afghanistan, of course, did not go well. The British, who had tried to contain Iraq in the 1920s, and the Soviets, who had tried to control Afghanistan in the 1980s, could have warned the deaf Americans of their ridiculous quest. No amount of time or effort could bring about a “victory” or a “success” in these ancient lands. As if to test the neo-conservative assertion that these wars could be won with more troops (remember that the actual number of soldiers is double what we are told), Obama conducted a “surge.” In male military language, a surge is an increase of personnel for a limited period of time, in the hope of stabilizing the situation long enough to get out of Dodge. Obama’s surge allowed America to save face and taught the President that surges are futile. To ask for a surge is like asking for the price in a fancy boutique—if you have to ask, you can’t afford it; if you have to surge, you’ve lost the war.

Quietly, Obama took the advice of his Vice-President, Joe Biden, to use commandos instead. And this is where the book ends. Maddow makes the point that every step along the way disconnects “war” from national responsibility, national participation, and democratic debate. As Obama pulls out of the Twin Wars of Bush’s devising, he is escalating the ultimate dislocated war, a War of Drones waged by the CIA, augmented by occasional strikes by elite Special Forces. The Administration has a supposed “secret kill list” of those who are to be removed through long-distance strikes, and the rules of engagement are unknown. Congress is kept in the dark about the details, but the benefits are clear.

First, the President and the CIA and a small portion of the military can operate at will. They are not engaged in a war but in a program of planned assassinations, designed to take out the leaders and discourage the followers. Compared to a large number of “boots on the ground,” the Drone Program saves lives and money, blood and treasure. The result is the Ultimate Video Game. As Maddow explains it,

When one of those Blackwater-armed drones takes off with a specific target location programmed into its hard drive, it is operated remotely by a CIA-paid “pilot” on-site, in a setup that looks like a rich teenager’s video-game lair: a big computer tower (a Dell, according to some reporting), a couple of keyboards, a bunch of monitors, a roller-ball mouse (gotta guard against carpal tunnel syndrome), a board of switches on a virtual flight console, and, of course, a joystick. Once the drone is airborne and on its way to the target, the local pilot turns control over to a fellow pilot at a much niftier video-game room at the CIA headquarters in Langley, Virginia. The “pilot,” sitting in air-conditioned comfort in suburban Virginia, homes the drone in on its quarry somewhere in, say, North Waziristan. Watching the live video feed from the drone’s infrared heat–sensitive cameras on big to-die-for-on-Super-Bowl-Sunday flat-screen monitors, the pilot and a team of CIA analysts start to make what then CIA chief Leon Panetta liked to call “life-and-death decisions.” Maybe not sporting, but certainly effective.

According to an NPR article, the local pilots are required to wear uniforms, and there are programs to help them cope with the aftereffects of frequent killing, even at a distance. Maddow’s concern is that the dislocation between the decision-making process and the public, and the distance from the moral responsibility of waging war, make it easy to remain in a state of constant conflict without any accountability. She is concerned with the breakdown between Congress and the President, but I think there is another trajectory that also needs to be examined—the increase in distance between the target and the triggerman.

The real question might be another kind of separation, one that dates back to the bombing of civilians in the 1920s. When these bombings first occurred, there was little concern, because the victims were in Iraq and Ethiopia. Only when Europeans were assaulted at Guernica did any outcry occur, but these moral qualms vanished, and ten years later the Allies had firebombed Dresden, Hamburg, and Tokyo and had dropped two atomic bombs on non-military targets in Japan—all on civilians.

The ethical aspects of killing helpless human beings were wiped out by the blanket assumption that the populations of Germany and Japan were complicit in the Second World War. The rationale for these civilian bombings was that the morale of the people had to be broken. Studies after the war suggested that such bombings, like that of London, were not effective in either lowering morale or slowing wartime production, but it was hard to break the spell of cost-free and effective aerial warfare.

In fact, Powell had dissuaded Clinton from attempting to settle the Serbian conflict through bombing. Maddow quotes Clinton assistant Nancy Soderberg, who reported that Powell had advised, “ ‘Don’t fall in love with air power because it hasn’t worked,’ [he said]. To Powell, air power would not change Serb behavior, ‘only troops on the ground could do that.’ ” Indeed, the Second World War was won on the ground, in a long, slow, and deliberate drive to capture and hold territory. In the end, the most effective bombings were the two atomic bombs dropped on Hiroshima and Nagasaki. However, the second Bush Administration was still enraptured by air power and treated the helpless and blameless Iraqis to “shock and awe” in 2003…again to no avail.

Wars in the Mideast were quite different from wars in Europe. These new wars were asymmetrical: tribesmen with a cache of modern weapons against a large contingent of well-armed twenty-first-century warriors who became mired in what was part of an ongoing tribal conflict. Even though America was convinced that it was fighting a “War on Terror,” the nation was confronting an old culture that was fighting against modernity itself. In addition to fighting unwelcome change and colonialism from the outside, these tribes were fighting each other for religious reasons that were unclear to Westerners. But however sectarian these local issues, America is committed to fighting a condition that has been named a “War” to give the American public a framework through which to “read” the traumatic “event” of September 11th.

Obama has definitively changed the way in which this non-war is waged. The troops are coming home, while the Drones carry on the killing. If we follow this line of thinking—kill at a distance—from the bombing of Dresden to the Drone attacks on terrorists in Pakistan, the two points are certainly connected. What remains unclear, even in Maddow’s book, is why a President would want to take sole responsibility for body bags, ours or theirs. Drift seems to imply that one President after another “drifted” into taking more and more power because they could, because there was no power capable of stopping them. As the wars became more and more arbitrary, from Vietnam to Iraq, the personal responsibility became greater, and, as Johnson and Bush found out, the judgment of history can be harsh for those who wage war unsuccessfully and for no good reason.

But if the costs in blood and treasure are relatively low, as with the secretive Drone Wars, then power shifts decisively towards the Executive Branch. If “war” is redefined as tracking down designated targets on a “kill list,” then the ostensible cost of war goes down, as does the size of the military. If Drone attacks can do the job of people, then the need to attack or invade or occupy should diminish. The public will be happy to allow this kind of invisible war to continue, no questions asked. No more flag-draped coffins. Maddow ends her book with a list of problems that need to be solved—what she calls a “to do list.” Most of the points on her list, concerning going to war, the role of the citizen soldiers, privatization, and the disposal of nuclear weapons, will resolve themselves within a few years.

Two of her objections—the “secret” Drone Wars and Executive Power—are here to stay and are the future of war: a President in the Situation Room waiting for the outcome of a covert operation by a team of SEALs, or for a report on a strike on a target thousands of miles away. If we accept the “necessity” of dropping an atomic bomb on Nagasaki, how can we complain about a single Drone strike on one person? If we want to balance the budget, how can we not accept this cheap and reliable manner of taking the war to the terrorists? If we could go back in time and assassinate Osama bin Laden, would we do it? If so, then targeting other individuals before they do their worst is a moral act.

Although such strikes now come under the auspices of the CIA and are “secret,” based on “intelligence” that the public and Congress do not know, Rachel Maddow ends hopefully,

We just need to revive that old idea of America as a deliberately peaceable nation. That’s not simply our inheritance, it’s our responsibility.

I wish I could agree with her hopeful assessment. America has not been a “deliberately peaceable nation,” and we decidedly do not want to take responsibility for these new wars. I was shocked to learn that one of my former art students has become a Drone Pilot. Happy and satisfied in a military career, he is in charge of sorting out the designated target from innocent civilians, and he is convinced that these assassinations save money and lives. Which is the more moral position—sending thousands of men and women off to die, or quietly killing the “terrorists” identified by “intelligence”?

This could well be a question that we will never be asked in any formal way. While there are those who question the Drone War, the real Drift is away from taking collective responsibility. War becomes the province of the President, who wages it in secret; we may be told, from time to time, of its casualties. This is the future.

Dr. Jeanne S. M. Willette

The Arts Blogger



“The History of White People”



The History of White People by Nell Painter

We were told that the election of Barack Obama meant that we—America—had transcended into a beatific state called “post-racial.” We were proud of having overcome three centuries of stubborn slavery and an even more intransigent segregation, both of which were based on bogus “racial” “theories.” We proudly and overwhelmingly elected an African American as President of the United States. Once the tears of joy and pride had been wiped away and clear vision was restored, it was shamefully clear that, far from being a phenomenon of the past, racism was alive and well and virulent in the “land of the free” and the “home of the brave.” The years-long assault on the legitimacy of a Presidency, the determined and unrelenting effort to force “failure” upon not only one man—because he was black—but upon the general population, has spanned a gamut of accusations: “Muslim,” “Kenyan,” “Socialist,” “In over his Head,” “Ineffectual,” even “Monster,” and so on. Make no mistake, racism is hiding behind each and every one of these words.

Whatever words are used, they all add up to one word, “black,” which is the opposite of “white,” and, in the minds of these retrograde racists, there are two words that should never come together: “Black” and “President.” It is important to understand that this (right-wing, Conservative, Tea Party, whatever) attitude that Barack Obama can never be a “legitimate” President is fundamentally different from that of the Democrats, who felt strongly that George W. Bush had not been elected President and had been put in office through a unilateral action on the part of the Supreme Court, but who bowed to the rule of law and lived peacefully (if unhappily) under his Presidency. The complaint of the Democrats was a legal one, while the complaint against Obama is a racist one. Over time, Democrats learned to live with what seemed to them a coup d’état and let the subsequent career of Bush determine his fitness to serve; in contrast, the refusal to accept the very basic fact that Obama is an American citizen (born in Hawaii) continues.

The question is why?

Of course, it is impossible to get inside the psychology of a sizable group of people, but it is possible to get into the history of the culture that created the concept of “whiteness” and the racial dialectic that similarly constructed its polar opposite, “blackness.” Until recently, the very thought of “white” was an absent presence: there but invisible, unspoken but acted upon, reiterated but not acknowledged. “White” as a “race” existed and exerted an unquestioned power, but “white” was not seen. This social “white noise” was embedded in the cultural common consciousness, coming from everywhere and nowhere. The power of “white” rested upon the fact that its source and origin remained both operative and obscured.

Some twenty years ago, “white” came out of the dark and into the light of history, and “whiteness studies” was born. This 2010 book by Nell Irvin Painter is part of these academic attempts to examine “whiteness” or “white” as a concept; in it, she examines how the description of a skin color, “white,” became a loaded term, implying the innate superiority of one skin color over another and, by extension, of one “race” over another. To those who watch The Colbert Report, Painter was the game author who attempted to get it across to Stephen Colbert that “white” was an intellectual construct. Colbert asked a very interesting question, trying to determine whether her book was a straight historical account of the comings and goings of white people. The History of White People is not what its title implies, and the title is probably both ironic and provocative.

With a Ph.D. from Harvard and an academic position at Princeton, Painter, a gifted artist and celebrated historian, took up the task of tracing the history of what I would call the “need to define” “white people.” This self-imposed task separates Painter’s work from the theoretical field of “whiteness studies,” for she has produced a fairly straightforward account in which she traces the formation of a discourse on “white people.” It is only recently that an African American has been in a position to write about white people. Or, to put it another way, white people have written a great deal about black people, but society and culture prevented the objects of this whitened scrutiny from writing back. The sheer fact that Painter is black gives the title an extra punch, mitigated by her easy and congenial manner: she comes in peace, not in condemnation. As Painter explains on the first page,

I might have entitled this book Constructions of White Americans from Antiquity to the Present, because it explores a concept that lies within a history of events. I have chosen this strategy because race is an idea, not a fact, and its questions demand answers from the conceptual rather than the factual realm.

One of the oddities of “white people” is that, unlike “German people” or “British people,” there is a paucity of literature devoted to defining “white people.” This scarcity is particularly notable when compared to the enormous amount of time, energy, and ink spent on defining “black people.” There is, in fact, an excess, a surplus, an overflow of writing on “black,” giving the relative silence on “white” the kind of power only wielded by withholding. Withholding “white” not only gave “white” a powerful potency but also created an assumption of what white meant, a blankness that allowed “white” to be over/written by whatever qualities the culture desired. In other words, “Blackness” was defined from the position of “Whiteness,” the vantage point of power and privilege, which claimed an inalienable right to Represent. This is power indeed.

Painter’s book in interesting because of the way in which she lays out her argument that the “history” of “white people” is a discourse devised for socio-economic purposes dedicated to the maintenance of domination. First, she tracks down the basis of the word “Caucasian” and then linked the term to “white” which is then linked to “beauty” which was then connected to “intelligence, leading to the logic of superiority. Second, she establishes how the historical connection between color, “black” with bondage and “white” with free, and slavery was made. The importance of taking these two steps or of establishing these two separate discourses, is that the discourse of racial superiority and the discourse of slavery are separable. Painter has to separate the concepts because, once slavery was abolished, the discourse of racial superiority could live on unchanged. Slavery is easy to outlaw; the concept of one race being “superior ” to another is an idea and cannot be abolished.

Slavery can die but racism can live on.

How Racism began, without “Race”

Nell Painter begins her journey into understanding how two neutral words, “white” and “people,” became conjoined with ancient Greece, supposedly the “cradle of Western civilization.” The ancient Greeks had no concept of “race” and differentiated among the peoples they came into contact with in terms of place or locale. Historians divided various tribal groups in accordance with physical and social distinctions due to climate or terrain. But there was one group beyond their empirical reach, the mysterious and legendary inhabitants of the region the Greeks called the “Caucasus.” Here was the land of myth. As Painter laconically describes it, this modern territory,

…is a geographically and ethnically complex area lying between the Black and Caspian Seas and flanked north and south by two ranges of the Caucasus Mountains. The northern Caucasus range forms a natural border with Russia; the southern, lesser Caucasus physically separates the area from Turkey and Iran. The Republic of Georgia lies between the disputed region of the Caucasus, Turkey, Armenia, Iran, and Azerbaijan.

Today this region is still remote and isolated, only occasionally breached by modernity, but through a rather arbitrary historical accident, “white people” have been named “Caucasians.” Like the Greeks, the Romans had no concept of “race,” but the contribution of the Romans to racial thinking was both considerable and accidental. It was the Romans who, in search of Empire, classified most of the inhabitants of Europe. The Romans were interested in the “civilization,” or cultural traits, of the non-Romans compared to the Empire builders. “For Roman purposes,” Painter writes, “politics and warfare defined ethnic identities.” Painter points out that it was Julius Caesar who gave many of the names we know and use today, from “Gaul” to “Germania,” to the peoples he encountered. In discussing the differences among these scattered and disparate tribes, Caesar was assessing their relative battle-worthiness and determining how he would subdue them.

The Romans, as Empire builders, were imperially promiscuous, the better to blend the subjugated peoples with the conquerors. The result was centuries of intermixing and intermarriage, producing a hybrid culture that some say diluted the social foundation of the Romans and gradually eroded the Empire. In contrast, the Germans, or Germanic tribes, were resistant to the benefits of Empire and hostile to outsiders. In the early years, during the time of Caesar, when the Romans were striving to understand their northern neighbors, important differences were imagined. As Painter says,

How could eminent citizens of this great empire squeeze out admiration for the dirty, bellicose, and funny-looking barbarians to the north? The answer lies in notions of masculinity circulating among a nobility based on military conquest. According to this ideology, peace brings weakness; peace saps virility. The wildness of the Germani recalls a young manhood lost to the Roman empire. Caesar headed a train of civilized male observers—with Tacitus among the most famous—contrasting the hard with the soft, the strong and the weak, the peaceful and the warlike, all to the detriment of the civilized, dismissed as effeminate. As we see, the seeds of this stereotype—a contrast between civilized French and barbarian Germans—lie in the work of ancient writers, themselves uneasy about the manhood costs of peacetime.

The Greeks imagined the Caucasians and the Romans imagined the Germans, and these ancient mythologies would link “whiteness” to “masculinity”—or, to put it another way, there would be a link between purity and resistance as opposed to hybridity and femininity. The Gauls submitted to the Romans and permitted the interpenetration of tribal cultures, while the Germans remained “uncivilized” and aloof, withdrawing behind the Rhine, where they remained unmolested. The notion of “Teutonic purity” would be revived later. After the fall of the Roman Empire, Painter relates, “white people” were linked to the barbaric tribes of the British Isles, another resistant group divided by Hadrian’s Wall. The Anglo-Saxons, like the many tribes of the Roman Empire, were an amalgamation of the conquered and the conquerors—a hybrid mixture of Viking/Scandinavian tribes that invaded the island and settled.

It is interesting that the ethnic groups that gave the Romans the most resistance, the tribes in Great Britain and Germany, were the ones who became linked to “white people.” However, another element had to be added before the concept of “white” could come into existence. As stated, the concept of race is a very modern one, and it was linked to the final ingredient: “black” and slave. Until the sixteenth century, slaves were of all colors. In fact, as Painter points out, the word “slave” comes from “Slav,” the Slavs of eastern Europe who, as the result of the labor shortage after the Black Death, were caught up in a lively slave trade. “Slave” and the “black” “race” were not paired until the need for workers on the sugar plantations of the Caribbean encouraged the European colonizers to depend upon Africans.

The Confluence of “Black” and “Slave”

The Spanish eradicated the indigenous population of the Caribbean in a couple of generations, and the English settlers of the North American continent also found that it was difficult to enslave people—the Native Americans—in their own territory. Africans, seized and stolen from their homes, arrived in America dazed and disenfranchised, far removed from their own cultures with no hope of returning, and made good slaves: strong and healthy, confused and divided by dialects and languages. Unlike the Native Americans, the Africans had nowhere to run and no place to hide. However, the idea of “slavery” as lifelong servitude took decades to affix itself to Africans only. Other books have outlined the process by which white indentured laborers and black indentured laborers were socially and legally separated from each other, leaving the white person “free” and the black person “enslaved,” but Painter’s foundational focus is the eighteenth and nineteenth centuries, because it was in this period of “enlightenment” that the slavery of black people had to be justified.

Although she later outlines how the “peculiar institution” of slavery in America was developed, Painter unexpectedly begins by linking “white” and “beauty” through

the eighteenth-century science of race developed in Europe, influential scholars referred to two kinds of slavery in their anthropological works. Nearly always those associated with brute labor—Africans and Tartars primarily—emerged as ugly, while the luxury slaves, those valued for sex and gendered as female—the Circassians, Georgians, and Caucasians of the Black Sea region—came to figure as epitomes of human beauty.

The profitability of slavery, regardless of color, throughout the eighteenth century would stifle any moral qualms about holding humans in bondage for two centuries. However, Painter emphasizes a clear and present subtext in the racialized discourse: the practice of dividing people in terms of physical appearance—beautiful and ugly—based on the Greek ideal (as filtered through Roman art), laced with sexual fantasies, stimulated by both heterosexual and homosexual desire. Beauty, for men and women, was Greek and was attributed to certain kinds of features deemed unique to Europeans (white people) as opposed to Africans, Asians, or Slavs. Tall, slim bodies, pale skin, straight hair, and straight noses were the favored elements—not just Greek features based on marble statues, but also diametrically and conveniently opposed to dark-skinned, flat-nosed, coarse-haired Africans and Asians.

Painter does a nice job of presenting a number of intellectual, philosophical, and scientific ideas put forward in the eighteenth century (and in the two subsequent centuries) concerning the measurement of skulls and the angle of facial profiles. For the reader not conversant with these endeavors, the author presents a brisk summation across a series of chapters. The underlying reason for this growing discourse on “difference” is, of course, linked to the rise of Empires. The imperial adventures of European nations, coupled with the enormously profitable enterprise of slavery, coincided inconveniently with the Enlightenment and its rational doctrines of equality. We can assume that the serious manner in which the Europeans blinded themselves with (pseudo) science to account for their unwillingness to allow the logic of Enlightenment thought to play itself out was a defensive measure.

In America the need to distinguish “white” from “black” was acute. Europeans were intent on explaining their supposed superiority in terms of beauty, equated with innate intelligence, as the reason for colonizing and exploiting the rest of the known world. Unlike the Americans, the Europeans did not keep slaves at home, nor did they depend upon a slave economy. The American South maintained an agricultural feudal economy while the Europeans built an international mercantile economy. But in a small and new nation, the South was not only anachronistic but also powerful. Its leaders were slaveholders reluctant to give up their incomes to square their words of freedom with their deeds of slavery. When the Americans gained their independence, they did so by denying the majority of the nation’s inhabitants, women and slaves, basic rights. As the English writer Samuel Johnson caustically asked, “How is it that we hear the loudest yelps for liberty among the drivers of negroes?”

Painter places Thomas Jefferson, slave owner, lover of a slave, father of slaves, at the center of the American thinking on the significance of “Anglo-Saxon” heritage. She writes,

To Jefferson, whatever genius for liberty Dark Age Saxons had bequeathed the English somehow thrived on English soil but died in Germany…In 1798 he wrote Essay on the Anglo-Saxon Language, which equates language with biological descent, a confusion then common among philologists. In this essay Jefferson runs together Old English and Middle English, creating a long era of Anglo-Saxon greatness stretching from the sixth century to the thirteenth. With its emphasis on blood purity, this smacks of race talk. Not only had Jefferson’s Saxons remained racially pure during the Roman occupation (there was “little familiar mixture with the native Britons”), but, amazingly, their language had stayed pristine two centuries after the Norman conquest: Anglo Saxon “was the language of all England, properly so called, from the Saxon possession of that country in the sixth century to the time of Henry III in the thirteenth, and was spoken pure and unmixed with any other.” Therefore Anglo-Saxon/Old English deserved study as the basis of American thought. One of Jefferson’s last great achievements, his founding of the University of Virginia in 1818, institutionalized his interest in Anglo-Saxon as the language of American culture, law, and politics. On opening in 1825, it was the only college in the United States to offer instruction in Anglo-Saxon, and Anglo-Saxon was the only course it offered on the English language. Beowulf, naturally, became a staple of instruction.

Jefferson’s obsession with the Anglo-Saxons and their mythical racial “purity” was shared by other Americans who were intent on establishing a cultural distinctiveness for those descended from English ancestors. The subtext was more than an attempt to elevate the “pure” white race above the African slaves; it was also a device used to create social difference, elevating one class of white people above another. The sheer quantity of argument and writing about their racial superiority on the part of white males from all corners of the intelligentsia implies a deep unease with their convoluted reasoning. Every now and then a counterargument was put forward, and a rare black voice was heard. Painter introduces the reader to David Walker, a free man in Boston and a well-known activist who, in 1829, wrote David Walker’s Appeal: in four articles, together with a preamble, to the coloured citizens of the world, but in particular, and very expressly, to those of the United States of America. According to Painter,

Walker’s Appeal spread a wide net, excoriating “whites” and, indeed, “Christian America” for its inhumanity and hypocrisy. Over the long sweep of immutable racial history, Walker traces two essences. On one side lies black history, beginning with ancient Egyptians (“Africans or coloured people, such as we are”) and encompassing “our brethren the Haytians.” On the other lie white people, cradled in bloody, deceitful ancient Greece. Racial traits within these opposites never change.

Another subtext that Painter locates in the growing American discourse on race is the dilemma of the slaveholders—the moral and psychic damage done to them by owning human beings and being unwilling to let the humans in bondage go free. To the modern reader, the guilt of the Founding Fathers is pure hypocrisy, for these high-minded men did not have the courage to let go of their slaves, the foundation of their wealth and class position. When the Constitution was written, the argument for doing nothing about slavery was that the institution seemed to be becoming less and less profitable. However, the invention of the cotton gin in 1794 by Eli Whitney ended the wistful hope that slavery would collapse of its own weight. Once slavery was profitable again, the discourse of justification intensified.

Slavery as the American Stain

The need to explain why slavery should continue to be a feature of American life would become more pressing as the nineteenth century progressed. European nations gradually outlawed the slave trade, but sharp-eyed observers such as Alexis de Tocqueville realized that the slave culture in the South constituted a moral cancer, a disease in the democratic republic. It is not just slavery, however, that is the embedded flaw; it is racism. Racism made it possible to enslave the black and to push the Native Americans off their lands. In his perceptive Democracy in America (1835), Tocqueville wrote unflatteringly of the Southerners:

“From birth, the southern American is invested with a kind of domestic dictatorship…and the first habit he learns is that of effortless domination…[which turns] the southern American into a haughty, hasty, irascible, violent man, passionate in his desires and irritated by obstacles. But he is easily discouraged if he fails to succeed at his first attempt…The southerner loves grandeur, luxury, reputation, excitement, pleasure, and, above all, idleness; nothing constrains him to work hard for his livelihood and, as he has no work which he has to do, he sleeps his time away, not even attempting anything useful.”

Europeans made fortunes in the vile trade of capturing and selling slaves, but they distanced themselves, not from the profits but from the consequences, by not actually owning Africans. According to Painter, Tocqueville seemed to find it hard to write of the South and its customs, but his friend Gustave de Beaumont examined slavery in America in Marie, or Slavery in the United States, a Picture of American Manners, written in the same year as Tocqueville’s first volume on America. However, Beaumont’s book was not translated into English until 1958 and, tragically, when it was finally published in America, its theme, how “one drop” of “black blood” designated an individual as “black,” was still current. Painter does not point this out, but during World War II the American blood supply for the soldiers was divided between black and white blood.

For Painter, the Civil War and the extended bloodletting over the question of slavery versus the right of a state to permit the owning of human beings is but one part of the question of “race” that, by the nineteenth century, had begun to define American thinking. As she writes,

In a society largely based on African slavery and founded in the era that invented the very idea of race, race as color has always played a prominent role. It has shaped the determination not only of race but also of citizenship, beauty, virtue, and the like. The idea of blackness, if not the actual color of skin, continues to play a leading role in American race thinking. Today’s Americans, bred in the ideology of skin color as racial difference, find it difficult to recognize the historical coexistence of potent American hatreds against people accepted as white, Irish Catholics. But anti-Catholicism has a long and often bloody national history, one that expressed itself in racial language and a violence that we nowadays attach most readily to race-as-color bigotry, when, in fact, religious hatred arrived in Western culture much earlier, lasted much longer, and killed more people. If we fail to connect the dots between class and religion, we lose whole layers of historical meaning. Hatred of black people did not preclude hatred of other white people—those considered different and inferior—and flare-ups of deadly violence against stigmatized whites.

What makes this book remarkable is its demonstration that, long before the Republicans’ so-called “Southern Strategy” of the 1970s, America had already absorbed racial and racist thinking, even before the Civil War. The other value of the book is the sad evidence of how deeply supposedly intelligent and fair-minded people, American and European intellectuals and scientists, were implicated in fashioning a discourse of dehumanization and prejudice. Painter devotes a segment of the book to how various immigrants who were not English struggled to be accepted as “Americans.” Broadly put, in Hegelian terms, the Master/Slave, the One/the Other dialectic became deeply embedded in the American psyche. Although America was a nation of immigrants, from the very start only certain kinds of immigrants were welcome: no Irish, no Italians, no Jews, no Eastern Europeans, no Asians, and so on. In fact the nineteenth century, punctuated by the Civil War, was one long struggle against the Other, whether the Native Americans in the West or the Catholics in the East.

Enlarging “White” through Diminishing Others

Many of the literary architects of the discourse of racism that produced the concept of “white people” created a construct of “whiteness” designed to maintain the privilege of a favored few. Thomas Carlyle, generally well remembered for his efforts to improve the conditions of the working class, stained his record of humanism with vicious writing about the Irish. When Carlyle was writing, the Irish were being deliberately starved out of Ireland, but he was a man without pity. As Painter writes,

Thomas Carlyle (1795–1881), the most influential essayist in Victorian England, held the racial-deficiency view, having fled Ireland’s scenes of destitution in disgust after brief visits in 1846 and 1849. In one cranky article he called Ireland “a human dog kennel.” From his perch in London, Carlyle saw the Irish as a people bred to be dominated and lacking historical agency. He took it for granted that Saxons and Teutons had always monopolized the energy necessary for creative action. Celts and Negroes, in contrast, lacked the vision as well as the spunk needed to add value to the world.

Thomas Carlyle teamed up with the American poet Ralph Waldo Emerson in a rather awkward partnership. Carlyle thought that slavery was a perfectly permissible state for the inferior race, while Emerson was an abolitionist. But both were involved in a mystical enterprise of elevating an imaginary Anglo-Saxon “race” above other “races,” such as the benighted Irish. As other writers have pointed out, the language and terminology developed by the English to defame the Irish served to justify British rule over Ireland. This language of inferiority and bestiality was formed centuries before the African slave trade and stood ready to be deployed against any group considered unworthy. Although, as Painter points out, African Americans such as Frederick Douglass understood the parallels between prejudice against the Irish and prejudice against blacks, the Irish rejected this comparison and fought to be called “white.”

Other respected thinkers, from France’s Ernest Renan to England’s Matthew Arnold, wrote extensively of the wonders of the Celts and the Anglo-Saxons. These writings can be read benignly as an attempt to delineate a national identity for a modern world now obsessed with “difference.” But this strain of thinking was also at heart divisive and, for America, racist. By the middle of the nineteenth century, America was experiencing a tidal wave of immigration, starting with the Irish, followed by the Italians, all of whom were Catholic and all of whom were, therefore, alien to the supposed Anglo-Saxon Protestant Americans. The poetics of a Matthew Arnold and the violent bigotry of the Know Nothing Party are but two sides of the same coin. By mid-century, as Painter writes, “The Anglo-Saxon myth of racial superiority now permeated concepts of race in the United States and virtually throughout the English-speaking world. To be American was to be Saxon.”

The reasons for this painstaking and fictional construct of “Teutonic” and “Anglo-Saxon” superiority seem clear today. Carlyle feared the consequences of democracy, and other writers feared the invasion of the Others. However, the extent to which these supposed “great” men were aware of the contradiction between their views of the superiority of “whiteness” and the mercy and love of Christianity and the promise of equality and democracy is unclear. But for the next one hundred years (and beyond), there would be a mountain of writing piling up pseudo-scientific and pseudo-philosophical explanations for why certain peoples should be excluded from the basic rights of human beings and citizens of a free nation. This discourse constructed a fantasy vision of “white people” that became the base for a superstructure of exclusionary laws directed against people who were “not white.”

Clearly the political unconscious of both America and England is an ugly one, but Painter includes an interesting section that links racism not just to beauty but also to sexual desire. Most of the constructors of “whiteness” were middle-class privileged males who may or may not have been latent homosexuals. Painter reads their texts much the way we read Johann Joachim Winckelmann’s writings on Greek art (known through Roman copies) and finds an underlying current of, shall we say, intense admiration for the (male) beauty of the Teutonic ideal. These descriptions of the beautiful white male—fair skin, blue eyes, blond hair, tall thin frame—linger on today and are seen in Abercrombie & Fitch and Ralph Lauren advertising. In this discourse, “white people” are gendered male and “beauty” is linked to the idea of “white.”

With the dubious intellectual weight behind the notion of the inherent and innate superiority of “white people” came the construction of the “Aryan” idea, which was so powerful that art history still includes ancient Egyptian culture as “Western,” despite the fact that the Egyptians were Africans and black. The romantic idea of Aryan and white continued to be supported well into the twentieth century, as, after the Civil War in America, “whiteness” was linked to enfranchisement and the power to vote. Even though black men were given the “right” to vote in 1870, full voting rights for non-whites took a hundred years to come about, and the hard-fought right to cast a ballot remains under threat today.

Aryan Supremacy Through Eugenics

One of the great services of Painter’s book is the parade of scholars and scientists who wrote of the wonders of being Aryan and Anglo-Saxon and who did studies of the human skull in order to “prove” racial superiority. Today, these men are obscure, known only to specialists such as Painter, but in their time, as she stresses, they were respected and celebrated. What is remarkable is not only how forgotten these architects of racism are today but also, paradoxically, how completely their discourses penetrated the American collective consciousness. Reading of one after another of these supposed intellectuals is simply depressing. Decades after slavery was abolished, the writings kept coming, their perpetrators festooned with honors and crowned with laurels, halted only in the face of the Nazis.

Not that the proponents of Aryan superiority would be entirely silenced by the horrors of the Holocaust, but the doctrine of racial superiority would, at long last, lose its luster. The nation therefore owes a great deal to the occasional brave white dissenter, like the anthropologist Franz Boas, who joined with the Black intellectual W. E. B. Du Bois to fight racism and anti-Semitism in the decades before the Second World War. As Painter points out,

During the late nineteenth century, poor, dark-skinned people often fell victim to bloodthirsty attack, with lynching only the worst of it. Against a backdrop of rampant white supremacy, shrill Anglo-Saxonism, and flagrant abuse of non-Anglo-Saxon workers, Boas appears amazingly brave. It mattered little in those times that lynching remained outside the law. More than twelve hundred men and women of all races were lynched in the 1890s while authorities looked the other way. Within the law, state and local statutes mandating racial segregation actually expelled people of color from the public realm.

The voices, such as that of Boas, who spoke out against the bogus assertion of “race” were shouting into a headwind of rhetoric. American histories rarely stress the articulate racism of Presidents Theodore Roosevelt and Woodrow Wilson. But early in the twentieth century, instead of leading the population into a new century with new thinking, they vigorously extended the creed of “Anglo-Saxonism.” The trend toward the “Teutonic” abated somewhat during the Great War, in which the Germans, the Teutons, were the enemy. One of the outgrowths of the elevation of “white people” was the attendant fear of “race suicide” due to the threat of intermarriage between the Anglo-Saxons and “inferior” whites. In the Northern states, these rants were provoked by the continued influx of immigrants who were said to be diluting the essence of the “white” race; in the South, the fear of interracial mixing drove many localities to force the sterilization of those deemed unfit to breed.

The “science” of eugenics, which would become the driving force behind the Nazi extermination of the Jews, gypsies, and other “undesirables,” was, like many of the racist theories in America, largely the brainchild of New Englanders. Cradled and supported by the most eminent universities in the nation, these writers drove a discourse of exclusion and elimination of the wrong kind of blood or heredity. It was taken as an article of faith that “inferiority” was hereditary, with no consideration of the environmental factors that may have caused generational poverty. On one hand, scholarship was turned into public policy that prevented equal opportunity; on the other hand, those who were impacted were then declared inferior, thanks to generations of bigots who kept the Irish, the Italians, the Chinese, the African Americans, and so on, from achieving.

The solution to the socially engineered underachievement of the poor and disadvantaged was forced sterilization, which the Supreme Court found constitutional by an 8–1 majority. Virginia led the way in 1924, and other states followed for a decade (with California coming in as the second largest sterilizing state), until they were shamed when the Nazis took up identical policies of sterilizing the poor and those who might inherit a tendency toward criminality. Even so, forced sterilization continued until the Civil Rights Movement of the 1960s. Eugenics was directly linked to the argument of the inherited superiority of “white people,” taking the assumption of birthrights and privileges and turning it against those who had inherited poverty.

If one inherited one’s low economic status, then one also inherited low intelligence. As with sterilization, technology and pseudo-science were put in the service of white supremacists. Blithely unaware of the impact of environment upon intelligence and of the inherent biases in the so-called “intelligence” tests designed by Alfred Binet and Theodore Simon, the anti-immigrant “nativists” used yet another measure to disenfranchise and marginalize the less white whites. The bundle of frantic efforts to maintain domination, with tactics including Jim Crow laws, forced sterilization, intelligence testing, and restricted immigration, was all based on the supposed superiority of the very white European stocks. Then these superior beings, in all their shining whiteness, descended into the mad savagery of the Great War.

White Public Policy

The awkwardness of seeing “white people” acting badly did not deter decades of twentieth-century effort to delegitimize people of color, Catholics, and Jews. These bigoted beliefs, mainstreamed and popularized through mass media, had become widely held. Fortunately, they were forced underground and muted by mid-century. As Painter explains,

After its heyday among race theorists in the 1910s and 1920s, Anglo-Saxonism declined during the Great Depression and the Second World War. A new generation of social scientists had outgrown such blather on race. Now scholars were questioning the very meanings of any and all concepts of race and studying the troubling fact of racial prejudice. Ruth Benedict, along with Franz Boas and their like, were beginning to carry the day…The change from 1920s hysteria to 1940s cultural pluralism occurred simultaneously in politics and in culture.

After the Second World War, racism based on the ideal of “white” beauty continued under other guises, despite the fact that the idea of a scientific entity called “race” was being debunked. During the final decades of the twentieth century, the idea of “white people” was less intellectualized and more politicized. The effort to assert the superiority of whites was no longer respectable in academia, but the effort to deny African Americans the benefits of the New Deal, the G. I. Bill, and even basic civil rights continued in public policy through a maze of laws and customs. In addition to being pushed to the margins, people of color were trained through mass media to “look white.” As Painter writes,

Much nose bobbing, hair straightening, and bleaching ensued. Anglo-Saxon ideals fell particularly hard on women and girls, for the strength and assertion of working-class women of the immigrant generations were out of place in middle-class femininity. Not only was the tall, slim Anglo-Saxon body preeminent, the body must look middle- rather than working-class.

People of color or different “ethnic” types were forced by real estate laws and municipal zoning to live in ghettos and barrios, where they were invisible. The race presented by mass media as “American” was pure white; people of color were rare and on the fringes in movies, and many mainstream magazines, from news magazines to fashion magazines, refused to print photographs of people of color. But the Civil Rights Movement countered the myth of white cultural and physical superiority by challenging white people on moral grounds. Painter quotes Malcolm X,

“When I say the white man is a devil, I speak with the authority of history…. The record of history shows that the white man, as a people, have never done good…. He stole our fathers and mothers from their culture of silk and satins and brought them to this land in the belly of a ship…. He has kept us in chains ever since we have been here…. Now this blue-eyed devil’s time has about run out.”

While “white people” were frightened to hear the frank assessment of a certain portion of the African American public, that same public would be alarmed at the writings of an anti-melting-pot white supremacist who, as quoted by Painter, had

taught at Stanford University and the experimental college of the State University of New York at Old Westbury. In The Rise of the Unmeltable Ethnics, perfectly suited to the times, Novak concentrates on those unmeltable “PIGS,” Poles, Italians, Greeks, and Slavs, in their view so long reviled: “The liberals always have despised us. We’ve got these mostly little jobs, and we drink beer and, my God, we bowl and watch television and we don’t read. It’s goddamn vicious snobbery. We’re sick of all these phoney integrated TV commercials with these upper-class Negroes. We know they’re phoney.”

These words, complete with misspelling, from Michael Novak’s Rise of the Unmeltable Ethnics (1972) would today be termed “hate speech.” At the time, they were the leading edge of the Nixon “Southern Strategy,” a fancy term for thinly disguised racism. Here and there a few lone voices among white Southerners were raised to reveal the inherent lack of “ethics” in the system of racial segregation, based upon the fiction of “white supremacy.” Painter presents

Lillian Smith (1897–1966), a white southern essayist, novelist, and (with her lifetime partner Paula Schnelling) operator of a fancy summer camp for girls, powerfully described her South in Killers of the Dream (1949 and 1961). The book pilloried southern culture as pathological and white supremacist southerners as caught in a spiral of sex, sin, and segregation. Here was a book of wide influence that portrayed whiteness as morally diseased.

It would take science, real science this time, to put to rest the notion that there was “race.” There are only human beings, whose skin colors and facial features have evolved in response to environment. Painter quotes

the words of J. Craig Venter, then head of Celera Genomics, “Race is a social concept, not a scientific one. We all evolved in the last 100,000 years from the same small number of tribes that migrated out of Africa and colonized the world.” Each person shares 99.99 percent of the genetic material of every other human being. In terms of variation, people from the same race can be more different than people from different races. And in the genetic sense, all people—and all Americans—are African descended.

Painter’s book is divided into a series of what she terms “Enlargements of Whiteness.” The “First Enlargement” is in fact the formation of the post-Enlightenment discourse on “white people” by writers of the eighteenth and early nineteenth centuries. The “Second Enlargement” builds on these beginnings and uses the ideas of racial superiority to expand political rights for one group of white people, the males, while excluding other groups, people of color and women. The “Third Enlargement” expanded these privileges after the Second World War by showering benefits upon white males and excluding equally deserving people of color and women from the great government “thank you” stimulus that created the male middle class. The “Fourth Enlargement” is the struggle of women and people of color to enter fully into the American Dream.
People of color made inroads during the post-war period because the discourse that defined “white people” was doubly discredited. First, the Nazis adopted, lock, stock, and barrel, the entire panoply of racist ideologies and used this discourse on “Aryans” and “Anglo-Saxons” and what have you to slaughter millions of human beings. Second, the final stand of the white supremacists during the Civil Rights era was so public and so ugly, and the resulting photographs and television coverage so shaming, that it was impossible to defend the determination to disenfranchise millions of American citizens. But the final coup de grâce came from the genetic studies that proved that all humans share the same genetic makeup and that there is no scientific entity that can be separated out as “white people.”

Painter ends with the hope that perhaps intermarriage and “race mixing” will end the black/white dichotomy, but that is far in the future. In the meantime, as she points out, “Nonetheless, poverty in a dark skin endures as the opposite of whiteness, driven by an age-old social yearning to characterize the poor as permanently other and inherently inferior.” The discourse on “white people” is alive and well, an article of faith for millions of Americans who may or may not be aware of the immoral, unethical, and un-American roots of their ideologies. It is sad to learn from this book that the term “American exceptionalism” is a code for “white people,” for Anglo-Saxon “whiteness.”

When politicians say that “Barack Obama does not believe in American exceptionalism,” they are saying “Barack Obama is black.” Like all the invading immigrants, from the Irish to the Hispanics, he doesn’t “belong”; he is not an “American.” The discourse on “white people” is why there is such a strong belief that the President wasn’t born in America: Obama cannot be an American because he is an African American; he is black. Painter needs to write a sequel to this book focusing on the twenty-first-century salvage operation of this discourse, which continues on the fringes, on hate websites and in political speeches. The discourse of “white people” continues to mar the American Dream.

If you have found this material useful, please give credit to

Dr. Jeanne S. M. Willette and Art History Unstuffed. Thank you.

[email protected]