Out of the crooked timber of humanity, no straight thing was ever made
That’s the title of a piece of mine the Chronicle of Higher Education ran a little while ago. It’s paywalled but they have graciously given me permission to republish it here.
A little while ago, the University of Warwick was in the news for all the wrong reasons. Its longstanding legal firm, SGH Martineau, put up a blog post suggesting that universities should take action against “insubordinate” academics with “outspoken opinions.” The firm stressed the importance of making an example of offenders whose academic work was “brilliant,” lest other employees become tempted to emulate them.
Unfortunately for Warwick, this suggestion was made at precisely the time the university was seeking to remove an insubordinate professor, whose alleged offenses included "sighing" and "irony during job interviews," though it appears his real offense was criticism of the British government's higher-education policy.
The law firm’s post was couched in terms of the possibility of damage to the university’s “brand.” Universities have always been rightly concerned about their reputations. But the conversion in recent years to the language of branding has reached a fever pitch. Of course, in Warwick’s case, both the proposal to muzzle academics and the marketing-speak used to justify it did enough damage to offset, for some time, the efforts of its entire central-administration communications team, which employs almost 30 people, not to mention similar personnel in various schools and departments.
In Australia, Monash University proudly announced this year that it was the first organization in the world to acquire a “brand” top-level domain name—that is, an Internet address ending in “.monash” rather than the previous “monash.edu.au.” This trivial change cost $180,000, plus an annual fee of $25,000, and is part of the university’s expensively maintained “brand identity policy.”
In America, the University of Pennsylvania was an early adopter of this approach. In 2002 the Pennsylvania Gazette celebrated its centenary with a history titled “Building Penn’s Brand.” What might Penn’s most eminent sociologist, Erving Goffman (author of The Presentation of Self in Everyday Life), have made of this adoption of the language of image and “brand”?
Many American universities now have branding policies, and some affirm an unqualified commitment to the associated marketing ideology. The University of Florida, for example, states on its website:
“The importance of having a clear, recognizable brand can never be overstated. It defines us, separates us and communicates our relevance and value. It is especially important in an environment as vast and decentralized as the University of Florida. Thousands of messages leave the university every day, and each represents an opportunity to enhance—or fragment—our image. By maintaining consistent standards, we capitalize on the enormous volume of communications we generate and we present an image to the world of a multifaceted, but unified, institution.”
That statement summarizes all the key points of the ideology of branding. First there is the emphasis on image without any reference to an underlying reality. Second there is the assumption that the university should be viewed as a corporate institution rather than as a community. Third there is the desire to subordinate the efforts of individual scholars in research, extension, and community engagement to the enhancement of the corporate image. And finally there is the emphasis on distinctiveness and separateness. The University of Florida does not want to seem part of a global community of higher education, but rather as a competitor in a crowded marketplace.
Before considering this process further, we need some context. The authority on the history of the corporation and of brands is Alfred D. Chandler, whose books Strategy and Structure: Chapters in the History of the American Industrial Enterprise (MIT Press, 1962), The Visible Hand: The Managerial Revolution in American Business (Harvard University Press, 1977), and Scale and Scope: The Dynamics of Industrial Capitalism (Harvard University Press, 1990) are the definitive studies of the rise of the managerial corporation.
Chandler emphasizes the emergence of packaged and branded goods. Until the late 19th century, products like foodstuffs were sold in bulk by wholesalers, then measured out by retailers to individual customers. At every stage there were opportunities to increase profits by passing off a cheaper alternative for the good being sold. Shopkeepers’ reputations were the primary warranty. In the increasingly urban and mobile environment of the late 19th century, reputation, never a fully effective seal of quality, became even less so.
Branded products provided a solution. Now it was possible for consumers to repeatedly buy the same brand of product at various stores. The brand was a guarantee of consistent quality, not because of the trustworthiness of the corporation (of which the buyers would typically know nothing), but because its value depended on consistency and quality.
Consistency was the more important of the two. A low-quality product, provided that it was consistently adequate and appropriately priced, could benefit just as much from a brand as a higher-quality, more expensive alternative could. Indeed, the wealthy were the last to embrace branded products, instead patronizing bespoke tailors and personal providers of food and other services long after the middle and working classes were used to doing their shopping at Macy’s and A&P.
The great marketing discovery of the 20th century, pioneered by the advertising titan J. Walter Thompson, was that brands could do much more than guarantee a consistent level of objective quality. With the right advertising, a brand could come to embody connotations of all kinds, unrelated to the qualities of the product to which it was attached. Femininity or masculinity, luxury or solid good sense, excitement or security—all of these and more are part of “image.”
A third form of brand value arises when there are strong forces for customer “loyalty,” amounting, in some cases, to “lock-in.” For example, anyone who wants to use computers of designs descended from the IBM PC has little choice but to buy Microsoft operating systems like Windows.
And now we come to what may be the most striking feature of branding in higher education. Universities are corporate bodies, but they predate commercial corporations by many centuries. Long before the advent of packaged and branded goods, universities were certifying the quality of their students through the awarding of degrees.
Many criticisms of corporate branding apply equally to university degrees, and much of the voluminous literature on “credentialism” could be translated into the language of branding. The aim of degrees is, after all, to certify quality in the sense that a student has completed a course of study and acquired the associated knowledge and reasoning skills. And, as with brands that involve monopoly power, many degrees gain value from the fact that they are required for entry to particular professions. On the other hand, and with notable exceptions like the M.B.A., there has been little consistent effort to promote “brand image” to potential employers. Like a 19th-century brand, the degree has, in large part, gained its value from graduates rather than vice versa.
The rise of corporate-style branding has gone hand in hand with the devaluation of degrees through grade inflation. Grades in the A range have become the norm at leading universities. Reports that Princeton might roll back attempts to cap the proportion of A’s at 35 percent cite administrators’ fears that the policy discourages potential applicants and students’ complaints that it hurts their chances of getting jobs, fellowships, or spots in graduate or professional schools.
The “brand value” or “brand equity” of a company can be estimated as the intangible capital, beyond the company’s actual earnings, that may arise, as Chandler suggests, in three ways:
* The company is known to produce goods and services of a higher quality than competitors of similar cost (or similar quality at lower cost). Remember? That’s the 19th-century notion of brand.
* The brand reflects intangible attributes, through advertising, in the minds of consumers. That's the early-20th-century notion.
* A brand’s component products work best together or with those made by partner brands. That’s the late-20th-century lock-in notion.
It is appropriate, therefore, that the world’s most valuable brand is Apple, because it hits the trifecta. It is widely perceived as the highest-quality and most consistently innovative maker of computing devices. Its products carry a cachet of sophistication emphasized by the famous “I’m a Mac … and I’m a PC” ads. And (except for a brief period in the 1990s when Macintosh “clones” were marketed on a small scale) anyone who wants to use Apple operating systems has to buy an Apple device, and vice versa.
How do those concepts apply to universities and, in particular, to undergraduate education, which remains the core business of most of these institutions?
The 19th-century notion of quality is established in the minds of students, parents, and just about everybody else. In fact, it is so well established that rankings of leading universities have barely changed since the hierarchy was established, in the second half of the 19th century. A blog post by the sociologist Kieran Healy on Crooked Timber compares a ranking produced in 1911 with the most recent U.S. News rankings and finds a close correlation (except that elite private universities, as a group, have improved their status relative to state flagships).
In that sense, then, university brands are strong. But brand relativities that endure regardless of the competence of university leaders, the vagaries of scholars and departments, and the efforts of marketing departments are not really of much interest.
None of this is to say that there are no differences in quality among those captured by these very stable rankings. At any given time, the quality of departments in any university will vary widely. Some will be making great strides in teaching and research. Others will be riven by internal divisions, or wedded to outdated and discredited approaches to pedagogy and research methodology. But there is no way to discover such things from branding exercises at the university level.
Key branding efforts focus on intangibles. In this respect, university branding has been an embarrassing failure both by the industrial standards of the advertising sector and by the intellectual standards that universities are supposed to uphold. For example, virtually every Australian university has adopted (replacing the Latinate motto that used to adorn its crest) a branding slogan, such as "Know more. Do more." or "Where brilliant begins." Good luck trying to match a particular slogan with its respective university. (Disclosure: I am, perhaps, bitter that my own proposed branding slogan—"UQ, a university not a brand"—did not find favor with my institution's marketing department.)
Finally there is the question of lock-in. A university degree is a required ticket for entry to many professions, and where state-level licensing applies, the range of choices may be limited. At the top end, access to various elite jobs is confined largely to the products of Ivy League and similarly elite institutions. That is a form of lock-in that adds to “brand value,” but in a socially unproductive way.
Branding, as applied to higher education, is nonsense. Colleges are disparate communities of scholars (both teachers and students) whose collective identity is largely a fiction, handy during football season but of little relevance to the actual business of teaching and research. The suggestion that a common letterhead and slogan can “present an image to the world of a multifaceted, but unified, institution” is comforting to university managers but bears no correspondence to reality.
The idea of universities as corporate owners of brands is directly at odds with what John Henry Newman called “the Idea of a University.” To be sure, that idea is the subject of contestation and debate, but in all its forms it embodies the ideal of advancing knowledge through free discussion rather than burnishing the image of a corporation. In the end, brands and universities belong to different worlds.
John Quiggin is a fellow in economics at the University of Queensland, in Australia; a columnist for The Australian Financial Review; a blogger for Crooked Timber; and the author of Zombie Economics: How Dead Ideas Still Walk Among Us (Princeton University Press, 2010).
I’ve an article in the new issue of The National Interest looking at various liberal critiques of Snowden and Greenwald, and finding them wanting. CT readers will have seen some of the arguments in earlier form; I think that they’re stronger when they are joined together (and certainly they should be better written; it’s nice to have the time to write a proper essay). I don’t imagine that the various people whom I take on will be happy, but they shouldn’t be; they’re guilty of some quite wretched writing and thinking. More than anything else, like Corey I’m dismayed at the current low quality of mainstream liberal thinking. A politician wishes for her adversaries to be stupid, that they will make blunders. An intellectual wishes for her adversaries to be brilliant, that they will find the holes in her own arguments and oblige her to remedy them. I aspire towards the latter, not the former, but I’m not getting my wish.

Over the last fifteen months, the columns and op-ed pages of the New York Times and the Washington Post have bulged with the compressed flatulence of commentators intent on dismissing warnings about encroachments on civil liberties. Indeed, in recent months soi-disant liberal intellectuals such as Sean Wilentz, George Packer and Michael Kinsley have employed the Edward Snowden affair to mount a fresh series of attacks. They claim that Snowden, Glenn Greenwald and those associated with them neither respect democracy nor understand political responsibility. These claims rest on willful misreading, quote clipping and the systematic evasion of crucial questions. Yet their problems go deeper than sloppy practice and shoddy logic.
The Union Jack came down in Camp Bastion today, marking the end of the UK’s combat role in Afghanistan and its misconceived campaign in Helmand Province; the campaign with no strategy, less chance of success and a gossamer-thin plan. It has come to a dignified end with a choir of establishment generals (is there any other kind?) and politicians serenely harmonising the nation’s oldest hymns; ‘mistakes were made’, and ‘perhaps we might have done it differently’.
Nineteen billion pounds. Twenty thousand Afghan civilians. Four hundred and fifty-three UK soldiers. More Afghan National Army killed last summer than UK troops throughout the whole war. More poppy seed than ever growing in Helmand, but lots more children in school, too.
Was it worth it? Well if you’ve figured out a workable and not-obscene calculus of human pain and worthwhile profit, let the rest of us know.
I knew one of the four hundred and fifty-three, but only superficially. He was deputed one autumn evening to squire me around the officers’ mess when E was already gone. He made sure I had drinks and was warm enough, saw me into the dining room, flirted chastely back and manfully ignored the younger women. It was like something out of Thackeray. Beautiful manners on the eve of battle.
The other senior wife and I went to his funeral, along with the welfare officer, representing the battalion. The men wouldn’t be home for months. As an Irish woman, I had never expected to be dressed in black, walking slowly through a seated congregation to a reserved pew at the front, next to a coffin with a Union Jack on. The gloves and belt were the hardest to look at. No one cried. Not obviously, anyway.
Later, driving through the gold-tinged dusk of a Wiltshire summer evening, I rounded the corner of B-road to see the flag again, flying in someone’s garden. I had to pull over.
That’s not my flag and never will be. It’s just something someone I slightly knew died for.
The past year has been one of reading long books: Naguib Mahfouz’s Cairo Trilogy, War and Peace and, on the back of the latter, Vasily Grossman’s Life and Fate. I’m still digesting. Is Life and Fate the greatest Russian novel of the past century? I don’t know, and it seems like an invidious question. But great it certainly is. Not so much for the writing — at least in translation Grossman’s prose is, well, prosaic — but for its breadth of vision, its humanism, its psychological insight and for Grossman’s courage in facing up to inconvenient facts about human beings and his own society. Grossman, a Soviet war correspondent alongside his rival Ilya Ehrenburg and a one-time favoured Soviet writer, seems to have imagined the book might be published under Khrushchev, as One Day in the Life of Ivan Denisovich was. What an absurd hope to have had. Life and Fate was “arrested”, the typescript seized by the KGB. But Grossman had made copies, which were smuggled to the West, and the book was finally published in 1980.
At the heart of the novel is Stalingrad, briefly, as he puts it, capital of the world, the focus of a great struggle between two totalitarian powers. Alongside this, and interwoven with it, are the travails of the nuclear physicist Victor Shtrum and his extended family, their dealings with a capricious state, their moral dilemmas and psychological adaptations in the face of its cruelties. In the background lurks the memory of the year 1937, knocks on the door in the night and sentences of ten years “without right of correspondence”, meaning, in actuality, a bullet in the head. Right in front is the destruction of European Jews, massacres and deportations by the invading Germans and the imminence of death. And all the time the question occurs, made vivid by Grossman’s cutting between Soviet POWs in Germany and zeks in Siberia, of whether there is any moral difference between these two regimes.
It is hard to know what Grossman’s answer to that question is, exactly. On the one hand, Stalinism and Hitlerism are members of the same species, with similar methods and organization, and similar cruelties and caprices. Grossman manages to affirm this whilst reserving a special horror for the industrialized destruction of the Jews, though even there the exceptional character of Nazism is put in doubt by the treacherous collaboration of the Ukrainian neighbours of Jewish victims and by Stalin’s own turn to Russian chauvinism and anti-Semitism. On the other hand, there is never any doubt in the novel that it is vitally important that the German invader must be beaten and repulsed, even if that redounds to Stalin’s advantage. One way of reading that commitment to victory over the Germans would be a nationalist one, and indeed there are many deliberate echoes of the patriotic war against Napoleon, but there is also a rejection on Grossman’s part of Russian nationalism (and a horror at its resurfacing) and an identification with a cosmopolitan Soviet identity (including Tatars, Jews, Ukrainians, etc.) malgré tout.
Chance, randomness, and the arbitrary choices of the powerful play a central role in the lives of the poor human beings who are trying to make a life for themselves; they desperately interpret good news as a sign of their own election, and everybody tries to adapt to the facts of power. Shtrum himself, sidelined by a vicious anti-Semitic campaign at the moment of his greatest intellectual success and disowned by his erstwhile “friends”, is suddenly redeemed by an intervention from on high and, gripped by a mixture of fear and relief, signs a letter of denunciation in turn. Krymov, commissar, Old Bolshevik and former Comintern agent, pathetically hanging on to his sense of identity as a communist, falls into the maw of the secret police because his estranged wife repeats a snippet of conversation to her lover, who then drops it into a conversation with an apparatchik. Anything can sink you, it seems. The deserving are imprisoned or die; the undeserving get rewarded.
Life and Fate is a long and rambling thing with many interconnected threads, but if there is a centre to it, it is probably contained in the ramblings of Ikonnikov-Morzh, a Tolstoyan Christian mystic who is imprisoned in a German concentration camp with Mostovskoy, an Old Bolshevik. Liss, the Gestapo interrogator, plays with Mostovskoy, teasing him about the similarities between the two totalitarian systems, and then leaves him with a text written by the Tolstoyan. It is a denunciation of the pursuit of the “good” by political and scientific means as inevitably licensing cruelty and egoistic self-deception. To resist this there is nothing but an ineradicable animal kindness that rebels against such projects. Rousseauvian pitié, I suppose. The novel contains many instances of these minor kindnesses as the hope for something better once the tanks have gone and Stalinism has thawed. An anti-political vision, but a powerful one. If you haven’t read it, it should go on your list.
One of the more recent criticisms I’ve read of Eichmann in Jerusalem—in Bettina Stangneth’s and Deborah Lipstadt’s books—is that far from seeing, or seeing through, Eichmann, Arendt was taken in by his performance on the witness stand. Eichmann the liar, Eichmann the con man, got the better of Arendt the dupe.
For the sake of his defense, the argument goes, Eichmann pretended to be a certain type of Nazi—not a Jew hater but a dutiful if luckless soldier, who wound up, almost by happenstance, shipping millions of Jews to their death.
Arendt heard this defense, and though she never accepted the notion that Eichmann was an obedient soldier (she thought he was a great deal worse than that), she did conclude that Eichmann had “an inability to think, namely, to think from the standpoint of somebody else.” Eichmann was hermetically sealed off from the world, from the perspective of people who weren’t Nazis. Because the “more decisive flaw in Eichmann’s character was his almost total inability ever to look at anything from the other fellow’s point of view,” he “never realized what he was doing.” He knew he was sending Jews to their death; he just didn’t grasp the moral significance of that act, wherein its evil lay, how others, including his victims and their families, might see it.
According to evidence presented by Stangneth and Lipstadt, Eichmann the thoughtless schlemiel was indeed a performance on Eichmann’s part. The truth is that he was a rabid anti-Semite who took initiative and on occasion defied the directives of his superiors in order to make sure even more Jews went to their death; at one point, Lipstadt reports, he even personally challenged Hitler’s order to allow some 40,000 Hungarian Jews to be released for emigration to Palestine via Switzerland.
At every stage of his career, Eichmann knew what he was doing. In power, he did it with zeal; out of power, in the dock, he tried to pretend that he hadn’t, or that if he had, that he had no choice.
Arendt’s vision of the banality of evil, her critics claim, rests upon a failure to see this, the real Eichmann. Eichmann the trickster, Eichmann the con man, rather than Eichmann the thoughtless schlemiel.
As I’ve written before, I think there’s something to this argument about Arendt’s failure to apprehend Eichmann’s performance as a performance. Arendt sometimes, though not nearly as often as her critics claim, did take Eichmann at his word, and it never seems to have occurred to her that he would have had the cunning—and necessary self-awareness—to fashion an image of himself that might prove more palatable to the court.
But if Eichmann was indeed a liar, that, it seems to me, argues in favor of Arendt’s overall thesis of the banality of evil, not against it. Once you work through the implications of Eichmann the liar—as opposed to Eichmann the thoughtless schlemiel—it becomes clear that it is Arendt’s critics, rather than Arendt, who have not only failed to come to terms with his evil, but who also may have, albeit inadvertently, minimized what he actually did.
So let’s work this one through.
# # # # #
To repeat: At the heart of Eichmann’s evil, Arendt believes, was a certain kind of cluelessness about what it was that he did, which was rooted in his inability to see how his actions and statements might appear to another person, particularly someone who had been the victim of his acts. Eichmann might admit, as he did on the stand, that the Holocaust was “one of the greatest crimes in the history of Humanity,” but those were just words. He simply did not grasp the meaning of what he did. Or said.
Arendt offers plentiful evidence for this claim, some of which cannot be construed as lies on Eichmann’s part. After she writes that Eichmann “never realized what he was doing,” for example, she says:
But it was when he was on the witness stand that Eichmann truly proved himself a thoughtless man. For when Eichmann presented himself in what he clearly thought was an exculpatory light he only wound up indicting himself even further. This, for Arendt, was the horror—and comedy—of the man.
Eichmann thought he was offering himself up (whether sincerely or not) to the court as a more palatable specimen, not realizing: first, that given what he did (and admitted to having done), there was nothing he could do or say that would redeem him; and, second, that the exculpatory examples he offered were only further confirmation of his evil.
Arendt writes, for example:
Let’s assume for the sake of the argument, however, that Arendt’s critics are wrong, that she was not taken in by Eichmann and that she had him, at least here, pegged right. Any reader of this passage can see that her point is not that Eichmann was humane but that he was morally and politically—and ultimately intellectually (though not psychologically)—deranged. That he could willingly participate in a plan to exterminate millions—something he admitted to on the stand, Arendt reminds us—but think that his crimes were mitigated by the fact that he neither caused people unnecessary pain nor ever laid a hand on a poor Jewish boy and in fact was genuinely outraged by any sign of cruelty by the SS: that for Arendt was a sign of his failure to recognize the enormity of his crime, to truly understand what he had done.
Now let’s assume for the sake of the argument that Arendt’s critics are right, that she was in fact taken in by him and that this was all a big lie for the witness stand. It doesn’t change her point at all; in fact, it only strengthens it. That Eichmann could willingly participate in a plan to exterminate millions but nevertheless think that the court would somehow conclude he wasn’t so bad because he didn’t cause people unnecessary pain nor ever lay a hand on a poor Jewish boy—and then, on the basis of that lunatic assumption, deceive the court in the hope that it might get him off or get him a lighter sentence: that too should be taken as a sign of his failure to recognize the enormity of his crime, to truly understand what he had done. For who but Eichmann could possibly believe that that mitigated his crime in any way?
Whether Eichmann believed what he said or was lying to save his ass, his failure to think—the banality of his evil—is demonstrated by the fact that he assumed there might be something he could do or say that would get him off the hook. Even at the moment when he was facing his own death, he couldn’t imagine the enormity of his crimes, how they would appear to others.
At the heart of Arendt’s assessment, then, is the idea that once Eichmann set down the path of mass murder of the Jews, nothing he did or didn’t do, nothing he said or didn’t say, could change, alter, soften, or otherwise mitigate that fact. It was that enormous. To think otherwise was not to understand the enormity of the crime.
One can cite other examples from Eichmann in Jerusalem. Like this one:
Arendt did not believe that this kind of cluelessness was peculiar to Eichmann; it was rife throughout the Nazi high command.
# # # # #
Once we realize how little of Arendt’s banality thesis hinges upon whether Eichmann was a liar or a believer of his own bullshit, we begin to see that there is something peculiar about the claim that Arendt was taken in by Eichmann.
As a simple empirical observation, the claim is perfectly plausible and unobjectionable, and indeed, as I’ve already said, can shed some interesting light on Arendt’s other ideas about performance and lying.
But Arendt’s critics want to use Eichmann the liar as a cudgel: not against Arendt in error (most philosophers make errors) or even against Arendt the dupe. No, they want to make Arendt into, if not an abettor of or apologist for evil, then at least an evader or minimizer of evil, who denies the wickedness of the Holocaust by insisting on the banality of one of its perpetrators.
Richard Wolin makes the point simply and directly:
Lipstadt is more balanced and circumspect in her final judgment of Arendt, but she too ventures into some strange territory.
Lipstadt begins with a claim about Arendt and Eichmann in Jerusalem that, on its own terms, is straightforward enough:
Nor, however, can one dismiss the way in which she so seamlessly elided the ideology that was at the heart of this genocide. She related a version of the Holocaust in which anti-Semitism played a decidedly minor role.
But for Lipstadt and other critics, they are. For Arendt’s refusal to see Eichmann’s anti-Semitism is part and parcel of her fraternization with, even indulgence of, the anti-Semitism of her friends and lovers.
Hovering around the edges of these statements is the suggestion that Eichmann in Jerusalem enabled a genteel anti-Semitism—liberating the long suppressed feelings of Arendt’s goyish friends—and trafficked in its far more malignant forms, channeling the spirit of the Nazi Heidegger and mirroring the thoughtlessness of the Nazi Eichmann. In other words, sleeping with the enemy.
# # # # #
There’s no question that Arendt herself believed that the Nazis had committed a crime of massive proportion and that Eichmann had a major, if overstated, hand in that crime. And unlike Gershom Scholem, Martin Buber, and a great many others in Israel and elsewhere, Arendt had no doubt that Eichmann ought to hang for his deeds (even Ben-Gurion, Lipstadt claims, had momentary doubts about that). Even if Arendt underplayed Eichmann’s anti-Semitism, even if she got his banality wrong, she was absolutely clear that he had helped perpetrate one of the greatest mass murders in history, that he was a moral catastrophe of the highest order, and that he should hang for his crimes. None of these final judgments of hers was dependent on her assessment of his anti-Semitism or banality. For Arendt, that he was a mass murderer and an ethical catastrophe was enough for him to hang.
So why all the high dudgeon of her critics? Why this operatic suggestion from them that by minimizing his anti-Semitism and insisting on his banality Arendt was somehow letting Eichmann off the hook? It’s almost as if, to these critics, sending millions of Jews to their death, and being a moral catastrophe, is not in fact enough. Certainly not enough for Eichmann to hang.
The reaction of Arendt’s critics makes me wonder whether Eichmann the liar might not have had a point, whether there might not have been a method to his madness on the stand. His gamble on the stand was that if the court could see how little he enjoyed his work, how little taste for blood he actually had, how upright he was in the execution of his duties, they’d let him off the hook.
Whether this was a strategy or the truth wouldn’t have made a difference to Arendt. In either case, she would have concluded, he was guilty of mass murder; in either case he was a moral catastrophe; in either case, he was banal; in either case he should hang; in either case he was evil. But maybe what her critics are saying is: if he was a mass murderer and banal, if he was a mass murderer and not anti-Semitic, then somehow his crimes really would be less. As Wolin says, no banality, no evil.
At Passover, we sing a song called Dayenu. Dayenu means “it would have been enough,” it would have been sufficient, it would have sufficed. We sing it in honor of all the things God did for us, as Jews, in the Exodus and after that. After we cite each one of these things God did for us, we say, Dayenu, it would have been enough. The cumulative force of the song is that just one of these things would have been enough, but God did so much more. Had God only led us out of Egypt, it would have been enough. But God also led us across the Red Sea. And had God only led us across the Red Sea, it would have been enough. But God also drowned our enemies there. And had God not only drowned our enemies there…you get the picture.
It seems as if, for Arendt’s critics, there’s a kind of reverse Dayenu at work. Their Passover canon goes like this: Had Eichmann only been a mass murderer, it would not have been enough. Had Eichmann only been a mass murderer who was also an ethical catastrophe, it would not have been enough. Had Eichmann only been a mass murderer who was also an ethical catastrophe and would have been hanged for his deeds, it would not have been…you get the picture.
Here’s an assorted list of things that once seemed archetypally American, but have pretty much reached the end of the line. More precisely, there are no new ones, or hardly any, and the existing examples look increasingly down at heel.
Feel free to discuss, deny, add to the list and so on.
I used to think that David Brooks deserved some sort of George Orwell ‘best bad modern writing’ award for a phrase in his old attack on Markos Moulitsas Zuniga.
The Keyboard Kingpin, aka Markos Moulitsas Zuniga, sits at his computer, fires up his Web site, Daily Kos, and commands his followers, who come across like squadrons of rabid lambs, to unleash their venom on those who stand in the way.
It’s hard to beat squadrons of venom-unleashing rabid command-lambs. But then, when doing some background reading for class in re: Rand Paul’s foreign policy speech, I came across this plea from Josef Joffe:
who will save the American posterior once the chickens of aloofness come home to roost?
Who? Who indeed?
I envision America so:
I’m sure that there’s a lot of other policy writing with terrible metaphors out there that I’m unaware of. Feel free to provide in comments.
It’s important that you listen to these important songs now, because of their great import. I like the Shirley Bassey song because, when John and I were first married, it was the signature tune of one of our favorite DJs. He had a club night where he played goofy twinkly commercially popular 60s and 70s music, and this was the “get everybody out on the floor” song. The Vaughan Mason & Crew tune (Disco Remix) is a roller disco song thankyouverymuch. Not merely normal-disco. It is so good. SO GOOD. It sounds like some Derrick Carter-ness but it’s from 1979.
So many, the sparkles. All the sparkles. And for years I couldn’t find this song somehow. Violet reminded me just now to search again and—duh there it was! The off-kilter horns make it. I’m glad I could make this significant contribution to our blog.
Gough Whitlam, Prime Minister of Australia from 1972 to 1975, died on Tuesday. More than any other Australian political leader, and as much as any political figure anywhere, Gough Whitlam embodied social democracy in its ascendancy after World War II, its high water mark around 1970 and its defeat by what became known as neoliberalism in the wake of the crises of the 1970s.
Whitlam entered Parliament in 1952, having served in the Royal Australian Air Force during the War, and following a brief but distinguished legal career. Although Labor had already chosen a distinguished lawyer (HV Evatt) as leader, Whitlam’s middle-class professional background was unusual for Labor politicians.
Whitlam marked a clear break with the older generation of Labor politicians in many other respects. He was largely indifferent to the party’s socialist objective (regarding the failure of the Chifley government’s bank nationalisation referendum as having put the issue off the agenda) and actively hostile to the White Australia policy and protectionism, issues with which Labor had long been associated.
On the other hand, he was keen to expand the provision of public services like health and education, complete the welfare state for which previous Labor governments had laid the foundations, and make Australia a fully independent nation rather than being, in Robert Menzies’ words, ‘British to the bootstraps’.
Coupled with this was a desire to expand Labor’s support base beyond the industrial working class and into the expanding middle class. The political necessity of this was undeniable, though it was nonetheless often denied. In 1945, the largest single occupational group in Australia (and an archetypal group of Labor supporters) were railwaymen (there were almost no women in the industry). By the 1970s, the largest occupational group, also becoming the archetypal group of Labor supporters, were schoolteachers.
Whitlam’s political career essentially coincided with the long boom after World War II, and his political outlook was shaped by that boom. The underlying assumption was that the tools of Keynesian fiscal policy and modern central banking were sufficient to stabilize the economy. Meanwhile, technological innovation, largely driven by publicly funded research, would continue to drive economic growth, while allowing for steadily increasing leisure time and greater individual freedom. The mixed economy would allow a substantial, though gradually declining, role for private business, but would not be dominated by the concerns of business.
The central institution of the postwar long boom, the Bretton Woods system of fixed exchange rates, was already on the verge of collapse by the time Whitlam took office in 1972. The proximate cause of its collapse was the inflationary surge that had begun in the late 1960s and reached its peak with the oil price shock of 1973.
So, Whitlam was living on borrowed time from the moment he took office. His ‘crash through or crash’ approach ensured that he achieved more in his first short term of office (eighteen months before being forced to an election by the Senate) than most governments did in a decade. The achievements continued in the government’s second term, but they were overshadowed by retreats and by a collapse into chaos, symbolized by the ‘Loans Affair’, an attempt to circumvent restrictions on foreign borrowing through the use of dodgy Middle Eastern intermediaries.
The dramatic constitutional crisis of November 1975, and the electoral disaster that followed, have overshadowed the fact that, given the economic circumstances, the government was doomed regardless of its performance. The Kirk-Rowling Labour government in New Zealand, also elected in 1972 after a long period of opposition, experienced no particular scandals or avoidable chaos, but suffered a similarly crushing electoral defeat.
Despite his defeat, and repudiation by succeeding leaders of the ALP (and of course his conservative opponents), it is striking to observe how much of Whitlam’s legacy remains intact. Among the obvious examples (not all completed by his government, and some started before 1972, but all driven by him to a large extent)
In all of this Whitlam is emblematic of the social democratic era of the mid-20th century. Despite the resurgence of financialised capitalism, which now saturates the thinking of all mainstream political parties, the achievements of social democracy remain central to our way of life, and politicians who attack those achievements risk disaster even now.
With the failure of the global financial system now evident to all, social democratic parties have found themselves largely unable to respond. We need a renewed movement for a fairer society and a more functional economy. We can only hope for a new Whitlam to lead that movement.
One of the criticisms often made of Hannah Arendt’s account of the Eichmann trial was that she found Eichmann funny. Throughout Eichmann in Jerusalem, Arendt can barely contain her laughter at the inadvertent comedy of the man, which was connected to her claim of his banality and to the ironic tone she adopted throughout the text. Many at the time found her tone flippant and her irony distasteful; since then, her appreciation of Eichmann’s buffoonery has been seen as a sign, to her critics, of her haughty indifference to the suffering he inflicted.
Yet, in reading about the trial, it’s quite clear that Arendt wasn’t the only one who found Eichmann funny. So did the courtroom, which periodically broke out into laughter at the accidental hilarity wafting down from the witness stand. As Deborah Lipstadt reports:
Laughter does not minimize evil; it denies evil the final word.
I’m reading up on the history of party politics. It’s a nice question why Henry Bolingbroke doesn’t get more credit for theorizing the benefits of Two Great Parties. But, now that I’m tucking into his “Dissertation Upon Parties”, I’m starting to get a notion.
Dude did not appreciate that regarding Whigs and Tories as closely related independent causes, yet great, does not make it great to write in closely related, independent clauses. Ahem:
How the notions then in vogue began to change, and this spirit to decline, some time after the Restoration; how the zeal of Churchmen and Dissenters against one another began to soften, and a Court and Country party to form themselves; how faction mingled itself again in the contest, and renewed the former resentments and jealousies; how Whig and Tory arose, the furious offspring of those inauspicious parents roundhead and cavalier; how the proceedings of one party might have thrown us back into a civil war, confusion and anarchy; how the success of the other had like to have entailed tyranny on the state, and popery in Church; how the Revolution did, and could alone, deliver us from the grievances we felt, and from the dangers we feared; how this great event was brought about by a formal departure of each side from the principles objected to them by the other; how this renewal of our constitution, on the principles of liberty, by the most solemn, deliberate, national act, that ever was made, did not only bind at least every one of those, who concurred in any degree to bring it about (and that description includes almost the whole nation); but how absurd it is for any man, who was born since that era, or who, being born before it, hath been bound by no particular, legal tie to any other settlement, to be willing to give up the advantages of the present constitution, any more than he would give up the privileges of the great charter, which was made and ratified so many ages ago; all these points are to be now touched in that summary manner which I have prescribed to myself, and which will be sufficient, in so plain a case, where men are to be reminded of what they know already, rather than to be informed, and to be confirmed, not to be convinced.
Perhaps it is no coincidence that the use of the semicolon, in English, peaks at about the time that the notion of the ‘loyal opposition’ enters the language. For is not an independent clause, separated from the rest of the sentence, yet remaining within it, courtesy of a semicolon, something like a loyal, propositional opposition? And is not the ultimate break-up of the British Empire perhaps a phenomenon to be studied in parallel with, so to speak, post-semicolonial studies?
Some of this work is, admittedly, speculative.
The announcement of the death of David Greenglass has got me thinking a lot about collaborators. Though much of twentieth-century history could not be written without some discussion of collaborators—from Vichy to Stalinism to the Dirty Wars to McCarthyism—the topic hardly gets a mention in the great texts of political theory, Eichmann in Jerusalem being the sole exception.
In my first book on fear, I tried to open a preliminary discussion of the topic. That discussion drew from a wide range of twentieth-century experiences, in Europe, Latin America, the US, and elsewhere, as well as from my reading of Eichmann and Montesquieu’s Persian Letters.
Reading over what I wrote, I’d say I failed. I was so intent on breaking apart the conventional understanding of the collaborator as someone who aids and abets a foreign enemy that I wound up broadening the category too much. So intent was I, also, on breaking apart the three-legged stool of perpetrator-victim-bystander—where was the collaborator in all this, I wondered—that I wound up conflating low-level perpetrators with collaborators; I now think there’s an important difference there.
That said, I thought I’d reprint my discussion here. As I said, political theorists have yet to grapple with the problem of collaboration. Or careerism, which is a related topic. One day, when I’m in my dotage, I’d like to write a book, a kind of political theory of careerism and collaboration. Arendt thought we should take our theoretical cues from actual political experience; political theory was first and foremost an attempt to understand what we are doing. That’s why she wrote books and essays on totalitarianism, revolution, action, and other political phenomena. But when it comes to careerism and collaboration, we have yet to understand what we are doing. So here goes.
• • • • •
By conventional understanding, a collaborator is one who assists an enemy, helping groups to which he does not belong threaten groups to which he does belong. (1) But this definition, it seems to me, is too restrictive. It presumes that a group is a discrete whole, that once in it, we can’t get out of it or have competing affiliations.
Collaborators, however, cannot be so neatly bound. Some do not entirely belong to the group they betray; others, like the French fascists of Vichy, have a deep affinity for the enemy they aid. Informers are perhaps the most common kind of collaborator, but they are notorious chameleons, making it virtually impossible to pin down their affiliations at all.
Knud Wollenberger, an East German dissident who secretly kept the Stasi apprised of his wife’s subversive activities, claims that his collaboration was entirely consistent with his membership in the couple’s oppositional circle. One way to challenge the government, he explains, was “through open dissidence, and the other way [was] through government channels. I was on the inside and the outside at the same time.” (2)
Harvey Matusow joined the American Communist Party in 1947, began informing on it in 1950, recanted his testimony in 1954, and then lied about all three phases of his career in his memoir False Witness, published in 1955. So promiscuous were Matusow’s politics, it is impossible to know what he had been false to, except the truth. The title of another FBI informant’s memoir—I Led Three Lives (as Communist, informer, and “citizen”)—was more apt, suggesting the multiple identities the collaborator regularly assumes. (3)
I don’t wish to carry this notion of multiple affiliations too far. Wollenberger could very well be rationalizing a past of which he is ashamed, and Matusow may simply be the hollow man many at the time suspected him to be. Whether we belong to one group or another in some existential sense, in the course of our lives we do incur moral obligations to our comrades and friends, whom we betray when we aid our opponents.
But to avoid the question of identity that restrictive definitions of collaboration entail, I will use the definition contained in the word’s Latin root collaborare: “to work together.” By collaborator, I simply mean those men and women who work with elites and who occupy the lower tiers of power and make political fear a genuinely civic enterprise.
Collaborators may be low- or mid-level perpetrators; suppliers, like the warehouse in Jedwabne, Poland, which provided the kerosene local residents used in 1941 to burn a barn containing 1,500 Jews, or Ford and General Motors, which funded a Brazilian security outfit that interrogated and tortured leftists; attendants (cooks, secretaries, and other supporting staff); or spies and informers. (4) Though all are not equally compromised by their deeds, each is guilty of complicity.
The collaborator is an elusive figure. With the exception of The Persian Letters and Eichmann in Jerusalem, he seldom makes an appearance in the literature of political fear. One of the reasons for his absence, I suspect, is that he confounds our simple categories of elite and victim. Like the elite, the collaborator takes initiative and receives benefits from his collaboration. Like the victim, he may be threatened with punishment or retribution if he does not cooperate. Many collaborators, in fact, are drawn directly from the ranks of the victims.
Perhaps then we can distinguish between collaborators of aspiration, inspired by a desire for gain, and collaborators of aversion, inspired by a fear of loss. The first are akin to elites, the second to victims. But even that distinction is too neat. Elites also fear loss, and victims hope for gain, and as the economist’s notion of opportunity costs attests, the hope of gain often informs the fear of loss. (5)
Collaborators serve two functions. First, they perform tasks that elites themselves cannot or will not perform. These tasks may be considered beneath the dignity of the elite: cooking, cleaning, or other forms of work. They may require local knowledge—as in the case of informers, who provide information elites cannot access on their own—or specialized skills.
We often think of torturers, for example, as thugs from the dregs of society. But torture is a weapon of knowledge, designed to extract information from the victim, often without leaving a physical trace. The torturer must know the body, how far he can go without killing the victim. Who better to assist or direct the torturer than a doctor? Thus, 70 percent of Uruguayan political prisoners under that country’s military regime claim that a doctor sat in on their torture sessions. (6)
Second, collaborators extend the reach of elites into corners of society that elites lack the manpower to patrol. These collaborators are usually figures of influence within communities targeted by elites. Their status may come from the elite, who elevate them because they are willing to enforce the elite’s directives. (7)
More often, their authority is indigenous. Figures of trust among the victims, they can be relied upon to persuade the victims not to resist, to compound the fear of disobedience the victims already feel.
During its war against leftist guerrillas in the late ’70s and early ’80s, the Salvadoran army worked closely with such indigenous leaders. In 1982, a battalion officer informed Marcos Díaz, owner of the general store in the hamlet of El Mozote and a man with friends in the military, that the army was planning a major offensive in the region. To ensure their safety, the officer explained, the townspeople should remain in the village. Though many in El Mozote thought such advice unsound, Díaz was the local potentate who knew the army’s ways. His voice held sway, the villagers did as they were told, and three days later, some eight hundred of them were dead. (8)
Because their functions are so various, collaborators come in all shapes and sizes. Some travel in or near the orbit of elite power; others are drawn from the lower orders and geographic peripheries.
One common, though unappreciated, influence upon their actions is their ambition. While some collaborators hope to stave off threats to their communities and others are true believers (9), many are careerists, who see in collaboration a path of personal advance. In Brazil, for example, torture was a stepping stone, turning one man into the ambassador to Paraguay and another into a general, while doctors advising the torturers in Uruguay could draw salaries four times as high as those of doctors who did not. (10)
Whether the payment is status, power, or money, collaboration promises to elevate men and women, if only slightly, above the fray. Nazi Germany’s Reserve Police Battalion 101, for example, was a unit of five hundred “ordinary men,” drawn from the lower middle and working classes of Hamburg, who joined the battalion because it got them out of military service on the front. All told, they were responsible for executing 38,000 Polish Jews and deporting some 45,000 others to Treblinka.
Why did they do it?
Not because of any fear of punishment. No one in the 101 faced penalties—certainly not death—for not carrying out their mission. The unit’s commander even informed his men that they could opt out of the killing, which 10 to 15 of them did. Why did the remaining 490 or so stay?
According to Christopher Browning, there were different reasons, including anti-Semitism and peer pressure, but a critical one was their desire for advance. Of those who refused to kill Jews, in fact, the most forthright emphasized their lack of career ambitions. One explained that “it was not particularly important to me to be promoted or otherwise to advance. . . . The company chiefs . . . on the other hand were young men and career policemen who wanted to become something.” Another said, “Because I was not a career policeman and also did not want to become one . . . it was of no consequence that my police career would not prosper.” (11)
Though ambitious collaborators like to believe that they are adepts of realpolitik, walking the hard path of power because it is the wisest course to take, their realism is freighted with ideology. Careerism has its own moralism, serving as an anesthetic against competing moral claims. Particularly in the United States, where ambition is a civic duty and worldly success a prerequisite of citizenship, enlightened anglers of their own interest can easily be convinced that they are doing not only the smart thing, but also the right thing. They happily admit to their careerism because they presume an audience of shared moral sympathy. How else can we understand this comment of director Elia Kazan in response to a colleague’s request that he justify his decision to name names? “All right, I earned over $400,000 last year from theater. But Skouras [the head of Twentieth-Century Fox] says I’ll never make another movie. You’ve spent your money, haven’t you? It’s easy for you. But I’ve got a stake.” (12)
(1) According to Jan Gross, the word “collaboration” first took on this negative connotation—as opposed to the more neutral notion of two parties working together—with the Nazi invasion of France, whereupon it was used to refer to natives of occupied countries who colluded with the Germans. Jan Gross, Neighbors: The Destruction of the Jewish Community in Jedwabne, Poland (Princeton: Princeton University Press, 2001), 5, 205-6.
(2) Tina Rosenberg, The Haunted Land: Facing Europe’s Ghosts After Communism (New York: Vintage, 1995), xiii.
(3) Herbert A. Philbrick, I Led Three Lives: Citizen, ‘Communist,’ Counterspy (New York: Grosset & Dunlap, 1952); Ellen Schrecker, Many Are the Crimes: McCarthyism in America (Boston: Little Brown, 1998), 310-13, 344-349.
(4) Gross, 97-100; Lawrence Weschler, A Miracle, A Universe: Settling Accounts with Torturers (Chicago: University of Chicago Press, 1990, 1998), 44.
(5) Nadezhda Mandelstam, Hope Against Hope (New York: Modern Library, 1970, 1999), 42; Primo Levi, Survival in Auschwitz (New York: Simon and Schuster, 1958), 19.
(6) Weschler, 126.
(7) Levi, 33.
(8) Mark Danner, The Massacre at El Mozote: A Parable of the Cold War (New York: Vintage, 1993), 17, 20, 23, 50, 59.
(9) Yehuda Bauer, Rethinking the Holocaust (New Haven: Yale University Press, 2001), 77-82; Victor Navasky, Naming Names (New York: Penguin, 1980, 1991), 3-69; Gross, 37-40, 60-62, 65, 91, 123-125; Christopher Browning, Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland (New York: HarperCollins, 1992, 1998), 162, 177, 180, 196-200, 202.
(10) Weschler, 76, 127.
(11) Browning, 1-2, 55-77, 169-170; Bauer, 37.
(12) Stefan Kanfer, A Journal of the Plague Years: A Devastating Chronicle of the Blacklist (New York: Atheneum, 1973), 173. Even if Kazan had refused to testify and been penalized by Hollywood, he undoubtedly could have had a thriving career as a Broadway director—a point he affirmed before his death. Bernard Weinraub, “Book Reveals Kazan’s Thoughts on Naming Names,” New York Times (March 4, 1999), E1.
Science is the process through which we derive reliable predictive rules through controlled experimentation. That’s the science that gives us airplanes and flu vaccines and the Internet. But what almost everyone means when he or she says “science” is something different. … Since most people think math and lab coats equal science, people call economics a science, even though almost nothing in economics is actually derived from controlled experiments. Then people get angry at economists when they don’t predict impending financial crises, as if having tenure at a university endowed you with magical powers.
One way of systematically understanding the world is just to watch it and write down what happens. “Today I saw this bird eat this fish.” “This year the harvest was destroyed by frost.” “The Mongols conquered the Sung Dynasty.” And so on. All you really need for this is the ability to write things down. This may sound like a weak, inadequate way of understanding the world, but actually it’s incredibly important and powerful, since it allows you to establish precedents. … A second way of systematically understanding the world is repeated observation. This is where you try to make a large number of observations that are in some way similar or the same, and then use statistics to identify relationships between them. … The first big limitation of empirics is omitted variable bias. You can never be sure you haven’t left out something important. The second is the fact that you’re always measuring correlation, but without a natural experiment, you can’t isolate causation. Still, correlation is an incredibly powerful and important thing to know. … Experiments are just like empirics, except you try to control the observational environment in order to eliminate omitted variables and isolate causality. You don’t always succeed, of course. And even when you do succeed, you may lose external validity – in other words, your experiment might find a causal mechanism that always works in the lab, but is just not that important in the real world.
Mankind are so much the same, in all times and places, that history informs us of nothing new or strange in this particular. Its chief use is only to discover the constant and universal principles of human nature, by showing men in all varieties of circumstances and situations, and furnishing us with materials from which we may form our observations and become acquainted with the regular springs of human action and behaviour. These records of wars, intrigues, factions, and revolutions, are so many collections of experiments, by which the politician or moral philosopher fixes the principles of his science, in the same manner as the physician or natural philosopher becomes acquainted with the nature of plants, minerals, and other external objects, by the experiments which he forms concerning them.
There’s got to be a better way to prep for class. First I read the assigned text, taking notes while I’m reading either in the back of the book or, when space runs out, in a little pocket notebook that I carry. Then I read through those notes, highlighting specific passages or commentary that might be relevant for lecture and discussion. Then I re-type some (hopefully more coherent) version of those highlighted notes in a Word file, organizing them in some kind of thematic fashion or outline. (Sometimes I divide that step up into two: first, I retype all the highlighted notes in a Word file; then I organize those notes into outline form in a new Word file.) Once I have some basic sense of the themes I’ll be talking about and the passages I want to focus on, I prepare my lecture (whether it’s a grad seminar or an undergrad class, I always do some interwoven combination of lecture and discussion). All the while I’m trying to do some secondary reading to help me figure out what the hell is going on in or around the text. There’s got to be a better way to prep for class.
A standard piece of advice to researchers in math-oriented fields aiming to publish a popular book is that every equation reduces the readership by a factor of x (x can range from 2 to 10, depending on who is giving the advice). Thomas Piketty’s Capital has only one equation (or more precisely, inequality), at least only one that anyone notices, but it’s a very important one. Piketty claims that the share of capital owners in national income will tend to rise when the rate of interest r exceeds the rate of growth g. He suggests that this is the normal state, and that the situation prevailing for much of the 20th century, when r was less than g, was an aberration.
I’ve seen lots of discussion of this, much of it confused and/or confusing. So, I want to offer a very simple explanation of Piketty’s point. I’m aware that this may seem glaringly obvious to some readers, and remain opaque to others, but I hope there is a group in between who will benefit.
Suppose that you are a debtor, facing an interest rate r, and that your income grows at a rate g. Initially, think about the case when r=g. For concreteness, suppose you initially owe $400, your annual income is $100 and r=g is 5 per cent. So, your debt to income ratio is 4. Now suppose that your consumption expenditure (that is, expenditure excluding interest and principal repayments) is exactly equal to your income, so you don’t repay any principal and the debt compounds. Then, at the end of the year, you owe $420 (the initial debt + interest) and your income has risen to $105. The debt/income ratio is still 4. It’s easy to see that this will work regardless of the numerical values, provided r=g. To sum it up in words: when the growth rate and the interest rate are equal, and income equals consumption expenditure, the ratio of debt to income will remain stable.
On the other hand, if r>g, the ratio of debt to income can only be kept stable if you consume less than you earn. And conversely if r < g (for example in a situation of unanticipated inflation or booming growth), the debt-income ratio falls automatically provided you don’t consume in excess of your income.
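A minimal sketch of this dynamic in Python (my own illustration, not anything from Piketty's apparatus; the function name and structure are assumptions), using the dollar figures from the example above:

```python
# With consumption equal to income (no principal repaid), the debt-to-income
# ratio is constant when r = g, and drifts upward when r > g.

def debt_income_ratio(debt, income, r, g, years):
    """Compound debt at rate r and grow income at rate g for `years` years."""
    for _ in range(years):
        debt *= 1 + r      # unpaid interest is added to the debt
        income *= 1 + g    # income grows at rate g
    return debt / income

# The example from the text: $400 debt, $100 income, r = g = 5 per cent.
print(debt_income_ratio(400, 100, 0.05, 0.05, 1))   # ratio stays at 4.0

# If r exceeds g, the same borrower's ratio rises over time.
print(debt_income_ratio(400, 100, 0.05, 0.02, 10))  # ratio climbs above 4
```

Running the first call reproduces the post's arithmetic ($420 owed against $105 of income); the second shows the compounding gap that opens up whenever r exceeds g.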
Now think of an economy divided into two groups: capital owners and everyone else (both wage-earners and governments). The debt owed by everyone else is the wealth of the capital owners. If r>g, and if capital owners provide the net savings to allow everyone else to balance income and consumption, then the ratio of the capital stock to (non-capital) income must rise. My reading of Piketty is that, as we shift from the C20 situation of r ≤ g to one in which r>g, the ratio of capital stock to non-capital income is likely to rise from 4 (the value that used to be considered one of the constants of 20th-century economics) to 6 (the value he estimates for the 19th century).
This in turn means that the ratio of capital income to non-capital income must rise, both because the capital stock is getting bigger in relative terms and because the rate of return, r, has increased as we move from r=g to r>g. For example, if the capital-income ratio goes from 4 to 6 and r goes from 2 per cent to 5 per cent, then capital income goes from 8 per cent of non-capital income to 30 per cent. This can only stop if the stock of physical capital becomes so large as to bring r and g back into line (there’s a big dispute about whether and how this will happen, which I’ll leave for another time), or if non-capital owners begin to consume below their income.
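The arithmetic in the previous paragraph can be checked directly. Since capital income equals r times the capital stock, capital income as a share of non-capital income is just r times the capital-to-income ratio (again a sketch of my own, not Piketty's notation):

```python
# Capital income = r * capital stock, so as a share of non-capital income
# it is r times the capital-to-(non-capital-)income ratio.

def capital_income_share(r, capital_income_ratio):
    """Capital income as a fraction of non-capital income."""
    return r * capital_income_ratio

print(capital_income_share(0.02, 4))  # 8 per cent of non-capital income
print(capital_income_share(0.05, 6))  # 30 per cent of non-capital income
```

These are the two figures quoted in the text: the shift from r=2 per cent on a ratio of 4 to r=5 per cent on a ratio of 6 nearly quadruples capital's share.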
There’s a lot more to Piketty than this, and a lot more to argue about, but I hope this is helpful to at least some readers.