“I don’t want a Google car,” I tell her. “I want a train.” … There’s something very odd about a world in which it’s easier to imagine a futuristic technology that doesn’t exist outside of lab tests than to envision expansion of a technology that’s in wide use around the world. How did we reach a state in America where highly speculative technologies, backed by private companies, are seen as a plausible future while routine, ordinary technologies backed by governments are seen as unrealistic and impossible?
… My student Rodrigo Davies has been writing about civic crowdfunding, looking at cases where people join together online and raise money for projects we’d otherwise expect a government to provide. On the one hand, this is an exciting development, allowing neighbors to raise money and turn a vacant lot into a community garden quickly and efficiently. But we’re also starting to see cases where civic crowdfunding challenges services we expect governments to provide, like security. Three comparatively wealthy neighborhoods in Oakland have used crowdfunding to raise money for private security patrols to respond to concerns about crime in their communities. …
… On the one hand, I appreciate the innovation of crowdfunding, and think it’s done remarkable things for some artists and designers. On the other hand, looking towards crowdfunding to solve civic problems seems like a woefully unimaginative solution to an interesting set of problems. It’s the sort of solution we’d expect at a moment where we’ve given up on the ability to influence our government and demand creative, large-scale solutions to pressing problems, where we look to new technologies for solutions or pool our funds to hire someone to do the work we once expected our governments to do.
There’s a lot of good stuff in Colin Crouch’s new book, Making Capitalism Fit for Society (Powells, Amazon), but one point seems particularly relevant today. As umpteen people have pointed out, the rollout of the federal enrollment system for Obamacare has been a disaster. The polymathic David Auerbach has been particularly excellent on this.
The number of players is considerably larger than just front-end architects Development Seed and back-end developers CGI Federal, although the government is saying very little about who’s responsible. The Department of Health and Human Services’ Centers for Medicare and Medicaid Services (CMS), which issued the contracts, is keeping mum, referring reporters to the labyrinthine USASpending.gov for information about contractors. … By digging through GAO reports, however, I’ve picked out a handful of key players. One is Booz Allen … Despite getting $6 million for “Exchange IT integration support,” they now claim that they “did no IT work themselves.” Then there’s CGI Federal, of course, who got the largest set of contracts, worth $88 million, for “FFE information technology and healthcare.gov,” as well as doing nine state exchanges. Their spokesperson’s statement is a model of buck-passing … Quality Software Solutions Inc …[have] been doing health care IT since 1997, and got $55 million for healthcare.gov’s data hub in contracts finalized in January 2012. But then UnitedHealth Group purchased QSSI in September 2012, raising eyebrows about conflicts of interest.
… Development Seed President Eric Gundersen oversaw the part of healthcare.gov that did survive last week: the static front-end Web pages that had nothing to do with the hub. Development Seed was only able to do the work after being hired by contractor Aquilent, who navigated the bureaucracy of government procurement. “If I were to bid on the whole project,” Gundersen told me, “I would need more lawyers and more proposal writers than actual engineers to build the project. Why would I make a company like that?” These convolutions are exactly what prevented the brilliant techies of Obama’s re-election campaign from being involved with the development of healthcare.gov. To get the opportunity to work on arguably the most pivotal website launch in American history, a smart young programmer would have to work for a company mired in bureaucracy and procurement regulations, with a website that looks like it’s from 10 years ago. So much for the efficiency of privatization.
Otherwise put, it’s a good example of Crouch’s critique of neo-liberal efforts to ‘shrink’ government – that in practice it is less about free markets than the handing over of government functions to well connected businesses.
Outsourcing is … justified on the grounds that private firms bring new expertise, but an examination of the expertise base of the main private contractors shows that the same firms keep appearing in different sectors … The expertise of these corporations, their core business, lies in knowing how to win government contracts, not in the substantive knowledge of the services they provide. … This explains how and why they extend across such a sprawl of activities, the only link among which is the government contract-winning process. Typically, these firms will have former politicians and senior civil servants on their boards of directors, and will often be generous funders of political parties. This, too, is part of their core business. It is very difficult to see how ultimate service users gain anything from this kind of managed competition.
As Crouch suggests in an aside, we’ve been here before. The cosy relationship between corporations like CGI Federal and Booz Allen and the government bears a strong resemblance to feudalism (which, stripped of the pageantry, was a complex web of relations and privileges between a small and privileged elite of nobles and the state). It bears an even stronger resemblance to Old Corruption, the strangling web of sinecures and emoluments that radicals like William Cobbett inveighed against in the early nineteenth century. Government – even at the best of times – has many clunky and inefficient features (the American version particularly so – many of the worst inflexibilities of the US government have their origins in people’s distrust of it). Yet the replacement of large swathes of government with a plethora of impenetrable subcontracting relationships is arguably even worse – it has neither the efficiencies (sometimes) achieved by markets, nor the accountability (sometimes) achieved by democratic oversight.
It’s the fiftieth anniversary of the March on Washington and the Kennedy assassination, both in their way notable events in the history of African American civil rights. But it is also the hundredth anniversary of a different, equally notable event: the racial segregation of the US government in 1913 under newly elected president Woodrow Wilson.

Wilson did not, himself, order the segregation of the civil service; rather, it began because his subordinates found in Wilson’s administration an environment conducive to racist innovations. Wilson won office largely as a result of Theodore Roosevelt’s splitting the Republican Party; he garnered fewer votes than the perennial loser William Jennings Bryan had done in his head-to-head contests with Republicans. As a result, the Oval Office now housed a Virginian and states’ rights enthusiast uninterested in, when not hostile to, African American civil rights.
Wilson appointed a fellow southerner, William Gibbs McAdoo, as Secretary of the Treasury and in July 1913, on McAdoo’s authority, the Auditor of the Treasury ordered the establishment of segregated toilets in the department. Other departments followed Treasury’s example, introducing racial separation to dining areas and washrooms.
In bringing Jim Crow to official Washington, the Democrats were importing a practice they had only recently imposed in the southern states. Around 1890, along with laws removing black citizens from voting rolls, southern states began to pass laws to remove black persons to separate railway carriages, leading to a series of statutory experiments with separating the races. Civil rights organizations sought to resist this new tendency.
Wilson agreed to meet a small delegation of black leaders who objected to the policy. Five men led by William Monroe Trotter went to the White House to plead their case. Trotter, like many other African Americans, had supported Wilson in 1912, in part on the ground that black citizens could get more from their government if they showed they were independent voters not always pledged to the party formerly known as Lincoln’s.
Wilson began the meeting by defending segregation. When he had done, Trotter said, “you were heralded as perhaps a second Lincoln,” but the segregation policy would mean blacks returning entirely to the Republicans. Wilson interrupted Trotter, saying politics did not belong in the discussion. Trotter thought they did, and pursued the point. Wilson finally declared, “Never before have I been addressed in such an insulting fashion,” and told Trotter and his colleagues they had to leave.
After being ejected, Trotter told reporters on the White House lawn that the president had claimed – as was the fashion in those days – that segregation was only a device to reduce racial conflict. Trotter pointed out that race had never seemed a problem until the Democrats gained control of the executive branch in 1913.
Suppose Theodore Roosevelt had kept his hat out of the ring in 1912, made peace with Taft, and prevented a split in the Republican Party: racial segregation would not have come to official Washington until much later. If it had to wait, it might have seemed an even more fragile and foolish innovation than it was. We like of course to think that the long arc of the moral universe bends towards justice but incident, and a small group of opportunists, can readily force it back the other way.
I have always been of the view that there’s no real point in getting too outraged about the Nobel Prize for Economics. For one thing – economics is an important subject which is bound to have an important prize, and it’s a good thing that this prize isn’t wholly in the control of the American Economic Association, because if it was, it would be a whole lot worse. For another, on an objective look at the quality of the company which the Economics Nobel is keeping, I don’t think anyone can really claim it’s bringing the average down. The Peace Prize is a notorious joke, of course, but the Literature prize is also wildly eccentric, and even the Physics and Chemistry prizes are occasionally awarded to people who believe in ESP. So let’s stipulate that the Balzan Prize and the Fields Medal are both really really good prizes, and that winning one of them is probably even better than having dinner with the King of Sweden.
So, the Fama/Shiller/Hansen prize, or as the vast majority of comment has it, the prize for “Fama, Shiller and that other guy”. What does it say about the state of economics? I think it encapsulates everything good and bad about the subject. First, the good.
People have really misinterpreted what this prize is about – in particular, anyone who thinks there’s anything at all paradoxical about it being shared between Fama and Shiller is really getting the wrong end of the stick. It’s a prize that’s been awarded for decades of empirical work on the statistical properties of securities prices. Allow me a potted history …
Securities markets are big and important economic things, so it would be a good idea to understand as much as we can about them, even if there wasn’t the tantalizing possibility of making a load of money out of predicting their movements, which there is. They also have the attractive property that, definitionally, all securities transactions have to be recorded and are denominated in cash, so there aren’t the same measurement issues as there are with a lot of other economic phenomena. We also had (via Paul Samuelson, among others) a very intriguing piece of theory, summed up in the title of his 1965 paper, “Properly Anticipated Prices Fluctuate Randomly”. There is a hell of a lot of intellectual debate (and because this is economics, an awful lot of ideological bullshit) related to what conclusions might follow from the fact that market prices properly discount all available information, which I will get onto later. But on the face of it, and bearing in mind that “Properly Anticipated Prices Fluctuate Randomly” does not mean the same as “Randomly Fluctuating Prices Are Properly Anticipated”, the stochastic properties of securities prices, and specifically the question of whether they fluctuate randomly or not, would seem to be quite an important thing to know about.
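Samuelson’s point is easy to illustrate with a toy simulation: if today’s price already incorporates all available news, then price *changes* are just the arrival of new, unforecastable news, and returns should be serially uncorrelated. A minimal sketch on synthetic data (my toy setup, stdlib only – not Samuelson’s actual 1965 model):

```python
import random
import statistics

random.seed(42)

# A price that "properly anticipates" its fundamental is a random walk:
# each increment is an i.i.d. news shock, unforecastable from the past.
n = 10_000
news = [random.gauss(0.0, 1.0) for _ in range(n)]
prices = [100.0]
for shock in news:
    prices.append(prices[-1] + shock)  # price = cumulated news

returns = [p1 - p0 for p0, p1 in zip(prices, prices[1:])]

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    mu = statistics.fmean(xs)
    num = sum((a - mu) * (b - mu) for a, b in zip(xs, xs[1:]))
    den = sum((a - mu) ** 2 for a in xs)
    return num / den

rho = lag1_autocorr(returns)
print(f"lag-1 autocorrelation of returns: {rho:.4f}")
# For a true random walk this is statistically indistinguishable from
# zero; sampling error here is roughly 1/sqrt(n), i.e. about 0.01.
```

This is exactly the kind of test (on real, rather than simulated, returns) that the early random-walk literature ran, and the reason low-power tests of the day struggled: a small true autocorrelation is hard to tell from that sampling noise.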
This is where Eugene Fama came in, round one. Unfortunately, at this point, a piece of misnomenclature occurred, and the “Random Walk Hypothesis” (ie, the hypothesis that securities prices are a random walk process) got renamed the “Efficient Markets Hypothesis”. At least at this point, it was still called a “Hypothesis”, which gave at least a clue that it was an empirical claim about the statistical properties of securities returns rather than a necessary truth about underlying reality – the “Efficient Markets Theory” (and, god help us, “Theorem”) was yet to come.
Although he was one of the parties to the mis-re-naming, Eugene Fama did a lot of really fundamental work in sharpening up the concept of what it might mean for securities prices to fluctuate “randomly”. In particular, his weak, semi-strong and strong forms of the Efficient Markets Hypothesis started the ball rolling with respect to thinking about what sort of information one should be conditioning on, when carrying out the statistical tests for a random walk. On the basis of his own research – which was very good at the time, albeit that as time and science moved on, it became apparent that the standard statistical tests of the day were pretty low in power and tended not to be very good at rejecting the hypothesis of a random walk – Fama concluded that the basic answer to the question was that as far as anyone could tell, and certainly to the extent of being able to profit from them, securities prices were random and anyone charging money for the service of being able to predict them was probably lying. This was described, rather embarrassingly, as “the best established proposition in social sciences” in 1978 by Michael Jensen, who was one of a large crew of mainly American academics who kind of picked up this ball and ran with it, as we will get into discussing later.
At the start of the 1980s, though, Shiller kind of put a bomb under the EMH, by attacking it from the other side. Although as I noted above, “Properly Anticipated Prices Fluctuate Randomly” does not imply “Randomly Fluctuating Prices Are Properly Anticipated”, the reverse implication is valid – if prices could be demonstrated to not properly anticipate the changes in the underlying cash flows they represent claims on, then it could be demonstrated that they weren’t wholly random, as well as blowing up the larger intellectual project of “market efficiency” that had been built on the foundations of the Random Walk Hypothesis.
Shiller’s 1981 paper is pretty conceptually simple. Given that securities prices (particularly share prices) are meant to be based on the present value of a stream of future dividends, and given that dividend streams are not really all that volatile, why is it that share prices go up and down so much? Shiller showed that in order to believe that prices properly anticipated the risk-adjusted discounted value of future cash flows, you would have to believe something pretty implausible about the unobserved parameters (the rate of risk aversion and/or time preference) and that it was considerably more appealing to believe that the market simply and constantly got it wrong.
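The variance-bound logic behind Shiller’s paper can be sketched in a few lines: if the price is the expected present value of future dividends, it must be *less* volatile than the “ex post rational” price computed from the dividends actually realized, because the forecast error is orthogonal to the forecast. A toy illustration under assumed AR(1) dividends (the discount factor, persistence and horizon here are my illustrative choices, not Shiller’s):

```python
import random
import statistics

random.seed(7)

beta = 0.97   # discount factor (assumed for illustration)
rho = 0.9     # dividend persistence (assumed)
H = 100       # truncation horizon for the present-value sums
n = 5_000

# AR(1) dividends around zero, so that E_t[d_{t+k}] = rho**k * d_t
d = [0.0]
for _ in range(n + H):
    d.append(rho * d[-1] + random.gauss(0.0, 1.0))

# "Rational" price: expected present value of future dividends given d_t
coef = sum((beta * rho) ** k for k in range(1, H + 1))
p = [coef * d[t] for t in range(n)]

# Shiller's "ex post rational" price: PV of dividends actually realized
p_star = [sum(beta ** k * d[t + k] for k in range(1, H + 1)) for t in range(n)]

var_p = statistics.pvariance(p)
var_pstar = statistics.pvariance(p_star)
print(f"Var(p) = {var_p:.1f}  vs  Var(p*) = {var_pstar:.1f}")
# Because p* = p + forecast error, and the error is orthogonal to p,
# theory requires Var(p) <= Var(p*). Shiller's point was that in the
# actual data the inequality runs spectacularly the wrong way.
```

In the simulation the bound holds by construction; Shiller’s bombshell was that for US stock prices and dividends it is violated, which is the “excess volatility” result.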
I like to think that, as the majority of CT readers stroke themselves with pleasure at the thought of “Markets are not efficient after all! I was right to do that humanities degree!” and mark down Robert Shiller as A Good Thing to be counterposed to Eugene Fama who was A Bad Thing, a small minority of like-minded souls will be thinking “hang on, tell me about that ‘stock prices not random’ thing?”.
Indeed my friends. If stock prices have “too much” randomness, then they overreact in both directions. Specifically, if the Shiller Cyclically Adjusted Price Earnings ratio (or price-dividends, or whatever) is historically high, then it will tend to fall and vice versa if it is low. This one works. It’s by now a really quite well-established fact about securities prices.
In fairness to Fama, his “round two” contribution to this debate showed that he is a good empiricist at heart, as he spent most of the 80s and early 90s doing work with Kenneth French on testing what kinds of “anomalies” or sources of predictability could be explained away as artifacts of the data or statistical quirks, and which were real and persistent effects that had to be taken into account. In the end, he concluded that before saying that stock prices fluctuated randomly, one had to condition on factors which might include company size, “value” (book to market ratio) and even “momentum” (shares which have performed strongly in recent periods do seem to have a tendency to continue to outperform which can’t fully be chalked up to statistical noise). For reasons I don’t fully understand, Fama still thinks that a theory which allows for these factors is still worth calling a theory of “efficient markets”, but fair do’s to the guy – he won the prize for empirical work and he has not been scared to go where the data took him.
Meanwhile, in another part of the forest, Hansen’s work goes to one of the points that I kind of skated over in the discussion of Shiller above. I noted that, given the variability of dividends and the variability of share prices, one would have to have an absolutely implausible amount of variability in risk aversion to conclude that share prices were “Properly Anticipating” the future. But how can you know what amount of variability is or isn’t implausible? The state of the art in econometrics when Shiller (1981) came out was the “maximum likelihood” method, by which you specify a probability distribution for the error process and then calculate how far out of the tail (or how close to the centre) of this distribution the errors (“residuals”) would have to be in order to give the actual data, if the model you were considering was the correct model.
Which is not very satisfactory, given that we are dealing with securities prices here and that one of the things we have known about securities prices since Mandelbrot (Fama’s dissertation supervisor) is that when you are making assumptions about their probability distribution, you are on really very shaky ground. Which is why Hansen is getting a share of the prize, despite not appearing on television as much as the other two. The Generalized Method of Moments is much weaker in its assumptions than MLE … what you do is …
Rather than maximizing the likelihood function of the residuals, what you do is take advantage of the fact that your model will usually define some function of the residuals of which you want the expectation to be zero. You find out what you have to do to the parameters to make this function equal zero, and you have your estimate, in the simple case, without having to make any of the simplifying assumptions of the linear regression model. In the difficult case (for example with difficult time series structures, where you would want some function of the correlation of the errors to also be zero), there will be more such functions than you have parameters, so you choose the set of parameters which minimizes the value of these “overidentifying restrictions”. Then, Hansen’s methodology tells you how to transform the distance from zero of the minimized functions to get a variable that is chi-square distributed, allowing you to test your model after all.
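The just-identified case is easy to make concrete: write down moment conditions whose expectation should be zero under your model, then choose parameters so their sample analogues are exactly zero. A deliberately simple method-of-moments sketch on synthetic data (Hansen’s contribution is the general, overidentified machinery with the weighting matrix and the chi-square test, which this toy skips):

```python
import random
import statistics

random.seed(0)

# Synthetic data from a known distribution (chosen for illustration)
true_mu, true_sigma = 2.0, 1.5
xs = [random.gauss(true_mu, true_sigma) for _ in range(50_000)]

# Moment conditions implied by the model:
#   g1(x; mu)        = x - mu                 with E[g1] = 0
#   g2(x; mu, sigma) = (x - mu)**2 - sigma**2  with E[g2] = 0
# Just-identified: two conditions, two parameters, so we can set the
# sample moments exactly to zero and solve in closed form.
mu_hat = statistics.fmean(xs)  # zeroes the sample mean of g1
sigma_hat = statistics.fmean([(x - mu_hat) ** 2 for x in xs]) ** 0.5  # zeroes g2

print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
# With MORE conditions than parameters (the overidentified case) you
# cannot zero them all; GMM minimizes a weighted quadratic form in the
# sample moments, and Hansen's J-statistic on the minimized value is
# asymptotically chi-square, which is what delivers the model test.
```

Note that nothing here required specifying a full probability distribution for the errors, which is exactly the advantage over maximum likelihood when, per Mandelbrot, distributional assumptions about returns are shaky.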
In the context of securities prices, this allows you to build a full model of investment and consumption behaviour and compare the “excessive” variability found in Shiller to the amount of variation in risk appetite which it might actually be reasonable to presume (given that risk aversion is driven by diminishing marginal utility of consumption, and given the actual variability of consumption). And you find that quite a lot, but not all, of the excess variability could plausibly be chalked up to people correctly anticipating the fluctuations in their real consumption, and being more or less willing to bear investment risk as a result.
And so that’s my tour d’horizon of the empirical securities returns research – it’s basically a quite important and very interesting project which (by the end, in the Hansen estimates of generalized consumption-driven asset pricing models) has taken us to some quite deep and interesting places in understanding fundamental things about risk attitudes and utility. Along the way, a lot of very useful techniques were invented – particularly the Generalized Method of Moments – which have been useful in all sorts of other fields. Stripped of all the ideological bullshit, this is a thoroughly deserved Nobel for all concerned.
But …. well, “stripped of all the ideological bullshit”, the Nuremberg Rallies were a folk festival. This is economics, and the ideology is not something that can be yaddaed away. As I’ve noted on this blog a load of times, there is, hiding somewhere inside the bloated corpus of economics, a nice and intellectually respectable branch of science struggling to get out. It’s just that, somewhere in the nineteenth century, this lean and useful engineering discipline fell into disreputable company and acquired a huge amount of psychological and philosophical baggage, leaving it in very poor shape to resist the further intellectual depredations of the Cold War. So yeah, “Efficient Markets Theory” … it’s bad.
We’ve discussed its Zombie qualities on a number of occasions (and in the relevant chapter of John’s book), but my favourite trip round this particular mulberry bush was in 2004, when John set out the numerous and important policy debates in which fairly massive and substantial conclusions were deduced from a hypothesis (and after 1981, a largely falsified hypothesis) about the stochastic properties of prices on the New York Stock Exchange. I said right at the top of the piece that “Properly Anticipated Prices Fluctuate Randomly” can’t be taken to mean “Prices Which Fluctuate Randomly Are Properly Anticipated”, but actually where this horse and cart was driven to, it looked more like “Because Some Prices Seem To Fluctuate Randomly, No Non-Market-Based Policy Can Be Optimal”. And even beyond that, to various theories of the confidence fairy under which the main job of government is to act as an investor relations officer for the local treasury bonds, which aren’t even consistent with the original thesis about securities predictability. The disastrous metastasis of “random walk” into “efficient markets” is a perfect example of how difficult it is to take the ideology out of economics.
So should we get rid of the prize for the economists “until they can show they are a proper science”? Well, I don’t think so. If you’re going to have economics at all, you’re going to have politicized economics; that’s the nature of the beast. All the sciences have disagreements, and (ugly little secret) in all the sciences, those disagreements are resolved by social processes and things which very much resemble politics of some sort or other – this is a difference of degree, not of kind. Economics has the problem to a much greater degree than other sciences, for the fairly obvious reason that it’s the branch of science which has most to do with the distribution of finite resources in society, so an incorrect view in economics can still be worth pushing in a way in which an incorrect theory of fundamental physics can’t. But this problem isn’t fundamentally one of economics as a science – it’s due to the fact that economics is carried out in the context of society. And really, if someone makes the argument that “third world countries ought to deregulate their capital accounts because most American mutual funds don’t justify their fees”, and society believes them, whose fault is that?
 Of course the idea would be for the premier global prize in economics to be awarded by the Econometric Society, not least because John is a member of it.
 Brian Josephson (Physics, 1973) and Kary Mullis (Chemistry, 1993)
 He gives academics the popular culture recognition that they’ve been need’n
 Although not necessarily recorded in a consistent, comparable or machine-readable way. One of the big unsung achievements of empirical finance has been the creation of consolidated and “clean” securities returns datasets.
 Log returns, yes yes. Give me a break here, I’m writing for a general audience.
 Where this solecism is used in print, it appears to mean Samuelson’s result about 40% of the time, the Modigliani-Miller theorem about 30%, some version of the Markowitz portfolio theorem 20% and that the author does not have a clue what he is talking about roughly 100% of the time.
 As Samuelson’s concluding paragraphs have it … “I have not here discussed where the basic probability distributions are supposed to come from. In whose minds are they ex ante? Is there any ex post validation of them? Are they supposed to belong to the market as a whole? And what does that mean? Are they supposed to belong to the “representative individual”, and who is he? Are they some defensible or necessitous compromise of divergent expectation patterns? Do price quotations somehow produce a Pareto-optimal configuration of ex ante subjective probabilities? This paper has not attempted to pronounce on these interesting questions”.
 Nearly all this work was done by economists and econometricians, by the way. One area in which economics really can hold its head up high with any “hard” science you care to name is in the invention of useful pieces of statistical toolkit to deal with random variables.
 It’s also technically a beauty, although I’m oversimplifying it mightily here. Wait until I get onto Hansen, then the real intellectual vandalism will start.
 Kidding! Love you guys really.
 Shiller, by the way, is a guy who believes not only that there should be derivatives markets in absolutely everything from GDP to unemployment to average wages, but that everyone, in the sense of the normal middle class, should actively trade on these markets, in the belief that this would allow people to “hedge” their macroeconomic risk exposures. Personally I think this is a world-beater of a Terrible Idea (and take comfort in the fact that no such contract has ever even looked like taking off, with the possible and qualified exception of the Case-Shiller housing index futures). In the context of Shiller’s results on stock market volatility, it really looks to me like a case of “This food is terrible! And everyone needs to eat it in larger portions!”.
 On the other hand, I am not going to cosign stuff like this, of course, but I think this has to be counted as personal and ideological peccadilloes which shouldn’t be taken as detracting from the underlying quality of the work. As I discuss later in this essay, there are admittedly a hell of a lot of such peccadilloes to yaddayadda around, but hey – Kary Mullis had glowing raccoons from outer space.
 For the longest time, the view from Chicago was that the size, value and momentum effects might be proxies for as yet unknown sources of risk, for which the associated excess returns were fair compensation. This was never disproved, but as far as I can tell the Chicago guys just got tired of getting laughed at and kind of gave up on this theory. Nowadays as I understand it, the view when pressed is that the “anomalies” are genuine features of the dataset which have to be taken into account when testing theories on historical data, but not really part of the underlying model, and maybe they will disappear when we have ten thousand years of equity returns to deal with. At least it’s an ethos.
 So hang on, does this mean that you can beat the market? Basically yup. An awful lot of people writing on this Nobel appear to have learned the 1970s version of Efficient Markets when they were at university or at business school and never kept up with the rest of the literature. If you hold a value, size and momentum-loaded portfolio (and load up on a couple of other factors outside the Fama-French ones, most particularly on stocks with low volatility), and if you rebalance it between stocks and cash when the Shiller dividend and earnings ratios are a long way from long term values, then the current state of science is that you will have constructed a portfolio which on the basis of historical evidence is likely to outperform. And all you need is good enough data to calculate these ratios, trading costs low enough to execute the strategy and enough self-discipline and strength of will to stick to the plan when it looks like it is going wrong (as Shiller’s results demonstrate it will, a lot of the time). Easier said than done, but there are some people who manage it.
 Does not constitute investment advice, not least because it is couched in such absurdly general terms as to hardly constitute advice at all.
 I once gave advice on a mailing list that
“As far as active investment goes, I always put it this way – are you prepared to put as much time and effort into managing your investments as you would into running a small business? If you are then go for it – playing the market is not a bad hobby, about as interesting as birdwatching or something. And most people on this list actually do have enough intelligence to beat the market and therefore to beat most active-managed funds, in my opinion. The trouble is of course that beating the market doesn’t just require intelligence, it requires self-discipline, hard work and the ability to control your emotions. But in many ways so does success in bird-watching.”
 Not in any regulatory or fiduciary sense.
 Plenty of fast skating over technical material there, but frankly, as a description of the method of maximum likelihood which can be squeezed into two tweets, I think that is pretty bloody good actually.
 Not so good, that one. Five and a half tweets and considerably less clear. Any suggestions for improvement gratefully received. On Twitter itself, I managed
“The expectation of model errors ought to be 0. Their correlation ought to be 0. The extent to which it isn’t is a measure of model fit”
Which is pretty barbaric (quite apart from anything, the residual covariance matrix gives you a measure of significance, not fit), but kind of gives the flavour of it in 140 characters.
Ingrid links to some fascinating discussion from Philip Mirowski of the role of Swedish domestic politics in the establishment of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, with emphasis on the way in which claims of “scientific” status for economics helped the claim of the Swedish central bank to independence from government.
In the broader context, it seems pretty clear that, if the idea had arisen even a few years later, it would have been rejected. In 1969, economics really did seem like a progressively developing science in which new discoveries built on old ones. There were some challenges to the dominant Keynesian-neoclassical synthesis but they were either marginalized (Marxists, institutionalists) or appeared to reflect disagreements about parameter values that could fit within the mainstream synthesis.
Only a few years later, all of this was in ruins. The rational expectations revolution sought, with considerable success, to discredit Keynesian macroeconomics, while promising to develop a New Classical model in which macroeconomic fluctuations were explained by Real Business Cycles. This project was a failure, but led to the award of a string of Nobels, before macroeconomists converged on the idea of Dynamic Stochastic General Equilibrium models, which failed miserably in the context of the global financial crisis. The big debate in macro can be phrased as “where did it all go wrong?” Robert Gordon says 1978, I’ve gone for 1958, while the New Classical position implies that the big mistake was Keynes’ General Theory in 1936.
The failure in finance is even worse, as is illustrated by this year’s awards where Eugene Fama gets a prize for formulating the Efficient Markets Hypothesis and Robert Shiller for his leading role in demolishing it. Microeconomics is in a somewhat better state: the rise of behavioral economics has the promise of improved realism in the description of economic decisions.
Overall, economics is still at a pre-scientific stage, at least as the idea of science is exemplified by Physics and Chemistry. Economists have made some important discoveries, and a knowledge of economics helps us to understand crucial issues, but there is no agreement on fundamental issues. The result is that prizes are awarded both for “discoveries” and for the refutation of those discoveries.
Wait, why do we actually have The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel? If you have ten minutes to spare, Philip Mirowski will give us part of the answer, and tell us about his research project investigating this issue.
So, during our latest enjoyable discussion fracas mêlée, John alluded to the fact that what I have is something more like a reading illness than a love of literature per se. I usually either walked to school or took the (very crowded) bus when I lived in New York. So I never developed the special skill, honed to perfection by my uncle, of folding the New York Times first, in half upper to lower; then, in halves again but along the central line; finally, in half again along the midline, and reading 1/8 of a page at a time. This sounds easy. But you really need to picture my uncle, a partner at Cadwalader, Wickersham & Taft, taking the subway to work down on Wall Street from the upper East side, whence he was bound to get a seat—I must note he was being rather frugal (which will seem to be belied by what follows, but having a smaller number of really well-made suits is cheaper in the long run). There he is: sitting, in a beautiful bespoke suit (I thought he would die when during a brief fever of dot.com bubbliness the firm introduced “casual Fridays,” which policy was happily discarded in 2000, as I assured him it would be), and horn-rimmed glasses, on the express, hemmed in by people, none of whom he is inconveniencing in any way by his NYT reading, because of his special, lifetime-New-Yorker ability to pick up each section, shake it into sudden crisp folds against its own grain, and repeat, as needed, until all is read and the crossword finished by 7:45 a.m. when he gets to work. (As I say, it sounds easy, but think of what happens when you must get from an article folded into the top left 1/8 of one page into the middle 1/8 of the lower part of the next page, and you may not extend it beyond your knees or your elbows beyond your shoulders.) He is a very meticulous and wonderful person, my uncle.
So, naturally, I’m reading Studd right now. Wait, wait, wait, it makes sense! It’s out of a perverse curiosity to see how well I remember the book, having not looked at it for 14 years! Perfectly well, is how. He has two sentences near the start recounting a brief exchange with the son of a cousin he looks up after fifteen years or whatever, because the son expressed himself with a few choice words on the subject of learning Latin and got sent out of the room, but then Studd’s all, giving him manly man advice and shit. I remembered that just from seeing the cover! OK, then I told myself I was reading it on y’all’s behalfs so I could post about it. But it’s not stupid in a particularly interesting way. The author is one of those “I’m so nonchalant about sex because” handwaving “unspecified reasons” who is also “the world is going to hell in a handbasket because man in the gray flannel suit so secretly I’m a beatnik too, down inside, really, I am, please have morally wrong pre-marital sex with me?” But it’s 1972. Dude, shouldn’t you at least secretly be a hippie by now? The author lived in Africa for quite a while and hunted a lot and clearly enjoys it; the safari sections are the most readable part, and not actually all that racist. Not much at all, really. And the main character reads James Bond novels, and laughs about their implausibility. IT’S GETTING META UP IN HERE. Also Hornblower. Which is fine, everyone should read about Hornblower. Rather than, say, about Studd. I sent John accusing texts last night. “Studd is so horrible.” Him: “there are better SF novels unpacked on the shelf, read them.” Me: “WHY R U MAKN ME DO THS? I H8 U!” After they all returned from getting udon: older daughter, “but why are you reading it if you hate it so much then?” Younger daughter, to whom I had explained things in a pellucid fashion “to get it over with more quickly.” Yes, see! It makes perfect sense! 
Luckily I finished reading it while I was composing this post, and now I am going to throw it down the rubbish chute of our new 24th floor apartment (that we rent, but it’s new to us).
I became a Dutch citizen earlier this year. That is, I became a Dutch citizen given the definition of ‘citizen’ that most political scientists would use – someone with full political rights, including the right to vote and the right to stand for election. The process was partly Kafkaesque – perhaps I’ll tell you some more about that another time.
The reason I wanted Dutch citizenship is that I want to be able to vote in the country in which I live, in which I plan to stay, in which my children grow up, in which I work, in which I pay taxes, and – perhaps the most important – where I care a lot about how institutions are being redesigned and policies implemented. The reason I didn’t apply for Dutch citizenship earlier on, is that it has only recently become possible for me to acquire Dutch citizenship without losing my Belgian citizenship. And I didn’t want to give up Belgian citizenship, since at the ‘personal identity’ level it feels like a denial of part of oneself if one has to give up the nationality that has shaped the person one has become. I think people should be able to hold two passports since one’s nationality does not only reflect which political community one regards oneself most engaged with, but also one’s identity at a deeper level – whatever one prefers to call this – the psychological level or related to one’s personal self-narrative, or something similar.
But now I am in this remarkable position to be a person with two votes. I can vote for the national and regional elections in Belgium, and for local, national and European elections in the Netherlands. Isn’t this a violation of the deep democratic principle we all know by the slogan ‘one man, one vote’? Some friends have suggested that there is nothing wrong with having two votes, since after all one has ties with both countries. But that doesn’t seem quite right to me, since it would still mean that one person overall has greater political power than their co-citizens.
So I guess my position is this: Two passports: fine. Two votes: not OK. We should have a set of rules requiring those of us who hold two passports to prioritise them: the first confers all the rights of citizenship, while the second confers all those rights except the right to vote.
has been found! And in pretty good shape by the looks of it. Fantastic news.
QED Inaugural Issue Is Out (Charles E. Morris III, Syracuse University, email@example.com; Thomas K. Nakayama, Northeastern University, firstname.lastname@example.org)
Today, in part 4 of my series on the intellectual history of fear, I turn to Hannah Arendt’s theory of total terror, which she developed in The Origins of Totalitarianism—and then completely overhauled in Eichmann in Jerusalem. As I make clear in my book, I’m more partial to Eichmann than to Origins. But Origins has been the more influential text, at least until recently, and so I deal with it here.
The Origins of Totalitarianism is a problematic though fascinating book (the second part, on imperialism, is especially wonderful). One of the reasons it was able to gain such traction in the twentieth century is that it managed to meld Montesquieu’s theory of despotic terror with Tocqueville’s theory of democratic anxiety. It became the definitive statement of the Cold War in part because it took these received treatments of Montesquieu and Tocqueville and mobilized them to such dramatic effect. (One of the reasons, as I also argue in the book, that Eichmann provoked such outrage was that it undermined these received treatments by reviving ways of thinking about fear that we saw in Hobbes and that had been steadily abandoned during the 18th and 19th centuries.)
But, again, if you want to get the whole picture, buy the book.
• • • • •
Mistress, I dug upon your grave
To bury a bone, in case
I should be hungry near this spot
When passing on my daily trot.
I am sorry, but I quite forgot
It was your resting-place.
It was a sign of his good fortune—and terrible destiny—that Nikolai Bukharin was pursued throughout his short career by characters from the Old Testament. Among the youngest of the “Old Bolsheviks,” Bukharin was, in Lenin’s words, “the favorite of the whole party.” A dissident economist and accomplished critic, this impish revolutionary, standing just over five feet, charmed everyone. Even Stalin. The two men had pet names for each other, their families socialized together, and Stalin had Bukharin stay at his country house during long stretches of the Russian summer. So beloved throughout the party was Bukharin that he was called the “Benjamin” of the Bolsheviks. If Trotsky was Joseph, the literary seer and visionary organizer whose arrogance aroused his brothers’ envy, Bukharin was undoubtedly the cherished baby of the family.
Not for long. Beginning in the late 1920s, as he sought to slow Stalin’s forced march through the Russian countryside, Bukharin tumbled from power. Banished from the party in 1937 and left to the tender mercies of the Soviet secret police, he confessed in a 1938 show trial to a career of extraordinary counterrevolutionary crime. He was promptly shot, just one of the 328,618 official executions of that year.
Not long before his murder, Bukharin invoked a rather different biblical parallel to describe his fate. In a letter to Stalin, Bukharin recalled the binding of Isaac, the unwitting son whose father, Abraham, prepares him, on God’s instruction, for sacrifice. At the last minute, an angel stops Abraham, declaring, “Lay not thine hand upon the lad, neither do thou any thing unto him: for now I know that thou fearest God, seeing thou hast not withheld thy son, thine only son from me.” Reflecting upon his own impending doom, however, Bukharin envisioned no such heavenly intervention: “No angel will appear now to snatch Abraham’s sword from his hand.”
The biblical reference, with its suggested equivalence of Stalin and Abraham, was certainly unorthodox. But in the aftermath of Bukharin’s execution it proved apt, for no other crime of the Stalin years so captivated western intellectuals as the blood sacrifice of Bukharin. It was not just that this darling of the communist movement, “the party’s most valuable and biggest theoretician,” as Lenin put it, had been brought down. Stalin, after all, had already felled the far more formidable Trotsky. It was that Bukharin confessed to fantastic crimes he did not commit.
For generations of intellectuals, Bukharin’s confession would symbolize the depredations of communism, how it not only murdered its favored sons, but also conscripted them in their own demise. Here was an action, it seemed to many, undertaken not for the self, but against it, on behalf not of personal gain, but of self-destruction. Turning Bukharin’s confession into a parable of the entire communist experience, Arthur Koestler, in his 1941 novel Darkness at Noon, popularized the notion—later taken up by Maurice Merleau-Ponty in Humanism and Terror and Jean-Luc Godard in his 1967 film La Chinoise—that Bukharin offered his guilt as a final service to the party. In this formulation it was not Stalin, but Bukharin, who was the true Abraham, the devout believer who gave up to his jealous god that which was most precious to him.
But where Abraham’s readiness to make the ultimate sacrifice has aroused persistent admiration—Kierkegaard deemed him a “knight of faith,” prepared to violate the most sacred of norms for the sake of his fantastic devotion—Bukharin’s has provoked almost universal horror. Not just of Stalin and the Bolshevik leadership, but of Bukharin himself—and of all the true believers who turned the twentieth century into a wasteland of ideology.
Moralists may praise familiar episodes of suicidal sacrifice such as the Greatest Generation storming Omaha Beach, but the willingness of the Bukharins of this world to give up their lives for the sake of their ideology remains, for many, the final statement of modern self-abasement. Not because the sacrifice was cruel or senseless—not even because it was undertaken for an unjust cause or was premised on a lie—but because of the selfless fanaticism and political idolatry, the thoughtless immolation and personal diminution, that are said to inspire it. Communists, the argument goes, collaborated in their own destruction because they believed; they believed because they had to; they had to because they were small.
According to Arthur Schlesinger, communism “fills empty lives”—even in the United States, with “its quota of lonely and frustrated people, craving social, intellectual and even sexual fulfillment they cannot obtain in existing society. For these people, party discipline is no obstacle: it is an attraction. The great majority of members in America, as in Europe, want to be disciplined.” Or, as cultural critic Leslie Fiedler wrote of the Rosenbergs after their execution, “their relationship to everything, including themselves, was false.” Once they turned into party liners, “blasphemously den[ying] their own humanity,” “what was there left to die?” Abraham believed in his faith and was deemed a righteous man; the communist believed in his and was discharged from the precincts of humanity.
As we now know, Bukharin’s confession, like so many others of the Stalin era, was not quite the abnegation intellectuals have imagined. From 1930 to 1937, Bukharin resisted, to the best of his abilities, the more outlandish charges of the Soviet leadership. As late as his February 1937 secret appearance before the Plenum of the Central Committee, Bukharin insisted, “I protest with all the strength of my soul against being charged with such things as treason to my homeland, sabotage, terrorism, and so on.” When he finally did admit to these crimes—in a public confession, replete with qualifications casting doubt upon Stalin’s legitimacy—it was after a yearlong imprisonment, in which he was subject to brutal interrogations and threats against his family.
Bukharin had reason to believe that his confession might protect him and his loved ones. Soviet leaders who confessed were sometimes spared, and Stalin had intervened on previous occasions to shield Bukharin from more vicious treatment. Threats against family members, moreover, were one of the most effective means for securing cooperation with the Soviet regime; in fact, many of those who refused to confess had no children. Instead of manic self-liquidation, then, Bukharin’s confession was a strategic attempt to preserve himself and his family, an act not of selfless fanaticism but of self-interested hope.
But for many intellectuals at the time, these calculations simply did not register. For them, the archetypical evil of the twentieth century was not murder on an unprecedented scale, but the cession of mind and heart to the movement. Reading the great midcentury indictments of the Soviet catastrophe—Darkness at Noon, The God That Failed, 1984, The Captive Mind—one is struck less by their appreciation of Stalinist mass murder—it would be years before Solzhenitsyn turned the abstraction of the gulag into dossiers of particular suffering—than by their horror of the liquidated personality that was supposed to be the new Soviet man. André Gide noted that in every Soviet collective he visited “there are the same ugly pieces of furniture, the same picture of Stalin and absolutely nothing else—not the smallest vestige of ornament or personal belonging.” (Writers consistently viewed public housing, whether in the Soviet Union or in the United States, as a proxy for leftist dissolution. Fiedler, for instance, made much of the fact that the Rosenbergs lived in a “melancholy block of identical dwelling units that seem the visible manifestation of the Stalinized petty-bourgeois mind: rigid, conventional, hopelessly self-righteous.”) Perversely taking Stalin at his word—that a million deaths was just a statistic—intellectuals concluded that the gulag, or Auschwitz, was merely the outward symbol of a more profound, more ghastly subtraction of self. Even in the camps, Hannah Arendt wrote, “suffering, of which there has been always too much on earth, is not the issue, nor is the number of victims.” It was instead that the camps were “laboratories where changes in human nature” were “tested” and “the transformation of human nature” engineered for the sake of an ideology.
If we owe any one thinker our thanks, or skepticism, for the notion that totalitarianism was first and foremost an assault, inspired by ideology, against the integrity of the self, it is most assuredly Hannah Arendt. A Jewish German émigré to the United States, Arendt was not the first to make such claims about totalitarianism. But by tracing the ideologue’s self-destruction against a backdrop of imperial misadventure and massacre in Africa, waning aristocracies and dissolute bourgeoisies in Europe, and atomized mass societies throughout the world, Arendt gave this vision history and heft. With a cast of characters—from Lawrence of Arabia and Cecil Rhodes to Benjamin Disraeli and Marcel Proust—drawn from the European landscape, Arendt’s The Origins of Totalitarianism made it impossible for anyone to assume that Nazism and Stalinism were dark emanations of the German soil or Russian soul, geographic accidents that could be ascribed to one country’s unfortunate traditions. Totalitarianism was, as the title of the book’s British edition put it, “the burden of our times.” Not exactly a product of modernity—Arendt repeatedly tried to dampen the causal vibrato of her original title, and she was as much a lover of modernity as she was its critic—but its permanent guest.
Yet it would be a mistake to read The Origins of Totalitarianism as a transparent report of the totalitarian experience. As Arendt was the first to acknowledge, she came to the bar of political judgment schooled in “the tradition of German philosophy,” taught to her by Heidegger and Jaspers amid the crashing edifice of the Weimar republic. Making her way through a rubble of German existentialism and Weimar modernism, Arendt gave totalitarianism its distinctive cast, a curious blend of the novel and familiar, the startling and self-evident. Arendt’s would become the definitive statement—so fitting, so exact—not because it was so fitting or exact, but because it mixed real elements of Stalinism and Nazism with leading ideas of modern thought: not so much twentieth-century German philosophy, as we shall see, but the notions of terror and anxiety Montesquieu and Tocqueville developed in the wake of Hobbes. As Arendt confessed in private letters, she discovered “the instruments of distinguishing totalitarianism from all—even the most tyrannical—governments of the past” in Montesquieu’s writings, and Tocqueville, whose work she read while drafting The Origins of Totalitarianism, was a “great influence” on her.
But within a decade of publishing The Origins of Totalitarianism, Arendt changed course. After traveling to Israel in 1961 to report on the trial of Adolf Eichmann for The New Yorker, she wrote Eichmann in Jerusalem, which turned out to be not a trial report at all, but a wholesale reconsideration of the dynamics of political fear. Not unlike Montesquieu’s Persian Letters or the first half of Tocqueville’s Democracy in America, Eichmann in Jerusalem posed a direct challenge to the account of fear that had earned its author her greatest acclaim. It produced a storm of outrage, much of it focused on Arendt’s depiction of Eichmann, her savage sense of irony, and her criticism of the Jewish leadership during the Holocaust. But an allied, if unspoken, source of fury was the widespread hostility to Arendt’s effort to upend the familiar canons of political fear: for in Eichmann, Arendt showed that much that Montesquieu and Tocqueville—and she herself—had written about political fear was simply false, serving the political needs of western intellectuals rather than the truth. Arendt paid dearly for her efforts. She lost friends, was deemed a traitor to the Jewish people, and was hounded at public lectures. But it was worth the cost, for in Eichmann Arendt managed “a paean of transcendence,” as Mary McCarthy put it, offering men and women a way of thinking about fear in a manner worthy of grown-ups rather than children. That so many would reject it is hardly surprising: little since Hobbes had prepared readers for the genuine novelty that was Eichmann in Jerusalem. Forty years later, we’re still not prepared.
If Hobbes hoped to create a world where men feared death above all else, he would have been sorely disappointed, and utterly mystified, by The Origins of Totalitarianism. What could he possibly have made of men and women so fastened to a political movement like Nazism or Bolshevism that they lacked, in Arendt’s words, “the very capacity for experience, even if it be as extreme as torture or the fear of death?” Hobbes was no stranger to adventures of ideology, but his ideologues were avatars of the self, attracted to ideas that enlarged them. Though ready to die for their faith, they hoped to be remembered as martyrs to a glorious cause. For Arendt, however, ideology was not a statement of aspiration; it was a confession of irreversible smallness. Men and women were attracted to Bolshevism and Nazism, she maintained, because these ideologies confirmed their feelings of personal worthlessness. Inspired by ideology, they went happily to their own deaths—not as martyrs to a glorious cause, but as the inglorious confirmation of a bloody axiom. Hobbes, who worked so hard to reduce the outsized heroism of his contemporaries, would hardly have recognized these ideologues, who saw in their own death a trivial chronicle of a larger truth foretold.
What propelled Arendt in this direction, away from Hobbes? Not the criminal largesse of the twentieth century—she repeatedly insisted that it was not the body counts of Hitler and Stalin that distinguished their regimes from earlier tyrannies—but rather a vision, inherited from her predecessors, of the weak and permeable self. Between the time of Hobbes and that of Arendt, the self had suffered two blows, the first from Montesquieu, the second from Tocqueville. Montesquieu never contemplated the soul-crushing effects of ideology, but he certainly imagined souls crushed. It was he who first argued, against Hobbes, that fear, redefined as terror, did not enlarge but reduce the self, and that the fear of death was not an expression of human possibility but of desperate finality. Tocqueville retained Montesquieu’s image of the fragile self, only he viewed its weakness as a democratic innovation. Where Montesquieu had thought the abridged self was a creation of despotic terror, Tocqueville believed it was a product of modern democracy. The democratic individual, according to Tocqueville, lacked the capacious inner life and fortified perimeter of his aristocratic predecessor. Weak and small, he was ready for submission from the get-go. So strong was this conviction about the weakness of the modern self that Arendt was able to apply it, as we shall see, not only to terror’s victims but, even more wildly, to its wielders as well.
Melding Montesquieu’s theory of despotic terror and Tocqueville’s account of mass anxiety, Arendt turned Nazism and Stalinism into spectacular triumphs of antipolitical fear, what she called “total terror,” which could not “be comprehended by political categories.” Total terror, in her eyes, was not an instrument of political rule or even a weapon of genocide. One will look in vain throughout the last third of The Origins of Totalitarianism, where Arendt addresses the problem of total terror, for any reckoning with the elimination of an entire people. Total terror, for Arendt, was designed to escape the psychological burdens of the self, to destroy individual freedom and responsibility. It was a form of “radical evil,” which sought to eradicate not the Jews or the kulaks but the human condition. If Arendt’s totalitarianism constituted an apotheosis, it was not of human beastliness. It was of a tradition of thought—established by Montesquieu, elaborated by Tocqueville— that had been preparing for the disappearance of the self from virtually the moment the self had first been imagined.
From Molly Wright Starkweather, Digital Track Director:
Back in May at the University of Chicago, this happened:
Two locksmiths with medical conditions were told to repair locks on the fourth floor of the Administration Building during the day. Stephen Clarke, the locksmith who originally responded to the emergency repair, has had two hip replacement surgeries during his 23 years as an employee of the University. According to Clarke, when he asked Kevin Ahn, his immediate supervisor, if he could use the elevator due to his medical condition, Ahn said no. Clarke was unable to perform the work, and Elliot Lounsbury, a second locksmith who has asthma, was called to perform the repairs. Lounsbury also asked Ahn if he could use the elevator to access the fourth floor, was denied, and ended up climbing the stairs to the fourth floor.
Clarke and Lounsbury were told they had to haul their asthma and hip replacements up four flights of stairs because the University of Chicago has had a policy of forbidding workers from using the elevators in this building, which houses the President’s office, during daytime hours. As the university’s director of labor relations put it: “The University has requested that maintenance and repair workers should normally use the public stairway in the Administration Building rather than the two public elevators.”
Upstairs, downstairs was once a metaphor for how the lower and higher orders of Edwardian England lived (servants downstairs, masters upstairs). Today, it’s a literal rendition of the daily grind of workers at our most elite universities.
After five months of agitation, including the threat of a rally and support from undergraduates and graduate students who are organizing their own union, University of Chicago President Robert Zimmer has at last issued a statement reversing the policy: “Let me state in the simplest of terms what the policy actually is: the elevators are for everybody’s use.”
So I ask you: If this is what it takes for workers at an elite American university to be able to use an elevator—a university that is very much in the public eye and thus susceptible to public pressure—what must it take for workers around the country, in small factories and far-off hamlets, to secure their basic rights and privileges?
That is a question I wish our academic theorists of democracy would think some more about.
While we’re on the topic of unions and universities, there was this salutary report from Inside Higher Ed the other day:
The authors of a paper released this year surveyed similar graduate students at universities with and without unions about pay and also the student-faculty relationship. The study found unionized graduate students earn more, on average. And on various measures of student-faculty relations, the survey found either no difference or (in some cases) better relations at unionized campuses.
The paper (abstract available here) appears in ILR Review, published by Cornell University.
“These findings suggest that potential harm to faculty-student relationships and academic freedom should not continue to serve as bases for the denial of collective bargaining rights to graduate student employees,” says the paper, by Sean E. Rogers, assistant professor of management at New Mexico State University; Adrienne E. Eaton, a professor of labor studies and employment relations at Rutgers University; and Paula B. Voos, a professor of labor studies and employment relations at Rutgers.
Much of the study focuses on student-faculty relations, and whether—as union critics fear—the presence of collective bargaining turns a mentoring relationship into an adversarial one. The graduate students were asked to respond to a series of statements about their professors as a measure of how they perceived their relationships. On many issues, there were not statistically significant differences. But on a number, the differences pointed to better relations at unionized campuses. Unionized graduate students were more likely than others to say their advisers accepted them as professionals, served as role models for them and were effective in their roles.
We often hear how liberals belong to the reality-based community and conservatives to the faith-based community. But given how resistant tenured faculty—including the most liberal—are to findings like these, perhaps we should revise our sense of who belongs where.
Philosophy/Communication: Studies in Hermeneutics, Ethics, and Critical Theory
Ramsey Eric Ramsey and Amit Pinchevski, Editors
This week's In Media Res theme focus is Apocalypse Now: End of the World Media (October 7 - October 11).
Here's the line-up: http://mediacommons.futureofthebook.org/imr/
Monday, October 7, 2013 - Kristine Weglarz & Beth Bonnstetter (University of West Florida & Adams State University) present: Apocalypse Always: The Imagination of Terrorism in JJ Abrams’ Star Trek(s)
If a devoted choir of lemmings were to go head-to-head against a squadron of rabid, venom-unleashing command-lambs, which would win? The command-lambs might look at first like the obvious choice, but I can’t help feeling that the mysteriously compelling harmonies of the lemming-choir’s deadly siren song would give the crafty rodents a decisive strategic advantage.
Cornell historian Holly Case has a fascinating piece in The Chronicle Review on Stalin as editor. Reminds me of that George Steiner line that the only people in the 20th century who cared about literature were the KGB.
Here are some excerpts. But read the whole thing.
Joseph Djugashvili was a student in a theological seminary when he came across the writings of Vladimir Lenin and decided to become a Bolshevik revolutionary. Thereafter, in addition to blowing things up, robbing banks, and organizing strikes, he became an editor, working at two papers in Baku and then as editor of the first Bolshevik daily, Pravda. Lenin admired Djugashvili’s editing; Djugashvili admired Lenin, though not uncritically—he rejected 47 of the articles Lenin submitted to Pravda.
Djugashvili (later Stalin) was a ruthless person, and a serious editor. The Soviet historian Mikhail Gefter has written about coming across a manuscript on the German statesman Otto von Bismarck edited by Stalin’s own hand. The marked-up copy dated from 1940, when the Soviet Union was allied with Nazi Germany. Knowing that Stalin had been responsible for so much death and suffering, Gefter searched “for traces of those horrible things in the book.” He found none. What he saw instead was “reasonable editing, pointing to quite a good taste and an understanding of history.”
Stalin always seemed to have a blue pencil on hand, and many of the ways he used it stand in direct contrast to common assumptions about his person and thoughts. He edited ideology out or played it down, cut references to himself and his achievements, and even exhibited flexibility of mind, reversing some of his own prior edits.
For Stalin, editing was a passion that extended well beyond the realm of published texts. Traces of his blue pencil can be seen on memoranda and speeches of high-ranking party officials (“against whom is this thesis directed?”) and on comic caricatures sketched by members of his inner circle during their endless nocturnal meetings (“Correct!” or “Show all members of the Politburo”).
The Stanford historian Norman Naimark describes the marks left by Stalin’s pencil as “greasy” and “thick and pasty.” He notes that Stalin edited “virtually every internal document of importance,” and the scope of what he considered internal and important was very broad. Editing a biologist’s speech for an international conference in 1948, Stalin used an array of colored pencils—red, green, blue—to strip the talk of references to “Soviet” science and “bourgeois” philosophy. He also crossed out an entire page on how science is “class-oriented by its very nature” and wrote in the margin: “Ha-ha-ha!! And what about mathematics? And what about Darwinism?”
But Stalin was still not satisfied. In the next round of substantial edits, he used his blue pencil to mute the conspiracy he had previously pushed the authors to amplify (struck-out text is shown in square brackets; the italicized insertion is marked with asterisks):

The Soviet people unanimously approved [the court’s verdict—the verdict of the people] the *annihilation* of the Bukharin-Trotsky gang and passed on to next business. The Soviet land was thus purged of a dangerous gang of heinous and insidious enemies of the people, whose monstrous villainies surpassed all of the darkest crimes and most vile treason of all times and all peoples.
MONTAGNE: Right, so given the rise of open-access journals, you decided to do an experiment. That is, send in for publication a fake experiment. And, as you describe it, it is a sting operation.