Out of the crooked timber of humanity, no straight thing was ever made
When I was sixteen and seventeen I did my 5th Year of secondary school twice. Amidst grinds, tears and two to three hours of Honours Maths homework each night, I just could not make it past Christmas and still understand what was going on. (The obvious and practical response – taking Ordinary Level Maths instead and accepting that a career in Medicine was out – just didn’t seem to present itself.) For two years I hungrily repeated the exercises in the small part of the curriculum I understood, and threw myself with increasing desperation and diminishing returns at the rest. The last chapter I remember mastering was called something like ‘Sequences, Series and the Binomial Theorem’.
Happily, understanding – at least a little – the concept of geometric progressions has turned out to be one of the most useful and widely applicable bits of Maths I could have picked up. It crops up everywhere: understanding the spread and gravity of DDOS attacks, why mouse infestations need to be hit early, why skimming stones on water is so hard, and how a young woman settling for less money than a man at the beginning of her career may still be paying for it when she’s middle-aged.
The definition of a geometric series or progression is ‘whenever a term of a sequence is a constant multiple of the preceding term’. When that multiple is greater than one, the numbers will get very big, very fast. If, for example, the multiple is two, you’ll get what we often lazily mislabel ‘exponential growth’. Exponential growth tends to sound less cheery when the term is applied in epidemiology.
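To make the definition concrete, here is a minimal sketch (not from the original post; the function name and numbers are purely illustrative) of how quickly a geometric progression with a multiple of two runs away:

```python
def geometric_terms(first, ratio, n):
    """Return the first n terms of a geometric progression,
    where each term is `ratio` times the preceding one."""
    terms = [first]
    for _ in range(n - 1):
        terms.append(terms[-1] * ratio)
    return terms

# With a multiple of 2, the terms double each step:
print(geometric_terms(1, 2, 8))  # [1, 2, 4, 8, 16, 32, 64, 128]
```

Eight steps take you from 1 to 128; another eight would take you past 32,000. That is the ‘exponential growth’ of the lazy mislabel.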
At dinner the other night, I learnt that the rate of increase of cases of Ebola in certain African countries has been modeled as a geometric progression for weeks, if not months.* Since at least August, the number of new Ebola infections has started to double every month. Common sense dictates that the more people infected, the more people who will be infected. Mathematics predicts chillingly just how bad it will be. The battle to stop the spread of this disease reaching the threshold where it is now running like wildfire has already been lost.
How did it get so bad, so fast? We already know the answer – failed states with no capacity to look after their people at the best of times fell totally apart in a crisis. Over the past two years, the Liberian government disbursed about 5% of the aid it received from the EU that should have gone into building a decent health system. People who might or might not be infected resisted government attempts to round them up and put them in hospitals that more accurately resembled enforced quarantine zones with little or no treatment.
In Sierra Leone, where the UK cut its direct aid budget by 20% two years ago, reported Ebola cases are doubling every three weeks. NGOs there believe a far greater number of people are dying of the disease unreported and at home. There are just over three hundred hospital beds for Ebola in the country. It is a complete disaster and as our mathematics tell us, it will only get worse.
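A back-of-the-envelope sketch shows why three-week doubling and three hundred beds cannot coexist for long. The starting figure below is invented purely for illustration; only the doubling period and the bed count come from the post:

```python
def periods_until_exceeds(start, ratio, threshold):
    """Count how many doubling periods pass before a geometrically
    growing quantity exceeds a fixed threshold."""
    count = start
    periods = 0
    while count <= threshold:
        count *= ratio
        periods += 1
    return periods

# Hypothetical: 50 patients needing beds today, doubling every 3 weeks,
# against roughly 300 Ebola beds in the country.
periods = periods_until_exceeds(50, 2, 300)
print(periods * 3, "weeks")  # 9 weeks (50 -> 100 -> 200 -> 400)
```

Whatever the true starting number, shifting it up or down only moves the crossing point by a few weeks; the doubling does the rest.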
Something must be done, but what? A donors’ conference is happening in London today, where developed countries will pledge more money, divvy up responsibilities, and try to figure out how to channel emergency assistance through or past state channels.
Already, soldiers from the US, UK and France are in affected countries – numerically dwarfing the build-up in Syria / Iraq. (Parse that, for a moment. See where the biggest actual threat seems to be.) The US military is focused on Liberia, the UK on Sierra Leone, and France on Guinea.
And these are not small numbers of troops. The Americans have sent the bones of a brigade to Liberia, where they are building multiple 100-bed military hospitals. The UK is aiming to build a 700-bed hospital in Sierra Leone. The military is used to building instant infrastructure in impossible environments, but will it be enough?
The new Sierra Leone hospital will need to hire, and train for the special conditions, about 7,000 personnel, most of them nurses. Where will they come from? A mix of local and international, probably, but that’s a lot harder to put together than it sounds.
What about the soldiers themselves? It sounds very prime-ministerial to ‘send in the army’ to fill and deploy sandbags against the UK’s seasonal floods, to save the day when G4S has cocked up tending Olympics security lines, to be on six-hour standby when Hackney and Peckham are engulfed in riots. But the boots on the ground belong to real people, with real families.
A practicality; UK soldiers deploying abroad usually insure their lives and incomes under private sector schemes like Pax. (Because no, an army pension is not typically enough for a surviving family to live on, and yes, your average critical and life insurer doesn’t cover deployment to war zones.) Pax doesn’t cover Ebola and it’s not likely to either. It’s one thing deploying somewhere terrible when your family will be looked after in case of the worst, quite another when you’ve just voided your income protection along with your fatigues, and the people you love will be kicked off the patch six months after your ugly and lingering death.
Another practicality: it’s all very well to fly the odd Ebola-stricken aid worker back to the UK to be treated – though it presents a stark and nasty calculus of the respective values of African and European/American lives. But how will that work if we’re sending a dozen or two dozen sick and infectious soldiers back every month to the UK’s precious few medical isolation units? As to the dead, the flower-lined streets and heroes’ laments of Royal Wootton Bassett will be a dim memory for those felled by a revolting disease spread partly through contaminated corpses.
Scale matters. A lot. It changes not just the scope but the type of problem we’re dealing with, and the proliferation of problems around that. An exponentially growing problem is a problem that metastasizes out of recognition every few months.
Of the three Jews described by George Steiner as, in Corey’s summary, having formulated a great and demanding ethics/politics, Jesus is to me the most interesting.1 That thought struck me while reading Jerry Cohen’s Self-ownership, freedom and equality, a Marxist response to Nozick. As Cohen observes early on, Marxists seem to have a lot more difficulty responding to Nozick than do (US) liberals or social democrats. That’s because the notion of self-ownership central to Nozick’s argument is closely allied to the Marxian idea that capitalism inherently involves exploitation (that is, extraction of surplus value from labor). Nozick’s claim was that the same is true of taxation, or any kind of claim on private property imposed by the state.
I’ll come back to self-ownership in a little while. The more interesting point, to me, is that Nozick’s argument was refuted in advance by Jesus when he was asked by Pharisees (arbiters of the law laid down by Moses) whether it was lawful for Jews to pay taxes to the Romans. This was, of course, a trap, since he could be arrested for saying No and discredited for saying Yes. Jesus showed them a coin with the emperor’s head on the obverse and said “Render unto Caesar the things that are Caesar’s; and unto God the things that are God’s”. And “when they had heard these words, they marvelled, and left him, and went their way.”
Jesus’ point is just as valid if the coin is replaced by paper currency bearing the picture of a president, or rent from a land title issued by a state, or a dividend coupon from a corporation established under state law. All of these things were initially obtained from states under conditions that (in most cases, explicitly) involved the obligation to pay taxes as determined by the legal processes of those states. Someone who takes Caesar’s coin and then repudiates the associated obligation to pay taxes is, quite simply, a thief (of course, theft implies property, and vice versa).
How does all this relate to self-ownership? In my view, this is nothing more than a linguistic confusion.2 Our relationship to our bodies and thoughts, to our friends and family, and even to the objects we use in our daily life, is fundamentally distinct from the property rights we may, or may not, derive from, and have enforced by, states. That’s true even though the same grammatical structures (genitives and clitics) are used for both. This is most obvious from the fact that most (if not all) actually existing property rights in the world today can be traced back to systems which encompassed some form of slavery.
Moreover, systems of property that do recognise self-ownership must necessarily allow some form of slavery. Ownership implies alienability, so that freemen can sell themselves and their families3 into slavery, peonage or indentured servitude.
This brings us to the idea, shared by Marx and Calhoun (among many others) that wage employment is inherently a form of slavery. This conclusion, I think, reflects the fact that self-ownership is the wrong starting point for thinking about these issues.
The fact that most employment relationships involve some degree of exploitation of the worker by the employer reflects the fact that employers are mostly richer and more powerful than workers. A change in the formal relationship doesn’t change the facts, and is often associated with intensified exploitation. An example is the conversion of workers into nominally independent contractors, often used in Australia as a method of union-busting.
To sum up, the whole idea of basing a theory of social justice on self-ownership, or any kind of natural right to property derived from self-ownership, is inherently self-contradictory. State-created and enforced property rights, including the associated taxation systems, are social institutions which may or may not contribute to socially just outcomes, but have no moral standing in themselves.
But even if a system of this kind established gender equity and notionally gave everyone self-ownership, it would not change the dependence of children on their parents or other adults. So, by the time children reached adulthood, they could be burdened with unrepayable debts, as typically happens in systems of debt peonage.
I hesitate to post this little item because it involves praise of me (with a term, as you may recall, that I really don’t like), but…John’s complaining that we’re not posting enough, and I think the topic in this item might be of interest to readers.
The context is that my friend, Peter von Ziegesar, who’s a filmmaker and author (of an affecting memoir about his brother that you really should read), was interviewed by PEN America and was asked, “While the notion of the public intellectual has fallen out of fashion, do you believe writers have a collective purpose? How about artists? Is it a shared purpose?”
In his response, Peter says in part:
Since we’re on the topic of appalling and bizarre things said by rightwingers, here’s my entry, from this morning’s inbox, with the headline above. It’s from the Foundation for Government Accountability, a Florida thinktank closely linked to ALEC (it also has some overlap with Cato and the State Policy Network).
The “argument” is that the expansion gives health care to poor people “many of whom (35 percent) with a record of run-ins with the criminal justice system”. This is illustrated with a “light-hearted” YouTube cartoon of convicts (riding in Cadillacs, naturally) pushing old ladies out of the line to get into the luxurious health care club that is Medicaid.
Given the catchy use of percentages (the 35 per cent figure is applicable to any assistance given to the poor), we can expect to see this one resurface in the Repub memepond on a regular basis. Paging Mitt Romney.
Crooked Timber seems to be suffering from a deficit of posts. I blame excess of virtue on my part. I was going to post about that Kevin Williamson piece that has set everyone off. I noticed it before it was a thing! And now it’s gone viral. And he’s followed up with a Twitter thing about hanging women who get abortions. Lovely.
Here’s the thing. 1) He’s trolling. 2) On or about Monday afternoon I realized this specific style of trolling bothers me a bit less than it did a couple years back.
1) I’ve grown old and cold and my youthful idealism for truth and justice has dried up.
2) I don’t wear my old “I refute Jonah Goldberg posts that haven’t even been written yet” t-shirt much anymore – because, seriously. Life’s too short to be always trying to live on the bleeding edge of NR nonsense. “Tastes are composed of a thousand distastes” (Paul Valery) and all that. Still.
3) I just don’t see this sort of rhetorical performance being a culture war winner for conservatives any time soon. If Williamson is just going to prove Dunham’s point, give or take – well, why the hell not? If he thinks the solution to the problem of getting down with his bad self is ‘keep digging!’, who am I to say no?
But life is always better with greater intellectual clarity, if it can be achieved, so let me conclude this post by explaining something about Williamson’s Tweets, which are baffling, and have actually been Boing Boing’ed.
What, you may ask, is ‘the personhood dodge’? That is, why does Williamson think that his views are strictly scientific (not religious) and that the only way to be pro-choice is by indulging in some sort of mystic mumbo-jumbo?
The answer is provided here.
There are many religious people in the pro-life camp, but it is not a religious question. It is a question about the legal status of an entity that is under any biological interpretation a 1) distinct, 2) living, 3) human 4) organism at the early stages of development. Consider those four characteristics in order: There is no scientific dispute about whether an embryo is genetically distinct from the body in which it resides, about whether the tissue in question is living or not living, about whether the tissue in question is human or non-human, or whether it is an organism as opposed to a part of another organism, like an appendix or a fingernail.
The pro-abortion response to this reality is to retreat into mysticism, in this case the mysterious condition of “personhood.” The irony of this is that the self-professedly secularist pro-abortion movement places itself in roughly the same position as that of the medieval Christians who argued about such metaphysical questions as “ensoulment.” If we use the biological standard, the embryo is exactly what pro-lifers say it is: a distinct human organism at the early stages of development. If we instead decide to pursue the mystical standard of “personhood,” we may as well be debating about angels dancing on the head of a pin.
This is at least the sort of argument that is interesting to discuss (probably not with Williamson, who is obviously way too busy not caring about Lena Dunham having sex. But maybe he can take a break from all that.) The argument stands in a long line of similar arguments that try to finesse the is/ought distinction, by finding some scientific is to substitute for some-or-other puzzling ought. It’s a classic positivist gambit to say that anything that isn’t strictly scientific is therefore mystical (even though it’s actually kind of implausible that this opposition is exhaustive.)
It’s easy to see that Williamson’s argument generates odd implications. Thus, it can only be swallowed simultaneously with some extreme moral revisionism of ordinary attitudes and notions.
Suppose we encounter a race of non-human aliens that are, like us, sentient. They feel pain and pleasure. They have beliefs and desires, they laugh and cry and fall in love. They make life plans. Can we torture and kill them with impunity? After all, they lack human DNA. Obviously no one is going to say it’s just obvious that we can.
Suppose that for some strange reason a woman is pregnant with a genetic clone of herself, so that the thing growing inside her is not genetically distinct from her as an organism. Does it seem more ok to abort a clone, merely because it lacks a unique DNA signature? I think not. For that matter, would it be ok to murder an adult human clone – or one of two genetically identical twins, so long as you spared the other?
Suppose you want to defend the permissibility of factory farming against Peter Singer-style arguments? Would you simply repeat, over and over, that these animals have been tested and found to contain no human DNA?
Last but perhaps not least: human knowledge about basic truths of genetics is fairly recent. But human ethics is ancient. It is full of strictures against murdering, robbing, unfair treatment. It stipulates duties of care, on and on. If all this is really about genetic facts (not persons) then it seems to follow that all of human ethics is one giant Gettier problem. Quite literally, no human knew a thing about right and wrong before we basically knew how DNA works. Would it make sense to say that humans have moral knowledge of right and wrong, but that no humans did before 1953? Also, real knowledge about genetics is even today a fairly scarce commodity. (I confess to large gaps in my own knowledge.) Would it make sense to say that most of us take it on scientific faith that murder is wrong? That is, we usually have to trust the CSI boys and girls not just about the forensic details of a given murder scene, but about the victim being a genuinely qualified genetic candidate for the moral status of being murdered?
The basic formula for all such silly counter-examples is simple: it is a contingent fact that all known and accredited subjects entitled to the highest level of moral care do have unique human DNA signatures (with the exception of twins). So imagine a world in which that contingency doesn’t hold. What will our moral judgments track, in that world? Not the DNA signature (or lack thereof). That is, the reason we extend moral respect to some things, not others, is not that we value DNA. Rather, we judge some things, not others, to be persons.
If it turns out that personhood is a scientifically disreputable category, what follows is that the content of human morality is scientifically disreputable, for better or worse. If it turns out that our sense of personhood is vague, or conflicted in some cases, yet our moral sense demands an answer, then certain sorts of cases are just never going to be morally comfortable. We will have doubts and a nagging sense that there is something arbitrary, or absurd, about our ethical outlook. We will feel we’ve gone wrong somewhere. The pieces don’t fit.
But probably that isn’t such a surprising result. You can say that this is a proof that all humans are mystics. They go around all day believing in stuff that has no scientific basis, that doesn’t even really make sense, if you really push it. But, since we associate the term ‘mysticism’ with more specific forms of belief and behavior, perhaps this is not the best way to talk about it.
So Williamson is basically committing an old-style is/ought scientistic conflation, or positivistic fallacy, like I said. But maybe there’s a more general term for this fallacy? It goes like this. You notice that some subject, X, is a mess. But there is some subject, Y, in the vicinity, that can be handled neatly. You infer that X must be Y. Because what are the odds that the universe isn’t neat and tidy? Maybe this is just Occam’s Broom?
[UPDATE: thinking about it a bit more, after reading a comment by Brad DeLong, who takes Williamson to be reducing personhood to DNA, I think actually he is reducing moral truth to biological truth, while simultaneously being an eliminativist about personhood. It’s wrong to murder. That’s biology. But there’s no such thing as a person. That’s also biology. Curious combination.]
Important developments in Hong Kong, where students and citizens are protesting to get more democratic reforms. According to various internet reports (various posts on the BBC website, the Huffington Post, Bloomberg), college and university students went on strike last Monday to protest Beijing’s decision not to allow open nominations for candidates for the 2017 elections in which the leader of Hong Kong would get elected. Protesters are worried that the closed nominations will mainly draw candidates who follow the Beijing line. From the perspective of an outsider, this seems like a textbook case: elections will not be democratic if the nominations themselves are not democratic.
The civil disobedience movement demanding more democracy is known as Occupy Central: the BBC has a short piece on the movement that helpfully explains their demands and gives some background information. Occupy Central is planning a multiple-day sit-in at Hong Kong’s financial district starting October 1st.
According to the BBC, “most of China’s state-run media outlets have not commented directly on the student-led protests.” Which makes it all the more urgent and important that people-controlled media, such as independent blogs like ours, share the news and talk about it. Consider this an open thread, for sharing views, information, insights and updates.
Jeffrey Toobin has a fascinating piece in this week’s New Yorker on the effort of individuals to get information about themselves or their loved ones deleted from the internet.
Toobin’s set piece is a chilling story of the family of Nikki Catsouras, who was decapitated in a car accident in California. The images of the accident were so ghastly that the coroner wouldn’t allow Catsouras’s parents to see the body.
Two employees of the California Highway Patrol, however, circulated photographs of the body to friends. Like oil from a spill, the photos spread across the internet. Aided by Google’s powerful search engine—ghoulish voyeurs could type in terms like “decapitated girl,” and up would pop the links—the ooze could not be contained.
Celebrities who take naked selfies, ex-cons hoping to make a clean start, victims of unfounded accusations, the parents of a woman killed in a gruesome accident: all of us have an interest in not having certain information or images about us or our loved ones shared on the internet. Because it provides such a powerful sluice for the spread of that information or those images, Google has become the natural target of those who wish to protect their privacy from the prying or prurient eyes of the public.
In Europe, Toobin reports, the defenders of the right to privacy—really, the right to be forgotten, as he says—have had some success. In the spring, the European Court of Justice upheld the decision of a Spanish agency blocking Google from sharing two short articles about the debts of a lawyer in the newspaper La Vanguardia. While the newspaper could not be ordered to take down the articles, the Court held that Google could be “prohibited from linking to them in any searches relating to” the indebted lawyer’s name. As Toobin writes:
The Court went on to say, in a broadly worded directive, that all individuals in the countries within its jurisdiction had the right to prohibit Google from linking to items that were “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes for which they were processed and in the light of the time that has elapsed.”
While the decision has quite a bit of support in Europe, it has been widely criticized in the United States as a violation of the First Amendment, threatening both freedom of speech and freedom of the press. Where the right to privacy is held to be “a fundamental human right” in Europe, claims Stanford scholar Jennifer Granick, Americans are more sensitive to issues of freedom of expression; they prefer to deal with the privacy issues, if they deal with them at all, in a piecemeal fashion.
Europe’s position, Toobin reports, comes out of the continent’s long experience with state surveillance, with governments making use of personal data in ways that presumably the American state has not. He cites the case of the Nazis in the Netherlands and the Stasi after the war. (Though what about J. Edgar Hoover? I remember reading somewhere—though I’ve never been able to find where, which makes me wonder if I just made it up—that the ratio of government informers to population in the US during the Second World War was almost on par with that of postwar East Germany.)
In any event, Toobin concludes, there’s a difference between Europe and the US when it comes to the right to privacy on the internet.
As Toobin goes onto explain, Americans can legally protect themselves from unwanted scrutiny or embarrassment on the internet through a different legal instrument: copyright law.
As Siva Vaidhyanathan helpfully explained to me, Google is required by law to honor the claims of those who own specific words or images by refusing to link to that copyrighted material and by removing, when asked, any links to it (Google also will not allow anyone to post copyrighted videos on YouTube, which it owns). So if a celebrity were to take a selfie, or if the Catsouras family owned the photographs of their daughter – they tried, unsuccessfully, to get the California Highway Patrol to give them the copyright – Google could be forced, or persuaded, to stop linking to any sites that posted them.
That threat of copyright violation, Toobin explains, can be very effective.
In August, racy private photographs of Jennifer Lawrence, Kate Upton, and other celebrities were leaked to several Web sites….Several of the leaked photographs were selfies, so the women themselves owned the copyrights; friends had taken the other pictures. Lawyers for one of the women established copyrights for all the photographs they could, and then went to sites that had posted the pictures, and to Google, and insisted that the material be removed. Google complied, as did many of the sites, and now the photographs are difficult to find on the Internet, though they have not disappeared. “For the most part, the world goes through search engines,” one lawyer involved in the effort to limit the distribution of the photographs told me. “Now it’s like a tree falling in the forest. There may be links out there, but if you can’t find them through a search engine they might as well not exist.”
I don’t have much of an opinion about the fundamental issue in the article: the battle between the right to privacy and freedom of speech. Toobin expertly presents the various arguments on all sides of the question, and it’s pretty clear that the European approach, favoring the right to privacy, raises many difficult legal and institutional issues.
What I’m more struck by is how little traction the right to privacy has in the United States, as compared to the claims of copyright.
I don’t know much about copyright law, either in the US or in Europe, but I can’t help wondering if one of the reasons its claims are so potent here, trumping those of privacy, is that copyright is a property right. (Siva said he thinks it’s just that copyright has powerful corporate defenders and wealthy lobbies; privacy does not.)
The right to privacy, of course, is historically intertwined with property rights: in the Griswold decision, for example, which struck down Connecticut’s ban on contraception, Justice Douglas cited the Third Amendment, which forbids the quartering of soldiers in private homes, as the basis for a broad constitutional right to privacy. And though Henry Farrell wrote to me in an email that there is an increasing trend in the US to treat privacy as a property right, the fact is that the right to privacy is not nearly as dependent on the claims of property as is copyright, which is a variant of intellectual property (patents and so forth).
Where copyright is designed to protect a person’s ownership over a text or image on the theory that that ownership benefits the public—if an author can reap the full monetary benefits from the production or sale of a text or image, she will be encouraged to produce those texts and images—the right to privacy is designed to protect a person’s claims against the public. Copyright protects a person’s property by conscripting it on behalf of the public (at least in theory); privacy shields a person from the public.
It’s interesting that an allegedly individualistic US is less sensitive to these issues of privacy than an allegedly collectivistic Europe, but the rights of privacy in the cases Toobin cites don’t involve any property rights. Save the damage to one’s reputation, which might gain some traction from the law if a person were powerful, but gets virtually none when a person is not. (If I remember correctly—it’s been a while—one of the cornerstones of the legal theory of hate speech, at least as scholars like Mari Matsuda, Charles Brown, Kimberlè Crenshaw, and Richard Delgado laid it out, was the attempt to extend the protections of libel law to an entire social group or class, so that disfranchised collectivities, like African Americans, could receive the same legal protections for their social status that powerful individuals traditionally had received for theirs. As they argued, when the personal reputation of a wealthy individual is at stake, the law could be far less solicitous of the free speech claims of that individual’s critics, reputation being a kind of property right that the state ought to protect. As I say, it’s been a while since I read this literature, so I could be completely misconstruing it. But I digress.)
The whole discussion in Toobin’s article reminds me of another Justice Douglas opinion: his concurrence in Heart of Atlanta Motel v. United States. In that case, the Supreme Court upheld Title II of the Civil Rights Act. That provision made it illegal for restaurants, inns, and other public accommodations to discriminate on the basis of race. The Court claimed that Title II was a legitimate exercise of Congress’s power under the Commerce Clause because the travel of African Americans to and from the South involved interstate commerce, and ending segregation in these public accommodations would facilitate such travel and, by extension, interstate commerce.
In his concurring opinion, Douglas conceded that Congress had the right to use its interstate commerce powers in these ways, but he was nonetheless discomfited by the Court’s resting Title II on that provision of the Constitution. He would have preferred to rest it on Congress’s power under the 14th Amendment.
Though I join the Court’s opinions, I am somewhat reluctant here, as I was in Edwards v. California, 314 U.S. 160, 177, to rest solely on the Commerce Clause. My reluctance is not due to any conviction that Congress lacks power to regulate commerce in the interests of human rights. It is, rather, my belief that the right of people to be free of state action that discriminates against them because of race, like the “right of persons to move freely from State to State” (Edwards v. California, supra, at 177), “occupies a more protected position in our constitutional system than does the movement of cattle, fruit, steel and coal across state lines.” Ibid. Moreover, when we come to the problem of abatement in Hamm v. City of Rock Hill, post, p. 306, decided this day, the result reached by the Court is, for me, much more obvious as a protective measure under the Fourteenth Amendment than under the Commerce Clause. For the former deals with the constitutional status of the individual, not with the impact on commerce of local activities or vice versa.
But America being America, commerce ruled. And rules. Like property.
What was it those two dudes said? “In bourgeois society capital is independent and has individuality, while the living person is dependent and has no individuality.”
On Monday, 13 October 2014, at 11.45 am, the winner of the 2014 Nobel Prize in Economics will be announced (yes, we know it is officially the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel but that’s not the focus of this post). Some have said that the prize should go to Thomas Piketty, for his best-selling, important and highly influential book Capital in the twenty-first century. I, too, think this is a great book, for a variety of reasons.
But there is another inequality economist who is at least equally, and arguably much more, deserving of the Nobel prize, and that is Anthony B. (Tony) Atkinson. For close readers of Piketty’s work, this claim shouldn’t be surprising, since Piketty credits Atkinson with “being a model for me during my graduate school days, [and Atkinson] was the first reader of my historical work on inequality in France and immediately took up the British case as well as a number of other countries” (Capital, vii). In a recent interview with Nick Pearce and Martin O’Neill, published in Juncture, Thomas Piketty calls Tony Atkinson “the Godfather of historical studies on income and wealth” (p. 8). So my hunch is that Piketty would endorse the claim that if the Nobel Prize were awarded for welfare economics/inequality measurement, Atkinson should get it.
In addition to his scholarly contributions, Atkinson has always tried to make his work accessible to a wider audience – to empower people (and policy makers) by making economic research accessible to non-specialists. Many of the books he wrote are accessible to non-economists – and he has published a remarkably large number of books (even more remarkable by the standards of his profession): no fewer than 22 monographs (some of which are co-authored) and 17 edited volumes and reports. He has also repeatedly argued that economics is a moral science, and that it should once again understand itself as one; last year I argued here why I couldn’t agree more.
Many citizens (and scholars) who think that we should pay more attention to inequality would be thrilled if Thomas Piketty got the Prize. But given that Piketty (and every other currently successful welfare economist) stands on the shoulders of Atkinson, it’s clear that Atkinson should get the prize. Yet perhaps there are good reasons to split the Nobel Prize between Atkinson and Piketty? Clearly Piketty has had an enormous impact on political and public debate worldwide, which is rare for an academic economist. But there’s another reason the Nobel Prize Committee could consider co-awarding it to Atkinson and Piketty, and that is that Piketty has linked welfare economics with macro-economics, which not many welfare economists do (1), and that opens up a promising avenue of future research. And there may be other reasons why Piketty is as deserving as Atkinson – if you think there are, you can add them below.
In any case, whether the Nobel Prize committee decides to give the prize to Atkinson alone or also to Piketty – I hope they don’t disappoint us by passing over welfare economics/inequality analysis altogether.
(1) I am walking on thin ice here, since I may be biased in my observations: when I undertook my graduate studies in welfare economics (around 1995-1999), this seemed to me to be the case, and the work I’ve been reading since in welfare economics has also been predominantly micro. But there may be more work out there connecting inequality measurement and macro-economics that I don’t know of. This is John’s terrain, really, and I would be more than happy to be corrected if needed. That’s what conversations are for.
George Steiner writes somewhere that the deepest source of anti-Semitism may lie in three Jews: Moses, Jesus, and Marx. Three Jews who formulated a great and demanding ethics/politics, an almost unforgiving and humanly unbearable ethics/politics, that the rest of the world, whatever their formal embrace of institutionalized Christianity or communism, has repeatedly bridled at and hated. And never forgiven the Jews for. Setting aside the bit of self-congratulation that lies at the heart of that formulation—ah, we Jews, we’re so ethical and righteous—I wonder if some part of what Steiner says may not lie at the heart of the rage and reaction that Hannah Arendt’s Eichmann in Jerusalem has elicited over the years. I mean, regardless of what you think of Arendt’s arguments, you have to admit: the book does get under people’s skin. And not just for a moment, but for more than a half-century now, with no signs of abating. And that may be because, taking my cues from Steiner, there is something unforgiving at the heart of that book. It is a relentless indictment—not just, pace what Arendt herself said later of the book, of one man, but of many men, and women—an indictment, despite Arendt’s best and professed intentions, in which ordinary readers (ordinary men) can’t help but see themselves. And an indictment in the name of (or at least implicitly and distantly in the name of) a difficult and demanding ethics and politics. An indictment that seems to stir the same kind of reaction to Arendt that historically was stirred up against the Jews. Oh, that Hannah Arendt: she sets herself apart; she thinks she’s smarter than the rest of us; she belongs to no one, not even the Jews. Only this time it’s not the reaction of just non-Jews to Jews, but also of Jews to a Jew. Shana Tova.
In my class today someone made reference to the Kitty Genovese case (it was relevant) and I commented, casually, that I thought that the claim that thirty-something people had looked on while Genovese was attacked had been discredited. Another student said “oh no, I am revising for a test later today about this” and proceeded to give us the standard account of the case. Here’s Nick Lemann’s New Yorker review of the books that seemingly discredit it.
I sent the students the link, and a different student wrote back that she had thought I was joking in class (they know I do that sometimes) and that as a psychology major she hears about the case in every class she takes. That got me thinking about the Milgram experiment (which philosophers make much more of than they do of the Genovese case), which, again, seems to me (I say “seems” because I have read part of Gina Perry’s book, and have heard her interviewed in depth) to have been discredited as well. And it made me wonder i) whether anyone has a refutation of Perry’s book but, more, ii) how quickly professors adjust their teaching when findings they have taught as gospel are thoroughly discredited. I was a bit shocked, frankly, that the Genovese case is still being taught as something to be regurgitated in a test, but I am also quite struck by the number of times I have heard philosophers call on the Milgram experiment as evidence for some philosophical view, and wondered how long it will take before it is removed from the philosopher’s armoury (and the psychologist’s lectures).
Unless I’m missing something, Kurtz’ actual argument that Hillary has consistently remained an Alinskyite radical is that, for decades, she has consistently done absolutely nothing whatsoever to suggest this is true – as one would expect! She is, to all appearances, moderate, incrementalist and pragmatic. Just like Barack Obama, who is such a model Alinskyite radical that he is on track to govern for eight years and retire to private life without once doing anything to suggest he’s got a radical bone in his body.
How much more sinister would The Manchurian Candidate have been if the trigger word were never spoken. The sleeper never wakes! (A lone hero tries to warn the world but, because there is literally nothing to warn people about, he is ignored.)
Back to Kurtz.
With Obamacare and much else besides, the legal and bureaucratic groundwork has already been laid for a leftist transformation of America. It is naïve to believe that Hillary would roll any of this back.
OK, now that would be a twist ending. Suppose Hillary is elected and we find out just how deep the rabbit hole goes. She was, and has remained, a Goldwater Girl. After 1964 she knew that sort of commonsense conservatism could not win openly. It was too easy for opponents to tar you as a radical. The whole Alinsky phase was then a ruse, to establish a veneer of political acceptability. This was deep cover, to get close to Bill Clinton and, through him, the levers of power. Flash forward. It’s been a long road but finally, in 2016, all the ‘naive’ people who expect from Hillary a radical rollback of Obamacare, and much else he and other Democrats have done for decades in a seemingly moderate, incrementalist, pragmatic spirit – after all, she says she’s a moderate! – are proved right! President Hillary confesses to the American people that she has only seemingly been supporting a consistently seemingly moderate politics all these years, because secretly she advocated a consistently moderate politics. But she knew the American people, who don’t like radicalism, would only go for moderation if it was cloaked as radicalism cloaked in moderation. She joins the Tea Party and goes down in history as a truly moderate Democrat.
“That’s a new one, blue skies on Mars.”
Over the last couple of weeks, I’ve seen four major reports (details over the fold) from very different sources, all making the same point: decarbonizing the world economy will involve economic costs that are trivially small.
Against the expectations of doubters, wind and solar PV are steadily increasing their share of electricity generation, to the point where they constitute the majority of new installations in many countries. Again, the costs have been trivially small: in Australia’s case, made up almost entirely of the reduction in asset value imposed on existing generators.
There is, as far as I am aware, no credible analysis to support the opposite claim (call it the economic armageddon hypothesis) that decarbonization will involve economic costs sufficient to greatly reduce living standards or, for poor countries, to prevent catch-up to the developed world. (Again, more detailed argument over the fold.)
Nevertheless, past experience suggests that lots of people are sufficiently wedded to the economic armageddon hypothesis that neither this, nor any other evidence will change their minds. I have previously analyzed this unwillingness to respond to evidence in terms of Noah Smith’s Bayesian definition of “derp“: “the constant, repetitive reiteration of strong priors”.
But I no longer think this is sufficient. A central concept of Bayesian decision theory is the separation of preferences from beliefs. That is, your subjective belief about the probability that a proposition is true should be independent of whether (because you have bet on it, or for some other reason) you want it to be true. This is the opposite of what is often called “motivated reasoning” or, less politely, “wishful thinking”.
This, I think, is the central distinction between “derp” and “denial”. Both involve the rejection of factual evidence that would (to a person without strong preconceptions) be overwhelmingly strong. This must involve strong prior beliefs. Denial differs from derp in that these factual beliefs derive from preferences, and are unlikely to undergo any updating. If anything, denial may be strengthened by evidence of the proposition being denied.
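The derp/denial distinction can be made concrete with a toy Bayesian update. In this sketch (my own illustration, with made-up numbers, not anything from the reports discussed here), each independent report counts as evidence with the same likelihood ratio: a strong prior ("derp") slows convergence but does not stop it, while a denialist effectively assigns every report a likelihood ratio of 1 and so never moves at all.

```python
def update(p, lr):
    """One Bayesian update in odds form.

    p  -- prior probability that the proposition is true
    lr -- likelihood ratio of the evidence,
          P(evidence | true) / P(evidence | false)
    """
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

def after_reports(prior, lr, n):
    """Posterior after n independent reports, each with likelihood ratio lr."""
    p = prior
    for _ in range(n):
        p = update(p, lr)
    return p

# A neutral observer (prior 0.5) is nearly convinced after four reports.
# A "derpy" observer (prior 0.01) moves, but only partway.
# A denialist (lr = 1: every report treated as uninformative) never moves.
neutral = after_reports(0.5, 3, 4)    # about 0.988
derp = after_reports(0.01, 3, 4)      # 0.45 -- updating, just slowly
denial = after_reports(0.01, 1, 4)    # 0.01 -- no updating at all
```

With four reports of likelihood ratio 3, the neutral prior ends up near 0.99, the derpy prior of 0.01 climbs only to 0.45, and the denialist’s 0.01 is untouched: derp is eventually overcome by an accumulation of evidence, denial is not.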
This in turn suggests different possible cures. Derp may eventually, if very slowly, be overcome by an accumulation of evidence. By contrast, denial can only be addressed by changing the source of wishful thinking; for example, by convincing rightwingers to stop being rightwingers.
That brings us to the question of why, if the case is so overwhelming, the political resistance to action on climate change has been so strong, and whether it can be overcome. I have a go at this in another post on my blog, where this one was already posted. It might be worth reading the comments threads to these posts before jumping in here.
As promised above, here are my sources for the proposition:
First, there’s Pathways to Deep Decarbonization an international collaborative project under the auspices of the UN.
Second, the Better Growth Better Climate report from the Global Commission on the Economy and Climate
Third, this report on Green Growth from the Center for American Progress (covers the US only)
And, most strikingly, there is this report from staffers at the International Monetary Fund, long the guardian of fiscal rectitude, which concludes that for most countries, the local side benefits of reducing pollution would be sufficient to offset the costs of carbon prices up to $50/tonne.
There are lots more analyses making the same point, which can easily be checked with an upper bound calculation.
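To sketch what such an upper bound calculation looks like (the figures below are my own round numbers for 2014, not taken from any of the reports above): charge the full $50/tonne price on every tonne the world emits, and even that outside limit is a small share of world output.

```python
# Back-of-envelope upper bound on the cost of a $50/tonne carbon price.
# Assumptions (round numbers, roughly right for 2014):
carbon_price = 50.0              # $ per tonne of CO2
annual_emissions = 35e9          # tonnes of CO2 emitted worldwide per year
gross_world_product = 80e12      # $ of world output per year

# Charging the full price on every tonne overstates the true cost, since a
# carbon price is the marginal (not average) cost of abatement -- which is
# exactly why this is an upper bound rather than an estimate.
max_annual_cost = carbon_price * annual_emissions        # $1.75 trillion/year
share_of_output = max_annual_cost / gross_world_product  # about 2.2 per cent
```

Roughly 2 per cent of gross world product, as an absolute ceiling, is a long way from economic armageddon.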
On the other side, I haven’t seen anything that comes close to being a credible source for the economic armageddon hypothesis. What I have seen are
All of these are, of course, the standard argumentative practices of climate science denialists, who are entirely consistent in their treatment of economic issues. Unfortunately, there are also many who, like Trainer, regard themselves as being on the environmental side of the debate but give aid and comfort to its enemies by backing their bogus claims of economic armageddon. At this point, it is necessary to extend the denialist label to cover this group as well.
I’m sure that this point has been made somewhere or other in the general debate on email spying and the NSA/Snowden revelations, but in my opinion not often enough or forcefully enough. People who want to dismiss the whole thing as “no big deal” are, in my view, totally underestimating the scale of the blind trust that’s required of them. In other words, even opponents of ubiquitous surveillance (like Kieran in this worked example) tend to assume that the institution which has access to your information is the institution which collected it. But that’s not necessarily the case at all.
The Leveson Inquiry in the UK demonstrated that the Police National Computer could be accessed by more or less any tabloid journalist with a phone and an account with a crooked detective agency (which served as the conduit to crooked insiders). The Manning and Snowden revelations, whatever else they’ve shown us about the world, have made it clear that mid-level employees can get access to huge amounts of top secret data as long as they’ve got the wit to smuggle it out on a thumb drive.
So the question is not so much “do you trust the CIA/NSA/MI6/etc?”. It’s “Do you trust every single sysadmin working for these organisations? Every single analyst? Every single middle manager?”. The CIA might not be interested at all in my dull mobile phone conversation metadata, but someone else might – the Leveson inquiry was told how the UK’s PNC was used by one copper to check out his daughter’s new boyfriend. In terms of our personal data, the kind of uses which the agencies want to be allowed to make, while worrying enough in themselves, are the tip of the iceberg. And all the policies which might prevent it from being accessed by blackmailers, tabloid journalists, nosey neighbours and basically anyone else, are themselves top secret and not subject to any sort of legal oversight.
This isn’t a conspiracy theory, as you can see; it’s based on the fact that big and complicated systems are set up to malfunction, particularly if they are able to declare themselves above any regulation at all. And the way in which this particular system is set up to malfunction is easily predictable and potentially very damaging to innocent people. I am personally not at the stage where I trust every single person who might be hired for a low level IT job in a security agency, and I’m not sure that I trust an entirely opaque set of safeguards with no accountability either.
Feminism, social activism, eye-catching stunt made eye-catching because it’s not a stunt.
About a dozen single mothers kicked out of their hostel in east London have occupied a ‘show-flat’ in the former Olympics estate that Newham Council is trying to flog while it has 24,000 households on its waiting list.
Increasingly, I just can’t justify the amount of volunteer time I spend on Internet rights. Yes, we are handing over control of every aspect of our lives to insidiously corrupt and obviously ineffective states, and that is a terrible, terrible thing. But I live in a city of dirty billionaires and hungry children. This made me cry. Something has got to give.
Long-time readers of this blog know that I am an apostate of the economics discipline. When I was 17, I wanted to study something that would be useful to help make the world a better place. I thought that economics would meet that requirement, and it also seemed natural since I always had a strong interest in politics, in particular the question of how to organize society. For reasons explained here, I eventually gave up the hope that economics (as I studied it in the 1990s) could give me that knowledge, and diverted to political theory/philosophy and later also ethics, where I’ve been happy ever since.
But for the first time in many years, I felt a shiver of regret at having left economics – and that was when, in April this year, I started reading Capital in the Twenty-First Century, the best-selling book by Thomas Piketty. Reading Capital was a great intellectual adventure, and at the same time enjoyable to read (many have said the translator, Arthur Goldhammer, deserves part of the credit for that). It is hard for academic economics to evoke positive feelings in its readers, but Capital did so with me, for at least two reasons.
One is that the book is extremely interesting and rich, as by now many have pointed out. It brings economics back to the wider public, and gives the reader a true sense of what is at stake in studying the economy and hence engaging with economics. It opens up economics to all those for whom the study of economics is relevant – that is, all of us.
The second source of excitement came from my sense of what this book could do to change the economics discipline. Students of economics have been arguing for years that economics is too narrow, too focussed on elegance and mathematical beauty, insufficiently rooted in both the history of economic thought and empirical economic history, and insufficiently aware of the institutional and cultural context. None of this applies to Capital. In fact, I think that by putting forward such a strong, historically-based, empirically-grounded, and theory-rich account of the workings of capitalism and the resulting inequalities, Piketty is offering us a concrete alternative for how to do economics.
Heterodox economists have tried for years to say how economics as a discipline should change. In my view, overall they have failed to make much of a difference (which is not the same as saying that their arguments were bad or unconvincing!). Economics by now is a science where the mainstream is extremely powerful; other social sciences are much more internally pluralistic/heterogeneous. Hence, it is hard to be a happy heterodox economist working in an economics department (with the exception of the handful of heterodox departments that are left). So I am not surprised that many heterodox economists have left the economics discipline – and moved to economic history, development studies, economic geography, political theory, even philosophy.
Thomas Piketty has the power that heterodox economists never had. He has shown how economics can be done differently. He is a professor of economics at a prestigious university, hence he is situated squarely within the center of the economics discipline. He hasn’t used elaborate meta-theoretical critiques to show why mainstream economic models and methods fall short, but has simply put into practice a different way of doing economics – while noting en passant that some of his findings are beyond the radar screen of mainstream economics because of its built-in assumptions. Sure, he’s not the first to have worked with such methodological commitments, but with the success of Capital, and building on the academic credibility gained by his earlier scholarly articles and books, Piketty may have the power to make a real difference to what the economics discipline will look like in the near future.
Alan Dershowitz expresses his opinion on academic freedom, the Salaita case, and why UIUC natural scientists appear to have been less likely than social scientists and humanities people to support him.

Some, including Alan Dershowitz, a Harvard law professor who backed Summers and opposed the tenure bid of Norman Finkelstein, the controversial former political scientist at DePaul University, have a more cynical take. Dershowitz said that in his experience, academics working in STEM tend, “in general, to be more objective and principled, and those in the humanities tend to be ideologues and results-oriented, and believe it’s the appropriate role of the scholar to use his or her podium to propagandize students.” Dershowitz said he believed personal opinion had influenced how those in the humanities viewed both the Salaita and Summers cases, and that scientists were likelier to examine the evidence impartially. “I would bet anything that 99 percent of the people who are demanding that [Salaita] be restored tenure would be on the exact opposite side of this if he’d been making pro-Israel but equally uncivil statements,” he said.
There is a very strong case to be made against “results oriented” ideologues in the academy but I think that it isn’t quite the case that Dershowitz is making.
To illustrate this case, let’s turn to some relevant quotes from an article in the now defunct Harvard student magazine 01238, preserved at The Faculty Lounge.

Dershowitz is, however, notorious on the law school campus for his use of researchers. (The law school itself is particularly known for this practice, probably because lawyers are used to having paralegals and clerks who do significant research and writing; students familiar with several law school professors’ writing processes say that Dershowitz reflects the norm in principle, if to a greater degree in practice.) … Several of his researchers say that Dershowitz doesn’t subscribe to the scholarly convention of researching first, then drawing conclusions. Instead, as a lawyer might, he writes his conclusions, leaving spaces where he’d like sources or case law to back up a thesis. On several occasions where the research has suggested opposite conclusions, his students say, he has asked them to go back and look for other cases, or simply to omit the discrepant information. “That’s the way it’s done; a piecemeal, ass-backwards way,” says one student who has firsthand experience with the writing habits of Dershowitz and other tenured colleagues. “They write first, make assertions, and farm out [the work] to research assistants to vet it. They do very little of the research themselves.”
I don’t recall that Dershowitz was himself quoted in the article in question; quite possibly he wasn’t asked. If he had been asked, he might very well have contested the description of his research practices that the article attributes to several of his former researchers. But imagine that a scholar in a department in the hard sciences (or social sciences or humanities for that matter), tenured or otherwise, conducted his research in the manner that the article attributes to Dershowitz. That scholar would deserve to be fired, regardless of whether his or her political leanings (or research findings) leaned hard left, hard right, centrist or whatever. He or she would be guilty of the most flagrant abuse of research standards. In the hard sciences, if you’re caught throwing out inconvenient data in order to justify a conclusion, you will be disgraced, and ought to be compelled to resign. The social sciences, likewise. If you’re a humanist, and you write articles claiming e.g. that the historical sources say x, when you have carefully and deliberately omitted the sources that say not-x, again you’re likely to be drummed out of the profession.
This is academic misconduct. Put more simply, you are cheating on the job you are supposed to be doing. You are not a scholar, but a hack and propagandist. Perhaps this kind of conduct is ubiquitous in law schools (the article claims that other colleagues of Dershowitz also do this), although I personally would be surprised. Outside of academia, there may, very reasonably, be different standards. Litigators are supposed to make the best case they can under the law for their clients. However, if my understanding is correct, they too have obligations as officers of the court, not e.g. to knowingly omit relevant information or citations. It is honorable for a litigator to be a hack, but only up to a point.
Holding unpopular opinions or saying harsh things on Twitter is not academic misconduct. I personally find some of the views that Alan Dershowitz has expressed (e.g. on torture) to be repulsive and indeed, actively depraved. I wouldn’t press to have him fired for saying those things, and if his job were threatened because he had said these things I would defend his right to employment, while holding my nose. If he had reached these opinions through real academic research (rather than outsourced hackish opportunism), it would be part of his vocation- academics are supposed to follow their search for knowledge wherever it leads them. If he were expressing those opinions outside an academic context, it would be his own private business.
As per Chris, Alex Gourevitch and Corey’s broader analysis, the Salaita case is best seen as an instance of a broader phenomenon: how control over people’s employment opportunities is being used to deny them the ability to express their political beliefs. It’s in the same class as this case, in which a foreman at a West Virginia mine was pressured to make contributions to GOP candidates (through a centralized process, in which her boss would be able to see who contributed and who did not), and alleges that she was fired when she failed to comply.
Academics, obviously, have self-interested reasons to defend against abuses of the sort that we saw in Salaita’s case. But they should also want to see these freedoms extended to the workplace more generally, even in instances where the results may seem individually obnoxious to them. Employers shouldn’t have any control, express or implicit, over their employees’ political activities outside the workplace. That they do have effective control in many US states is more a hangover from feudalism than anything that is justifiable in principle in a democratic state.
People who’ve been reading this blog for a long time won’t need to be told who Jim Henley is. He’s been blogging longer than we have (if we’re a product of the mid-Cretaceous, he’s been doing it since the early Jurassic). He’s also a wonderful guy. And he’s been dealing with a recurrence of his cancer, the loss of his job when his employer went under, the need to pay medical and transport bills and keep his equally wonderful family going. In short, he could use your help. If you would like to provide it, please go here.
The very insightful Ethan Zuckerman recently gave a convocation speech at his alma mater, Williams College. While his specific angle was not about this, I read it as a nice call for the importance of international students on campus, and of studying abroad (among other things).
One of the things I’ve learned in my research is that it’s much easier to pay attention to people than to places. If there’s someone you care about who’s from Haiti, if you’ve had the chance to travel there and meet people from Haiti, you’ll watch the news differently. You’ll have a connection to that place, a context for a story you hear. The events will be more real to you because Haiti is more real to you through the people you know there.
It is important though that international student recruitment not be restricted to international students who can pay full tuition. Personally, I remain extremely grateful to Smith College for its generous support of international student financial aid. When I was applying to US colleges from Hungary in the early 90s, it was the only school that came even close to offering enough aid to allow me to study in the US.
By the way, if you haven’t read Ethan’s book Rewire, you should. It’s a quick and very pleasant read with lots of interesting material and important insights on just how not connected we are in meaningful ways despite infrastructural connections.