In the space of a week, Jane, Mindles, and the commenters have fleshed out the Republican policy towards the poor. To wit:
1. Those tricksy bastards (Dems) are wildly overstating the problems [this post];
2. A lot of the problems associated with the lower end of the income scale are a result of the stupidity of the poor (and really, what can you do with the stupid?) [this post];
3. Almost all Republicans have suffered through much more trying times than any of the poor have faced - and they’ve kept the aspidistra flying, dammit; the poor need to stop whining [this post];
4. Mercy is twice blessed because it is given; it cannot be commanded by the government. If someone has screwed up and doesn’t get another chance - well, they made their own bed. That someone else, with a different background, has had a second chance (or however many chances one gets in getting from 20 to 40 as a drunk) is of no import whatsoever, and people who are envious of the latter group should have had the forethought to have better parents. Indeed, even asking that we temper our scorn for them is too much - might be a disincentive to change [drug post];
5. Of course, the poor don’t need to have forethought because we keep cosseting them. If we let a few old people starve to death on the streets, they’d smarten up, work harder, and start investing; doing anything at all to help the poor merely robs them of the incentive to improve their lot [SS post];
6. Occasionally, you run across the very rare situation where it’s hard to entirely blame the poor for their situation, like natural disasters. In those cases, we may give them some help. But, before doing so, it’s important to note
- that they’ve done very little for us;
- that they are insufficiently grateful at the moment of the crisis;
- that if we’re going to put aside our principles and help them, we must get credit! [stingy post]
Still, these two comments are the best:
“it seems that leftists and liberals are really, really innumerate… anyone interested in the real world and good in math seems to be very libertarian or conservative…” (Link)
and:
“A pound of ham will make the equivalent of 20 quarter pounders, by my math.” (This somewhat misses the point, as I wouldn’t put a quarter pound of ham on my sandwich, and probably neither would you.) (Link)
Ah, science. (And I grant that the comments are not strictly contradictory.) On a more serious note, I was thinking today of how much better off the residents of American inner cities would be if the Singapore model of hawker centres prevailed. Sure, there’s fattening char kway teow, but every hawker centre has a fruit juice and sliced fruit stand with cheap papaya, watermelon, and kiwi fruit, not to mention carrot juice. I understand that crime is a deterrent, but why exactly is it that US inner-city markets have such awful, expensive, fly-blown produce, even the ones in Oakland, CA? Is this true in poor neighborhoods in Great Britain? I understand that there are supply chain/perishability problems, but is it only this that makes it cheaper to sell St. Ides and a Big Grab Doritos than mustard greens?
Asia Source has an interview with Amartya Sen, which touches on the record of the World Bank and IMF, the evolution of Sen’s ideas on “capabilities”, democracy, the postwar histories of India and China, anticolonialism, and much else. (Found via INBB, which looks like a really interesting blog.)
I forgot to post a follow-up on my recent struggles with statistics. Well, it may be that blogging about something as opposed to actually doing it is shockingly useless displacement activity, but that doesn’t mean it’s completely wasted time. By the time I got to the exam, all aggravation had been expressed and I had some great suggestions for books to use if I didn’t scrape the pass this time round. Plus, I’d ditched the parts of the course I was just not absorbing (the Poisson distribution and other topics whose very names I’ve since erased from memory) and practiced using the parts I understood.
Luck played a part again (is this statistically significant, the third time in a row?). The really hard trick question, which I considered for 5 minutes and then skipped in favour of a straightforward regression, turned out to have been so fiendishly difficult that no one answered it, and it was excluded from the marking. My performance probably said as much about a well-practiced ability to do exams - by scouring the paper for every last point - as it did about my statistics ability. As I left the exam, I figured that with a bit of luck I’d score 12 out of 20. Which I did.
So, what have I learnt? A good bit more about the basics than on my two previous passes. This time round, I actually understood and used regression, and couldn’t understand why it had seemed so impossible last time. And a little bit more about probability, but a very useful little bit. I do agree with Kieran and other commenters that statistics is a really cool and useful tool, and I wish I could wield it better. But now I have (another) piece of paper that says I can do that passably well.
Thanks everyone for the advice, sympathy, and above all the book recommendations. I expect they’ll be useful to many struggling with stats as the pedagogy catches up (or slows down) to non-mathematical students’ needs.
Well, the Lancet study has been out for a while now, and it seems as good a time as any to take stock of the state of the debate and wrap up a few comments which have hitherto been buried in comments threads. Lots of heavy lifting here has been done by Tim Lambert and Chris Lightfoot; I thoroughly recommend both posts, and while I’m recommending things, I also recommend a short statistics course as a useful way to spend one’s evenings (sorry); it really is satisfying to be able to take part in these debates and, I would imagine, pretty embarrassing and frustrating not to be able to. As Tim Lambert commented, this study has been “like flypaper for innumerates”; people have been lining up to take a pop at it despite being manifestly not in possession of the baseline level of knowledge needed to understand what they’re talking about. (Being slightly more cynical, I suggested to Tim that it was more like “litmus paper for hacks”; it’s up to each individual to decide for themselves whether they think a particular argument is an innocent mistake or not.) Below the fold, I summarise the various lines of criticism and whether they’re valid or (mostly) not.
Starting with what I will describe as “Hack critiques”, without prejudice to the possibility that they might in isolated individual cases be innocent mistakes. These are arguments which are purely and simply wrong and should not be made, because they amount to slanders on the integrity of the scientists who wrote the paper. I’ll start with the most widespread one.
The Kaplan “dartboard” confidence interval critique
I think I pretty much slaughtered this one in my original Lancet post, but it still spread; apparently not everybody reads CT (bastards). To recap: Fred Kaplan of Slate suggested that because the confidence interval was very wide, the Lancet paper was worthless and we should believe something else like the IBC total.
This argument is wrong for three reasons.
1) The confidence interval describes a range of values which are “consistent” with the model.[1] But it doesn’t mean that all values within the confidence interval are equally likely, so that you can just pick one. In particular, the most likely values are the ones in the centre of a symmetrical confidence interval. The single most likely value is, in fact, the central estimate of 98,000 excess deaths. Furthermore, as I pointed out in my original CT post, the truly shocking thing is that, wide as the confidence interval is, it does not include zero. You would expect to get a sample like this fewer than 2.5 times out of a hundred if the true number of excess deaths was less than zero (that is, if the war had made things better rather than worse). (A quick numerical check of this follows at the end of this list.)
2) As the authors themselves pointed out in correspondence with the management of Lenin’s Tomb,
“Research is more than summarizing data, it is also interpretation. If we had just visited the 32 neighborhoods without Falluja and did not look at the data or think about them, we would have reported 98,000 deaths, and said the measure was so imprecise that there was a 2.5% chance that there had been less than 8,000 deaths, a 10% chance that there had been less than about 45,000 deaths, … all of those assumptions that go with normal distributions. But we had two other pieces of information. First, violence accounted for only 2% of deaths before the war and was the main cause of death after the invasion. That is something new, consistent with the dramatic rise in mortality and reduces the likelihood that the true number was at the lower end of the confidence range. Secondly, there is the Falluja data, which imply that there are pockets of Anbar, or other communities like Falluja, experiencing intense conflict, that have far more deaths than the rest of the country. We set aside these data in the statistical analysis because the result in this cluster was such an outlier, but it tells us that the true death toll is far more likely to be on the high-side of our point estimate than on the low side.”
That is, the sample contains important information which is not summarised in the confidence interval, but which tells you that the central estimate is not likely to be a massive overestimate. The idea that the central 98,000 number might be an underestimate seemed to have blown the mind of a lot of commentators; they all just seemed to act like it Did Not Compute.
3) This gave rise to what might be called the use of “asymmetric rhetoric about a symmetric confidence interval”, but which I will give the more catchy name of “Kaplan’s Fallacy”. If your critique of an estimate is that the range is too wide, then that is one critique you can make. However, if this is all you are saying (“this isn’t an estimate, it’s a dartboard”), then intellectual honesty demands that you refer to the whole range when using this critique, not just the half of it that you want to think about. In other words, it is dishonest to title your essay “100,000 dead – or 8,000?” when all you actually have arguments to support is “100,000 dead – or 8,000 – or 194,000?”. This is actually quite a common way to mislead with statistics; say in paragraph 1 “it could be more, it could be less” and then talk for the rest of the piece as if you’ve established “it’s probably less”.
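As promised under point 1), here is a quick numerical check, as a minimal Python sketch. I am assuming, purely for illustration, that the published 95% interval of 8,000–194,000 excess deaths behaves like a roughly normal sampling distribution centred on the 98,000 estimate; whatever machinery the authors actually used, the tail arithmetic lands in the same region.

```python
# Back-of-envelope check of the "fewer than 2.5 times out of a hundred"
# claim, under a normal approximation (my assumption, not the paper's
# actual method) to the published interval of 8,000-194,000.
from statistics import NormalDist

lo, hi, centre = 8_000, 194_000, 98_000
se = (hi - lo) / (2 * 1.96)          # implied standard error, ~47,000

# Normal tail probability at or below zero excess deaths, i.e. the region
# in which the war would have made mortality better rather than worse:
p = NormalDist(mu=centre, sigma=se).cdf(0)
print(f"implied SE = {se:,.0f};  P(excess deaths <= 0) = {p:.3f}")  # ~0.02
```

About two samples in a hundred, which is the point: the interval is wide, but essentially none of it sits on the “war made things better” side of zero.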
The Kaplan piece was really very bad; as well as the confidence interval fallacy, there are the germs of several of the other fallacious arguments discussed below. It really looks to me as if Kaplan had decided he didn’t want to believe the Lancet number and so started looking around for ways to rubbish it, in the erroneous belief that this would make him look hard-headed and scientific and would add credibility to his endorsement of the IBC number. I would hazard a guess that anyone looking for more Real Problems For The Left would do well to lift their head up from the Bible for a few seconds and ponder what strange misplaced and hypertrophied sense of intellectual charity it was that made Kaplan, an antiwar Democrat, decide to engage in hackish critiques of a piece of good science that supported his point of view.
The cluster sampling critique
There are shreds of this in the Kaplan article, but it reached its fullest and most widely-cited form in a version by Shannon Love on the Chicago Boyz website. The idea here is that the cluster sampling methodology used by the Lancet team (for reasons of economy, and of reducing the very significant personal risks for the field team) reduces the power of the statistical tests and makes the results harder to interpret. It was backed up (wayyyyy down in comments threads) by people who had gained access to a textbook on survey design; most good textbooks on the subject do indeed suggest that it is not a good idea to use cluster sampling when one is trying to measure rare effects (like violent death) in a population which has been exposed to heterogeneous risks of those rare events (i.e. some places were bombed a lot, some a little, and some not at all).
There are two big problems with the cluster sampling critique, and I think that they are both so serious that this argument is now a true litmus test for hacks; anyone repeating it either does not understand what they are saying (in which case they shouldn’t be making the critique) or does understand cluster sampling and thus knows that the argument is fallacious. The problems are:
1) Although sampling textbooks warn against the cluster methodology in cases like this, they are very clear that the reason it is risky is that it carries a very significant danger of underestimating the rare effects, not overestimating them. This can be seen with a simple intuitive illustration; imagine that you have been given the job of checking out a suspected minefield by throwing rocks into it.
This is roughly equivalent to cluster sampling a heterogeneous population; the dangerous bits are a fairly small proportion of the total field, and they’re clumped together (the mines). Furthermore, the stones that you’re throwing (your “clusters”) only sample a small bit of the field at a time. The larger each individual stone, the better, obviously, but equally obviously it’s the number of stones that you have that is really going to drive the precision of your estimate, not their size. So, let’s say that you chuck 33 stones into the field. There are three things that could happen:
a) By bad luck, all of your stones could land in the spaces between mines. This would cause you to conclude that the field was safer than it actually was.
b) By good luck, you could get a situation where most of your stones fell in the spaces between mines, but some of them hit mines. This would give you an estimate that was about right regarding the danger of the field.
c) By extraordinary chance, every single one of your stones (or a large proportion of them) might chance to hit mines, causing you to conclude that the field was much more dangerous than it actually was.
How likely is the third of these possibilities (analogous to an overestimate of the excess deaths) relative to the other two? Not very likely at all. Cluster sampling tends to underestimate rare effects, not overestimate them.[2] (A toy simulation after the second problem below makes this concrete.)
And 2), this problem, and other issues with cluster sampling (basically, it reduces your effective sample size to something closer to the number of clusters than the number of individuals sampled) are dealt with at length in the sampling literature. Cluster sampling ain’t ideal, but needs must, and it is frequently used in bog-standard epidemiological surveys outside war zones. The effects of clustering on standard results of sampling theory are known, and there are standard pieces of software that can be used to adjust (widen) one’s confidence interval to take account of these design effects. The Lancet team used one of these procedures, which is why their confidence intervals are so wide (although, to repeat, not wide enough to include zero). I have not seen anybody making the clustering critique who has any argument at all from theory or data which might give a reason to believe that the normal procedures are wrong for use in this case. As Richard Garfield, one of the authors, said in a press interview, epidemics are often pretty heterogeneously distributed too.
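First, the promised toy simulation of the minefield, in Python. This is my own construction, purely illustrative: a rare hazard clumped into hotspots, surveyed by 33 small clusters. Gross underestimates turn out to be far more common than gross overestimates.

```python
# Toy Monte Carlo of the minefield story: mines clumped into hotspots,
# sampled by throwing 33 "stones" (clusters) at random. All parameters
# are invented for illustration.
import random

FIELD, HOTSPOTS, HOT_LEN = 10_000, 5, 100   # 500 mined cells: true rate 5%
STONES, STONE_LEN = 33, 30                  # 33 stones of 30 cells each

def one_survey(rng):
    # lay down non-overlapping mined strips, then throw the stones
    starts = rng.sample(range(0, FIELD - HOT_LEN, 2 * HOT_LEN), HOTSPOTS)
    mined = set()
    for a in starts:
        mined.update(range(a, a + HOT_LEN))
    hit = 0
    for _ in range(STONES):
        s = rng.randrange(FIELD - STONE_LEN)
        hit += sum(c in mined for c in range(s, s + STONE_LEN))
    return hit / (STONES * STONE_LEN)       # estimated mined fraction

rng = random.Random(0)
truth = HOTSPOTS * HOT_LEN / FIELD
ests = [one_survey(rng) for _ in range(5_000)]
low  = sum(e <= truth / 3 for e in ests) / len(ests)
high = sum(e >= truth * 3 for e in ests) / len(ests)
print(f"true rate {truth:.2f}: {low:.1%} of surveys find a third of it "
      f"or less; {high:.2%} find three times it or more")
```

On my invented numbers, roughly one survey in five lands at a third of the true rate or below, while a threefold overestimate is an order of magnitude rarer; case (c) really is the freak outcome.

And on the design effects just mentioned, the standard (Kish) adjustment shows how clustering eats your effective sample size. The intra-cluster correlation here is hypothetical, and the 33 × 30 layout simply echoes the stone-throwing analogy above:

```python
# Kish design effect: deff = 1 + (m - 1) * icc, with m the average cluster
# size and icc a hypothetical intra-cluster correlation.
from math import sqrt

n, clusters = 990, 33          # 33 clusters of 30 interviews (illustrative)
m = n / clusters
icc = 0.10                     # hypothetical

deff = 1 + (m - 1) * icc
print(f"design effect {deff:.1f}: {n} clustered interviews are worth about "
      f"{n / deff:.0f} fully random ones; the confidence interval "
      f"widens by a factor of {sqrt(deff):.2f}")
```

Which is exactly the direction of the published interval: wider than a naive reader expects, for reasons the authors understood and corrected for.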
There is a variant of this critique which is darkly hinted at by both Kaplan and Love, but neither of them appears to have the nerve to say it in so many words.[3] This would be the critique that there is something much nastier about the sample; that it is not a random sample, but is cherry-picked in some way. In order to believe this, if you have read the paper, you have to be prepared to accuse the authors of telling a disgusting barefaced lie, and presumably to accept the legal consequences of doing so. They picked the clusters by the use of random numbers selected from a GPS grid. In the few cases in which this was logistically difficult (read: insanely dangerous), they picked locations off a map and walked to the nearest household. There is no realistic way in which a critique of this sort can get off the ground; in any case, it affected only a small minority of clusters.
The argument from the UNICEF infant mortality figures
I think that the source for this is Heiko Gerhauser, in various weblog comments threads, but again it can be traced back to a slightly different argument about death rates in the Kaplan piece. The idea here is that the Lancet study finds a prewar infant mortality rate of 29 per 1000 live births and a postwar infant mortality rate of 54 per 1000 live births. Since the prewar infant mortality rate was estimated by UNICEF to be over 100, this (it is argued) suggests that the study is giving junk numbers and all of its conclusions should be rejected.
This argument was difficult to track down to its lair, but I think we have managed it. One weakness is similar to the point I’ve made above; if you believe that the study has structurally underestimated infant mortality, then isn’t it also likely to have underestimated adult mortality? The authors discuss a few reasons why the movement in infant mortality might be exaggerated (mainly, issues of poor recall by the interview subjects), though, and it is good form to look very closely at any anomalies in data.
Which is what Chris Lightfoot did.
Basically, the UNICEF estimate is quoted as a 2002 number, but it is actually based on detailed, comprehensive, on-the-ground work carried out between 1995 and 1999 and extrapolated forward. The method of extrapolation is not one which would take into account the fact that 1999 was the year in which the oil-for-food program began to have significant effects on child malnutrition in Iraq. No detailed on-the-ground survey has been carried out since 1999, and there is certainly no systematic data-gathering apparatus in Iraq which could give any more solid number. The authors of the study believe that the infant mortality rates in neighbouring countries are a better comparator than pre-oil for food Iraq, and since one of them is Richard Garfield, who was acknowledged as the pre-eminent expert on sanctions-related child deaths in the 1990s, there is no reason to gainsay them.
I’d add to Chris’s work a theory of my own, based on the cluster sampling issue discussed above. Infant mortality is rare, and it is quite possibly heterogeneously clustered in Iraq (not least, post-war, a part of the infant mortality was attributed to babies being born at home because it was too dangerous to go to hospital). So it’s not necessarily the case that one needs to have an explanation of why they might have been undersampled in this case. Since this undersampling would tend to underestimate infant mortality both before and after the war, it wouldn’t necessarily bias the estimate of the relative risk ratio and therefore the excess deaths. I’d note that my theory and Chris’s aren’t mutually exclusive; I suspect that his is the main explanation.
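The cancellation involved is simple arithmetic; in symbols (my notation, just formalising the sentence above):

```latex
% If undersampling scales both the pre-war and post-war infant mortality
% rates by the same factor f, the relative risk is untouched:
\[
\widehat{RR}
  \;=\; \frac{f\,m_{\text{post}}}{f\,m_{\text{pre}}}
  \;=\; \frac{m_{\text{post}}}{m_{\text{pre}}}
  \;=\; RR .
\]
% Both levels come out too low, but the ratio, and hence the excess-death
% estimate built on it, is unaffected.
```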
We now move into the area of what might be called “not intrinsically hack” critiques. These are issues which one could raise with respect to the study which are not based on either definite or likely falsehoods, which do not impugn the integrity of the study, and which are not themselves based on evidence strong enough to make anyone believe that the study’s estimates were wrong unless they thought so anyway.
There are two of these that I’ve seen around and about.
The first might be called the “Lying Iraqis” theory. This would be the theory that the interview subjects systematically lied to the survey team. In fact, the team did attempt to check against death certificates in a subsample of the interviews and found that in 81% of cases, subjects could produce them. This would lead me to believe that there is no real reason to suppose that the subjects were lying. Furthermore, I would suspect that if the Iraqis hate us enough to invent deaths of household members to make us look bad in the Lancet, that’s probably a fairly serious problem too. However, the possibility of lying subjects can’t be ruled out in any survey, so it can’t be ruled out in this one, so this critique is not intrinsically hackish. Any attempt to bolster it either with an attack on the integrity of the researchers, or with a suggestion that the researchers mainly interviewed “the resistance” (they didn’t), however, is hack city.
The second, which I haven’t really seen anyone adopt yet, although some people looked like they might, could be called the “Outlier theory”. This is basically the theory that this survey is one gigantic outlier, and that a 2.5% probability event has happened. This would be a fair enough thing to believe, as long as one admitted that one was believing in something quite unlikely, and as long as it wasn’t combined with an attack on the integrity of the Lancet team.
Finally, we come to two critiques of the study which I would say are valid. The first is the one that I made myself in the original CT post; that the extrapolated number of 98,000 is a poor way to summarise the results of the analysis. I think that the simple fact that we can say with 97.5% confidence that the war has made things worse rather than better is just as powerful, and doesn’t commit one to the really quite strong assumptions one would need to make for the extrapolation to be valid.
The second one is attributable to the editors of the Lancet rather than the authors of the study. The Lancet’s editorial comment on the study contained the phrase “100,000 civilian deaths”. The study itself counts excess deaths and does not attempt to classify them as combatants or civilians. The Lancet editors should not have done this, and their denial that they did it to sensationalise the claim ahead of the US elections is unconvincing. This does not, however, affect the science; to claim that it does is the purest imaginable example of argumentum ad hominem.
Finally, beyond the ultra-violet spectrum of critiques are those which I would classify as “beyond hackish”. These are things which anyone who gave them a moment’s thought would realise are irrelevant to the issue.
In this category, but surprisingly and disappointingly common in online critiques, is the attempt to use the IBC numbers as a stick to beat the Lancet study. The two studies are simply not comparable. One final time; the Iraq Body Count is a passive reporting system[4], which aims to count civilian deaths as a result of violence. Of course it is going to be lower than the Lancet number. Let that please be an end of this.
And there are a number of odds and ends around the web of the sort “each death in this study is being taken to stand for XXYY deaths and that is ridiculous”. In other words, arguments which, if true, would imply that there could be no valid form of epidemiology, econometrics, opinion polling, or indeed pulling up a few spuds to see if your allotment has blight. This truly is flypaper for innumerates.
I would also include in this category attempts like that of the Obsidian Order weblog to chaw down the 98,000 number by making more or less arbitrary assumptions about what proportion of the excess deaths one might be able to call “combatants” and thus people who deserved to die. This is exactly what people accuse the Lancet of doing; it’s skewing a number by means of your own subjective assessment. Not only is there no objective basis for the actual subjective adjustments that people make, but the entire distinction between combatants and civilians is one which does not exist in nature. As a reason for not caring that 98,000 people might have died, because you think most of them were Islamofascists, it just about passes muster. As a criticism of the 98,000 figure, it’s wretched.
Finally, there is the strange world of Michael Fumento, a man who is such a grandiose and unselfconscious hack that he brings a kind of grandeur to the role. I can no more summarise what a class A fool he’s made of himself in these short paragraphs than I could summarise King Lear. Read the posts on Tim’s site and marvel. And if your name is Jamie Doward of the Guardian, have a word with yourself; not only are you citing blogs rather than reading the paper, you’re treating Flack Central Station as a reliable source!
The bottom line is that the Lancet study was a good piece of science, and anyone who says otherwise is lying. Its results (and in particular, its central 98,000 estimate) are not the last word on the subject, but then nothing is in statistics. There is a very real issue here, and any pro-war person who thinks that we went to war to save the Iraqis ought to be thinking very hard about whether we made things worse rather than better (see this from Marc Mulholland, and a very honourable mention for the Economist). It is notable how very few people who have rubbished the Lancet study have shown the slightest interest in getting any more accurate estimates; often you learn a lot about people from observing the way that they protect themselves from news they suspect will disconcert them.
Footnotes:
[1] This is not the place for a discussion of Bayesian versus frequentist statistics. Stats teachers will tell you that it is a fallacy and wrong to interpret a confidence interval as meaning that “there is a 95% chance that the true value lies in this range”. However, I would say with 95% confidence that a randomly selected stats teacher would not be able to give you a single example of a case in which someone made a serious practical mistake as a result of this “fallacy”, so I say think about it this way.
[2] Pedants would perhaps object that the more common mines are in the field, the less the tendency to underestimate. Yes, but a) by the time you got to a stage where an overestimate became seriously likely, you would be talking not about a minefield, but a storage yard for mines with a few patches of grass in it and b) we happen to know that violent death in Iraq is still the exception rather than the norm, so this quibble is irrelevant.
[3] And quite rightly so; if said in so many words, this accusation would clearly be defamatory.
[4] That is, they don’t go out looking for deaths like the Lancet did; they wait for someone to report them. Whatever you think about whether there is saturation media coverage of Iraq (personally, I think there is saturation coverage of the green zone of Baghdad and precious little else), this is obviously going to be a lower bound rather than a central estimate, and in the absence of any hard evidence about casualties there is no reason at all to suppose that we have any basis other than convenient subjective air-pulling to adjust the IBC count for how much of an undersample we might want to believe they are making.
Tyler Cowen had a discussion of this a few days ago, but I think it worth a mention here: tit-for-tat was beaten in a recent iterated Prisoner’s Dilemma (PD) computer tournament. The winners entered a large number of different strategies programmed to communicate with one another. By signalling their existence to their confederates and adopting master and slave roles, some strategies were able to gain full exploiter’s advantage over many rounds and thereby build up huge scores. Non-confederates were systematically punished by strategies from this stable, thus damaging the scores even of conditionally co-operative rivals. Full details here.
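For the curious, here is a bare-bones reconstruction in Python. The strategy details are my guesses at the mechanism described above, not the winning team’s actual code: confederates recognise each other by a fixed opening handshake, after which the master defects against a perpetually cooperating slave, and slaves defect forever against outsiders.

```python
# Minimal iterated Prisoner's Dilemma tournament with a master/slave
# stable, to illustrate the collusion mechanism. All details invented.
R, S, T, P = 3, 0, 5, 1                      # standard PD payoffs
HANDSHAKE = "CDCCD"                          # arbitrary recognition sequence

def payoff(a, b):
    return {("C", "C"): (R, R), ("C", "D"): (S, T),
            ("D", "C"): (T, S), ("D", "D"): (P, P)}[(a, b)]

def tit_for_tat(own, opp):
    return "C" if not opp else opp[-1]

def make_confederate(role):                  # role: "master" or "slave"
    def strategy(own, opp):
        i = len(own)
        if i < len(HANDSHAKE):
            return HANDSHAKE[i]              # still signalling
        if "".join(opp[:len(HANDSHAKE)]) == HANDSHAKE:
            return "D" if role == "master" else "C"   # exploit / submit
        return "D"                           # punish outsiders
    return strategy

def match(p1, p2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = p1(h1, h2), p2(h2, h1)
        h1.append(a); h2.append(b)
        da, db = payoff(a, b)
        s1 += da; s2 += db
    return s1, s2

players = {"tit_for_tat": tit_for_tat, "master": make_confederate("master")}
for k in range(8):
    players[f"slave_{k}"] = make_confederate("slave")

totals = dict.fromkeys(players, 0)
names = list(players)
for i, a in enumerate(names):                # round-robin, each pair once
    for b in names[i + 1:]:
        sa, sb = match(players[a], players[b])
        totals[a] += sa
        totals[b] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:12} {score}")              # the master wins at a canter
```

On these settings the lone master finishes far ahead of tit-for-tat: its slaves hand it the temptation payoff every round while dragging down the scores of everyone outside the stable, exactly as described above.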
Why are all required statistics courses essentially the same? They start off with bland assurances from the instructor that no knowledge of maths is required and that the concepts involved are pretty easy to grasp – all you need to do is turn up in class and do lots of practice questions. Oh, and have a positive attitude. Yeah, right.
I’m about to take the third stats exam of my life. As with the two before, failure is a barrier to continuing my ‘real’ studies. And, though this is my third tour of duty through histograms to simple regression, failure is a distinct possibility. The null hypothesis, that Maria has sufficient knowledge, nerve and luck to once again pass stats by the skin of her teeth, looks like being rejected. Of course I don’t blame myself, not entirely. I’d rather blame the teachers, or perhaps the subject itself.
The first compulsory (is there any other kind?) stats course I ever took was as an undergraduate. The lecturer – there were no tutorials, and no textbook that I can recall – clearly hated the job. I don’t blame him. Unlike teachers in any other discipline I’ve studied, statisticians genuinely seem to have been dragged blinking and unwilling to the lectern and struggle to communicate their wisdom to the herd. (Though I’ve never studied third level maths or hard sciences so maybe this quality is typical of all quant jocks.)
When the exam rolled around at the end of the year, I couldn’t understand a word of it and barely attempted any questions. Which didn’t surprise me, but my classmates who actually had a clue were completely stumped too. There were tears all round and everyone resigned themselves to autumn repeats. But when the results came out, everyone had passed, even me and the other no-hopers. What had happened?
The rumour was that the lecturer had resigned / been denied tenure, and to annoy the department he had set the first years an exam none of them could pass. Which would have meant enrolment for second year was precisely nil. By the time it was discovered, we’d all left for our J1 summers in Long Island and Martha’s Vineyard, and the only thing for the department to do was pass us all. Which was a boon to me as I’d never have gotten through otherwise. So that time it was pure luck that got me through.
The next time was at LSE. This time, the lecturer couldn’t have been nicer and more apologetic about the fact that we had to pass the class to get a degree. Don’t worry, he said gently, it will all be geared to teaching you how to understand and interpret the statistics in social science literature. Which was only fair and probably would have been quite useful. But there was still any amount of chi-square, one tails, two tails, and all the rest of it which went completely over my head. By now, computerised statistical analysis was all the rage, and we had ‘lab’ sessions where we were supposed to apply the theoretical insights gained in our weekly lectures. And interchangeable and grindingly bored doctoral students trying to teach us how to use SPSS and answer the questions you can’t ask in a lecture theatre of 500 people, i.e. all of them.
The questions statisticians dread: ‘but WHY did you do that?’ and ‘what does it MEAN?’ Why is it that, when you ask a statistician one of these questions, they look at you as if you’ve addressed them in ancient Greek? When it’s painfully obvious that they’re the one speaking nonsense in an obscure language…? And when they do respond, it’s as if they’re a demented computer stuck in a sub-routine. They simply repeat the steps of the procedure several times more, never saying why they did it or what the outcome signifies.
I think understanding statistics must be like visiting a faraway land that only the chosen few may enter. Like, for example, North Korea. Or maybe it’s like being stolen away by the fairies. When they come back, well, they don’t ever really come back, do they? Their eyes are glassy and their responses are just a bit off. Or perhaps it’s like a Faustian pact where you gain tremendous secret knowledge but lose the ability to explain or share it with anyone. Except other statisticians, of course. Perhaps they intermarry?
In any case, my second bout in the ring was decided by a take-home exam which we all thought was very American, complete with an all-nighter of pizza and Coca-Cola in the residence computer room. It was pass-fail too, though you could apply to the department for your actual mark. I think it was sheer nerve that got me through this time, and simple sugars. When I passed, I decided that was enough, and it would be pushing my luck to see if my mark was more than the bare minimum.
And now here I am facing stats exam number three, which I’ll have finished in exactly 24 hours. And, if the conceit of this blog entry is to hold, I need to get through it based on my acquired knowledge of the discipline.
All the usual tics and blind spots of the statistics course were present:
· The emphasis on teaching us how to use statistics to make informed decisions, not training us to be practitioners of some dark art.
· The promise in week 1 that basic arithmetic was all that was needed, giving way to a dismissive aside in week 4 that anyone who couldn’t graph a quadratic equation should do an emergency maths course in the 5 days remaining before the exam. (OK, I can do that and so can everyone else in the class. But still.)
· The claim that statistics’ biggest enemies are ‘blockheads, fools, morons, idiots, prigs and authoritarian personalities’. Well, that’s me done then.
· And above all the assurance that it would be over quickly and hardly hurt at all.
I’ll admit I’m easily in the bottom 25% of the class, and was never going to be a natural with this stuff. And I’m sure it does the ego no harm to feel like a remedial case from time to time. But this time I did the exercises and had a positive attitude and really thought I’d crack it.
So, I have to ask: do stats courses for non-practitioners really have to be so painful and so obscure? They seem just as unpleasant for us students as they are for the teachers…
Three courses, three exams, and I know that if I manage to pass this one, I’ll never open that book again.
Matthew Turner has been reading John Gunther’s Inside Europe, a classic from 1936, and (in two posts) regales us with some of the facts about Britain contained therein. I particularly liked this one:
* The decline in the birth rate, which, according to competent estimates, will reduce the population to thirty-three million by 1985.
I’ve spent the past couple of days at the latest in a series of conferences under the name Priority in Practice, which Jo Wolff has organized at UCL. I don’t think I’d be diminishing the contribution of the other speakers by saying that Michael Marmot was the real star of the show. He’s well known for the idea that status inequality is directly implicated in health outcomes, a thesis that he promotes in his most recent book Status Syndrome and which first came to the fore with his Whitehall Study, which showed that more highly promoted civil servants live longer even when we control for matters like lifestyle, smoking etc. Even when people have enough, materially speaking, their position in a status hierarchy still impacts upon their longevity. One other interesting finding that he revealed was that being in control at home (as opposed to at work) was massively important in affecting women’s longevity, but didn’t really impact upon men. There’s an excellent interview with Marmot by Harry Kreisler of Berkeley in which he outlines his central claims.
One interesting recent strand of research on justice and human well-being has been that inspired by Amartya Sen’s “capability” approach. There’s now an association dedicated to this, with Sen as its first President and Martha Nussbaum as President-elect. Details here.
Berkeley’s Mike Hout and my colleague Fr Andrew Greeley have an Op-Ed in the Times today making some good points about the Republican Party’s support amongst Evangelical Christians. Religious and political conservatism don’t line up as closely as you might think, and certainly not as much as the talking heads assume. The intervening factor is how much money you make:
[N]either region nor religion can override the class divide: if recent patterns hold, a majority (about 52 percent) of poor Southern white evangelicals will vote for Mr. Kerry in November, while only 12 percent of affluent Southern white evangelicals will.
Most poorer Americans of every faith - including evangelical Christians - vote for Democrats. It’s a shame that few pundits, pollsters or politicians seem to notice.
A related point is that the swing to the Republicans in the South has not been a uniform migration. More of the better-off have drifted, but not necessarily the poorer Whites. Of course, the claim isn’t that all poorer White Evangelicals vote Democrat - Brayden can testify to that - but rather that a surprisingly large number do, even after the universally acknowledged success of the Southern Strategy and the long-running tactic (going back to Reagan) of appealing to the patriotism of poorer Americans in an effort to make them forget about their pocketbooks.
Shameless plug: there is a new book out on Social Inequality edited by Kathryn Neckerman and published by the Russell Sage Foundation. The volume brings together recent research from the various social sciences on the topic of social stratification. I am often frustrated by how common it is for researchers to ignore papers by others on topics relevant to their work simply - or so it seems - because the researchers are in other fields. One nice aspect of this volume is that it features research by sociologists, political scientists, economists and demographers alike. The shameless plug has to do with the fact that I co-authored (with Paul DiMaggio, Coral Celeste and Steven Shafer) one of the chapters called “Digital Inequality: From Unequal Access to Differentiated Use”.
It is exciting to see a book on social stratification contain a chapter on digital inequality, since many subfields of sociology have been slow to realize and acknowledge that the increasing spread of IT is relevant to various areas of social scientific inquiry.
These are the five key issues we address in our piece:
1. The digital divide. Who has access to the Internet, who does not have access, and how has access changed?
2. Is access to and use of the Internet more or less unequal than access to and use of other forms of information technology?
3. Among the increasing number of Internet users, how do such factors as gender, race, and socio-economic status shape inequality in ease, effectiveness, and quality of use? What mechanisms account for links between individual attributes and technological outcomes?
4. Does access to and use of the Internet affect people’s life chances?
5. How might the changing technology, regulatory environment, and industrial organization of the Internet render obsolete the findings reported here?
See a more detailed outline of the chapter and a copy of a draft version here or send me a note if you’d like me to snail mail you a copy of the final chapter.
The book has 26 chapters on topics ranging from family and children to inequality in school and work, in health and political participation. With the index, the volume is over 1000 pages long. The paperback edition is $49.50 (the hardcover goes for $125.00). Contributors include Neil Fligstein, Richard Freeman, Bob Hauser, Mike Hout, Sandy Jencks, Theda Skocpol, Sidney Verba, Jane Waldfogel, Bruce Western and many others.
À Gauche
Jeremy Alder
Amaravati
Anggarrgoon
Audhumlan Conspiracy
H.E. Baber
Philip Blosser
Paul Broderick
Matt Brown
Diana Buccafurni
Brandon Butler
Keith Burgess-Jackson
Certain Doubts
David Chalmers
Noam Chomsky
The Conservative Philosopher
Desert Landscapes
Denis Dutton
David Efird
Karl Elliott
David Estlund
Experimental Philosophy
Fake Barn County
Kai von Fintel
Russell Arben Fox
Garden of Forking Paths
Roger Gathman
Michael Green
Scott Hagaman
Helen Habermann
David Hildebrand
John Holbo
Christopher Grau
Jonathan Ichikawa
Tom Irish
Michelle Jenkins
Adam Kotsko
Barry Lam
Language Hat
Language Log
Christian Lee
Brian Leiter
Stephen Lenhart
Clayton Littlejohn
Roderick T. Long
Joshua Macy
Mad Grad
Jonathan Martin
Matthew McGrattan
Marc Moffett
Geoffrey Nunberg
Orange Philosophy
Philosophy Carnival
Philosophy, et cetera
Philosophy of Art
Douglas Portmore
Philosophy from the 617 (moribund)
Jeremy Pierce
Punishment Theory
Geoff Pynn
Timothy Quigley (moribund?)
Conor Roddy
Sappho's Breathing
Anders Schoubye
Wolfgang Schwartz
Scribo
Michael Sevel
Tom Stoneham (moribund)
Adam Swenson
Peter Suber
Eddie Thomas
Joe Ulatowski
Bruce Umbaugh
What is the name ...
Matt Weiner
Will Wilkinson
Jessica Wilson
Young Hegelian
Richard Zach
Psychology
Donyell Coleman
Deborah Frisch
Milt Rosenberg
Tom Stafford
Law
Ann Althouse
Stephen Bainbridge
Jack Balkin
Douglas A. Berman
Francesca Bignami
BlunkettWatch
Jack Bogdanski
Paul L. Caron
Conglomerate
Jeff Cooper
Disability Law
Displacement of Concepts
Wayne Eastman
Eric Fink
Victor Fleischer (on hiatus)
Peter Friedman
Michael Froomkin
Bernard Hibbitts
Walter Hutchens
InstaPundit
Andis Kaulins
Lawmeme
Edward Lee
Karl-Friedrich Lenz
Larry Lessig
Mirror of Justice
Eric Muller
Nathan Oman
Opinio Juris
John Palfrey
Ken Parish
Punishment Theory
Larry Ribstein
The Right Coast
D. Gordon Smith
Lawrence Solum
Peter Tillers
Transatlantic Assembly
Lawrence Velvel
David Wagner
Kim Weatherall
Yale Constitution Society
Tun Yin
History
Blogenspiel
Timothy Burke
Rebunk
Naomi Chana
Chapati Mystery
Cliopatria
Juan Cole
Cranky Professor
Greg Daly
James Davila
Sherman Dorn
Michael Drout
Frog in a Well
Frogs and Ravens
Early Modern Notes
Evan Garcia
George Mason History bloggers
Ghost in the Machine
Rebecca Goetz
Invisible Adjunct (inactive)
Jason Kuznicki
Konrad Mitchell Lawson
Danny Loss
Liberty and Power
Ether MacAllum Stewart
Pam Mack
Heather Mathews
James Meadway
Medieval Studies
H.D. Miller
Caleb McDaniel
Marc Mulholland
Received Ideas
Renaissance Weblog
Nathaniel Robinson
Jacob Remes (moribund?)
Christopher Sheil
Red Ted
Time Travelling Is Easy
Brian Ulrich
Shana Worthen
Computers/media/communication
Lauren Andreacchi (moribund)
Eric Behrens
Joseph Bosco
Danah Boyd
David Brake
Collin Brooke
Maximilian Dornseif (moribund)
Jeff Erickson
Ed Felten
Lance Fortnow
Louise Ferguson
Anne Galloway
Jason Gallo
Josh Greenberg
Alex Halavais
Sariel Har-Peled
Tracy Kennedy
Tim Lambert
Liz Lawley
Michael O'Foghlu
Jose Luis Orihuela (moribund)
Alex Pang
Sebastian Paquet
Fernando Pereira
Pink Bunny of Battle
Ranting Professors
Jay Rosen
Ken Rufo
Douglas Rushkoff
Vika Safrin
Rob Schaap (Blogorrhoea)
Frank Schaap
Robert A. Stewart
Suresh Venkatasubramanian
Ray Trygstad
Jill Walker
Phil Windley
Siva Vaidhyanathan
Anthropology
Kerim Friedman
Alex Golub
Martijn de Koning
Nicholas Packwood
Geography
Stentor Danielson
Benjamin Heumann
Scott Whitlock
Education
Edward Bilodeau
Jenny D.
Richard Kahn
Progressive Teachers
Kelvin Thompson (defunct?)
Mark Byron
Business administration
Michael Watkins (moribund)
Literature, language, culture
Mike Arnzen
Brandon Barr
Michael Berube
The Blogora
Colin Brayton
John Bruce
Miriam Burstein
Chris Cagle
Jean Chu
Hans Coppens
Tyler Curtain
Cultural Revolution
Terry Dean
Joseph Duemer
Flaschenpost
Kathleen Fitzpatrick
Jonathan Goodwin
Rachael Groner
Alison Hale
Household Opera
Dennis Jerz
Jason Jones
Miriam Jones
Matthew Kirschenbaum
Steven Krause
Lilliputian Lilith
Catherine Liu
John Lovas
Gerald Lucas
Making Contact
Barry Mauer
Erin O'Connor
Print Culture
Clancy Ratcliff
Matthias Rip
A.G. Rud
Amardeep Singh
Steve Shaviro
Thanks ... Zombie
Vera Tobin
Chuck Tryon
University Diaries
Classics
Michael Hendry
David Meadows
Religion
AKM Adam
Ryan Overbey
Telford Work (moribund)
Library Science
Norma Bruce
Music
Kyle Gann
ionarts
Tim Rutherford-Johnson
Greg Sandow
Scott Spiegelberg
Biology/Medicine
Pradeep Atluri
Bloviator
Anthony Cox
Susan Ferrari (moribund)
Amy Greenwood
La Di Da
John M. Lynch
Charles Murtaugh (moribund)
Paul Z. Myers
Respectful of Otters
Josh Rosenau
Universal Acid
Amity Wilczek (moribund)
Theodore Wong (moribund)
Physics/Applied Physics
Trish Amuntrud
Sean Carroll
Jacques Distler
Stephen Hsu
Irascible Professor
Andrew Jaffe
Michael Nielsen
Chad Orzel
String Coffee Table
Math/Statistics
Dead Parrots
Andrew Gelman
Christopher Genovese
Moment, Linger on
Jason Rosenhouse
Vlorbik
Peter Woit
Complex Systems
Petter Holme
Luis Rocha
Cosma Shalizi
Bill Tozier
Chemistry
"Keneth Miles"
Engineering
Zack Amjal
Chris Hall
University Administration
Frank Admissions (moribund?)
Architecture/Urban development
City Comforts (urban planning)
Unfolio
Panchromatica
Earth Sciences
Our Take
Who Knows?
Bitch Ph.D.
Just Tenured
Playing School
Professor Goose
This Academic Life
Other sources of information
Arts and Letters Daily
Boston Review
Imprints
Political Theory Daily Review
Science and Technology Daily Review