This text is not about Baby Reindeer, Netflix’s latest hit. It’s about one of the most perverse dimensions of sanism and anti-madness: the exploitation of madness as an edifying aesthetic resource. It is also about the obsolescence of narratives centered on the uncritical perspective of the traditional agent of the banality of evil, the mediocre white guy who destroys everything, including himself (even if temporarily), in the pursuit of a vague and elusive future for which he has neither the preparation nor the talent.
My son’s language is made of a bundle of sounds that do not exist in the Spanish we speak around the Río de la Plata. He repeats syllables he himself invented, alternating them with onomatopoeias, guttural sounds, and high-pitched shouts. It is an expressive, singing language. I wrote this on Twitter at 6:30 in the morning on a Thursday because Galileo woke me up at 5:30. He does this: madruga (there is no word for “madrugar”, “waking up early in the morning”, in English, and I want to know why). As I look after him, I open a Word document on my computer. I write a little while I hear “aiuuuh shíii shíiii prrrrrr boio boio seeehhh” and then some whispers, all of it accompanied by his rhythmic stimming: patting himself on the chest or drumming on the walls and tables around the house.
My life with Gali goes by like this, between scenes like this one and the passionate kisses and hugs he gives me. This morning everything else is quiet. He brings me an apple so I can cut it into four segments for him. He likes the skin and gnaws the rest, leaving pieces of apple with his bite marks all around the house. He also brings me a box of rice cookies he doesn’t know how to open. Then he eats them jumping on my bed, leaving a trail of crumbs. Galileo inhabits the world by leaving evidence of his existence, of his habits, of his way of being in the world.
When we started walking the uncertain road to diagnosis, a relative who is a children’s psychologist with a sort of specialisation in autism assessed him informally. She ruled (diagnosed, prognosed) that he wasn’t autistic, that we shouldn’t ask for the official disability certificate (because “labels” are wrong, she held), and that he should do Lacanian therapy and music therapy on Zoom (now I think this is a ready-made prescription she hands out to anyone).
Hugo Mercier, Melissa Schwartzberg and I have two closely related publications on what we’ve been calling “No-Bullshit Democracy.” One is aimed at academics – it’s a very short piece that has just been officially published in American Political Science Review. The other just came out in Democracy. It’s aimed at a broader audience, and is undoubtedly livelier. An excerpt of the Democracy piece follows – if you want to read it, click on this link. The APSR academic letter (which can be republished under a Creative Commons license) is under the fold. Which one you might want to read depends on whether you value footnotes more than fisticuffs, or vice versa …
The New Libertarian Elitists
What might be called “no-bullshit democracy” would be a new way of structuring democratic disagreement that would use human argumentativeness as a rapid-growth fertilizer. … But first we need to sluice away the bullshit that is being liberally spread around by anti-democratic thinkers. … Experts, including Brennan and Caplan (and for that matter ourselves), can be at least as eager as ordinary citizens to grab at ideologically convenient factoids and to ignore or explain away inconvenient evidence. That, unfortunately, is why Brennan and Caplan’s books do a better job displaying the faults of human reasoning than explaining them.
Everyone always says that forgiveness is a worthwhile life strategy, and is for you, not for the person who wronged you. This seems obviously true in some cases: in principle, if you are nursing a rather trivial grudge that is bothering you, it would be better to let it go. In severe cases there is evidence that anger or misery can dampen your immune system, shave years off your life, give you heart disease, and so on. The NYT has recently offered both the somewhat paradoxical advice to hold on to grudges under certain circumstances and the more traditional suggestion to let go of them. (At the former link there is a kind of fun quiz you can take to see how serious the grudge is, and whether you should allow yourself the petty pleasure of nursing it. Also, it’s clearly meant to apply to that girl in fourth grade who said that you used crayons and colored pencils on your poster of the solar system, and it didn’t match, and she didn’t want to sit with you at lunch for three days.) The latter is the advice most often given by psychologists, 12-step programs, and self-help books.
These days we are healthily cynical about the omnipresence of motivated reasoning in cognition and communication. Everyone is working to fool everyone, starting with themselves. (It used to be you had to read Nietzsche to learn this stuff. Ah, those were the days.) [click to continue…]
Chris’ post on psychological theories of anti-egalitarianism reminded me of one I’ve been meaning to write for a while, responding to a whole subgenre of the Haidt school of political psychology dealing with the question: why do people maintain false beliefs in the face of contrary evidence? There seems to be an article in the papers on this every week or two (unsurprisingly, given the political situation), and they are nearly always much the same.
Given the assumption that this is a matter of individual psychology, the answer must be applicable to everyone, and, in the US context, “everyone” means “both Republicans and Democrats”. The answer is some irrational/antirational feature of individual belief formation, such as confirmation bias. I’m not going to pick on any particular writers; examples abound.
The obvious problem here is that, to a first approximation, people believe what members of their social groups believe. So, the relevant questions are:
* How do social groups maintain, or correct, false beliefs in the face of contrary evidence?
* Under what circumstances do people break with false beliefs held by other members of their social group? If this happens, does it involve a break with the social group or the emergence of a dissident subgroup?
Once we look at things this way, it’s obvious that not all social groups are the same. Scientists have a social process for dealing with evidence, which differs from that of (to pick a group with almost zero overlap) Republicans. Obviously, scientists collectively are much better at correcting false beliefs than Republicans are, even though, as individuals, both scientists and Republicans exhibit forms of motivated reasoning such as confirmation bias.
The question of how the members of groups change their beliefs seems like an obvious topic for study by social psychologists. And perhaps it is, but if so, their work has had no impact on the asocial psychologists I’ve seen talking about it.
Political philosophers have been arguing about equality for a very long time. We’ve argued about whether equality is a fundamental value or whether what matters is better captured by a focus on priority or sufficiency. We’ve argued about whether egalitarians should focus on securing equal amounts of something or on assuring people that they stand in relationships of equality of status toward one another. We’ve argued about the currency of egalitarian justice, and whether we should assess equality in terms of welfare, resources, opportunity for welfare or “advantage”. Luck egalitarians have argued that people should be rendered equal with respect to their unchosen circumstances but that inequalities that result from choices people freely make are OK. All of these are arguments within the egalitarian camp.
So it is frustrating [to read a paper](http://www.nature.com/articles/s41562-017-0082) in *Nature*, written by some psychologists from the Pinker/Haidt school of public pontificating, that claims that people don’t care about equality but about “fairness”, where the inequalities that people tolerate turn out to be (a) inequalities in money and (b) inequalities that result from choices people make. Nobody working in political philosophy thinks that inequalities in money matter fundamentally, and lots of people think that the value of equality, properly understood, not only allows but *requires* differences in outcome that result from choice. There’s one reference to Rawls in the paper (simply to mention the veil of ignorance) and one to Frankfurt’s sufficiency view, but Dworkin, Cohen, Sen, Anderson, Arneson et al are entirely absent. Perhaps *Nature* needs to pick its peer reviewers from a wider pool.
Since the dawn of time, man has wondered: what are p-values? [click to continue…]
There’s been a lot of commentary on a recent study by the Replication Project that attempted to replicate 100 published studies in psychology, all of which found statistically significant effects of some kind. The results were pretty dismal. Only about one-third of the replications observed a statistically significant effect, and the average effect size was about half that originally reported.
Unfortunately, most of the discussion of this study I’ve seen, notably in the New York Times, has missed the key point, namely the problem of publication bias. The big problem is that, under standard 20th century procedures, research reports will only be published if the effect observed is “statistically significant”, which, broadly speaking, means that the average value of the observed effect is more than twice as large as its estimated standard error. According to standard classical hypothesis testing theory, the probability that such an effect will be observed by chance, when in reality there is no effect, is less than 5 per cent.
There are two problems here, traditionally called Type I and Type II error. Classical hypothesis testing focuses on reducing Type I error, the possibility of finding an effect when none exists in reality, to 5 per cent. Unfortunately, when you do lots of tests, you get 5 per cent of a large number. If all the original studies were Type I errors, we’d expect only 5 per cent to survive replication.
In fact, the outcome observed in the Replication Study is entirely consistent with the possibility that all the failed replications are subject to Type II error, that is, failure to demonstrate an effect that is there in reality.
I’m going to illustrate this with a numerical example[^1].
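The point can also be sketched in a quick simulation (my own toy setup, not the author’s footnoted example; the true effect of 0.3 standard deviations and samples of 30 per group are assumptions chosen to make the studies underpowered). Every study here tests a real effect, yet the publication filter keeps only “significant” results, inflates the published effect sizes, and leaves most replications failing — Type II error, not Type I:

```python
import math
import random
import statistics

random.seed(1)

def run_study(true_effect=0.3, n=30):
    """Simulate a two-group study; return (observed effect, significant?)."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    obs = statistics.mean(treat) - statistics.mean(ctrl)
    se = math.sqrt(2 / n)  # both groups have standard deviation 1
    return obs, abs(obs) > 1.96 * se

# Run many studies of a real but modest effect.
results = [run_study() for _ in range(5000)]

# Publication filter: only the significant results get printed.
published = [obs for obs, sig in results if sig]
print("share published:", len(published) / len(results))
print("mean published effect:", statistics.mean(published))  # inflated above the true 0.3

# Replicate each published study once, at the same sample size.
replications = [run_study() for _ in published]
replicated = sum(sig for _, sig in replications)
print("share of replications significant:", replicated / len(published))
```

The replication rate roughly equals the studies’ statistical power, and the published effects run well above the true effect, which is the pattern the Replication Project observed even though nothing here is a false positive.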
In my class today someone made reference to the Kitty Genovese case (it was relevant) and I commented, casually, that I thought the claim that thirty-something people had looked on while Genovese was murdered had been discredited. Another student said “oh no, I am revising for a test later today about this” and proceeded to give us the standard account of the case. Here’s Nick Lemann’s New Yorker review of the books that seemingly discredit it.
I sent the students the link, and a different student wrote back that she had thought I was joking in class (they know I do that sometimes) and that, as a psychology major, she hears about the case in every class she takes. That got me thinking about the Milgram experiment (which philosophers make much more of than they do of the Genovese case), which again seems to me (I say “seems” because I have read part of Gina Perry’s book, and have heard her interviewed in depth) to be discredited. And it made me wonder (i) whether anyone has a refutation of Perry’s book but, more, (ii) how quickly professors adjust their teaching when findings they have taught as gospel are thoroughly discredited. I was a bit shocked, frankly, that the Genovese case is still being taught as something to be regurgitated in a test, but I am also quite struck by the number of times I have heard philosophers call on the Milgram experiment as evidence for some philosophical view, and wondered how long it will take before it is removed from the philosopher’s armoury (and the psychologist’s lectures).
Apologies for extended absence, due to me teaching a Coursera MOOC, “Reason and Persuasion”.
I’m moderately MOOC-positive, coming out the other end of the rabbit hole. (It’s the final week of the course. I can see light!) I will surely have to write a ‘final reflections’ post some time in the near future. I’ve learned important life lessons, such as: don’t teach a MOOC if there is anything else whatsoever that you are planning to do with your life for the next several months. (Bathroom breaks are ok! But hurry back!)
We’re done with Plato and I’m doing a couple weeks on contemporary moral psychology. The idea being: relate Plato to that stuff.
So this post is mostly to alert folks that if they have some interest in my MOOC, they should probably sign up now. (It’s free!) I’m a bit unclear about Coursera’s norms for access after courses end, but if you enroll now, you should keep access after the course is over. (I still have access to my old Coursera courses, anyway. Maybe it differs, course by course.) So it’s not like you have to gorge yourself on the whole course in a single week.
We finished up the Plato portion of the course with Glaucon’s challenge and some thoughts about the game theory and the psychology of justice.
They say that to do injustice is naturally good and to suffer injustice bad, but that the badness of suffering it so far exceeds the goodness of doing it that those who have done and suffered injustice and tasted both, but who lack the power to do it and avoid suffering it, decide that it is profitable to come to an agreement with each other neither to do injustice nor to suffer it. As a result, they begin to make laws and covenants, and what the law commands they call lawful and just. (358e-9a)
So I whipped up some appropriate graphics (click for larger). [click to continue…]
The New York Times has an interesting piece on the variability of people’s personalities, tastes and opinions over time and how we tend to underestimate the amount we will change in the future:
when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead. Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s. And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.
This wouldn’t have come as any surprise to Montaigne, whose whole project was predicated on the idea of constant change in the self:
I am unable to stabilize my subject: it staggers confusedly along with a natural drunkenness. I grasp it as it is now, at this moment when I am lingering over it. I am not portraying being but becoming: not the passage from one age to another … but from day to day, from minute to minute. I must adapt this account of myself to the passing hour. (“On repenting”, Screech trans 908-9)
But how much this contradicts the central presupposition of much intellectual biography, which is to find as much consistency as possible among the attitudes and doctrines adopted by a person throughout their life.
So, some celebrities got married: Blake Lively, who was in the TV show Gossip Girl, and Ryan Reynolds, who was in Green Lantern and is one of those dudes who is stipulated to be handsome but whose eyes are too close together, so he just looks moronic. Like a younger…thingface. Whoever. Lively herself is an off-brand Gwyneth Paltrow, so it’s suitable.
They had the wedding, which was all perfect and arranged by actual Martha Stewart with color-coördinated jordan almonds (OK I made that detail up, but almost certainly yes), at Boone Hall Plantation, outside of Charleston in South Carolina. Boone Hall almost alone of the pre-Civil War plantations has its slave quarters intact. I think this is actually awesome about Boone Hall. At all the other plantations, you go, and some nice white volunteer shows you around, and you have to just use your imagination. The main house is now surrounded by vast lawns, and live oaks and azaleas, wisteria and breath of spring, tea olive, daphne odora, gardenias, and mounds of Lady Banksia roses. Mmmm, up in Charleston that Lady Banksia will get up to one-and-a-half stories high. I’m not sure why it doesn’t grow so well in Savannah. Pretty little yellow roses on a climbing vine, heaping up on itself, all up around old fenceposts. But no hovels! No wood fires, no chickens, no foundries! No crying babies, no foremen, no one making grits, no one getting beat the hell up, no black people!
[click to continue…]
At the SAP conference this weekend I met someone whom I had never met before, but with whom I had corresponded quite intensively over a period of two years. And now it turns out that this person is a man, whereas I had assumed he was a woman. He has a name I am not familiar with, and I had just somehow assumed it was a woman’s name.
Reflecting a bit on this, I notice that I see two patterns in my sex-to-name-attributing habits.
[click to continue…]
In the thread on community colleges (which morphed into a discussion of more general education and management issues), someone mentioned Kahneman on the “halo effect” in grading (or marking) student work. _Thinking Fast and Slow_ has been on my to-read pile since Christmas, but I got it down from the shelf to read the relevant pages. Kahneman:
bq. Early in my career as a professor, I graded students’ essay exams in the conventional way. I would pick up one test booklet at a time and read all the students’ essays in immediate succession, grading them as I went. I would then compute the total and go on to the next student. I eventually noticed that my evaluations of the essays in each booklet were strikingly homogeneous. I began to suspect that my grading exhibited a halo effect, and that the first question I scored had a disproportionate effect on the overall grade. The mechanism was simple: if I had given a high score to the first essay, I gave the student the benefit of the doubt whenever I encountered a vague or ambiguous statement later on. This seemed reasonable … I had told the students that the two essays had equal weight, but that was not true: the first one had a much greater impact on the final grade than the second. This was unacceptable. (p. 83)
Kahneman then switched to reading all the different students’ answers to each question in turn. This often left him feeling uncomfortable, because his confidence in his judgement was undermined when he discovered that his scores for the same student’s work were all over the place. Nevertheless, he is convinced that his new procedure, which, as he puts it, “decorrelates error”, is superior.
I’m sure he’s right about that and that his revised procedure is better: I intend to adopt it. Some off-the-cuff thoughts, though: (1) I imagine some halo effect persists, and that one’s judgement of an answer is influenced by the immediately preceding answer to the same question in consecutive booklets or scripts; (2) reading answers to the same question over and over again can be even more tedious than marking usually is. I think it would be even better to switch at random through the piles; (3) (and this may get covered in the book) the fact that sequence matters because of halo effects strikes me as a big problem for Bayesians. What your beliefs about something end up being can just be the result of the sequence in which you encounter the evidence. If that’s right (and it’s not my department), then it ought to be a major strike against Bayesianism.
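Point (3) can be made concrete with a toy model (my own construction, not Kahneman’s, and the numbers are arbitrary illustrative likelihood ratios). An ideal Bayesian who fixes the evidential force of each answer in advance ends up at the same posterior whatever order the answers arrive in; the order-dependence only appears once each new answer is interpreted in light of the current belief, which is one way of modelling a halo effect:

```python
from itertools import permutations

def ideal_posterior(prior, likelihood_ratios):
    """Ideal Bayesian updating: multiply prior odds by each likelihood
    ratio. Multiplication commutes, so evidence order cannot matter."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

def halo_posterior(prior, likelihood_ratios, pull=0.5):
    """Biased updating: each item's likelihood ratio is dragged toward
    whatever the current belief favours -- a crude halo effect."""
    p = prior
    for lr in likelihood_ratios:
        bias = 2.0 if p > 0.5 else 0.5   # current belief distorts the evidence
        odds = p / (1 - p) * lr * (bias ** pull)
        p = odds / (1 + odds)
    return p

# Mixed evidence for and against "this is a good essay".
evidence = [3.0, 0.3, 2.0, 0.4]

# The ideal Bayesian lands in the same place for every ordering ...
for order in permutations(evidence):
    assert abs(ideal_posterior(0.5, order) - ideal_posterior(0.5, evidence)) < 1e-12

# ... but the halo updater's final belief depends on the sequence.
halo_results = {round(halo_posterior(0.5, order), 4) for order in permutations(evidence)}
print("distinct halo outcomes:", len(halo_results))
```

So the order-dependence Kahneman describes isn’t a property of Bayesian conditioning itself; it arises when the interpretation of each piece of evidence is contaminated by the beliefs formed from the earlier pieces, which is arguably the more realistic model of a tired marker.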