Or “I thought Science was a serious peer-reviewed publication…”
A study published today in Science by Facebook researchers using Facebook data claims to examine whether adult U.S. Facebook users engage with ideologically cross-cutting material on the site. My friend Christian Sandvig does an excellent job highlighting many of the problems of the piece and I encourage you to read his astute and well-referenced commentary. I want to highlight just one point here, a point that in and of itself should have stood out to reviewers at Science and should have been addressed before publication. It concerns the problematic sampling frame for the study and how little prominence it gets in the publication (i.e., none, it’s all in the supplemental materials).
Sampling is crucial to social science questions since biased samples can have serious implications for a study’s findings. In particular, it is extremely important that the sampling methodology be decoupled from the substantive questions of interest in the study. In this case, if you are examining engagement with political content, it is important that sampling not be based on anything related to users’ engagement with politics. However, that is precisely how sampling was done here. I elaborate below, but in sum: although the study boasts over 10 million observations, only the supplementary materials reveal that merely a tiny percentage (single digits) of Facebook users were eligible to make it into the sample in the first place. These are folks who explicitly identify their political affiliation on the site, i.e., people who probably have a different relationship to politics than the average user. They are also relatively active users, thanks to another sampling decision that is again confounded with the outcome of interest, i.e., engagement with political materials.
Not in the piece published in Science proper, but in the supplementary materials we find the following:
All Facebook users can self-report their political affiliation; 9% of U.S. users over 18 do. We mapped the top 500 political designations on a five-point, -2 (Very Liberal) to +2 (Very Conservative) ideological scale; those with no response or with responses such as “other” or “I don’t care” were not included. 46% of those who entered their political affiliation on their profiles had a response that could be mapped to this scale.
To recap, only 9% of FB users give information about their political affiliation in a way relevant here to sampling and 54% of those do so in a way that is not meaningful to determine their political affiliation. This means that only about 4% of FB users were eligible for the study. But it’s even less than that, because the user had to log in at least “4/7 days per week”, which “removes approximately 30% of users”.
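To spell out the arithmetic (a back-of-the-envelope sketch: the 9%, 46%, and 30% figures are the paper’s own; since the supplementary materials don’t make entirely clear at which stage the activity filter applies, I simply compound the rates):

```python
# Share of U.S. adult Facebook users eligible for the study's sample,
# compounding the figures reported in the supplementary materials.
reports_affiliation = 0.09    # self-report a political affiliation
mappable = 0.46               # of those, mappable to the 5-point ideology scale
survives_login_filter = 0.70  # remain after the 4/7-days-per-week requirement

eligible = reports_affiliation * mappable           # ~0.041, i.e. ~4%
eligible_active = eligible * survives_login_filter  # ~0.029, i.e. ~3%

print(f"eligible: {eligible:.1%}; after activity filter: {eligible_active:.1%}")
```

Under those assumptions, roughly 3 in 100 U.S. adult Facebook users could have entered the sample at all.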
Of course, every study has limitations. But sampling is too important here to be buried in supplementary materials. And the limitations of the sampling are too serious to warrant the following comment in the final paragraph of the paper:
we conclusively establish that on average in the context of Facebook, individual choices (2, 13, 15, 17) more than algorithms (3, 9) limit exposure to attitude-challenging content.
How can a sample that has not been established to be representative of Facebook users result in such a conclusive statement? And why does Science publish papers that make such claims without the necessary empirical evidence to back up the claims?
Can publications and researchers please stop being mesmerized by large numbers and go back to taking the fundamentals of social science seriously? In related news, I recently published a paper asking “Is Bigger Always Better? Potential Biases of Big Data Derived from Social Network Sites” that I recommend to folks working through and with big data in the social sciences.*
Full disclosure: some of my work has been funded by Facebook, as well as by Google, other corporations, and foundations; details are available on my CV. Also, I’m friends with one of the authors of the study and very much value many of the contributions she has made to research.
[*] Regarding the piece on which I comment here, FB users not being nationally representative is not an issue, since the paper and its claims are only concerned with Facebook use.
{ 60 comments }
Walt French 05.07.15 at 7:38 pm
“This means that only about 4% of FB users were eligible for the study. But it’s even less than that, because the user had to log in at least ‘4/7 days per week’, which ‘removes approximately 30% of users’.”
Let me guess that people with strong political opinions may partake of Facebook less than those with less conviction, so that the study was left with an even more skewed subset of FB users.
Holden Pattern 05.07.15 at 8:03 pm
This just seems like a painfully stupid survey in the first place. Most of the people I know on the left have actually already engaged with American politically conservative material, years ago, usually over and over again, and found that when examined it fails even on its own terms.
I assume something similar holds for American conservatives, but with the extra fun twist of movement conservative epistemological closure that doesn’t allow even for new facts that contradict the conservative worldview.
So why would Facebook be a venue in which it would be interesting to do that sort of thing?
Warren Terra 05.07.15 at 8:10 pm
I’m not entirely sure what your central point is here. If it’s that the paper seems to have slipped potential flaws past the reviewers on the way to publication – well, it happens, and a manuscript that more openly admitted these flaws probably wouldn’t have caught the reviewers or editor napping, so while your complaint is valid it’s also a bit circular. I think the more interesting and open questions have to do with the use of online supplements for the Methods.
Such use of the “online supplementary information” as a place to put some or even all of the Methods and a lot of the data has been an issue since this became possible with widespread use of the web by journals (I’d place this about 15 years ago; the first biological journal to have full text online started 20 years ago). Indeed, this capability is often abused – and by the journals as much as by the authors, with Science and its mirror image Nature being the worst offenders. Because these are short-format journals that publish high-impact papers often reflecting vast amounts of work, they’ve been delighted that they can shift, in some cases, all of the Methods information online.
That said, the situation is complicated, and this is not necessarily a bad development. The supplementary online material is often open-access even when the article itself isn’t (basically to make it easier on people who have gotten a copy of the article by legitimate means but don’t want to be hassled while checking the online supplement), and I’m not sure you understand just how egregious Nature and especially Science were in the days before the online supplement. Nature at that time did allow a section of the manuscript for Methods, carved out from within the already stingy space limitations on a Nature letter. Science just didn’t. Notoriously, the Methods for a Science paper were often found in the endnotes, mixed in among the references, and published in tiny print. For both of these short-format journals, the extreme length limitations made for terribly composed, ludicrously unreadable papers: often great science but unjustly crammed into a few thousand words.
The ability to give unlimited space to the Methods and various fine points of the work while shifting them out of the actual article and into an online supplement has made the papers more readable and also made the methods and other supplemental material vastly more readable. There is I think a real, if not wholly convincing, argument to be made that the casually interested don’t need to double-check the methods but can read the paper trusting peer review has done its job, while those sufficiently engaged that they actually want to ensure every t has been crossed and every i dotted will have the motivation to go the extra step and look online, where they will find a more complete presentation than could ever have fit in the article.
Where this becomes particularly abusive is that people are now expected to publish vast amounts of data in the supplement; often what would be entire additional papers are contained there. And it’s not necessarily even material directly related to the manuscript at hand – I know of one case offhand in which an anonymous peer reviewer successfully demanded the authors of a manuscript under review at a prestigious journal add to the online supplement a set of experiments attempting (and failing) to replicate the key findings of a paper published by another group, in the same field but in no way closely connected to the manuscript in question. That’s an extreme example, but it’s now common for reviewers to make extreme requests, knowing it can all go in the supplement.
So: I’ve been longwinded as I often am, but I think there are important issues with the use of online supplements – they just really aren’t the issues you raise.
Dean Eckles 05.07.15 at 8:20 pm
To be clear (since this post seems to suggest otherwise), the paper states what the population is on the very first page of the main text (not just in the SI):
“[W]e constructed a de-identified dataset that includes 10.1 million active U.S. users who self-report their ideological affiliation and 7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015.”
http://www.sciencemag.org/content/early/2015/05/06/science.aaa1160
krippendorf 05.07.15 at 8:55 pm
Sampling on the dependent variable is crappy social science, whether you state the original sample (note: not population) on the first page or not, or whether it’s in the supplementary appendix or not. It’s something every first-year grad student in sociology, demography, and political science learns, or at least should.
Dean Eckles 05.07.15 at 9:02 pm
@krippendorf : In what sense is this sampling on the dependent variable (this has a specific technical meaning)? Maybe draw a causal directed acyclic graph illustrating what you think is wrong here.
adam.smith 05.07.15 at 9:22 pm
Dean — yes, this isn’t selecting on the dependent variable, but it also is not just non-random sampling, but non-random sampling with a very plausibly very strong sampling bias, which isn’t acknowledged anywhere in the paper. Meanwhile, as Eszter points out, the paper does make very strong claims about the entire population of US Facebook users.
Moreover, while you’re correct that the size and the basic methodology of the sample are stated, specifics of the sampling methodology are not–but that’s what’s really crucial for understanding to what degree we should believe any statement that goes beyond the 10 million people actually covered by the study. In other words, this information is crucial for understanding whether the central claims of the study are true. The fact that that information isn’t included in the study is, indeed, very puzzling. I’d expect this to be covered even in a good journalistic summary of it, but most certainly in the study itself.
Do you really not find that problematic? You’re clearly a much more accomplished statistician than I am, but all the serious statisticians I’ve met are obsessively cautious about such things, so I’m genuinely flabbergasted by what seems like a pretty callous attitude on your part to basic problems with this paper.
@Warren Terra — I take Eszter’s point to be that we’re seeing a modern-day recurrence of the 1936 Literary Digest election poll that’s prominently covered in (almost) all statistics courses: people get excited by large sample sizes, neglecting the quality of the sample.
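A minimal simulation of that point (toy numbers of my own, nothing from the paper): when the chance of entering the sample is correlated with the behavior being measured, piling on observations does not remove the bias.

```python
import random

random.seed(0)

# Toy population: 20% of users are politically vocal; vocal users both
# engage with cross-cutting content at a higher rate AND are much more
# likely to enter the sample. The true population rate is 0.50.
P_VOCAL = 0.20
P_ENGAGE = {True: 0.70, False: 0.45}   # engagement rate by vocal status
P_SAMPLED = {True: 0.90, False: 0.30}  # selection correlated with outcome

def biased_estimate(n):
    hits = total = 0
    while total < n:
        vocal = random.random() < P_VOCAL
        if random.random() < P_SAMPLED[vocal]:
            hits += random.random() < P_ENGAGE[vocal]
            total += 1
    return hits / total

for n in (1_000, 100_000, 1_000_000):
    print(f"n={n:>9,}: estimate={biased_estimate(n):.3f} (truth: 0.500)")
```

The estimate settles near 0.56 no matter how large n gets; a bigger sample just buys more confidence in the wrong number.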
Bloix 05.07.15 at 9:28 pm
#3- I don’t understand why you don’t understand. The paper draws conclusions from a sample that has not been shown to be representative of the population under investigation, and to the contrary bears indications that it is not representative. This means that if the article’s conclusions happen to be true, it’s a result of a happy accident. They are not reliable and they add nothing to the store of knowledge. The article’s presence in Science, therefore, is not merely worthless. It is noise that drowns out the ability of competent researchers to talk to each other and to do good work.
#4 – note that they gave the big number (the absolute number in the sample), which is meaningless, while suppressing the little numbers (the percentage who are self-identified and the shares of those who can be sorted into categories), which are highly meaningful.
Warren Terra 05.07.15 at 9:42 pm
@#8
Sure, but this sort of lapse in peer review has nothing to do with the use of online supplements. If anything, the supplements make more complete inspection much more possible for the peer reviewers and for sufficiently interested readers. And yet the title and the first bit of the post is all about the use of online supplements for Methods.
Freddie deBoer 05.07.15 at 10:06 pm
I don’t disagree, but these comments demonstrate the common condition of people who aren’t researchers radically overestimating what common research standards of publication are. Typically, intuitive critiques of sampling methodology imply standards so high nothing could ever get published. I mean, if you think this is bad, you should see all the other stuff that does get published. There’s tons of convenience sampling in published literature because in many or most research contexts, there is no practical way possible to collect a truly random sample.
JanieM 05.07.15 at 10:13 pm
Not a social scientist, never so much as took a statistics class. But based on Eszter’s OP, and adam.smith’s and Bloix’s comments especially, I’m sitting here wondering: Are the authors such incompetent social scientists that they didn’t know that their conclusions and the basis for reaching them were seriously flawed? Or did they think no one would notice?
Of course, there are never only two possibilities. I’ve been having recent personal experience of the way the tail (sales and marketing) increasingly wags the dog (the actual grunt work that the rest of us have to do) in the business world. And with a behemoth like Facebook, I suppose there’s a strong bubble effect as well.
Abbe Faria 05.07.15 at 10:20 pm
Freddie’s right; a lot of these arguments are a bit cut-and-paste Statistics 101.
The ‘bias’ is in the wrong direction. If you cured a bunch of people who are dying you’d have better evidence you were on to something than if you looked at a representative sample of the mildly unwell. People routinely seek out extreme samples for that reason.
What concept of attitude are they using? Do don’t-knows/don’t-cares even have an attitude? They certainly haven’t made any enduring expression of favor or disfavor. What could possibly be attitude-challenging content for someone who’s apathetic and just doesn’t care or have an opinion? How does that concept even apply to those sorts of people? Are they really part of the population?
adam.smith 05.07.15 at 10:26 pm
@Freddie — I am an active researcher and I am very confident that nothing like this would be published in a top-10 journal in my field (political science) without a strong caveat about external validity and likely some attempt to justify the sample as being reasonably representative, if possible. Sure, you can’t random-sample for everything, but if you don’t, you can’t go out and make statements about some larger population without significant qualifications. I’ve been aghast at some of the quantitative stuff I’ve seen published in education, so if that’s what you’re talking about, that’s possible, but let’s not overstate how bad things are. Most researchers are careful not to overstate their findings.
JanieM — the folks who published this are very capable. They’re aware of this. I’m actually quite curious why they decided to downplay it to the extent they did.
Freddie deBoer 05.07.15 at 10:29 pm
Sorry. Don’t agree.
adam.smith 05.07.15 at 10:41 pm
That should be easy to settle:
If you’re able to cite 5 recent journal articles from a field for which the following three things are true, I’ll agree you’re right for that discipline:
1. Published in a top journal (depending on size of the discipline top 5 or 10)
2. Using a convenience sample of some type to
3. Make claims about a larger population without a discussion about generalizability of the sample to said population in the paper itself.
Freddie deBoer 05.07.15 at 10:50 pm
Look, I’m not interested in getting into some “my credentials are amazing/my field is best” pissing contest with you. I’m making a basic point about populist critiques of research methodology. People on Twitter are acting like this is the research scandal of the century, and that’s just not true.
The Temporary Name 05.07.15 at 10:58 pm
Then make it.
Freddie deBoer 05.07.15 at 11:23 pm
I see The Internet’s Most Self-Important Commenting Section hasn’t lost its edge in recent years.
adam.smith 05.07.15 at 11:27 pm
Freddie,
what you started out with here was:

“these comments demonstrate the common condition of people who aren’t researchers radically overestimating what common research standards of publication are”

which I hope you realize is super condescending and also rather ridiculous given that a) a lot of the people here are active researchers and b) Eszter, who wrote the original post, is one of the pre-eminent researchers working on the effects of social media.
As for Twitter, in my feed I see a lot from Zeynep Tufekci, who is also a widely published and respected researcher on social media and the public. So for someone with no relevant research experience and really not terribly impressive academic credentials to come around and explain to the plebes how real research works is pretty rich.
The Temporary Name 05.07.15 at 11:34 pm
As someone with far less impressive credentials than anyone concerned, I’d just like someone to show me that their view of generally prevailing standards is correct (and I don’t care if it’s adam.smith’s field or not).
Freddie deBoer 05.07.15 at 11:44 pm
“which I hope you realize is super condescending and also rather ridiculous given that a) a lot of the people here are active researchers and b) Eszter, who wrote the original post, is one of the pre-eminent researchers working on the effects of social media.”
I said nothing whatsoever about Eszter or her credentials.
“As for Twitter, in my feed I see a lot from Zeynep Tufekci, who is also a widely published and respected researcher on social media and the public. So, as someone with no relevant research experience and really not terribly impressive academic credentials to come around and explain to the plebes how real research works is pretty rich.”
1. That’s not what I did. That’s not remotely what I did. And it’s very telling that you’ve projected that baggage into an innocuous and not-at-all-insulting expression of mildly dissenting opinion. (From my first comment, literally: “I don’t disagree….”)
2. Literally every argument of yours here is a pure appeal to your own expertise, which has involved you claiming to be a big shot, making fun of other people’s fields, and grandstanding about what a big deal you are.
3. All of this is straight from Crooked Timber’s playbook: fake a kind of milquetoast warm milk liberal egalitarianism, but the second you get alternate opinion on the slightest issue, start talking about your Ivy League credentials. I don’t play that. I’m not impressed by you, your anonymity, your asserted credentials, credentialism in general, your totally uninformed take on me or my abilities, your cloying condescension, or your very conservative endorsement of straight-up meritocracy mythology. If you are really this hurt by alternate opinion, it’s hard to understand how you function in the world at all.
Keep putting the “less” in bloodless, Crooked Timber. The next time you guys like to posture about your lefty cred, remember that this is a place that lives and dies based on who is more shameless in flogging their degrees.
Bruce Wilder 05.08.15 at 12:00 am
It seems to me that Freddie was making a valid point about criticism that adopts as its standard the representativeness of the mythic random sample, and pontificates as if that were some kind of gold standard routinely satisfied.
Everything people have said about how the researchers featured some numbers, while hiding some less reassuring figures, is valid enough, without unnecessary piety about the textbook conventions.
Freddie deBoer 05.08.15 at 12:05 am
That was, indeed, the mild point I was trying to make: that when the flaws of any given study are suddenly thrust into public conversation, they tend to get compared to an idealized standard of perfectly random samples that aren’t found in real research.
But unlike Adam Smith, I didn’t go to Yale College Harvard, so we can feel free to disregard my opinion.
adam.smith 05.08.15 at 12:10 am
Freddie — I have no idea what your beef is. The only credentials I claim are those of an active researcher (I’m at a similar career level as you are with similar impressive/non-impressive academic credentials and without any Ivy in it, FWIW.).
But let’s step back. When you said:

“these comments demonstrate the common condition of people who aren’t researchers radically overestimating what common research standards of publication are”

please explain to me how I or anyone else in comments 1-9 is supposed to have read this as anything but implying that a) we’re not researchers and b) you know better than us how real research looks.
I’m sorry if I offended you with my remark about education research, which was unnecessary and mainly me trying to make sense of what to me is an unexplainable difference in perception of research practices, so I am happy to retract that and apologize.
Lastly, I’m not Crooked Timber. I’m not even that regular of a commenter here, and I’m not interested in your past and present conflicts and issues with CT, whatever they are. I took issue with a particular comment you made and you turned it into a giant meta debate about pissing matches, credentials, and CT, in which I have no interest and on which this is going to be my last word. I’m happy to continue to debate the original question, on which I’m still not sure what your position is.
Henry 05.08.15 at 12:19 am
Coming from you Freddie, that is very, very funny. So much so that I’d almost suspect you of a delightfully self-deprecating sense of humor, if I didn’t know better.
Freddie deBoer 05.08.15 at 12:19 am
Oh, yeah, no– when you looked up my academic credentials (which you could do because I use my real name because I don’t hide behind accountability-killing internet anonymity) and called them not impressive, you had no intention of big timing me, right? Yeah.
And you may not be Crooked Timber but you are very much in keeping with the general practice around here, which is to enforce a liberal get-alongism by jumping down the throat of anyone who dissents in the mildest way possible, usually through credentialism and a patronizing, grandfatherly assertion of one’s own superior academic standing. Like I said: you don’t impress me.
Freddie deBoer 05.08.15 at 12:20 am
Hey, it’s Henry “I’m a bigger deal than you” Farrell, here to enforce that conformity I was just talking about!
Henry 05.08.15 at 12:20 am
Come to think of it, if you’re ever looking to rename your blog, “The Internet’s Most Self-Important Commenting Section” might not be at all be a bad one.
adam.smith 05.08.15 at 12:21 am
That overlapped, so to bring this back on topic: I tend to disagree with this on both an empirical and a normative level.
Empirically, I continue to maintain that it is _not_ in fact common practice in the social sciences to draw unwarranted inferences from non-random samples. That doesn’t mean I’m claiming that research looks like some type of textbook ideal–it certainly doesn’t–but I would argue that most published research does not, in fact, have flaws that anyone who has taken Stats 101 can easily see. I am genuinely surprised you disagree with that, so I was asking for examples.
On a normative level, I think it is perfectly OK to put studies with major implications under closer scrutiny, especially when they’re published in one of the most visible science publications anywhere. And given the impact of Facebook algorithms on the daily life of many people, this study would certainly qualify. So even if similar criticisms do apply to other studies, I don’t think there’s any downside to highlighting them in publicly visible research.
Freddie deBoer 05.08.15 at 12:26 am
Oh so now we’re allowed to talk merits? A minute ago I wasn’t worth talking to because I lack a degree you find up to your exacting standards. That’s interesting.
Speaking of Stats 101, here’s a little something to demonstrate the degree to which the average research finding fails. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
Which is to say nothing of the mountains of fraud we have reason to suspect are found all over the published record. But don’t let me disabuse you of your naive cynicism; it’s clearly well-rehearsed.
TM 05.08.15 at 12:35 am
This gives me the opportunity to report that I just gave my statistics final exam and the results were, as always, slightly depressing (I have only myself to blame though – I should “lower my expectations”, as older colleagues told me right away). I’m not teaching students likely to engage in a scientific career, but I think the pattern is pretty widespread – most textbooks discuss hardly anything outside the bare cookbook recipes, and students have learned for so many years to disregard context and focus on doing computations that it’s an uphill battle to convince them that understanding what they are doing and how to interpret it is far more important than following the recipes. The issue highlighted in the OP should have jumped out at researchers and reviewers, but reviewers are easy to satisfy as long as the formalities are checked and the p-values are there.
Freddie be Mad 05.08.15 at 12:43 am
1. Freddie’s free-floating anger is always a joy.
2. To paraphrase a smart man, much CT commentary takes the form of looking for a fight and then acting surprised when you get one.
3. adam.smith is basically right—this paper wouldn’t have gotten published in a good social science journal, for the reasons he and Eszter give. Now, it’s also true that a lot of shitty research gets published. But the ever-so-slightly more subtle points are (a) that less of that shit than you might think makes its way into very good journals, and (b) that Science is a very high-prestige magazine that occupies an odd position as a kind of supercharged Scientific American, and we should really expect better of it. Social science is underrepresented as a rule in Science, and it’s a pity that the stuff that makes it in is often a bit flashy and undercooked.
Warren Terra 05.08.15 at 12:46 am
Oh, great, a back-and-forth about Freddie, and his complaints, rather than anything to do with the various actual subjects under discussion. Quelle surprise.
adam.smith 05.08.15 at 12:48 am
I know Ioannidis’s work and I think it’s great, but I always took his main point to be that even if we do follow pretty standard textbook science, we get a lot of false results and we should be careful about that. But if you add statistical malpractice to his model, things become even worse, so surely Ioannidis would agree that we should be very careful about that, as indeed he has said.
I’m also aware of various cases of fraud across disciplines as well as of studies showing clustering of published results around commonly used p-value thresholds, suggesting at a minimum cherry-picking of results. And, while we’re at disciplines, mine is especially guilty (pdf, key figures p. 30/31) of that.
But to me that just means we need to do better by doing the things that people like Ioannidis have been advocating for a long time (register studies, promote & reward meta-analyses and replication, reward & publish negative findings, etc.). I don’t see why it should mean that we should give individual studies, published in highly-coveted places, a pass for elementary errors. I’m not sure why my position is “well-studied naive cynicism” (though I like the term)–wouldn’t that better describe what you’re proposing?
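To make that concrete, here is a rough sketch of the arithmetic behind Ioannidis’s argument (the structure follows his 2005 model; the prior and bias values are my own illustrative choices, not his):

```python
def ppv(prior, alpha=0.05, power=0.80, bias=0.0):
    """Positive predictive value of a 'significant' finding.
    prior: pre-study probability that the hypothesis is true.
    bias:  share of would-be negative results reported as positive
           anyway (p-hacking, selective reporting, etc.)."""
    true_pos = prior * (power + (1 - power) * bias)
    false_pos = (1 - prior) * (alpha + (1 - alpha) * bias)
    return true_pos / (true_pos + false_pos)

# By-the-book statistics, low prior: ~64% of positives are real.
print(f"{ppv(prior=0.10):.2f}")
# Add modest malpractice (bias=0.2): ~28%, i.e. most positives are false.
print(f"{ppv(prior=0.10, bias=0.20):.2f}")
```

Even textbook-perfect practice produces plenty of false positives when priors are low; adding malpractice to the model makes the published record far worse, which is the point above.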
Freddie deBoer 05.08.15 at 12:49 am
Yeah, it’s crazy how, when I make an anodyne point that began “I don’t disagree,” but which might, if you squint the right way, amount to something like dissent, and this blog’s amoeba of comity fetishists lay into me with straight credentialism and big-timing, I might say, “hey, that’s lame.” I must be some kind of kook!
TM 05.08.15 at 12:50 am
It would be nice for everybody to come back to topic.
Henry 05.08.15 at 12:53 am
I was going to suggest that people stop engaging, on the grounds that Freddie deBoer considers any conversation that is not already all about Freddie deBoer (or perhaps, at a pinch, the sinuous complexities of Chairman deBoer Thought) to be definitionally an outrage. Happy to have been anticipated.
Freddie deBoer 05.08.15 at 1:12 am
So you read my initial comment, right Henry, old chum? Where I made a mild point that sought to perhaps gently qualify a set of criticisms that I actually largely agree with, and which had literally nothing to do with me? And you further saw, because as you never stop telling us, you are so learned and all-knowing, that this comment was met with a bizarre series of credentialist insults that made the conversation about me in a way that I had no hand in? You saw those things, right? You can scroll back up and read my first comment, which was 100% about the subject at hand, and then read one of the anonymous members of your liberal squish politburo making the conversation about me. And then you can re-read you, saying that I’ve made the conversation about me, despite the fact that this is glaringly, factually false.
You may then apologize at your leisure.
Henry 05.08.15 at 1:22 am
Freddie – please stop being a pest. This thread isn’t about you and your pet peeves about the Crooked Timber comment section. I warmly encourage you to write an extremely long post over at your own blog about the self-satisfied liberal commentariat, the horrid way in which people treat you, how I’m always telling you how learned and all-knowing I am, and how you and you alone truly appreciate and understand what it means to still be leftwing in these days of tribulation. But you’re making yourself a bit of a nuisance here, and getting in the way of a conversation that other people would like to have, a conversation which is manifestly Not About Freddie. I suggest you desist.
Warren Terra 05.08.15 at 1:45 am
@#32
From the point of view of an outsider who’s not paying close attention, I’d agree with the latter part, about the importance of sex appeal rather than merit or rigor (though I’d point out that this is hardly confined to the social sciences; there is a half-joking cliche in the biological sciences that says “just because it’s published in Science doesn’t mean it’s not true”).
But do you really feel the former is true, about under-representation? This very much is not my impression. Maybe we’re normalizing differently? I mean, sure, there’s only a couple of social science reports an issue – but if you normalize to number of researchers, or to number of faculty, or especially to research funding, I would suggest (albeit without any actual numbers) that this begins to look like over-representation, like a conscious effort by Science to include social science (albeit perhaps with a weakness for flashy, thinly based reports). And certainly the social science papers that are published often seem to reflect vastly less effort, enormously fewer man-hours consumed, than do the other reports.
dsquared 05.08.15 at 2:11 am
Freddie, I like your blog a lot, but you’re way out here. It’s true that populist critiques of sampling are often way out (I actually coined the phrase “Devastating Critique” on this very blog to describe “a critique of a piece of statistical work based on points made in the methodology section of the actual paper”). But this is not the sort of “close-to-the-line” case that John McEnroe used to get angry about. It’s a self-selected sample, specifically self-selected because of their political identification, used to generalise about all Facebook users, when the generalisation is “Facebook users do X, Y and Z based on their political identification”. You’ve gone off at half-cock here – this would be a valid point in a lot of contexts but actually in this case Eszter is dead right.
Belle Waring 05.08.15 at 4:11 am
“these comments demonstrate the common condition of people who aren’t researchers radically overestimating what the standards of current research publication are.”
1. Freddie, there is no plain meaning of this sentence, when it is preceded by fewer than ten comments, that does not include Eszter’s comments on the issue as being among those belonging to “people who aren’t researchers.” Starting the sentence with “I don’t disagree” does not magically transmute the following words in some way. It’s like you think calling “no backsies” can force people to not be offended when you say offensive things. That’s not a real thing. Relatedly, responding to the quite reasonable pushback “but we are researchers, though” with “fuck you and your credential-boasting ways!” is galling at best, tediously and successfully derailing at worst. Other people, if Freddie wants to flounce in and out of comments in a continual netizen pavanade, let him.
2. If people are going to be sexist dicks on EVERY SINGLE MOTHERFUCKING ONE of Eszter’s posts, by explaining in comments that:
a) the thing she reports on isn’t real (“but she’s reporting on her own experience, how–” “Isn’t real!”)
b) it’s real but she’s totally misunderstood it
c) it’s real but she has taken it out of context because of the lamentably limited scope of her experience and qualifications, which failing the male commenter will now remedy at length,
I suggest a rule that you may not say anything to this effect until after comment 50? A very strict rule? Like, I will gank it? The sort of rule where your comment will leave the thread to go to The Great Blog in The Sky, whither no posted comment may follow, leaving no sign of its passing, like a wind that ripples across the Sahara in the night and is gone? Then people could talk about the actual topic for some time before the (apparently totally inevitable) hit on Eszter comes in. I guess I’ll just have to set up a notification so that I can try to come over in time, to disappear people like the liberal squish Djerzhinsky that I am. There are female posters on our roster who post rarely or never. People have various reasons, but can anyone imagine that this continuous barrage of bullshit makes people pull back when they might have written something which gave you a chance to swing your dicks around in the comments so hard they fly off due to centripetal force, and apparently you’re into that but I am bored with it raining hot dogs… which interested you? THINK ABOUT WHAT YOU ARE SAYING. YOU CAN BE BETTER THAN THIS.
Now talk for a while about the actual topic, or I will cut you.
Belle Waring 05.08.15 at 4:30 am
I’m not sure why I should need to add this, but it’s trivially easy to disagree with Eszter’s claims and say you think they’re false without implying that she is wrong because she is not a researcher/doesn’t know enough about the literature/is taking it the wrong way/is too sensitive/maybe people chose the shirts themselves, did you ever think of that missy?/that’s not how computers work/maybe fewer women than men submitted papers for the conference, did you ever think of that, missy?/etc., etc., …
JPL 05.08.15 at 5:01 am
I would just like to note that the spark for this little conflagration “about Freddie”, which is now thankfully being put out, was the comment by Freddie deBoer @14 that said, in apparent response to the previous comment @13, simply, “Sorry. Don’t agree.” That’s not enough. You need to identify, in what was said by the commenter (adam.smith), just what it is that you disagree with, and say why you disagree. That comment was not helpful, because its lack of specificity left too many possibilities open. In any conversation where people are sincerely trying to make serious observations, one should try to be as helpful as possible to the addressee. It’s even harder to be a mind-reader over the internet. (And it would, I think, actually have been more polite to have omitted the “Sorry.”)
The combustible conditions in the tinder were created in the preceding comment by Mr. deBoer @10, where the first sentence says, “… but these comments demonstrate the common condition of people who aren’t researchers radically overestimating what common research standards of publication really are.” Which comments exactly are being referred to by the phrase “these comments”? At that point I would identify key comments as including Eszter’s OP, the linked study by Christian Sandvig, and adam.smith @7, all of whom evidently are active researchers and experts in their fields. Thus adam.smith’s response @19 is understandable. Later, @16, deBoer clarifies that he is talking about “populist critiques of research methodology” (people on Twitter as opposed to experts?). In that case, it would have been clearer in 10 to say something like, “the comments of people who are not researchers (on Twitter?) demonstrate …” (although people in the thread had been talking about expert views, not “populist” ones). The rest of the comment in 10 is a series of blanket statements of nonspecific reference (“intuitive critiques”, “other stuff”, “tons of convenience sampling”), although the post is about one specific case. Yes, deBoer starts his comment @10 with “I don’t disagree”, but again, it’s not clear what exactly he’s not disagreeing with. (Warren Terra’s comment @9? But then the rest of that sentence doesn’t seem to relate to the comment @9.) And after all the indignant responses, there is still no clarification of what Freddie deBoer disagrees with in what adam.smith said.
Sorry to extend the attention to the “Freddie” exchanges; I just found them interesting from a linguistic point of view, and am procrastinating instead of marking the papers I need to mark.
JPL 05.08.15 at 5:09 am
Sorry, I hadn’t seen Belle’s comment. As usual she says everything in a much more interesting way. Now to mark my papers and listen to Alabama Shakes.
Main Street Muse 05.08.15 at 10:04 am
I don’t really understand numbers at all; still puzzling over the recent measles “epidemic” of 130 people in a land of more than 300 million.
And weren’t we all gonna die of ebola just last fall?
We seem to love certitude even if there is none.
From anecdotal observations (no statistics to back up this claim), Freddie gets really cranky if people don’t agree with him.
TM 05.08.15 at 12:58 pm
40: Why should the merit and relevance of research be measured by how much money it cost to produce, rather than by its scientific value? Oh wait… are you saying …?
Belle Waring 05.08.15 at 2:24 pm
MSM: we all did die of ebola, remember? Oh crap, there are fresh brains left somewhere in our otherwise desolate zombie nation!? Track down that IP address… Also, I got dibs on the pineal gland.
Bloix 05.08.15 at 3:16 pm
None of the comments that disagree with or downplay Eszter’s criticism argue that she is mistaken when she says that the sample is non-representative of the population and therefore the conclusions are worthless. Some people seem to be saying: Don’t you understand?? ALL published social science research is not worth the glossy paper it’s printed on!!!!
Is that really the argument? That Eszter is a fool for assuming that the point of publication is to disseminate knowledge, and not merely to create a c.v. that will get someone tenure?
Bloix 05.08.15 at 3:32 pm
#46 – MSM, epidemic is one of those words, like significant or fruit, that means something different to a specialist than it does to a lay person.
To an epidemiologist, “epidemic” (the adjective) refers to a sharp, sudden rise in the incidence of an infectious disease above the usual (“endemic”) levels. This doesn’t necessarily mean large numbers in absolute terms – it means a large percentage increase. To avoid confusion, when talking to the public the CDC often uses the word “outbreak,” but in technical writing people say “epidemic” and journalists find epidemics more exciting, so …
http://www.cdc.gov/ophss/csels/dsepd/ss1978/lesson1/section11.html
What lay people call an “epidemic” is, in epidemiological jargon, “pandemic” incidence of disease.
http://en.wikipedia.org/wiki/Pandemic
TM 05.08.15 at 3:37 pm
Having briefly looked at the study, it appears both flawed and relatively banal. People’s Facebook friends tend to share their outlook? How surprising; rush that to Science Magazine. Except that one wonders why moderates have so few moderate friends. Even a casual reader at that point should wonder about sampling issues (interestingly, the article doesn’t give an ideological breakdown of its sample).
I do think that social science is underrepresented in Science and when something does get published, it is often faddish like this (internet! social media! network graphs!!!)
Ronan(rf) 05.08.15 at 5:15 pm
Eszter – is your paper “Is Bigger Always Better?…” available anywhere (even in an earlier draft) non-paywalled? (Obviously no problem if not, I just wasn’t able to find it through Google.)
I have to also say Freddie be Mad’s first two points are very astute.
On to something a little less related – the mention of epidemiology has offered a lead-in to something I’ve been wondering about for a while, but I don’t want to divert the thread away from its current course towards a cliff edge, so if anyone knowledgeable would be interested, I’ll open this up here:
https://ronanfitz.wordpress.com/2015/05/08/what-could-an-epidemiology-of-violence-tell-us/
and would welcome comments where applicable. (don’t reply here, I’ll approve all comments if there are any takers to school me on such subjects)
David Steinsaltz 05.08.15 at 8:21 pm
I’m not sure where anyone got the idea that Science is a serious peer-reviewed publication. It’s an academic ambulance-chaser. It’s more important that the results be sensational than that they be right. Speaking as a professional statistician, the standards of statistics in Science are generally abysmal, and they have no interest in correcting the methodological errors.
Donald A. Coffin 05.08.15 at 10:38 pm
For those who don’t wish to scroll back to the beginning of this, I quote in its entirety Freddie’s original comment:
“I don’t disagree, but these comments demonstrate the common condition of people who aren’t researchers radically overestimating what common research standards of publication are. Typically, intuitive critiques of sampling methodology imply standards so high nothing could ever get published. I mean, if you think this is bad, you should see all the other stuff that does get published. There’s tons of convenience sampling in published literature because in many or most research contexts, there is no practical way possible to collect a truly random sample.”
My problem is that I take this to be an argument *against* raising methodological issues about published research–because there’s a lot of sloppy/biased/badly done research, we shouldn’t complain about it or point it out? Or is there a more subtle point here that I am missing?
Collin Street 05.09.15 at 12:00 am
> Or is there a more subtle point here that I am missing?
Confused language reflects confused thoughts. If someone’s thoughts are a muddle, then an accurate reduction of their thoughts into words would also be a muddle: if the words were to convey any coherent meaning they would cease to reflect what the speaker thought.
js. 05.09.15 at 4:06 am
What the fuck does FdB have against CT? Seriously. Is there some famous past post I’m unaware of?
jackrousseau 05.09.15 at 11:54 am
So I read the post, thought “yeah that’s pretty bad, they should be on the lookout for things like this considering they’re a top journal”, saw FdB’s first comment and thought “yeah, that’s a good point, people tend to reference everything against these ideals that are increasingly passe in the corrupted juggernaut we call academia, while not ceasing to point out bad practices we should keep perspective about it – probably a more useful approach in the long term to fixing these problems”, and then wondered how on Earth things devolved so quickly.
There must be some sort of Dave Graeber-esque bad blood here, right? Maybe I don’t read the relevant radicals: I know CT is liberal and not, say, anarchist, but that doesn’t stop me from reading it any more than it stops me from checking out Krugman’s arguments against austerity. I haven’t heard of a liberal Politburo here, just horrible reactionary trolls like Brett Bellmore.
OK, going back to an extended thought I had when reading FdB’s first comment: first, is it possible to reform academia at this point? The publish or perish syndrome, the increasing risk-averse bureaucratization, the specter of bad practices (especially statistical ones) hiding behind buzzword shields like Big Data, the Bayesian arguments about the literature being full of false positives – do these mark the final stages of modern academia as we understand it, are they a symptom of the wider problems of society or are they simply unrelated problems with discrete, plausible solutions? If the latter, and zooming in to the problem at issue, what’s the best way to go about fixing it?
Should we tacitly pretend that there is elsewhere the Statistics Ideal being practiced (if even by omission of a discussion of the state of the average publication etc when it comes to this stuff) when taking Big Shiny Papers out behind the woodshed, for maximum effect and critical attention? Or would it be better as a whole to keep emphasizing the fact that this is really a terrible problem affecting tons of papers basically everywhere, and that almost every journal is guilty to some degree, and Twitter storming about a particular paper is not as effective as, say, getting some sort of organized Retraction Watch-style group to shed light on the issue? I’m honestly not sure.
Thanks Eszter for the thought-gunpowder.
Consumatopia 05.09.15 at 8:05 pm
Sandvig referred to this briefly, but to me the strangest part of this result is that they felt the need to compare individual choices to algorithms as sources of self-filtering. Just because there is a larger bad thing doesn’t, by itself, justify the existence of a smaller bad thing. But it’s also not clear that individual choices to avoid content have the same problems that algorithmic filtering has. You might choose not to click on that link from one of your right-wing relatives, but at least you know it was there and the opinion is out there. And, morally speaking, there is a huge difference between choosing to avoid content you disagree with by yourself, and making that decision for somebody else. Especially when most Facebook users probably don’t even realize that their content comes pre-filtered.
Kiwanda 05.12.15 at 1:40 am
Thomas Leeper’s post about responses to the study, including this one, may be of interest.
He says, in response to this post, regarding the study’s generalization to the larger Facebook population, “In this case, it is not obvious that the authors intend to generalize to other groups – maybe they do, maybe they don’t, they’re not really clear on the question.” That is, he doesn’t seem to notice, or maybe care, that the sample conditions were described only in the supplementary material, which was a particular point here.
Eszter Hargittai 05.13.15 at 12:46 pm
A piece in HuffPo quotes the lead author as stating the following in response to some of the critiques:
“Yes, approximately 4% of all Facebook users have identified their ideology, but the thing to keep in mind is that we were not interested in effects for all users,” he said. “We wanted to measure [the] extent to which people are exposed to viewpoints. People need to have viewpoints to do that.”
This comment suggests that he thinks every FB user who has a political viewpoint actually identifies it in the About info on their FB account. As far as I know, we have zero evidence that that is the case. Does he have evidence of this? Can he please share it? I just checked the accounts of some friends who I know for sure have political views on the spectrum the study uses, and none of them have indicated this to FB in the way relevant to the study.
Also, I maintain that the way they wrote up the study does not make it sound like they are restricting their findings to a very narrow group of users.