Dishonorable Citations

by Henry Farrell on October 12, 2005

The Chronicle has a very interesting (if long) “article”:http://chronicle.com/free/v52/i08/08a01201.htm on the ISI citation impact index, which seeks to measure the importance of academic journals by counting the citations that each article in a journal receives. Like all indices, it creates skewed incentives for people to game the system. Authors tailor their pieces to get into the top journals, while journal editors’ choices about which articles to publish may be influenced by whether or not those articles will attract lots of citations (and bump up the impact factor of the journal).
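For reference, the measure at issue is ISI’s two-year journal impact factor. For a given year $Y$ it works out as

\[ \mathrm{IF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}} \]

where $C_Y(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_y$ is the number of “citable items” it published in year $y$ (much of the fine print concerns what counts as citable). Because only citations to one- and two-year-old articles enter the numerator, that is where the gaming concentrates.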

bq. Many other editors contacted by The Chronicle also deny making judgments on the basis of whether a paper will attract citations. But Dr. DeAngelis, of JAMA, says editors at some top journals have told her that they do consider citations when judging some papers. “There are people who won’t publish articles,” she says, “because it won’t help their impact factor.” … Fiona Godlee, editor of BMJ (formerly known as the British Medical Journal), agrees that editors take impact factors into account when deciding on manuscripts, whether they realize it or not. “It would be hard to imagine that editors don’t do that,” she says. “That’s part of the way that impact factors are subverting the scientific process.” She says editors may be rejecting not only studies in smaller or less-fashionable fields, but also important papers from certain regions of the world, out of fear that such reports won’t attract sufficient citation attention. “It’s distorting people’s priorities,” she says, “and we have to constantly fight against that.”

Some journal editors take citation-boosting to an extreme.

bq. In other cases, research articles in a journal preferentially cite that very journal, with the effect of raising its impact factor. ISI detected a clear example of that practice at the World Journal of Gastroenterology. The company stopped listing that journal this year because 85 percent of the citations to the publication were coming from its own pages. (Despite that censure, the journal’s Web site has a moving banner that still trumpets its 2003 impact factor.) …

bq. John M. Drake, a postdoctoral researcher at the National Center for Ecological Analysis and Synthesis, at the University of California at Santa Barbara, sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: “I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal’s themes.” Because the manuscript had not yet been accepted, the request borders on extortion, Mr. Drake says, even if it weren’t meant that way. Authors may feel that they have to comply in order to get their papers published. … Robert P. Freckleton, a research fellow at the University of Oxford who is the journal editor who sent the message to Mr. Drake, says he never intended the request to be read as a requirement. … Mr. Freckleton defends the practice: “Part of our job as editors is making sure that our work is getting cited and read appropriately.” The policy, he says, is not an explicit attempt to raise the journal’s impact factor. But the policy has done just that, and quite successfully, according to The Chronicle’s analysis of self-citations to one-year-old articles — which are important in the impact calculation. In 1997 the Journal of Applied Ecology cited its own one-year-old articles 30 times. By 2004 that number had grown to 91 citations, a 200-percent increase. Similar types of citations of the journal in other publications had increased by only 41 percent.

The article suggests that the problem may get better as new forms of publishing and access shift the spotlight from journals to individual articles. I suspect that this isn’t going to solve the problem so much as displace it – emerging systems for figuring out which articles are or aren’t important will almost certainly be vulnerable to gaming too. But a very interesting read nonetheless (and the Chronicle’s “colloquy”:http://chronicle.com/colloquy/2005/10/impact/ on the topic, starting at 1pm today, should be interesting too).

{ 17 comments }

1

Jed Harris 10.12.05 at 10:53 am

Of course Google has this problem, and presumably has much better solutions than current academic publishing. I’m sure links from one blog entry to another don’t increase a site’s page rank. Surely citations from one journal article to another in the same journal should at least have lower weight than external citations.

More generally there could be problems with clusters of mutually citing journals, but obviously this would be much harder for editors to manage. I know Google has problems with clusters of mutually referencing sites that don’t have many external links, but I bet they have at least some handle on how to deal with this.

A better “influence metric” would be much more useful than an occasional decision to “delist” a journal because it is a bad actor.
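Here is a minimal sketch of the down-weighting idea above – plain power-iteration PageRank over a journal-level citation graph, with an assumed discount on intra-journal citations. The journal names, citation counts, and the 0.25 self-citation weight are all invented for illustration; none of this is what Google or ISI actually runs.

bc. # Sketch: PageRank-style "journal rank" in which a journal's citations
# to itself count for less than citations arriving from outside.
DAMPING = 0.85            # standard PageRank damping factor
SELF_CITE_WEIGHT = 0.25   # assumed discount for intra-journal citations
# citations[source][target] = citation count from `source` to `target`
citations = {
    "J. Applied Ecology": {"J. Applied Ecology": 91, "Ecology Letters": 12},
    "Ecology Letters": {"J. Applied Ecology": 20, "Ecology Letters": 8},
    "Oikos": {"J. Applied Ecology": 15, "Ecology Letters": 25, "Oikos": 5},
}
journals = sorted({j for src, tgts in citations.items() for j in [src, *tgts]})
def journal_rank(iterations=50):
    rank = {j: 1.0 / len(journals) for j in journals}
    for _ in range(iterations):
        new = {j: (1 - DAMPING) / len(journals) for j in journals}
        for src, targets in citations.items():
            # Down-weight self-citations before normalizing outgoing weight,
            # so padding your own pages buys less rank.
            weighted = {t: n * (SELF_CITE_WEIGHT if t == src else 1.0)
                        for t, n in targets.items()}
            total = sum(weighted.values())
            if not total:
                continue
            for tgt, w in weighted.items():
                new[tgt] += DAMPING * rank[src] * w / total
        rank = new
    return rank
for journal, score in sorted(journal_rank().items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {score:.3f}")

Note that this only discounts first-order self-citation; the mutually citing clusters mentioned above would need something more global, such as detecting groups of journals with few external in-links.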

2

Taylor Gilbert 10.12.05 at 11:42 am

Could it be another instance of the Billboard Top 100 or Best Seller list effect? The effect goes something like this: the mere fact of getting on the list at all gets your name out there, which amplifies your sales, which in turn results in additional sales.

Or is it that academics are playing the same game that web geeks have been playing for years: how to get a higher ranking on search engines? Although one would expect academic and grant-review committees using such means to know how to evaluate the results for statistical significance.

questions:
- will the rise of academic blogging and trackbacks be included in future metrics?
- how will they adjust for the rise of web-based journals?

3

abb1 10.12.05 at 12:04 pm

I’m not too familiar with academic journals (other than having once worked for a software company facilitating peer reviews), but wouldn’t circulation/subscription volume be an obviously better metric?

4

Tim Worstall 10.12.05 at 12:13 pm

abb1: Sorta depends. A journal that reaches all 500 people in a speciality might be “better” than one that reaches 5% of a larger speciality.

5

abb1 10.12.05 at 12:19 pm

But the same must be true of citations: a brilliant article in some godforsaken area is unlikely to collect many.

6

Keith 10.12.05 at 2:37 pm

I always found it amusing when I saw how often my professors in grad school cited themselves. But then, shameless self-promotion has always been a hallmark of my field (Library Science).

7

Peter 10.12.05 at 4:31 pm

I knew it! Groups of academics citing each other’s work! Someone should put a stop to these illegal citation rings, making a mockery of peer review!

8

Commenterlein 10.12.05 at 4:54 pm

Citation counts are probably still a better measure of the importance and impact of an article than most other measures one might think of. The quotations above make it sound as if editors’ considering the citations a paper is likely to garner were an a priori bad thing – I don’t believe it is. Preferring articles that are likely to be read widely and influence a lot of others’ work over articles that may be brilliant but interest nobody seems perfectly reasonable.

The games played by editors to drive up the citation counts and impact factors of their own journals are obviously detrimental to the scientific process and hence unacceptable. But this simply implies that impact factors should be calculated without including self-citations at the journal level.
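That fix is easy to state concretely. A toy sketch, with invented figures (the real JCR calculation has more wrinkles, such as what counts as a citable item):

bc. # Toy two-year impact factor, with and without journal self-citations.
# All figures below are invented for illustration.
def impact_factor(cites, items, exclude_self=False):
    # cites: (citing_journal, cited_journal) pairs in the two-year window
    # items: citable items the cited journal published in that window
    counted = [(s, t) for s, t in cites
               if not (exclude_self and s == t)]
    return len(counted) / items
cites = [("J. X", "J. X")] * 60 + [("Other", "J. X")] * 150
print(impact_factor(cites, items=100))                     # 2.1
print(impact_factor(cites, items=100, exclude_self=True))  # 1.5

On those made-up numbers, stripping self-citations knocks the factor from 2.1 down to 1.5; in an extreme case like the World Journal of Gastroenterology, where 85 percent of citations were internal, it would collapse almost entirely.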

9

Mike Otsuka 10.12.05 at 7:29 pm

While wasting time surfing Web of Science, I recently discovered that, by some enormous margin, the most cited article of G. A. Cohen’s is “On the Currency of Egalitarian Justice.” In thinking about why that particular piece would have been cited so many more times than other influential articles of his, it occurred to me that this article, unlike most of his others, hasn’t yet been reprinted in any of his books. Once an article is reprinted, it’s as likely to be cited via the book in which it’s reprinted as via the journal in which it originally appeared.

So perhaps in the future journals will try to require that authors of articles not reprint them in their collected works.

10

vivian 10.12.05 at 8:09 pm

But commenterlein, one expects that a ‘truly prestigious’ journal – one that people know has a lot of articles by people one finds consistently interesting, on extremely important topics (to the subfield) – should cite other articles in the journal. I’d be surprised if articles in JAMA, Nature, Cell, Ethics, JPP, PAPA, etc. didn’t cite work from those journals – they carry too much good work (or work by giants) not to cite them.

Academics I know tend to have a small set of journals they respect a lot, based on how many articles in each issue they’d consider relevant and good work. The larger set of journals they merely browse, seeking out particular issues for specific articles or authors. These ‘idio-rankings’ aren’t standardized, but they are what drive their reading and recommending. I can’t see most of these profs changing their habits based on ISI algorithms, even if ISI had good ones. In unfamiliar subfields they’d just ask a colleague, or roll dice or something – anything rather than defer to some institution.

11

Taylor Gilbert 10.12.05 at 9:10 pm

The more insidious aspect of all this is that when BAD research gets through the filters and becomes popular, it is tough to eradicate from people’s heads. How about Freudian literary analysis or, worse yet, Marx? These guys have not been taught in their own fields for decades, yet their ideas are all over the social sciences.

And what is far more insidious is when the research leads to a reform of public-education teaching methods. If you have not looked at a public high-school history textbook in a while, you might be surprised.

12

Scott Eric Kaufman 10.12.05 at 9:40 pm

For some reason I wouldn’t feel nearly so guilty about hawking my blog on this thread as I do on others. Everyone should read it. Really, they should. It’s really important. Know why? Because I say so. All kidding aside, the consequences of this practice in the hard sciences are potentially far more dangerous than another article about Jane Austen or Jack London falling into the void of some specialist publication in the humanities. Insert your own nightmare scenario of mute inglorious cures for cancer or the common cold here…

13

Demosthenes 10.13.05 at 12:28 am

Wait… since when was Marx not part of social science?

14

Kenny Easwaran 10.13.05 at 3:42 am

Commenterlein – in addition to Vivian’s point about prestigious journals expecting to get a lot of self-citations, I hear that (at least in philosophy) it’s also traditional to publish a response to a particular article in the same journal. This sort of journal self-citation is relevant because a journal that gets a lot of it is publishing the articles people want to reply to directly, while journals that get only ordinary self-citation and external citation are publishing papers that make some nice points but don’t inspire direct responses.

Now, of course, this could be a good thing or a bad thing. One thing that draws in direct responses is when an article just seems plain wrong. But another is when it really does say something important that inspires new research, rather than just making some points that are tangentially relevant elsewhere.

15

Andy 10.13.05 at 3:53 am

Within the social sciences, I think there is a specific problem with the ISI impact ratings that promotes gaming: the time periods involved. ISI impact ratings are calculated on cites within two years of publication. Now consider the lead times involved: you do some research, work it up into a conference paper, revise it, submit it, make revisions – we are probably talking two years from conception to acceptance on average, plus another year to eighteen months’ wait before publication.

In these circumstances, most articles won’t be heavy on references to recently published work, so the pool of citations that actually falls inside the two-year window is small – which means a few extra citations solicited by editors can really push up the ratings.
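To put rough numbers on that (all invented for illustration): a journal publishing 100 articles a year has 200 citable items in the two-year window. If natural citation patterns – mostly aimed at older work – yield only 240 counted citations, its impact factor is 1.2. An editor who extracts one extra citation to a recent issue from each of 100 accepted manuscripts adds 100 counted citations, lifting the factor to (240 + 100)/200 = 1.7 – a roughly 40-percent jump, from requests that cost authors almost nothing to grant.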

16

Taylor Gilbert 10.13.05 at 11:55 am

Here is a roundtable discussion of the issue from the Chronicle:
http://chronicle.com/colloquy/2005/10/impact/

Here are some interesting bits:

– Sociologists are not “mathy” people:

by Anurag A. Agrawal:

“Perhaps more mathy people, maybe sociologists, should take a look at this issue.”

“I don’t advocate getting rid of the impact factor… I simply think we should keep an open mind and at least use several indices… (including our own personal judgment of the prestige of the publication venue). There is no way that this process is going to be made simpler, with a silver bullet, and there is also no possibility to make it completely objective. Our only hope is to not be locked into a self-imposed monopoly where impact factor is god. In addition, we should try to close the door on abuses … I think the clear path here is to embarrass the editors or publishers that make compromising requests!”

“It may be worth the effort of some governmental agencies, especially those that fund scientific research, to investigate impact factors and journal practices.”

“Anurag A. Agrawal, an assistant professor of ecology and evolutionary biology at Cornell University, has served on the editorial boards of five journals. He recently published a letter in _Trends in Ecology and Evolution_ decrying some common editorial practices designed to raise citations and journal-impact factors.”

17

Dean Blobaum 10.14.05 at 12:04 pm

Jed Harris said: “I’m sure links from one blog entry to another don’t increase a site’s page rank.”

There is no such thing as “a site’s page rank.” Only individual web pages have PageRank, not whole sites. Nothing in how PageRank is calculated corresponds to the journal as an entity.
