A while back, I was invited to referee a paper for an academic philosophy journal that requested the report back within 60 days. Really, 60 days? This provoked two thoughts in me. First, I’ll never submit to this journal. If you already give referees 60 days, how long will the entire process take? Second, why does it often take so long in (political) philosophy, ethics and related fields to get papers reviewed by journals?
What could be the reasons why it takes so long, how does this compare to other fields, and what (if anything) can be done about it?
First, possible reasons. If a journal has only one editor-in-chief who acts as the first filter of selection, they may block out a limited number of days a month in their agenda to work through the pile of papers submitted to the journal. Excessive workloads are increasingly a problem for everyone in academia, and to the best of my knowledge most editors do their editorial work on top of their other tasks, entirely voluntarily. So it’s quite understandable that it can take editors a while before they get to read a paper. Yet by the time the editor has passed the paper on to the Associate Editor (AE), who then has to start searching for reviewers, several weeks may have passed. (Not all journals have this system; at some journals the paper ends up very quickly with the (associate) editor responsible for either desk-rejecting it or finding referees, so only one person reads the paper, which limits the number of people who might lack the time to take it further in the process.)
Second, it is increasingly hard to get scholars to agree to referee papers. This is not only my own experience as an AE; on a regular basis an (associate) editor complains on FB or Twitter about how often scholars decline to review for them (or, worse, don’t even respond to the request/invitation to review a paper). How to address the limited availability/willingness of referees is something we’ve discussed here before, and surely not for the last time.
Third, some (associate) editors unfortunately do not (and often cannot) sufficiently prioritise their editorial work. Perhaps they are genuinely overburdened, and should have resigned from their post ages ago. Whatever the reasons may be, both in my own experience with submitting papers and in what I’ve witnessed with the PhD students and postdocs I’ve been working with, it sometimes takes unacceptably long for a journal to get back to the author. The record I’ve heard of was a political philosophy paper that got desk-rejected (hence: no reports) one year after submission. The journal said it couldn’t find reviewers.
Fourth, many referees let papers sit too long on their desk. As referees, we receive a deadline, and many of us send the report back the week before (or after) it. A while ago I started trying to change this for myself, aiming to review the papers I accept within a week. The results are mixed: I’ve managed to turn in a handful of reports within a week, but there have also been a few papers for which I submitted the report around the deadline. The more general question is what the point of being quick would be if the other referees sit on the paper for 30, let alone 60, days. Speeding up one’s own return time only speeds up the entire refereeing process if all reviewers do it. Perhaps referees should automatically be informed when another referee turns in their report?
So is it possible to do this differently? I have two experiences that suggest it might be. The first was a couple of years ago in the field of bioethics/medical ethics. I was invited to submit a paper, and within one week of submitting it I got two very helpful referee reports. I was astonished, and told the editor handling the paper that I had never experienced such a speedy review – to which she responded that this happens regularly at this journal. This raises the question of to what extent my observations are specific to the social sciences/humanities journals for which I tend to review. Perhaps scholars in the medical, life and natural sciences have a different set of social norms regarding reviewing that we could copy? I genuinely do not know – so tell me if you do.
The other recent and very pleasant experience I had was with a political philosophy editor at a general philosophy journal, who found two referees willing to read a paper and write up their comments within a week. But how can editors assure themselves of enough referees willing to write reports relatively quickly? Perhaps this was just a one-off lucky case?
The reason I worry about the slowness of the refereeing process is that for our junior colleagues on the job market, having papers sit with journals can harm their job prospects. It shouldn’t be the case that those lucky enough to encounter fast AEs and referees end up with a more impressive publication list than those who have bad luck with editors/referees. More generally, this seems to be an inefficiency in the publication process, and one might wonder whether there could be better ways to organise it – if only we could coordinate…
What would happen if we only accepted to review papers that we knew we could/would/were willing to read within a week (pending, of course, unexpected things happening in our private or professional lives)? One might wonder whether editors would then find it even harder to find referees. But would this be so? I don’t know.
Perhaps one of you has a better idea on how to speed up the refereeing process in academia. Or should we simply accept the current situation as the best it can be?
One might think this is a minor issue among the myriad of problems that contemporary academia faces, or that there are already too many articles being published anyway. But we have plenty of PhD students and junior colleagues who are on the job market, or will be applying again this Fall (the Northern Hemisphere Fall, that is). For them especially, the speed of the whole process does matter.
David Steinsaltz 09.13.21 at 9:20 am
60 days! We mathematicians can only dream of such speed. 6 months is more typical, and I’ve waited more than a year at times for first responses from journals.
Margo Trappenburg 09.13.21 at 9:56 am
If I were asked to referee a paper within a week of receiving it, I would feel no hesitation in refusing (unless the abstract looked really, really interesting). Whereas I could not really claim to be too busy for sixty days after receiving the paper … So my guess is that editors want to have as many referees as possible. In my view it would also really help if they did not ask you to re-evaluate the paper after an R&R but left the re-evaluation to the editor.
A is for Anon 09.13.21 at 10:25 am
My first* venture into jurisprudence was a bruising experience: submission by email, no receipt (until I requested one), and then silence.
Eight months later I got the decision to reject. R2 said that there was something there, but that I had basically presented a lot of textual exegesis followed by a theory that might make sense of it, and if it was going to be publishable I’d really need to rewrite it the other way round; R1 took issue with the exegesis point by point, endorsed some but dismissed most of it, and strongly recommended against publishing a paper that was so obviously incorrect. (The worst of it was that, revisiting the paper, I found that some of R1’s negative comments – I stress some, if only for the sake of my own vanity – were actually valid.)
So that was the best part of a year gone before I saw any feedback – and I wasn’t exactly buoyed up with enthusiasm to get back to work when it did come in. (Needless to say, I’d chosen that journal in the first place because it seemed the best fit for that specific paper.)
[I could probably de-anonymise this whole thing without it meaning anything to more than a couple of hundred people – but academia’s a small world, so I’ll remain cloaked if I may.]
*and to date only, although hope springs eternal
Tyler Bickford 09.13.21 at 11:54 am
Seconding Margo’s point about re-evaluating R&Rs. I’ve been asked to read the same paper three times (even after I already supported publication). The editors should ultimately make the decision about what to publish, and my guess is that this is partly an artifact of the big publishers’ newish online submission systems, which take discretion out of the hands of editors and are probably creating more work rather than less.
Matt 09.13.21 at 12:37 pm
For whatever reason, it’s not unusual for me to get referee requests in bunches. So, none for a few weeks or sometimes even a couple of months, and then a bunch within a few days or a week of each other. I don’t know why this is so. (A well-known political philosopher was mentioning – maybe complaining – on Twitter the other day about getting 8 requests in a single day. He at least strongly implied that he didn’t accept all of them.) I very rarely turn down requests, but if I had a couple within a few days of each other, and it was requested that I do them in a week, I’d probably turn down more. I just couldn’t be sure I’d get them done in that time frame. So I worry that asking for a one-week turnaround might end up making things worse. I certainly don’t know that. Maybe it would still be better. But given the way editors often complain about having difficulty finding referees, I’m a bit skeptical that it’s a good idea to make it more likely that people will turn down requests. (I have heard people say that it often takes 6 requests to find a referee. That always seems surprising to me, given that I almost always accept requests, but I’m willing to believe it’s true.)
KLG 09.13.21 at 12:38 pm
In the biomedical sciences we are sometimes asked to return a review in a week; I am a day late this morning on one of those due to problems related to the pandemic. 2-4 weeks is normal and expected. When serving on a grant review panel we generally have 4-5 weeks to prepare for the panel meeting (now on Zoom or equivalent, which saves a lot of money), during which a typical reviewer may have 5-7 grants for which s/he is first or second reviewer (full review) and several as “reader” (impressions if there is something to add to the panel discussion). The best way to speed up the process is to pay reviewers an honorarium large enough to cover their Starbucks tab for a week…cue howls of indignation in 4,3,2,1…
Robert A Gressis 09.13.21 at 2:17 pm
Do journal editors keep databases of reviewers? There seems to me a lot of information that could be collected that would speed the process up. E.g., collect:
Name of the reviewer
How often they accept or decline review requests
How timely are their reviews
What their decisions are for each article
How many reviews they do
How good the editor thinks the review is (obviously, this is subjective; it could be a scale based on a rubric that asks such questions as: how readable was the review? Did the arguments the reviewer offered make sense? Was the reviewer polite, nasty, etc.? But this last category is very time-consuming!)
Finally, there could be a central database that editors share so that a kind of institutional knowledge could develop for which reviewers to select and which not to. I.e., this would be like Yelp for reviewers.
Double finally, re: the OP’s worry about junior philosophers, couldn’t editors collect information from who submits to them, and prioritize junior scholars over tenured ones? So, there would be two lines? This would be like TSA Pre, but for journal submissions.
Sashas 09.13.21 at 2:46 pm
Computer Science is weird in that we tend to deal in conferences rather than journals as the premier publications. All of the conferences I’ve reviewed for have given between 2 weeks and a month for reviews, and I think every period has been extended because not enough of us got our reviews back on time. I could probably return a review within a week if I only got one at a time. The latest conference gave me 4. I ended up doing one of them right away and the other three close to the deadline.
Overall, I’d say it would be great to speed up the review process. That said, it doesn’t even make the first page of my list of complaints. My top priority would be to add a mechanism for reviewers (or meta-reviewers after discussion among the reviewers) to get factual questions answered by the authors. This has been a publish-or-reject level issue for something like half of the submissions I’ve reviewed this year.
Tim Worstall 09.13.21 at 2:57 pm
“entirely voluntarily” might be where the problem lies.
“The best way to speed up the process is to pay reviewers an honorarium large enough to cover their Starbucks tab for a week…cue howls of indignation in 4,3,2,1…”
Something like that might well be the solution.
To argue by analogy, Hardin’s strict division into two of the solutions to commons tragedies was wrong, as Ostrom proved. She showed that social norms could work, but only up to a certain group size. Presumably, the sort of size where social norms could be mutually enforced.
This is only analogy, not a direct mapping. But it’s possible to imagine that getting a sneak preview of the new thoughts of a leader in the field is worth working on as a part of the general working process. Getting that fourth request this week to look over the publish or be damned paper to pad the CV by a fourth rate mind at a third rate institution for a fifth rate journal (to be unkind) might not stimulate those I’ll do it for free juices quite so much.
The size of academia, the number of requests, might mean that it has all passed some sort of Ostrom Limit.
I should note that I’m not an academic so this is only rumination.
Sam Tobin-Hochstadt 09.13.21 at 3:03 pm
I’m a computer scientist, where we have a considerably different publishing culture. Most papers are published in conferences, where there’s a fixed timeline — a deadline for submission, and a deadline for all the decisions, usually about 4-5 months apart, with maybe 1 subsequent month for changes before the final version appears. Of course, this has its own drawbacks — with no real “revise and resubmit” option, good work can get rejected again and again, with different reviewers. A friend shared her experience of having the same paper rejected 6 times before finally being accepted.
That said, journal reviewing in CS takes longer, partially because CS journal papers are themselves longer. But even holding paper length constant, I think I would basically never accept a review request if it had a 1-week deadline.
MisterMr 09.13.21 at 3:15 pm
@Tim Worstall 9
You miss the point that nobody actually buys scientific journals, so many of these journals can’t actually pay people for reviews. This also means that academic publishing doesn’t really follow market rules: because academics have to publish research for their careers, it is they who want to publish in the journals, not the journals who want to publish the stuff and ask academics for material; it is the reverse of the normal expected relationship between publisher and author (though I think in the world of publishing this is quite common).
Also, the OP says: “for our junior colleagues on the job market, having papers that sit with journals can harm their job prospects.”
But in reality, if you help academic A by reviewing faster, he will take the job instead of academic B. So in the end, even if faster reviews might be better, they wouldn’t help the careers of academics as a group; they just shuffle who sits where, while the number of seats stays the same.
Ingrid Robeyns 09.13.21 at 3:34 pm
MisterMr @ 11 – yes, you are right that the number of seats stays the same. So I guess what I want to change is very slow journals. If all publication processes were exactly equally fast (not possible, but let’s just imagine), then there would be no factor related to the speed of journals that would have a different (and morally arbitrary) effect on different authors and their career prospects.
So then perhaps what we need is a social norm within academic journal publishing that prompts us not to take longer than X weeks/months to get back with referee reports (I guess if X were 3 or 4, everyone would be happy). The question would then have to be rephrased as: how can we make sure that there are no journals that take longer than 4 months to return reports and a verdict? I am wondering about acceptable and fruitful changes in editorial structures, habits, practices, nudges and incentives that would make this possible.
Neil Levy 09.14.21 at 2:55 am
I’ve been an editor-in-chief, and am now an associate editor, over nearly 15 years at two different journals. The journal I’m now AE at does keep statistics on reviewers. I’m confident that the main cause of slow turnarounds is reviewers: reviewers not responding to requests at all, not agreeing to review, and sitting on papers when they do agree. Partly this is a cultural issue: philosophers are for some reason proud of their indifference to deadlines. I’ve seen this boasted of on social media numerous times. In any case, when there’s a culture of taking months to review, people feel no guilt about doing so. We feel guilty about falling below the standard, not about abiding by a bad standard. In other fields the norms are different and turnaround times are very different.
We’ve discussed shortening requested review times before. People respond like Margo above: I will refuse requests, because with a longer window I can find time when I’m not busy. This is a fallacy on which there’s empirical work: people make commitments in the expectation that they’ll be less busy later, but they won’t be – most people who say “this week is unusually busy” are wrong, and their current week is an average week for them. In any case, since it’s a cultural problem, we need to change the culture. I would advocate requiring people to review in order to submit, and uninviting them if they haven’t returned their reviews within 14 days.
Neville Morley 09.14.21 at 7:40 am
I generally enjoy reviewing papers, and I don’t think I’ve ever actually turned down a request from a journal; it’s the sort of bounded task that can be fitted into my commute, and offers a break from my own research to think about something else briefly (so, yes, taking on reviews is also a sort of procrastination). Generally I get them done in a week or so – but if that was the specified deadline, I would be turning down lots of requests if they weren’t directly related to my interests; I can feel sure that I will find commuting time that doesn’t have to be spent on emails within a 3-4 week period, but within a week or two this can’t be guaranteed, and reviews (unless directly related) aren’t something I can prioritise for non-commuting work time within such a period.
And as an AE for a journal in a fairly niche field, I have no doubt it would make it harder to find reviewers; we have a standard six-week turnaround, and still get people asking for more time, or declining because they’re too busy. There is a particular problem, at least in this niche field, with hyper-specialisation, so that someone might rule themselves out from reviewing a paper on e.g. C18 adaptations of Greek tragedy in Austria on the grounds that their expertise is in adaptations of Greek comedy – which leaves you with a referee pool in single digits, all of whom know each other, and so have real power to dictate the terms under which they’ll take on a review. As AE I’d be quite happy with a reviewer who knows about C18 Austrian drama in general and a reviewer who knows about the reception of Greek tragedy, and so I devote a fair amount of effort to crafting invitation emails to say, yes, we DO want you to review this paper, and this is why we think you’re fully qualified to do so. And still get lots of refusals on the grounds of expertise.
andrew_m 09.14.21 at 9:27 am
I work in agricultural science and am in the process of stepping off the reviewing treadmill because of a role change. Review requests come in with 2-4 week deadlines, which I have probably missed by a fortnight on average over the years.
I reached a point where I got far more review requests than I could possibly handle, perhaps 3-4 per week. I have to prioritise on the basis of some combination of my degree of expertise, the likelihood I’ll learn something, and encouragement for less-prestigious journals that need to persist because they fill a useful geographical or intellectual niche. There’s no doubt that the journal-publishing systems track reviewers and have metrics for reviews: the reward for a job well done is another job, sometimes from an editor who isn’t really matching expertise.
Par for my annual number of reviews = (annual number of papers I author) x (average number of reviewers per paper) x (scaling factor for the responsibilities of my degree of seniority)
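As a hypothetical illustration of the par formula above (the function name and the example numbers are invented for the sketch, not andrew_m’s actual figures):

```python
# Illustrative only: a rough "par" for annual reviews owed, following the
# formula in the comment above. Numbers below are made up.
def review_par(papers_per_year, reviewers_per_paper, seniority_factor):
    # Each paper you author consumes several reviews from others;
    # the seniority factor scales the share you're expected to give back.
    return papers_per_year * reviewers_per_paper * seniority_factor

# e.g. 4 papers a year, 2.5 reviewers per paper on average, senior scaling 1.5
print(review_par(4, 2.5, 1.5))  # 15.0
```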
I find myself publishing a lot in Elsevier journals (I know, I know): average time from submission to comments back is probably 4 months, another 6 weeks for the revisions, and then 8-12 months in the journal’s queue.
notGoodenough 09.14.21 at 9:28 am
I should first caveat that I can only speak from a very limited perspective, and am purely doing so with respect to my own field (materials science and energy storage), as I have no wish to try to generalise this to other areas or subjects. With that said, I will now throw my own handful of grit into the works :-)
So again, purely from personal experience, there can be considerable variation in review times – one Nature paper, for example, took over 1 year before the reviewers finished their deliberations, while conversely one in Energy and Environmental Science took about 3 weeks. I would say the latter was not an unreasonable period, but the former probably was. Many journals typically aim for 1 month, but 6-8 weeks is not uncommon either.
On the other hand, the perspective of reviewers should also be considered – and while I would by no means claim great expertise, I do typically review about 12 – 15 papers a year. Generally, for the journals I review for, I am expected to turn around my review to the editor in 2 weeks (which is not an exceptionally long time to read and thoroughly review the sorts of papers published in this field). I will admit I do go over the deadline from time to time, though not regularly (and I think the longest I delayed by was an additional week, mainly because I had a confluence of unexpected deadlines resulting in working rather long hours which was not conducive to making a thorough review). This is not a point of pride, but rather a function of fatigue. And though I will refuse to review if I think I cannot make a good stab at it (e.g. if I believe I have insufficient time), I will invariably receive another review request within a day or two afterwards – which suggests to me there is something of a demand for reviewers.
While certainly reviewers can be slow (and indeed, can often be so for bad reasons), I also think it a little unfair to lay the blame solely at their feet or to characterise this as always some sort of point of contrarian pride. I suspect there may be a number of reasons (some of which have no doubt been raised here before), which include the usual points of contention (again specific to my area, though perhaps some universality may exist):
1) Reviewing a paper represents quite a commitment.
I am often called upon to review papers which are 30+ pages (not including SI, Figures, ToC, etc.) very carefully and to an exacting standard, then to make (hopefully) sensible comments and suggestions. It is by no means as simple as merely providing feedback on whether or not the paper is “suitable for publication”, as I am expected to provide discussion regarding the quality of the work and its presentation, how well it fits with the journal and the field, what impact is predicted, spot any errors (from the trivial typographical to significant issues with methodology or conclusions, to actual fraud), and to provide long and detailed thoughts regarding how this work may be improved (sometimes this is easy, such as editing for clarity, and sometimes it can be more-or-less writing a brief dissertation on the topic and putting together a short research project including techniques and data analysis which should be carried out).
This is, I would suggest, hardly a negligible amount of effort.
2) All my reviewing is considered “not work”.
Any reviewing I do has to be in my free time – which, given the expected dedication of work (as previously noted), does in fact represent quite a burden. I often experience quite high levels of exhaustion, and it is not unusual for me to spend most of my evenings and weekends undertaking this endeavour (often for months at a time). And while I am prepared to do this out of a sense of comity and responsibility, I would find it hard to blame anyone who didn’t particularly feel like devoting a good 10-20 hr of unpaid labour per week purely out of a sense of duty (particularly given that, for some of my colleagues in academia, there are often many other things you are supposed to be doing in your “free time” – such as writing grant proposals, grading, preparing lesson plans, balancing budgets, planning activities, managing your postdocs/students, etc. etc.).
3) The ever-expanding workload.
While I have no evidence for this, I do have a suspicion that editors tend to rely on reviewers they have had good experiences with (it would, I dare say, make sense to send work to someone with a track record of fast turnaround times rather than to keep searching for new people you have no data for). Whether or not this is true, it is certainly the case that I am reviewing more papers now than I used to – and one of my older colleagues has the same impression, stating that they have seen something close to a quadrupling of the number of review requests they receive. I doubt this is so much due to any increase in my personal standing within the community, and venture it is simply a function of there being many more papers to review.
There have already been long debates regarding the impact of publish-or-perish, and I won’t propose to rehash them here, but I think it worth pointing out that if researchers are measured by how many papers are produced it is not unreasonable to expect that to be a targeted outcome (you incentivise the behaviours you wish to see increase, after all). To put this simplistically, if the performance target is for every researcher to publish 5 papers, then one would expect that at a minimum (and assuming everyone shares the burden equally) every researcher would need to review 10 papers (2 reviewers per paper is generally the minimum). The more papers set as a target, the more papers to review…
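The bookkeeping in the paragraph above can be sketched in a trivial function (purely illustrative; the function name and the default of 2 reviewers per paper follow the comment’s own stated minimum):

```python
# Back-of-envelope version of the point above: if every researcher must
# publish `papers_target` papers and each paper needs at least two reviews,
# then, with the burden shared equally, each researcher owes:
def reviews_owed(papers_target, reviewers_per_paper=2):
    return papers_target * reviewers_per_paper

print(reviews_owed(5))  # 10 reviews owed for a 5-paper target
```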
TL;DR: while I would by no means say that reviewers are perfect (far from it!), I am unconvinced this is more a matter of personal failings than of a system which demands a dedication of labour that is unrewarded and (if one considers the opportunity costs) may well be actively harmful if pursued to too great an extent. I would suggest that the system, therefore, also plays something of a part in all this.
Specific thoughts:
As a point of pedantry, I think it is not universally true that scientific journals are not purchased (IIRC Elsevier, for example, makes up about 34% of the revenue for RELX, whose total revenue was ca. $9.4 billion). In part this seems (at least to me) to have been driven by that litigious old publishing tycoon, Maxwell. Now this is not to say that paying people to review is necessarily desirable or the most productive route forward – but I am unsure that it would not be possible (however, as this risks diverting back into the weeds about the usual hoary old topics, such as access, I will try to refrain from commenting further on this specific point). Instead, fundamentally speaking, I am a bit skeptical that monetary rewards would help. Not only because I doubt many publishers would be happy to see their profit margins decrease (at least for those journals that are “for profit”), but also because I suspect many would not find it much of an incentive unless the sums involved were a bit more significant than “beer money” (if one is writing an EU grant, I doubt a 10 euro voucher for a journal subscription would exactly lead to a fundamental shift in priorities).
I am also rather unconvinced by the proposal that people should only be allowed to publish if they review for the journal – unless all publishers share a central database, surely the obvious outcome will be that the incentive is to review only for those journals you plan to publish in? One suspects this might well lead to rather unequal distributions of volunteers, which may well exacerbate the situation…
My own (poorly considered) modest proposal
I am unsure that nudges and incentives will lead to much progress here – I think a more drastic change to the culture is needed.
Personally, what I think might help (again, purely speaking from my personal perspective on my field) would be if the demands for meeting targets for numbers of papers were relaxed, and more of an emphasis (at the institutional and funding body levels) were placed on things like reviewing. For instance, if (just as an example, not necessarily as an actual proposal) universities were expected to set aside a certain number of work hours for their research and teaching staff to undertake reviewing, I think this would be far more effective than using sticks and/or carrots at the individual level. In short, I would say a systematic problem requires a systematic solution – all nudges are likely to do is place more burdens on people who (not infrequently) already dedicate a disproportionate amount of their lives to their work.
Indeed, I would like to see systematically more value being placed on these sorts of “community services” researchers are expected to carry out – such as reviewing, replication studies, on-going peer review (i.e. post-publication literature study), etc.
And then, perhaps, subsequently I may collect my magical flying pony and head off to the moon for a scoop of delicious cheese!
Tim Worstall 09.14.21 at 10:28 am
“You miss the point that nobody actually buys scientific journals,”
Well, libraries do. But other than that that’s part of my point. Whether we talk of Ostrom or Polanyi and mutual exchange networks it’s still true that certain incentive or exchange or social management processes work up to a certain size of group of people. Once the group is larger than that then other such methods need to be used.
Kiwanda 09.14.21 at 7:51 pm
The time commitment for reviewing a paper must vary quite a lot across fields, and across papers within fields. A twenty-page paper in mathematics might involve substantial work, sentence by sentence, in verifying each step of a proof, checking that theorems from other papers imply what is claimed they do, and so on. A comparable paper in, say, machine learning or biology, might be reviewable in a quarter the time, or less. Less rigorous, more impressionistic fields, like physics, I imagine take much less reviewing time.
Fields for which the time commitment is greater likely have correspondingly greater difficulty in finding people willing to make that commitment. Finding reviewers can itself be a long process:
– struggle to find two people both knowledgeable in the relevant sub-sub-field, and without a conflict of interest;
– ask them; wait awhile for them to respond;
– ask them again;
– wait awhile for them to respond;
– give up;
– repeat the above until two people agree to review;
– wait awhile;
– ask each reviewer for the report;
– wait awhile;
– ask again;
– they respond “within two weeks”;
– forget about it for a month;
– ask them again;
– no response;
– wait a week or two;
– ask them again;
– they respond “it’s a very busy time; just after Halloween?”
– wait until after Thanksgiving;
– ask them again;
– no response, or the response is “based on a quick skim, I think….”;
– give up on them, start process all over again.
It’s true that there are ways to speed this up, for example: ask four people, hoping for two. This takes more editor time, less elapsed time. Or: give up earlier on people who agree to review, but then don’t. But some people can be very persuasive: they’ll do it soon! Really they will! And hope springs eternal.
Peter Erwin 09.15.21 at 11:10 am
Some notes from the perspective of astronomy:
The usual requested time for finishing a review is 3 weeks (or 2 weeks if it’s one of the “Letters” sub-journals, which are for short, timely/sexy papers). In practice, typical time-to-receipt-of-review turnaround from the point of view of authors (i.e., the time between when you submit a paper and when you receive the review, so including all the editorial overhead) is around 5 weeks; two months is considered quite excessive.
Papers submitted to the main astronomy journals almost always get only one referee, which probably improves the overall speed; my impression is that it almost always takes less than a week for the editors to find a willing referee. (Papers submitted to places like Nature or Science usually get three referees.)
I had an interesting exchange with Mark Liberman of Language Log in the comments of this post, about the contrast with the field of linguistics, where he presented evidence that the typical time-to-receipt-of-review was four to nine months. (And also that the time between acceptance and actual publication was eight months to a year, a rather strong contrast to the one or two months it takes for an accepted astronomy paper to appear in final form on the journal’s website.)
JBL 09.15.21 at 11:07 pm
Like David Steinsaltz, I’m a mathematician, and I had difficulty parsing the indignation in the first paragraph of this post: I have only on one or two occasions been asked to review a paper in less than 60 days. Some journals ask 3 months, some 4, one journal I am quite fond of asks referees to write back within 6 months. I recently received a first report on a 35-page (so, medium-length, but fairly technical) paper just over 2 years after it was initially submitted; a companion paper (longer, equally technical) was submitted in March 2019 and I have not received referee reports. The AMS keeps records of publishing times in mathematics; e.g., here’s the report from 2019: https://www.ams.org/journals/notices/201910/rnoti-p1713.pdf
My spouse is in the biological sciences, where turnaround on the timescale of a few weeks is normal — but often referees request additional months of labor.
My sense is that nearly all of the typical delay in mathematics is in the “finding referees” and “getting reports from referees” stages (i.e., the second and fourth points above). Perhaps, in other fields, editors do more editing than they do in mathematics!
I think a good rule of thumb is: if a typical paper in your field receives n referee reports, then you should do at least n referee reports for every paper you submit to a journal. But this is a question of quantity, not speed. Ingrid, I think your goal of doing reports within a week is noble, and perhaps Neil is right that there is something to be gained by journals asking for much shorter windows (then giving, say, two extra weeks before moving on to new referees). I know of one journal in mathematics (IMRN) that prioritizes quicker reviews, but that means that one may receive from them a desk rejection for the reason that the paper probably cannot be reviewed quickly.
SusanC 09.16.21 at 7:08 am
“I think a good rule of thumb is: if a typical paper in your field receives n referee reports, then you should do at least n referee reports for every paper you submit to a journal.”
But the acceptance rate can be as low as 10%.
So if you’re a person whose papers actually get accepted (and hence, you are deemed acceptable as a reviewer), you would need to be reviewing 10n (i.e., about 30) papers for every one that you submit.
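The arithmetic behind this can be sketched as a quick back-of-envelope check (illustrative numbers only: the thread’s figures of n = 3 reports per submission and a 10% acceptance rate; the function name is mine, not from the thread):

```python
# Back-of-envelope check of the reviewing-load argument:
# every submission needs n_reports reports, but (on this assumption)
# only accepted authors are asked to review. For each accepted paper
# there are 1/acceptance_rate submissions generating review demand.

def reports_per_accepted_author(n_reports, acceptance_rate):
    """Reports each accepted author must write for supply to meet demand."""
    return n_reports / acceptance_rate

# n = 3 reports per submission, 10% acceptance rate:
print(round(reports_per_accepted_author(3, 0.10)))  # 30
```

This is of course the extreme case; as SusanC notes in the next comment, it assumes the rejected 90% are always the same people and never review.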
SusanC 09.16.21 at 7:21 am
The above argument assumes it’s always the same 90% of authors who are getting rejected, which isn’t fully realistic.
JBL 09.16.21 at 10:31 pm
SusanC, if my paper is submitted to journal A and gets rejected (with referee reports), and then afterwards submitted to journal B and gets accepted, I figure that’s two submissions and I owe 2n referee reports for it. This is somewhat aspirational, in that I don’t keep good enough records to know if I’m really meeting my quota at any given moment. I am sure that I have refereed more than mn papers where m is the number of my published papers, but perhaps not where m is the number of my journal submissions that have received reports. But there is also a distortion that cuts the other way: most of my papers have coauthors.
SusanC 09.17.21 at 10:29 am
@JBL yes, if your paper took two submissions to be accepted, you would need to referee 2n papers for things to balance out.
But the effect I was alluding to is: if you submit to a “top tier” journal, get rejected, then resubmit to a lower ranked journal and get accepted, the top tier may not consider you to be competent as a reviewer. So for things to balance out for the top tier journal, the people whose papers it did accept (and who it considers to be competent reviewers) have to do correspondingly more reviewing.
JBL 09.17.21 at 7:46 pm
@SusanC, I suppose that’s a thing that could happen, but if so it seems to me that it’s more in the category “self-inflicted wounds of overly snooty journal editors” than “systematic problems of broad concern” :)