Peer Review

Wendy Warr & Associates, 6 Berwick Court, Holmes Chapel, Cheshire, CW4 7HZ, England. Tel/fax: +44 (0)1477 533837. Email: wendy@warr.com. Web: http://www.warr.com


For the purposes of this article, peer review may be defined as a process in which a manuscript submitted to a scholarly journal is sent to a number of the authors' equals ("peers") for advice on whether the manuscript should be published in the journal in question, and for suggestions on improving the paper [1,2]. Peer review has been in use for over 100 years and has become an essential part of the tenure and promotion process in academia: most authors prefer to publish in peer-reviewed journals and the reputations of the best journals depend upon quality control.

Thoughtful peer review and editorial care maintain high standards and the integrity of science, but peer review is stretched to the limit: it is hard, slow work, carried out by overworked, usually anonymous researchers who get little recognition for their efforts and no financial reward. Journal editors are immensely grateful to their loyal reviewers but, in order to avoid bias, they and the publishers are limited in the inducements they can offer.

Not surprisingly, in recent years some people have begun to question the value and viability of the whole process and to study alternatives. Much of the discussion has taken place in medicine; very little has been published on peer review in the chemical literature. Few chemists seem to know about the International Congresses on Peer Review in the Biomedical Sciences, sponsored by the American Medical Association, the proceedings of which are published in themed issues of that Association's Journal. Are medicine and chemistry different? Almost certainly they are, and here we are concerned mainly with chemistry.

FUNCTIONS OF THE PEER REVIEW PROCESS

The peer review process seeks to filter out bad papers and improve mediocre ones. It has, however, been said that peer review should be seen as a traffic policeman rather than a filter, since there is a "pecking order" of journals [3]: if a paper is not published in the journal of first choice, it may be passed down the line to a second, third, or even "lesser" journal. How can journals be ranked in this way, and how does ACS Publications ensure that its journals rank highly, meeting the advertised aims of high quality and high impact?

Recording how often a journal's contents are cited in the scientific literature has long been the conventional way of measuring the importance of specific publications, and even of the authors themselves. One commonly used criterion is the journal impact factor, a measure of the frequency with which the "average article" in a journal has been cited in a particular year. The impact factor helps users to evaluate a journal's relative importance, especially in comparison with others in the same field. It is calculated by dividing the number of current-year citations to articles published in the two previous years by the total number of articles published in those two years [4].
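
To make the calculation concrete, the definition can be written as a simple ratio; the figures in the worked example that follows are purely hypothetical, chosen only for illustration:

    \mathrm{IF}_{2004} = \frac{C_{2004}}{N_{2002} + N_{2003}}

where C_{2004} is the number of citations received in 2004 by items the journal published in 2002 and 2003, and N_{2002} and N_{2003} are the numbers of citable items published in those two years. A hypothetical journal that published 150 articles in 2002 and 250 in 2003, and whose 2002-2003 articles attracted 1,200 citations in 2004, would have

    \mathrm{IF}_{2004} = \frac{1200}{150 + 250} = 3.0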

However, the widespread availability of electronic journals on the Web has enabled Chemical Abstracts Service (CAS) to provide a new measurement: a tally of researchers' actual requests (Real-Time Document Requests) for full-text articles transmitted via CAS search services. CAS Science Spotlight [5] highlights the most requested articles in chemistry and related sciences. Typically, there is little if any correlation between the list of most cited articles in a year and the most requested articles of the same year.

There is a wide range of literature relating to journal quality evaluation and the many problems associated with impact factors have been well publicized [6,7]. Real-Time Document Requests are too recent to have been researched in any detail but it is worth noting that the Journal of the American Chemical Society has won a CAS Science Spotlight award.

In research on peer review, three criteria may be used to determine the quality of the process: reliability, fairness, and predictive validity. If the system is reliable, reviewers will generally be in agreement and, in an experiment, a paper published this year should not be rejected if it is resubmitted next year with, for example, an altered title and author list. The system is fair if the reviewers are truly the peers of the authors and if there are no biases on the part of the editor and reviewers. If the peer review process is valid, and only "good" papers are published, it is likely that those papers will be well read and frequently cited and that the journal will have a high impact factor: this is the criterion of predictive validity.

In an age when anybody can publish anything on the Internet, and the number of articles submitted to journals increases relentlessly, separating out the reliable, good, and valuable results becomes a problem. Peer review should help in extracting the signal from the noise. In 2003 ACS received almost 45,000 papers and published over 24,000, theoretically reducing readers' overload by about 46%. Some people would, however, argue that few scientists now restrict their reading to a handful of eminent journals and that, since most work finds some outlet for publication in print or on the Internet, peer review is probably not significantly reducing information overload. It would be ideal if peer review could detect abuse, fraud, plagiarism, and the like, but this is not feasible in all cases. The ethical obligations of editors, authors, and reviewers are discussed elsewhere in this book.
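
For readers who want to check the arithmetic behind that percentage, taking the two round figures above at face value:

    \frac{45{,}000 - 24{,}000}{45{,}000} = \frac{21{,}000}{45{,}000} \approx 0.467

that is, roughly 46-47% of submissions did not appear in an ACS journal; since slightly fewer than 45,000 were received and slightly more than 24,000 were published, "about 46%" follows.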

Another function of the peer review system is to give authors feedback but, unfortunately, the suggestions of the first round of reviewers are often ignored if the paper is published instead in a journal of second choice [8]. ACS has a mechanism for ensuring that such feedback is used if the article is deemed more suitable for publication in a different ACS journal from the one to which it was submitted. If, for example, the reviewers for a manuscript submitted to the Journal of Medicinal Chemistry conclude that the paper is better suited to the Journal of Chemical Information and Modeling, the editors of the latter journal can make use of the original reviews (with one additional review), thus avoiding delay as well as ensuring that the text of the manuscript is improved.

Peer review also has a significant economic impact on publishers. Some workers suggest that the cost per published paper might be about $500, but estimates vary. On the other hand, whether or not a journal is peer reviewed may affect an author's decision to submit a paper to it, and, from the point of view of the publisher's marketing department, peer review is bound up with the image and branding of the journal.

PEER REVIEW IN PRACTICE

Let us now look at the mechanics of the review process. An author, or group of authors, first writes a paper and chooses the journal to which it will be submitted. The corresponding author submits the manuscript and sends a letter to the editor, sometimes suggesting reviewers, occasionally asking that certain researchers should not be chosen as reviewers. An ACS editor would usually respect such requests, within reason. It is worth noting that reviewers selected by authors do not always supply especially favorable reviews.

The author gets an acknowledgment of the submission, then waits. Speedy publication is important to ACS: for Nano Letters and Organic Letters, the median time from receipt of the manuscript to Web publication is just seven weeks. To some authors, however, even a few weeks can feel like an eternity. What is actually happening at this stage?

The editor will read the manuscript and note any immediately obvious problems. Publishers that have a single centralized editorial office can sometimes detect unethical attempts at duplicate publication at this stage, but this is harder for ACS because of the size of its operation and because its editors are somewhat independent and widely dispersed among multiple editorial offices.

Rejection of manuscripts without review is now common among major chemical journals, but most manuscripts submitted to an ACS editor are sent for review almost as soon as they are received. Editors go to great lengths not to show bias against authors whose mother tongue is not English: non-idiomatic English can be handled, but material in incomprehensible English cannot be peer reviewed. Papers dealing with an obscure topic, covering a subject of minority interest or material outside the scope of the journal, displaying poor presentation, lacking novelty, describing routine experiments, or using invalid statistics are also likely to be rejected before peer review.

As the number of manuscripts increases, it gets ever harder to find suitable reviewers. Reviewing is a thankless task, but the editor's job is not an easy one either. At some times of year it seems that every researcher is busy writing and submitting papers yet all the reviewers are on vacation, which is odd given that authors and reviewers are the same group of people. Human nature also dictates that some authors who are slow in reviewing manuscripts for others may expect their own manuscripts to be reviewed quickly.

How does an editor choose reviewers? Good sources are the list of literature references, the ACS Directory of Graduate Research, the editor's own network, the pages of the journal in question and of competing journals, searches of the Internet or of the reviewer database, SciFinder search results, and suggestions from another editor, a member of the journal's advisory board, or a colleague. A reviewer who is unable to do a review can also be asked to recommend someone else.

Editors usually assign two or three reviewers. Disgruntled reviewers have pointed out that the new electronic systems encourage an increasing number of submissions, some of them perhaps premature, and that it is too easy for an editor to assign an unnecessarily large number of reviewers to a paper. There is also a danger that a reviewer will delay a review in the hope that others will comply quickly and the reviewer in question will never need to look at the manuscript. In this editor's experience, if three reviewers are assigned, at most two reviews will most likely result; if only two are assigned and more have to be assigned later, there can be a long delay before a decision to accept or reject. This editor has had cases recently where seven, eight, or even nine reviewers were assigned before two or three valid reviews were obtained.

Typically, reviewers are given about ten days to three weeks in which to submit a review (or just five days for short papers). Experience shows that not many comply within that time unless reminded. Sometimes the assignee immediately agrees to review; often he or she refuses, being too busy, traveling, already doing six other reviews, unqualified in the subject, or facing a conflict of interest. Sometimes the editor hears nothing until the review appears; sometimes nothing is heard at all. In many cases a timely review does not appear and the editor must start on the reminders. Sometimes an assigned reviewer still does not respond and the editor then has to assign a replacement.

Eventually, some reviews are sent to the corresponding author, either with a (preferably not too unkind) rejection note or with a request for revision. A revised manuscript is usually received at a later date, but sometimes no revision appears and ACS withdraws the manuscript. A manuscript might be revised more than once, and reviewed again (sometimes more than once), before it is finally accepted or rejected. In all cases, it is courteous of the editor to tell all the reviewers what happened and to send them all the (anonymous) reviews, author responses, and so on. A rejected manuscript may go through the whole cycle again with the editor of another journal.

Occasionally there will be critical feedback from a reviewer who feels that his or her views have been ignored, or from an author after a rejection decision has been made. Journals have evolved their own rules for dealing with these issues; sometimes, for example, the editors of a specific journal will adjudicate for each other. Occasionally a "rejected" author will insist on a different editor for the peer review of his or her next submission, and such requirements can usually be accommodated. Positive feedback is also not unknown: some authors even send kind comments thanking the editors for their efforts.

GUIDANCE FOR REVIEWERS

Some journals consult reviewers about their willingness to review before sending the manuscript; others save time by not doing so. It is much appreciated if a reviewer lets the editor know very soon if he or she is unable to do the review and, perhaps, recommends an alternative reviewer. Reviewers should disclose any conflicts of interest (e.g., working on the same problem, collaborating with the authors, or being offended or otherwise biased) and will probably have to decline to review if the conflicts are serious. Reviewers should also disclose if they are unable to judge certain aspects of the paper. They should read the manuscript carefully (some, seemingly, do not) and reply within the deadline.

Probably the first question to answer is whether the manuscript is likely to be of interest to the broad readership of the journal. Reviewers may also notice errors such as missing tables, figures or references, or extra ones not referred to in the text. The title and abstract should accurately reflect the content of the paper. Reviewers are not copy editors: they may point out typographical errors if they wish but their responsibilities go well beyond that. Editors will not be happy with a review that says "publish without change" unless there is a good explanation of why the manuscript is so perfect.

Above all, the editor will want to hear about the scientific value, originality, significance, and novelty of the work, together with some evidence on which the reviewers base their opinion. Does the work fill an unmet need, answer an important question, or disclose a new capability, or does it describe trivial or routine experiments? How does it add to existing knowledge? Is it a rehash of earlier work or are the authors fragmenting their work into several, short manuscripts instead of producing one comprehensive one?

Reviewers should indicate whether the writing is clear, concise, logical, and understandable. Where the authors' mother tongue is not English, some kind reviewers annotate the entire manuscript, but this is over and above what is expected of a reviewer. Commercials and sales pitches are inappropriate in a scholarly journal. The paper should have a logical flow and be written in a style appropriate to the journal and its typical expert reader. Figures, captions, and keys should be clear. Reviewers should decide whether the article justifies its length and has the correct level of detail, making precise recommendations on what might be done to shorten (or lengthen) it. There may, perhaps, be too many figures or tables, or the introductory section might be too long or too short. It may be possible for the authors to submit some of the material as supplementary information.

As regards the main arguments in the manuscript, reviewers should ask themselves the following types of questions. Are the conclusions adequately supported by the data presented? Are the experimental methodology and data interpretation sound? Are the logic, arguments, inferences, and interpretations sound? Are counterarguments or contrary evidence taken into account? Is the theory sufficiently sound and supported by the evidence? Is the theory testable? Is it preferable to competing theories? Could the reader repeat the work? Is enough information supplied (e.g., experimental details, including methods and materials, and supplementary material such as datasets or software)? Are the conclusions justified, or could other conclusions be drawn from the results presented? Does the author make unfounded claims? Excessive speculation is to be avoided. If the paper uses statistics, the editor will usually choose a reviewer competent to decide whether the statistical design and analysis are appropriate and whether there are sufficient data to support the conclusions [9].

A reviewer should also comment on whether the authors take into account relevant current and past research on the topic and whether the literature references are appropriate and correct, pointing out (in sufficient detail) any missing literature references. There are also cases where an author cites too many references: even review articles need to be selective. Equations, units and chemical structures should be checked. Are hazardous procedures clearly defined as such? Are compounds fully characterized? Finally, the reviewer may have suggestions for other experiments or further work but should justify those opinions and be realistic: the authors may have had good reasons for setting the boundaries of the project.

The review should be organized carefully, perhaps with the points numbered so that the author can respond to each in turn. Trivial comments should be distinguished from more important suggestions for revision, and general criticisms should be qualified with specific suggestions or comments: it is not enough to say "expand this section", "the literature survey is not good enough", or "the theory is inappropriate". Equally, the authors should not just respond by saying "all the criticisms have been adequately addressed"; the editor may not have time to read through the revised manuscript checking what has changed, and publication could be delayed.

Both reviewers and authors should be polite and objective and avoid personal comments. This is particularly important for reviewers, possibly the authors' competitors, who have the advantage of anonymity. There is no place for vested interests, axes to grind, and preconceived notions. In return the author should be prepared to accept criticism and should respond dispassionately to the reviewers. The maxim is "do as you would be done by".

REVIEWER DISAGREEMENT

Frequently, reviewers disagree in their recommendations about publication and it is not unusual for them to disagree very widely. It is important to note, however, that "reliability" and "validity" of the reviews are not the same. If all the reviewers agree, the process is reliable but this says nothing about validity as measured by impact factor or some other way of estimating the "truth" about the paper. The reviewers could all be wrong.

There are many good reasons why reviewers disagree. The editor may have deliberately chosen them in such a way that agreement is unlikely, e.g., each might specialize in a different aspect of the paper. Thus, one reviewer might be a statistician with no chemistry background and another might be a medicinal chemist. Quite apart from this, two experts in the same field might still apply different criteria or standards, or have different biases and academic perspectives.

Many studies have been carried out into reviewer disagreement [1]. Highly significant disagreement is common: a quick study by this editor suggests that it happens for 25-30% of submissions and that, in 4-5% of cases, two reviewers are poles apart in their evaluations. Where there is strong disagreement, the editor will make a judgment based on his or her own assessment, often after asking another reviewer or a fellow editor to adjudicate. Asking the initial round of reviewers to comment on the entire (anonymous) set of comments is another useful approach. What sometimes happens is that the authors agree to rewrite the manuscript and submit it for review by an entirely, or substantially, different set of reviewers.

Where editors have to make difficult decisions, a number of factors may come into play. If the authors are not of English mother tongue, the problem may be solved by having a native English speaker check a rewritten manuscript. If there is no hope of improving the presentation of a paper, it will be rejected. The choice of journal can be another cause of disagreement; ultimately it is up to the editor to decide whether the work is within the required scope.

Editors are often familiar with the biases, specialties, and personality characteristics of individual reviewers and can use this knowledge to guide a decision. Decisions based on the publishing record of the team of authors may be dangerous in some respects but if other evidence also supports rejection, a history of previous rejections may be taken into account. It is also not unknown for an editor to reject a paper even if two or three reviewers all recommend its publication.

Unfortunately a lot of gut feeling goes into editorial decision making. The peer review process is not statistically sound, and editors and reviewers are human beings making subjective decisions, however hard they try to be fair. For many authors it is a heartbreaking experience to have a manuscript rejected by a chosen journal, and editors must empathize while still trying to be dispassionate and objective.

BIAS

Should an editor really choose the "peers" of an author? It would seem fair to get a paper from Oxford reviewed in Cambridge, or a paper from Harvard reviewed at Yale, but what about making sure that a paper from a young, aspiring author is reviewed only by postdocs at a lesser-known school? Editors do not want a system that ensures that mediocre contributions are approved by authors of poor papers. The whole peer review system is very dependent on the care exercised by editors: an unscrupulous editor could strongly influence the acceptance or rejection of a paper by using a suitably biased set of reviewers.

There is the potential for reviewers to be biased on the grounds of sex, race, age, or geographical location, but their personalities and differing ranges of specialized knowledge can be used to advantage, not unethically, by editors. Daniel [8] reports a 1991 study by Siegelman in Radiology in which reviewers were classified as zealots, pushovers, average, demoters, or assassins, depending on their past record of being kind or cruel to authors. The author of this article knows reviewers who are "softies", "snails", and "nit-pickers", not to mention "the invisible man" who never responds. Fair choice of reviewers, and fair decision-making in cases of reviewer disagreement, often depend on such intimate knowledge. It also has to be admitted that editors can have biases. The vast majority of ACS editors are white, male, American academics. This particular editor fails on three out of four of those scores but will admit that the six least criticized papers that she has handled recently were all written by British or American teams. In the face of such facts, editors should be keenly aware of their responsibility to eliminate bias. It is thus pleasing that more than 60% of papers published by ACS are by non-US authors.

ALTERNATIVES TO TRADITIONAL PEER REVIEW

Editors of medical journals seem to be much more skeptical about the peer review process than chemists are [10,11]. Richard Smith, former editor of the British Medical Journal, regrets that the traditional paper is frozen once published and recommends "aftercare" in the form of post-publication commentary (also known as peer commentary). Recent studies by the Association of Learned and Professional Society Publishers suggest that this may become important, though in addition to, not instead of, traditional peer review. Some authorities, however, feel that "open" peer review, whether before or after publication, has numerous disadvantages. Public review is, for example, less likely to be critical, with younger, less well known authors being particularly unwilling to give negative comments about work done by their seniors. The Royal Society of Chemistry found that the members of its Editorial Advisory Board looked on open peer review with hostility.

Open review certainly failed for the (now defunct) Chemistry Preprint Server [11], but in that case there was little incentive for researchers to post comments on the Web, let alone valuable reviews of the sort described in the previous section of this article. Research has also been carried out into double-blind peer review, i.e., hiding the names of the authors as well as those of the reviewers [1], but the method has not been adopted to any significant extent: it can be very difficult to conceal the identity of authors.

No one believes that peer review is perfect. So why do ACS editors still trust their reviewers and believe in the system? Because, without some mechanism for selection, there is the potential for flooding the literature with trivial and repetitious publications (thus making the extraction of reliable and valuable information more difficult), for premature disclosure with inadequate experimental details or supporting data, for premature claims of priority, and for a lack of proper references and credit to prior work.

Scrupulous editors try very hard to be fair and to avoid cliques and nepotism. They do not choose reviewers in such a way as to "guarantee" rejection or acceptance. They bend over backwards to help those whose mother tongue is not English and try their best to avoid delays. They strongly believe that they would be failing in their duty to authors and readers alike if they published anything and everything that was submitted. Experience, and studies of research on peer review, suggest that unbiased, judicious use of peer review by an editor who has years of experience of the system, and of the reviewers, is about the best we can hope to achieve.

REFERENCES

  1. Weller, A. C. Editorial Peer Review: Its Strengths and Weaknesses. Information Today: Medford, NJ, 2000.
  2. Brown, T. Peer Review and the Acceptance of New Scientific Ideas. (A discussion paper from a Sense about Science working party.) 2004. http://www.senseaboutscience.org.
  3. Cronin, B.; McKenzie, G. The Trajectory of Rejection. J. Doc. 1992, 48(3), 310-317.
  4. JCR. The Journal Citation Reports database produced by the Institute for Scientific Information, Philadelphia, USA. http://www.isinet.com/products/evaltools/jcr/.
  5. CAS Science Spotlight Web site (http://www.cas.org/spotlight/index.html).
  6. Bence, V.; Oppenheim, C. The Influence of Peer Review on the Research Assessment Exercise. J. Inf. Sci. 2004, 30(4), 347-368.
  7. Cronin, B. Bibliometrics and Beyond: Some Thoughts on Web-based Citation Analysis. J. Inf. Sci. 2001, 27(1), 1-7.
  8. Daniel, H. D. Guardians of Science: Fairness and Reliability of Peer Review; VCH: Weinheim, 1994.
  9. Hawkins, D. M. The Problem of Overfitting. J. Chem. Inf. Comput. Sci. 2004, 44, 1-12.
  10. Smith, R. Peer Review: Reform or Revolution? British Medical Journal 1997, 315, 759-760. http://bmj.com/cgi/content/full/315/7111/759.
  11. Warr, W. A. Evaluation of an Experimental Chemistry Preprint Server. J. Chem. Inf. Comput. Sci. 2003, 43, 362-373.
