SCHOLARLY COMMUNICATION

On scholarly evaluation and scholarly communication:
Increasing the volume of quality work

C&RL News, September 2001
Vol. 62 No. 8

by David E. Shulenburger

Is the scholarly communication crisis largely a creature of the faculty evaluation system? Do academic department heads, deans, and members of promotion and tenure committees simply count the faculty members’ publications and award salary increases, promotion, and tenure by the numbers? If we reformed the faculty evaluation system, would the scholarly communication crisis disappear?

One commonly encounters anecdotes that appear to support affirmative answers to these questions. Faculty sometimes boast of publishing the “least publishable unit,” a reference to dividing significant work into several smaller pieces to derive the maximum number of articles from it. Others describe mechanical systems they have established that, upon rejection of a manuscript by one journal, will automatically submit that manuscript to the journal next in the status pecking order, continuing through as many journals as needed until one finally agrees to publish the manuscript.

At least two significant efforts aimed at gaining control of the scholarly communication crisis have identified the faculty evaluation system as part of the problem. In 1997, the Pew Higher Education Roundtable published a treatise entitled “To Publish and Perish,” which urged universities to “place greater emphasis on quality rather than quantity in the promotion and tenure process.”

In March 2000, a gathering of academics, administrators, and librarians drew up the “Tempe Principles for Emerging Systems of Scholarly Publishing,” which have since been endorsed by both the AAU and NASULGC membership. One of the principles states: “To assure quality and reduce proliferation of publications, the evaluation of faculty should place a greater emphasis on quality of publications and a reduced emphasis on quantity.”

Thus both anecdote and study point to the faculty evaluation system’s role in generating published scholarship that adds little to the fund of knowledge. How important is this problem?

I have served on and chaired faculty evaluation committees at the school and university level for more than 20 years. During those years, I have reviewed many résumés that list publications that are at best marginal when evaluated against the criterion of generation of new knowledge. Why did the faculty member write them? Why were they published?

Mark Twain said that one mustn’t criticize other people on grounds where he cannot stand perpendicular himself. It is very difficult for an author to determine the ultimate worth of his or her research. No one sets out to do inconsequential work, and having invested weeks, months, or years in a project, it is expecting too much of human beings to ask them to judge their own work to be inconsequential. Thus the norm is to write up the work and submit it for peer review so that others make the judgment.

But peer reviewers have similar difficulties. Referees are themselves researchers. As researchers they are entangled in the web of knowledge and become easily fascinated by a new detail or by the resubstantiation of an old one. They look to see whether the data used should be relied upon, whether the work followed the methods required to produce valid science, whether it appropriately built upon the literature, etc., and then make a judgment from the middle of the same thicket as to whether it should be published.

Editor's note

A common theme in the debate about scholarly communication has been the need for faculty to publish in a large number of publications to receive tenure. “Publish or perish” is an accepted concept in higher education everywhere.

Are tenure committees truly blind to the issues of quality? How do they determine quality? Do we have too many low-quality publications, or do they serve a purpose in the scholarly communication process? The answers to these questions are complex and have been debated by librarians, publishers, faculty, and administrators for many years.

For this month’s column, we have invited David Shulenburger, provost of the University of Kansas, to share his views on this subject. Shulenburger is well known for his efforts to help us understand the economics of scholarly communication and to reform the scholarly communication process. We hope that this column will help to spur discussion of these issues on your campus.—Ann C. Schaffner, annsch@rcn.com

Refereeing weeds out the bad
I have great respect for the refereeing process. While I am aware of the growing criticism of this process, I have faith that it almost always weeds out bad science. However, I do not believe that the process admits only research that makes a significant addition to knowledge. Peer reviewers are simply too close to the process to be expected to know what will be judged by future generations to represent significant additions to the discipline. Thus the refereeing process tends to weed out the bad but does not eliminate the insignificant.
But back to those résumés. Based upon my many discussions with provosts across the nation about the evaluation process, I believe that evaluation committees at the University of Kansas are similar to those at most research-intensive institutions. In our process, volume of publication alone carries no weight. Evaluation committees examine the perceived significance of the faculty member’s work and if, and only if, it is perceived to be of significance do they begin to measure the quantity of the work. Quantity takes on importance once quality is established. Producing only a very small amount of quality work is simply not sufficient justification for the standard expectation that 40 percent of a faculty member’s time should be devoted to research.

The committee’s judgment of the ultimate significance of a faculty member’s work is suspect for the same reason that peer reviewers’ evaluations are suspect: committee members simply don’t have the right perspective to make an infallible judgment. The evidence used by evaluation committees comes from their own reading of the work, their judgment of the rigor of review given the work by the journal of publication, and, especially in promotion and tenure cases, the opinion of outside reviewers who evaluate the entire body of the faculty member’s work.

The latter group is particularly important as outside reviewers are chosen because they are experts in the faculty member’s field. Given the narrowness of some fields, only by including external reviewers can real expertise be brought to the evaluation process. By reviewing the entire body of work from the viewpoint of the discipline, outside reviewers are in a position to judge the cumulative impact of the faculty member’s work.
This evaluation process places essentially zero weight on publication in so-called “backwater” journals.

Evaluation committees generally take for granted that work appearing in such outlets got there either because the author judged it to be of little worth and sent it directly to the journal or because it failed to gain acceptance in one of the top journals in the field and by default landed in a lesser one.

Sometimes such automatic dismissal is a mistake. Sometimes manuscripts that present extraordinarily significant new knowledge are rejected by top journals because their ideas challenge orthodox views. Thus a revolutionary idea like plate tectonics reaches the field through lesser journals and ultimately—through the weight of published findings in low-level, peer-reviewed journals—finds its way over time into the top journals in the field. If faculty evaluation committees or peer reviewers were true judges of ultimate significance, such articles would command great respect at first reading rather than suffer automatic dismissal because of the low esteem for the publications in which they originally appeared.

The real damage done by the faculty evaluation process, then, is not rewarding faculty for quantity of publication; it is rewarding faculty for quality of publication and basing quality judgments on the rigor of the peer review process in journals where their work appears, a process perceived to be strongest in the top-ranked journals. Evidence of this is the lack of uproar when a library cancels a subscription to a journal perceived to be of low quality. The lack of turmoil over such decisions confirms that the problem is the reinforcement of demand for top-quality journals, not the proliferation of journals of low quality.

What can be done?
What we must do is restore the public goods nature of journals by reducing the ability of journals to use the market power they possess to raise prices. There are many efforts now underway to accomplish this aim, and SPARC (Scholarly Publishing and Academic Resources Coalition) represents one such strategy. By sponsoring modestly priced new journals edited and refereed by top scholars, SPARC endeavors to accelerate the supply of prestigious journals and thereby reduce the possibility of further price increases by existing top-tier journals. By creating products like BioOne, SPARC keeps in the public domain a large group of journals in the biological sciences for which prices will not be raised.

Three years ago I proposed the creation of NEAR, the National Electronic Article Repository. By making scholarly journal articles available for free three months after publication, I surmised that demand for the journals would become more price elastic. That is, the ability to raise prices would be severely limited by the fact that many purchasers would choose to wait a short time until articles were freely available rather than pay the higher subscription prices.

While manuscript authors need no direct return in order to generate articles, publishers do. By having journals retain the exclusive right to an article for three months, the journals would maintain the ability to charge a smaller subscription price, but a subscription price that would cover necessary costs. Thus the proposal aimed to keep alive the current refereed journal system. However, my proposal suffered from the lack of a mechanism to make it happen. Two subsequent developments have created such mechanisms.

First, the National Institutes of Health, under the leadership of Harold Varmus, created PubMed Central, a virtual location in which bio-medical journals could be securely archived.

Second, a group of scholars initiated the PublicLibraryofScience.org petition, which constitutes a pledge that its signers will avoid journals that do not agree to make their contents publicly available six months after publication. By signing the petition, scientists agree not to subscribe to, submit papers to, or edit or referee for journals unless those journals make articles available to the public after a lapse of six months.

Public Library of Science is the consciousness-raising mechanism to encourage journals to move from a profit motive to a public goods orientation. Thus far, about 25,000 scientists have signed the pledge. I am optimistic that many more scientists will join them and this effort will be effective.

These initiatives may soon have an impact on the ability of journals to raise prices. In fact, I am optimistic that these initiatives will lower prices and reverse decades of untrammeled inflation. Exploiting the economics of electronic publication, while returning journals to their deserved public goods status, will permit an increased volume of quality work to be published and acquired within existing library budgets.
Universities should not encourage quantity of publication over quality in faculty evaluations. But the imperative is that quality scholarly work have the opportunity to be published in rigorously refereed journals and that it be readily and affordably available to all scholars.

About the Author
David E. Shulenburger is provost at the University of Kansas in Lawrence, e-mail: dshulenburger@ku.edu

About the Editor
Ann C. Schaffner is an MBA Candidate at Simmons Graduate School of Management, e-mail: annsch@rcn.com