Benchmarking and Restructuring

Gloriana St. Clair

This chapter discusses how the technique of benchmarking—comparing local practices with best practices—can make a restructuring program easier, more credible, and more effective. During the 1990s, librarians have shown increasing interest in mechanisms of organizational change such as benchmarking because higher education, as a whole, must either restructure to meet new challenges or stagnate. The chapter blends an account of benchmarking initiatives at Penn State Libraries with broader assessments of this approach to restructuring academic libraries. The discussion proceeds in twelve sections: rationale for benchmarking, selection of a benchmarking team, determination of appropriate and meaningful measures, planning the strategic process, justification of the planning process with university administrators, discernment of best comparators, collection of data, development of questions as a brainstorming activity, preparation for the trip, report of results, redesign of the local process, and conclusions.

Rationale for Benchmarking

Benchmarking is a quality assessment tool that operates effectively in a strategic planning environment. Etymologically, the term benchmark comes from a surveyor's mark to establish elevation. In business, and particularly in total quality management (TQM), a benchmark means a standard of excellence against which other similar outcomes are measured or judged.1 For higher education, several ideas are often pulled together in the use of the term. The most inclusive one is assessment of different ways to determine the effectiveness of programs. Two related ideas are process benchmarking to compare processes and comparative analysis to improve results. Comparative analysis focuses on what was accomplished, whereas process benchmarking examines the work flow to help a planning unit improve its effectiveness and/or efficiency.

In the literature on TQM, William Grundstrom describes benchmarking as “the practice of being humble enough to admit that someone else is better at something, and being wise enough to learn how to match and even surpass them at it.”2 That suggestive approach may be particularly useful for those librarians who resist benchmarking efforts for fear that their operations are not optimal. Such resistance stems from two powerful sources—pride and fear of change.

Selection of a Benchmarking Team

The link between benchmarking and strategic planning is crucial. Benchmarking programs must be directly related to the organization’s strategic objectives. Consequently, a benchmarking team should include persons who understand both the specific processes being benchmarked and the broader objectives of the library together with its parent institution. Such persons should also have extensive professional contacts, imagination, and an ability to explain the benchmarking project to administrators, faculty, and staff.

Authority and ability to move a restructuring project through the organization are perhaps the most important keys to success for a benchmarking team. The team may choose to work with a process improvement team after an initial decision has been made about which processes appear in greatest need of benchmarking. That team would help assess how best to incorporate the results of a benchmarked process into the organization. Benchmarking requires that the owners of the process (those who do it daily) be empowered to change it. TQM models recommend a team composed of practitioners, with a high-level sponsor to provide political and economic resources for project implementation.

A general problem or area being benchmarked usually involves more than one work flow process. For instance, if a college were to read in the Higher Education Data Sharing (HEDS) data that one of its comparators was able to add a book to its collections for $10 a volume while its own cost was $26 a volume, the investigation of that differential would lead to different areas of the library—collection development, acquisitions, cataloging, and circulation. The comparison of figures would constitute comparative analysis, and the affected work within each library area would be continuous process benchmarks. A team with administrative leadership and membership from all those areas would coordinate the project.
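The arithmetic behind such a comparative analysis is easy to sketch. The minimal Python illustration below assumes a hypothetical breakdown of each per-volume total across the four areas; only the $10 and $26 totals come from the example above:

```python
# Comparative analysis: where does a $16-per-volume differential come from?
# The area-level breakdowns below are hypothetical; only the $26 and $10
# totals come from the HEDS example in the text.

local_cost_per_volume = {        # higher-cost library (totals $26/volume)
    "collection development": 4.00,
    "acquisitions": 7.00,
    "cataloging": 11.00,
    "circulation": 4.00,
}
comparator_cost_per_volume = {   # comparator library (totals $10/volume)
    "collection development": 1.50,
    "acquisitions": 2.50,
    "cataloging": 4.00,
    "circulation": 2.00,
}

# The per-area differentials suggest which work flows to process-benchmark first.
for area, local in local_cost_per_volume.items():
    gap = local - comparator_cost_per_volume[area]
    print(f"{area:24s} differential: ${gap:5.2f} per volume")
```

The comparison of totals is the comparative analysis; the area-by-area decomposition points to the work flows that process benchmarking would then examine.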

DuPont Corporation, which has a well-developed benchmarking program, suggests an average of five months per project, with key players spending 20 percent of their time on the project. Penn State Libraries' experience shows that, for two or three months a year, at least one member spends 20 to 25 percent of his or her time on benchmarking, and three or so other team members spend about 10 percent of theirs. The trip to investigate another institution actually takes a minor part of the team's time. Overall, about a third of the team's effort goes to planning, another third to integrating results back into local practices, and the final third to completing the process of change, including initial efforts to reshape the organization's culture.

Determination of Appropriate and Meaningful Measures

The search for best practices is time-consuming and therefore expensive. As noted above, areas selected for benchmarking must be important in the organization's strategic plan. Confusion among assessment, comparative analysis, and process benchmarking is likely to arise. Any of these approaches can lead to organizational improvement but, for the purpose of restructuring, process benchmarks are the most useful because they (1) specify a best practice that is clearly superior to local practice and (2) provide a clear direction for introducing it into the local organization.

In the example of HEDS data on the cost of adding a book to a library collection, comparative analysis will point to one or a few cost-reduction areas, and the process benchmark will indicate how the lower-cost institution manages such cost-effective work flows. The higher-cost institution would either restructure certain processes to match those of a lower-cost institution, or justify its higher cost by acceding to various claims by staff that (1) outputs are of higher quality; (2) inputs, such as salaries, have greater value; or (3) the local process is somehow a unique condition. However, an administrator should beware of resorts to “uniqueness” because they tend to be tied to a reluctance to change, rather than to any justified need for higher costs.

In a related fashion, staff members and immediate supervisors rely on three tactics to avoid process benchmarking. First, they try to steer benchmarking away from their own areas (the "uniqueness" argument). Next, after a cursory comparison (sometimes even using published data), they contend that their operation is underfunded and thus bound to fare poorly in a benchmark. Finally, they engage in prolonged debates about what constitutes an appropriate methodology. A library administration intent on restructuring must simply persevere in the face of all such defensive tactics. The opposite situation occurs when supervisors, instead of avoiding scrutiny, want to select an area that they believe will make them look good. Although that could lead to some ego gratification, such an expensive benchmarking effort would not contribute to meaningful change. The goal of benchmarking projects should not be to show superiority of performance but, rather, to show scope for improvement. (A caveat is that a goal of restructuring is often not to improve a process but to shift resources from one area to another; academic libraries, for example, are now looking to save money in collection management, and particularly in technical services, to reinvest in digital library projects.)

Many universities and their libraries have a set list of institutions for comparison. However, for the particular strategic processes selected, such a list might not include the best comparators. When restructuring, the library should find the best practices in each area of change.

Planning the Strategic Process

In many institutions, restructuring will be accomplished through an established cycle of strategic planning. Basic strategic considerations should guide all initiatives, including benchmarking. First, benchmarking selections should reflect key issues from the mission statement, strategic plan, or some other organizational framework. Second, the areas selected must be measurable. Third, an institution undertaking its first benchmarking exercise may prudently avoid areas of known resistance unless it is clear that those areas desperately need restructuring. Prolonged debates over appropriate methodologies indicate poor prospects for fundamental change (e.g., because reference service generally has a long history of unsettled scores in the literature and contentious relations with the faculty, it may prove difficult to benchmark). All in all, first benchmarking projects should be visible and have a high probability for success.

At Penn State, the first benchmarking project was part of a strategic planning requirement for all colleges and campus units. The libraries' benchmarking team initially chose a project focused on faculty productivity, as that is a source of staff pride. However, we realized later that, although our own productivity might compare well with that of other libraries, it would not compare well with that of Penn State colleges. Although this decision might imply the kind of avoidance behavior noted above, it boiled down to a realistic assessment of the difference between service-oriented library faculty productivity and research-oriented academic faculty output, which has highly visible publication outcomes.

Many of the colleges at Penn State planned to benchmark a large variety of different areas—twenty or thirty, with seven or eight different comparator universities. The libraries ended up selecting just three areas for benchmarking: electronic resources, because the strategic plan predicates an electronic future; human resources development, because individual learning is essential to an electronic future; and interlibrary loan (ILL) borrowing, because the access paradigm is a key strategy. (Some asked why we would focus just on borrowing instead of on both ILL functions of borrowing and lending. We responded that lending is not strategic for Penn State users: From a faculty/student point of view, the libraries lend only so that they can borrow—and, of course, to be good citizens.)

That process of selecting benchmark aims took several weeks. The basic principles—be strategic, plan to improve, and envision next year's plan—have been critical to success. During the first year of benchmarking, the lack of a strategic plan hampered the effort: one librarian, for example, reported difficulty in selecting processes to benchmark because the library had no strategic plan. In essence, a planning unit must have an agreed-upon direction, if not a plan, before selections about appropriate benchmarks can be made.

Justification of the Planning Process with University Administrators

One of the keys to benchmark planning is that members of the team communicate not just with others in their field but also with local administrators about why particular areas were selected. Drawing generally on DuPont planning materials as guidelines, Penn State Libraries made the case for the ILL borrowing benchmark project with discussions of (1) the need to increase access to other collections when a decline in the library's purchasing power diminished the strength of local collections and (2) the apparent room for improvement found by comparing Penn State's ILL with other ILL operations. Although justifications vary according to particular benchmarks, the essential point is that some cogent explanation must be offered for the strategic importance of each process selected. In addition, the knowledge or interest of an audience is a salient consideration for any benchmark (or strategic) plan. Planning units often exhibit a propensity for jargon and detail. It is important to clarify and simplify benchmarking plans, because university administrators and faculty, not just experts in the same field, are directly involved.

Discernment of Best Comparators

After selecting the benchmarking team and the processes to be measured, and justifying those decisions in terms of the institution's strategic plan, the next step is to identify the best comparator organization. The initial choice is whether to limit the search to other libraries or to seek a generic comparison in another industry. Until processes have been compared outside one's own industry, the best comparators may not have been located. Companies use lists of best practices and information about winners of Baldrige awards to identify possible comparators. (Baldrige winners pride themselves on their willingness to share information about best practices with others.) Similarly, articles in the library literature will often suggest which libraries are significant in certain areas.

Staff will likely raise arguments about the comparability of benchmarking results that come from a comparison with a different type of institution. As with claims about institutional uniqueness and quibbles over methodological approach, such arguments probably reflect a resistance to change. Of course, the more analogous a comparator organization is to the group undertaking the benchmark, the easier it will be to persuade the group to change. However, if a generic comparator provides a truly superior process, then it is worth the effort to persuade staff to use it. For example, because one of Penn State Libraries' benchmark areas, human resources development, centers on training, and because we knew from attending TQM meetings that some engineering companies had well-developed training programs, we put those companies forward as possible comparators in that area. And in the area of electronic resources, we had thought we could rely on other libraries alone; in a follow-up benchmark the second year, however, we decided to benchmark our support for remote users by visiting IBM.

For any organization, networking is the main way to find out who the best performers are in areas of interest. We began the process of identifying all library benchmark comparators by asking colleagues around the nation who they thought had the best programs in particular areas. To find the best comparators for ILL borrowing, we developed a decision matrix identifying borrowing units with the same number of requests placed on both OCLC and RLIN.
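A minimal sketch of what such a decision matrix might look like in code follows; the institution names and request counts are entirely hypothetical:

```python
# Shortlisting ILL borrowing comparators by similarity of request volume
# on the two bibliographic utilities. All names and counts are hypothetical.

our_requests = {"OCLC": 30_000, "RLIN": 6_000}

candidates = {
    "Library A": {"OCLC": 29_000, "RLIN": 5_500},
    "Library B": {"OCLC": 12_000, "RLIN": 6_200},
    "Library C": {"OCLC": 31_500, "RLIN": 6_400},
}

def distance(volumes):
    """Sum of relative differences in request volume across both utilities."""
    return sum(
        abs(volumes[u] - our_requests[u]) / our_requests[u] for u in our_requests
    )

# Candidates with the closest volume profiles become the trip shortlist.
for name, volumes in sorted(candidates.items(), key=lambda kv: distance(kv[1])):
    print(f"{name}: distance {distance(volumes):.2f}")
```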

Collection of Data

Before a planning unit can develop a list of questions about another institution's processes, the unit must understand its own process and the needs of its customers. Institutions that practice TQM will already have analytical tools at hand, whereas those not engaged in TQM may do training or reading on such techniques as flowcharts, control charts, and customer surveys. The Penn State TQM team for improving ILL borrowing used many of those techniques. It began with a telephone survey of selected customers, followed by a short, paper-record analysis of current users. These data were put into a Pareto chart, which demonstrated that getting materials in a timely fashion is the most important ILL feature for customers. Although over 80 percent of customers indicated satisfaction with a ten-day response time, the team sponsor (the dean of libraries) challenged the team to strive for a goal of five to seven workdays.
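A Pareto analysis of such survey data reduces to ranking categories and accumulating their shares of the total. The sketch below is a minimal Python version; the feature categories and response counts are invented, not the team's actual data:

```python
# Pareto analysis of ILL customer survey responses: rank the features
# customers name most often and accumulate their shares of the total.
# Categories and counts are hypothetical.

responses = {
    "timely delivery": 62,
    "accuracy of item supplied": 15,
    "ease of placing requests": 12,
    "status notification": 8,
    "cost to the user": 3,
}

total = sum(responses.values())
cumulative = 0
for feature, count in sorted(responses.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{feature:26s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```

In a chart like this, the leading one or two categories typically account for most of the responses, which is what directed the team's attention to turnaround time.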

In order to compare its outcomes with those in other libraries, the team developed other flowcharts. From paper files, it found that the average ILL borrowing turnaround time in 1994 was twenty-one working days—far longer than customers desired. With the help of a statistical specialist from the university’s TQM center, the team continued to track its performance. In 1995, it was able to reduce delivery time to fifteen days; current delivery is, on average, ten days.
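A common way to track such a measure over time is the control chart mentioned earlier. The sketch below computes limits for an individuals (X-mR) chart from a series of turnaround observations; the sample values are hypothetical:

```python
# Individuals (X-mR) control chart limits for ILL turnaround time.
# The weekly turnaround samples (working days) are hypothetical.

samples = [11, 9, 12, 10, 8, 13, 10, 9, 11, 10]

mean = sum(samples) / len(samples)
# Average moving range between consecutive observations.
mr_bar = sum(abs(a - b) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)

# Standard X-mR limits: center line plus or minus 2.66 times the average
# moving range (2.66 = 3 / d2, with d2 = 1.128 for a moving range of two).
ucl = mean + 2.66 * mr_bar
lcl = max(0.0, mean - 2.66 * mr_bar)

print(f"center line {mean:.1f} days; control limits [{lcl:.1f}, {ucl:.1f}]")
# Points outside the limits signal special causes worth investigating.
```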

Although the main flowchart still reflects a great deal of complexity, the number of process steps has been cut by a third. The team continues task analysis by sharing charts with best comparators to determine other steps that can be eliminated or modified to make benchmarking outcomes more equivalent.

Development of Questions as a Brainstorming Activity

Questions for the comparator about the process being benchmarked may be developed by either the benchmarking team or a process team. A set of questions may take two to three hours to formulate; it should be tested with colleagues, refined, and then answered by the originating organization, because it is essential to know local practices when exchanging information with the benchmarking partner. Steps for brainstorming a set of questions include the following (a sketch of the resulting question set appears after the list):

  • Write down questions without regard to order.
  • Ensure that all members of the team agree on the meaning of each question and the definition of each term.
  • Group the questions around steps in the process.
  • Prioritize the questions and determine their sequence.
  • Test the questions under the interview or investigative conditions in which they will be used.
  • Modify the questions based on the results of the test.
  • Be sensitive to wording and cultural differences when the respondent and the interviewer have different backgrounds.
  • Answer the questions yourself.
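As a concrete illustration, the sketch below models a brainstormed question set grouped by process step and prioritized for the interview; the questions themselves are hypothetical:

```python
# A brainstormed question set, grouped by process step and prioritized.
# The originating organization records its own answers before the visit.
# All questions shown are hypothetical.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    process_step: str
    priority: int         # 1 = ask first within its step
    our_answer: str = ""  # filled in before the visit

question_set = [
    Question("How are incoming requests triaged?", "receiving", 1),
    Question("What share of requests is unmediated?", "receiving", 2),
    Question("How is turnaround time measured and reported?", "delivery", 1),
]

# Interview order: grouped by process step, then by priority within each step.
for q in sorted(question_set, key=lambda q: (q.process_step, q.priority)):
    print(f"[{q.process_step}] ({q.priority}) {q.text}")
```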

Used as a platform for the dialogue with the other institution, the question set allows the benchmarker to delve into layers of the best-practice operation. Refinement of questions may be augmented by preliminary minitours of, and handouts from, the comparator institution.

In general, for comparative analysis, many questions can be answered from standard sources, such as the statistical publications of the Association of Research Libraries (ARL) or the Association of College and Research Libraries (ACRL). What the process should focus on, however, is not simply what the result was but how it was achieved. Questions should have that methodological point of view. Libraries are generally quite free about sharing information on operations, and we encountered no problems of confidentiality.

Preparation for the Trip

The character of a trip to a generic enterprise is quite different from that of a trip to another library. Companies renowned for quality practices are accustomed both to seeking best practices and to receiving visits. Dates and length of visit are discussed (along with any special requirements for visiting a manufacturing plant), and a list of questions is sent ahead. Scheduling a visit with another library should be as easy, but libraries tend to ask many more questions than manufacturers do about the nature of the benchmarking project and about interview scheduling. On one Penn State Libraries trip, a corporate contact was forthright about the quid pro quo, wanting to know what training programs the university had available and whether they were open to outsiders. The benchmark team will, of course, invite those with whom it benchmarked to make reciprocal visits and may expect other libraries to inquire about its experience.

Report of Results

Because the benchmarking team must communicate with various audiences, several different reports, with varying emphases, need to be written. The overall product for Penn State Libraries' annual report is actually a compilation that pulls together three or four of the most important ideas from a benchmark process and explains them in terms of organizational and parent-institutional advantages, barriers, and costs. The team implementing the benchmark will draw on a much broader range of information to make the changes, but the rest of the organization needs only to focus on the overall improvements.

Unlike a typical faculty committee report, the library’s product should be a table of actions for the organization. Because the team will have worked with a sponsor along the way, it should have already developed coalitions of supporters and strong prospects for implementation. The academic institution makes a strong general commitment to the project, whereas the library administration gives detailed and directive guidance to the unit being restructured. At least some members of the unit should be on the benchmarking trip to enable them to see an alternative process firsthand and thus gain greater commitment to the changes proposed. However, it may not be possible for the whole team to make the trip, given the costs as well as some inconvenience to the comparator organization. An alternative is to invite someone from that organization to visit the local workplace and to discuss the restructuring process. Telephone or conferencing (such as PicTel) offer other alternatives for communicating with a larger work group.

Redesign of Local Processes

A typical difference between academic committees and TQM teams is that the former produce reports that tend to be shelved, whereas the latter make enduring improvements in work processes. Yet the resistance from units facing a benchmark restructuring can be visceral. Employees will fear for their jobs, their schedules, their established procedures, their work space, their benefits, their prestige, and so on. Such fears can build into a range of barriers to redesign. Typical barriers are the "not-broken" syndrome, innumerable requests for detail, pronounced misunderstandings, hurt feelings, slow response times, and exaggerated claims of uniqueness.3 Benchmark communications—assurances, explanations, and directives—must be repeated in a variety of ways and contexts.

Incremental change is not the only means of remedying a shortfall from the best practice; a complete restructuring (or reengineering) of a process may be necessary. Outsourcing is the most radical response to a shortfall between local and best practices. For example, the University of Alberta Libraries recently outsourced cataloging to a Canadian vendor. Their studies showed that 40 percent of such costs would be saved by outsourcing and that half of those costs could have been saved by simplifying their own cataloging processes, for an accretion of "special needs and handling" had driven internal costs up. That phrase and those figures have had a marked influence on a Penn State group currently at work on an outsourcing project.
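The Alberta figures reduce to simple arithmetic, sketched below with a hypothetical internal cost per title; only the 40 percent and one-half proportions come from the text:

```python
# Arithmetic behind the University of Alberta comparison: outsourcing saves
# 40 percent of cataloging cost, and simplifying in-house processes alone
# would have saved half of that amount. The internal cost is hypothetical.

internal_cost = 20.00                              # hypothetical cost per title
outsourced_cost = internal_cost * (1 - 0.40)       # full 40% savings
simplified_cost = internal_cost * (1 - 0.40 / 2)   # half of those savings

print(f"internal:   ${internal_cost:6.2f} per title")
print(f"simplified: ${simplified_cost:6.2f} per title")
print(f"outsourced: ${outsourced_cost:6.2f} per title")
```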

In the area of ILL borrowing, Penn State Libraries are awaiting the results of a project, called the virtual electronic library (VEL), by the Committee on Institutional Cooperation (CIC), a consortium of twelve major midwestern universities. An objective of this project is to enable library patrons searching the catalogs of other CIC institutions to initiate their own ILL requests. The VEL group believes that a five-day turnaround time will be necessary to maintain credibility with CIC teaching and research faculty. Continuous improvement of Penn State's ILL borrowing process might meet the five-day standard someday, but displacing the TQM model in this area would appear to be more cost-effective and expedient. With CIC interlibrary borrowing shifted from the traditional model to the circulation model, all users of the twelve institutions would be like local borrowers for each library. The reengineering approach requires that we throw out the old model and start anew.

Conclusions

Benchmarking is a complex process requiring a genuine search for improvement on the part of the initiating institution. A significant investment of time must be made to identify strategic areas for benchmarking, select individuals to participate in the benchmarking process, determine appropriate and meaningful measures, justify the processes selected, discern the best comparators, collect significant data for comparison, make the trip, and introduce the changes back into the organization. Although some processes will lend themselves to continuous improvement, others may require fundamental restructuring or even outsourcing. Work redesigns, whether incremental or radical, must be fully implemented in the organization before the project can be judged a success. Thus, although the formal benchmarking process can occur rather rapidly, the changes that benchmarking introduces into the culture of the home institution can be profound.

Although benchmarking is complex, it can be summarized in terms of four primary elements:

  1. Be strategic: Align benchmarking objectives with strategic directions.
  2. Be humble: Select a process that needs improvement.
  3. Choose a process: Make sure the area being investigated qualifies as a process.
  4. Plan to succeed: Pick a strategic area in which significant improvement can occur.

Taken together, these elements can make organizational restructuring efficient, effective, and credible.

Acknowledgment: Research and editorial assistance by Karen L. Gerboth, Sondra K. Armstrong, and Susan L. Walker.

NOTES

  1. The standard text on benchmarking is Robert C. Camp's Benchmarking: The Search for Industry Best Practices That Lead to Superior Performance (White Plains, N.Y.: Quality Pr., 1989), which uses a model to describe the benchmarking process. Many companies have published manuals about their procedures for doing benchmarking studies, and a variety of articles are available in the business literature. However, exhaustive reading in the area is not necessary and may lead to study without action.
  2. William Grundstrom, "C+Q+P/M=Benchmarking: TQM and Academic Libraries," in Total Quality Management in Academic Libraries: Initial Implementation Efforts: Proceedings from the 1st International Conference on TQM and Academic Libraries (Washington, D.C.: Office of Management Services, Association of Research Libraries, 1995), 131.
  3. Jerry W. Young, "Building Support for Change, A Workshop for Leaders at Work," in Work-Session Workshop for the Academic Council of Penn State University Libraries, workshop presented by Stable Change Consulting, Nov. 30–Dec. 1, 1995, 84.