5. Data Gathering and Evaluation
This section includes resources that develop, test, and refine methods of gathering and evaluating data in digital and face-to-face reference, along with tools for maintaining, tracking, and storing reference transactions. It contains resources for both academic and public libraries.
We welcome suggestions for additional websites. For this section, please send suggestions to Interim Executive Director Bill Ladewski, bladewski@ala.org.
Introductory sources on Data Gathering and Evaluation
Murphy, S. A. (2013). The quality infrastructure: Measuring, analyzing, and improving library services. American Library Association.
A library's infrastructure of programs and personnel is its most valuable asset, providing the foundation for everything it does and aspires to do, which is why assessment is so vitally important. In this collection of case studies, Murphy and her team of contributors describe how quality assessment programs have been implemented and how they are used to continuously improve service at a complete cross-section of institutions. Summarizing specific tools for measuring service quality alongside tips for using these tools most effectively, this book helps libraries of all kinds take a programmatic approach to measuring, analyzing, and improving library services.
Thompson, C., & Salvo-Eaton, J. (2016). Using Data to Drive Public Services Decisions. In F. Baudino & C. Johnson (Eds.), Brick & Click Libraries: An Academic Library Conference Proceedings (pp. 89-95). Maryville, Missouri: Northwest Missouri State University.
Many libraries collect massive amounts of data, but much of that data sits in spreadsheets, used only for mandatory reporting or for promoting services. Meanwhile, public service departments make major decisions based on impressions, anecdotes, and past practice. The University of Missouri-Kansas City (UMKC) University Libraries have been working toward increased evidence-based decision making, particularly in public services, where decisions about staffing and services are grounded in data. This article discusses common practices for library decision making, the tools and methods used at UMKC for data collection and analysis, and several examples of how UMKC Libraries used these data to make decisions about proposed changes to staffing and services.
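The kind of staffing analysis described above can begin with something as simple as counting transactions by day and hour. The following Python sketch is purely illustrative; the file name and column names are hypothetical placeholders, not UMKC's actual data or tools.

```python
import pandas as pd

# Hypothetical transaction log: one row per reference transaction,
# with a timestamp column named "datetime" (adjust to your own export).
df = pd.read_csv("reference_transactions.csv", parse_dates=["datetime"])

# Derive day-of-week and hour so transactions can be grouped by desk shift.
df["weekday"] = df["datetime"].dt.day_name()
df["hour"] = df["datetime"].dt.hour

# Count transactions per weekday/hour; the resulting table shows which
# desk hours are busiest and which might be candidates for reduced staffing.
busy = df.pivot_table(index="hour", columns="weekday",
                      values="datetime", aggfunc="count").fillna(0)
print(busy)
```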
CUNY’s Library Assessment: Assessment Tools and Resources Guide
This guide aggregates information about library assessment activities and best practices. The “Assessment Data Collection Methods and Tools” section offers an overview of different methodologies to gather data with links to “best practices” and “how-to” articles.
IFLA’s Statistics and Evaluation Section: Useful Links Related to Statistics and Evaluation
A guide that links to software, performance measures, standards, and statistics related to library assessment.
Gathering of Reference Transactions
Benchmarking Reference Data Collection: The Results of a National Survey on Reference Transaction Instruments with Recommendations for Effective Practice
“This study provides a cross-institutional snapshot of current practices in reference data collection and analyzes recent changes in what, how, and why academic and public libraries document their reference interactions” (Graff, Dempsey, & Dobry, 2018, para. 1).
SPEC Kit 268: Reference Services Statistics & Assessment
https://hdl.handle.net/2027/mdp.39015052546648
This Association of Research Libraries (ARL) SPEC Kit examines and documents how ARL member libraries are collecting and using their data on reference service transactions. The entire work, including supporting documentation, is available through the HathiTrust Digital Library.
Evaluation of Reference Transactions
The READ Scale Research Website
http://readscale.org/
The research website for the READ (Reference Effort Assessment Data) Scale, a six-point tool for recording the effort, skills, knowledge, and teaching involved in reference transactions.
Gerlich, B. K., & Berard, G. L. (2010). Testing the viability of the READ scale (reference effort assessment data)©: Qualitative statistics for academic reference services. College & Research Libraries, 71(2), 116-137. https://crl.acrl.org/index.php/crl/article/view/16067
The READ Scale (Reference Effort Assessment Data) is a six-point scale for recording qualitative statistics that emphasizes the effort, knowledge, skills, and teaching staff draw on during a reference transaction. Institutional research grants enabled the authors to conduct a national study of the READ Scale at 14 diverse academic libraries in spring 2007 to test its viability as a tool for recording reference statistics. Data were collected from 170 individuals and 24 service points, with over 22,000 transactions analyzed. A follow-up online survey of participants had a 52 percent return rate, with more than 80 percent of respondents indicating they would recommend or adopt the Scale for recording reference transactions. The authors suggest that the READ Scale has the potential to transform how reference statistics are gathered, interpreted, and valued, and the paper offers practical approaches for using READ Scale data.
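Once transactions are rated on the six-point scale, the counts are straightforward to summarize. The sketch below is a minimal illustration with invented ratings that do not come from the study; it tallies transactions per READ level and reports the share of higher-effort questions.

```python
from collections import Counter

# Hypothetical READ Scale ratings (1-6) recorded for a week of reference
# transactions; real data would come from a desk-tracking tool or spreadsheet.
ratings = [1, 2, 2, 3, 1, 4, 2, 5, 3, 2, 1, 6, 3, 4, 2, 1]

counts = Counter(ratings)
total = len(ratings)

# Report how transactions distribute across the six effort levels.
for level in range(1, 7):
    n = counts.get(level, 0)
    print(f"READ {level}: {n} transactions ({n / total:.0%})")

# Share of higher-effort questions (levels 4-6), the kind of figure often
# used when discussing staffing or referral models at a service point.
high_effort = sum(counts.get(level, 0) for level in range(4, 7))
print(f"High-effort (READ 4-6): {high_effort / total:.0%}")
```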
(Virtual)
Brown, R. (2017). Lifting the veil: Analyzing collaborative virtual reference transcripts to demonstrate value and make recommendations for practice. Reference & User Services Quarterly, 57(1), 42-47. https://journals.ala.org/index.php/rusq/article/view/6441
Extended transcript analysis was used to examine how chat reference was being used and to make recommendations for practice. Because this analysis was longitudinal (over a year, or at least several months), significant patterns were documented. Several themes emerged that emphasized the unique characteristics of the community college population, and the project documented that chat reference patrons are persistent. The questions that came up about assessing the service underlined the commonalities between virtual and face-to-face reference.
Valentine, G., & Moss, B. D. (2017). Assessing reference service quality: A chat transcript analysis.
Oakleaf, M., & VanScoy, A. (2010). Instructional strategies for digital reference: Methods to facilitate student learning.
http://ala.org/rusa/sites/ala.org.rusa/files/content/awards/OakleafVanScoy%20%28002%29.pdf
Research done by Single Institutions
LibStats
https://espace.library.uq.edu.au/view/UQ:135768
This site includes information about the development and implementation of LibStats, an open-source reference statistics tool created at the University of Queensland. It also includes an instruction manual and a PowerPoint presentation about LibStats.
Developing a Model for Reference Research Statistics
https://journals.ala.org/index.php/rusq/article/viewFile/3371/3617
This article describes the approach that public and academic librarians at the Dr. Martin Luther King Jr. Library in San Jose, California, developed for collecting reference statistics in a hybrid library.
University of Guelph Virtual Reference Project
https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/236/421
An article describing the University of Guelph's assessment of its virtual reference services.
Should Chat Reference be Staffed by Librarians?
https://repositories.lib.utexas.edu/bitstream/handle/2152/19726/IRSQ_final_version.pdf?sequence=3
An assessment of chat reference conducted at Grand Valley State University using LibStats.
Why do you ask? Reference Statistics for Library Planning
https://www.emeraldinsight.com/doi/pdfplus/10.1108/14678040310471220
A paper presented by Tord Høivik (Oslo University College) to the IFLA Statistics Section in 2002, outlining an exhaustive study of reference activity at Norwegian public libraries. Both virtual and in-person transactions are examined.