Enough Already?: Blazing New Trails for School Library Research
An Interview with Keith Curry Lance, Director, Library Research Service, Colorado State Library & University of Denver
Interview questions and discussion by Daniel Callison, Professor, Indiana University–Indianapolis, and Editor of School Library Media Research. The interview was conducted electronically July through October 2005.
A series of studies that have had a great deal of influence on the research and decision-making discussions concerning school library media programs have grown from the work of a team in Colorado—Keith Curry Lance, Marcia J. Rodney, and Christine Hamilton-Pennell (2000). Lance served on the founding editorial board for School Library Media Research (SLMR). He has presented at numerous conferences for the American Association for School Librarians, has been the keynote speaker at several Treasure Mountain Research Retreats, was one of the principal presenters at the 2002 White House Conference, and most recently, he presented at the first international research conference sponsored by the Center for International Scholarship in School Libraries (CISSL) in New York City (April 2005). The comments delivered at the CISSL conference by Lance form the frame for this "interview" with Daniel Callison, founding editor of SLMR.
This written discourse is based on six questions Lance has raised and addressed as a reflection on the studies that he and others have completed over recent years. Lance considers what has been accomplished and what remains ahead. The author has posed additional questions within Lance's reflections. Questions and comments (SLMR Questions and Comments A through L) from Callison are in italic print and are linked to within Lance's original statements. Lance's responses to the SLMR questions are given in the sections linked to from his text. These questions are challenging and are raised to help Lance and other researchers in the field give and gain clarity for the agenda that school library media specialists and researchers face—seeking evidence of the value of school library media programs and professionals. These questions also raise further challenges: Do school library media programs add enough value, and the right type of value, and do they interact with other school and community factors to produce a better learning and teaching environment? Is impact on student achievement more than trying to influence standardized test scores? Has a school library media specialist instructional role emerged yet that really influences the quality of the learning and teaching environment? What new directions should researchers take to test the influence of school library media programs and professionals on student learning?
A call for studies with convincing evidence derived from investigations that are experimental in nature, based on randomized sampling, and independent of bias factors that may result in selection of only data favorable to current school library media programs was made in Knowledge Quest in 2004 (Callison). SLMR invites manuscripts that reflect this move toward research that meets high standards and builds on the extensive state studies completed over the past decade.
Since the 1960s, demonstrating the measurable impact of school libraries on academic achievement has been a topic of recurring interest to library and information science researchers as well as education and school library policy-makers. Through the 1980s, such studies tended to be experimental or quasi-experimental in design. In the late '80s and early '90s, a statistical modeling approach was developed by researchers in Colorado, and replicated by others, to weigh the impact of library variables on test scores while controlling for more school and community conditions than was previously possible through experimentation. During the past five years alone (2000–05), studies based to greater or lesser degrees on this Colorado research design were conducted by at least five different teams of researchers in more than fourteen states.
In the past few years, a sea change has occurred in the academic and political environments that provide the context for such research. The current and foreseeable climate for school library impact studies is dominated by philosophies, policies, and practices associated with the education reform movement evoked by the slogan, "No Child Left Behind." Chief among these ideas is a narrowly-defined conception of what constitutes "scientifically-based research" being promulgated by the U.S. Department of Education's Institute of Education Sciences. To be considered "scientific" and to be listed, and labeled "meets evidence standards," on the department's What Works Clearinghouse (WWC), a study of any type of educational intervention must be a large-scale controlled randomized trial (CRT) or, when randomization is not possible, employ another experimental or observational design that eliminates reasonable alternative causes of improved test performance. It does not appear that studies involving statistical modeling—even those including numerous control variables—pass muster, even though they arguably do consider competing explanations for why one school outperforms another. This being the case—and owing to the consistency of the findings of numerous "Colorado" replications—it seems a propitious time to reflect on the status of school library impact research and chart a new course for the future.
These reflections on the current status and desired future of school library impact research are framed by six questions:
- What have we done so far?
- What difference has it made?
- Why is it time to move on to something new?
- What questions need answers?
- How can those answers make a difference?
- What increases or decreases our chances of making a difference?
What have we done so far?
The basic questions tackled in school library impact research to date have been whether school libraries or librarians make a difference and, if so, how much and how. At least in recent years, more attention has gone to measuring the impact of school libraries than to explaining how that impact is achieved, but the focus is beginning to move from the former to the latter. Four studies, or sets of studies, illustrate the formative history of this line of research.
The Gaver Study
In 1963, Mary Gaver of Rutgers University reported a quasi-experimental study involving 271 schools in thirteen states. She compared the test scores of students in schools with classroom libraries only, those with centralized libraries run by non-librarians, and those with centralized libraries run by librarians—with predictable results. Students in schools with centralized libraries managed by qualified librarians tended to score higher than their counterparts in schools without centralized libraries or qualified librarians. In discussing her findings, Gaver noted the many obstacles to conducting a compelling experimental study on a sufficiently large scale (statewide vs. one or two schools, schools vs. students) and involving a persuasive number of control variables (i.e., other school as well as community conditions). The sheer volume of the data involved made it difficult to conduct large-scale studies, and the difficulty of gaining access to room-sized university mainframe computers (then usually monopolized by mathematicians, astronomers and physicists) prohibited the consideration of numerous control variables. Yet, she could see what needed to be done; the computing resources (ubiquitous desktop computing and user-friendly statistical analysis software) simply did not exist to achieve it.
SLMR Questions and Comments A (click here for questions and responses on this section)
The SchoolMatch Episode
Two and a half decades later—and after a decade (the 1980s) in which virtually everything published on the topic was a review of the literature from the 1960s and '70s—new life was breathed into school library impact research almost by accident. In the summer of 1987, William L. Bainbridge of SchoolMatch, a commercial vendor of school data—primarily to relocating parents concerned about school quality—was interviewed on National Public Radio's Weekend Edition. Once it was established that SchoolMatch had a treasure trove of data about the nation's schools and students, the interviewer asked what single factor exerts the greatest influence on students' school performance. Without hesitation, Bainbridge identified spending on the school library. This off-the-cuff answer to an unexpected question was not documented by any published study; rather, it was an observation based on in-house analysis at SchoolMatch. A flurry of publicity ensued as the news of this claim was published widely in the professional literature of librarianship, beginning with the American Library Association's own magazine, American Libraries.
The Colorado Studies
Although it was 1993 before the report on the first Colorado study was published, that project was a direct response to the 1987 SchoolMatch episode and a realization of the research design envisioned by Mary Gaver three decades previously. A 1990–92 research and demonstration grant from the Library Programs division of the U.S. Department of Education funded the project. The original study team was led by Keith Curry Lance and also included Lynda Welborn and Christine Hamilton-Pennell. The findings documented, and elaborated upon, the SchoolMatch claim that [the level of] school library expenditures was a key predictor of academic achievement, as measured by standardized tests—specifically, in Colorado, scores on the Iowa Tests of Basic Skills (ITBS). In addition, the first Colorado study identified other key library predictors, including the amount and level of library staffing, collection size, and the amount of time the school librarian spends playing an instructional role. Gaver's vision for a large-scale statistical modeling study was realized by its use of schools rather than students as units of analysis and its successful documentation of the persistence of library predictors when controlling for other influential school and community differences (e.g., teacher-pupil ratio, per pupil spending, poverty, adult educational attainment, racial and ethnic diversity).
Reports of a successor study in Colorado and replications in other states have followed only since 2000. By the late 1990s, virtually every state in the nation had begun to promulgate academic standards and to develop its own standards-based tests. Scores on these tests were taken as the measure not only of students' academic success but also of teacher quality. Consequently, the wish to replicate the Colorado study model in other states had less to do with building a critical mass of school library impact research than with a perceived political necessity. The relevance of school libraries had to be demonstrated anew in an era when learning was equated with academic achievement and academic achievement with high-stakes test scores. Regardless of the motivation, however, by 2005, the Colorado study model had been replicated and elaborated upon to a greater or lesser extent in Colorado and more than a dozen other states by five different researchers or research teams. Collectively, they have studied the impact of school libraries in approximately 8,700 schools with enrollments totaling more than 2.6 million students. These studies elaborated upon the original Colorado study model by identifying specific activities of school library staff that constituted playing an "instructional" role, and considering the potential impact on student performance of library-related technology—specifically networked computers and licensed databases, especially those licensed statewide.
While there were some substantial differences in the detailed results of these studies, their core results were remarkably consistent. Across states and grade levels, test scores correlated positively and statistically significantly with staff and collection size; library staff activities related to learning and teaching, information access and delivery, and program administration; and the availability of networked computers, both in the library and elsewhere in the school, that provide access to library catalogs, licensed databases, and the World Wide Web. The cause-and-effect claim associated with these correlations was strengthened by the reliability of the relationships between key library variables (i.e., staffing levels, collection size, spending) and test scores when other school and community conditions were taken into account.
Indeed, across the "Colorado"-style studies that included this critical analysis, the two most consistent predictors of test scores, when all potential predictors were considered, were the prevalence of students from poor households and the level of development of the school library. Many are surprised that other often-noted factors—such as the teacher-pupil ratio, per pupil school expenditures, and adult educational attainment—did not weigh in more heavily in these analyses. [I] speculate that the probable reason they did not is that such factors are more likely to be influenced strongly by the wealth or poverty of a community than the quality of the school library program. Further, [I would note], the era of standards-based testing has also been the era of site-based management, leaving the fate of most school libraries in the hands of principals and other building-level decision-makers rather than a matter of state or district policy.
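The logic of "controlling for" a community condition in these statistical-modeling studies can be illustrated with a small simulation. The sketch below uses entirely synthetic data (the coefficients and variable names are invented, not drawn from any of the state studies) and the residualization idea that underlies multiple regression: remove the part of test scores and of a library variable explained by poverty, then check whether the library variable still predicts what remains.

```python
# A hedged sketch with synthetic data: does a library variable still predict
# test scores after the influence of poverty is regressed out of both?
import random

random.seed(42)

def ols(x, y):
    """Simple one-predictor least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def residuals(x, y):
    """What is left of y after removing the part predicted by x."""
    slope, intercept = ols(x, y)
    return [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]

# Synthetic "schools": poverty drives both scores and library development,
# but the library also makes its own (smaller) contribution to scores.
poverty = [random.uniform(5, 80) for _ in range(500)]              # % poor students
library = [100 - 0.8 * p + random.gauss(0, 10) for p in poverty]   # library index
scores = [650 - 2.0 * p + 0.5 * lib + random.gauss(0, 15)
          for p, lib in zip(poverty, library)]

# Step 1: remove the part of scores and of the library index explained by poverty.
score_resid = residuals(poverty, scores)
lib_resid = residuals(poverty, library)

# Step 2: the residual-on-residual slope is the library effect *controlling for*
# poverty; it should recover roughly the 0.5 built into the simulation.
effect, _ = ols(lib_resid, score_resid)
print(f"library effect controlling for poverty: {effect:.2f}")
```

If the library variable were merely a proxy for community wealth, this controlled effect would collapse toward zero; its persistence is what the Colorado-style studies report for their library predictors.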
The Krashen Studies
Stephen Krashen is one of the leading reading researchers and one of the most critical analysts of reading research. Both individually and with colleagues, he has studied the impact on academic achievement of school and public libraries as well as the availability of reading materials in the home. His studies show consistently that students who have access to more reading materials from all of these sources—and particularly materials on subjects that interest them—are more likely to read voluntarily, read in greater volume, read more often, and score better on achievement tests. Indeed, Krashen challenges head-on some of the most skeptical critics of studies linking school libraries and test scores.
The most skeptical reviewers of school library impact research suggest that perhaps school libraries are not the cause, but merely an effect, of communities with higher-scoring students. Might it be, they ask, that the most successful students happen also to be from homes and communities wealthy enough to support better-funded schools and libraries, and to be the ones most inclined to use such libraries? Krashen suggests quite the reverse. Reading and library use are not direct consequences of students being from more prosperous homes, but rather of the fact that more prosperous homes tend to offer more books and other reading materials, and, thereby, to encourage reading and library use. Thus, he hypothesizes, libraries—both public and school—have an important role to play in equalizing access to books and other reading materials for disadvantaged students. He also warns that the value in large school library collections lies not in the amassing and owning of them, but in their being current and available to students. Where such access is restricted or books are out-of-date, he found, there is no salutary effect on achievement.
The Ohio Studies
In 2003, Ross Todd and Carol Kuhlthau of the Center for International Scholarship in School Libraries (CISSL) at Rutgers University charted a new course for school library impact research—a qualitative one. Taking the Colorado studies as a point of departure, they sought to learn how students benefit from effective school library programs with credentialed librarians. Employing a judgment sampling process, an expert panel chose thirty-nine schools across Ohio. From these schools, survey responses were obtained from more than 13,000 students and almost 900 teachers. Students were asked approximately fifty questions about how their school libraries had helped them. Teachers were asked parallel questions regarding their perceptions of the library's helpfulness to their students.
Students and teachers ranked libraries as most helpful to students in finding and locating information and using computers in the school library, at school, and at home. Notably, however, teachers ranked computer-related help above finding and locating information. Students and teachers agreed on the next highest-ranked kind of help, which was using information to complete school work. After that, students ranked help with school work in general, while teachers chose general reading interests.
Overall, students and teachers confirmed that the school libraries studied helped students by making them more information- and computer-literate generally, but especially in their school work, and by encouraging them to read for pleasure and information—and, in the latter case, to read critically—beyond what they are required to do for school.
By March 2005, reports on fourteen state studies of school library impact had been released. The individual states pursued a variety of strategies in disseminating and utilizing their findings; but, in the aftermath of these reports, there has been one question of common interest: What difference have these studies made?
To answer that question, the Library Research Service—where the Colorado studies were conducted—initiated a Web-based survey. As this questionnaire will be available indefinitely, a probability sample was not attempted. The availability of the questionnaire was made known through multiple postings to LM_NET, the library media mailing list, as well as messages to contact people in every state for which a study has been completed. As of May 27, 2005, 501 individuals from thirty-six states had responded. Following is a summary of their reports of the outcomes realized from the school library impact studies.
Responses to this survey indicate that using this research to advocate for school library programs has affected the relationships of school librarians with both principals and teachers. Four out of five respondents (81 percent) reported that they shared the research with their principals. (Between one-third and half also reported sharing this research with their superintendents, other administrators, technology staff, and/or parents.) Almost two out of three respondents (66 percent) reported sharing the research with teachers. As a result, approximately two-thirds of respondents report that sharing the research improved their relationships with their principals (69 percent) or teachers (66 percent).
Respondents reported that becoming familiar with the findings of this research affected their own professional practice. Three out of five respondents reported that, compared to before reading this research, they now spend more time planning collaboratively with classroom teachers (63 percent), teaching information literacy skills to students (62 percent), and identifying materials for teachers (60 percent). Almost half of respondents (48 percent) also reported spending more time teaching collaboratively with classroom teachers.
School library programs have also been affected in substantial ways by the sharing of this research. Almost half of respondents report that, as a result of sharing the findings, their students now have access to more electronic information (48 percent) and larger collections (45 percent). Two out of five respondents report that classes and other groups now visit their school libraries more frequently (40 percent) and on more flexible schedules (39 percent). More than a third of respondents (37 percent) report increased library visits by individuals.
If replicating the Colorado studies has been so popular and replicating the Ohio study seems to be the obvious next step, why is it time to change the course of school library impact research? There are several reasons for change, some having to do with the research itself and others concerning the political context of such studies.
The Colorado model has been exhausted in at least two ways. As noted earlier, the consistency of the findings across fourteen states is remarkable. Usually, research is recommended for replication as long as each successive study continues to yield new insights. While the most recent Colorado replications have clarified some earlier findings, they have not yielded enough wholly new insights to encourage further such studies. This model has also reached its limits in terms of the data involved. These studies have relied heavily on available data: data on all of the other school and community conditions considered in these analyses have been data that state departments of education are required to collect from every school and to report to the National Center for Education Statistics (NCES).
These common origins have made it possible to apply a similar research design in various states, but they also limit the number and variety of other school and community conditions that can be considered. A particular, and increasingly constraining, fact is the relationship between one of these variables and all the others. In every Colorado-style study, the strongest available predictor of test scores has been socio-economic conditions, as indicated by the percentage of students eligible for the National School Lunch Program. This single variable has explained half to two-thirds of the variation in test scores in states where studies have been conducted. Further, the strength of this lone variable is the likeliest explanation for the failure of other school and community variables (e.g., teacher-pupil ratio, per-pupil school spending, adult educational attainment) to demonstrate the impact that conventional wisdom and other research attribute to them.
In other words, because the economic variable is so strong, and because it confounds the effects of so many other variables of interest, it is time to explore new methodological options. In some states, at some grade levels, these confounding effects have actually precluded performing the type of analysis (i.e., regression) that separates and measures the impact of multiple variables simultaneously.
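The swamping effect of a single dominant variable can be made concrete with another toy simulation (again, the numbers and variable names are invented, not taken from any state study): when poverty drives both test scores and another predictor such as per-pupil spending, poverty alone produces a large R-squared, and the other predictor is so highly correlated with it that regression cannot cleanly separate the two.

```python
# Sketch of a confounded analysis: poverty explains most score variation
# directly, and also drives per-pupil spending, so the two predictors are
# nearly collinear and spending has little independent variation to analyze.
import math
import random

random.seed(7)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Synthetic schools: poverty (% NSLP-eligible) drives both outcomes.
poverty = [random.uniform(0, 70) for _ in range(400)]
spending = [9000 - 40 * p + random.gauss(0, 300) for p in poverty]  # per-pupil $
scores = [700 - 2.5 * p + random.gauss(0, 40) for p in poverty]

# Poverty alone explains a large share of score variance (R-squared)...
r2_poverty = pearson(poverty, scores) ** 2
# ...and spending is almost perfectly (negatively) correlated with poverty,
# leaving regression little leverage to measure spending's own effect.
confound = pearson(poverty, spending)
print(f"R^2 from poverty alone: {r2_poverty:.2f}")
print(f"poverty-spending correlation: {confound:.2f}")
```

In simulations like this, the "half to two-thirds of the variation" pattern reported in the state studies appears immediately, and the near-perfect correlation between poverty and its proxies shows why other school and community variables fail to register.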
Clearly, the political context of contemporary school library research—like all education research—demands stronger causal evidence. Nothing demonstrates this reality as strongly as the demand for scientifically-based research (SBR) by the U.S. Department of Education. Ideally, controlled randomized trials (CRTs) are the approach-of-choice. In its own grant-making and in its evaluation of extant research (e.g., WWC), this methodological bias is quite clear. That bias has become less absolute since the National Research Council (NRC) responded to it with a report titled "Scientifically Based Research in Education." This report challenged the notion that CRTs alone constitute scientifically-based research, identifying the critical characteristic of a CRT as the requirement that it rule out competing causes through randomization or matching. Quite rightly, the NRC report asserts that the CRT model is not the only methodology that takes into account competing causes. Indeed, it identifies "statistical modeling" studies (like the Colorado studies) as one of the more obvious alternatives to a CRT when the latter type of study cannot be done practically or ethically. Nonetheless, the bias toward CRTs that is becoming institutionalized through the current administration's education policies and its reorganization of the federal education bureaucracy is a force to be reckoned with.
In all likelihood, CRTs involving school libraries and the students who use them will not meet the randomization criterion, unless the universe under study is narrowly defined. For instance, a study of the efficacy of credentialed school librarians who collaborate with classroom teachers will never be conducted by the random assignment of schools generally to librarian and no-librarian groups, or even collaborator and non-collaborator groups. A school that has a collaborating librarian is not going to ask her to stop collaborating with teachers so the ill-effects can be documented. Instead, the researcher for such a study would have to begin by defining the universe as schools that do not have librarians, then decide randomly where librarians would be introduced. But, while this scenario overcomes the most obvious ethical dilemma, it still involves a host of practical ones. For instance, how would the researcher find school librarians willing to undertake "experimental" assignments at locations revealed on short notice?
A variety of new research questions need to be answered. For the federal education establishment, the pre-eminent question is "what difference is made by specific library 'interventions?'" The Department of Education's Institute of Education Sciences (IES) defines "interventions" as "programs, products, practices, and policies." In other words, IES wants to identify the proverbial magic bullet—a particular, contained strategy that will result in measurable test score improvements. Some might suggest that this magic-bullet approach ignores the fact that a school, like any human enterprise, is a social organism; the organic interrelationship of the myriad factors affecting achievement makes it an unrealistic oversimplification to expect that changing one element alone can, or should, make such a dramatic difference.
Thus, it may take some effort to shoe-horn the research questions of true interest to the school library field into this perspective; but, it is possible to do so. For example, the DeWitt Wallace-Reader's Digest Fund's Library Power project might have lent itself to a CRT-type study if schools receiving the peer-to-peer training involving principals, teachers, and librarians could have been offered that training randomly or, in the worst case, if schools that volunteered for Library Power could have been matched on key control variables, such as the socio-economic status of the community. The Library Power project was also one that recognized explicitly the organic connections between various players—principals, teachers, and librarians—and their respective contributions to the teaching and learning environment.
State and district school library officials and other school library advocates have a far more urgent question: "How are negative decisions affecting school library programs (e.g., layoffs of librarians, staffing of libraries with aides) hurting students?" Studies to answer this question will never achieve full CRT status, due to the inherent absence of randomization; but, matching should be possible. The challenge implicit in trying to gather data from schools where the librarian's position has been eliminated or downgraded is that it may be very difficult to achieve an acceptable response rate, due to the problem of finding someone to respond to a questionnaire about these issues.
Practicing, building-level school librarians have the most urgent research question of all: "How can educators be motivated to help develop and support libraries that help their students—and them—succeed?" The future of school libraries lies in a battle for the "hearts and minds" of school administrators, classroom teachers, and technology staff. Without the energetic support of their fellow educators, school librarians cannot succeed, regardless of their own individual attributes and performance or the funding placed at their disposal. Schools that are losing or downgrading librarian positions are almost certainly those where the librarian's contribution to student learning is either not understood, not recognized, or—dare we say it?—absent. Barring the failure of the school librarian to do her job well, the number-one at-risk factor for a school library program is a lack of adequate support from other educators.
Recent school library impact studies have also identified, and generated some evidence about, potential "interventions" that could be studied. The questions might at first appear rather familiar: How much, and how, are achievement and learning improved when . . .
- librarians collaborate more fully with other educators?
- libraries are more flexibly scheduled?
- administrators choose to support stronger library programs (in a specific way)?
- library spending (for something specific) increases?
The initial reaction of some to these questions might be that they were addressed by the Colorado studies and others; but that is not true, strictly speaking. A key concept in the world of No Child Left Behind is "improvement"—i.e., a documented increase in test scores for a particular school after a successful intervention. Notably, what the Colorado studies and others have done is compare schools with more and less of these attributes—for example, schools with median-and-above weekly hours of librarian staffing and schools with below-median weekly hours of librarian staffing. Consider another approach: define a universe of schools that have never had a school librarian. Randomly select two groups from this universe—one group of schools in which to introduce full-time librarians, and another group to continue without librarians. A year later, compare the test scores of these two groups. In a research milieu where CRTs are the gold standard, the latter sort of study makes a stronger cause-and-effect case than the former; the former is dismissed as merely "correlational."
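The randomized design just described can be expressed in a few lines. The sketch below is a toy simulation under loudly stated assumptions: the school names, score distributions, and the roughly ten-point librarian effect are all invented for illustration, not drawn from any actual study.

```python
# Minimal sketch of the CRT design described above: from a universe of
# schools that have never had a librarian, randomly assign half to receive
# one, then compare mean test scores a year later.
import random
import statistics

random.seed(1)

# Universe: schools with no librarian (hypothetical names).
universe = [f"school_{i:03d}" for i in range(60)]
random.shuffle(universe)                 # randomization is the key step
treatment = universe[:30]                # full-time librarian introduced
control = universe[30:]                  # continue without a librarian

# Hypothetical year-later scores; assume the intervention adds ~10 points.
scores = {s: random.gauss(520, 25) for s in control}
scores.update({s: random.gauss(530, 25) for s in treatment})

diff = (statistics.mean(scores[s] for s in treatment)
        - statistics.mean(scores[s] for s in control))
print(f"treatment - control mean score difference: {diff:.1f}")
```

Because assignment is random, any systematic score difference can be attributed to the intervention rather than to pre-existing school or community differences, which is precisely the causal leverage the "correlational" comparisons of above- and below-median schools lack.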
How can answers to the questions suggested above make a difference? Several things might be done differently, or better, to maximize the reach and outcomes of future school library impact research.
In disseminating the results of future studies, high priority should be given to reaching teachers, administrators, and public officials as well as school librarians and school library advocates. Reaching these audiences will depend on the success of efforts to publish books and articles in their professional press and scholarly journals. Similarly, when future findings are ready to be shared, they should be shared at conferences attended by teachers, administrators, and public officials. While pursuing such a strategy for reaching key decision-makers and supporters will require tremendous effort, it could yield very dramatic results at local, state, and national levels.
As long as the current regime is in place at the U.S. Department of Education, it will be important to share future studies in such a way as to infiltrate the No Child Left Behind movement. One very substantial way to do that would be by submitting future studies for review by WWC, a Web site run by the department's Institute of Education Sciences. Everything submitted to this clearinghouse is eventually rated as meeting its evidence standards—with or without reservations—or, somewhat ominously, as not meeting them. Thus, extreme caution is advised in pursuing this strategy. The potential payoffs are great, but so are the risks. Before deciding to submit a future study report to WWC, it is recommended that a thorough methodological review be solicited from a competent authority.
Perhaps the most strategic option, albeit a long-term one, is to infiltrate schools and colleges of education. Most school administrators and teachers never had to take a course, or even part of a course, that introduced them to what constitutes a high-quality school library program. Add to this the age demographics of many of these individuals, and it is apparent that some of them have no useful frame of reference for school libraries. Part of working with administrator and teacher preparation programs to advocate for stronger school libraries should be an effort to persuade them to introduce some required content about school libraries, including an introduction to the extant research about school library impact.
Many factors are at work in determining what increases, and decreases, the likelihood that research-based advocacy for school library programs can make a difference. Three factors are working against successful advocacy for school libraries: (1) the age demographics of librarians, (2) the lack of institutionalization of librarianship in K–12 schools, and (3) the lack of support from educators, who have had little education or training about libraries and few good experiences with libraries and librarians. Unfortunately, because librarians, like other educators, tend to be older than other workers, the number of librarian positions becoming vacant is unusually high. A Colorado study indicates that half of that state's school librarians expect to retire within the next five years. These vacant positions are highly vulnerable to being downgraded or eliminated in these times of tight budgets, not merely because there is less money to go around, but because superintendents, principals, teachers, and other education decision-makers do not understand the role a school librarian can and should play. This lack of understanding is explained by two factors: the age of these decision-makers themselves, most of whom were consequently educated before school librarianship was a fully developed education profession, and the failure of schools and colleges of education to teach decision-makers about libraries and librarians.
Two factors are working for successful advocacy for school libraries: (1) regional accreditation requirements and (2) the age demographics of educators. Across the nation, high schools are more likely than elementary or middle schools to have professionally staffed libraries. This is a direct consequence of the fact that most accrediting agencies require professional librarians of schools wishing to become accredited. Because elementary and middle schools are not accredited and tend to serve smaller enrollments, they do not have the same protection. While librarians are aging, so are their educator colleagues. As superintendents, principals, and teachers become younger, the likelihood that they experienced well-developed school libraries and professional librarians will increase dramatically.
It is time to reassess the focus and strategies of our research and our professional "politics." In recent years, school library scholarship has focused on specific quantitative and qualitative approaches to documenting the impact of school libraries and librarians on academic achievement. None of these approaches is currently "in favor" with the education research establishment, and, specifically, the U.S. Department of Education. Getting their attention, and putting school library impact research on the record in a prominent way, is going to require accommodating the official bias toward randomized controlled trials to whatever extent that proves possible. Beyond that, we need to do everything we can to strengthen the quantitative evidence for a cause-and-effect claim regarding school libraries and students' achievement. We need to make the strongest claims we can based on the impact of school libraries on students' standards-based test scores, because, however limited they may be, results on such tests are the measure of learning enshrined in No Child Left Behind. That does not mean we should stop providing more qualitative evidence regarding the contributions of school libraries to the development of information literacy skills or the information-seeking process; but it is necessary to establish school libraries as a recognized contributor first, before we can expect much serious attention to other related research.
This is all about talking less just to ourselves and more to other educators and policy-makers on their terms. We have made as much progress as we can expect to make while "preaching to the choir." Our future success depends on the extent to which we frame our research to include administrators and teachers more centrally, develop methodologies designed to address current interdisciplinary and political biases, and offer reports of our findings to journals and conferences that reach other educators. If we want the school library to be regarded as a central player in fostering academic success, we must do whatever we can to ensure that school library research is not marginalized by other interests.
Many of the studies related to this interview can be located through the following Web sites:
LRS: Library Research Service. Accessed Aug. 12, 2005, www.lrs.org/impact.asp.
School libraries work! A Scholastic Research and Results foundation paper. Accessed Aug. 12, 2005, http://www2.scholastic.com/content/collateral_resources/pdf/s/slw3_2008.pdf.
SchoolMatch. Accessed Aug. 12, 2005, www.schoolmatch.com.
Callison, Daniel. 2004. Establishing research rigor in SLMR. Knowledge Quest 32, no. 5: 18–20.
Fraenkel, J. R., and N. E. Wallen. 1996. How to design and evaluate research in education. New York: McGraw-Hill.
Gaver, Mary. 1960. Effectiveness of centralized library service in elementary schools—phase I. New Brunswick, N.J.: Graduate Library School at Rutgers, The State University of New Jersey.
Gaver, Mary. 1963. Effectiveness of centralized library service in elementary schools. New Brunswick, N.J.: Graduate Library School at Rutgers, The State University of New Jersey.
KRC Research. 2003. A report of findings from six focus groups with K–12 parents, teachers, principals, as well as middle and high school students. Chicago: ALA and AASL. Accessed Aug. 12, 2005, www.ala.org/ala/aasl/aboutaasl/aaslgovernance/aaslstrategicplanning/strategicplanning.htm.
Krashen, Stephen D. 2004. The power of reading: Insights from the research. Englewood, Colo.: Libraries Unlimited.
Lance, Keith Curry, Lynda Welborn, and Christine Hamilton-Pennell. 1997. The impact of school library media centers on academic achievement. San Jose: Hi Willow Research.
Lance, Keith Curry, Marcia J. Rodney, and Christine Hamilton-Pennell. 2000. How school librarians help kids achieve standards: The second Colorado study. San Jose: Hi Willow Research.
Lynch, Mary Jo, and Ann Carlson Weeks. 1988. Director interviewed: SchoolMatch revisited. American Libraries 19 (June): 459–60.
McQuillan, Jeff. 1998. The literacy crisis. Portsmouth, N.H.: Heinemann.
Oberg, Dianne. 2002. Looking for the evidence: Do school libraries improve student achievement? School Libraries in Canada 22, no. 2: 10–13, 44.
Stephen, Peter, and Susan Hornby. 1997. Simple statistics for library and information professionals. London: Library Association Publishing.
Todd, Ross. 2003. Student learning through Ohio school libraries. Accessed Apr. 15, 2005.
Whelan, Debra Lau. 2004. 13,000 kids can't be wrong. School Library Journal 50, no. 2 (Feb.): 46–50.