Locating Categories and Sources of Information: How Skilled Are New Zealand Children?
Gavin Brown, University of Auckland, Auckland, New Zealand
This research was carried out with Purchase Agreement funding from the New Zealand Ministry of Education while the author was at the New Zealand Council for Educational Research. An earlier version of this article was presented at the joint NZARE & AARE conference, Melbourne December 1999. The author acknowledges the support and advice offered by the NZCER Essential Skills Assessments Research Team, led by Cedric Croft. Further information about the assessments described herein can be found at the New Zealand Council for Educational Research Web site (www.nzcer.org.nz).
The ability of New Zealand students to locate information using library structures and systems was measured through the standardization of six new Information Skills tests on students (N=5,400) in years 5 through 8. The paper and pencil tests are based on an information problem-solving perspective of the New Zealand Curriculum Framework Essential Skills. The tests focus on a formative exploration of students' understanding of the information skills involved in using library-related resources, specifically libraries, parts of a book, and reference sources. Girls, students in higher socio-economic schools, and students in higher year levels outperform their counterparts, though the literacy level of students is not controlled. The strengths and weaknesses of students are identified for each test. There is strong evidence from all six tests that students experience difficulty with sorting through the various dimensions of a search task in order to select an appropriate category (Dreher and Guthrie 1990), specifically the volume, page, or library section that the required information will be in.
Locating information is considered a relatively easy stage in information problem-solving (Crooks and Flockton 1998), but it is not necessarily straightforward. Indeed, students often encounter their first real problems in answering their research questions when they try to find the information they need in a library.
To locate information, students must accurately identify the category of information being sought (e.g., the physiology of a bird's wing or its function). This topic-clarification process is one that should begin before the search starts; however, as Moore (1995a) has pointed out, the act of locating information may cause students to rethink their topic. Moore (1995b) has also shown that students must be able to identify the terms or labels used by adults to categorize the topic that they themselves express in "naive" terms (e.g., the category "flying" may be expressed as "flight" or even "aviation"). The difficulty students have in selecting appropriate categories was identified by Dreher and Guthrie in their research into grade school students' searching of text books (1990).
In addition, searchers have to know how to use the various structures, systems, and tools by which information is categorized and accessed. For example, Moore (1995b) showed that students had difficulty with the fact that libraries shelved items along a shelf and then down to the next shelf in each set of shelves or bay before putting items on the top shelf of the next set of shelves. She found that students would become confused as to the location of a book because the numbering or lettering of a shelf suddenly jumped to a noncontiguous letter or number as the students carried on searching along the shelf. Two difficult structures for her students were the bay-shelf arrangement, along with the Dewey decimal system.
New Zealand's National Education Monitoring Project (NEMP) assesses the abilities of children in primary school (years 4 and 8; nominally ages 9 and 13) across a wide variety of curriculum areas and essential skills. NEMP uses a variety of methods to investigate information skills in 31 tasks: one-on-one interview settings, small group team situations, and independent task work recording responses on paper (Crooks and Flockton 1998). NEMP found that students were particularly strong at locating information from catalogs and reference books, making significant progress between year 4 and year 8. However, concerns were raised about students' relative weaknesses at "defining information needs, asking contextualized questions, and using the information analytically to answer questions" (NEMP 1998).
Thus research by Moore and NEMP has identified two major questions concerning students' abilities to locate information. First, can students find the category of information they are seeking when authors, publishers, and librarians have used a different language and cognitive system for categorizing information? Second, can students use the library's search tools to locate appropriate information sources? Light has been shed on these two questions in the New Zealand context through the standardization of a series of new information skills assessment tests. This article reports the theoretical and methodological backgrounds to the test standardization. Pertinent results that identify what New Zealand students can do are highlighted and implications for teacher practice are touched upon.
A wide range of definitions and descriptions of information literacy exist in the literature (Brown 1997, 1999). However, Doyle's definition is still a powerful beginning point: information literacy is "[t]he ability to access, evaluate, and use information from a variety of sources" (1993, 138). Moore's empirical findings and theorizing on information problem-solving have formed the framework adopted for this research (1995b). For the purpose of developing assessment tools, the model used here is more linear and sequenced than hers. This means that the interactive, recursive nature of information problem-solving cannot be properly examined by this type of assessment; reflective observation and judgment by experienced teachers is needed to determine if students are letting new information affect their planning, locating, evaluating, or synthesizing.
The type of skills, knowledge, and attitudes required in information literacy can be summarized by a series of questions that students should ask of themselves:
What is the problem I have that information will help solve?
What exactly do I need to know?
How do I get the information I need to answer my problem?
How do I know which information to trust?
What does the information mean?
How does the information I have found address my problem?
How do I make sense of the information so that I can create a solution for my problem?
How do I share my solution with others?
How do I know that my solution is any good?
How do I know that the processes I used are any good?
These information problem-solving skills are, in the New Zealand Curriculum Framework, an essential skill that has been implemented in such learning areas as English, science, and social studies (Brown 1997, 1999). Figure 1 presents a summary of the various knowledge, attitudes, and abilities that students need to exercise in information problem solving.
Before students begin to actually obtain information, teachers need to make sure that students:
activate what they already know of a topic;
develop some hypotheses or ideas about what the solution to the problem might look like;
develop an understanding of what their goals are (e.g., Do I write a written report or draw a poster? When does it need to be done?);
develop a plan to decide which activities will be carried out and schedule them; and
make use of appropriate affective characteristics (e.g., perseverance, cooperation, honesty, etc.).
The arrow indicates that the various activities interact with each other and that carrying out these activities is not unidirectional. Nevertheless, as mentioned earlier, paper and pencil testing is not well suited to eliciting information about students' abilities to integrate these various dimensions.
During information problem-solving students must locate and evaluate information sources, select and understand the information within sources, analyze and apply information to the stated information problem, and synthesize a novel or creative solution to the stated information problem. Normally these activities are carried out sequentially, as indicated by the one-way arrow, though there will be recursion and revision as new information is brought to light.
After having developed a solution to their information problem, students need to present their answer in some format and undertake metacognitive reflection on their product and processes. This interactive process, as indicated by the two-way arrow, allows students to engage in an evaluation of their strengths and weaknesses, leading to improved learning.
Assessing students' ability to locate sources of information is possible through the use of paper and pencil standardized tests; however, most of the "before" and "after" stages can only be validly assessed through teacher or peer evaluations, by student self-reports or by performance assessment.
In March 1999 information skills tests were administered to nearly 5,400 randomly selected students. The students were enrolled at schools randomly selected from a geographically and school-size stratified sample. The number of students tested for the primary tests at years 5 and 6 (normally ages 10–11) and at years 7 and 8 (normally ages 12–13) for the intermediate tests is shown in table 1. The numbers are different since each student only completed one test, and so three different sample populations were used for the three test modules.
A careful examination of the demographic makeup of these test populations was undertaken. The following school-level characteristics were examined: school size and type, school socio-economic status (expressed in New Zealand as school decile), school funding basis, proportion of Pacific Island and Maori students, urban-rural mix, and geographic location. The sample for each test is suitably similar to the New Zealand population from which it was drawn.
The size of these samples provided sufficient numbers of students to examine the main effects of year level, gender, and school decile on achievement. However, the sample is not sufficiently robust to make strong inferences about interactions of school decile with gender or year-level achievement, since there are gaps in the decile populations and an unequal distribution of gender across deciles. Nevertheless, the sample is large enough to examine gender and year-level interactions.
The materials used in this survey consist of six tests, two for each topic. The intermediate version contains resource materials and questions at a more advanced level than the primary version. The test content probes key knowledge and skill areas that students are expected to be able to use in order to meet their information needs. Throughout the test development, the fact that students need to know how to use the various information structures and systems embedded in libraries, reference sources, and books for the purpose of solving information problems has been kept central. Table 2 shows the content areas covered by the six tests.
Each of the six tests went through an extended period of development in 1998 that involved repeated drafting, field-testing, item analysis, and rewriting to ensure that the difficulty levels, language, and content were appropriate for students at each level (Brown 1998). A thorough analysis of the item content, test language, and the item-difficulty characteristics was carried out on the standardization survey results. ConQuest (Wu, Adams, and Wilson 1997) was used for item analysis, and SAS (SAS Institute 1996) for multivariate analysis of variance (MANOVA).
The tests are designed to take no more than 30 minutes of testing time and contain between 26 and 34 items each. The tests are largely in constructed-response format, with multiple-choice items making up between a quarter and half of all items. The tests can be used at any stage of the school year for largely formative purposes, although norms are available should schools require them.
Each test is composed of stimulus material around which questions are structured. The stimulus material includes replicas of such things as catalogue displays, library shelves, pages from books, encyclopedia volumes, and so on. Students are asked to identify key information elements and use the information to solve information problems. Figures 2–4 give a sample from each test of what is asked of the students and show how students must answer.
As the figures show, the tests require students to solve information problems typical of the kind of activity they might normally engage in for school purposes. In figure 2, taken from the Intermediate Library test, students are required to locate items using the Dewey decimal system and show mastery of the shelf-bay structure referred to earlier that is commonly used in libraries. Students have to grasp that the Dewey numbers 500 and 600 are in the left-hand bay (letters A and B respectively), while letters C and D in the right-hand bay represent Dewey numbers 700 and 800. Those who are not familiar with the shelf-bay structure will read across the bay divider and place 600 at letter C, 700 at letter B, and so on.
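The shelving convention at issue can be made concrete with a short sketch. The layout, labels, and function names below are illustrative assumptions, not part of the test materials; the sketch simply contrasts bay-by-bay shelving with the across-the-divider misreading.

```python
def bay_by_bay(labels, shelves_per_bay):
    """Correct convention: read along and down all shelves of one bay
    before moving to the next bay. Labels are listed in that order."""
    return list(labels)

def across_divider(labels, shelves_per_bay):
    """Common misreading: continue along the top shelf of every bay
    before dropping down to the next row of shelves."""
    bays = [labels[i:i + shelves_per_bay]
            for i in range(0, len(labels), shelves_per_bay)]
    return [bay[row] for row in range(shelves_per_bay) for bay in bays]

# Two bays of two shelves each: A, B in the left bay; C, D in the right.
sections = [500, 600, 700, 800]
print(dict(zip(bay_by_bay(["A", "B", "C", "D"], 2), sections)))
# {'A': 500, 'B': 600, 'C': 700, 'D': 800}
print(dict(zip(across_divider(["A", "B", "C", "D"], 2), sections)))
# {'A': 500, 'C': 600, 'B': 700, 'D': 800}
```

The second mapping is the confusion the test item detects: a student who reads across the divider expects 600 on the top shelf of the right-hand bay.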
In figure 3, from the Primary Reference Sources test, students have to generate the key search term that expresses the class concept for the examples given or sort through the many searchable terms for the key one. Similar category selection questions are posed in the same test module by asking students for appropriate search terms that they would use in the context of a CD-ROM encyclopedia. Likewise in figure 4, extracted from the Primary Parts of a Book test, students must sort through the given information to identify the key search term that leads to the correct passage of text, while adapting the language of the question to the author's as used in the chapter titles.
The mean results for the tests provide an interesting overview of student locating skills. The tests have strong psychometric properties, especially given that they are quite brief and that many questions are in constructed-response format, that is, they require teacher judgment in the marking process. The coefficient alpha measures of internal reliability range from 0.84 to 0.90, with an average of 0.86. This indicates that students respond to the items consistently, with such consistency (the average alpha squared) accounting for around 74% of all variance. The standard errors of measurement are quite small, ranging from 7.4% to 8.3%, with an average of 8.0%.
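These reliability figures can be reproduced from raw item scores with the standard formulas. The code below is an illustrative sketch, not the analysis actually run for the survey (which used ConQuest and SAS), and the sample data are invented.

```python
import math

def cronbach_alpha(scores):
    """Coefficient alpha. scores: one list of item scores per student."""
    k = len(scores[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([s[i] for s in scores]) for i in range(k)]
    total_var = var([sum(s) for s in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Perfectly consistent responses give alpha = 1.0:
print(round(cronbach_alpha([[1, 1], [0, 0], [1, 1], [0, 0]]), 2))  # 1.0
# With the average reliability of 0.86, a test whose scores have an SD
# of 10 points has an SEM of about 3.7 points:
print(round(sem(10, 0.86), 1))  # 3.7
```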
The first statistically significant finding (table 3) is that students at a higher year level are somewhat more able to answer the questions than students at a correspondingly lower one. However, the difference between year levels is often not very large; the effect sizes range from 0.27 to 0.86 with an average of 0.45. Such small differences may occur because there has been little direct teaching and learning related to the constructs measured by the tests. In other words, students may learn more about these things incidentally rather than as a result of deliberate instructional programs. However, if the better reading ability of older students plays a significant part in these results, then students may be learning very little at present.
International comparison data is difficult to obtain since few other educational jurisdictions carry out such representative random-selection survey assessments in this area. However, a recent experimental training study of Canadian children in grades 3–5 (nominally ages 9–11) provides interesting comparisons (Symons et al. in press). It is worth noting that Canadian children are one year older than New Zealand children in the same grade or year since New Zealand children start school at age 5. In Symons et al., the children had to find answers to three questions by searching an informational text, using either the table of contents or the index to locate the appropriate page and then extract the correct answer. The performance of the control group of students is most likely to be similar to that of the students in this New Zealand survey. On average, across three studies, the grade 3 control students answered about 25% of the three questions correctly, and the grade 4 control students about 36%. The grade 5 control students, measured in only one study, scored 75%. Thus there seems to be some rough similarity in children's ability to locate answers to questions involving category selection, given the differences in task, time to complete, and number of questions.
The second major statistically significant finding is that girls are consistently somewhat more able to answer the questions than boys (table 4). However, this difference is quite small, ranging between approximately one-fifth and one-third of a standard deviation. This difference is not surprising, given the general superiority of girls in terms of reading vocabulary and comprehension and the paper and pencil nature of this survey.
The interaction of gender and year level is statistically significant for all six tests (see table 5). Between years 5 and 6 the trend is quite clearly a widening of the difference between boys and girls, with an average increase of 0.8% in favor of the girls. Between years 7 and 8, the gender gap decreases, with an average shift of 0.6% in favor of the boys. The interaction accounts for quite a small proportion of variance (the average equals 6.6%), except for the Primary Library test, where the gender gap between years 5 and 6 explains nearly a fifth of the variance.
Thus, it would appear that as boys and girls progress from year 5 to year 8 the difference in their ability to locate information first increases and then begins to decrease. Interesting as these interactions are, there is insufficient difference between boys and girls to warrant separate norms.
The socio-economic status (SES) of students has been measured through the proxy of the socio-economic status indicator of the school. In New Zealand, as mentioned earlier, school "decile" indicates SES. Decile indicates the tenth of SES in which the school falls as measured by statistical sampling of the incomes, household crowding, ethnicities, and education of a sample of households within the various geographic areas from which students attending the school come (Ministry of Education 1997). Low SES is associated with deciles 1–3, while high SES is associated with deciles 8–10.
The third major finding is that the mean achievement of students in high decile schools exceeds that of those in low decile schools. The difference between deciles is statistically significant for each test, though there are some interesting anomalies. Mean scores do not increase monotonically with decile. Often the mean score of students at low decile (1–3) schools will be as high as that of students in mid decile (4–7) schools. Just as often the mean score of students in high decile (8–10) schools will be as low as that of students in mid decile schools. Furthermore, the range of scores within each decile is very similar; there are students at every decile who get the lowest and highest possible marks.
This result has strong implications for classroom teachers. No matter the decile of the school, it is possible there will be students who will do very well on these tests while there will be others who will not be able to answer many questions correctly.
It should be noted that all of the demographic and student variables identified explain only 20% to 30% of variance in the test scores. This clearly indicates that student literacy, verbal and scholastic abilities, and other idiosyncratic traits generate the greatest proportion of variance. Further validation studies are needed to isolate the role of these factors.
Since these tests are designed to inform teachers as to the learning needs of students rather than just provide comparative norms, a thorough analysis of the content and the kinds of errors made by students was carried out. It is this level of detail about information literacy that is of real interest to teachers, teacher-librarians, and administrators. In other words, on which types of locating information skills do which students need further instruction? Thus, use of these tests allows the identification of the locating skills that students need to develop. Table 6 reports the average level scores for each of the categories discussed in table 2 earlier.
The difficulty of the questions was determined from the item logits derived from the ConQuest item analysis, which is based on the Rasch single-parameter model. Detailed discussion of the findings relevant to the six tests is reserved for the easiest and most difficult sections only as determined by the average logit values of the questions. The following narrative discusses the hardest and easiest sections of each test at each level as well as the category selection problem common to all tests. It is worth remembering at this point that the information skill being reported is the locating skill only.
Libraries

Identifying the title of a book and locating fiction and nonfiction items on the shelves were the two easiest skills for primary age students, with about 50% of year 5 students answering the 8 items correctly and between 65% and 75% of year 6 students answering correctly.
The wrong answers provide some insight into what students find difficult in this easiest area of library information skills.
Approximately 25% believed that the subject keywords, rather than the Dewey decimal number, would help them find books in the nonfiction section.
Approximately 20% chose to use the publisher's name, rather than the Dewey decimal number, to locate books in the nonfiction section.
Approximately 11% believed the Dewey decimal number referred to the number of pages in the book.
Approximately 15% of students identified the author as the title from a catalog display.
The two hardest sections for primary students were the 9 items that required them to discriminate between "fiction" and "nonfiction" categories (6 questions) and to identify the subject keywords (3 questions). On average only 33% of year 5 students and 47% of year 6 students answered the 6 items about fiction-nonfiction correctly. The average scores for the 3 items on subject keywords were 23% for year 5 and 41% for year 6 students. The accuracy difference between the hardest and easiest sections was 26 percentage points at year 5 and 32 percentage points at year 6.
Analyzing the wrong answers for the harder sections is difficult because more students missed or skipped the questions or provided answers that could not be categorized. However, from the wrong answers that were available, it was clear that 25% of all primary students reversed the meaning of the terms "fiction" and "nonfiction." The next most common error (9% of students) was to look for nonfiction items in the reference section. Two of the keyword questions required students to write in the two subject keywords provided in the stimulus material. Of those who were incorrect, 9% were able to provide at least one correct keyword, indicating that those students were on their way to fully grasping the concept. In contrast, 8% of students simply used the first topic in a list of subjects as the keyword that had generated the list, indicating that they were truly unfamiliar with the requirements of the task.
At the intermediate level, students found identifying the complete range of bibliographic citation information relatively easy (at an average of 69% correct at year 7 and 78% correct at year 8 on 7 items). Identifying and creating keyword search terms was next easiest (at an average of 58% correct at year 7 and 70% correct at year 8 on 4 items). Students found accurately identifying the publication year, place, and publisher the hardest part of this skill, with the proportion of students making this type of error ranging from 5% to 11%.
Selecting sources that are not books was relatively hard (at an average of 41% correct at year 7 and 52% correct at year 8 on 3 items). Discriminating between fiction and nonfiction categories was still the hardest library information skill (at an average of 20% correct at year 7 and 27% at year 8 on 3 items). Approximately half of all students selected a book-format item when a nonbook was required. And again, 25% of students reversed the meaning of the words "fiction" and "nonfiction," while 15% looked in the reference section for nonfiction items.
Parts of a Book
Results are reported by ease or difficulty rather than by level since students performed in such a similar manner.
At both primary and intermediate levels, students found the alphabetic order questions easiest (at an average of 58% correct at year 5, 66% at year 6, 55% at year 7, and 63% at year 8, based on 6 items in each test). Mistakes were most often made with the last or penultimate words when putting lists of 4 or 5 words into alphabetic order (errors made by 11% and 12.5% of students on the primary and intermediate tests, respectively).
The primary students found the index section most difficult (at an average of 39% correct at year 5 and 49% correct at year 6 on 7 items). The intermediate students found the 7 items on the table of contents most difficult (at an average of 43% correct at year 7 and 49% correct at year 8). It is interesting to note that in both topic areas the information skill is similar: category selection. Both sections require students to integrate several factors into the creation of a search strategy resulting in a selection of an appropriate category. Approximately 33% of primary students and 20% of intermediate students responded in a way that showed they had not kept in mind all the task criteria when answering.
By way of illustration, students were provided with a table of contents and asked to identify the page on which information would be found about Lake Wakatipu, a water-filled glacier valley. Intermediate students who got this question wrong might have been distracted by six different contents pages: "the valley stream," "glaciers," "lakes and swamps," "river-made lakes," "blocked-valley lakes," or "lakes fresh and salty." Selection of any of these choices indicates that the student failed to combine the three critical elements of glacier, valley, and lake in the question. These terms are captured simply and inferentially by the contents heading "glacial lakes."
Reference Sources

Results are reported by ease or difficulty rather than by level since students performed in such a similar manner.
At both primary and intermediate levels, students found the directory questions easiest (at an average of 72% correct at year 5, 77% at year 6, 68% at year 7, and 74% at year 8, based on 5 items in each test). As with the index and table of contents items, the directory questions require students to integrate several factors into the creation of a search strategy that results in selection of an appropriate category. Approximately 40% of primary students and between 20% and 33% of intermediate students responded using only one key term or ignoring key restrictive terms such as "only" or "after."
At both levels, students found the dictionary section hardest (at an average of 43% correct at year 5, 40% at year 6, 44% at year 7, and 41% at year 8, based on 6 and 7 items, respectively). The choice of the last word on the page as the second guide word posed the most challenge. Approximately 40% of primary students and 33% of intermediate students chose the top word of the right-hand column of a dictionary page, instead of the last word in that column, as the second guide word.
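The guide-word convention at issue can be sketched briefly; the page layout and words below are invented for illustration and are not the test items themselves.

```python
def guide_words(left_column, right_column):
    """Guide words are the first and last entries on the page:
    the TOP of the left column and the BOTTOM of the right column."""
    return left_column[0], right_column[-1]

left = ["kahawai", "kaka", "kakapo"]
right = ["kauri", "kea", "kereru"]
print(guide_words(left, right))   # ('kahawai', 'kereru')
# The common error takes the top of the right column instead:
print((left[0], right[0]))        # ('kahawai', 'kauri')
```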
Difficulty with category selection (Dreher and Guthrie 1990) was apparent in all six tests. Samples of this can be seen in figures 3 and 4. The task of selecting the correct volume of an encyclopedia for the Olympic Games requires deciding on the key search term: "olympic" or "games." The task of identifying the volume of an encyclopedia with information on dinosaurs from a list of examples requires the students to generate the organizing category. Fortunately, dinosaur is both a word and a concept in the vocabularies and minds of many young people in western society, and so this task of generating the appropriate search category from a list of examples is realistic and valid. On the other hand, selecting the volume that has information on the New Zealand woman Kate Sheppard requires selection from multiple search terms, such as "New Zealand," "woman," "Kate," and "Sheppard." Only those students who have mastered the convention that a person's surname is used as the key search term were capable of generating the most appropriate answer. Only if such a search provided no answer would a search by a more general category be justified.
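The volume-selection task can be sketched as a lookup over alphabetic volume ranges, where the term chosen as the key determines the result. The ranges and the simple first-letter rule here are illustrative assumptions, not the actual test materials.

```python
def find_volume(volumes, terms):
    """volumes: (first_letter, last_letter) ranges, in shelf order.
    terms: candidate search terms, most specific (key term) first.
    Returns the first term whose initial falls within a volume's range,
    together with the 1-based volume number."""
    for term in terms:
        initial = term[0].upper()
        for i, (lo, hi) in enumerate(volumes):
            if lo <= initial <= hi:
                return term, i + 1
    return None

volumes = [("A", "C"), ("D", "G"), ("H", "O"), ("P", "Z")]
# Searching under "Olympic" and under "Games" leads to different volumes:
print(find_volume(volumes, ["Olympic"]))  # ('Olympic', 3)
print(find_volume(volumes, ["Games"]))    # ('Games', 2)
# For Kate Sheppard, the surname convention puts "Sheppard" first:
print(find_volume(volumes, ["Sheppard", "New Zealand", "Kate"]))
# ('Sheppard', 4)
```

The point of the sketch is that the lookup itself is mechanical; the skill the tests probe is supplying the right key term in first position.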
The pattern of results when averaged across the tests is informative. At the primary level, about 50% of students are capable of selecting the correct category when working with one key search term. When two or more key search terms are provided, only 33% of students are capable of selecting the correct category; another 33% select a category that satisfies one of the search terms but not the key or central term of the search task.
At the intermediate level, 60% of students are able to correctly select a category when working with one key search term. Nearly 50% of them are also able to correctly select the correct category when working with more than one search requirement, while only 25% of them select a category using a minor search term.
Plainly there is growth in students' abilities as they spend more time at school. However, significant proportions of students will need much more teaching, modeling, and experience to develop the cognitive ability to correctly sort through search terms and focus on the key search term in such information problems.
The results of this test standardization—though not intended as a survey—have pointed out in some detail the strengths and weaknesses of New Zealand students concerning locating information. Locating information is one of the building-block skills of information literacy; without information literacy, crossing the so-called "digital divide" will be difficult for today's children.
Teachers can quickly identify the topics on which students need instruction in using libraries, parts of a book, or reference sources to find relevant information. This is very helpful, given that, in nearly every New Zealand classroom of 20 students, as many as 10 will have one or more of the misunderstandings identified in this article. It is hoped that use of these tests will improve both the teaching and learning of information literacy. However, this will only happen as teachers develop appropriate learning activities in response to the students' results.
It must also be kept in mind that students' full abilities have not been captured by these paper and pencil tests of locating skills. Students' information skills need to be observed and judged as they carry out the full range of information tasks. Toward that end, teacher rating and student self-report forms that can be used for evaluation of information skills performance are being developed. If used in conjunction with the forthcoming Information Skills tests, teachers will have a more complete view of students' information literacy and will be better able to plan learning activities.
Given that there are deficiencies in students' ability to locate information, what possible solutions can schools implement? Of the many possible responses to the findings, it seems to this assessor that schools can take responsibility for two main contributors to student achievement: teachers' professional development and provision of information-rich learning environments. Through professional development, teachers would develop greater ability to provide the modeling, instruction, and activities that students need to develop their information literacy. Secondly, schools will probably need to invest more in creating information-rich environments in which their students can practice these information skills. Resource-based learning may provide an ideal environment for information problem solving.
Brown, G. 1997. Information skills in the New Zealand curriculum: A blueprint for education? Paper presented at the New Zealand Association for Research in Education Annual Conference, December 4–7, 1997, Auckland. ERIC Document ED 429 618.
———. 1998. Assessing an essential skill: Finding information in the library. Paper presented at the New Zealand Association for Research in Education Annual Conference, December 3–6, Dunedin. ERIC Document ED 429 995.
———. 1999. Information literacy curriculum and assessment: Implications for schools from New Zealand. In The information literate school community: Best practice, J. Henri and K. Bonanno, eds. Wagga Wagga: Charles Sturt Univ., Centre for Information Studies, 55–74.
Crooks, T., and L. Flockton. 1998. Information skills: Assessment results 1997. National Education Monitoring Report 7. Dunedin: Univ. of Otago, Educational Assessment Research Unit.
Doyle, C. S. 1993. The Delphi method as a qualitative assessment tool for development of outcome measures for information literacy. School Library Media Annual 11: 132–44.
Dreher, M. J., and J. T. Guthrie. 1990. Cognitive processes in textbook chapter search tasks. Reading Research Quarterly XXV, no. 4: 323–39.
Ministry of Education. 1997. Ministry of Education socio-economic indicator for schools. Unpublished paper. Wellington: Ministry of Education, Data Management and Analysis Section.
Moore, P. 1995a. Information literacy: Past approaches and present challenges. New Zealand Annual Review of Education 5: 137–51.
———. 1995b. Information problem solving: A wider view of library skills. Contemporary Educational Psychology 20: 1–31.
NEMP. 1998. Information skills. Forum Comment (July): 2–4.
SAS Institute. 1996. SAS System for Windows. V6.12 TS050. Cary, N.C.: SAS Institute, Inc.
Symons, S., et al. In press. Strategy instruction for elementary students searching informational text. Scientific Studies of Reading.
Wu, M., R. Adams, and M. Wilson. 1997. ConQuest: Generalised item response modelling software. Melbourne: ACER.
Manuscript submitted: October 2000
Board approved: January 2001