
Strategic Research:
Problem-Solving Through Systematic Assessment


Diane Nahl

University of Hawaii
nahl@hawaii.edu

Abstract

Our organizations no longer consider it sufficient to make budget or service decisions based solely on professional judgment. Accountability drives the strategic planning process: clearly describing the aims of the organization, defining goals and objectives for the near future, planning and carrying out relevant actions, measuring the outcomes, examining the resulting data, and making decisions based on systematic assessment. Instructional design follows the same systematic process, guiding instructors to create and evaluate lessons based on learning outcomes, measure what students have learned, and use the data to refine instruction and improve students' rate and amount of learning. Other research methods can also provide information that helps us make decisions about facilitating user learning.

Introduction

At this time in history we are experiencing many new demands on us as professionals. The strategic planning initiatives that our organizations are implementing are leading us through the problem-solving process of specifying our mission, goals, and objectives, and measuring their outcomes.

Strategic Planning

  • Articulate Mission
  • Define Goals
  • Specify Objectives
  • Plan Actions and Measurements
  • Establish Timeline
  • Conduct Evaluation

Instructional Design

  1. Conduct Needs Assessment
  2. Define Goals and Objectives
  3. Select Formats, Methods and Materials
  4. Devise Test and Evaluation Procedures
  5. Construct and Teach Prototype
  6. Evaluate and Analyze Results
  7. Revise and Recycle

Instruction librarians are well prepared for this broader emphasis on assessment because strategic planning and systematic instructional design have much in common. Many of us are already accustomed to creating and evaluating instruction based on desired learning outcomes, measuring what students have learned, and using that data to refine our instruction to improve the rate and amount of learning.

Action research is a handy extension of strategic planning because it gives us the tools we need to support the data-driven decisions that our institutions require. Since my task here is to discuss the preliminary stages in a research project, this afternoon I will introduce the concept of strategic research for decision-making, touching on identifying questions or problems, and action research methods. Then, I will show you a technique you can use to identify the variables and state the problem.

Preliminary Stages

1. Research is something you already know how to do.
Certainly there are technical things to learn, and this is to be expected in a career that emphasizes lifelong learning. Your experience in helping users define their information search queries and your experience searching the literature of the sciences and social sciences contribute to your ability to define research problems, select methods, and perform analyses.

2. Search the literature first.
This step is fundamental because it helps transform the problem into a research design. Searching the literature brings awareness of problems, methods, and the types of analyses available to make the data useful for decision-making. Consider replicating or building on a published study.

3. Inquire on listservs.
Listservs are an unprecedented resource for professionals. Never in history have we been so accessible to one another, regardless of time and distance. We are free to seek advice and opinion within our profession, and our colleagues gladly contribute to the solutions we seek.

4. Keep an open mind, test your assumptions.
As professionals we are accustomed to forming opinions based on informal observations. When we shift to making systematic observations, our assumptions are sometimes challenged. Consider that there may be other possible explanations, and let the data reveal the patterns in the setting or situation.

5. Formulate questions, "What is my 'information need'?"
Conduct a self-reference interview to draw together all of the relevant aspects of the problem.

6. Collaborate with colleagues or those with experience in a technique or process.
It is helpful to work together on problems because the collaboration provides support, varied views on the problem, and a mix of strengths.

7. Look for existing population data kept by the institution.
There may be baseline information available in institutional surveys and reports.

8. Be prepared to believe the data.
Let the data reveal the patterns. Gather several forms of data to cross-verify findings. However, if you doubt the results, check the raw data entry and analyses, gather more data, revise the instruments, or redesign the study.

9. Be prepared to use the data to create a new instrument.
Research is an iterative process of refinement, in which one study suggests follow-up studies that consider further aspects or stages of the problem.

The speakers today will provide examples of these points when they discuss their projects.

How do you find a good and worthwhile question to research? First, by embracing the idea that policy or service decisions need a research component to justify specific interventions. Second, by identifying research problems through strategic planning. What's happening in your environment that causes difficulties, or what do you want to change?

What kinds of problems lend themselves to action research? Strategic research focuses on making data-driven decisions within an outcomes assessment model.

STRATEGIC RESEARCH

  • A policy initiative you are concerned with
  • A problem your users are experiencing
  • A change you are interested in creating
  • An evaluation for a current service

There are no restrictions as to the kind of problem as long as it focuses on some behavior, activity, or process that naturally occurs in your environment. Any problem or issue can be researched, as long as it can be transformed into a research query that yields useful numbers. I'll give you some hints on how to do this.

How do you scale a problem down to a size that is manageable and relevant? It's always good to start small and think of it as a pilot study. I think that most problems can be translated into a research design. The first step is to identify the variables you want to examine.

For instance, here's a relevant problem that some of us are facing: Have you already created, or do you intend to create, a large-scale Web site for the Library that will require lots of time for development and continued maintenance? This project generates information needs. What do you need to know to make this task effective and efficient? Many things. Some examples:

  • You will need to know which links are used most frequently so you can assign top priority to those pages on the update schedule, to maximize their usefulness. (Browser statistics programs; see the sketch after this list)
  • You'll want to know what people find most useful about the site so you can increase its usefulness and expand it according to real information needs. (online ratings, email suggestion box, surveys, focus groups)
  • You'll want to know what they learn from the instructional materials you provide. (cognitive tests)

  • You'll want to know where they are failing in what they try to do on your site. (cognitive tests, transaction logs, think-aloud method, protocol analysis)

  • You'll want to know which sites they link to most frequently from your site to determine the actual proportion of linking to outside, campus, and library database sites. (Browser statistics program)
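
As a concrete illustration of the first and last items, here is a minimal sketch in Python of the kind of counting a browser statistics program does with the raw server log. The file name (access.log) and the common log format layout are assumptions for illustration, not features of any particular statistics package.

    # Tally which pages (or outbound links) are requested most often.
    # Assumes a hypothetical "access.log" in common log format, e.g.:
    # 127.0.0.1 - - [date] "GET /databases.html HTTP/1.0" 200 1234
    from collections import Counter

    page_hits = Counter()
    with open("access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 2:
                continue                    # skip malformed lines
            request = parts[1].split()      # ['GET', '/databases.html', 'HTTP/1.0']
            if len(request) >= 2:
                page_hits[request[1]] += 1

    # Pages used most frequently get top priority on the update schedule.
    for page, hits in page_hits.most_common(10):
        print(f"{hits:6d}  {page}")

The same tally, applied to the outbound links recorded in the log, gives the proportion of linking to outside, campus, and Library database sites.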

And there are many other questions generated by such a practical project. These questions are your "research information needs." They identify the variables you are interested in finding out about. For example, you might want to study the variable "errors" so you can design screen instructions that reduce the error rate for users. You can study the characteristics, types, and frequency of errors made on the site, then use that data to redesign aspects of the interface, and test the effect of the new design. Or you might want to study the variable "menu labels" to find out which labels appeal most to users or which ones they understand better.

We might also look at data that already exists in our libraries, often overlooked, that could contribute to decision-making and research plans. The library and the institution itself collect data that might be relevant to a particular research information need you have. Students also contribute to data collection in group projects for courses. A survey that includes some information relevant to your study may exist in another unit. Information that is routinely logged or recorded in the Library may turn out to be relevant, along with data presented in regular accreditation and institutional review reports. Institutional listservs may help you identify useful sources of data.

You often need several types of data in stages to complete a project. For example, you may do a needs assessment to get baseline information by designing a survey questionnaire, examining log files, holding focus groups, and sampling the questions people ask at the search workstations. All of these data contribute to the needs assessment and yield different levels of information. Then you can use the data to design an intervention--some change that you systematically introduce to achieve a different, desired result--and collect data about the effects of the intervention to see whether it had the intended effect. This new data helps you plan the next phase of effecting a change. Research, like searching, is typically iterative; it evolves. There is never just one search, and one study leads to another.

How do you match your methodology to the problem? Needs Assessment can make use of a number of methods for data collection and analysis. Research methods that are particularly suited to action research include:

Research methods

Summative Assessment: Retrospective reports, recollections, impressions of past experiences or activities.

  • Surveys/Questionnaires
  • Interviews
  • Focus Groups

Rating Scales: Summative ratings of past impressions or formative ratings in an ongoing process. The scales range from 3 to 10 points. (See the scoring sketch after this list.)

  • Semantic Differential Scales (bipolar scales, e.g., helpful/confusing, easy/difficult, confident/unsure)
  • Likert-Type Scales (defined scales, e.g., strongly agree, agree, unsure, disagree, strongly disagree)
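
Here is the scoring sketch mentioned above: a few lines of Python that convert Likert-type responses to numbers and summarize them. The statement, scale values, and responses are hypothetical; semantic differential items can be scored the same way by numbering the points between the two poles.

    # Score Likert-type responses on a 5-point scale and summarize them.
    # The statement, labels, and responses below are hypothetical examples.
    from collections import Counter
    from statistics import mean

    SCALE = {"strongly agree": 5, "agree": 4, "unsure": 3,
             "disagree": 2, "strongly disagree": 1}

    # Responses to: "The Library Web site is easy to navigate."
    responses = ["agree", "strongly agree", "unsure", "agree",
                 "disagree", "agree", "strongly agree"]

    scores = [SCALE[r] for r in responses]
    print(f"n = {len(scores)}, mean rating = {mean(scores):.2f}")
    print("distribution:", dict(Counter(responses)))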

Knowledge Assessment: Formative testing throughout the learning process or summative assessment on completion.

  • Performance Tests (Hands-on exercises requiring answers.)
  • Short Answer Pre-Post Tests
  • Paper Interface Instruments (Very convenient. Slows down the search process. Allows you to find out how users interpret screen information and system functions. Entire classes can be tested simultaneously.)

Process Monitoring: Formative assessment during an ongoing process or activity.

  • Baseline-Intervention-New Baseline (Collect systematic observations in the environment to determine the current state, introduce a specific change and collect observations, remove the change and measure again to verify that the intervention was responsible for the change in observations. Continue by re-introducing the intervention, and others of interest one at a time.)
  • Log File Monitoring (Transaction logs provide useful data on how people use the system, how they interpret the commands, what types of errors are typical, and more. They don't tell you why users did what they did--for example, why someone re-entered the same zero-results search repeatedly. See the sketch after this list.)
  • Logging (Many sorts of data are logged in institutions, e.g., reference questions are categorized and frequencies are reported, in some cases, the actual questions are logged and analyzed for content and level.)
  • Reflection Logs and Journals (Students report on their research process through the use of structured self-reports and guided exploration. The entries are analyzed for content, comprehension, evidence of typical stages in the research process, vocabulary development and search query composition, and more.)
  • Content and Protocol Analysis (Taxonomic methods for identifying and categorizing pertinent themes and stages in self-reports, journals, recorded think aloud reports, recorded reference interviews, recorded research or problem solving discussions, etc.)
  • Time-series Field Sampling (Conduct systematic observations in the environment at regular intervals over time, using participant-observer methods for monitoring and logging.)
  • Experiments (Testing cause-effect relationships in the user environment to assist strategic decision-making.)
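
The sketch promised under Log File Monitoring appears here. It compares the percentage of zero-hit searches before and after an intervention, in the spirit of a baseline-intervention design. The one-record-per-line layout (search terms, a tab, then the hit count) and the file names are assumptions for illustration; a real transaction log needs its own parsing rules.

    # Compare the zero-hit rate in transaction logs before and after an
    # intervention (baseline vs. intervention period). The log layout
    # ("search terms<TAB>hit count" per line) and file names are hypothetical.

    def zero_hit_rate(log_path):
        """Return the percentage of searches that retrieved zero records."""
        searches = zero_hits = 0
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                try:
                    hits = int(line.rsplit("\t", 1)[1])
                except (IndexError, ValueError):
                    continue                 # skip malformed lines
                searches += 1
                if hits == 0:
                    zero_hits += 1
        return 100.0 * zero_hits / searches if searches else 0.0

    before = zero_hit_rate("opac_log_baseline.txt")      # before the change
    after = zero_hit_rate("opac_log_intervention.txt")   # after the change
    print(f"Zero-hit searches: {before:.1f}% baseline vs. {after:.1f}% intervention")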

I found in my own experience that the key to action research is the process of translating what you want to happen into a research query. This task comes at the very beginning of a research project and is similar to a process we're already quite familiar with--the task of helping people translate their information needs into information research queries. When we are poised to begin a research project we are like students before beginning a term paper: We have an information need. Part of this need is to do a literature search, of course. But let's assume you're past that and now you need to make a plan to collect relevant data.

At this point you have a specific or special information need, namely to translate your policy initiative into a research design query. Unfortunately, you can't use the regular Boolean operators to do this! Instead you use statistical operators and variables. I developed an easy way of doing this that's almost foolproof and will work for you in most of the problems you'll be facing in terms of research design and statistical decision making. I published this idea in Research Strategies, 5(4) Fall 1987. You can check it out later if you decide to follow it up. It can help you get started.

It's called, "Teaching the Analysis of Titles: Dependent and Independent Variables in Research Articles." Sounds pretty dry, I know, but I noticed that many titles of journal articles that report statistical data include the Independent and Dependent Variables of their research design and I thought that knowing this could be helpful to searchers.

For example, take an article by Trudi Jacobson and Janice Newkirk, "The Effect of CD-ROM Instruction on Search Operator Use," in the January 1996 issue of College & Research Libraries. This is a good action research problem for instruction librarians who want to know how to improve student search behavior: first by finding out what students are doing, then by designing instruction to influence their search behavior.

"The Effect of CD-ROM Instruction on Search Operator Use." Sounds familiar, doesn't it? The effect of something on something else. Or the effect of X on Y. It's one of several formulaic expressions available to express the cause-effect relationship between the Independent and Dependent Variables in a research design. I'll be using this expression from now on-- IV/DV. It sorts of grows on you, and then you get to be fond of it as you begin to feel confident about its meaning. Oh, yes, that's the DV and that's the IV. You'll be practicing this in just a few minutes.

Back to the title: "The Effect of CD-ROM Instruction on Search Operator Use." This title implies that an experiment was done and that the DV was "Search Operator Use" (measured in some way), and the IV was "CD-ROM Instruction." The expression "The effect of" implies that an experiment was done in which the effect of one thing on another thing was measured. The one thing (IV) here is "CD-ROM Instruction," and its effect was observed on "Search Operator Use," the DV. This implies that there was more than one type of instruction, or perhaps instruction vs. no instruction.

[Transparency 1--CD-ROM Instruction: "The Effect of CD-ROM Instruction on Search Operator Use."]

Take another example. This article appears in the January 1998 issue of College & Research Libraries: "Using Transaction Log Analysis to Improve OPAC Retrieval Results." The title tells us that an experiment was done in which the DV is "OPAC Retrieval Results," observed through "Transaction Log Analysis." In other words, they might have a policy initiative to "improve" their OPAC service by making it easier for searchers to find what they need. From the title we can also deduce that they decided to use transaction logs as the source of the data that indicates whether there was an improvement or not--the DV.

The IV is not visible in the title but the method is. To make the title complete to reveal the independent variable, it could be expanded to: "Using Transaction Log Analysis in Screen Re-Design to Improve OPAC Retrieval Results." So, various re-designs of the screen (IVs) were attempted and their effect on retrieval results was measured (DVs). This is an example of a Baseline/Intervention Design.

[Transparency 2--T-Logs: "Using Transaction Log Analysis to Improve OPAC Retrieval Results," revised to include the IV/DV.]

From these two examples we see two important things. First, that your action policy or initiative needs to be translated into an IV/DV research design query. Second, that the DV identifies the target result--what you want to happen--while the IV identifies your intervention procedure--the specific condition you are going to change in order to bring about the target result. The IV/DV query format is the most basic unit of all research methodologies and action research. This is because it is the standard accepted method of accountability in our society. For instance, if we say that Condition A causes result B, we're making a cause-effect IV/DV statement. Condition A is the IV, and result B is the DV. The IV is always a condition or an intervention, and the DV is always a result or a score.

The first step, then, is to translate your policy initiative into the IV/DV Query. I'll have you practice in a few minutes. But there is a second step that goes with it, and that is to translate the DV part of the query into a variable or measure. This is the process of creating an operational, measurable definition for the dependent variable. This takes some experience, and sometimes some ingenuity, but you get better at it with practice. When you're starting it's always acceptable to use a measure that someone else has already used for similar purposes--so you don't have to re-invent it.

But if a suitable measure is not available, it's acceptable to invent your own. The only requirement is that you get some reliable number out of it. It has to be a number, and it has to be reliable, with a high test/re-test correlation. That means that if you ask the same people a second time, right after you asked them the first time, you end up with the same number, or if you ask another sample from the same population, you get the same results--within some small margin of error, of course. All measurement involves some error. That's an accepted policy, and statistical formulas include an error term. We usually try to aim for less than 5% error in measurement (p<.05).

Let's see by example how some others have done these two steps. This next article appeared in the Fall 1997 issue of Reference and User Services Quarterly. The title is, "Flip Charts at the OPAC: Using Transaction Log Analysis to Judge Their Effectiveness." The IV is "Flip Charts," and the DV--search effectiveness--is measured through "Transaction Log Analysis." The title tells us what was done by giving us the IV/DV query.

"Flip Charts at the OPAC: Using Transaction Log Analysis to Judge Their Effectiveness."

three

Note the additional words giving us information: "to Judge their Effectiveness"--in other words, to address their instructional goals they wanted to measure how users performed with OPAC searches to see whether these performances improve when they make flip charts available near work stations. Their target result is to improve the effectiveness of searches (that's the DV) and they propose to bring this about by providing this new service of flip charts (that's the IV). This is another baseline intervention design that began with observations of the current state, then added an intervention and measured the results, creating a new baseline if further improvement is desired.

So this was their first step, to translate their policy initiative into an IV/DV query. The second step was to translate the DV into a measure that yields numbers for comparison. This information is not available in the title. Sometimes it is available in the abstract. In this case we're told: "Comparisons were made among the types of searches conducted, the percentage of zero hits, and the types of errors found before and after the introduction of the flip charts to determine whether searching success rates had improved or if searching strategies had changed."

This tells us a great deal about the research set up and goals. At the broadest level of translating their policy initiative, they refer to improvement in "searching success" and changes in "searching strategies." They then translate searching success and searching strategies into three DVs to yield numbers: (DV 1) types of searches performed (Author, Title, Subject, Keyword); (DV 2) % searches with zero hits (presumably the flip charts should cause these to go down); (DV 3) types of errors (defined as: Typos, Invalid Subject, Incorrect Boolean, Not in database).
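
As a sketch of how such DVs become numbers, the following Python snippet tallies types of searches and types of errors from a coded transaction log. The comma-separated layout ("search_type,error_code" per record) and the file name are hypothetical, standing in for whatever coding scheme the raw logs support.

    # Tally two of the DVs above--type of search and type of error.
    # The layout ("search_type,error_code" per line) and the file name
    # are hypothetical.
    from collections import Counter

    search_types = Counter()
    error_types = Counter()

    with open("coded_opac_log.csv", encoding="utf-8") as log:
        for line in log:
            fields = [field.strip() for field in line.split(",")]
            if len(fields) < 2:
                continue                      # skip malformed lines
            search_type, error_code = fields[0], fields[1]
            search_types[search_type] += 1    # Author, Title, Subject, Keyword
            if error_code != "none":
                error_types[error_code] += 1  # Typo, Invalid Subject, ...

    print("Types of searches:", dict(search_types))
    print("Types of errors:  ", dict(error_types))

Running the same tally on logs from before and after the flip charts were introduced gives the before/after comparison the abstract describes.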

Since these are the two crucial steps at the beginning of any research project, let's analyze a couple more examples, and then I'll have you pair up and give you a few minutes to practice these two steps with one of your own policy initiatives. This article was published in the November 1997 issue of College & Research Libraries. The title is, "Navigating Online Menus: A Quantitative Experiment." The problem evolved as their menu of databases grew: students experienced difficulty selecting the appropriate database. We've all experienced that instructional problem because it's quite common.

What does: "Navigating Online Menus" mean?-- and yet it contains the IV/DV query if you unpack the title using your ordinary information science logic. "Navigating" is the DV because that's where you can observe the user's performance--it's the user who is doing the navigating. "Online Menus" is the IV because that's the context or condition under which the user is doing the navigating. That's what you can change, the menus.

"Navigating Online Menus: A Quantitative Experiment."

four

The title says that it is "an experiment," and this tells us that some treatment condition or intervention was created with the online menus (that's the IV) and the resulting effect of this intervention on users' navigation performance was measured (that's the DV). We're not told in the title what the measures were or what the intervention was. But the abstract tells us.

"This article describes...the effect of terminology and screen layout on students' ability to correctly select databases from an introductory screen."

Here we can identify two independent conditions or interventions. IV 1 is "terminology" and IV 2 is "screen layout." The next sentence specifies what the interventions were: "Results indicate a significant improvement in students' ability to navigate menu screens where terminology is expanded and selections are grouped by type." This informs us that one intervention consisted of expanding the terminology on the screen (that's IV 1) and the second was to group menu selections by type (IV 2). These two interventions are said to "enhance searching and reduce the need for end-user training." This refers to the two DV measures: "enhancing searching"--that's DV 1--and reducing the need for end-user training (DV 2). These two are the consequences or results of the two interventions.

In order to find out how they generated numbers for the two DVs, we need to look at the Tables in the Results section. Here we see that they gave out three different questionnaires on paper versions of the Interface with two alternative designs, one per group of students, and then compared the answers across the three groups. The questionnaire contained pictures of the screen layouts, and the students answered test questions about choices they would make to find some specified magazine or title. The answers were scored for accuracy (whether they could find it or not) and efficiency (or, number of steps needed to get there). So these were the two DVs: accuracy and number of steps. "Navigating" was operationalized as accuracy and efficiency.
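
A brief sketch of that operationalization in Python: score each student's answers for accuracy and number of steps, then compare group means. The group names and scores below are invented for illustration; the published study reports its own figures.

    # Operationalize "navigating" as accuracy and efficiency and compare
    # interface groups. Group names and (correct?, steps) scores are
    # hypothetical, not the published results.
    from statistics import mean

    results = {
        "original menu":           [(True, 5), (False, 7), (True, 6)],
        "expanded terms, grouped": [(True, 3), (True, 4), (True, 3)],
    }

    for group, scores in results.items():
        accuracy = 100.0 * sum(ok for ok, _ in scores) / len(scores)
        steps = mean(n for _, n in scores)
        print(f"{group:26s}  accuracy {accuracy:5.1f}%  mean steps {steps:.1f}")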

Perhaps it's time to get you to practice. Since the seating arrangement doesn't allow you to form groups, at least you can do this in pairs. After introducing yourself to your neighbor, one of you can take the role of explaining some policy initiative you are concerned with, or some problem your users are experiencing, or some change you are interested in creating that you might want to attempt some day. Or it could be some evaluation interest you have for a service that's ongoing but that might need some change, or interventions.

IV/DV Exercise

1. One person explains:

  • A policy initiative you are concerned with
  • A problem your users are experiencing
  • A change you are interested in creating
  • An evaluation for a current service

The other person listens and asks questions. You can take notes if you like. Then both of you create a title for the project that contains the IV/DV information. Try to create a title that describes your IV and DV in specific rather than broad terms, so that the reader can deduce from your title what measures and interventions you'll use.

2. Discuss the research problem. Think about how to measure the current situation and/or the changes.

3. Create a title for the project that includes the IV/DV information.

You'll have 5 minutes to do this. Write the title on a sheet of paper along with any questions or puzzles you may have left over. I will collect the papers and read some out loud. I will comment on them and answer some of the questions.

I'll leave this transparency on to help you with the various kinds of formulaic expressions that research titles can take.

Title cause-effect formulas

  • The Effect of IV on DV
  • The Role of IV in DV
  • DV as a Result of IV
  • IV and DV (or, DV and IV)
  • IV 1 and IV 2 as Determinants of DV 1, DV 2, and DV 3
  • DV Characteristics of IV Systems
  • DV 1 and DV 2 in IV 1

Bibliography

Atlas, Michel C., Karen R. Little, and Michael O. Purcell. Flip Charts at the OPAC: Using Transaction Log Analysis to Judge Their Effectiveness. RQ 37(1) (Fall 1997): 63-69.

Blecic, Deborah D., Nirmala S. Bangalore, Josephine L. Dorsch, Cynthia L. Henderson, Melissa H. Koenig, and Ann C. Weller. Using Transaction Log Analysis to Improve OPAC Retrieval Results. College & Research Libraries (January 1998): 39-50.

Eliasen, Karen, Jill McKinstry, Beth Mabel Fraser, and Elizabeth P. Babbitt. Navigating Online Menus: A Quantitative Experiment. College & Research Libraries (November 1997): 509-516.

Jacobson, Trudi E. and Janice G. Newkirk. The Effect of CD-ROM Instruction on Search Operator Use. College & Research Libraries (January 1996): 68-76.

Nahl, Diane and Leon Jakobovits. Teaching the Analysis of Titles: Dependent and Independent Variables in Research Articles. Research Strategies, 5(4) (Fall 1987): 164-171.

 

