Field-Testing FAQs

Menu

  1. How are the surveys to be administered?
  2. Does a library (or the individual librarian) that is doing field-testing need to be a member of ACRL?
  3. For field-testing, what form would the raw data responses take when they're delivered?
  4. Do you expect the volunteers to test these surveys with the users at their library?
  5. Can multiple librarians from the same institution volunteer? Or should one person represent the institution?
  6. How many surveys do you expect each library to complete?
  7. If I already filled out the form to volunteer for field-testing, when will I be notified I am part of the field-test?
  8. I changed my mind about which surveys I’d like to field-test after I volunteered. Is it all right to do different ones?
  9. Is there any expectation as to how many responses an institution gathers?
  10. If we use a survey for one instruction session, for example, do we need to submit feedback at that point?
  11. It looks like the surveys are mostly intended to be given after a specific event/workshop/instruction etc. Is it inappropriate to distribute some of these more openly (particularly those dealing with technology, spaces, and outreach)?
  12. I work at a graduate school; are we welcome to volunteer or is this only for schools with undergraduate programs?
  13. My institution subscribes to Qualtrics. Can I enter the survey questions there for field-testing and submit the compiled data to you once we have gathered input from survey participants? By using Qualtrics we can obtain a much greater sample within our institution.
  14. Is there a difference between the data we can see about our surveys during the field-tests vs. what we'll see when the survey is final?
  15. Can I change the questions on the surveys when I field-test them?
  16. As the field-testing goes forward, if it becomes evident that a change may help to clarify the surveys, will a change be made?
  17. Do I need to apply to my IRB to use these surveys?
  18. Will the final Project Outcome for Academic Libraries toolkit surveys be online?
  19. Will the software aggregate results for each library?
  20. Can we benchmark against our peers?
  21. What if I want to collect additional information beyond the 6 standard questions?
  22. Will each library need to pay for this or purchase software to run the online survey product?
  23. What about the Project Outcome for Academic Libraries graphic elements (charts, etc.)? Does access to this feature require the purchase of any software?
  24. If software is required, what will it cost?
  25. What level of support for administering the surveys will be provided by ACRL and/or the Task Force?
  26. How do the learning outcomes represented in the Project Outcome surveys relate to the ACRL Framework for Information Literacy?


How are the surveys to be administered? 
For field-testing, you can administer surveys on paper or electronically. When you volunteer, you will receive an information sheet with links to the PDF versions of the surveys and to the SurveyMonkey versions. If you choose to administer the surveys on paper, we ask that you manually enter the responses in SurveyMonkey. Having field-testers who administer paper surveys enter their own data lets us support as many participants as possible, rather than capping participation at the number of responses staff could enter by hand. If you administer the surveys electronically, you can send the link to users, who will fill it out online (and their data will be captured automatically).


Does a library (or the individual librarian) that is doing field-testing need to be a member of ACRL?
No. Anyone can field-test, and anyone can register to use the final toolkit.


For field-testing, what form would the raw data responses take when they're delivered?
An Excel spreadsheet is all we can provide from the field-testing phase, as the data will not be entered into the Project Outcome database. When the final version of the toolkit is created, you will be able to create reports and generate custom data visualizations with the same look and feel as those created by the current PLA Project Outcome toolkit.


Do you expect the volunteers to test these surveys with the users at their library? 
Yes, the idea is that you would administer the surveys to your library users and then share the responses and some feedback on how that process went. Both the data and feedback will inform revisions to the surveys as the final toolkit is developed.


Can multiple librarians from the same institution volunteer? Or should one person represent the institution?
Multiple people from the same institution are welcome to volunteer to field-test. The only complication that could arise during field-testing is if two people at the same institution administer the same survey at the same time but for two different programs, because we would not be able to distinguish the responses from those two programs. While this may be unlikely, it’s up to you how to handle it. In the final toolkit this will not be a problem; everyone at an institution will be able to register and use it.


How many surveys do you expect each library to complete?
You can test just one of the 7 surveys, or all of them; it’s up to you. There is no minimum or maximum number of survey responses required.


If I already filled out the form to volunteer for field-testing, when will I be notified I am part of the field-test?
If you are willing to volunteer, then you are part of the field-testing! Everyone is welcome. Once you fill out the form, you should receive an email with further information within a day or two. If you don’t see an email, please check your spam folder; if it’s not there either, email Sara Goek (sgoek@ala.org).


I changed my mind about which surveys I’d like to field-test after I volunteered. Is it all right to do different ones?
Yes, but please email Sara (sgoek@ala.org) with the change so that the records can be kept up to date.


Is there any expectation as to how many responses an institution gathers? 
No. You can administer one survey or all 7 of them, and there’s no minimum number of responses required either for field-testing or in the final toolkit. During field-testing, each time you administer a survey you will also be asked to fill out a feedback form that asks how many people attended the program (if applicable) and, of those, how many filled out the survey; this is just to inform our analysis.


If we use a survey for one instruction session, for example, do we need to submit feedback at that point?
Yes, we ask that you submit feedback each time you test a survey. If you are only testing one survey but use it to capture data on two different programs, please fill out the feedback form twice. This will help us determine what types of programs/services the surveys are being used for and whether they are suitable for all types of programs/services.


It looks like the surveys are mostly intended to be given after a specific event/workshop/instruction etc. Is it inappropriate to distribute some of these more openly (particularly those dealing with technology, spaces, and outreach)?
It would be appropriate to distribute some of these more openly, depending on your need. For example, if you are looking at the use of group study space, you might distribute the space survey when someone comes to check out a study room, or you could send out the survey to library users more widely and ask anyone who used the group study space to fill it out.


I work at a graduate school; are we welcome to volunteer or is this only for schools with undergraduate programs?
This is not only for undergraduate programs. The Task Force did intend one survey to apply primarily to undergraduates (the “undergraduate instruction” survey), but the others could certainly apply to other types of library users, including graduate students and faculty. And if you think the “undergraduate instruction” survey would be useful for graduate students, please go ahead and use it and say so in your feedback; the Task Force could decide to change the name.


My institution subscribes to Qualtrics. Can I enter the survey questions there for field-testing and submit the compiled data to you once we have gathered input from survey participants? By using Qualtrics we can obtain a much greater sample within our institution.
That should be fine as long as you can provide a spreadsheet with data in the same format as we’d get otherwise. The results should use the same fields and response values as the SurveyMonkey versions of the surveys, plus a field for date and time so that we can match the response data to the feedback submitted by program administrators. As long as it’s all standardized, we should be able to combine it easily with the other data we gather.
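For anyone exporting from Qualtrics, a minimal sketch of what that standardization step might look like in Python appears below. Every name in it (the file names, the COLUMN_MAP entries, the question labels) is a hypothetical placeholder rather than the actual field-test schema; check the SurveyMonkey versions of the surveys for the real field names and response values.

```python
# A minimal sketch (not an official ACRL script): renaming the columns of a
# Qualtrics CSV export to match SurveyMonkey-style field names, plus a
# date/time field. All headers below are hypothetical placeholders.
import csv

# Hypothetical mapping from Qualtrics export headers to standard field names.
COLUMN_MAP = {
    "StartDate": "Date/Time",
    "Q1": "Standard question 1",
    "Q2": "Standard question 2",
    # ...one entry per standard question...
}

# Note: Qualtrics CSV exports may include extra header rows above the data;
# remove those first, or skip them while reading.
with open("qualtrics_export.csv", newline="", encoding="utf-8") as src, \
     open("standardized_responses.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        # Keep only the mapped columns, renamed to the standard headers.
        writer.writerow({new: row.get(old, "") for old, new in COLUMN_MAP.items()})
```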


Is there a difference between the data we can see about our surveys during the field-tests vs. what we'll see when the survey is final?
The survey questions may change as a result of field-testing; identifying needed changes is the point of the process. The final toolkit will also give you access to a range of resources and tools to help you capture and analyze the data.


Can I change the questions on the surveys when I field-test them?
No. Please use the standard questions, and use the feedback form to suggest any changes you would like to see made.


As the field-testing goes forward, if it becomes evident that a change may help to clarify the surveys, will a change be made?
It is unlikely that we will change the standard questions during the field-testing process. However, the purpose of the process is to see whether changes should be made before the toolkit is finalized. That is where your feedback is really valuable, so please suggest any changes that you think would help.


Do I need to apply to my IRB to use these surveys?
The surveys do not collect any personally identifiable or sensitive information and are therefore unlikely to be subject to IRB review. If you are unsure, it would be best to check with someone at your institution’s IRB office.


Will the final Project Outcome for Academic Libraries toolkit surveys be online?
Yes, the final toolkit will have the same look and feel as the current PLA Project Outcome toolkit.


Will the software aggregate results for each library?
Yes. The final toolkit will use the same platform as the current PLA Project Outcome. It allows you to create reports and data visualizations for your library.


Can we benchmark against our peers?
Yes, in the final toolkit you will be able to benchmark the quantitative results against your peers (by basic Carnegie Classification) and against all other academic libraries nationwide.


What if I want to collect additional information beyond the 6 standard questions?
The final version of the toolkit will allow you to add up to 3 additional custom questions. At this time, we are only collecting field-testing data on the 6 standard questions. However, if you choose to administer the surveys on paper, you could potentially add your own questions on the back of the sheet, and only report the data from the 6 standard questions to ACRL. When you fill out the feedback form, please let us know what questions you added.


Will each library need to pay for this or purchase software to run the online survey product?
No, it’s free to register and use online.


What about the Project Outcome for Academic Libraries graphic elements (charts, etc.)? Does access to this feature require the purchase of any software?
No. You will get full access to the data dashboards when you register.


If software is required, what will it cost?
No software. $0!


What level of support for administering the surveys will be provided by ACRL and/or the Task Force? 
For field-testing, we will be providing an information sheet as well as access to the surveys. The final toolkit will contain much more in the way of resources for users (again, following the model of PLA’s current system).


How do the learning outcomes represented in the Project Outcome surveys relate to the ACRL Framework for Information Literacy?
Overall, Project Outcome is not intended as a tool specifically to assess information literacy. It is intended to assess learning outcomes across a wide variety of library programs and services (see the examples in the summary document) and a wide variety of users, including undergrads, grad students, and faculty. Therefore, while the Task Force discussed how the Framework might relate, that is not the specific purpose of this toolkit. For example: you might give a workshop on an aspect of information literacy and decide to administer the “undergraduate instruction” survey. Since the final version of the Project Outcome toolkit will allow you to add up to 3 custom questions in addition to the 6 standard survey questions, your custom questions could relate more specifically to the content of your workshop. The 6 standard questions remain standard to allow you to benchmark against peer institutions.


If you have further questions, please contact ACRL Program Manager and Mellon/ACLS Public Fellow Sara Goek at sgoek@ala.org. This page will be updated on an ongoing basis.

To volunteer to field-test the new Project Outcome for Academic Libraries surveys, please complete this form.