2004 Midwinter Minutes

MINUTES

RSS Reference Services in Small and Medium-Sized Research Libraries Discussion Group

ALA Midwinter Meeting, San Diego

4:30-6:00 pm, Sunday, Jan. 11, 2004

 

 

Roster:

Linda Harris, Chair, University of Alabama at Birmingham

Jan Lewis, Member-at-Large, East Carolina University

Colleen Seale, Past Chair, University of Florida

 

 

I. Introductions:

The RUSA RSS Reference Services in Small and Medium-Sized Research Libraries Discussion Group met on Sunday, Jan. 11, 2004. Linda Harris, Chair, welcomed the group of twenty-one attendees and called the meeting to order at 4:35 pm.

 

II. New Business

Job Announcements:

Vacancies were announced at:

Gallaudet University, University of Southern California, University of Nevada at Las Vegas, University of Alabama, and Louisiana State University

 

Midwinter Meeting Format:

We were unable to secure an invited speaker for this meeting. Do we want to continue with an invited speaker at Midwinter? Comments: it's nice to have someone with expertise address a topic. The general consensus was to keep the format, if possible.

 

III. Discussion Topics

First Topic:

The 55% rule and accuracy of service: is walk-in, in-person service different from digital service? (The "55% rule" refers to the oft-cited finding from Hernon and McClure's unobtrusive studies that reference questions are answered accurately only about 55% of the time.)

Second Topic:

Library as place, reference as place, reference statistics, etc.

 

IV. Discussion

Since we started a chat service, we have had a way to archive the transaction; at the desk, the transaction disappears. Chat provides a knowledgebase. What kinds of questions are being asked, and how are we answering them? By analyzing the transactions, we can discern patterns. In terms of email, we are making a deliberate effort to improve our responses.

We're using the transactions to identify opportunities to improve service.

 

The QuestionPoint service allows you to add a particularly good answer to the knowledgebase, and you can cut and paste answers into your email response. We looked at 150 questions, and only two needed additional information.

 

There is potential for errors in the chat environment. Librarians are more anxious, less likely to put forth an answer, and often feel under the gun.

 

With chat questions, we often end up emailing answers later. Chat is often used for class assignments that shouldn't rely on it. It's legitimate to say that you can't answer a question.

 

There is a learning curve in using the chat environment. Sometimes it's the wrong medium for the question. We have no formal statistics, just anecdotal evidence.

 

There is an expectation that you should be able to answer the question. Chat provides a means of communicating with the patron, not necessarily the means for an answer.

 

Providing chat reference is like waiting for the phone to ring. There are lots of practical issues: sessions timing out, etc. How soon does the patron need the information? It's like triage in an emergency situation: some people need band-aids; some need heart transplants.

 

Are responses wrong or are responses just inadequate?

 

In the chat environment, clarifying questions can seem like an interrogation. In person, you can establish the context for a patron's stage in the research process by asking questions, but conveying that same friendly tone in typing is difficult. You're also limited by licensing agreements. We have lots of different contexts to deal with; sometimes users are not affiliated and we have to turn them down.

 

Telling someone that they can't have certain information is a reasonable answer, as long as you deliver it in a nice way.

 

There are studies on evaluating approachability; one session at the Virtual Reference Conference dealt with it.

 

For email, having a good template would be nice. It's good to know when the information is needed. Patrons also need to show a commitment.

 

We have a template with our AskaLibrarian service. We say up front that if you're not affiliated, we will only answer questions dealing with our unique resources. We require email responses to be copied so we can check accuracy and turnaround time. We don't coordinate questions received by selectors or individuals.

 

Our in-person questions have dropped and our email questions have dropped, but we may not be counting the individual email questions that are received.

 

We encourage everybody to report email questions.

 

We use an extensive "away from the reference desk" form. In a separate category, it tracks time spent with a patron beyond the initial contact at the desk and follow-up work, as well as emails, phone calls, and off-desk (caught-in-the-hallway) questions.

 

We count consultations with faculty.

 

We count liaison work.

 

There is a program to evaluate services across multiple formats. We're using QuestionPoint. Thirteen years ago, we used the Wisconsin-Ohio Reference Evaluation Program (now based at Kent State), which seems very dependent on an in-person format. Real-time chat is just another instrument, like the phone. Is anyone pulling this all together?

 

I think the long-term goal is to develop a parallel form for chat.

 

It requires a lot of analysis of the question by the librarian; librarian-side factors are used.

 

What questions does that tool answer?

 

The Wisconsin-Ohio Reference Evaluation Program is about quality and the quality of the answer as perceived by the patron.

 

It's been normed, but evaluating chat would require the same development and would need to collect information on the same transaction.

 

How many of you are evaluated through institutional research questionnaires? At Oklahoma, there is a standard questionnaire administered every year; five of its questions pertain to the library.

 

We had four questions on the returning-students form.

 

There was a library question on the form sent to graduating seniors. We might be able to negotiate the wording of the question so that we could get more valuable data.

 

We've done a survey, not on reference performance but on overall satisfaction; getting the numbers is hard to do. We're mandating a tutorial for some campuses, and we sent a survey asking about the tutorial and satisfaction. It showed improvement over the last two years.

 

The NCES (National Center for Education Statistics) runs a national survey that includes an information literacy question. An article in College & Research Libraries last year discussed how it is being used.

 

But as for evaluating reference service: does anyone evaluate in-person, face-to-face service?

 

We rarely have two people at the desk, and observation is key to evaluation.

 

It seems to be only when you're training a new person that you do some evaluation. At some point, it becomes invasive and insulting.

 

If you use the Wisconsin-Ohio Reference Evaluation Program, it does all the analysis for you. One of the areas where we were poorest was use of the online catalog: librarians thought our system was so simple that they didn't give adequate information about using it. The evaluation can be done during an intensive week of practical reference observation. This type of evaluation can help identify where training is needed.

 

It's hard and frustrating to start a new service. We've never evaluated the in-person service and never done a parallel evaluation; it would be nice if there were a tool to evaluate services across the board.

 

We tend to evaluate by complaints, and complaints are usually about the treatment, not the answer.

 

We did an observation study but the reference desk was just one component.

 

We've used focus groups.

 

The topic was not exhausted, but the group decided to go back to the list of topics proposed at the annual meeting and start a new topic.

 

 

Second Topic: Library as Place and Reference as Place, Reference Statistics, etc.

 

We reduced the reference collection by one-third through a re-shelving survey: if a book was not re-shelved twice in a year, someone had to justify keeping it in the collection. Weeding the collection has been a very good thing. As faculty members requested updated editions, we would withdraw the superseded copies or send them to storage.

 

We moved all of the bibliographies to the circulating collection and felt that this was a real service to users.

 

We have a three-year-old library building and are now re-evaluating the space. We have four times the study space available.

 

We are weeding reference with a goal of reducing the collection by half, by withdrawing titles or putting them in our storage facility. We have an automated storage and retrieval system connected to the catalog, and we can double the size of the storage facility.

 

We just had an informal measure of the use of the library. A bagel in a toaster oven set off the fire alarm and it was amazing to see how many people were in the building.

 

Our reference statistics are going down. Reference webpage hits should be counted.

 

When our statistics suggest that no one is using the library, we should be counting webpage use.

 

We should track webpage counts over time. There is no standard for counting them, but we should include them as patron contacts.

You can get the statistics from the server. These statistics should be included in the library's annual report.

 

There is no uniform way to measure hits on a webpage, but you can check to see which pages are most popular.

 

I'm a doubter on the use of webpage hits as a measure of library use. We did some IP analysis for one month to determine whether use was intentional or accidental. When statistics are going down, we try to capture all of the informal instructional activity that happens away from the desk.

 

We don't get the easy questions anymore; we get harder questions now, and we're trying to find a way to capture that information.

 

We also have received some of the most challenging questions.

 

We need to sell the library as a complete sensory experience; we're in a milieu...but not the library as a theme park. Has anyone done a study to see what people want?

 

There was an idea to mount large-screen televisions.

 

LibQUAL results at our university showed that students wanted quiet study areas and collaborative study areas. Faculty wanted dedicated carrels.

 

Is the library the best place for study halls? Minnesota has stand-alone study halls. Students want a place to study but also want to eat.

 

We allow food because we didn't want to chase around people who brought it in. We don't want pizza delivered, and we don't want food around the computers, but we do allow snack food, and for the most part people have done a good job of picking up after themselves. What about pests? We have big trashcans with lids and two janitors assigned to the area, and bugs haven't been a problem. We're happy with the decision we made.

 

We had a push to keep the library open 24 hours, but not many people used it. It brought homeless people in, and it was also expensive because the main library is a six-story building. The extra library fees assessed to pay for it were resented. All the students really wanted was a clean, well-lit area, open 24 hours, where food was allowed.

 

Should the library be the main study hall on campus?

 

Our students do a lot of group study work, a lot is course-related, and a lot is not. We have very few group study rooms. We have a designated quiet study area.

 

We've talked about a designated study area.

 

If the library is a study hall, it should be a 24-hr study hall where IDs are checked.

 

Our 24-hr study hall led to a feral cat being trapped....

 

We had a raccoon problem.

 

The meeting was adjourned by Linda Harris at 6:00 pm.

 

Minutes respectfully submitted,

 

Colleen Seale