ALA Annual Conference 2011

Volunteer Reporters Cover ALCTS Forums and Events in New Orleans

Volunteers who attended events at the 2011 ALA Annual Conference provided the following summary reports. We thank the volunteers who covered a program or event sponsored by ALCTS or one of its units. Their efforts enable the rest of us to benefit from their presentations. We regret that volunteers were not available to report on all the events.   

Preconferences | Programs of Interest | ALCTS Forums & Programs | ALCTS President's Program

ALCTS Preconferences

Patron-Driven Acquisitions in Academic Libraries

Virginia Kay Williams, Wichita State University

About seventy people gathered for a full-day preconference on patron-driven acquisitions (PDA), also known as demand-driven acquisitions (DDA). Rick Lugg opened the day with a few observations on the changing value of local print collections. Electronic resources are the new mainstream; print circulation per student declined 50 percent from 1997 to 2007. The estimated cost of maintaining a print volume in open library stacks is $4.26 per year, and those open stacks occupy space that could be used for collaborative study areas, writing centers, and other needs. Selection competes with liaison work and other projects for librarian time, yet as Suzanne Ward of Purdue University noted, use studies published from 1969 to 2010 find that 40 to 50 percent of academic library books never circulate. Ward also reviewed interlibrary loan (ILL) purchase programs, which began in the late 1970s. The few published studies of these ILL PDA programs indicate that patrons are good at selecting books: when evaluating the programs, librarians agreed that nearly all purchased books were appropriate for the collection and found that most continued to circulate after the initial requestor returned them. Ward pointed out that declining book budgets, the growing availability of e-books, and broad acceptance of electronic resources are all encouraging libraries to move from the traditional just-in-case collection model toward a more patron-driven just-in-time model.

Doug Way of Grand Valley State University (GVSU) and Robin Champieux of Ebook Library (EBL) discussed an e-book PDA program that began in 2009 with the goals of expanding the universe of books available to students, improving use of the materials budget given that 40 percent of the collection had never circulated, and freeing librarians to spend more time out of the library. Way considers EBL’s short-term loan program crucial to the success of GVSU’s PDA program. During the first year, GVSU loaded about fifty thousand title records into the catalog. Patrons could browse a title for five minutes at no cost to the library; looking longer, printing, or downloading from a title would trigger a short-term loan (STL), and the third STL of a title would trigger a purchase. After one year, GVSU patrons had browsed slightly more than ten thousand books, triggered more than 6,200 STLs, and triggered 343 purchases. GVSU budgeted $150,000 for the first year of the EBL program; they spent $69,000. GVSU now spends one-third of its book budget on PDA and recently moved the program to YBP Library Services so it can offer both EBL and ebrary titles.
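
The trigger rules described above are simple enough to sketch in code. The following Python fragment is a minimal, hypothetical illustration of that decision logic; the five-minute browse threshold, the short-term loan trigger, and the purchase on a title’s third STL come from the GVSU description, while the function and field names are invented:

    # Minimal, hypothetical sketch of the EBL-style trigger logic described above.
    # Thresholds follow the text: free browsing up to five minutes, a short-term
    # loan (STL) for heavier use, and a purchase on a title's third STL.
    from dataclasses import dataclass

    FREE_BROWSE_MINUTES = 5
    STLS_BEFORE_PURCHASE = 3

    @dataclass
    class Usage:
        minutes_viewed: float
        printed_or_downloaded: bool

    def decide_action(usage: Usage, prior_stls: int) -> str:
        """Return 'free', 'short_term_loan', or 'purchase' for one use of a title."""
        if usage.minutes_viewed <= FREE_BROWSE_MINUTES and not usage.printed_or_downloaded:
            return "free"            # a quick browse costs the library nothing
        if prior_stls + 1 >= STLS_BEFORE_PURCHASE:
            return "purchase"        # the third STL triggers a purchase
        return "short_term_loan"

    # Example: a patron downloads from a title that already has two STLs.
    print(decide_action(Usage(minutes_viewed=12, printed_or_downloaded=True), prior_stls=2))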

Unlike GVSU, Cornell wanted to build virtual collections for specific subjects following the closure of a library, so it based the PDA profile on the approval profile for those subjects, shifted funds from print approvals to PDA, and automatically purchased books on the second click instead of using STLs. Clare Appavoo of Coutts and Boaz Nadav-Manes and Jesse Koennecke of Cornell discussed the programming and scripts that Cornell and Coutts use to run an e-book-preferred PDA program using MyiLibrary with minimal staff intervention. Because two-thirds of scholarly books are not available as e-books within sixty days of print publication, Cornell and Coutts also developed a print PDA program with automatic rush processing, so print books are available to patrons within a week of being ordered through Cornell’s catalog. Cornell, which targeted the Q-QE and T-TS classifications for its PDA program, has loaded fewer than two thousand titles into its catalog and purchased about 10 percent of the titles loaded.

Annette Day of North Carolina State and Matt Barnes of ebrary discussed the ebrary DDA pilot conducted in 2009-2010 and progress toward integrating the ebrary PDA program with YBP services. Both expressed some frustration with the pilot: the title list was static, triggering events varied by publisher, and a substantial amount of manual work was required. Despite these frustrations, both ebrary and NCSU have continued beyond the pilot; triggering events have been standardized and workflows simplified. NCSU’s ebrary program has been integrated with YBP, allowing for combined profiles, duplication control, and integrated technical services, but with the loss of some flexibility. NCSU is committed to growing the PDA program as part of its collecting toolkit; Day anticipates using PDA primarily for low-use areas.

Michael Levine-Clark of University of Denver and Barbara A. Kawecki of YBP discussed DDA as a new way of thinking about library collections and vendor collection services. Levine-Clark noted that Denver acquires about a third of the scholarly books published in North America annually and that less than 20 percent of the books cataloged at Denver from 2000 to 2004 have been used at least four times. His goal is to provide a broader collection, not necessarily to save money, by using short-term loans, print-on-demand, and demand-driven acquisitions to pay for content at the point of need and to pay only for the amount of usage needed. Levine-Clark says librarians need to focus on developing a consideration pool, not on title-by-title selection; his goal is to develop profiles that fill the pool with all titles the library is willing to buy if a patron requests them, while keeping the pool at an optimal size relative to the library’s budget. For Denver, the consideration pool is about 100,000 titles. Kawecki commented that for vendors, DDA is like trying to tread water and perform water ballet simultaneously: vendors are developing print and e-book DDA programs, integrating them with traditional firm and approval order services, providing workflow support for libraries, and providing duplication control.

Although PDA programs vary greatly, with consideration pools ranging from fewer than two thousand to one hundred thousand titles, all the speakers agreed on some questions librarians must answer when considering a PDA program. What are the library’s goals? What must happen for the library to consider the program successful? What services does the library need from a vendor? How and when will the library remove titles from the consideration pool? Both librarians and vendors emphasized the need to discuss PDA widely within the library and to ask vendors for data about how the mix of short-term loans, different purchase triggers, and student body size affect expenditures; the vendors shared questions and sample data that libraries can consider in establishing a PDA plan. Integrating PDA into the library’s collection development program requires substantial planning, but it also offers libraries the opportunity to increase access to content while reducing the number of unused titles in our collections.

What Is IT, Anyway? Library of Congress Genre/Form Terms for Library and Archival Materials

Liz Perlman Bodian, Chicago Public Library

The preconference focused on the development and application of the Library of Congress Genre/Form Terms (LCGFT) thesaurus. LC is working with several interest groups in the cataloging community to develop this thesaurus and is releasing groups of terms as they are approved.

Genre and form terms describe what a work is, rather than what it is about. There were many examples given to help participants understand the difference between genre/form terms and traditional subject headings. In many cases, the two have previously been conflated, and the projects are working to separate them.

Janis Young, Library of Congress, talked about previous genre/form thesauri, many of which are used at the Library of Congress, and discussed the process of unifying them into the new LCGFT. She emphasized that Library of Congress Subject Headings (LCSH) is not a true thesaurus, because it does not always have a full syndetic structure. LCGFT is being built with strict hierarchies and syntax. She explained the assumptions that LC made when they started the project and told us how, in many cases, those assumptions were not completely borne out. LC is constructing the thesaurus in pieces, and each piece gives them more information to use for the next. The end of the morning was spent doing hands-on exercises, applying subject and genre/form terms to specific resources, then discussing the exercises. This helped everyone get a better grasp on how to apply the terms. Young had her own answers to each exercise, and everyone appeared intrigued that the participants did not all come up with the same answers.

Yael Mandelstam, Fordham University Law Library, began the afternoon by discussing a project for law genre/form terms, the first project to develop a thesaurus from scratch. Mandelstam described the time-consuming process of evaluating existing terms and deciding which additional terms should be included, and the principles that guided those decisions. The main principles were that terms should:

  • be specific but not too narrow
  • reflect current usage
  • work across legal systems

She gave examples of how these principles play out when addressing specific issues in the thesaurus. There was one hands-on exercise with a law publication, which helped participants understand the complexity of the system.

Beth Iseminger, Loeb Music Library, Harvard University, talked about the music genre/form project, which is still in its early phases. The project is still determining how to separate genre, form, and medium of performance from existing LCSH headings. Once completed, it is anticipated that the project will completely change how music is cataloged. The main question still at issue is where in the MARC record the medium of performance will be located. A number of options have been discussed, but no conclusion has been reached; using a revised version of the 382 field is the most likely option. The plan is to hold training sessions at the ALA Annual Conference and the Music Library Association Annual Meeting. No hands-on exercises were done in this portion of the session because the terms have not yet been developed and approved.

There was a short wrap-up period with questions, primarily addressed to Young. For more information on genre/form terms, visit www.loc.gov/catdir/cpso/genreformgeneral.html.   

Programs of Interest

OCLC's Enhance Sharing Session

Shana L. McDanold, University of Pennsylvania

OCLC’s Enhance Sharing Session began on Friday morning, June 24 at 10:30 am. Jay Weitz opened the session with a few housekeeping details and introductions. He reviewed key points from the "News from OCLC" handout. The key highlight is the new version of the Connexion Client, version 2.30, which was released in April. Users will be required to upgrade from any earlier version by Nov. 1, 2011. As in the past, users will see reminder notices when they log on to the Client, starting a few weeks before the deadline. Connexion Client 2.30 supports 32-bit and 64-bit systems and works with Windows XP, Vista, and 7. It does not work with Windows 2000. It requires .NET Framework 4 Extended.

Enhancements to the Client in this version include integrated links to the RDA Toolkit, the display of the 029 field moved to the end of the record, and a separate box for the language of cataloging (ll:) limit in the search dialog box. Additionally, the language of cataloging can be displayed in local save file search results; the number of batch searches that can be performed at one time has increased from 100 to 150; and users can now export filled-in workforms without having to add the completed record to WorldCat. There is also new export and import support for MARC XML. The Client upgrade also includes many changes from the OCLC MARC Update 2011, including improvements to macros, authority control changes, and improvements to Connexion digital import for attaching digital content. The enhancements are detailed in Technical Bulletin 259.

Jay then outlined the OCLC MARC Update 2011 plans. The current schedule is to implement the update in August 2011. After the release of the update, a pop-up dialog box will ask users to download the new files; this needs to be done only once. The main item of note in the update is a new fixed field: the computer file fixed field FORM (008 position 23 and 006 position 06) is added to allow differentiation between online and tangible electronic resources.

The next big news item is the planned expansion of bibliographic updating permissions for NACO-authorized institutions. This change is being made in cooperation with the Library of Congress’ Program for Cooperative Cataloging (PCC) to allow NACO participants to update PCC records. The announcement was followed by several questions. Things still to be determined include which authorization NACO Enhance participants should use, and whether permissions will be institutional or dependent on individual authorization rights. Jay reminded the audience that there are certain instances where you can update a PCC record now, such as controlling headings, minimal-level upgrade capabilities, and actions that fall under database enrichment activities.

The discussion then moved to the announcement regarding the implementation of RDA, scheduled to occur no earlier than January 2013. OCLC will release a statement declaring its intentions and is beginning the process of determining how best to proceed with the full integration of RDA into WorldCat. The policy statement in effect since the publication of RDA in June 2010 remains in effect until further notice; it has been updated slightly to reflect the end of the test period and the implementation decision. OCLC requests that libraries continue to abide by the current RDA policy statement, which includes not creating duplicate records or editing master records to change from RDA to AACR2 or vice versa, unless permitted under the policies as currently set forth.

Additionally, OCLC will issue a discussion paper regarding working with records containing mixed practices, and work to determine best practices for cooperative cataloging. OCLC staff will be participating on the three PCC RDA-focused task forces, which will help to inform the decisions and discussions at OCLC. The discussion paper will be announced on OCLC-Cat when it is available. The timeline anticipates completing the paper by the end of summer 2011.

Jay then addressed questions that had been submitted prior to the session. Paige Andrew asked about the retrospective conversion of old headings from Library of Congress Subject Headings (LCSH) form/genre headings to Library of Congress Genre/Form Terms (LCGFT) headings: what is the plan to convert/flip headings in bibliographic records? Paige was specifically interested in the plan for converting maps headings. Jay asked Robert Bremer about the conversion. Robert replied that a quality control macro will change various obsolete forms of $v when appropriate and construct a 655 genre heading with $2 lcgft if one is not already present. They have not made changes for non-cartographic materials at this point. He asked that users email suggestions for conversions to askqc@oclc.org. It was also pointed out that the flip is not that complicated and that the University of Minnesota has successfully flipped its headings using the available conversion tables and MarcEdit software.
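
Conversions like the one Bremer describes are, at bottom, field-level transformations on MARC records. The sketch below is a rough, hypothetical illustration of a local flip, not OCLC’s macro: it derives a 655 genre heading with $2 lcgft from a form subdivision ($v) on a 650 field, using plain Python tuples in place of a real MARC library and a single invented mapping entry.

    # Rough, hypothetical sketch of a local genre/form "flip": derive a 655 genre
    # heading with $2 lcgft from a form subdivision ($v). Not OCLC's macro;
    # records are plain (tag, indicators, subfields) tuples for illustration.
    FORM_TO_GENRE = {"Maps": "Maps"}   # invented single-entry mapping table

    def flip_record(fields):
        """Return the field list with derived 655 genre headings appended."""
        existing = {dict(subs).get("a") for tag, _, subs in fields if tag == "655"}
        out = list(fields)
        for tag, ind, subs in fields:
            if tag != "650":
                continue
            for code, value in subs:
                term = FORM_TO_GENRE.get(value.rstrip("."))
                if code == "v" and term and term not in existing:
                    out.append(("655", " 7", [("a", term + "."), ("2", "lcgft")]))
                    existing.add(term)
        return out

    record = [("650", " 0", [("a", "New Orleans (La.)"), ("v", "Maps.")])]
    for field in flip_record(record):
        print(field)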

Robert then asked his own question of the group attending the session: are LC genre terms and LCGFT headings the same or different? Should LC genre terms be converted to LCGFT, or can both remain? During discussion the attendees decided that they are both the same and different. Additionally, since not all genre lists are complete, heading flips will have to occur over a long period of time. Most likely OCLC will flip only those headings that have a genre authority record. The main downside to leaving the records as they are and not flipping the headings is that every institution has to flip locally. Of course, what each institution does locally is up to it; this decision is really about master records.

The discussion continued with several questions from the attendees. Master records should be linked (via the controlling headings function) to the appropriate authority records for maintenance purposes, but what about form/genre headings from other thesauri? Are genre/form headings distributed via OCLC’s bibnotification service? Neither Robert nor Jay was certain, but both believe such changes would be treated the same as other subject authority changes in bibliographic records. Concern was expressed about the impact on vendors that use OCLC records and whether they will provide updates to their subscribers; previous changes, such as updates to ISSNs and ISBNs, have caused problems for some vendors. Robert believes the volume of genre heading changes would be lower than for ISBNs/ISSNs and would be more of a trickle, thus preventing issues for vendors.

Jay then opened the floor to additional questions. Questions about credits and encoding level revisions were answered with unknowns. Additionally, when reviewing encoding level standards, there is a question about the usefulness of revising them given the timeline to move beyond MARC. The final question was about the Expert Community experiment: there are no new statistics on types of upgrades, but the project is no longer an experiment and is still going strong. Links to the Enhance program and Expert Community can be found at www.oclc.org/us/en/worldcat/catalog/quality/default.htm. A document detailing the various authorization levels is available at www.oclc.org/support/documentation/connexion/client/catalogingauthorizationlevels.pdf.

Linked In: Library Data and the Semantic Web

Sponsored by LITA

Shana L. McDanold, University of Pennsylvania

The “Linked In: Library Data and the Semantic Web” session consisted of presentations from Ross Singer of Talis Information, Ltd. and Eric Hellman of Gluejar, Inc.

Singer’s presentation, “It’s Not Rocket-Surgery: A Brief Introduction to Linked Library Data,” began with an explanation that legacy data does not work well as linked data, followed by a brief introduction to linked data in general. Tim Berners-Lee created the four rules of linked data, which are considered the foundation of good linked-data practice. Singer also showed several linked data clouds, demonstrating how interconnected linked data sets are. At the center of the cloud was DBpedia, and along the outer edge was the limited amount of library data that has been released as linked data; much of this activity has occurred in Europe, and much of it is not specifically library data. Ideally, the MARC record for a book would link to authority records, which would then link out to other sources of information in the data cloud, including maps, places, and subjects.

Singer then broke down the Resource Description Framework (RDF) and the parts of linked data. RDF is not a format but a data model for sharing data effectively. It consists of triples, which can be described as entity-attribute-object relationships. As the name suggests, a triple has three parts: subject (the referent, a URI), predicate (a property, a URI), and object (the value, a URI or a literal). Using URIs in triples makes things unambiguous: if two things have the same URI, they are the same thing. Linking triples together creates a graph, and to expand the graph users simply follow the triples (or “follow their nose”). The relationships expressed by triples and graphs allow users to learn about a resource from its connections to other resources. When constructing triples, it is useful to reuse existing schemas and vocabularies, mixing and matching them as needed.
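
Singer’s subject-predicate-object pattern is easy to see in code. Below is a minimal sketch using Python’s rdflib library; the book, its URIs, and the title are invented for illustration, and the Dublin Core terms vocabulary stands in for whichever vocabulary a library might actually choose.

    # Minimal rdflib sketch of the subject-predicate-object pattern described above.
    # The example URIs and title are invented for illustration.
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import DCTERMS

    g = Graph()
    book = URIRef("http://example.org/book/moby-dick")      # subject (a URI)
    author = URIRef("http://example.org/person/melville")   # an object that is itself a URI

    g.add((book, DCTERMS.title, Literal("Moby Dick")))      # object as a literal: a dead end
    g.add((book, DCTERMS.creator, author))                  # object as a URI: followable

    print(g.serialize(format="turtle"))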

RDF operates on an open world assumption: we cannot assume that any source has all the facts about a resource, nor can we assume that all the facts are true. There is no finished “document” because the data is always a work in progress, continually augmented and edited. Why bother with it, then? Simply because data is valuable. Singer also presented a series of disclaimers about creating linked data, pointing out that there are no standards currently in use and that linked data is not an attempt to catalog something in the traditional sense; the point is to get the data into the cloud so it can be used. He then presented an example of how to create a linked data model for a single book, showing how literals create dead ends in linked data while URIs pull in further data, connecting the graph to still more data. For bibliographic data, Singer used the Bibliographic Ontology (BIBO), which is agnostic toward FRBR. He advocated using vocabularies such as Dublin Core and RDA for library data, along with Simple Knowledge Organization System (SKOS) vocabularies. He also showed how the Web Ontology Language (OWL) is useful for disambiguation: an owl:sameAs statement asserts that the subject and object of two triples refer to exactly the same thing, immediately connecting all the properties and values of one resource to the other.
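
The owl:sameAs mechanism Singer described can likewise be shown in a few triples. In this hypothetical sketch, asserting that a local identifier and a DBpedia identifier denote the same person lets any consumer merge what is known about either URI:

    # Hypothetical sketch of owl:sameAs linking a local URI to an external one.
    # A reasoner (not shown) would treat statements about either URI as statements
    # about the same resource.
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import FOAF, OWL

    g = Graph()
    local = URIRef("http://example.org/person/melville")
    dbpedia = URIRef("http://dbpedia.org/resource/Herman_Melville")

    g.add((local, FOAF.name, Literal("Melville, Herman, 1819-1891")))
    g.add((local, OWL.sameAs, dbpedia))   # both URIs refer to exactly the same person

    print(g.serialize(format="turtle"))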

Singer wrapped up the presentation with questions from attendees. He emphasized that we are going to have to think of library data in the abstract, and that we will have to commit to a URI policy, including plans for sustainability and preservation of the data. This was followed by a question about the leadership role for libraries. Singer pointed out that libraries are known for provenance and trust, and we need to capitalize on that, but he also countered that libraries need to loosen some of their control and let the data be freer for use and manipulation.

Hellman then gave his presentation, “Library Data: Why Bother?” At the core of the presentation were the two questions: What should we be doing with our data? What is the purpose of our library data? He also gave a brief history of his background in engineering and physics, pointing out that he’s not a librarian.

Hellman started by talking about MARC: What is this “MARC thing” and why? If it’s to aid discovery, why is it not relational? If it’s to manage “inventory” why doesn’t it? If it’s “machine readable cataloging”, why is it so hard for machines to read? He noted that MARC was created a long time ago, and it will not die. He pointed out that metadata was developed as a surrogate for paper, which is difficult to search. Today, more resources are digital and easy to search, so surrogates aren’t necessarily needed for searching. Metadata is now more important to the supply chain, and thus, resources come with metadata. What’s more, why should libraries bother with creating data anymore if resources come supplied with their own metadata?

Hellman answered his question by stating that libraries are there to help manage the abundance of materials, emphasizing the library roles in selection, space, people, community, and the educational value of instruction that libraries can share. Linked data also has a role in serving communities, since communities consist of linked people. Library collections and cataloging also create links between people and places, creating communities in the process.

Hellman declared the number one purpose of library data in the digital information age is search engine optimization. Libraries can provide the microdata needed by search engines to function properly. He believes that library data should be mapped to the schema provided by search engines to give them what they want for search engine optimization. Our mission should be to provide data to search engines to help support the creation of social graphs and to serve as aggregators for information and data. Hellman concluded by sharing the Gluejar “Unglue” button and giving a brief description of the EPUB 3 specification and its use. He ended his presentation by emphasizing again his belief that libraries have a role in creating communities through data.   

ALCTS Forums and Programs

ALCTS 101: What Is ALCTS, and How Can I Be Involved?

Amy Jackson, University of New Mexico

ALCTS 101 is a program aimed at newer and potential members of ALCTS to help them learn more about ALCTS, its structure, and how to get involved in the organization. The program was held on Friday, June 24 following the opening of the exhibits, in order to reach attendees before the busy weekend schedule began. It was organized by the ALCTS New Members Interest Group and ALCTS Membership Committee.

The program started with welcoming remarks from ALCTS President Cynthia Whitacre and ALCTS President-Elect Betsy Simpson. They encouraged new members to meet other people and become involved in the organization in order to get the most benefit from their membership.

The main section of the evening was a speed-networking event. Representatives from various areas of ALCTS sat at tables, and the new members spent five minutes at each table talking to the representatives. Most tables had two leaders and five or fewer attendees, so conversations were personal and interactive. Each table had colorful signage, and some table leaders provided free giveaways to attendees at their table. The ALCTS Acquisitions Section was represented by Lisa Spagnolo and Hester Campbell; Cataloging and Classification by Debbie Ryszka and Tamera Hanken; Continuing Resources by Meg Mering and Charles McElroy; Collection Management and Development by Brian Falato and Ginger Williams; Preservation and Reformatting by Gina Minks and Tara Kennedy; ALCTS Division-level Committees by Dina Giambi and Carolynne Myall; Networking by Erica Findley and Megan Dazey; ALCTS Publications/How to Get Published by Rebecca Lubas and Sion Romaine; How to Get Involved by Keri Cascio and Dracine Hodges; Resume Help by Elizabeth Lorbeer, Beth Picknally Camden, and Eleanor Cook. In addition to the speed-networking, attendees completed a scavenger-hunt bingo card with information they could collect from the table leaders. The first four winners received a free ALCTS webinar, and a fifth webinar winner was drawn from the completed cards. Additional prizes included free ALCTS membership renewals, and student attendees received Starbucks gift cards. The ALCTS Office provided food and drinks for the program.

The program concluded with a brief ALCTS New Members Interest Group business meeting. Erica Findley and Sarah Ricker rotated out of officer positions and Yoko Kudo, Erin Boyd, and Jessica Mlotkowski joined as new officers.

According to ALCTS new member Claudia Banks, winner of an ALCTS webinar, “The session was informative and helpful. The speed networking was a creative, fun way to meet other members and learn more about the Association.” Other attendees also commented on the energy in the room, and conversations were lively and interactive. ALCTS 101 is held at every ALA Annual Conference and the organizers hope to see you next year!

CCS Executive Committee Forum: Turning Catalogers into Semantic Web Engineers, or, Out of the Catalog Drawer and onto the Internet Highway

Brian Falato, University of South Florida

This year’s Cataloging and Classification (now Cataloging and Metadata Management) Section Forum, held Friday, June 24 at 3:30 pm, featured three speakers who discussed the transformation of catalogers into semantic web engineers.

Karen Coyle spoke about “Things & Strings and Other Stuff, Too.” Strings contain information readable by humans. But human language can be ambiguous, and machines can’t handle ambiguity. So they need “things,” represented by identifiers. Identifiers are language-neutral. Strings are endpoints, while things can connect to other things. The things in our metadata (people, corporate bodies, families, places, events, topics, resources) can link to other metadata, link to services such as maps and locating services, and add services for users.

Coyle said we’ve created data that can be accessed by non-library people only after jumping through numerous hoops. These obstacles are discouraging collaboration and sharing of data. A new environment modeled on FRBR is a moment of opportunity. We should grab it.

Gordon Dunsire, Centre for Digital Library Research, University of Strathclyde, was the second speaker, presenting “Bibliographic Data in the Semantic Web: What Issues Do We Face in Getting it There?” Dunsire first spoke about the Resource Description Framework (RDF), which is designed for machine processing of data at a global scale, with trillions of operations per second. RDF uses triples: simple, irreducible statements constructed in three parts, the subject of the statement, the nature of the statement, and the value of the statement. It employs machine-readable identifiers called URIs (uniform resource identifiers), unique combinations of numbers and letters with no intrinsic meaning. RDF requires the subject and predicate of a triple to be URIs; the object can be a URI or an information string.

Dunsire discussed the obstacles in getting bibliographic data into the semantic web. MARC 21 is not available in RDF. There are legal issues of ownership of records, and whether triples can be copyrighted.

A preservation and archive regime is needed because URIs should last forever. The cost of re-engineering systems, redesigning interfaces, and retraining catalogers is sizable, but Dunsire believes the long-term benefits will justify the investment. It will be more costly not to do anything.

Ed Jones, National University in San Diego, concluded the forum by discussing how RDA plays with the semantic web. The punning title of his presentation, “Linked Data: The Play’s the Thing,” was a clue to the humor in his talk. Characters from The Simpsons were used to show how RDA would work with the semantic web: Martin Prince, the nerdy classmate, represented RDA playing nice; Ralph Wiggum, the not-so-bright classmate, represented RDA “playing sort of nice” (some RDA elements are too granular, some not granular enough, but most are just right); and bully Nelson Muntz represented the cases where RDA would not play nice at all, primarily because of MARC and legacy data. MARC is like the Acela Express train, Jones said: it is designed to reach 150 mph but averages only about 70 mph because of track limitations in the U.S. Similarly, the MARC infrastructure is holding RDA back from reaching its potential.

The slides for each presentation are available on ALA Connect at http://connect.ala.org/node/136967

On Beyond Zebra: Taking RDA beyond MARC

Jessica Hayden, University of Northern Colorado

“On Beyond Zebra” was held Saturday, June 25 at 10:30 am. The purpose of this panel was to showcase examples of current projects related to RDA in non-MARC environments.

Centre for Digital Library Research Depute Director Gordon Dunsire, University of Strathclyde, kicked off the session with a presentation entitled “Transitions, Transformations, and Shifting Sands: The Landscape Beyond MARC; The Ground Beneath the Record.” He stressed that there is now a shift in focus from the record to the individual metadata statement. He theorized that a bibliographic record will soon become an aggregation of statements linking metadata triples to display different things to different users. He also noted that the changing metadata landscape, which incorporates links to data created outside the library community, will make it a necessity to track provenance of metadata statements more closely since there is no built-in “test of truth.” His presentation is available at www.gordondunsire.com/pubs/pres/BeyondZebraGD.ppt

Glenn Patton, director of the WorldCat Quality Management Division at OCLC, filled in for the originally scheduled Jean Godby to present “Mapping to MARC and Beyond.” This presentation focused on the difficulty of creating metadata crosswalks that map accurately to MARC. Using the ONIX (ONline Information eXchange) metadata schema as an example, Patton discussed MARC characteristics that make crosswalk development imperfect: only a few of the MARC 1XX subfields map to ONIX, a MARC record contains many redundant fields, many concepts are not equivalent between MARC and ONIX, some MARC fields are ambiguous, free-text fields have formatting differences, and a large number of MARC mappings are never used. He also noted that RDA cannot be rigorously coded into MARC. He finished by stating that MARC introduces unnecessary complexity to metadata use and does not fully take advantage of new developments.

During the presentation “The eXtensible Catalog: RDA in a FRBR-based environment,” Jennifer Bowen talked about the work going on in the eXtensible Catalog Project. Bowen, co-executive director of the eXtensible Catalog Project, University of Rochester Libraries, is directing the project to develop open source software that will convert existing MARC records to a FRBR-based structure. Bowen noted that by 2013, the proposed RDA adoption date, libraries will still be in a MARC-based environment, and software such as the eXtensible Catalog will allow libraries to take advantage of RDA improvements while still using their traditional records. It accomplishes this by parsing MARC XML into FRBR-based records, offering libraries a “risk-free” way to experiment with RDA. More information about the eXtensible Catalog Project is available from its web site, www.extensiblecatalog.org/
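
The core move Bowen describes, reading MARC XML and regrouping the data around FRBR entities, can be sketched very roughly in Python. The fragment below only clusters records into approximate “works” by a normalized author/title key; it is an illustration of the idea, not the eXtensible Catalog software, and the input file name is hypothetical.

    # Very rough sketch of the FRBR-izing idea: parse MARC XML and group records
    # into "work" clusters by a normalized author/title key. Illustration only,
    # not the eXtensible Catalog toolkit; records.xml is a hypothetical input file.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    NS = {"m": "http://www.loc.gov/MARC21/slim"}

    def subfield(record, tag, code):
        el = record.find(f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']", NS)
        return el.text.strip() if el is not None and el.text else ""

    works = defaultdict(list)
    for rec in ET.parse("records.xml").getroot().findall(".//m:record", NS):
        key = (subfield(rec, "100", "a").lower().rstrip(",. "),
               subfield(rec, "245", "a").lower().rstrip(" /:"))
        works[key].append(rec)                 # each cluster approximates a FRBR work

    for (author, title), recs in works.items():
        print(f"{author} / {title}: {len(recs)} manifestation record(s)")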

Jenn Riley, head, Carolina Digital Library and Archives, University of North Carolina at Chapel Hill, presented “Speculating on the Future of the Metadata Standards Landscape,” in which she began by showing various pictorial representations of the current metadata environment. She speculated on what a diagram of newly emerging standards will look like, and stressed that the framework will need to better represent the interoperability of metadata. She ended her portion of the panel by affirming that a new framework will emerge and that libraries must help articulate it. Her presentation is available from http://connect.ala.org/files/rdaPanel.pptx

Professor Jane Greenberg, School of Information and Library Science, University of North Carolina at Chapel Hill, is the metadata manager for the Dryad project. Greenberg presented “Dublin Core: A Stepping Stone for the Dryad Repository,” using Dryad as a case study to show the impracticality of using MARC for such projects. Much of the metadata residing in the repository is machine- or author-generated, making MARC difficult to maintain, while Dublin Core offers a much more flexible and easy-to-understand way for contributors to supply metadata for their publications. Greenberg noted that, as projects like Dryad proliferate online, metadata will be contributed by many individuals and no longer just by librarians, making it necessary to find methods of providing usable metadata without the learning curve and interoperability problems of MARC. More information about Dryad is available from http://datadryad.org/
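
Part of Dublin Core’s appeal for author-supplied metadata is its small, flat element set. A hypothetical deposit description for a data package might be as simple as the sketch below; the values are invented, and Dryad’s actual application profile is richer than this.

    # Hypothetical, simplified Dublin Core description for a deposited data package.
    # Values are invented; Dryad's real application profile is richer than this.
    dc_record = {
        "dc:title":       "Example measurements of island finches, 2009-2010",
        "dc:creator":     ["Example, Ann", "Example, Bob"],
        "dc:subject":     ["evolutionary biology", "morphometrics"],
        "dc:description": "Raw measurements supporting the associated article.",
        "dc:date":        "2011-05-01",
        "dc:type":        "Dataset",
        "dc:identifier":  "doi:10.0000/example.1234",
        "dc:rights":      "CC0 1.0",
    }

    # Print as simple element/value pairs, one per line.
    for element, value in dc_record.items():
        for v in (value if isinstance(value, list) else [value]):
            print(f"{element}: {v}")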

Slicing & Dicing: Usage Statistics for the Practitioner

Christine Korytnyk Dulaney, American University

Now that your library has collected usage statistics, what can you do with those data sets? The three speakers at this program, held Saturday, June 25 at 10 am, described projects completed in their libraries or organizations that used such statistics to justify purchases of resources or cancellations of periodical titles.

As director of the Statewide California Electronic Library Consortium (SCELC), Rick Burke described a research portal that his organization created in partnership with PubGet, a discovery tool, to help member libraries track usage and analyze costs of online databases. SCELC is a consortium of 111 private academic and nonprofit research libraries in California, Texas, and Nevada. With the research portal, member libraries are able to retrieve usage statistics quickly and consistently across a variety of database providers, negotiate license agreements more effectively, and determine which databases or aggregate packages are cost-effective for their users. The portal presents the usage statistics it collects as visual graphics for quickly understanding usage trends and cost. As a result, member libraries are able to make purchasing decisions based more closely on the needs of their users.

John McDonald demonstrated how he used statistics to improve the effectiveness of the Claremont Colleges’ library approval plan. McDonald questioned whether the existing approval plan was meeting the library’s needs for providing accurately targeted books and notification slips: librarian time was not used effectively because the library received too few auto-shipped books and too many notification slips for review. If librarians were spending time managing simple purchase decisions, then more complex purchase decisions were not getting enough attention.

To address this problem, McDonald devised a research project that compared total figures for all the books and notification slips provided by their vendor, YBP Library Services, with the titles purchased by the library. By comparing these data sets, McDonald identified gaps between what the library expected to happen with a particular title and what actually happened: whether YBP shipped the title, sent a notification slip, or excluded the title completely. This comparison further revealed how YBP interpreted the library’s approval plan instructions and how the library needed to adjust the plan so that it would reflect the library’s collection needs more accurately. As a result of these changes, the library increased the number of books received automatically and decreased the number of irrelevant notification slips. As the efficiency and accuracy of the approval plan increased, the number of titles requiring librarian intervention decreased, leaving librarians more time to spend reviewing the more complex purchasing decisions.
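
The comparison McDonald describes is essentially a set operation over title lists. A minimal sketch, assuming three hypothetical lists of title identifiers (titles YBP auto-shipped, titles it sent slips for, and titles the library ultimately purchased):

    # Minimal sketch of the approval-plan gap analysis described above, using
    # hypothetical title identifiers for shipped books, slips, and purchases.
    auto_shipped = {"isbn1", "isbn2", "isbn3"}            # YBP sent the book
    slips_sent   = {"isbn4", "isbn5", "isbn6", "isbn7"}   # YBP sent a notification slip
    purchased    = {"isbn2", "isbn5", "isbn6", "isbn8"}   # titles the library bought

    # Purchased titles that arrived only as slips: candidates for auto-shipping.
    slip_but_bought = purchased & slips_sent
    # Purchased titles YBP neither shipped nor slipped: the profile excluded them.
    excluded_but_bought = purchased - (auto_shipped | slips_sent)
    # Auto-shipped titles the library would not have chosen: the profile is too broad.
    shipped_but_unwanted = auto_shipped - purchased

    print(f"slips later purchased: {sorted(slip_but_bought)}")
    print(f"purchased but excluded by the plan: {sorted(excluded_but_bought)}")
    print(f"auto-shipped but not wanted: {sorted(shipped_but_unwanted)}")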

Annette Day, associate head of collection management at North Carolina State University (NCSU) Libraries, described how data can be used to provide a rationale for collections decisions. Day described three projects that used statistics to drive decisions about purchasing or canceling library resources: a journal review, a monograph use study, and a collections view tool. For the journal review and monograph use projects, Day created data sets from various sources that were then combined to highlight user needs. For the journal review, Day collected metrics such as use, importance of the title to the subject discipline, and publication and citation data. Based on these data, Day identified journal titles that were of lesser significance or little used by patrons; those titles were then considered for cancellation. For the monograph use study, Day combined circulation data and bibliographic information from the library’s integrated library system (ILS) to analyze patrons’ use of monographs. Through this statistical analysis, Day identified usage trends by discipline and was able to articulate a rationale for acquisitions decisions. Finally, Day developed a collections view database that graphs expenditures per faculty member, student, or university department in order to align resource expenditures with research initiatives university-wide.

The projects described by these three speakers effectively demonstrate how data can be combined and analyzed to enhance library effectiveness. If a library has usage statistics, that information can be analyzed to make effective purchasing decisions. Careful analysis of statistics can help librarians refine their approval plans, yielding workflow efficiencies and better use of vendor services, so that librarians can provide enhanced services to patrons. By reviewing and analyzing statistics, libraries can work smarter, purchase materials that users need, and understand how the library is serving its constituents.

Leading Technical Services in 2011

Ruth Elder, Troy University

Beth Farwell, moderator, kicked off the Saturday, June 25 program by reminding us that leadership is an action verb not a noun, and that we can lead from any position in the library.

Marlene Harris, director of the public services division, Alachua County Library (Florida), had ten excellent tips to share.

  1. Feel the fear – and do it anyway! Life is what happens while we are making other plans.
  2. Say yes – it’s good for you and your institution.
  3. Keep your commitments – better to under-commit and over-deliver.
  4. When you are the project manager, have a backup person – always have a plan B.
  5. Give credit where credit is due.
  6. Teaching is the fastest way to learn – become an expert.
  7. Give yourself permission to fail – but don’t fail the same way twice.
  8. Sometimes it is better to ask forgiveness than permission – technically, do this only once.
  9. Pick your battles – you can’t fight everything.
  10. Everyone feels like a fraud some of the time.

She finished by reminding us that opportunity never arrives; it’s here. The only way to be prepared is to say yes! Harris’ web site is www.readingreality.net.

Anne McKee, program officer for resource sharing with the Greater Western Library Alliance (GWLA) consortium, also had ten tips:

  1. Social media is not your (professional) friend. Remember, those pictures are out there for the rest of your life.
  2. Volunteer – you need to be out there so people can see you do a good job.
  3. Be an information gatherer – sometimes it’s best to just sit, listen, and observe.
  4. “Go towards the light” – go to committee meetings that you are not on so members will know who you are.
  5. Broaden your horizons – you can get involved more quickly in smaller professional organizations.
  6. Beware of the “BWADITW” (But We’ve Always Done It That Way) people. Have your business plan ready for them.
  7. Professional reading is your friend: library journals, the Chronicle of Higher Education, blogs, etc.
  8. Keep your friends close and your vendors closer; you’ll be amazed at how much you can learn from them.
  9. Mentors are wonderful, but choose carefully. They may be there for life or only for a specific project.
  10. Does it pass the knickers test? Friends, colleagues, and mentors who are willing to ask if you really want to do this can bring you back to reality.

Peggy Johnson, Associate University Librarian, University of Minnesota, commented that several of the presenters’ tips were similar; that is because these concepts are fundamental to leadership and do not change. She gave fourteen helpful tips:

  1. Get a mentor or role model who you trust and who will be honest.
  2. Build trust by being consistent and genuine. This provides stability. Don’t talk about people!
  3. Don’t monitor yourself too much. “You can’t lead the charge if you are worried about how you look on a horse.”
  4. Take risks but don’t be a loose cannon. You also need to be dependable.
  5. Volunteer. Show how you can solve problems.
  6. Envision the future and think long term.
  7. Listen, and keep communications open. You may not like what you hear but you need to hear it.
  8. Don’t be afraid to color outside the lines. Be willing to do things differently.
  9. Don’t always take the glory. The spirit of collaboration leads to the best results.
  10. However, don’t be too modest. Take the credit you deserve.
  11. You can’t always depend on your boss. Make sure you take care of yourself.
  12. Be self-aware. Define yourself by building on your strengths.
  13. Being a leader is hard work! It takes persistence and lots of hours.
  14. Have aspirations. Imagine where you want to be and set goals.

In a great example of leadership, Johnson demonstrated some of the skills and abilities needed in a leader when the presentation computer crashed and she completed her talk without the PowerPoint slides.

Continuing Resources Section Holdings Forum

Gracemary Smulewitz, Rutgers University

On Saturday, June 25, CRS held a Holdings Forum to explore the current demand for universal publication history as well as new initiatives seeking to collect holdings information and preservation details.

Julie Su spoke on behalf of David Lawrence, editor of the SafetyLit database. SafetyLit’s goal is to provide a database of material about safety and injury prevention. It is funded by San Diego State University and the World Health Organization. SafetyLit consolidates articles from many disciplines ranging from medical to legal; the majority of the editing and abstracting is completed by volunteers, and the site has sixty thousand to seventy-five thousand visitors each week. Current journals and back issues are tracked for anything published that pertains to safety. The challenge for those working on SafetyLit is that the number of issues and supplements published within each volume is unknown, so it is difficult to know how many issues need to be reviewed.

Yvette Diven, publisher and senior product manager, Serials Solutions, explained that Ulrichsweb tracks publication histories and maps the lifecycle of serials. The primary tracking mechanism is Serials Solutions’ KnowledgeWorks. Serials are tracked and history is gathered to identify changes over time, support researchers’ needs for information, support collection management needs, and help create holdings records.

Serials Solutions’ aim is to leverage what is in KnowledgeWorks and what is in Ulrich’s. Ulrich’s is a global database that includes online availability, reviews, and recommendations; it covers over 220,000 actively published serials, including publications of scholarly societies, and also contains ninety thousand titles with ceased or suspended status.

Serials tend to be inconsistent; a serial can start in print in a given year, suspend publication, restart after five years, begin a new series, launch online, cease the print edition with a particular volume and issue, merge with another title, and so on. Serials Solutions tracks elements including start and end years, publication frequency, publication status, restart dates, online edition launch dates, new series, base volume, supplements, indexes, title and ISSN history, title mergers and splits, provider, and full-text coverage data.

Regina Reynolds, Library of Congress ISSN coordinator, spoke on behalf of Peter Burnhill, director of EDINA and head of the Edinburgh University Data Library. Reynolds reported on Piloting an E-journals Preservation Registry Service (PEPRS) and the Serials Union Catalogue (SUNCAT). PEPRS is an EDINA UK project funded by JISC in partnership with the ISSN International Centre in Paris. It is a two-year project whose purpose is to provide users with information about journals that are being preserved; it is not an archive, but rather a registry, and there is no de-duping process for the pilot. SUNCAT is designed for the UK research community: a free tool to help researchers and librarians locate serials held in the UK.

The public beta for PEPRS is now available and can be searched by ISSN or eISSN; the data come from the ISSN registry. A record shows what was preserved, the publisher’s dates, the dates preserved, and how the title is preserved. The challenge is how to deal with holdings information that comes to the registry in different forms. It is expected that soon lists of ISSNs will be able to be loaded to check preservation status. Also expected is OpenURL linking to PEPRS at the title level, and ultimately at the holdings level.

Center for Research Libraries Print Archives Program Manager Lizanne Payne shared information about the emerging movement of shared print management. Shifting publication patterns, library space issues, and the changing benefits of legacy print are all driving these new programs. OhioLINK, the Partnership Among South Carolina Academic Libraries (PASCAL), the Washington Research Library Consortium (WRLC), and the Minnesota Library Access Center (MLAC) are some examples of institutions building shared print management programs.

Payne explained that institutions need to ensure there is a mechanism to record and disclose archiving commitments at title and volume level, as well as decision support to identify titles and volumes suitable for archiving.

PARS Two Thumbs Up: A Preservation Film Festival

Emily Prather-Rodgers, North Central College

The “Two Thumbs Up! Preservation Film Festival,” presented by the Preservation and Reformatting Section (PARS) and sponsored by Alexander Street Press, provided an opportunity for conference attendees to take a break from the New Orleans heat late on Saturday afternoon while enjoying nineteen short films on the dos and don’ts of collection care and materials handling. The films were focused on user education, and they can all be viewed online for free. Most can be found on YouTube. Audience favorites included:

  • Librarian. Haunted Love, 2007. http://www.youtube.com/watch?v=Ne_WXP7lUWM
    In this dark music video from New Zealand-based pop band Haunted Love, two bespectacled librarians lure their handsome young patron into the closed reserves after he fails to follow standard “library protocols.”
  • BR/Harold B. Lee Library Book Repair. Multimedia Production Unit, Brigham Young University, 2010. http://www.youtube.com/watch?v=tyNbUWvq2mM
    This very short (52 second) but entertaining film, set to the theme song from the television series ER, provides an overview of the work done in the library’s preservation unit.
  • Library. From The Best of Mr. Bean, Volume 2. A&E Home Video, 1990. http://www.youtube.com/watch?v=CwOrp6Q7kCE
    Mr. Bean’s visit to a rare book room ends in hilarity when he accidentally destroys an illuminated manuscript.
  • Team Digital Preservation and the Arctic Mountain Adventure. WePreserve. 2010. http://www.youtube.com/watch?v=PGFOZLecjTc
    In this episode, one of six animated films created by WePreserve, DigiMan neglects to plan for DigiNiece and DigiNephew’s safety on their hiking trip. Fortunately, the children are rescued by a friendly polar bear, and DigiMan learns a lesson about the importance of planning ahead to ensure that both physical and digital objects are preserved effectively.

Planning for the Worst: Disaster Preparedness and Response in High-Density Storage Facilities

Tara Kennedy, Yale University

Emergency preparedness and disaster response in high-density facilities was the focus of the program, including specific areas such as fire suppression, emergency planning, and emergency response. The program was held on Sunday, June 26 at 8am.

Bobbie Pilette, director of preservation for Yale University Library, spoke about the research report on fire suppression for high-density storage facilities. Preservation librarians responsible for collections in high-density facilities realized that the fire suppression systems designed for warehouse-type facilities would not be sufficient for library high-density storage. FM Global, the insurer for most of these facilities, agreed to conduct a series of fire suppression tests. The goals of the project were to provide fire protection options for a typical high-bay, high-density storage arrangement; develop loss mitigation methods to reduce non-thermal damage; and make recommendations for the future design of high-density storage modules, if necessary.

The test bays were set up as typical facilities: Spacesaver racks twenty feet high with books in cardboard trays. All tests had smoke detectors and sprinklers.

  • Test 1: In-rack and ceiling sprinklers
  • Test 2: In-rack and ceiling sprinklers, with added archival boxes containing “archives”
  • Test 3: Same as Test 2, with face sprinklers added to the racks

Results and Conclusions:

  • Smoke detectors went off first
  • In-rack and ceiling sprinklers provided adequate fire protection, indicating that in-rack sprinklers are effective in reducing the temperature of the racks and thereby limiting the risk of rack collapse
  • Face sprinklers reduced fire and water damage by 50 percent
  • Narrow aisles made firefighting difficult
  • Because of the high density, the amount of material affected even in a small incident was large: one rack holds 400 linear feet, and at 10 books per foot, that is 4,000 books
  • Cardboard trays failed quickly and created a falling-book hazard; trays and books absorbed water very quickly, and when the books expanded they tore the trays and fell. This continued for four hours after the test
  • Barcode information on the trays was lost
  • Weakened trays could not be used to pull books off the shelf. Coated trays resisted for only an additional 30 minutes. Covered trays did help keep water out, but water was absorbed through the bottom of the tray

Final recommendations:

  • Early detection devices are key
  • In-rack and face sprinklers are needed
  • Local fire department needs to know the facility and its potential challenges and hazards
  • A response and recovery plan is necessary
  • Consider replacing corrugated trays with something that is non-combustible and will not fail when wet
  • Books need to be kept on the shelves and prevented from falling off; piles of books in the aisles inhibit recovery

The final report will be available soon. The presentation is available on the Yale Library web site: www.library.yale.edu/about/departments/preservation/addl_pres_res.html

Library of Congress Collections Officer Beatrice Haspo then spoke about a water event that occurred at the Library of Congress’ high density storage facility at Fort Meade, Md., built in 2009. Their fire suppression system consists of in-rack and ceiling sprinklers that are set to go off at 165 degrees Fahrenheit. Smoke and water detectors are also in the facility. The housing containers are corrugated cardboard trays with lids, so books are in enclosed cases. One of the challenges of the facility is that it is within a military base (Library of Congress is a “tenant”) so security is strict.

In March of 2009, the test of the dry pipe system in cold vault storage caused the sprinklers to go off – essentially, an equipment failure. The books were not damaged (having closed containers helped), but the boxes were damp from the bottom up. Response was done quickly because it was during a work day. Library of Congress facilities re-sequenced the pumps, changed sprinkler heads that were damaged, and modified systems to avoid future damage. It was considered a “successful failure” that helped prevent a larger future disaster.

Jacob Nadal, preservation officer, University of California Los Angeles, discussed how to prevent damage from earthquakes in libraries (not just high-density storage facilities). His points included:

  • Earthquakes have directionality, and it is important to note where the fault lines are when building facilities
  • Building codes are set by states, counties, and cities; local codes are the targets your architect has to meet
  • Bracing and anchoring shelving is important to prevent bookshelves from toppling during an earthquake
  • Doors, cords, and edging are effective for moderate shaking; there is a system in which a bar drops automatically, and rubber tape at the front of the shelf works well too
  • The high-density facility at UCLA is shelved tightly, and the design is a building within a building, with the stacks suspended within the exterior shell to dampen vibration. Shelving towers are about seven feet high, with no trays
  • High-bay storage facilities are less tested in real-life earthquakes, and no simulations have been performed. A more certain problem is damage to the shelving system; the orientation of materials after a quake could inhibit retrieval and recovery

His web site includes more information: www.jacobnadal.com/176

Janet Gertz, director of the Preservation and Digital Conversion Division, Columbia University Library, then talked about creating a disaster plan for a shared repository (ReCAP, serving Princeton, the New York Public Library, and Columbia).

The ReCAP facility is located on Princeton’s campus and is more than ten years old. It is based on the Harvard model: five modules that can hold 10 million volumes, with a goal of 8 million by May 2011. The site will allow for fifteen modules, which could hold 38 million items. Special collections are interfiled with general collections for security reasons.

The facility is a partnership: any disaster will hit all three institutions, so they can’t have three different sets of procedures and policies. All three partners agreed to a single disaster plan.

ReCAP contracted with Copper Harbor Consulting to do risk assessment and response and recovery planning; the consultants gathered information and organized it into a plan. The best way to avoid a complete disaster is careful design and ensuring that routine maintenance actually happens.

Jennifer Hain Teper, head conservator and interim head of preservation, University of Illinois at Urbana-Champaign (UIUC), spoke about her institution’s experience with testing the effects of an activated fire suppression system in its high-density storage facility.

UIUC has been designing a disaster recovery protocol since 2008. The facility does not have in-rack sprinklers but has very large ceiling sprinkler heads intended to extinguish a fire rather than merely control it, so an activation discharges a very large amount of water.

UIUC collaborated with the Industrial and Enterprise Systems Engineering (IESE) program for assistance with its plan. Fire suppression tests were conducted under real-life, worst-case-scenario conditions. The tests produced damage such as 40 percent swelling on enclosed items (80 percent on unenclosed items) and the loss of barcodes through sheer water force. Acrylic-coated enclosures did not fare better than uncoated enclosures. Prioritized extraction was hindered because special collections are distributed throughout the facility; their trays were a different color, however, making them easy to identify.

From the two tests, the following recommendations were made:

  • Determine placement of special collections materials to reduce risk of damage and allow for manual retrieval
  • Use the same trays for general and special collections; visible labeling creates a security risk
  • Install sprinklers at every foot; an in-rack sprinkler system using flexible piping is the only option for covering each row

An audience member asked if anyone had considered a high-fog/mist system for fire suppression. The response was that this type of system was not acceptable to the underwriters and is not recommended for high-density facilities.

Open Source Electronic Resource Management Systems: CORAL and ERMes

Debbra F. Tate, Kentucky State University

This presentation highlighted two of the open source electronic resource management systems currently available: CORAL, which was developed at the University of Notre Dame, and ERMes, which comes to us from the University of Wisconsin-La Crosse. Two representatives from each university discussed their respective systems, with Benjamin Heet and Robin Malott speaking about Centralized Online Resources Acquisitions and Licensing (CORAL), and Galadriel Chilton and William Doering introducing ERMes. The program focused on the benefits and challenges of implementing an open source electronic resource management system, followed by demonstrations of both systems.

The primary benefit, other than cost, of using one of these open source management systems is that they are more customizable than a commercial product. Since libraries can vary widely in their workflows and purchasing models, the ability to tailor the system to your needs is valuable. Both of these systems were built for this very reason: to support the institutions’ actual workflows, rather than forcing workflows to change to accommodate the one-size-fits-all approach of commercial electronic resource management products. Neither CORAL nor ERMes was originally designed as open source software, but when other institutions expressed interest, the decisions were made to share the code. This not only contributes to the greater library community, but also enables others to help improve the products. The presenters pointed out that finding the time for support and development can be a challenge, given their other job responsibilities, so input and assistance from others who use their systems is helpful.

ERMes is a Microsoft Access-based system. It can function without data entry in every field, allowing libraries to use only the areas that support their needs. Users can also change the code as they see fit and contribute these adaptations to the community at large. Sixty institutions from across the country and around the world are currently using ERMes. Support offered via the ERMes web site (http://murphylibrary.uwlax.edu/erm/) includes a blog and links to tutorials on using MS Access, as well as a Google Group. The current software release is version three, although the earlier versions are still available. A Java overlay, intended to simplify permissions and support simultaneous users, is under development and expected in the coming months.

The development approach to CORAL, which was built using PHP 5 and MySQL, was to keep it needs-based, simple, and easy to use. Because CORAL is a modular system, users have the option of installing or utilizing just the portions that they need. For example, some libraries are only interested in using it to manage license agreements. Between twenty and thirty confirmed sites are using CORAL, both in this country and abroad. The CORAL discussion list, however, has subscribers from 100 libraries, indicating additional interest. There is a demo version of CORAL available at http://erm.library.nd.edu, although you will need to request a login for the system. Discussion list instructions are also available at the site.
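Neither presentation went into CORAL’s underlying database design, but for readers unfamiliar with what an electronic resource management system stores, the following minimal sketch (in Python, using an in-memory SQLite database as a stand-in for CORAL’s MySQL back end) illustrates the kind of license-tracking record such a system might keep. The table and field names here are invented for illustration and are not taken from CORAL or ERMes.

    import sqlite3

    # Hypothetical license-tracking store; names are illustrative only and do
    # not reflect CORAL's actual MySQL schema or ERMes' Access tables.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE license (
            id INTEGER PRIMARY KEY,
            resource_title TEXT NOT NULL,
            licensor TEXT,
            ill_permitted INTEGER,       -- 1 = interlibrary loan allowed
            simultaneous_users INTEGER,  -- NULL = unlimited
            expires TEXT                 -- ISO 8601 date
        )
    """)
    conn.execute(
        "INSERT INTO license (resource_title, licensor, ill_permitted, "
        "simultaneous_users, expires) VALUES (?, ?, ?, ?, ?)",
        ("Example E-Book Collection", "Example Press", 1, 3, "2012-06-30"),
    )
    for title, expires in conn.execute("SELECT resource_title, expires FROM license"):
        print(title, "expires", expires)

Because a record structure like this is simple and self-contained, a library that only needs license tracking could, as the presenters noted for CORAL, install and use just that portion of a modular system.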

RDA Update Forum

Yoko Kudo, University of California Riverside

The Resource Description and Access (RDA) Update Forum was held on Sunday, June 26 at 1:30pm, featuring six speakers.

The National Agricultural Library’s Chris Cole, also a U.S. RDA Test Coordinating Committee co-chair, described the background of the Resource Description and Access (RDA) test. In response to the concerns raised by the Report on the Future of Bibliographic Control, the three national libraries agreed to conduct a full-scale test of RDA before its implementation. Among the questions investigated was whether the Joint Steering Committee (JSC)’s goals for RDA had been achieved. As a result of the test, the Test Coordinating Committee determined that three of the ten goals have not yet been achieved: to be written in plain English, to be optimized for use as an online tool, and to be easy and efficient to use.

Jennifer Marill, National Library of Medicine and U.S. RDA Test Coordinating Committee co-chair, presented the testing methodology. Testing institutions were expected to produce at least three sets of records to be submitted to the Library of Congress, using both RDA and the current cataloging codes, which turned out to be mostly AACR2. For the common original set, the Coordinating Committee selected titles with attributes that would likely be recorded differently with RDA. The following records were produced during the test period:

  • Common original set: bibliographic 1,509 / authority 1,226
  • Common copy set: bibliographic 123 / authority n/a
  • Extra set: bibliographic 7,786 / authority 10,184
  • Informal testers: bibliographic 1,148 / authority 1,390

In addition to four questionnaires related to the different test sets, four surveys were used for user review. The purpose of the additional surveys was to collect testers’ demographic information, solicit feedback from record users, collect management responses from institutions, and allow informal testers to submit records and comments.

The findings and recommendations from the test were reported by the Library of Congress’ Beacher Wiggins, also a U.S. RDA Test Coordinating Committee co-chair. Submitted records were reviewed to determine which treatments, among broad variations, were acceptable. Errors were found with roughly the same frequency in RDA and AACR2 records; the RDA errors clustered around the access points for works and expressions. Time assessment was conducted as part of the criteria for cost analysis: self-reported record creation time ranged from one minute to 720 minutes per record, and the average time spent by professional catalogers was seventy-five minutes for RDA and forty-six minutes for AACR2.

User survey results were outlined as follows:

  • 85.2 percent answered that the RDA records fully meet or mostly meet their needs.
  • Replacing the General Material Designation (GMD) and spelling out currently abbreviated words were cited as both positive and negative features of RDA.
  • 75 percent felt that the training documentation needs to be updated before implementation.
  • Many responded that the RDA Toolkit was difficult to use, but its Workflow feature was useful.
  • 63 percent of testers anticipate that major or minor changes need to be made to their local operations.
  • 62 percent of institutional responses indicated that the U.S. should implement RDA, or implement it after making some changes.

Based on the analysis of the test data, it was decided that the U.S. national libraries will not implement RDA prior to January 2013. Along with this decision, five sets of recommendations (action items) were made for the executives of the three national libraries, ALA Publishing and other co-publishers, cataloging communities including the Library of Congress’ Program for Cooperative Cataloging (PCC), the vendor community, and the JSC. A timeframe for completion is assigned to each action item.

A concern was expressed from the audience about the teaching of cataloging in library and information science (LIS) programs. Wiggins emphasized the importance of introducing students to RDA and the FRBR terminology as soon as possible.

University of California Los Angeles’ John Riemer, chair of the PCC, reported that three task groups (TGs) have been formed to prepare the community for the transition to RDA. The “RDA Decisions Needed TG” deals with issues that must be addressed for successful implementation. The “Acceptable Headings Categories TG” sorts headings currently in the authority file into three categories and makes recommendations on the use of, and changes needed for, each category. The “TG on Hybrid Bibliographic Records” investigates the use of hybrid records.

Robert Bremer, consulting database specialist at OCLC, provided a brief update on OCLC’s plans for RDA. OCLC will participate in the PCC’s three task groups. The interim policy on RDA cataloging in WorldCat will remain in effect for the time being. A discussion paper will be issued on bibliographic records with mixed practices, in order to generate discussion and build consensus on related issues among OCLC libraries.

Lastly, news about the RDA Toolkit was shared by ALA Publishing’s Troy Linker. The double-user offer has been extended until August 31, 2012. All marketing and access information on the Toolkit is available on the web site, www.rdatoolkit.org, and information about future webinars will be posted on the Toolkit blog. Suggestions can be made using the feedback mechanism on the web site or emailed to rdatoolkit@ala.org.

Consultants for Technical Services: What Do They Do and How Can They Help?

Arthur Miller, Princeton University

ALCTS held an interesting and informative program in New Orleans on Sunday, June 26. The program, overseen by Lila Ohler, an acquisitions librarian at the University of Maryland Libraries, featured three speakers who covered the use of consultants from the points of view of both the libraries that hire them and the consultants themselves.

After a brief introduction, the program began with Andrew White, Interim Dean and Director of Libraries at Stony Brook University, a research institution on Long Island. He discussed the results of having three different sets of consultants examine library operations within two years. The impetus was a combination of budget considerations and questions from the president on down. Each consultant had a different charge, and each consultant’s report led to different changes, but each set of changes followed from the ones before.

Dr. Ruth Kinnersley, Director of Trevecca Nazarene University’s Waggoner Library, discussed the library’s use of a consultant when it decided to examine the technical services department upon the retirement of the technical services librarian. With the consultant’s help, the library completely revamped its operations and increased its effectiveness and efficiency.

Ruth Fischer is a founding partner of R2 Consulting, LLC, a firm that works with libraries and library vendors. Fischer made it clear that nearly any consultant visit involves stress. She also emphasized that, to accomplish anything, consultants need the cooperation both of the people who bring them in and of the staff affected.

The details of each presentation varied, but there were some common points among them. First, both the organization and the consultants have to know what the consultants are being asked to do. Careful thinking and planning can maximize the benefit derived from any consultant. Be as clear as possible about what you hope to accomplish, and be as open and honest as possible in answering the consultant’s questions; anything less makes failure much more likely.

Proposals from the consultant may often be ideas about which someone will say, “I’ve been saying that for years.” It is important to agree and move on. The idea may have been around, but the consultant can explain why it needs to be done and sometimes how to implement it. Further, having it come from an outside source can provide political cover and a push for change that might otherwise have been hard to achieve.

Finally, try to involve as many of the affected people as possible. The more they can contribute, the better the product and the better chance that changes will be supported. Give out as much information as is practicable.

It was pointed out that not all of the suggested changes have to be implemented. Some things can be done and, sometimes, some things cannot. Besides, as Ruth Fischer pointed out, consultants can be wrong!

A program handout and the presenters’ slides are available on ALA Connect: http://connect.ala.org/node/141395

PARS Forum

Alice Platt, Southern New Hampshire University

Daniel Burge, senior research scientist, Image Permanence Institute at Rochester Institute of Technology, was the featured speaker for the ALCTS-PARS Preservation Forum, held Sunday, June 26 at 4pm. Moderated by Karen Brown, preservation librarian, State University of New York-Albany, the program was popular with ALA attendees, garnering approximately fifty participants in a small meeting room at the JW Marriott.

Burge provided a report on the ongoing Digital Print Preservation Portal (DP3), a project intended to determine and publicize best practices for preserving digitally printed resources. According to Burge, these resources could include any text or images printed from a digital format. Digital prints have different risks for deterioration and degradation, depending on the paper used to make the print, as well as the printing method used.

Printing methods include, but are not limited to, electrophotographic (what we know as laser printing) and drop-on-demand inkjet, typically called inkjet printing. The speaker discussed these two methods in great detail.

Burge noted that in inkjet printing, there are two types of inks: dyes and pigments. Dyes are shown to have a shorter life than pigments, as dyes are derived from an organic, plant-based resource. Both the ink type and the type of paper used can contribute to different types and levels of deterioration. With laser printing, however, paper is the main variable for determining what type and how much deterioration might occur. A tool intended to help determine what type of print is in your collection is available on the study’s web site, http://dp3project.org/.

Phase I of the project, funded by the Andrew W. Mellon Foundation and the Institute of Museum and Library Services, focused on a broad survey of materials and the types of decay that may occur. Phase II, scheduled to be completed by 2014, will refine these findings and develop final care recommendations, which will be posted to the study’s web site. Much more information about the project is already available on the web site, as well as a quarterly newsletter.

Emerging Research in Collection Management and Development Forum

Stephen Dew, University of North Carolina at Greensboro

The “Emerging Research in Collection Management & Development” forum was held on Sunday, June 26, at 4pm in the Morial Convention Center, Room 269.

The first presentation was by Douglas Jones from the University of Arizona, entitled “Assessment of a Fully Integrated Patron Driven Access (PDA) Model to Provide English Language Books at the University of Arizona.” The University of Arizona supports PDA programs for both e-books and printed books. For the past few years, the library has been compiling detailed statistics on PDA usage in order to view patterns and trends over time, and in addition, customer satisfaction surveys were implemented in order to gain supporting information. Using the information gathered, the PDA programs were adjusted slightly, and further adjustments are expected as more information is gathered. Arizona considers the programs successful, and Jones noted that PDA empowers customers, is cost effective, and frees library selectors for other work.

The second presentation was by Heather Hill of the University of Western Ontario, entitled “Breaking Out of the Silo: Public Library Use of Free Online Resources.” Jenny Bossaller of the University of Missouri was a co-author but was unable to attend; she was at another meeting, where she received RUSA’s 2011 Reference Service Press Award for an article published earlier in Reference & User Services Quarterly. Hill emphasized such resources as Google Books, Project Gutenberg, PubMed, the Internet Public Library, the Internet Archive, the American Memory Project, NASA, and sites using Creative Commons licensing. Surveys were sent to thirty-four large public libraries asking how and why they provided users with access to certain free online resources. About 75 percent reported that providing such access was an important library policy, and about 25 percent reported that staff time and money constraints limited it.

Publisher-Vendor-Library Relations (PVLR) Forum

Kay Granskog, Michigan State University

“Managing Your Future E-Book Collection” was the theme of the PVLR Forum, held Monday, June 27 at 8am.

Michael Zeoli, YBP Library Services, opened with a discussion of Andrew Pettegree’s history of early publishing, The Book in the Renaissance, reminding the audience that the e-book business is still young. Books were not standardized at one time, and while history has shown the significance of Johann Gutenberg’s contribution, it was not apparent in his own time. Today, 20 percent of the e-books YBP handles are published simultaneously with print, an increase from 6 percent last year. Those numbers are low for anyone considering patron-driven acquisitions, consortial deals for e-content, or short-term loans for titles that may have been purchased earlier in print. Libraries still expect vendors to provide comprehensive coverage of publisher output regardless of platform, as well as duplication control.

Melanie Schaffner, Project MUSE, explained why Project MUSE added e-books to its offerings: it has a proven track record with e-journals and established relationships with journal publishers, many of whom also handle e-books; libraries requested it; and users like an integrated experience.

She then addressed the opportunities and challenges that e-books bring. There are synergies in bringing books and journals to a single platform: an established base of users becomes a built-in customer base, served by an entity with fifteen years of experience with e-resources. On the other hand, e-books do not work like serials: there are many more titles, a lack of standards for e-ISBNs, pricing concerns, and difficulty creating an accurate depiction of collection content at invoice time. Project MUSE is still learning and determining how far it wants to venture into this area.

Beth Fuseler Avery, University of North Texas Libraries, compiled some questions that libraries face when thinking about e-books:

  • Libraries struggle with collecting “just in time” versus “just in case.”
  • It makes sense that a multiple-user license costs more, just as multiple print copies do, but why is there no single-user discount, as with print?
  • Why are e-books distributed later than print versions?
  • The problem with short-term leases as an alternative to permitting interlibrary loan is the multiplicity of platforms; we want a reader that can be used across platforms.
  • Can a single user license flip to a multi-user license if needed?
  • In the future, will the unit of sale be the article or the journal, the chapter or the book? What does that do to royalties?

ALCTS President’s Program

M. Dina Giambi, University of Delaware

Paul Courant, University Librarian and Dean of Libraries, Harold T. Shapiro Collegiate Professor of Public Policy, Arthur F. Thurnau Professor, Professor of Economics, and Professor of Information at the University of Michigan, was the featured speaker at the ALCTS President’s Program on June 27, 2011. Rosann Bazirjian, University of North Carolina at Greensboro, chaired the ALCTS President’s Program Committee for ALCTS President Cynthia Whitacre.

Courant, who identified himself as an academic economist and a university librarian, addressed the topic “Economic Perspectives on Libraries and the Value Thereof.” Courant commented on the complexity of academic libraries and that they want to be loved by their named professors. He referred to the golden era for academic libraries when they controlled what students read. If that era ever existed, it is gone. An academic library supports scholarship, including scientific work. It provides relevant information and material to students and faculty and provides reliable, stable access to the scholarly record and associated source material.

Libraries have traditionally shared expensive goods with their users. Courant expects that this will continue to be the case, with new examples of expensive goods being equipment and video and editing studios. In the past, copying and printing were expensive, but this is no longer the case; distributing copies was also expensive, but now it is cheap. Digitization will enable the sharing of great collections and eliminate the need for duplication, making the case for local collection building much weaker. Expensive physical space will be freed up, and money will be saved by the use of digital copies.

For printed books, preservation and access go together, and preservation continues to be an automatic process even though it is expensive. Electronic files are cheaper to preserve, but there is nothing automatic about the process. Courant stated that born-digital works hosted by publishers are at great risk. How do libraries ensure that currently produced scholarship will be preserved?

Courant announced that the University of Michigan will begin to make books that have been categorized as orphan works available in its digital collection. These are books that are still in copyright, but for which the copyright owner is not known.

The formal remarks were followed by a lively and extended question and answer session.

Continuing Resources Cataloging Committee Update Forum

Teressa Keenan, University of Montana

The Continuing Resources Cataloging Committee Update Forum was held Monday, June 27 at 1:30pm. A variety of speakers gave updates of interest to the continuing resources cataloging community.

Les Hawkins and Hein Nguyen provided an update on Cooperative Online Serials (CONSER) work related to RDA. A standard practices document was created to aid those testing RDA by providing general guidance for resolving differences between the RDA core elements and the CONSER Standard Record (CSR) mandatory element set. The document, “RDA as modified by CSR: Recommended Guidelines,” is available from the CONSER web site: www.loc.gov/acq/conser/CSR-RDA-Test.pdf. It will continue to be reviewed and updated as RDA implementation progresses.

Regina Reynolds discussed the potential of the ISSN as an identifier in a linked data environment, suggesting that separate numbers be assigned to the print and digital versions of the same title. The U.S. ISSN Center (http://www.loc.gov/issn/) is working on assigning ISSNs to the online versions of print materials (approximately 1,200 titles are in progress). A task force is working to synchronize the ISBD rules to better match RDA. Reynolds finished by providing an update on developments with the Recommended Practices for the Presentation and Identification of E-Journals (PIE-J), www.niso.org/workrooms/piej. Reynolds is a NISO PIE-J Working Group member and the NISO ISSN Coordinator.
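Reynolds did not present a specific data model, but as a rough, purely illustrative sketch of what treating print and online ISSNs as distinct linked-data identifiers might look like, the short Python snippet below prints a few hypothetical Turtle-style triples. The ISSNs and the ex:hasOnlineVersion property are invented for illustration; only the urn:ISSN form and dcterms:title are standard usages.

    # Hypothetical example only: the ISSNs and the ex: property below are
    # invented to illustrate separate identifiers for print and online versions
    # of the same title, linked to each other.
    triples = [
        ("<urn:ISSN:0000-0001>", "dcterms:title", '"Example Journal (print)"'),
        ("<urn:ISSN:0000-0002>", "dcterms:title", '"Example Journal (online)"'),
        ("<urn:ISSN:0000-0001>", "ex:hasOnlineVersion", "<urn:ISSN:0000-0002>"),
    ]
    for subject, predicate, obj in triples:
        print(f"{subject} {predicate} {obj} .")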

Naomi Young, University of Florida, reported for the Committee on Cataloging: Description and Access (CC:DA), stating that the Joint Steering Committee for Development of RDA (JSC) will meet in November; new proposals are due by August 11, and responses to those proposals will be due in September. Information on the Bibliographic Framework Transition Initiative (www.loc.gov/marc/transition/) is available online. The RDA test site (www.loc.gov/bibliographic-future/rda/) is being reworked to provide additional information as the community moves from testing to implementation. Troy Linker indicated that double-user access to the RDA Toolkit has been extended and a free trial subscription is still available. There will be a free RDA Toolkit webinar on July 12, 2011. ALA Publishing is working on improvements to the documentation on how to edit workflows and is in the process of translating the Toolkit into French, German, and Spanish. It also plans to implement virtual user group meetings three times per year.

Playing in the RDA Sandbox – Valerie Bross, University of California Los Angeles, provided an overview of the activities of the CRCC Informal RDA Testing Task Force. She then gave an overview of the VTLS sandbox and provided the following links for more information: