All Those Programs You Missed

Each year, ALCTS members volunteer to cover a program they attend at ALA Annual Conference. These efforts enable the rest of us to benefit from their presentations. Whether you attended the conference but missed the program, or couldn’t attend at all, these articles provide a great way to learn about what was covered. We regret that volunteers were not available to report on all the events.

This year, we’ve loosely organized the ALCTS programs into subject categories, with some being easier to organize than others (sound familiar, catalogers?). RDA was a huge theme in 2012, and Big Deals also emerged as a topic worthy of discussion. In the ALCTS Programs section, you’ll find topics that relate to all of us in technical services, from acquisitions to digital preservation. And finally, six forums were held this year, where speakers focused on a specific topic for discussion. Follow the shortcuts to find your topic of interest, or scroll down to read it all.

RDA | The Big Deal | Preconferences | General Programs | Forums

RDA

A Change in Authority: Authority Work in the RDA Environment

See entry under Preconferences.

Continuing Resources Cataloging Committee Update Forum

See entry under Forums.

RDA and Government Publications

By Tassanee Chitcharoen, University of Colorado at Boulder

This year’s ALA Annual Conference included a program cosponsored by the Government Documents Round Table (GODORT) on the implications of cataloging government information in the Resource Description and Access (RDA) environment. The speakers were Regina Romano Reynolds, director of the U.S. ISSN Center at the Library of Congress (LC) and head of LC’s ISSN Publisher Liaison Section; Jennifer Davis of the U.S. Government Printing Office (GPO); Jim Noel of MARCIVE; and GODORT Chair Richard Guajardo, head of resource discovery systems at the University of Houston Libraries.

Regina is a member of the U.S. RDA Test Coordinating Committee and co-chaired an internal LC group that recommended LC projects based on the recommendations of the Working Group on the Future of Bibliographic Control. Regina spoke of “Day One” (March 31, 2013), the RDA implementation date. Beginning on Day One, all new LC bibliographic records will be created in RDA; March 30 is the last day of cataloging in AACR2. This deadline also applies to the National Agricultural Library (NAL), the National Library of Medicine (NLM), and international partners in the European Union. By Day One, the RDA Toolkit will be improved, with instructions worded in plain English. Chapters that currently need review should also be improved. The Library of Congress currently has RDA training materials available on its website, and there are also podcasts available in iTunes (search for Library of Congress RDA). Questions regarding RDA and LC’s RDA plans may be directed to LChelp4rda@loc.gov.

Regina also discussed LC’s open metadata registry, which allows data to be shared worldwide, and LC’s contract with Zepheira to help accelerate the launch of the Bibliographic Framework Initiative. The major focus of the contracted project is to translate the MARC 21 format to a Linked Data (LD) model while retaining the historical format. Details of the transition from MARC to a new framework can be found at the Bibliographic Framework Transition Initiative website. LC is also soliciting demonstrations of prototype input and discovery systems that use the RDA element set, including relationships. LC is currently working with The MARC of Quality (TMQ) on a MARC mapping visualization tool and on RIMMF (RDA in Many Metadata Formats), which provides insights into RDA elements. Find more information in the updates from the U.S. RDA Test Coordinating Committee (June 20, 2012) online.

Jennifer Davis coordinated the training at GPO to prepare for cataloging federal government documents with RDA. Catalogers at GPO received more than 70 hours of training using a variety of tools. She stated that GPO had planned to implement RDA in fall 2011, but the delay provided an opportunity to learn the newly reworded version of RDA and gave them more time to study the RDA standards and take advantage of training opportunities. It also gave GPO’s ILS technical support time to install upgrades to Aleph, the system underlying the Catalog of Government Publications (CGP), to support new MARC fields. GPO will be training staff for the rest of 2012.

During the RDA test period, GPO catalogers created and shared sample RDA records for Congressional documents, serials, monographs, microfiche, and maps. Currently, GPO is cataloging in RDA only monographs, such as BIBCO records. By January, everything will be cataloged with RDA.

Jennifer also listed several notable changes from current AACR2 standards, such as “local practice” options for capitalization, elimination of the “rule of three,” and the impact of RDA on authorities. For more information on the implications of RDA for government publications, see Resource Description and Access (RDA) and the Implications of RDA for Federal Depository Libraries. Another option is to contact the Federal Depository Library Program (FDLP) directly at askGPO. GPO will look into the possibility of an electronic mailing list or a similar mechanism to alert depository libraries to new changes in RDA.

Jim Noel, representing MARCIVE (a cataloging services vendor), discussed problems and solutions encountered in implementing RDA. The main challenge was converting AACR2 records to RDA; they had to build, and keep adapting, conversion tables. He stated that the biggest change for MARCIVE customers is the RDA authority record process. Big changes to authority records will come in January and again in July: over 300,000 records in the summer batch and another 300,000 in the winter batch.

MARCIVE will also continue to leave the GMD in the 245 field; they already provide this service for some of their customers. Noel stressed there will be no additional charge for certain changes. MARCIVE will provide the new format, and customers who need changes due to local requirements should contact their representative.

Richard Guajardo presented ways a library can prepare for RDA, such as forming a task force drawn from the cataloging and systems departments to review decisions from the RDA test phase and provide input. Public services librarians should also have the opportunity to give feedback on the user display. Richard also spoke of RDA fields in the OPAC, discussing when to make RDA fields visible to patrons and which fields to display in the bibliographic record. He noted that MARC load tables will need to be updated, and he provided examples of display fields and subfields. He concluded by asking, “What does RDA mean for the library?” Implementing RDA will move us beyond MARC, especially for public services units. He encouraged libraries to closely examine their readiness and to contact their ILS vendor with specific needs.

Slides from this program are available from GODORT’s space in ALA Connect, http://connect.ala.org/godort

RDA Update Forum

By Elise Wong, Saint Mary’s College of California

The RDA Update Forum featured six panelists representing different organizations and committees in the library field, who outlined steps being taken in preparation for implementation of RDA sometime after January 2013. Presenters included:

  • Sally McCallum, ALA Machine-Readable Bibliographic Information Committee (MARBI);
  • John Attig, Joint Steering Committee for Development of RDA (JSC);
  • Glenn Patton, OCLC;
  • Linda Barnhart, Chair, Program for Cooperative Cataloging (PCC);
  • Beacher Wiggins, Library of Congress; and
  • Troy Linker, ALA Publishing.

Sally McCallum explained that the waves of RDA-related MARC changes are based on MARC/RDA group analysis, RDA test results, and PCC input. In addition to new MARC fields representing content, media, and carrier types, there will be new MARC fields representing attributes of names, resources, and relationships. Discussion of new values in the 006 and 007 fixed fields is underway. There will be new fields in the authority and bibliographic formats, changes of MARC tags from non-repeatable to repeatable, and additional subfields for existing fields. Related changes in the upcoming months include adding machine-actionable fields to relate titles and names (authority format), expanding field 368 (other corporate body attributes) to include person attributes (authority format), and making the 250 edition statement repeatable (bibliographic format). In short, some changes have settled down while others are rolling out at a slower pace. See RDA in MARC (June 2012).
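To make the new content, media, and carrier fields concrete, here is a hypothetical example (an invented record, using terms from the published RDA vocabularies) of how fields 336–338 might appear for an ordinary printed book:

            336 $a text $b txt $2 rdacontent

            337 $a unmediated $b n $2 rdamedia

            338 $a volume $b nc $2 rdacarrier

Together, these three fields replace the single General Material Designation (GMD) of AACR2 with separate, machine-readable statements of what a resource is and how it is carried.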

John Attig discussed recent progress from the Joint Steering Committee. Rewording of RDA based on RDA test participants’ comments is in the pipeline. Vocabularies in the RDA element sets have been added and will also be reflected in the RDA metadata registry. Terms and definitions have been compiled and added to the RDA glossary. Minor revisions of RDA text, examples, and appendixes will be incorporated into the monthly releases of the RDA Toolkit. JSC is preparing for its upcoming meeting in November 2012. Possible RDA changes and proposals are posted on the JSC website.

Glenn Patton from OCLC noted that all Technical Bulletins except 261 have been incorporated into Bibliographic Formats and Standards and/or Authorities: Formats and Standards. OCLC members should refer to Technical Bulletin 261: OCLC-MARC Format Update 2012 to learn about changes to OCLC-MARC records, many of which are related to Resource Description and Access (RDA), the proposed successor to AACR2.

There will be indexing changes in the bibliographic and authority formats. RDA work forms for bibliographic and authority records will be provided in Connexion. Enhancements include ongoing record matching for RDA elements, additional validation rules to ensure correct coding, and IP authentication for linking to the RDA Toolkit. There will be some MARC format changes and additional macros. The current OCLC policy statement remains in place. For new developments, OCLC users should review “Incorporating RDA practices into WorldCat: a discussion paper.”

No one will be required to do original cataloging in RDA. OCLC proposes to allow anyone to upgrade records created under earlier rules to RDA beginning March 31, 2013. The scope of RDA records is limited to those with cataloging language “eng.” Some changes will be made to existing records, such as adding access points and 336–338 fields and spelling out abbreviations. Concerning the controversial proposal to remove General Material Designations (GMDs) from AACR2 records, OCLC will delay the removal for now but will eventually implement it in WorldCat. OCLC is taking a first step toward adding linked data to WorldCat by appending Schema.org descriptive markup to WorldCat.org pages, and is working with Schema.org partners (e.g., Google, Bing, Yahoo) to include library vocabulary. This initiative is a further step in exposing library metadata to the broader web.

Linda Barnhart began by announcing that the Program for Cooperative Cataloging has a new website. As of March 31, 2013, new records will be constructed according to RDA standards and entered into the LC/NACO authority file. All access points in bibliographic records coded PCC must be RDA compliant, even if the description is AACR2. PCC Day One for RDA Authority Records is available for download in Word format. There will be no PCC Day One for RDA bibliographic records.

NACO RDA training can be found online as part of the Catalogers Learning Workshop. In preparing the LC/NACO authority file for the transition, PCC determined that 95.1 percent of headings would be formulated the same as in AACR2, 2.8 percent would need human intervention, and 2.1 percent would need machine updating. Once PCC identifies the records that need human intervention, record updating will begin. Identification of what needs machine updating will begin in March 2013. PCC and the Library of Congress will have a single consolidated set of RDA policy statements with the same general standards. You are invited to read the final report of the PCC/LC PSD RDA Policy Statements Task Group—look for the entry “RDA Policy Statements: Final Report (April 2, 2012)”—containing policy statements, task group reports, and RDA examples. New task groups will be formed for hybrid record guidelines, access points for expressions, relationship designator guidelines, and revision of the integrating resources manual.

Beacher Wiggins represented the Library of Congress. He announced that the U.S. RDA Test Coordinating Committee update is now available. Implementation will occur no sooner than January 2013. So far, 10,566 bibliographic and 12,800 authority records have been created. Recommendations from the three national libraries are being acted on, including rewording RDA (JSC), RDA Toolkit release scheduling (ALA Publishing), and MARC replacement (Library of Congress). Several library communities (e.g., PCC, ALCTS) are involved in training and preparation. The discussion of RDA in a non-MARC environment is reflected in the Bibliographic Framework Transition Initiative, www.loc.gov/marc/transition/ (this transition will not happen before RDA implementation).

The RDA implementation date for the National Agricultural Library and the National Library of Medicine is set for the first quarter of 2013; the Library of Congress’ Day One is March 31, 2013. The Library of Congress’ long-range RDA training plan was published on March 2, 2012.

ALA Publishing’s Troy Linker announced that the RDA Toolkit training calendar and archived webinars are available online. The first batch of reworded RDA chapters will be released and incorporated into the RDA Toolkit no later than December 2012; the remaining reworded chapters will be finished by mid-2013. RDA print updates will be released in December 2012 and mid-2013, with annual updates thereafter. New RDA Toolkit features include a page toggle, a searchable thesaurus, instruction revision history, and other customizations. Upcoming RDA Toolkit enhancements will include improved integration with the RDA metadata registry and automatic synchronization of customized tables of contents. Single-account sign-on (for non-IP-authenticated users), improved locally shareable bookmarks, and improved RDA Toolkit training modules are also planned.

Slides from this program are now linked to this session’s space in the ALA Scheduler, http://ala12.scheduler.ala.org/node/399

RDA Worldwide

By Autumn Faulkner, Michigan State University

The RDA Worldwide program included four speakers presenting on their countries’ approaches to RDA implementation.

Christine Frodl from the Deutsche Nationalbibliothek summarized the standards in use in Germany to date. As the transition to RDA approaches, the National Library’s focus is on translating RDA documentation and providing information sessions and training. The National Library plans to lead the German-language community in initially implementing Scenario 2 of RDA, which will apply to all records except internet publications. Policy statements on best practices for a variety of formats will be formulated, with Scenario 1 to be implemented down the road.

(At this point in the presentation, a fire alarm was set off in the conference center, causing an evacuation. Unfortunately, presenters were obliged to cut their presentations short!)

Ageo Garcia of Tulane University spoke on the history of cooperative cataloging in Latin American countries, emphasizing that standardization is on the rise in these regions. Annual meetings have become more regular, which will assist with the transition to RDA. Here again, translation of RDA documents into Spanish is a main focus in preparation for the shift, as are translations of FRBR models and examples. Based on these factors, Garcia expects to see extensive application of RDA in Spanish-language libraries of South America.

Chris Ray Todd came all the way from the National Library of New Zealand to discuss the challenges of RDA for her country. One of her first slides humorously pointed out the expanse of water between New Zealand and neighboring countries, illustrating that geographic distance has a very real impact on cataloging practice. When changes like RDA come, New Zealand libraries must monitor developments from a distance. Todd mentioned electronic mailing lists and webinars as primary sources of information, since overseas conferences are generally too costly to attend. The National Library has also built a wiki to share RDA updates and information with smaller libraries in the country, and it plans to support training in these communities as much as possible. Implementation may be postponed, however: January and February are summer months in New Zealand, and a majority of library staff will be on vacation in the period before the official March 2013 implementation date.

Kai Li, from the Capital Library of China, is also a student at Syracuse University. He lamented the fragmented nature of cataloging in China, where a variety of local standards are used and there is a good deal of inconsistency between institutions. In light of this, he foresees that RDA implementation will not be universal in Chinese libraries—rather, he expects to see a hybrid of local standards and RDA practices develop. RDA documents are also being translated into Chinese, and RDA and FRBR are popular topics among librarians, even though a widespread effort at implementation may not be forthcoming.

Slides from this program are now linked to this session’s space in the ALA Scheduler, http://ala12.scheduler.ala.org/node/258

The Air We Breathe: Borrowing Lessons from RDA Development to Train for Your Next Triathlon

By Miloche Kottman, University of Kansas

The panel discussed four lessons learned during the ongoing transition to RDA. The speakers were June Abbas, moderator/provocateur; Barbara Tillett of the Library of Congress, chair of the Joint Steering Committee for Development of RDA; Chris Oliver, author and editor of many RDA publications and a member of the RDA Advocacy Committee; Nannette Naught of IMT, Inc., the company overseeing RDA product development; and Troy Linker of ALA Publishing.

Lesson #1: It’s an evolution, not a revolution.

  • MARC can be a stumbling block
  • Collaboration and proposals are needed
  • Principle-based, data-model-driven instruction continues.

Barbara pointed out that for now, we still have to use MARC; Troy mentioned tools being developed to facilitate the use of RDA with MARC in cataloging utilities like Connexion and SkyRiver. Barbara emphasized the Joint Steering Committee’s desire for RDA change proposals that rework the rules to be more principle-based. Anyone can propose changes, from specialist communities (law, music, etc.) to individuals.

Lesson #2: It’s a journey, not a destination.

  • “It’s not done yet…” can be a stumbling block
  • Standard and practical examples are needed
  • Standards and workflows continue to grow and adapt.

June stated that we need to get out of the “it’s never going to happen” mindset. Chris pointed out that the shared workflows area in the RDA Toolkit is a good way to help staff navigate RDA, and that it would be helpful if communities (e.g., rare book catalogers) developed workflows and shared them with RDA users. Barbara mentioned that the Toolkit now links to practical examples under the Tools tab.

Lesson #3: It’s about people, not just machines.

  • Varied learning styles can be a stumbling block
  • Patience and multiple voices are needed
  • Training and implementation continue.

Chris mentioned her excitement at listening to a new generation of voices talking about RDA and talked about the fun she has teaching people without an AACR2 background to use RDA.

Lesson #4: It’s about engaging, not waiting.

  • Transformation can be a stumbling block
  • Patience and safe places to learn are needed
  • Post-event Toolkit access, training and development will continue.

Nannette pointed out how helpful and friendly everyone is when asked for assistance with RDA. Barbara indicated that there have not been any repetitive questions submitted to LC’s question/help mailbox at LChelp4rda@loc.gov, and encouraged people to continue to submit questions.

Transformation: Revenge of a Fallen Code. Morphing Our Current MARC Reality into a New RDA-enabled Future

By Tatiana Barr, Yale University Library

The title of this program was inspired by the science fiction action film Transformers: Revenge of the Fallen, and the program invited the audience to hear about “the trials and tribulations of library metadata specialists…trying to develop new frameworks for the future.”

The two speakers were Glenn Patton, OCLC’s director of WorldCat quality management, and Nannette Naught, vice president of strategy and implementation for IMT, Inc. Glenn stood in for Jean Godby, an OCLC research scientist who works on metadata transformations.

Jean Godby’s presentation gave an overview of OCLC’s role in the post-MARC landscape, what it has accomplished, how RDA fits in, and what challenges it faces that affect the future of the WorldCat interface.

OCLC works with, or is influenced by, a number of outside standards, partners, and initiatives.

Glenn asserted that MARC is outdated and that RDA’s potential cannot be realized without changes in the carrier. Libraries need to embrace common web standards that will move the library technology environment from a niche market to a model readily understandable by future creators, data modelers, and programmers. OCLC’s long-term goal is to create research ontologies and make WorldCat compatible with new standards. The challenge is that no existing standard fully supports OCLC’s data modeling needs.

Another challenge with linked data is that the model is too radical for current production-grade systems, and it needs the cooperation of third parties such as the ISSN and ISBN agencies. We may also be underestimating the effort required to convert library records into sets of useful statements. OCLC’s model “is compatible with goals of RDA/FRBR,” but the great challenge is again legacy data. RDA ontologies are about relationships between works and people, but in legacy data this information is often buried in textual 5XX note fields. OCLC is working to answer the question, “How do you turn this data into ‘well-behaved’ linked data able to work in the broader web world?” They have learned a great deal about interoperability from the Virtual International Authority File (VIAF) and the Faceted Application of Subject Terminology (FAST).

Nannette Naught represented the publishers’ point of view. IMT, Inc. works with establishments engaged in data processing and preparation, and one aspect of its work is training publishers in RDA. Nannette described how publishers and users drove the change to RDA. Users were suffering from web overload; as our systems grew, we stretched them beyond the limits of our patience. Nannette said that publishers expect formats to be agnostic, models to be extensible, and identifiers to be present. Librarians had to respond with new systems, models, and encoding schemas.

We will have to manage the chaos that will be released March 31, 2013. Nanette defined this chaos as the mixed world of RDA in MARC, RDA without MARC, and the world of the future: linked data. “Managing chaos” means making these separate worlds work together.

IMT, Inc. is exploring RIMMF (RDA in Many Metadata Formats) and experimenting with the APA Taxonomy Tagging Template to extend data fields and terms using publishers’ information, RDA elements, ONIX, and MARC for journal titles. The result is an APA record that controls RDA-style data, linked authority data, and resource subject metadata.

The Big Deal

CMS Forum
Emerging Research in Collections Management & Development

See the article under Forums.

Ending the Big Deal: Truth and Consequences

By Tammy S. Sugarman, Georgia State University

The economic downturn in 2008 and its subsequent negative effect on library collection budgets forced many libraries to carry out serials cancellation projects. When libraries found that eliminating individual serials subscriptions did not reduce costs enough, they looked at breaking apart “big deals,” the journal bundles that account for a large portion of a library’s collection expenses. The three speakers at this program, sponsored by the Continuing Resources Section (CRS), described their libraries’ experiences in leaving one or more big deals.

Jonathan Nabe, collection development librarian for science and technology at Southern Illinois University (SIU), discussed breaking big deals with Elsevier (loss of 242 titles) and Wiley (loss of 597 titles) for annual six-figure savings. Although the library considered usage data in determining which titles to subscribe to and which to cut, Jonathan concluded that postcancellation ILL requests, rather than numbers of downloads, provide a “true measure of need.” Of the cancelled Wiley titles, 20 percent were requested via ILL in the subsequent 24 months; for Elsevier, 26 percent. While increased ILL requests were one consequence of leaving the big deals, Jonathan emphasized that the increased workload in this area was manageable. Overall, negative reaction on the SIU campus was minimal, and positive reaction came from the university administration. Jonathan cautioned libraries thinking about ending big deals to “prepare for potential reaction” from publishers, including paying new or increased content fees and list prices for journal titles. He concluded by urging librarians to evaluate alternatives, be willing to walk away from the big deal, and expect and engage in tough negotiations.

David Fowler, head of licensing, grants administration, and collection analysis at the University of Oregon (UO), reported an economic situation in 2008–2009 similar to SIU’s. The library collected two years of data, including journal cost, usage, and cost-per-use, in order to target high-cost, low-use titles for cancellation. After discussions with Portland State University and Oregon State University about cooperative collection development, the three schools negotiated a new shared Elsevier deal for two years (ending in summer 2011). Overall, UO cut 14 percent of its Elsevier journal subscriptions but gained shared access to the other two schools’ titles. The UO library’s Wiley deal was part of the Greater Western Library Alliance (GWLA), through which UO had access to approximately 1,000 titles plus shared access via its GWLA partners. While UO considered working with OSU to negotiate a Wiley contract, David said that “not enough time and energy remained to put together a second mini-deal.” As a result of leaving the GWLA deal, UO saved approximately $168,000, and David said patrons have adequate access to the highest-use titles. David concluded that a future serials cancellation is “inevitable,” but UO librarians now have the background and experience to deal with it.

The final speaker was T. Scott Plutchak, director of the Lister Hill Library of Health Sciences at the University of Alabama at Birmingham (UAB). When the U.S. economy faltered in 2008, UAB had finished a five-year big deal with Elsevier as part of the Network of Alabama Libraries, and Plutchak said that at the time he felt uncomfortable with the $500,000 UAB was spending with Elsevier. In order to cut approximately $200,000, the library went to individual subscriptions, resulting in a loss of access to approximately 4,000 titles. In response to the cancellations, the library reduced the cost to patrons for ILL and instituted mediated pay-per-view for articles. However, Scott stated that in biomedicine, requesting material and waiting for delivery does not fit the workflow of researchers. In 2010, focus groups of faculty and graduate students said the loss of titles negatively affected their research and teaching, and LibQUAL+ data showed that faculty view the loss of access to electronic content as a major issue with the library. In conclusion, Scott said he would not return to a big deal because “flexibility trumps modest price caps.” While individual subscriptions may cost more than a big deal package in the long run, he believes the library is more nimble without the multi-year commitment. Scott suggests that libraries look at their own campuses and spending, and make decisions on an annual basis.

Slides from this program are now linked to this session’s space in the ALA Scheduler, http://ala12.scheduler.ala.org/node/259

Preconferences

ALCTS Preconference
A Change in Authority: Authority Work in the RDA Environment

By Julie Renee Moore, California State University, Fresno

The ALCTS preconference on authority work in the RDA environment was presented by Ana Lupe Cristán, Cooperative Cataloging Specialist in the Policy and Standards Division at the Library of Congress (LC), and Paul Frank, Cooperative Cataloging Specialist in the Cooperative and Instructional Programs Division at LC. Paul also oversees the Name Authority Cooperative Program (NACO) and Subject Authority Cooperative Program (SACO) of the Program for Cooperative Cataloging (PCC).

This LC dynamic duo provided an excellent overview of what authority work might look like in the RDA world, given that we are still in a state of flux.

RDA implementation means not only changes to bibliographic records but also changes to authority records to support the FRBR/FRAD relationships. Authority files will ultimately be reissued in RDA. Most of the authority rules stay the same as in AACR2; RDA changes some of them. There are also a number of new MARC 21 fields in authority records that will begin showing up in our catalogs.

There were many PCC members in this preconference; they plan to create authority records in RDA beginning with the Library of Congress’ RDA Day One, March 31, 2013. The speakers emphasized the need for flexibility during this transition period. LC has just published its Library of Congress Policy Statements (LCPSs). PCC policies are still in flux; things change by the minute, and some of the presenters’ slides were already out of date. One of the best places to keep an eye on PCC RDA activities is the PCC website.

Ana Lupe Cristán gave an overview of FRBR and FRAD, FRBR being the foundation of RDA. She gave the class a tour of RDA authorities information and the places to find it: the RDA Toolkit, the LCPSs (free), the MARC 21 Authority Format, DCMZ1, the PCC website, and the NACO RDA Participant’s Manual, scheduled to be made available in September 2012. Throughout the workshop, the Catalogers Learning Workshop was mentioned a number of times as a good place to start for guidance and online training. The Catalogers Learning Workshop website includes a page of LC RDA training materials, which could be of great use to others as well.

Paul Frank discussed MARC 21 coding in RDA records. An RDA authority record can be identified by 008/10 = z (“other”), the value used for RDA, together with 040 $e rda. Frank reminded us that of the 8.3 million authority records, only 0.5 million are not acceptable under RDA and will need to be changed. Special note was made of religious headings (especially Bible, N.T. and O.T.) and music headings.
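As a quick illustration (the coding conventions are those Frank described; the record itself is invented), the top of an RDA name authority record might carry:

            008/10: z (descriptive cataloging rules: other)

            040 $a DLC $b eng $e rda $c DLC

The $e rda in the 040 field is the quickest visual cue that a record was formulated under RDA.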

Takeaways include:

  • Some fields will still be used without any changes at all, for example 670 (information found), 675 (information not found), and 663 (multiple pseudonyms).
  • Some old fields will be used with minor changes; examples include 1XX, 4XX, and 5XX, where $c may occur in front of $q (RDA 9.19.1.1).
  • 5XX fields will use the relationship designators found in Appendix K.
  • 678 (biographical or historical data note) is meant for the public to view; as such, it should now be written in complete sentences.
  • The brand-new fields are all optional, but we are encouraged to use them because they are searchable and machine-manipulable. They are listed below by the record types to which they apply.

All authority records

  • 046 (special coded dates)
  • 370 (associated place)
  • 371 (address)
  • 373 (associated group)
  • 377 (associated language)

Personal names only

  • 372 (field of activity)
  • 374 (occupation)
  • 375 (gender)
  • 378 (fuller form of personal name). Note: we will no longer provide the fuller form of the name in the 1XX field. For example:

            100 1 $a Smith, Nancy

            378 $q Nancy Elizabeth

Corporate bodies only

  • 368 (other corporate body attributes)

Family names only

  • 376 (family information)

Works and expressions only

  • 336 (content type)
  • 380 (form of work)
  • 381 (other distinguishing characteristics)

Music only

  • 382 (medium of performance)
  • 383 (numeric designation of musical work)
  • 384 (key)
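Pulling several of the new fields together, a hypothetical RDA authority record for a personal name might look like this (an invented example for illustration, not an actual record from the authority file):

            100 1 $a Smith, Nancy, $d 1950-

            046 $f 1950

            370 $a Chicago (Ill.)

            374 $a Librarians $2 lcsh

            375 $a female

            378 $q Nancy Elizabeth

The 046 and 37X fields record information, such as birth date, associated place, occupation, and gender, that formerly lived only in free-text 670 notes, making it searchable and machine-manipulable.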

Frank then explained, “RDA is a content standard, not an encoding standard.” He also reminded us that “RDA is agnostic,” because it tells you what content to provide, but not where to put it.

Personal names, family names (families can be “creators” in RDA), corporate bodies, and geographic names were discussed along with examples and exercises.

Frank strongly encouraged the audience to download and make use of the RDA learning tool RDA in Many Metadata Formats (RIMMF), developed by Deborah and Richard Fritz of The MARC of Quality. This learning tool can get people thinking in RDA. It is in use at the Library of Congress and is freely available to everyone else as well.

This was a valuable preconference with excellent speakers, and an audience eager to learn how we will be cataloging in 2013. Materials from this program are now linked to this session’s space in the ALA Scheduler, http://ala12.scheduler.ala.org/node/1209

ALCTS/LITA Preconference
Creating Library Linked Data: What Catalogers and Coders Can Build

By Jessica Hayden, University of Northern Colorado

This preconference, cosponsored by LITA and ALCTS, focused on developments in the world of linked data and set out to give attendees experience using linked data. Facilitated by Debra Shapiro, the day began with a keynote address by Zepheira’s Eric Miller. During this address, Eric stressed that linked data is not about libraries; it’s about the Web. Linked data does, however, have huge implications for libraries. He also argued that data should be left where it naturally resides rather than moved to one giant data silo; we just need to make sure that the data connects, that it is exposed using the Web as a platform. He showed examples of projects that take advantage of linked data, including the Library of Congress’ Viewshare and its linked data service. Eric wrapped up his address by listing next steps toward making library linked data a reality: identify existing data sets; begin to discuss licensing for individual records rather than databases as a whole; identify and expose identifiers (people, places); develop policies; begin to think about data curation; accelerate linkages; and work on making MARC work as linked data.

The session then moved on to a series of lightning talks by people involved with linked data projects. Jon Voss announced a “challenge” for teams to enter a competition to create a linked data app for the chance to win a trip to the June 2013 LOD/LAM conference in Montreal. Karen Coombs joined the session remotely to discuss linked data developments at OCLC, focusing on Schema.org. Bob Syslo-Seel talked about Serials Solutions’ work on linked data and the process of redesigning their database to incorporate FRBR concepts, noting that the new Intota system is being developed to utilize linked data. Kevin Ford discussed the Library of Congress’ linked data service and the importance of value vocabularies. Michael Panzer from OCLC announced the availability of the Dewey Decimal Classification and Relative Index, edition 23, as linked data. Corey Harper compared open- and closed-world assumptions and discussed current Dublin Core initiatives, including the DC Abstract Model. Jenn Riley talked about what needs to happen at the administrative and community levels to make linked data systems happen. Rachel Frick talked about the importance of making small, uncataloged, “hidden” collections findable. Phil Schreur discussed the scope and outcomes of the Stanford Linked Data Workshop that took place during summer 2011. Last in the series of talks, Jennifer Bowen discussed the eXtensible Catalog and noted that the next phase of its development will focus on utilizing linked data.

In the afternoon, participants broke into eight working groups. The groups varied from ones doing hands-on exercises, such as using Viewshare to map a collection, to ones devoted to discussion of policy related to linked data development. The day ended with the eight groups reporting back on their activities.

Materials from this program are now available in ALA Connect, http://connect.ala.org/node/179435

ALCTS Programs

ALCTS Affiliates Showcase

By Arthur Miller, Princeton University

This year’s Annual Conference featured the first ALCTS Affiliates Showcase, which will hopefully become an annual event. The showcase offered summaries of three presentations recently given at local conferences, each strong enough to merit the additional exposure that the Annual Conference provides.

Each presentation was kept to 15 minutes, so questions were held until the end. “Libraries Technical Services Process Improvement Based on LEAN” was presented by Cyrus Z. Ford, Greg Voelker, and Richard Zwiercan, all of the University of Nevada, Las Vegas. They outlined how they totally revamped procedures in their library, making them more effective and more efficient.

“Digging into Our ‘Hidden Collections’: Maximizing Staff Skills and Technology to Enhance Access to Special Collections” was presented by Sarah Buchanan, The Meadows School, Las Vegas, and Elaine A. Franco and John Sherlock, both from the University of California, Davis. This was a good clean description of one approach to making a library’s hidden assets accessible to the public, building benefits for the library in the process.

“Best Practices for Collaboration in Technical Services and How it Can Filter Out to the Rest of the Library” was presented by Jana Slay, Ruth Elder, Olga Casey, and Erin E. Boyd, all from Troy University, Montgomery Campus. One ongoing complaint in many libraries is that different sections do not communicate well, let alone work well together. This presentation showed how the librarians successfully solved this problem.

Of course, each of these presentations was a summary of a larger one, so there was a limit to the information that could be presented—but the ALCTS Affiliates group planned for that. You can download the original presentations and presenter notes from ALA Connect. By mid-August 2012, the node had already received more than 500 views, a testament to the quality of the presentations.

ALCTS 101

By Deborah Ryszka, University of Delaware

“ALCTS: What It Does, and How You Can Be Involved” was the focus of this year’s ALCTS 101 event, held on Friday, June 22, 2012, from 7:30 to 9:30 pm, at the Hilton Anaheim. About 80 new and veteran ALCTS members attended the fun-filled evening, organized and hosted by the ALCTS Membership Committee and the ALCTS New Members Interest Group (ANMIG).

ALCTS President Betsy Simpson and President-elect Carolynne Myall opened the evening’s program. They briefly described the important work that happens within ALCTS and why ALCTS is such a dynamic organization. Carolynne encouraged attendees to consider becoming involved in the organization by volunteering to serve on a committee. She described the committee appointment process within ALCTS and the different types of committee appointments available to those who submit a volunteer form.

The main portion of the evening was a speed-dating event where guests went from table to table to learn about ALCTS, its sections, and other professional topics such as networking and publishing. True to a speed-dating event, guests were encouraged to move to a new table every five minutes. Experienced ALCTS colleagues staffed the themed tables, providing information about each table’s topic.

The ALCTS Acquisitions Section (AS) was represented by Jim Dooley and Robin Champineaux; Lori Kappmeyer and Deborah Ryszka served as representatives for the Cataloging and Metadata Management Section (CaMMS); and the Collection Management Section (CMS) had Brian Falato and Harriet Lightman as its table leaders. The current and incoming chairs of the Continuing Resources Section (CRS), Meg Mering and Jacquie Samples, provided information about CRS, and preservation experts Becky Ryder and Gina Minks staffed the table for the Preservation & Reformatting Section (PARS). Information on the ALCTS Division-level committees and their functions was provided by several current committee chairs, including Miranda Bennett, Janet Lute, David Miller, and Susan Wynne.

Throughout the evening, the Networking table was a busy stop during and after the speed-dating rotation. Guests were interested in hearing tips from Dina Giambi, Loretta Staal, Susan Davis, and Cindy Hepfer on how to network professionally and successfully at conferences.

Guests who were interested in obtaining practical advice on how to become involved in ALCTS received useful pointers at the table staffed by experienced and active ALCTS members Keri Cascio, Dracine Hodges, and Kristin Martin. Norm Medeiros, Alice Platt, and Dale Swensen presented information on ALCTS publications and on how new and potential members could contribute. The Resume Review table was staffed expertly by Elaine Franco, Meg Manahan, and Lisa Spagnolo, who offered tips and suggestions for improving resumes and cover letters.

After the speed-dating event, guests were encouraged to fill out ALCTS bingo cards with information they learned. Certificates for ALCTS webinars and ALCTS membership renewals were awarded to the guests who completed their cards. Library school students in attendance were presented with Starbucks gift cards.

The evening concluded with a short ALCTS New Members Interest Group business meeting. Elizabeth Siler will be the new chair of the Interest Group. For the coming year, Erin Boyd and Emily Sanford will serve as cochairs-elect, and Elyssa Sanner will be the group’s secretary.

After the event, many commented that they enjoyed the speed-dating format. It was a fun and exciting way to interact with like-minded colleagues and to hear information about ALCTS and its mission.

Brittle Books Strategies for the 21st Century

By Mary Miller, University of Minnesota

On Sunday, June 24, the Preservation and Reformatting Section (PARS) presented a program chaired by ALCTS member Kim Peach that examined recent changes in brittle books programs. The panelists discussed new options for providing access to content in brittle books, as well as the decision-making processes and workflow practices they’ve developed at their own institutions.

Emily Holmes, Assistant Director for Preservation Reformatting and Metadata at Columbia University Libraries, discussed how Columbia redeveloped a well-established brittle books program to accommodate a digitization workflow. As recently as 2003, Columbia, like many of its peer institutions, was still microfilming and photocopying brittle materials to preserve content, and in-house digitization efforts were limited to special collections. As digital surrogates became an increasingly accepted means of replacing print materials, the library staff realized they needed to change the program to support digitization of circulating collections.

Columbia undertook a pilot project to develop new workflows, test new equipment, and engage selectors in the process. They began by examining what the libraries already had in place (a motivated staff, a good understanding of brittle book workflow) and what they lacked (adequate equipment and experience in digital imaging). They visited imaging labs at Harvard, Cornell, and various vendors, researched equipment, and retrained staff. Throughout the course of the pilot, each division of the library was allotted a certain number of pages for the project, and was encouraged to send monthly allotments. They learned several important things from the pilot:

  • They could successfully leverage expertise they already had with traditional reformatting and cataloging, such as collation, locating prospective pages, and prospective cataloging
  • They should make workflow flexible and extensible in order to allow for vendor-created images, customer orders, faculty requests, online exhibits, etc.
  • A great deal of patience and flexibility, as well as an ongoing financial commitment, would be necessary to deal with constant changes in software, operating systems, and equipment, and increases in demand.

Allyson Donahue, Preservation Review Librarian at Harvard College Library (HCL), discussed the workflow and decision-making criteria HCL developed for treating both brittle and non-brittle books. Each item is checked against HathiTrust, the Internet Archive, or Google Books to determine if a digital surrogate exists. Even if a copy is found, they will occasionally re-digitize the item if the digital copy is of poor quality; of the approximately 3,000 items they send to Imaging Services each year, approximately one-third have digital surrogates already.

Allyson also discussed cataloging aspects of the program. HCL records preservation actions in a note in the MARC 583 field. Staff also note materials they chose not to digitize immediately, such as long-run serials or oversize materials; these can be revisited later as digitization becomes more economical. Once items are digitized, HCL generally changes the status to non-circulating and the item becomes reading-room use only, although they rarely suppress the record.
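As an invented illustration of the practice Allyson described (the subfield coding follows the MARC 21 definition of field 583; the specific values are hypothetical), a preservation action note might read:

            583 1 $a digitized $c 20120615 $2 pda $5 MH

Here $a names the action, $c gives the date of the action, $2 cites the controlled vocabulary from which the action term is taken, and $5 identifies the institution to which the note applies.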

Kara McClurken, Head of Preservation Services at University of Virginia Libraries, presented the results of a survey that examined how college and university libraries are addressing heavily damaged materials in their collections. Of 66 respondents, 52 institutions indicated that they had workflows in place to deal with heavily damaged materials. While brittleness was one of the most frequently cited issues, water damage, vandalism, inability to be rebound, and the need for time-intensive repairs were also common. The majority of institutions dedicated limited staff time (5 percent FTE or less) to workflows for heavily damaged materials.

Respondents noted that local criteria factored prominently in the decision-making process: for instance, 64 percent used circulation data, and 85 percent factored in the number of copies available at the institution. The majority of respondents also looked at the number of copies in WorldCat (62 percent), whether a digital surrogate existed (69 percent), and whether a replacement copy could feasibly be purchased (74 percent).

The institutions that responded varied in the kinds of decisions they made regarding heavily damaged materials. While 90 percent considered withdrawal and 95 percent considered replacement as options, only 50 percent considered creating a digital surrogate, and only 36 percent considered outsourcing repair. At least one respondent noted that the default decision at their institution was a digital facsimile placed in HathiTrust.

Kara also discussed how the University of Virginia’s relatively new preservation program has been handling heavily damaged materials in its circulating collection. UVA withdrew, or withdrew and replaced, a large percentage of the items that came through the preservation department. A smaller number were put in protective enclosures, a very small percentage underwent intensive repair, and still others were transferred to special collections. Finally, Kara noted some unforeseen benefits of working to find solutions for brittle and other heavily damaged materials: increased collaboration between preservation staff and subject librarians, special collections staff, acquisitions staff, and interlibrary lending services.

The E-book Elephant in the Room: Determining What’s Relevant and Effective for Your Patrons & Making Effective Decisions for Your Future E-collection

A representative for ANO was unable to attend this session, but a report on this session appeared in the No Shelf Required blog.

ALCTS/ACRL Presidents’ Program
Future of the Book: Innovation in Traditional Industries

By Carolyn Coates, Eastern Connecticut State University

In this Monday morning joint program, Duane Bray, Head of Global Digital Business and Partner at the global design firm IDEO, spoke on IDEO’s exploration of the “future of the book,” or, as amended in his slides, “perhaps the future of narrative.” IDEO’s project on imagining the future of the book and the potential of book publishing in digital formats identified three distinct opportunities, or models, for ways in which publishing might develop in the near future. The first concept, called “Alice,” makes storytelling participatory and non-linear; the second, “Coupland,” aims at making book discovery and reading more social; and the third, “Nelson,” connects books to commentary and contextual information. These concepts were presented as short videos that can be viewed on IDEO’s website. The “perhaps the future of narrative” emendation was apt, in the sense that Bray focused on ways in which digital technologies allow story or text to move from reliance on a discrete physical object, like a conventionally printed and published book, into a web of digital narrative connections.

The talk unpacked these ideas and provided examples of how narrative is evolving as digital tools become portable and ubiquitous. Bray had four tightly integrated themes: location, social media and collective intelligence, personal narrative, and co-authored narrative. He wondered how each might change the practices of libraries and of authors. The location services made possible by smartphones can enhance the “storyverse” of a given narrative by identifying and moving the reader/viewer through local references and settings. One example of this sort of location-based storytelling, or “terrative” (terrain plus narrative), was the mobile app “Parkman” (Untravel Media), which takes the viewer/listener on a walking tour of sites associated with a notorious nineteenth-century murder in Boston. Games and other activities help the story unfold. Would walking around a city with an app that pointed out sites mentioned in your favorite novel change your experience of that work? What consequences might this sort of storytelling have for readers, authors, or libraries? Do such tools enhance or detract from the storytelling process? Are they solitary, or group-based? Can libraries participate in or be collectors of local stories through events and performances?

Second was social media and the potential of collective intelligence. Can a network help build a narrative around the experience of place? Can it change the experience of story? Examples in this theme included Pocket Tales, a game for kids that promotes reading and the sharing of reading lists, and Tim Burton’s use of voting to determine the course of an exquisite corpse project at MoMA. Much is made of the way in which mobile phones take the attention of their users away from the immediate moment and companions, but perhaps an accessible social network can enlarge the experience of the moment. Do tools designed for self-expression help to create better storytelling? Would the use of social reading lists and other tools amplify the curatorial roles of libraries? Do libraries have a role in the online discovery of social narratives?

Examples of the rise of personal narrative examined the ways in which online identities can be created through implied or explicit stories in order to communicate, manipulate, or deceive. The move towards multi-threaded or co-authored narratives comes in the form of fan fiction, mobile apps based on TV shows, and websites that corral fans into further interaction (potentially) with primary authors and each other. The power of fans, he pointed out, is sometimes seen as extreme and best ignored, but Bray sees creative power in moving from a model of narrative production in which a work is created and then published to a model in which a narrative is created and then allowed to evolve as it interacts with its readers/consumers. What is the best sort of medium for this? Could libraries serve as publishers for fan fiction, or as hosts for fan groups?

Bray ended his talk with an explanation of six techniques IDEO uses for “human-centered forecasting,” or attempts at understanding and addressing design challenges: observation (in a multi-threaded way), empathy (put yourself into the situation), inspiration (how similar problems are solved in other contexts), storytelling (building a narrative about new ideas), spaces (consider the environment and physical context), prototyping (making ideas tangible).

This program hinted at ways that libraries might expand traditional outreach programs and encourage and support better digital tools for both scholarship and personal expression. The Q&A also touched briefly on some of the big issues, such as the preservation of diverse digital forms and the difficulties of institutional change, that are bound to confound libraries and librarians of the very near future, or even of the present.

Integrating ‘e’ and ‘p’: Building a New Monograph Approval Infrastructure

By Alice Platt, Boston Athenaeum

In the past, most e-books were purchased in packages, but now we are starting to see them as part of approval plans from vendors such as YBP and Ingram. The University of Colorado at Boulder and Brigham Young University are both early adopters of this type of program. Approximately 50 attendees gathered to hear from panelists Gabrielle Wiersma, Electronic Collections & Assessment Librarian, University of Colorado at Boulder Libraries; Jenny Hudson, Senior Collection Development Manager, YBP; Rebecca Schroeder, Monograph Acquisitions Librarian, Brigham Young University; and Sarah Forzetting, Collections Consultant, Coutts Information Services at Ingram Content Group.

The librarians discussed their experiences developing e-book approval plans, while the vendors discussed e-book challenges from their points of view. Both librarians agreed that availability and cost are the biggest challenges in creating e-book approval plans. Not all titles are available as e-books; while the quantity is increasing dramatically, e-books still make up only 20 percent of the titles Ingram has available in approval plans each year. Another challenge is uncertainty about if and when a book will be available in e-book format, since e-books often come out several months after the print edition. However, this too is becoming less of a problem over time.

Plan advantages included the ability to designate different approval processes for different subject areas. One subject area could be designated for patron-driven acquisitions (PDA) while another subject area might be selected for title-by-title ordering. Both librarians emphasized the importance of working closely with the vendor to develop an accurate profile. It was noted that it might take six months to fully develop the profile, but the effort is worth it. Both Ingram and YBP remarked on the automated ability to prevent duplication in ordering.

Both vendor representatives agreed that it is possible to receive complete MARC records that mirror the print records, but there was some indication that there might be a cost involved with ensuring the e-book MARC records are as strong as print MARC records.

While both librarians noted that the number of e-books purchased relative to print materials is very small, they both considered their experiences successful. It seems prudent for other major libraries to begin similar programs to test the waters; as e-book availability and use grow, approval plans for e-books will become the norm.

Materials from this program are now linked to this session’s space in the Conference Scheduler, http://ala12.scheduler.ala.org/node/250.

Transforming Technical Services: Growing IT Skill Sets Within Technical Services Departments

By Teressa Keenan, University of Montana

This program brought together five speakers from different backgrounds to speak about adapting technical services departments to remain relevant in a predominantly digital environment. A panel discussion followed the individual presentations. A common thread among the speakers was the effectiveness of growing IT skills within technical services departments. They described the specific skills that were found to be useful, how to grow these skills in-house, and what strategies were used to overcome potential barriers.

Boaz Nadav-Manes, Director, Acquisitions & Automated Technical Services, and Philosophy Librarian, Cornell University Library, began his presentation by comparing technical services personnel to artists: we move ahead and push forward; we make sense of things; and we do things that are relevant for a small audience for a short period of time. He gave specific examples that involved using tools such as MS Access, flowcharts, project management tools (JIRA), MarcEdit, and Perl to enable services such as bulk record loading, batch searching, and bulk import. He argued that it is very important to cultivate new skills within the department and to create friendly interfaces, so that tools can be used by non-specialists. He emphasized reaching for balance: move forward with the things that are good and leave behind the things that are not. Part of his guiding principle was that departments should automate everything they can, while acknowledging that some things cannot, and some things should not, be automated.
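For readers curious what such bulk record work looks like in practice, here is a minimal sketch in Python with the pymarc library, rather than the Perl and MarcEdit tools Nadav-Manes named. The file names are invented, and the task (filtering a batch of MARC records down to those with an 856 electronic-location field) is only an illustrative stand-in for the bulk loading and batch searching he described.

    # Illustrative sketch, not from the talk: filter a file of MARC
    # records to those with an 856 (electronic location) field.
    from pymarc import MARCReader, MARCWriter

    with open('incoming.mrc', 'rb') as infile, \
         open('online_only.mrc', 'wb') as outfile:
        writer = MARCWriter(outfile)
        for record in MARCReader(infile):
            if record and record.get_fields('856'):
                writer.write(record)
        writer.close()

The same pattern generalizes to the edits he mentioned: replace the filter with a field change and the loop becomes a bulk record editor that non-specialists can run as a script.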

Paul Gallagher, Associate Director, Discovery Services, New Media and Information Technology, Wayne State University, began his presentation by teasing the audience with the possibility of a magic bullet. He let us down gently: the solution to all our issues is not quite that simple. It is instead a constant, iterative process, more akin to a thousand points of light working together to solve a problem. He focused on the importance of determining what we can do, what we can’t do, what we shouldn’t do, and what we are already doing. He suggested using task analysis to identify the gaps and then developing the infrastructure needed to fill them. He described a technical services staff with an amazing amount of experience, wisdom, and depth, but one that runs the risk of getting stuck in the mud; it is important for us to acknowledge that we need to change. He then provided tips for incrementally adding skills and for successfully expanding the work done in technical services.

Gary Strawn, Authorities Librarian, Northwestern University Library, offered a view of the journey he has taken in his professional career, with solid examples of how he grew his IT skill set, particularly in programming languages and relational databases. He then discussed more generally how our environment is constantly changing and how we need to adapt along with it, stressing the importance of looking ahead and of encouraging experimentation and learning.

Tasha Bales, Library IT Analyst, University Library, University of California, Santa Cruz, provided examples of specific projects and problems that her library solved by increasing IT skills. She discussed how decreasing budgets, reduced staffing, emerging technologies, ILS limitations, and minimal IT support were driving forces for her library. The following skills were highlighted as important:

  • Application software utilities (link resolvers, URL checking, etc.)
  • Programming (scripting languages)
  • Training and instruction documentation/technical writing
  • Project management

She also highlighted some of the tools they discovered:

  • MarcEdit
  • VBScript
  • Tera Term
  • Marcxgen
  • AutoIt

She emphasized the importance of starting small and suggested looking for ways to reduce tedium and repetition. It is important to seek out tools that are readily available, well documented and easy to use. She also emphasized the importance of planning, suggesting it is necessary to use resources wisely and to focus attention on solutions that would have the most impact. She discussed some of the barriers to increasing IT skill sets and ended by highlighting positive results such as self-sufficiency, technological agility, improved decision making, and better communication.
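As an illustration of the “reduce tedium” point, a batch URL checker of the sort her skills list implies might look like the following minimal Python sketch. It uses only the standard library; the input file of URLs is a hypothetical stand-in for an exported e-journal A-Z list.

    # Hypothetical sketch: check a list of URLs in bulk instead of
    # clicking each one by hand. Reads one URL per line from a file.
    import urllib.request
    import urllib.error

    with open('urls.txt') as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(resp.status, url)   # e.g., 200 for a live link
        except (urllib.error.URLError, TimeoutError, ValueError) as exc:
            print('ERROR', url, exc)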

Lai-Ying Hsiung, Head of Technical Services, University Library, University of California, Santa Cruz (UCSC), defined IT and IT skills as technology that serves the needs of technical services. She discussed specific applications such as Excel, Word, the ILS, and Perl, and the importance of transformative use of skills. It is important, she said, to take the time to learn to maximize the use of systems and software; her philosophy is to never do anything yourself that you can make the computer do for you. She suggested evaluating workflows: “anytime it takes staff a long time to do something, stop, think about the process, and what you can change to make it easier.” She then discussed specific processes, such as batch record manipulation, searching, uploading, and downloading, that could be improved with IT, as well as the strategies and training necessary to make staff successful.

Materials from this program are now linked to this session’s space in the Conference Scheduler, http://ala12.scheduler.ala.org/node/249.

ALA Transforming Collections

By Ben Hunter, University of Idaho

Jamie LaRue of Douglas County Libraries, Robert Kieft from Occidental College, and Robert Wolven, Columbia University, spoke about the present and near-future of library collections from the perspective of public, college, and university libraries. They addressed a diverse range of topics highlighting both the similarities and differences in collections issues at the different types of libraries and provided an imposing list of opportunities and challenges facing libraries in the coming years.

Jamie offered a detailed perspective on the role of libraries, publishers, and distributors as intermediaries in the relationship between author and reader, specifically regarding electronic content. Much of his talk was informed by his work with Douglas County Libraries’ locally created and maintained e-book platform and the related negotiations with publishers to purchase and digitally distribute their content.

He defined four streams of content that libraries are currently dealing with: mainstream commercial publishers (specifically the “big six”), midlist and independent publishers, local history, and self-publishing. While libraries are often focused on mainstream publishing, there are significant opportunities for libraries to work with smaller publishers; these companies tend to be much more receptive to working with libraries. Also, there is a huge volume of work that is self-published each year. Not only are there opportunities for libraries to distribute self-published works, but libraries can offer substantial support to local authors as they create and publish their work.

Jamie offered numerous principles underlying collections issues including ownership, discount, integration, and publisher relations. Ownership and discounted prices have both been lost through relationships with providers such as OverDrive, and different interfaces from different vendors have destroyed an integrated and consistent user experience. Libraries have largely lost their relationships with publishers, and instead now deal primarily with distributors. Generally speaking, when libraries do deal with publishers, they tend not to negotiate well.

The locally created e-books platform at Douglas County Libraries addresses many of these principles and their related issues. They deal directly with approximately 800 different publishers to purchase content and make it available through a single, unified interface. By dealing with publishers directly, libraries can reclaim their discounts. Importantly, this approach preserves both the libraries’ role as a content provider and, through use of industry-standard digital rights management software, publishers’ business models.

Robert Kieft delivered a paper written for the occasion that addressed the college library collections perspective, noting that each college library has its own unique set of challenges and opportunities. Generally, there is a move away from print collections. He cited an ITHAKA survey report in which 90 percent of responding university libraries said that, if struggling for space, they would withdraw print historical monograph collections given dependable, adequately preserved, and freely available online copies. He also spoke about his own research, in which he asked numerous directors whether they would try to replace their print collection if it suddenly disappeared; not one of the directors interviewed would. In addition to this move toward electronic access, there is also a growing reliance on consortial borrowing and distributed preservation models for print collections, further diminishing locally held print materials.

While libraries have traditionally been defined by their collections, Robert observed that user services have become equally important. He stated that college libraries will likely continue to emphasize user services while disengaging from traditional collections, as libraries are increasingly more about what people do with their materials than the materials themselves.

Robert Wolven closed the program with the academic and research library perspective. He began by outlining a number of pressures currently acting on academic libraries, including stagnant budgets, space constraints, format obsolescence, the commodification of special collections (where individuals used to give collections to libraries, they now try to sell them), and an expansion of the formats and materials libraries are expected to collect.

There have been numerous responses to these pressures, including large-scale collaborative projects between academic libraries, such as the 2CUL collaboration between Columbia and Cornell; digitally published special collections materials; and open access initiatives such as arXiv. However, substantial questions remain for academic libraries: how best to support open access initiatives, whether to favor open source or commercial platforms, how best to preserve scholarly digital content and content from the open web, and how to work within copyright law. These questions and others like them are likely to become even more important in the coming years, and will help to define the future of library collections.

Write for It! Jump Start your Research Agenda and Join the Conversation

By Yolanda Blue, University of California, Santa Barbara

Do you want to write and publish to share your knowledge and expertise in the profession? This interactive audience participation session provided many answers, with practical tips for the attendees.

Faye Chadwell, editor of the journal Collection Management, and Lisa German, Penn State University Libraries’ associate dean for collections, information and access services, presented. Both have extensive experience in research and publication.

The session was a unique departure from PowerPoint presentations and panel discussions: the presenters engaged in a dialogue with the audience about their writing and publishing experiences, an informative way to gain insight into the challenges and rewards of the process. The attendees chimed in with their own questions, comments, and personal writing experiences.

The first question posed to the attendees by the presenters was “How many of you are faced with promotion or tenure?” Many attendees raised their hands.

Some of the questions and answers discussed included:

“Do you remember your first publication?” Both presenters recounted how they first started. They advised new authors to get help from other authors and editors, and to read all the available literature on their topic.

“What was the best advice you ever received?” Don’t go through the process without having someone else look at your draft. Pay attention to the journal’s guidelines for articles; this is very important.

“Do you have to write a literature review?” Yes! This is very important. It helps set the context and provides more information to the reader.

“How do you work with co-authors?” Determine who does what up front; everyone has different writing techniques. Make it clear to the editor who is in charge. Talk early about open access, where to submit, timelines, deadlines, who will be listed as first author, and so on.

“How do you deal with critical feedback?” Look at what editors are saying and read the feedback carefully. Does the submission match the scope of the journal requirements? Review your writing with a critical eye. Be sure to respond to any requested substantive changes. Talk to the editor about the feedback, and be sure to take a step back and “breathe”—that is, don’t take it personally.

“Is it okay to submit to more than one publisher at a time?” No. But if the process drags on too long, let the editor know, and then submit somewhere else. One attendee commented that submitting abstracts to several editors to see who bites first could be an option.

Other tips discussed:

  • Write about job duties that can be generalized for other situations
  • “Weaving research with daily duties”
  • Read the literature, join a writing group, schedule time for writing and research
  • Get institutional support
  • Participate in a research and writing boot camp
  • Take a class on research and writing
  • Look for writing/publishing programs offered through ACRL and ALCTS

Most of all—JUST WRITE

Blogs, journals, notes, reports for conferences/meetings, newsletters—these experiences will all help you. The more you write, the better you get at writing.

Materials from this program are now linked to this session’s space in the Conference Scheduler, http://ala12.scheduler.ala.org/node/251.

Forums

Cataloging and Metadata Management Section Forum
Reimagining the Library Catalog: Changing User Needs, Changing Functionality

By Brian Falato, University of South Florida

Four speakers discussed the state of current library catalogs and possible changes needed to stay relevant for users. In an attempt to broaden the audience beyond those present at the Anaheim Convention Center, the forum offered a live streaming feed for participants who registered in advance. Unfortunately, only the first speaker was heard by this audience, as the transmission cut out afterwards.

James Weinheimer, now an independent information consultant in Rome, called his talk “Reality Check: What is it the public wants today?” and presented it via an audio transmission from Rome (after some initial technical difficulties). He began with some figures: 555 million websites in existence, 1,052,803 book titles published in 2009 (the most recent year for which a figure was available), and 72 hours of video uploaded to YouTube every minute. With only 271,171,535 records in WorldCat (including non-book material), library metadata is like a drop of water in this ocean.

Weinheimer then mentioned ideas from several thinkers (all non-librarians) on the impact of this flood of information: Clay Shirky (it’s not a matter of information overload, but rather of filter failure), Barry Schwartz (too many choices can lead people to shut down because they don’t know what to choose), and Noam Chomsky (just walking into a library won’t make you an expert on a subject; you need a framework before you can begin research).

With this as background, Weinheimer then expressed his views on library catalogs. The traditional catalog serves library managers and expert users. The general public spends little time looking at individual records, but a lot of time looking at result sets. It’s the result sets that librarians (especially catalogers) should be focusing on, rather than individual records. This will allow them to better provide users the assistance they need in choosing relevant material.

Faceted catalogs provide statistical information on a library’s holdings, but Weinheimer feels facets can still confuse patrons. He suggested that a narrative be built from the statistical information to give catalog users a better understanding of what is available, and mentioned the company Narrative Science, which uses artificial intelligence to transform data into narrative. Using tools such as narratives to explain library holdings, he argued, will help patrons more than anything accomplished by RDA or FRBR. Weinheimer summed up his presentation with the statement: “The problem is the catalog, not catalog records.”

Kevin Ford of the Library of Congress Network Development and MARC Standards Office was next. Ford manages LC’s linked data service and sits on the board for the bibliographic framework transition planned by LC.

The catalog needs to expand beyond its core function, Ford said. Much of that change is programmable, but the long-term gains will come from a change in the bibliographic framework. Catalogs (and cataloging) need to better accommodate the 99 percent of users who are not scholars; by failing to save the time of the reader, we are violating Ranganathan’s fourth law of library science.

Authority records are not currently used as a point of entry, but the information in them could provide more context and assistance. Right now, though, it takes too many clicks in a catalog to reach relevant information about a person. Ford used the Italian painter Cavaliere d’Arpino as an example: it takes three or four clicks in a library catalog to discover that the artist’s real name, Giuseppe Cesari, is the authorized form of the name and to find material related to him. In contrast, a Google search immediately shows the relationship between the two names, and one click leads to relevant information. Amazon and Google, he explained, do a better job than the library catalog of surfacing the information most relevant to a user, providing an interlinked environment that lets a user move from resource to resource to resource, whereas a library catalog sends the user back and forth between a description of a resource and the resource itself.

Ford concluded that RDA shows promise because it can replace strings of information with identifiers and its richer authority data can provide more biographical information and better contextualize the information. The new bibliographic framework will also be important. Within two to four years, libraries can augment their data with external data and explore smart linking that will allow intelligent links to related resources such as scholarly reviews. In five to 10 years, the new framework will be in place, and developers can exploit new relationships in data.

John Myers of Union College was the third speaker. He also contrasted the searching techniques and information retrieval of Amazon, Google, and Wikipedia with those of library catalogs. To get the catalog where it needs to be, Myers said, we should migrate away from MARC-based workforms and use encoding-neutral input forms that can be converted to any desired schema.

Myers showed some examples of records coded in Dublin Core and MARC XML, but first, he had the audience state aloud that they wouldn’t be frightened by these slides, since they looked different from what librarians were used to seeing in MARC. Myers pointed out that a raw MARC record doesn’t make for a very intelligible display, either, but that we have become used to the display conventions used in MARC. If we use encoding-neutral templates, we will be able to display the information in user-friendly formats, regardless of the native encoding standards for the schema.
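To make the encoding-neutral idea concrete, here is a toy sketch in Python (my illustration, not one of Myers’ slides): a description held as plain key-value data and serialized on demand to a Dublin Core-style or simplified MARC-style display. The record content and field mappings are invented.

    # Toy example: one neutral record, two output schemas.
    record = {'title': 'Moby-Dick', 'creator': 'Melville, Herman',
              'date': '1851'}

    def to_dublin_core(rec):
        # Keys happen to match Dublin Core element names in this toy model.
        return '\n'.join(f'<dc:{k}>{v}</dc:{k}>' for k, v in rec.items())

    def to_marc_like(rec):
        # The same neutral data rendered as simplified MARC field strings.
        return '\n'.join([
            f"100 1# $a {rec['creator']}",
            f"245 10 $a {rec['title']}",
            f"260 ## $c {rec['date']}",
        ])

    print(to_dublin_core(record))
    print(to_marc_like(record))

The point of the pattern is exactly Myers’ argument: the stored data never commits to an encoding, so a new schema is just another output function.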

Once we take these steps, Myers concluded, we can make library catalog data feel like a safe and comfortable cove in the information stream, rather than the formidable or inaccessible cliff it can be now.

Jane Greenberg of the University of North Carolina at Chapel Hill was the final speaker. The technological gremlins rose again, as the battery in the computer displaying her slides died, and she was thus unable to show all the catalog displays and artwork that were to accompany her talk.

Greenberg used the Forum’s title as the title of her speech, with a MARC-friendly subtitle: 245 $b: Diversity and harmonization. She said we should not ignore the needs of the 1 percent when making records: we cannot quantify, for instance, how much good cataloging records from 20 years ago did to lead researchers to the relevant material behind important new scientific discoveries. The diversity of users should be recognized; everybody is indispensable.

Harmonization is another key concept. Catalogs are not a thing of the past, Greenberg said. We should embrace those willing to explore new approaches, but not totally reject what we already have. A healthy tension between what we have now and what others want for the future is good. Documents and discrete facts can both be important.

Greenberg concluded with some predictions for the future of library catalogs and cataloging. Five years from now will probably be a time of storm and stress, but within ten years, there will be greater tolerance. In 50 years, Greenberg said, we could have catalogs that recognize you and what you want to search for by using thumbprint identification or reading facial expressions.

Materials from this program are now available in ALA Connect, http://connect.ala.org/node/182365

Continuing Resources Cataloging Committee Update Forum

By Kevin Balster, UCLA

The Continuing Resources Cataloging Committee (CRCC) Update Forum featured four speakers covering a variety of issues, but mainly focused on RDA. This CRCC meeting was also the first to be simulcast online for remote users.

The first speaker was Adolfo Tarango, CJK, Serials, and Shared Cataloging Division Head, University of California, San Diego. Adolfo is also the Continuing Resources Section (CRS) representative to the Committee on Cataloging: Description and Access (CC:DA), and in that capacity he presented a report on the decisions made during the recent CC:DA meeting. Some highlights for serials catalogers include:

  • The approval to create a discussion paper on the use of machine-actionable data for elements in RDA Chapter 3 that contain quantitative information, most notably extent and dimension.
  • The recommendation to incorporate instructions for recording the names of government subordinate bodies with the instructions for recording the names of non-government subordinate bodies. CC:DA is also recommending the elimination of RDA 11.2.2.14 Type 6.

Please see the CC:DA committee update report for a complete list of the decisions made by CC:DA.

Les Hawkins, the CONSER Program Coordinator, spoke next about the CONSER serials training plan. There will be two types of training: the first is bridge training, to help catalogers move from AACR2 to RDA, beginning in fall 2012; the second is basic RDA serials training, likely to begin in early 2013. The current plan is to use webcasts and self-paced tutorials and exercises for the bridge training, and to convert the existing SCCTP training modules to RDA for the basic serials training. In the meantime, Les recommended the Catalogers Learning Workshop for access to existing RDA training. Other helpful documentation can be found under the Shared Maps section of the Tools tab in the RDA Toolkit, including two draft lists of the CONSER RDA Core elements, one in RDA instruction order and one in MARC 21 field order.

Regina Reynolds, Director of the U.S. ISSN Center, reported on new developments at the center. First, starting in summer 2012, digital reproductions of print journals will be assigned e-ISSNs rather than reusing the existing print ISSNs; the ISSN Center will work closely with JSTOR, given its extensive collection of digital reproductions. Next, Regina discussed several ISSN policies on integrating resources: print loose-leafs will be assigned new ISSNs when completely replaced (unless they are normally serial in nature), and online serials that change to integrating resources will also be assigned new ISSNs, though ISSN Center policy and RDA currently differ in this regard. The ISSN Center is also looking into a semi-automated approach to bulk ISSN assignments. Regina concluded with a presentation on the Presentation and Identification of E-Journals (PIE-J) report, which will be submitted to NISO and possibly adopted as a recommended practice.

The last speaker was Ed Jones, author of the forthcoming book RDA and Serials Cataloging, who provided a humorous account of his experiences writing the book. According to Ed, his life as a cataloger and author was inevitable, given the amount of time he spent (involuntarily) working with his father’s overwhelming collection of magazines, which later became a major source for his father’s books on the history of advertising.

Ed was also able to shed light on a long-standing problem in RDA: how are you supposed to verbalize RDA instruction numbers? As Ed found out, according to ISO 2145:1978, the full stops separating subdivisions are omitted when spoken. This means, for example, that RDA 2.2 is spoken as “two two” rather than “two point two.” Knowing this, RDA training should go much more smoothly.

Continuing Resources Standards Forum

By Roman S. Panchyshyn, Kent State University

The Continuing Resources Standards Forum, sponsored by the ALCTS Continuing Resources Section (CRS) and Swets, featured Regina Romano Reynolds, ISSN Coordinator, Library of Congress; Todd Carpenter, Executive Director, NISO; and John Hostage, serials cataloger for the Harvard University Law Library.

Regina addressed the problems serials catalogers face when attempting to synchronize or harmonize the use of the ISSN in a FRBR/RDA environment. She discussed several examples where “unsynchronization” occurs, such as a change in the mode of issuance from serial to integrating resource, or a change to the corporate body main entry for a serial with a distinctive title. She stated that the FRBR object-oriented conceptual reference model (FRBRoo), an extension of the CIDOC CRM, may help to reconcile FRBR with continuing resources and ISSNs. For harmonization to occur, catalogers must think in terms of elements instead of records, and move from record-based to statement-based metadata management.

Todd presented ResourceSync, a project being developed by NISO in response to a request from the Open Archives Initiative (OAI) to update the Protocol for Metadata Harvesting (OAI-PMH); it is a joint effort of NISO and OAI. The goal is to develop a change-notification protocol standard: a push protocol that would allow source servers, whose data may change over time, to push those changes to “mirrors” or third-party sites according to the interests of the destination servers. The projected timeline calls for a beta product for testing by December 2012 and approval of the new standard by December 2013. Todd’s presentation slides are available on SlideShare.

John Hostage gave a presentation on International Federation of Library Associations and Institutions (IFLA) standards. He gave a brief history of IFLA, its role in the development of the “Paris Principles” as a basis for international standardization in cataloging, and its role in the development of the International Standard Bibliographic Description (ISBD). John advocated that IFLA make its publications and standards open access. As an example of an open access initiative at IFLA, he discussed the Multilingual Dictionary of Cataloguing Terms and Concepts (MulDiCat) project. The terms listed in the dictionary reflect international agreements on cataloging and classification concepts; of particular relevance are the terms used for FRBR, FRAD, and ISBD concepts. The MulDiCat dictionary points directly to the Open Metadata Registry, where RDA elements are listed.

CMS Forum
Emerging Research in Collections Management & Development

By Kristin E. Martin, University of Illinois at Chicago

The 2012 Collection Management Section (CMS) Forum featured two topics: “Comparison of Citation Use Patterns to Link Resolver and Vendor Statistics in Health Sciences Journals,” presented by Kristin E. Martin, Deborah D. Blecic, and Sandra L. De Groote (in absentia) of the University of Illinois at Chicago, and “Modeling the Cost of Abandoning the Big Deal: A Case Study from the UK,” presented by David Beales, who now works at California Polytechnic State University.

In the “Comparison of Citation Use Patterns to Link Resolver and Vendor Statistics in Health Sciences Journals” presentation, the challenge was to compare different measures of e-journal use to see how well they correlated. The three measures compared were COUNTER Successful Full-Text Article Requests (SFTARs) obtained directly from vendor platforms; click-through statistics compiled by an electronic resource management system (ERMS), which included click-throughs from the link resolver, the e-journal A-Z list, and MARC records; and local citation data. The presenters found that all three types of use statistics are highly correlated, suggesting that libraries can use the simpler-to-collect ERMS click-through statistics in lieu of vendor SFTARs to assist in journal retention decisions, although the correlation is not perfect. They also found that citation use patterns are consistent with ERMS and vendor statistics, and so do not need to be collected independently. That said, comparing the vendor SFTARs with the ERMS click-through statistics did reveal some anomalies, which helped identify both problems with the knowledge base (e.g., titles not properly turned on) and problems in the recording of SFTARs. Areas for further research include expanding the study to disciplines beyond the health sciences and exploring user behavior: which journals are more likely to be accessed through a link resolver and which directly.
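For those who want to replicate this kind of comparison, the core computation is a rank correlation across the three measures. The following minimal Python sketch assumes a hypothetical CSV with one row per journal and one column per measure; it is not the presenters’ actual data or code.

    # Minimal sketch: Spearman rank correlation across three
    # per-journal use measures, using pandas.
    import pandas as pd

    # Hypothetical file: columns 'sftar', 'clickthroughs', 'citations'.
    df = pd.read_csv('journal_use.csv',
                     usecols=['sftar', 'clickthroughs', 'citations'])
    print(df.corr(method='spearman'))   # pairwise correlation matrix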

David Beales’ presentation, “Modeling the Cost of Abandoning the Big Deal: A Case Study from the UK,” focused on recent analysis of Big Deals undertaken by the Research Libraries UK (RLUK) consortium to negotiate new journal package deals with Elsevier and Wiley-Blackwell. Beales, a librarian at Imperial College at the time, led the analysis after the consortium rejected initial offers from both publishers. His charge was to develop a cost model that would allow RLUK to drop out of each Big Deal for a 12-month period, buying another year’s negotiating time. Much of his work used the COUNTER JR5 report, which breaks journal usage down by year of publication; this was important because older volumes of a title would still be available under post-cancellation access clauses. Beales used the JR5 data to create a spreadsheet of journals ranked by use, from which libraries could then decide which titles to subscribe to individually and which to serve through interlibrary loan.

Cancelling the deals would also require coordinating a new method of document delivery among consortium members, who had previously relied on the British Library for most of this content. In the analysis for Imperial College, Beales explained a number of adjustment factors he used to determine the cost of interlibrary loan/document delivery: adjusting the statistics to account for PDF downloads versus HTML retrievals, the percentage of turnaways that would actually result in article requests, and the number of requested articles that would be available via open access. From the analysis, Beales concluded that the Elsevier package provided much better value than the Wiley-Blackwell package, and he went so far as to extrapolate the costs of permanently cancelling the Wiley-Blackwell package, which would restore the libraries’ control over their budgets. In the end, however, the deadlock was broken for both deals, and they were renewed for RLUK in December 2011.
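A back-of-the-envelope sketch of the adjustment arithmetic Beales described might look like the following Python fragment; every number here is an invented placeholder, not a figure from his analysis.

    # Invented placeholder figures for a single journal.
    downloads = 10000           # recent-year JR5 downloads
    pdf_html_factor = 0.6       # collapse PDF + HTML views of one article
    request_rate = 0.5          # share of lost accesses actually requested
    oa_share = 0.2              # share of requests satisfiable open access
    ill_unit_cost = 10.00       # cost per ILL/document-delivery request
    subscription_price = 3500.00

    projected_requests = (downloads * pdf_html_factor
                          * request_rate * (1 - oa_share))
    projected_ill_cost = projected_requests * ill_unit_cost
    # Retain the individual subscription only if it beats projected ILL.
    print(projected_ill_cost, 'projected ILL vs', subscription_price)

Run over every title in the package, this kind of comparison is what lets a consortium put a concrete price on walking away from a Big Deal.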

Holdings Information Forum
Quality Standards in Batch Records and Adventures in Cooperative Cataloging: Many Hands Make Light Work

By Heather Staines, Springer Publishing

This session was sponsored by the ALCTS Continuing Resources Section (CRS). Les Hawkins, Library of Congress, CONSER Liaison to the Committee on Holdings Information, presented on the Cooperative Open Access Journal Project, a systematic approach to making CONSER records available for the journals in the Directory of Open Access Journals (DOAJ). CONSER is the continuing resources component of the Program for Cooperative Cataloging (PCC); its members cooperatively maintain an authoritative database of bibliographic records for serials and integrating resources, hosted by OCLC WorldCat. The CONSER database is a subscription product produced by the Library of Congress Cataloging Distribution Service, and subscribing companies use it to maintain the knowledge bases of e-resource management services such as A-to-Z title lists, OpenURL link resolvers, federated search services, MARC record services, and electronic resource management (ERM) applications.

CONSER has long been interested in cooperatively targeting the creation of records for collections of electronic resources. The Directory of Open Access Journals is a collection that many CONSER institutions select for inclusion in local catalogs: the journals in DOAJ are highly cited, go through peer or editorial review, have ISSNs, and carry no embargo period, so all issues are readily available. Users of e-resource management tools that subscribe to the CONSER database would benefit from complete, well-maintained CONSER records for the journals in DOAJ.

The project began with a list of journals in the directory that had no CONSER record. The journals were divided among participants by language and subject expertise. Over the course of 2011, more than 1,200 records were added to the CONSER database to cover the journals in DOAJ, and CONSER decided to extend the project for another year in 2012.
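That first step, identifying the gap, is conceptually a simple set difference. A minimal Python sketch, under the assumption of plain text files of ISSNs with hypothetical file names, follows.

    # Hypothetical sketch: which DOAJ journals lack a CONSER record?
    with open('doaj_issns.txt') as f:
        doaj = {line.strip() for line in f if line.strip()}
    with open('conser_issns.txt') as f:
        conser = {line.strip() for line in f if line.strip()}

    needed = sorted(doaj - conser)   # ISSNs in DOAJ but not in CONSER
    print(len(needed), 'DOAJ journals still lack a CONSER record')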

The benefits of the project include reduced duplication of effort through cooperation, enhanced user access to open access content, and wide availability of consistent metadata across different user information environments. The Cooperative Open Access Journal Project supports providing records for a collection of journals of increasing importance to many member institutions.

Jonathan E. Rothman, Head, Library Systems Office, University of Michigan University Library, presented on HathiTrust supplemental metadata, including an update on the HathiTrust Print Holdings Database Project. He described HathiTrust as a partnership of academic and research libraries whose main product to date is a large-scale digital repository. Partners who contribute content to the repository also supply cataloging metadata to University of Michigan staff to provide description of and access to each digitized item.

The process includes checking for duplicate records using OCLC numbers. Rothman suggested several ways to further improve the detection of duplicates by means of OCLC number clustering, currently based on data from the OCLC resolution table and, he hopes, in the future based on the OCLC Manifestation Identifier (OMI), a subset of the OCLC Global Library Manifestation ID (GLIMIR). He expressed hope that both the content and the use of the OCLC number clusters can be expanded to further improve duplicate detection, duplicate resolution, and overlap analysis.
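Conceptually, clustering on resolved OCLC numbers can be sketched in a few lines of Python; the resolution table and record identifiers below are invented placeholders, not HathiTrust’s actual data or code.

    # Schematic sketch: resolve each record's OCLC number to a
    # canonical number, then group records that share one.
    from collections import defaultdict

    # Hypothetical resolution table: superseded number -> current number.
    resolution = {'1001': '42', '1002': '42'}

    # (record id, OCLC number as recorded) pairs from contributed metadata.
    records = [('rec-a', '42'), ('rec-b', '1001'), ('rec-c', '777')]

    clusters = defaultdict(list)
    for record_id, ocn in records:
        clusters[resolution.get(ocn, ocn)].append(record_id)

    duplicates = {ocn: ids for ocn, ids in clusters.items() if len(ids) > 1}
    print(duplicates)   # {'42': ['rec-a', 'rec-b']}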

Ted Fons of OCLC was scheduled to speak, but was unable to attend due to a work commitment.

RDA Update Forum

See entry under RDA.