Volunteer Reporters Cover ALCTS Forums and Events in Washington, D.C.

Volunteers who attended the 2010 ALA Annual Conference in Washington, D.C. provided the following summary reports. We thank the volunteers who covered a program or event sponsored by ALCTS or one of its units. Their efforts enable the rest of us to benefit from these presentations. We regret that volunteers were not available to report on all the events.

ALCTS Preconferences

Cataloging and Description of Cartographic Resources: From Parchment to Pixels, Paper to Digital

Julie Moore, California State University, Fresno

Instructors: Paige Andrew, Pennsylvania State University Libraries; Mary Lynette Larsgaard, University of California at Santa Barbara (retired); Susan Moore, University of Northern Iowa. This ALCTS preconference was cosponsored by the Online Audiovisual Catalogers, Inc. (OLAC) and the Map and Geography Round Table (MAGERT).

This trio of expert cartographic catalogers delivered a dynamic one-and-a-half-day workshop on map cataloging. The workshop consisted of two main sections: Paige Andrew and Susan Moore were the primary instructors for cataloging sheet maps on the first day; Mary Larsgaard was the primary instructor for cataloging digital maps on the second day.

Andrew and Moore covered the basics of map cataloging, walking the participants through the process of creating bibliographic records for maps. Areas that tend to be especially challenging for most new or “occasional” map catalogers are: 1) title, 2) mathematical data, and 3) physical description. The participants were given ample time to explore these challenges with numerous hands-on exercises.

There are a number of title problems that arise with maps, ranging from having too many titles to lacking a title. The instructors discussed how to handle each of the various circumstances.

The mere mention of the phrases “mathematical data” and the need to “calculate scale” raises the anxiety level in any map cataloging workshop. The instructors made the lessons as painless as possible, providing many examples and time to explore such topics as determining scale, projection, and coordinates. This information is represented in the MARC 255 and 034 fields.
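For example (an invented measurement, not one from the workshop): if 2 centimeters on a map represent 1 kilometer on the ground, then, since 1 kilometer equals 100,000 centimeters, the scale is 1:50,000 (100,000 ÷ 2). That value might then be recorded as:

  • 255 $a Scale 1:50,000.
  • 034 1 $a a $b 50000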

Providing a physical description involves a variety of scenarios: one map on one side of a sheet, two maps on one sheet, one map on two sheets, one map on both sides (continued on the back of the sheet), and more. All components of the MARC 300 field were discussed, using actual maps as examples, and a measuring exercise taught participants how to provide dimensions.

A block of time was devoted to subject analysis, building a Library of Congress call number (using the G classification schedule), and applying Library of Congress Subject Headings to maps. The instructors also discussed the recently approved sixty-five cartographic genre/form headings, along with other subdivisions, which the Library of Congress planned to implement on September 1, 2010.

On the second day, Larsgaard taught participants how to catalog digital maps. She introduced terms and phrases such as “metaloging” geospatial data, digital orthophoto quarter quads, digital raster graphics, ellipsoids, and datums. Larsgaard emphasized the need for recording projection, coordinates, and especially resolution.

Larsgaard touched on the RDA Toolkit (which became available June 23, 2010). She noted that there is one major change in the physical description, which will look something like this for digital maps:

  • 300 1 online resource (1 image file)

In addition to the usual bibliographic fields that are used to catalog maps, there are some special fields for digital maps:

  • 342 Geospatial Reference Data
  • 343 Planar Coordinate Data
  • 352 Digital Graphic Representation
  • 514 Data Quality Note
  • 552 Entity and Attribute Information Note

As with other electronic resources, one must build in extra time when cataloging maps in electronic form, as they are considerably more complex.

Between the information shared at this workshop and the packet of materials (including a bound booklet of handouts, exercises, record examples, a bibliography, and a natural scale indicator), participants walked away with the tools and the basic knowledge needed to catalog maps, whether in paper or electronic form.

Linked Data: Making Library Data Converse with the World

Laura Akerman, Emory University

This sold-out preconference provided an overview of what linked data is, how it is used outside of libraries, its benefits for libraries as both producers and users of data, how to prepare our data for this environment, and current library projects and issues. Slides are available online. The full day had something of value for attendees at any level of familiarity with the topic.

Jennifer Bowen noted in her introduction that Eric Miller of Zepheira could not be present, but that others would cover his planned topics. Karen Coyle, consultant to Open Library and other projects, gave a non-technical but informative overview of the differences between the linked data/Semantic Web approach to information and standard library metadata, showed how linked data can benefit our users with examples from Open Library, Freebase, and WorldCat Identities, and outlined some of the advantages and challenges of making library data available in this form.

Corey Harper from New York University provided background (e.g., Tim Berners-Lee's Design Note on Linked Data) and basic definitions and concepts, and gave a sense of the growing momentum and potential of the Semantic Web, illustrating with uses of linked data from DBpedia, the BBC, and other sources, and noting new sources from the library community and a new W3C Library Linked Data Incubator Group to investigate further standardization for libraries.

Ed Summers from the Library of Congress pointed out the value of FRBR relationships to linked data, with demonstrations from the National Digital Newspaper Program, which is using OAI-ORE, assigning URLs to titles, issues and pages, and linking location coverage to DBpedia and GeoNames for more data.

Diane Hillmann and Jon Phipps covered vocabularies, SKOS (Simple Knowledge Organization System, which uses RDF to describe classes of knowledge), and the proposed RDA vocabularies. Hillmann showed the progression from the Z39.19 guidelines for monolingual controlled vocabularies to linked data structures. Phipps gave an overview of SKOS, and Hillmann showed some of the challenges of designing the RDA vocabulary in SKOS to serve both library and non-library users. A table exercise that followed had attendees assign SKOS attributes such as "preferred label" to an invented vocabulary, and Hillmann invited participants to visit a web site to practice.
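As a rough illustration of that kind of exercise (using an invented mini-vocabulary, not the one from the workshop), a concept with a preferred label, a variant label, and a broader term can be expressed in SKOS with a few lines of Python and the rdflib library:

  # A minimal SKOS sketch with rdflib; the vocabulary and URIs are invented.
  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import RDF, SKOS

  EX = Namespace("http://example.org/vocab/")
  g = Graph()
  g.bind("skos", SKOS)

  g.add((EX.cataloging, RDF.type, SKOS.Concept))
  g.add((EX.cataloging, SKOS.prefLabel, Literal("Cataloging", lang="en")))  # the "preferred label"
  g.add((EX.cataloging, SKOS.altLabel, Literal("Cataloguing", lang="en")))  # a variant spelling
  g.add((EX.cataloging, SKOS.broader, EX.technicalServices))                # hierarchical link

  print(g.serialize(format="turtle"))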

Ross Singer of Talis offered practical advice for use of legacy MARC data: "Assert what you know now. Add more later." MARC has "low hanging fruit" in fixed fields, identifiers and controlled vocabularies. One suggestion: incorporate RDF triples into library online catalog records.
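A minimal sketch of that advice (with an invented record URI and identifier, not Singer's actual code) might take one piece of low hanging fruit, an ISBN from a MARC 020 field, and assert it as an RDF triple:

  # Assert what you know now; add more triples later.
  from rdflib import Graph, Literal, Namespace, URIRef

  BIBO = Namespace("http://purl.org/ontology/bibo/")
  g = Graph()

  record = URIRef("http://example.org/catalog/record/12345")  # invented local URI
  isbn = "9780596007683"                                      # as found in 020 $a

  g.add((record, BIBO.isbn13, Literal(isbn)))
  print(g.serialize(format="nt"))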

Jennifer Bowen described an experimental use of eXtensible Catalog tools to convert MARC records to RDF triples, and some challenges in making MARC "fit" into a combined Dublin Core and RDF vocabulary model. Table exercises that followed led to lively discussions about cataloging practice and MARC limitations.

The day concluded with an engaged question-and-answer session with presenters; among the topics: reliability of access to data and institutional commitment, data quality and provenance, and needs for "non-techie" tools for experimentation. Karen Coyle invited interested people to a linked data "unconference" on Friday morning for informal in-depth conversations.


Taming the Licensing Tiger: New Formats, New Standards, New Challenges

Rebecca Kemp, University of North Carolina-Chapel Hill

Speakers:

  • Becky Albitz (Electronic Resources and Copyright Librarian, Pennsylvania State University)
  • Robert Boissy (Director of Network Sales, Springer)
  • Tracy L. Thompson-Przylucki (Executive Director, New England Law Library Consortium (NELLCO))

Becky Albitz began the preconference with “Licensing Issues: the past, present, and near future.” Licenses can cause libraries to waive certain rights that would otherwise be granted under copyright law and fair use, necessitating careful consideration of what rights the library needs to retain. She then detailed segments of a license, including the terms and conditions of use, jurisdiction, and obligations in case of a breach of contract. Audience members asked questions throughout regarding particular challenges, and lively discussions ensued. Albitz presented attendees with a sample license that they modified to address library needs. Participants shared with each other the results of their analyses.

Albitz described e-book licensing models, including publisher based packages, aggregated packages, title-by-title, and patron-driven selection. She described library concerns for e-books: perpetual access provisions, interlibrary loan allowances, and ensuring no duplication of print holdings.

Tracy Thompson-Przylucki presented “Consortium licensing: boon, bust, or a bit of both?” The goals of library consortia include cost avoidance (savings) for the members, professional development, collegiality, labor savings, simplified billing and renewal, and the ability to take collective risks. Consortia can also serve as library advocates, as in the International Coalition of Library Consortia’s (ICOLC’s) January 2009 statement on the economic crisis.1

Consortial dealings do have their drawbacks: they often take a great amount of time; it may be difficult for individual member libraries to customize deals; and deals require a great deal of communication between all interested parties. Thompson-Przylucki indicated that the best ways to navigate consortial dealings are to be patient but persistent, to inform all stakeholders of the details of the deals, and to assign a single library point person to each consortium.

Bob Boissy discussed “Licensing from a Publisher’s Perspective,” covering publisher goals (including protecting licensed content and making a sale) and challenges (including managing title lists). He also discussed two emerging tools: Online Information eXchange for Publications Licenses (ONIX-PL) and the Shared Electronic Resource Understanding (SERU).2 ONIX-PL is a metadata schema for marking up licenses so that the terms and conditions can appear in a public display in a standardized manner. ONIX-PL has not yet emerged as a widely adopted tool; it would take a great deal more publisher buy-in to succeed. SERU has been somewhat more widely adopted as an alternative to the licensing process, although crafting a purchase order with terms not covered in the text of the SERU document may be just as lengthy a process as negotiating a license.

Boissy asked for audience involvement in answering the question, “What amount of an e-book is a reasonable amount to lend on interlibrary loan?” The chapter was a popular unit, although opinions varied. Boissy also asked participants: should vendors be required to make public the specific deals that they give to libraries? Pricing is currently unequal, because it is based on historical spending. Two suggestions were either to change to pay-per-view pricing to “level the playing field,” or to require authors to pay page charges instead of charging libraries subscription fees. Regardless of how pricing is decided, participants indicated that they would like to know how publishers determine fees.

The preconference covered a wide range of licensing and acquisitions issues and left the participants with many issues to think about, discuss, and research.

  1. International Coalition of Library Consortia, "Statement on the Global Economic Crisis and Its Impact on Consortial Licenses," January 19, 2009 (Columbus, Ohio: ICOLC, 2009), http://library.yale.edu/consortia/icolc-econcrisis-0109.htm (accessed Sept. 21, 2010). References to additional documents are available in the preconference booklet, “Taming the Licensing Tiger: New Formats, New Standards, New Challenges: An ALCTS Preconference, Presented by Becky Albitz, Robert Boissy, and Tracy L. Thompson-Przylucki, Friday, June 25, 2010, ALA 2010 Annual Conference, Washington, DC” (Chicago: ALCTS, 2010).
  2. NISO/EDItEUR ONIX-PL Working Group, "ONIX for Licensing Terms: ONIX-PL Publications License Format, Version 1.0" (London: EDItEUR, November 2008), http://www.editeur.org/files/ONIX-PL/ONIX-PL_format_spec_%20v1.0.pdf (accessed Sept. 22, 2010); NISO SERU Working Group, "SERU: A Shared Electronic Resource Understanding" (Washington, D.C.: National Information Standards Organization, February 2008), http://www.niso.org/publications/rp/RP-7-2008.pdf (accessed Sept. 21, 2010).

XSLT for Digital Libraries

Holly Tomren, University of California, Irvine

This ALCTS preconference was presented by Christine Ruotolo and Patrick M. Yott, and moderated by Kevin Clair. Ruotolo is Digital Services Manager for Humanities and Social Sciences and Bibliographer for English at the University of Virginia Library, and Yott is Digital Library Manager at Northeastern University Libraries. Both instructors have extensive experience teaching XML and XSLT workshops for librarians. This preconference included material from two workshops the instructors have previously taught through the Association of Research Libraries (ARL): Web Development with XML - Design and Applications, and Advanced XML - Data Transformation with XSLT. Ruotolo was an instructor for the XSLT for Librarians preconference sponsored by ALCTS and LITA at the ALA Annual Conference in 2009. Workshop participants ranged from those with no XML or XSLT background whatsoever to those with considerable experience who were seeking review.

The workshop began with a review of basic XML (Extensible Markup Language) concepts, including basic structure, rules of well-formedness, processing XML, and XML namespaces. The instructors explained that it was important to know the rules of XML because XSLT is a “flavor” of XML. They then oriented the participants to the oXygen software that would be used in the workshop to edit and manipulate XML and XSLT files.

The next portion of the workshop consisted of an introduction to XSLT (Extensible Stylesheet Language Transformations), the language for transforming the structure of an XML document. According to the instructors, in the digital library world XSLT is generally applied either for presentation or for the interchange of XML data. Ruotolo explained how XSLT transformation works and covered the structure of an XSLT stylesheet. She helpfully pointed out features that were new to XSLT version 2.0, which became a W3C specification in January 2007. Ruotolo discussed the default rules, common XSLT elements, and navigating with XPath. Following this section, the instructors led the class through a group exercise in constructing a basic stylesheet, manipulating an XML file of sample bibliographic metadata from the Open Content Alliance and walking through each step hands-on in oXygen.
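A stripped-down version of that kind of stylesheet (with invented sample records, not the Open Content Alliance data used in class) can also be run outside oXygen, for example with Python’s lxml, whose XSLT engine supports version 1.0:

  # Transform a tiny bibliographic XML file into an HTML list with XSLT.
  from lxml import etree

  xml = etree.XML("""<records>
    <record><title>Moby Dick</title></record>
    <record><title>Walden</title></record>
  </records>""")

  xslt = etree.XML("""<xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/records">
      <ul><xsl:apply-templates select="record"/></ul>
    </xsl:template>
    <xsl:template match="record">
      <li><xsl:value-of select="title"/></li>
    </xsl:template>
  </xsl:stylesheet>""")

  print(etree.XSLT(xslt)(xml))  # <ul><li>Moby Dick</li><li>Walden</li></ul>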

In the afternoon, the instructors covered further topics in XSLT, including sorting, functions, conditionals, and variables and parameters. The workshop concluded with an exercise in which participants worked with a MODS (Metadata Object Description Schema) record and used XSLT to convert it either to an HTML file for presentation on the web or to a Solr (an open source indexing and search utility from Apache) document for indexing. Following the preconference, all of the sample files used in the workshop were emailed to participants for further practice and reference. The instructors recommended Michael Kay’s XSLT 2.0 and XPath 2.0 Programmer’s Reference for those wishing to continue studying XSLT.

RDA 101

Mary Beth Weber, Rutgers University

This sold-out, full-day preconference was organized by the ALCTS CCS RDA Programming Task Force and was cosponsored by LITA, ALCTS, and OCLC. The program opened with remarks from June Abbas, chair of the task force, who welcomed attendees and discussed logistics for the day.

There were five speakers for the preconference:

  • Barbara Tillett, Chief, Cataloging Policy and Support Office (CPSO), Library of Congress
  • Vinod Chachra, President and CEO, VTLS
  • John Espley, Principal Librarian, VTLS
  • Robert Ellett, Ike Skelton Library, Joint Forces Staff College and School of Library and Information Science, San Jose State University
  • Shawne Miksa, Department of Library and Information Sciences, College of Information, University of North Texas

Tillett gave two presentations: “RDA: General Overview” and “The Future of Cataloging and Metadata.” In the first presentation, Tillett noted that Appendix I of the booklet given to attendees provides a bibliography for the U.S. test of RDA, including LC documentation, LC choices for the test, and webcasts. She explained that RDA is intended to address the differences, and show the similarities, between different carriers. It is also intended to inspire collaboration with other metadata communities beyond libraries.

Cataloging rules began with Panizzi in 1841. In 1876, Cutter developed objectives for a catalog, among them collocating items. In the 1950s, Seymour Lubetzky was commissioned by LC to examine existing cataloging rules. He took the basic principles to IFLA, which led to the foundation of modern cataloging principles. AACR2 evolved from the Paris Principles. Beginning in 1969, the International Standard Bibliographic Description (ISBD) was used worldwide for the descriptive portion of bibliographic records; a consolidated ISBD was under consideration until July 2010. In 1978 came a change from the old North American rules, referred to as “desuperimposition” (a proposed LC policy to change all corporate and personal name entries to conform to AACR2 when it was published). The rules moved closer to the ideals of the Paris Principles, prompting a big push to close card catalogs and create online catalogs. This was the first time that the United States, the United Kingdom, and Canada shared the same rules, yet GMDs were still not unified. AACR2 was revised in 1988, 1998, and 2002.

FRBR is a conceptual model that reinforces the basic principles of Panizzi and Cutter. FRBR attributes let us know which basic data elements are to be included in resource description. The Variations Project, VTLS’ Virtua system and OCLC’s WorldCat all have FRBR implementations and use FRBR groupings. Relationships create pathways to resources. FRBR is not so different from what we do now. It will make concepts clearer for next generation catalogs and will enable linked data. It will provide more access to related works. It can also extend out to subjects such as Wikipedia articles. Resource discovery will provide pathways to all sorts of resources.
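As a toy illustration of those groupings (with invented titles and editions, not how Virtua, WorldCat, or the Variations Project actually implement FRBR), the Group 1 hierarchy can be sketched in a few lines of Python:

  # One Work collocates its Expressions, each realized in Manifestations.
  from dataclasses import dataclass, field

  @dataclass
  class Manifestation:
      description: str                      # a specific edition/carrier

  @dataclass
  class Expression:
      language: str                         # one realization of the work
      manifestations: list = field(default_factory=list)

  @dataclass
  class Work:
      title: str                            # the abstract creation
      expressions: list = field(default_factory=list)

  hamlet = Work("Hamlet", [
      Expression("English", [Manifestation("London: Penguin, 1980 (print)"),
                             Manifestation("Oxford: OUP, 2008 (e-book)")]),
      Expression("French", [Manifestation("Paris: Gallimard, 1995 (print)")]),
  ])

  # Collocation: a single Work entry leads to every related resource.
  for e in hamlet.expressions:
      for m in e.manifestations:
          print(hamlet.title, "/", e.language, "->", m.description)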

Tillett noted that the RDA Toolkit provides access to RDA, Tools, and Resources (these are available via tabs in the Toolkit). RDA includes many “compromises” to enable the transition from MARC. She closed by stressing that no one wants to repeat “desuperimposition.”

Tillett was followed by Chachra and Espley, who discussed VTLS’ product and approach to RDA. VTLS has been in the FRBR business for eight years and has several systems in production. Chachra noted that linked data is the correct implementation strategy for FRBR and RDA: collocation is enabled by linked data, and navigation and discovery also benefit from it. Chachra discussed how “linked trees” are necessary to navigate between connected works. Sample records and displays were included in the presentation; VTLS has defined the 004 field for linked data. A participant asked why the 004 was chosen for this purpose. The reply was that it is an old linking tag. The University of Rochester has also implemented FRBR independent of VTLS and chose the 004 tag for this purpose. MARBI was asked to consider defining the 004 for this purpose; there has been no decision or approval on this matter. See http://www.loc.gov/marc/marbi/2007/2007-dp04.html for more information.

Espley gave an overview of how VTLS enables the following processes: RDA workflow, simpler RDA cataloging, VTLS FRBR/RDA extensions, cataloging assistance, and authority records. Their PowerPoint slides are available at http://www.vtls.com/products/virtua (go to “Media” on the right side of the screen and select “Insights and Processes from VTLS’s 8 years of Experience with FRBR and RDA”).

Questions and discussion followed Chachra and Espley’s presentations. Tillett was asked to comment on VTLS’ terminology. Chachra and Espley noted that when Virtua was designed, RDA terminology was not yet available. The RDA Toolkit provides a step-by-step process referred to as a “workform,” which is not to be confused with VTLS’ templates; libraries can use this workform to create their own workflows.

Chachra and Espley were followed by Robert Ellett, who stressed that RDA is all about options. He began his presentation with a photo from the 1979 Conference on AACR2. He noted that AACR2 was still the approved and implemented code, since RDA had been published only that week. Library administrators have been waiting since 2008 to implement RDA. Ellett reported that a search of Amazon.com for books on RDA turned up no published titles.

Ellett focused on the manifestation level of FRBR for his presentation, comparing it to an apple core with seeds: “If the apple lacks seeds, we will need to add more to it.” He continued by discussing RDA records in OCLC as part of the U.S. test of RDA. OCLC will identify RDA records (it will not distinguish between “core” or “core plus”; records will be identified as RDA by 040 $e rda). See OCLC’s policy statement.

Ellett reviewed the list of RDA core elements. RDA has “guidelines,” not “rules” (the latter is AACR2 terminology). “Extent of resource” no longer requires $b (other physical details) or $c (dimensions). Carrier type was also discussed. These carrier types, which are addressed in RDA 3.3.1.3, are not to be used for unmediated carriers (carrier type reflects the format of the storage medium and housing of a carrier and the type of device required to access a resource; unmediated carriers do not require a device to access them). He cited examples of unmediated carriers: card, flipchart, and object.

Data transcription was also addressed; general guidelines on transcription are provided in RDA 1.7.1, which instructs catalogers to transcribe elements as they appear on the source. If information is lacking, catalogers may also draw on style manuals such as the Chicago Manual of Style, or on data from a scanned or downloaded source on the resource. This all translates to “use what you have.” Ellett discussed terminology changes between AACR2 and RDA (area versus element, heading versus authorized access point, main entry versus preferred title plus authorized access point for creator if appropriate, etc.).

Ellett walked the group through an exercise where RDA core and core plus records were examined. Participants were asked to partner with someone at their table to do a brief hands-on exercise to apply RDA guidelines to three examples. Participants reported back to Ellett and the group discussed the outcome of the exercise.

Ellett was followed by Shawne Miksa, who discussed nonbook resource cataloging using RDA. Miksa briefly discussed what RDA will mean for cataloging nonbook resources and shared examples of resource records for “The Taming of the Shrew,” prepared by one of her graduate students. The examples compared AACR2 and RDA treatment of this title. She also distributed information from the RDA Toolkit on recording content, media, and carrier type and shared a workflow diagram that detailed how both AACR2 (rules) and RDA (guidelines) had been applied to create the records.

The preconference closed with a summary by Ellett. He shared with participants the “top 13 things to remember about RDA,” which was adapted from LC’s RDA Train-the-Trainer Course:

  1. User needs/user tasks
  2. Take what you see
  3. Work, expression, manifestation, item
  4. “Core” and “core if…” elements—can add other elements
  5. Alternatives, optional omissions, optional additions
  6. Fewer abbreviations
  7. Relationships, relationships, relationships
  8. Content, media, and carrier types
  9. No more “rule of three”
  10. Sources for information expanded
  11. Controlled vocabularies
  12. Identifying characteristics as building blocks for future linked data systems
  13. Don’t agonize over the small stuff. Your catalog uses different standards. The goal is not to produce a perfect catalog, but a useable one.

ALCTS Programs

ALCTS 101

Sherri Griscavage, Central Pennsylvania College

Aimed at having new members interact with experienced members to share ideas, ALCTS 101 was held Friday, June 25, from 7:30 to 9 pm in the Washington Convention Center.

ALCTS President Mary Case opened the evening’s program by welcoming new members and encouraging them to find a home with ALCTS where they could face future challenges, make friends, and meet colleagues. Case emphasized joining a committee or interest group for both personal growth and the growth of the profession. After remarks from Case, a representative from each section within ALCTS discussed the benefits of belonging to their respective sections.

ALCTS President-Elect Cynthia Whitacre addressed the sizeable crowd about the Cataloging and Classification Section. Whitacre explained how she became involved in the section and its work, and concluded her remarks by encouraging members to volunteer, participate, and learn.

Marlene Harris spoke about the Council of Regional Groups, which coordinates continuing education for technical services staff and catalogers at the state and regional level. Harris explained how she became involved in ALCTS and encouraged interested members to develop their own interest groups; creating one takes only ten members and a completed form.

Karen Darling discussed the Acquisitions Section, which was the first section to offer continuing education. The section’s many committees offer something for everyone, and Darling stressed volunteering for a position.

Lynda Wright discussed the Continuing Resources Section, the only format-based section in ALCTS. It has eight committees and two interest groups, and is very active in publishing.

Gina Minks addressed the group next and encouraged members to join the Preservation and Reformatting Section, which encompasses an interesting mix of people with all sorts of jobs in libraries, digital libraries, archives, and vendors.

Amy Jackson discussed the ALCTS New Members Interest Group (ANMIG), of which everyone is a member. Very loosely structured, with a space on ALA Connect, ANMIG aims to bring like-minded people together. After Jackson spoke, interest group members stepped forward to introduce themselves and encourage attendees to join their groups. The following half hour of small-group networking led to lively discussions between new and senior ALCTS members.

Following the networking session, the ALCTS New Members Interest Group business meeting was held. The meeting included the presentation of an ALCTS Presidential Citation to Keisha Manning, open discussion with members about future activities, officer elections, a prize for the library school with the highest turnout, and drawings for free ALCTS memberships and continuing education courses. The program concluded with a recommendation to attend Saturday’s program, “ALCTS Membership in the Division: What’s In It For Me?” ALCTS 101 was a fun and interesting way to encourage participation and meet colleagues while hearing about the mission of ALCTS.

Publishing, Not Perishing: Writing for Publication

Hilary Wagner, Rasmussen College

Diane Zabel, Endowed Librarian for Business, The Pennsylvania State University Libraries, and editor of Reference & User Services Quarterly, and Peggy Johnson, Associate Librarian, University of Minnesota Libraries, and editor of Library Resources & Technical Services, offered advice for those new to, or struggling with, writing for publication. Zabel identified trends in the publishing process, common pitfalls of writers, and considerations on where to send manuscripts, the submission process, and selection criteria. Of utmost importance to both speakers was reading the instructions for submission and proofreading your work. Providing self-proclaimed “comic relief,” Johnson delineated dos and don’ts: her top ten lists advised how to make your editor or reviewer happy or crazy.

Recommendations from both presenters included ensuring the accuracy and readability of data, making sure article submissions fit the publication’s scope, checking, double-checking, and revising your article, answering the “so what?” question, and avoiding jargon. Try to write about something unique and different; do a literature review to see how much has been published on a given topic and how, or whether, the article will fit into the larger context of library literature. If using statistical data, be sure it is accurate and easy to interpret, and have an expert verify it. Do not be discouraged if your submission is sent back for revision and resubmission, as this is typical; acceptance rates also vary. The speakers encouraged attendees to query editors regarding article submissions and reiterated that authors should follow their editor’s instructions. Periodically scan electronic discussion lists for calls for chapters, papers, and the like, and consider opportunities such as reviewing. Zabel suggested www.lis-editors.org/best-practices/index.shtml for information on best practices for editors, authors, and reviewers. Presenters’ slides will be posted on the ALCTS site.

Preservation Forensics and Document Optical Archaeology at the Library of Congress

Stephanie Lamson, University of Washington

They may not be investigating crimes, but the Preservation Research and Testing Division (PRTD) of the Library of Congress uses forensics and a number of scientific methods to uncover information about the past, present, and future of its most important collections. This program was organized and introduced by Holly Robertson, Head, Collections Care Section, Preservation Directorate, Library of Congress, and provided an overview of the PRTD.

The PRTD uses technology originally created for other scientific applications for the non-invasive study of collections. The results of their studies are two-fold:

  1. They provide data on the deterioration of library materials that informs conservation treatments and the preservation of collections.
  2. Their analyses give scholars additional information about the creation, provenance, and history of objects that is otherwise inaccessible or invisible to them.

Fenella France described several projects in “Case Studies with Hyperspectral Imaging and Scripto-Spatial Digital Mapping.” In gathering data for preservation, significant discoveries are also made. For example, by studying an early draft of the Declaration of Independence under different wavelengths of light, the PRTD was able to detect that Jefferson had first written “subjects” and subsequently wrote over it with “citizens.” This helped to corroborate earlier work by scholars and was subsequently covered by the Washington Post. By linking high-resolution images to different areas of a document, the division can provide a digital mapping of the object similar to geographic information systems (GIS).

In “Case Studies with Portable XRF and XRD,” Lynn Brostoff described her work using Fourier Transform Infrared Spectroscopy (FT-IR) to analyze pigments in an illuminated fifteenth-century Armenian Gospel. While diagnosing condition and characterizing materials to inform further conservation treatment, Brostoff used FT-IR and several other scientific methods to discover that the manuscript contained blue pigments previously thought to have been used only much later and in a different geographic region. This discovery offered additional insight into the history of technology and trade routes.

In “Case Studies with Environmental Scanning Electron Microscope (ESEM),” Jennifer Wade demonstrated how microscopy is used for in situ analysis of objects at up to 100,000 times magnification. ESEM has enabled identification of the composition of inks and pigments used in a Persian manuscript, helping conservators develop a treatment plan to slow further deterioration. Similarly, ESEM characterizes image particles on models of extremely rare and delicate ungilded daguerreotypes to help evaluate surface damage and develop guidelines for non-destructive analysis and environmental storage to slow or stop further corrosion. This is important to LC, since it holds eight ungilded daguerreotypes, including the first American photographic self-portrait. Lastly, ESEM has given scientists a first look at the differences in “sticky” magnetic tape before and after baking, in an effort to pinpoint the cause of sticky-shed syndrome.

After the presentations, participants toured the Optical Properties Lab and the Chemical and Physical Testing labs to learn more about these methods. The work of PRTD is of general interest to libraries since their work helps to establish specifications for library supplies (boxes, labels, etc.), develop standards for environmental control, and create inexpensive methods to preserve collections.

For more information about LC’s Preservation Research and Testing Division’s research agenda, visit the Research Projects Update. Recordings of similar lectures from LC’s research scientists will soon be made available through the website for the Preservation Directorate’s 50th TOPS (Topics in Preservation Series) program.

Converging Metadata Standards in Cultural Institutions: Apples and Oranges

Rebecca Mugridge, Penn State University

This program, organized by the ALCTS Metadata Interest Group and moderated by Kevin Clair, Metadata Librarian, The Pennsylvania State University, featured:

  • Danielle Plumer, Coordinator, Texas Heritage Online, Texas State Library and Archives Commission
  • Ching-hsien Wang, Manager, Library and Archives Support Branch Organization, Smithsonian Institution
  • Joyce Chapman, Libraries Fellow, North Carolina State University

Plumer discussed the results of a project to provide training to metadata specialists who work in libraries, archives, museums, and state and local government agencies in Texas. The project was funded by an IMLS Laura Bush 21st Century Librarian Grant, and included the Texas State Library and Archives Commission, the University of North Texas, and Amigos as partners. Its intention was to address the need for quality metadata that could be shared across institutions. Desired outcomes of the project were to: 1) train metadata specialists, 2) create partnerships, 3) improve the quality and consistency of metadata for digital collections in Texas, and 4) improve access to rare and unique materials. Courses used for the education of metadata specialists were originally developed by ALCTS for the “Cataloging for the 21st Century” workshop series. Follow-up surveys of participants revealed an overall success with the project’s educational goals; ten partnerships were created that will provide access to at least 1,000 digital objects each.

Wang shared the results of a project at the Smithsonian Institution to develop a gateway to all of the Smithsonian’s various catalogs. The project brought together the metadata from libraries, archives, and museum catalogs to provide one access point for all of the Smithsonian Institution’s holdings. The Smithsonian is a diverse organization, with nineteen museums, twenty libraries, fourteen archives, a zoo, and an astronomical observatory. It is very decentralized with no unified access to its 400 databases and 150 web sites. The resulting Collections Search Center provides unified access to many of its databases, and it continues to grow over time.

Chapman reported on the results of a research study at North Carolina State University comparing the cost of creating specific metadata elements with their value. The question asked was “Can we evaluate the cost/value of metadata creation for archival materials by comparing the ratio of metadata creation time for distinct metadata elements to the ratio of researchers’ use and perceived usefulness of those same elements?” They studied these metadata elements in archival finding aids: abstract, biographical/historical note, scope and content note, subject headings, and collection inventory. With two partner institutions, they measured the time it took to create metadata for all of the elements of interest. They also conducted interviews with researchers to determine the perceived usefulness of the various elements. While they did not attempt to use a representative sample, and their findings may not be easily generalized, their results were nevertheless useful. The collection inventory took the most time to create and was also the most useful to the researchers. The biographical note turned out to be the second most time-intensive element, yet was the least useful to the researchers. The abstract took relatively little time to create, but was the second most useful element to the researchers. Chapman ended with a discussion of how this type of study might be replicated with other types of metadata.

Multiple Formats and Copies in a Digital Age: Acceptance, Tolerance, Elimination

Scott M. Dutkiewicz, Clemson University

Speakers: Roy Ziegler, Associate Director of Collections, Florida State University; Robert Kieft, College Librarian, Occidental College; Doug Way, Head of Collection Development, Grand Valley State University. Moderator: Millie Jackson, Associate Dean for Collections, University of Alabama.

Roy Ziegler presented a case study on withdrawing volumes duplicated in JSTOR. Since Florida State University’s collections had outstripped current storage capacity, JSTOR de-duplication was an easy way to quickly gain space. He described collection development policies built on three specific withdrawal criteria: relevance to the collection, reduction of duplication, and a preference for electronic content. The backlash, which Ziegler termed the “J storm,” was caused by the library’s failure to engage and communicate with affected faculty. Lessons learned included taking the time and effort to engage the faculty, and being transparent when communicating with them, particularly about goals and rationales.

Robert Kieft took a complementary approach to the same issues. He outlined six trends that affect collection development work. For example, digitization of texts is “inexorable and financially desirable.” In ten to twenty years, most printed materials will be special collections items. The concept of a local collection is no longer useful; instead, an institution’s collection will be networked, cooperative, and commercial in nature. He noted that libraries have a history of de-duplication, citing the conversion of print newspapers, periodicals, and journals to microfilm and the transfer of audiovisual formats from analog to digital. Kieft concluded with a summary of best practices for format de-duplication, such as calling for retention policies to be recorded in catalogs.

Doug Way provided visuals that portrayed the barriers and benefits of de-duplication. Under the topic of barriers, he focused on a syndrome of fear, aversion to risk, and the abdication of professionalism. Librarians are also misled by the needs of the outlier, or the unique case, whether it is a patron or collection. He then outlined the benefits: space, time, money, and the intangible, “opportunity.”

Cooperative Collection Development: We Really Mean It This Time

Mike Diaz, ProQuest

Moderator Christine Dulaney discussed how renewed focus on best practices in collaborative collection development has stemmed from an explosion of digital information, budget and staffing cuts, increasing user demands, and a need for global information sources.

Ivy Anderson, California Digital Library, echoed these points and added that while the concept is not new, she sees the library community as being at a tipping point now due to new webscale discovery capabilities and disruptive networks that are breaking down walls (e.g., HathiTrust). Beyond traditional benefits such as enhancing services and sharing expertise, Anderson noted that cooperative collection development facilitates sharing costs and risks, maximizing access to resources, and achieving outcomes not possible through independent action. Typical barriers to success include institutional self-interest, the desire to maintain autonomy and local identity, and competition among members. Effective collaborative efforts typically are supported by mandates, money, moxie, and motivation (a clear and compelling rationale for members). Face-to-face time and plenty of open, honest communication are important for success. Where appropriate, memoranda of understanding and service-level agreements can also play a role in aligning participants’ expectations.

Two case studies were presented on cooperative collection development in academic and public libraries. Kathy Tezla, Carleton College, described a new partnership with St. Olaf College. Outcomes of this partnership have included shared journal review and selection twice a year, a shared print weeding project, a single journal list for access, and rationalization of government documents collections. Dawn Peters, Director, Orchard Park Library, discussed how cooperative purchasing across the thirty-seven units of the Buffalo and Erie County Public Library system increases scope and drives efficiency. This collaboration has also facilitated “floating” large print titles across the system and the creation of specialist collections for cooking and for a local Polish population.

Strategic Future of Print Collections in Research Libraries

Rebecca Mugridge, Penn State University

Organized by PARS and moderated by Deborah Nolan, this program featured three speakers:

  • Walt Crawford, Editorial Director, Library Leadership Network
  • Shannon Zachary, Preservation & Conservation, University of Michigan
  • Doug W. Nishimura, Image Permanence Institute

Walt Crawford spoke briefly on the interdependence between print and screen, claiming that digital resources should enhance print rather than completely replace it. He discussed the benefits of Google Book Search, especially its strength in discovery; however, he questioned its value as a way to read long-form written material and as a form of preservation. He also questioned the viability of Google as a company over the long term. He asserted that print books have worked well for a long time, but we do not know that electronic books will.

Shannon Zachary discussed the “implications for the strategic future of print given the Googlization of books.” Although not speaking for Google or the University of Michigan, he did discuss the university’s participation in the Google book-scanning project. Over 4.5 million books have been scanned from the University of Michigan, and over 13 million across all the Google partners. He addressed the impact of the project on the library, stressing that there has been a commitment to non-destructive scanning and minimal damage. The option exists to withhold books from scanning; Google staff also examine the books, and an average of 1–2 percent are rejected based on condition. However, the books are handled at six points during the scanning process, and accidents do happen. A study to determine whether the amount of damage could be predicted found prediction more difficult than expected. Zachary continued with a discussion of the role of the print item after it has been digitized; possibilities include: 1) preservation, since we may not feel that the digitized version is reliable; 2) use, if the digitized version is not adequate; 3) source, for future corrections, if needed; and 4) access, since we have not yet resolved all of the issues for in-copyright materials. One question that has not yet been answered is whether the availability of digital surrogates replaces print use or increases it.

Douglas W. Nishimura discussed “print on demand quality issues for libraries.” Advantages of print on demand include: 1) there are no warehousing costs for the publishers; 2) print runs match demand; and 3) corrections and updates can be made easily. However, there has been very little work done to determine the stability of print on demand books. Ink and paper quality may have long term effects that have not yet been determined. Finally, Nishimura discussed the work of the Digital Print Preservation Project, which is sponsored by the Andrew W. Mellon Foundation and the Institute of Museum and Library Services, and which provides information about the long-term care of digitally printed materials.

Cataloging and Beyond: The Year of Cataloging Research

Shirley J. Martyn, St. Mary's University

On January 17, 2010, at the ALA Midwinter Meeting, ALA Council declared 2010 the Year of Cataloging Research. In honor of this declaration, ALCTS, with cosponsorship from the Library Research Round Table, the LITA Next Generation Catalog Interest Group, and the RUSA RSS Catalog Use Committee, presented this program. Allyson Carlyle, Associate Professor, iSchool at the University of Washington, welcomed the packed house to the early morning meeting and served as the moderator for this panel discussion.

Sara Shatford Layne, Principal Cataloger, UCLA Library Cataloging and Metadata Center, presented some questions that she felt would be useful to have researched thoroughly. Layne explained that the catalog serves two purposes: 1) to locate an item; and 2) to explain what has been found. She noted that many more detailed questions stem from the larger question, “Is what I’m doing useful?” Related questions might include “What does authority control do for us?” “What is the purpose of bibliographic control?” and “What level of cataloging should be used for backlogs?” Usefulness is difficult to measure, however. Layne cautioned that what is easy to measure may not reflect what is important, and that in cataloging research “rare” or “infrequent” does not necessarily mean “without value,” nor does “frequent” necessarily equal “valuable.”

Lynn Connaway, Senior Research Scientist, OCLC, discussed recent OCLC-sponsored (or co-sponsored) research projects, including the JISC-funded “User Behavior Studies of Digital Information Seekers.” This research found some common themes of user behaviors: the importance of convenience, the short time spent on an online resource, and the predominant use of basic searches (as opposed to advanced search functionality). The users largely thought of libraries as collections of books. She advised that the terms users use be considered and analyzed: frequently they are saying what we are, just with different terminology. Connaway also cautioned that the so-called “Google generation” claims may not be supported, based on her research.

Jane Greenberg, Professor and Director, SILS Metadata Research Center, School of Information and Library Science, University of North Carolina at Chapel Hill, directed her comments to the intersection of cataloging and metadata. She focused on three areas of research: automatic metadata generation, creator/author-generated metadata, and metadata theory. For the first topic, Greenberg posited that traditional manual cataloging approaches are costly and cannot keep up with current needs, but that automated applications can assist humans with improved algorithms and increased machine intelligence. Greenberg also discussed the “cataloging blitz” held on the University of North Carolina-Chapel Hill campus this spring and her hope to replicate the event two more times this year. We all have questions, many of which are shaped by our daily activities, she concluded.

Amy Eklund, Catalog Librarian and Instructor, Georgia Perimeter College Libraries, concluded the panel discussion with a presentation on next generation catalogs. Eklund explained that “next-gen catalog” could mean a variety of things, but generally includes Web 2.0 features (faceted browsing, federated search, ability to handle several metadata schemas, and alternative displays). A variety of areas in need of research were presented including functionality and features, cost-benefit analysis, and system designs. The remainder of the two-hour session was filled with questions from the audience.

Emerging Research in Collection Management and Development

Stephen Dew, University of North Carolina at Greensboro

The ALCTS Collection Management and Development Section (CMDS) sponsored a research forum, "Emerging Research in Collection Management and Development," held on Sunday, June 27, 4–5:30 pm, in the Washington Convention Center. The purpose of the forum was to nurture new authors by giving them an opportunity to present research and receive feedback as they prepare manuscripts for publication. Proposals for the forum were solicited and refereed by the ALCTS/CMDS Publications Committee.

The first speaker was Aline Soules from California State University, East Bay, and her presentation was entitled “A Comparison of Biographical Information in Commercial Literary Databases and on the Open Web.” Soules based her research project on the question, “Do we still need commercial databases for biographical information used in studies of English literature?” In her preliminary research, Soules selected the names of a variety of literary authors and checked for biographical information in the Gale Literature Resource Center, the Wilson Reference Bank, Wikipedia, and the Open Web. Although there was concern about the authority of some sources, ample information was found for most authors in Wikipedia and the Web, suggesting that commercial biographical databases may be expendable for libraries with tight budgets. This study will be expanded to include a larger set of authors.

The second speaker was Jeffrey Kushkowski from Iowa State University; his presentation was entitled “Core Journals in Corporate Governance: An International Review: Implications for Collection Management.” At Iowa State, business faculty involved in the interdisciplinary field of corporate governance were concerned about their tenure and promotion process, especially which journals should be acceptable publishing venues. Working with the College of Business, Kushkowski undertook a citation analysis of the key journal in the field (Corporate Governance) and developed a core list of journals from the analysis. He also emphasized that journal rankings can be useful but can also be misused; the quality and impact of each particular article is more important than the ranking of the journal in which it is published. The PowerPoint slides for both presentations and Kushkowski’s handout are available on the Conference Wiki.

Open to Change: Open Source and Next Generation ILS and ERMS

Lynda Wright, Randolph-Macon College

Sponsored by the ALCTS Acquisitions Section and Continuing Resources Section, this forum was held on Sunday, June 27 at 10:30 am and featured the following speakers:

  • Galadriel Chilton, Electronic Resources Librarian, University of Wisconsin-La Crosse
  • Bill Erickson, Vice President for Software Development and Integration, Equinox Software, Inc.
  • Robert H. McDonald, Associate Dean for Library Technologies, Indiana University Digital Library Program and Executive Director of Kuali Open Library Environment

Vendors of electronic resource management systems and integrated library systems are slow to respond to rapidly changing user needs, and librarians are responding by creating their own systems. This program presented three case studies of open source library management systems.

Galadriel Chilton opened the program with a compelling story of the development of ERMes, a homegrown database of databases developed in collaboration with William Doering at the University of Wisconsin-La Crosse. ERMes began as a tool to assist Chilton in managing the more than 275 electronic database subscriptions on her campus. Using Microsoft Access, Doering and Chilton created a flexible system that allowed Chilton to input and manage the specific data she required. Chilton and Doering continuously expanded and updated the system and soon began sharing it with other users. Today more than forty libraries are using ERMes and are contributing to its continued growth and development. (Chilton’s PowerPoint presentation: http://www.slideshare.net/gchilton).

Bill Erickson gave a brief overview of Evergreen (http://www.esilibrary.com/esi/), an open source ILS begun in 2004 with catalog and circulation modules. Focusing on the upcoming acquisitions and serials modules to be released in version 2.0 later this year, Erickson promoted the significance of open source software in creating open standards and encouraging resource sharing.

The Kuali Open Library Environment’s (OLE) mission is to define and build a next-generation library system based on the needs of the academic and research library community. Unlike Wisconsin’s ERMes, which was built initially to serve a specific need and then shared with other libraries, the OLE project began by building the partnership community first and only later designing the software. The OLE project, led by Duke University with fifteen other university partners, will be an open source software management system designed to serve the academic and research library community. (See http://www.kuali.org/ole.)

Negotiating the Downturn: Collection Development for Tough Times

Raik Zaghloul, University of Arizona

Speakers:

  • Faye Chadwell, Associate University Librarian for Collections and Content Management, Oregon State University
  • John Saylor, Associate University Librarian for Scholarly Resources and Special Collections, Cornell University
  • Dan Hazen, Associate Librarian of Harvard College for Collection Development, Harvard College Library

For Oregon State University, which has been enduring budget cuts for almost two decades, the current downturn is like being “sucker punched while still slowly succumbing to some terrible wasting disease.” For Cornell and Harvard, the downturn is a new experience.

John Saylor has been at Cornell for thirty-seven years; last year was the first in which the materials budget was cut. Three selectors took advantage of retirement incentives, and their positions were not filled. Cutting the collections budget meant stopping the exchange program, limiting monographic series that had run for years without analysis, reducing the approval plan, transforming the physical sciences library into a virtual library (with its contents incorporated into other libraries), and the coming folding of the Entomology Library into the Mann Library. The collections budget was spared for fiscal year 2011 due to faculty intervention. Progress is being made in sharing selectors for area studies and in plans to buy most needed software applications rather than building their own. Some bright spots at Cornell are Project Euclid, a partnership with Duke University that sold $30,000 worth of subscriptions last year, and a print-on-demand project with Amazon that generated almost as much money in the first three months of this year.

Harvard’s financial troubles came from a drop in the payout of the endowments. While layoffs and retirements reduced staff significantly, the collections budget was spared. The library is working on a model to set priorities for spending the collections budget and is considering sharing resources with other university libraries.

Oregon State University offered early retirement incentives and implemented furloughs. To manage the collections budget, approval plans were turned off, serials were cut, and gift money was used to buy monographs. The library developed a formula for allocating funds for monographs that takes into account, among other factors, the number of faculty and students in each department. This formula will be applied to 30 percent of the allocation; the rest will be based on historical spending patterns. The severity of the cuts was reduced by the unexpected passage of tuition and tax hikes and by an infusion of stimulus money. The serials fund was reorganized into three broad areas rather than individual funds, easing the management of funds and accommodating interdisciplinary areas (a librarian in agriculture cannot cancel a journal without the science librarian knowing about it). A new department, which Chadwell calls “the get it department,” was established to bring together acquisitions, access services, and interlibrary loan into a user-centered collections service. Central to the recent campus-wide restructuring is an emphasis on support for signature areas: advancing the science of sustainable earth ecosystems, supporting human health and wellness, and promoting economic growth and social progress. It is not clear whether the emphasis on signature areas will influence the allocation of the collections budget.

ALCTS President’s Program: Got Data? New Roles for Libraries in Shaping 21st Century Research

Rebecca Mugridge, Penn State University

Moderated by Brian Schottlaender, University Librarian, University of California, San Diego, this timely program featured Francine Berman, Vice President for Research and Professor of Computer Science, Rensselaer Polytechnic Institute. Berman discussed the need for a cyberinfrastructure to support the storage and management of research data. The cyberinfrastructure must include services that support data access, use, management, storage, and preservation, and must be usable, scalable, interoperable, reliable, capable, predictable, accessible, and sustainable. The costs associated with a reliable data cyberinfrastructure include maintenance and upkeep, software tools and packages, utilities, space, networking, security, and training. Libraries have experience with many of these challenges and may be able to work with researchers to solve them. However, Berman acknowledged that this is an unfunded mandate for libraries, and that it will be necessary to identify a source of funding to support work in this area.

Significant planning will be required to preserve digital research data, including scientific, historical, and cultural data, all of which require different approaches. Questions that must be answered include: 1) what should be preserved?; 2) who is responsible?; and 3) who will pay for the infrastructure required? It is clear that we cannot save everything and that we will have to be selective. Data that we want to save over the long term includes: 1) data of interest to society, such as historically valuable data, 2) data of interest to researchers, and 3) data of interest to individuals, such as medical records and family photographs. Other data must be preserved by law, such as health information, accounting records, and federally-funded research data.

Berman also discussed the economics of digital preservation, reporting on the work of the Blue Ribbon Task Force on Sustainable Digital Preservation and Access, which she co-chaired. The group was charged with: 1) conducting a comprehensive analysis of digital preservation; 2) identifying and evaluating best practices; 3) making recommendations for action; and 4) articulating next steps for further work. The final report of the task force, “Sustainable Economics for a Digital Planet: Ensuring Long-Term Access to Digital Information,” is available at http://www.cs.rpi.edu/~bermaf/BRTF_Final_Report.pdf. The report identifies the challenges we face and makes recommendations for all stakeholders: create public-private partnerships that align stakeholders, address the preservation needs of valuable digital materials, reform copyright legislation to accommodate digital preservation, and create financial incentives to encourage the preservation of digital materials. Research sponsors should create preservation mandates when possible, invest in building stewardship capacity, and provide leadership in training and education for preservation. Librarians and researchers must “make the case” to decision makers and articulate the costs of failing to act in support of digital preservation.


Boot Camp for the 21st Century Metadata Manager

Arlene Klair, University of Maryland

This program, co-sponsored by ALCTS CCS and the Online Audiovisual Catalogers (OLAC), was aptly named: the amount of detail delivered can hardly be conveyed in this report.

Rebecca L. Lubas, University of New Mexico, dove into current options: outsourcing, in-sourcing, contract cataloging, and others. She urged attendees to check their assumptions: Has staff gained the needed skills (languages, subject expertise)? If local customization exists, does it still serve user needs? Serving users well requires close, regular collaboration with all stakeholders: acquisitions, collection development, and reference. Participation in WorldCat Local and WorldCat.org may mean new work is needed. Outsourcing deserves reconsideration: good quality control, expectations based on user needs, and careful choice of materials make it more viable and cheaper, especially if segments needing special handling are culled. Batch-purchased records vary in quality, so always start with a test project that loads small numbers of records, and have all stakeholders assess the products continuously.

Rapid changes in cataloging standards were the focus of Bonnie Parks, University of Portland. She emphasized the importance of a methodology for staying up to date with standards so that the data created plays well in shared pools of records: assign a point person to keep up with changes, compare new standards to in-house documentation, and set a timeframe for updating that documentation. Ensure staff are retrained and use current documentation. Three recent standards to review are the CONSER Standard Record (CSR, 2007), the Provider-Neutral Record for E-Monographs (2009), and the BIBCO Standard Record (BSR, 2010). Resource Description and Access (RDA) is the developing cataloging standard. To manage its significant learning curve, assign a point person, look for training opportunities, and begin training staff and using training materials as soon as possible, even if records cannot be contributed yet.

Robert Bothmann (Minnesota State University) added more detail to the discussion of RDA. Noting that catalogers also resisted the switches from the several previous cataloging codes, he said that concerns about RDA are understandable; one historical reason for resistance was lack of preparation. Understanding the Functional Requirements for Bibliographic Records (FRBR) is essential for reading and understanding RDA. Additional essential preparation includes using online training materials, making a local plan, using FRBR vocabulary now, making a map for staff of the data elements your institution uses most, and organizing staff training.

MARC’s new life outside the ILS was the topic for Glen Wiley (Cornell University). MARC data is now used in new ways. A large-scale digitization project demonstrated the ability to take publishers’ ONIX data, map it to MARC, use that data to extract and assign more data, and send the much fuller MARC record back to publishers for distribution. Another project maps electronic resources’ MARC data to Dublin Core for digital portals. Others map MARC to EAD for specialized digital projects or transform MARC to RDF for semantic web applications and linked data. All of these projects mean that cataloging work is moving away from piece-by-piece handling and toward project management. Cornell keeps a registry of all projects so scripts can be reused and repurposed to reduce costs and streamline processes.
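
To give a flavor of the crosswalks involved, a few field pairings in the general pattern of the Library of Congress MARC to Dublin Core crosswalk (illustrative only, not necessarily Cornell’s local mapping):

  • 245 (title statement) maps to dc:title
  • 100 (main entry, personal name) maps to dc:creator
  • 260 $b (name of publisher) maps to dc:publisher
  • 650 (topical subject heading) maps to dc:subject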

Elaine Westbrooks, University of Nebraska, focused on next-generation catalogs (NGCs). Characteristics of NGCs include faceted searching and browsing, social networking features, and one-stop searching; most are proprietary. Unfortunately, vendors charge an additional fee because they do not see discovery as integrated with inventory functions. Challenges facing implementation and use of NGCs include: data is lost when the data source is Dublin Core; faceted searching reveals database flaws; policy changes, such as MARC coding changes to leverage the NGC, can negatively impact cataloging; and vendors do not always share algorithms or relevance rankings, so unexpected result sets cannot be readily understood. Interestingly, users are not using the social networking features as expected. Some also thought NGCs would reduce cataloging effort; in many institutions they instead reveal the need for data cleanup or a hasty finish to retrospective conversion. Test-load the database to see what effort is needed to obtain the best result for users. Technical services staff understand user needs and know how to make effective changes to practices to support the user experience.

Presentation handouts for this session can be found at http://presentations.ala.org/index.php?title=Monday%2C_June_28

E-Books: How Do You Know It Was Worth It?

Emily Prather-Rodgers, North Central College

By a show of hands, it appeared that most of the attendees at the program “E-books: How do you know it was worth it?” firmly believe that e-books are a valuable part of their libraries' collections. For most, however, statistical proof is needed to ensure that funding for electronic resources will be available in future years.

Terry Kirchner, Westchester (NY) Library System, explained that his system began collecting electronic resources by looking at the formats and genres that were popular in print formats. Despite now having both statistical and anecdotal evidence of several years’ e-book usage, Kirchner admits that he now has more questions than answers about the importance of acquiring electronic resources. He hopes that he will be able to collect additional statistics when the system begins adding MARC records for e-resources to the libraries' catalog. He is also looking for a way to learn more about the types of people who are using e-resources and whether or not they return to use the same products repeatedly.

Tom Wright, Brigham Young University, began by reminding attendees that, at least from the academic institution’s point of view, electronic resources, particularly indexes and journals, are not new. Despite this, Wright admitted that e-books do cause some problems for libraries: it is difficult to define a “use” for an e-book, and even when statistics are available it can be almost impossible to compare them across suppliers.

Although many questions remain, BYU librarians decided several years ago that e-books are, indeed, worth it. They have purchased approximately 450,000 e-books, mostly in the form of publisher-specific packages and platform-level subscriptions. Using the statistics that are available, Wright and his colleagues have determined that rates of usage for print and electronic resources are shockingly similar. Only about 20 percent of the libraries’ holdings in both formats are being used each year.

The final speaker, ebrary CEO and co-founder Christopher Warnock, was a clear proponent of e-books. He faced much disagreement when he stated his belief that “there is nothing inherently valuable in a printed page that adds value to research.” One attendee responded, angrily, that publishers and libraries cannot simply stop collecting print, because there will never be a time when all information is available electronically. Warnock countered that more information is already available electronically than in print, stating that 92 percent of the information produced in 2002 was stored electronically. Warnock said that he knows some people will always cherish the printed book and that e-books will never replace the print world entirely. He believes, however, that e-books can be more available to more people than printed books: the notion of checking out a book to a single user need not exist; content can be available even when the library is closed; a single copy can be made ADA compliant by increasing font size; content can be data mined for improved discoverability; and publishers should be able to produce and sell e-books for less than print equivalents while increasing their profits. Warnock closed by quoting William Gibson: “The future is already here, it is just not evenly distributed.” He firmly believes that libraries have the best chance of improving distribution, as they are the only institutions that can make all of this information available and accessible to everyone, free of charge.

ALCTS Forums

Continuing Resources Section Standards Update Forum

Ngoc-My Guidarelli, Virginia Commonwealth University

The goal of this forum was to present the development of an institutional identifier (I2) standard and to solicit audience comments. Best practices to manage electronic continuing resources were also discussed.

Grace Agnew, Associate University Librarian for Digital Library Systems, Rutgers University, gave an update on the NISO Institutional Identifier Working Group. The group is charged with developing “a robust, scalable, and interoperable standard to uniquely identify institutions and to describe relationships between entities within them.” The standard includes a unique identifier for institutions in the information supply chain. In Phase I, the group surveyed institutional repositories and library workflows. In Phase II, it studied existing standards and developed the purpose, environment, and structure of the new standard. The I2 midterm report can be found online.

Regina Romano Reynolds, International Standard Serial Number (ISSN) Coordinator, Library of Congress, discussed the challenge of finding articles in some aggregators’ databases due to inaccurate journal citations. Web sites should “accurately and consistently” list the title “under which the content is published,” including the ISSN. A NISO working group is developing a recommended practice for publishers and platform providers that focuses on title presentation, accurate use of the ISSN, and citation practices for e-journals.

Finally, Brian Green, Executive Director, International Standard Book Number (ISBN) Agency, discussed the development of the ISBN over the past forty years. Computers, and then e-commerce, have driven its expansion. Because of inconsistent use of ISBNs for e-books, the International Digital Publishing Forum recommended that each version of an e-book, as delivered on different platforms, should have a unique ISBN. He also discussed the use of the ISSN and ISBN in relation to digital publications. A question-and-answer session followed the presentations.
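
For example, under this recommendation the EPUB, Kindle, and PDF editions of a single title would each carry its own ISBN, just as the hardcover and paperback printings of a print title do.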


The Pennsylvania Study: Research into Optimal Environmental Conditions (PARS Forum)

Kimberly Peach, Yale University

Neal Rusnov, an architect for the Commonwealth of Pennsylvania, and Tom Clareson, Senior Consultant for New Initiatives at LYRASIS, discussed a project in which they are involved that studies the effects of environment on paper-based materials. The project, funded by an Institute of Museum and Library Services Leadership Grant, is twofold: it includes an analysis of the recent state-of-the-art renovations of the Pennsylvania Rare Collections Library as well as a study of the effects of environment on different types of paper housed in sixteen cultural institutions throughout Pennsylvania.

The renovations include redesigns of areas key to the long-term preservation of collections, including HVAC, lighting, fire detection and suppression, security, and storage. While measuring the success of these changes against the project’s goal of preserving the Pennsylvania State Library’s Rare Collections for another twelve generations, the project also evaluates the cost savings and environmental benefits of making a significant monetary investment in energy-efficient equipment and facility design.

Key features of the redesign include:

  • Creating a “building within a building” structure with double walls, vapor barriers and air lock entry systems
  • Separating collections by type of media (e.g., rag paper versus wood-pulp paper) into separate spaces, each with individualized air handling units
  • HVAC systems that bring in no outside air
  • Clean room specifications for some storage spaces
  • 98.7 percent filtered air
  • Long lasting and low heat producing light fixtures
  • Fire detection systems set at very low levels of detection for early response, including a water misting system in public areas and a clean agent system in storage areas

Preliminary results have already shown that the investment is paying for itself while significantly prolonging the life of the collections. The study also includes a research project analyzing the effects of environmental conditions in various cultural heritage institutions across Pennsylvania on different types and ages of paper. The Rare Collections Library is serving as the known control environment. The study monitors temperature, relative humidity, light levels, air pollutants, energy usage, and facilities maintenance practices while measuring the rate of aging of the various paper samples. This research is supported by the National Archives and Records Administration as well as various laboratories around the country, including the Glatfelter Laboratories in Pennsylvania, where specially formulated papers were developed for the project.

Results of the study and tours of the renovations will be presented at the Pennsylvania Project Conference, cosponsored by the Pennsylvania Office of Commonwealth Libraries and LYRASIS.

RDA Update Forum

Miloche Kottman, University of Kansas

Speakers: Sally McCallum (Library of Congress), Beacher Wiggins (Library of Congress), Troy Linker (ALA Publishing), Glenn Patton (OCLC)

Sally McCallum highlighted key changes in MARC21 related to RDA in Updates 9–11; a summary is available online. One of the key changes for bibliographic and authority records is the addition of new 3XX fields. Subfield $e rda in the 040 field indicates that RDA rules were used for the description. MARBI Proposal No. 2010-07, which would revise Leader/18 (Descriptive Cataloging Form) to indicate the absence of ISBD punctuation at the end of subfields, is currently being considered. McCallum closed with a list of LC decisions regarding RDA testing.
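
A minimal illustration of the new coding for an ordinary printed book might look something like the following (the content, media, and carrier terms are standard RDA vocabulary; the agency code is illustrative):

  • 040 $a DLC $b eng $e rda
  • 336 $a text $2 rdacontent
  • 337 $a unmediated $2 rdamedia
  • 338 $a volume $2 rdacarrier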

Beacher Wiggins covered the plans for RDA testing in the United States:

  • June–September 30, 2010: Test participants will familiarize themselves with RDA Toolkit.
  • October 1–December 31, 2010: Records will be created and shared. Records will include twenty-five common titles to be cataloged using AACR2 and RDA, five titles to test copy-cataloging and a minimum of twenty-five additional records that each institution chooses. RDA records created by LC catalogers will be distributed to OCLC.
  • January 2–March 31, 2011: A report will be compiled based on the assessment of the test records along with feedback received from participants. A final decision on adoption of RDA will be made by the 2011 ALA Annual Conference. LC is developing a mechanism for informal participants to take part and provide feedback. Policy or procedure questions about RDA can be sent to LChelp4rda@loc.gov.

Troy Linker of ALA Publishing provided a demo of the RDA Toolkit. June 23–August 31, 2010 was the free open access period. Signing up is necessary for free access, but there is no obligation to purchase and no need to opt out at the end of the open access period if not purchasing. A solo-user subscription is now available for single-user environments. Pricing information for a print version of RDA will be available in the fall.

Glenn Patton, OCLC, discussed OCLC and RDA testing. OCLC is working to ensure that Connexion supports the creation of RDA records. OCLC plans to offer training webinars for test participants and general sessions for everyone else. The Contract Cataloging Service team will be participating in RDA testing. A policy statement on RDA cataloging in WorldCat is available online.


Continuing Resources Section Forum

Shana McDanold, University of Pennsylvania

The ALCTS Continuing Resources Section Continuing Resources Cataloging Committee (CRCC) Update Forum opened with a welcome and introductions from the current chair, Christopher Walker, followed by reports from various groups. Regina Reynolds gave a brief ISSN update, covering the recent population of OCLC with the ISSN-L (with a thank-you to Robert Bremer); the current pilot program assigning ISSNs to integrating resources and the specific types of integrating resources included and excluded; the informal testing of ISSN assignment using RDA rules at both the U.S. Center and the international ISSN centers; the current discussion of conflicts between ISSN assignment rules and AACR2 title change rules; and the upcoming ISSN directors meeting in October 2010.

Les Hawkins provided the CONSER update, including the open access journal pilot project (more information is available on the CONSER web site) and a review of the CONSER At-Large meeting. Hawkins reported on three SCCTP workshops given as ALA preconferences; all three had been recently revised to accommodate rule changes. He pointed out that information on the use of the 588 field is now available on the CONSER web site.
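
For readers unfamiliar with it, the 588 field carries source-of-description notes, which are used heavily in serials cataloging. Typical applications look something like the following (the issue data here is hypothetical):

  • 588 Description based on: Vol. 1, no. 1 (Jan. 2010); title from cover.
  • 588 Latest issue consulted: Vol. 1, no. 6 (June 2010).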

The last report was from the Committee on Cataloging: Description and Access (CC:DA), given by Nancy Poehlmann. Poehlmann prefaced her report with the observation that serials are not addressed much in CC:DA meetings. Most of CC:DA’s meeting focused on RDA and the release of the RDA Toolkit. There are some problems with the Toolkit’s searching features and handling of Unicode characters, as well as gaps in some of the examples; these issues will be fixed.

The main portion of the Update Forum focused on two presentations about testing serial records in RDA. The PowerPoint presentations are available both on the ALCTS CRS CRCC Connect page and from the ALA Conference Materials Archive. Note that the posted version of Renette Davis’s PowerPoint includes a corrected slide 11.

The first presenter was Tina Shrader, National Agricultural Library. Shrader provided an overview of the U.S. national libraries’ formal testing of RDA and how groups may participate informally. She began with a review of the history of the development of RDA and background information on the U.S. national testing procedure. The purpose of the test is to determine if the changes in RDA are significant enough to justify the cost of training and implementation, essentially: what is the return on investment for RDA? Goals include assessing the operational, technical, and economic feasibility of implementing RDA. Shrader reviewed the test procedures, participants, and the testing timeline. Libraries interested in being informal testers have access to the methodology and a survey (currently under development) to share their results with the formal testing libraries.

Shrader also reviewed the test methodology. There will be a common set of twenty-five items in multiple formats and a common copy cataloging set of five items that will be cataloged by all the formal participants using both RDA and AACR2. Cataloging for both common sets will focus on description and omit subject cataloging and classification. Finally, there will be an extra set cataloged by all participating formal testing libraries to include a minimum of twenty-five items from their regular workflow. Each institution has agreed to focus on one format/type to ensure a good distribution of formats in the sample. In addition to the review of the records themselves, libraries will respond to surveys.

Shrader pointed out that the handout on U.S. RDA Test Resources is also available online from the Library of Congress web page, with additional documentation regarding decisions on the CPSO page. Regarding informal testing, Shrader reviewed what libraries can do to prepare, including training, local policy decisions, adjustments to the local ILS to accommodate changes in MARC21, and establishing templates and macros for standard data.

All test results, including surveys from informal testers, will be part of the analysis starting in January 2011. No matter what the U.S. decision on implementation is, Shrader stressed that the testing will help inform future adjustments to RDA for the rest of the world and will help improve the IFLA models and principles on which RDA is based.

The second presenter was Renette Davis from the University of Chicago. The University of Chicago is a formal tester, and Davis’s presentation focused on the test preparation work being done there. The university has created a web page with training documentation, including cheat sheets, links to outside information, and examples. In addition to being an official tester itself, the university has the benefit that its ILS is also an official tester. Preparation began in house in January 2010 and included creating records and loading them, locally only, into their system to review display issues.

The majority of Davis’s presentation focused on examples. Davis stressed that they did not really know what they were doing when they created the initial examples, but that it was a good exercise: it forced them to focus on documentation and to figure things out. The examples shown placed an RDA record and an AACR2 record side by side to illustrate the changes and differences. They addressed the following: the replacement of the GMD by fields 336, 337, and 338; differences in marks of omission and use of abbreviations; the lack of supplied other title information; edition statements; the transcription of the publisher as it appears on the resource; the end of S.l. and s.n.; multiple media and carrier types in a single record; the lack of abbreviations in the 362 field (a practice currently supported by the CONSER Standard Record); and the cataloging of reproductions.
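
As a simplified illustration of two of these differences (a constructed example, not one of Chicago’s actual test records), an AACR2 record for an online resource might carry:

  • 245 $h [electronic resource] and 260 $a [S.l. : $b s.n.]

while the corresponding RDA record drops the GMD in favor of:

  • 336 $a text $2 rdacontent
  • 337 $a computer $2 rdamedia
  • 338 $a online resource $2 rdacarrier

and spells out “[Place of publication not identified]” and “[publisher not identified]” in place of the Latin abbreviations.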

The final portion of the Update Forum was a discussion, moderated by incoming chair Jennifer Young, about organizing informal testing of RDA. It was noted that the CRCC was originally formed to test and provide feedback on AACR2, so it follows that the committee would do the same for RDA. Young then reviewed what the CRCC is proposing for informal testing. The committee will seek permission from ALA to continue access to the RDA Toolkit after the open access period ends; this access will likely be limited to serials catalogers participating in the CRCC informal testing, not extended to all catalogers at an institution. The CRCC will focus on mixed formats, including both serials and integrating resources, as well as records following the separate record option and the single record option. The informal testers will likely have a space in ALA Connect for discussion and will follow the same timeline as the formal testers. There will most likely be no training for informal testers. The committee will form a task force to organize the informal testing and to provide the access and a central location for collecting feedback and records for submission. Judy Kuhagen confirmed Beacher Wiggins’s indication that there will be a mechanism for informal testers to submit records and feedback. LC will be collecting records, but Kuhagen stressed that it would be preferable for the records to be funneled through one source, such as the CRCC, rather than coming from multiple individuals.

Young opened the floor for questions, suggestions, and feedback about the CRCC’s informal testing proposal. Questions that still need concrete answers include how many records informal testers should create, what the feedback mechanism will be, and what guidelines will govern feedback and submitted records. One recommendation is that libraries continue to do AACR2 cataloging in OCLC (once a record is created in either RDA or AACR2 in OCLC it must stay that way: no conversions and no duplicate records) but create RDA records locally to contribute to the informal testing. This is especially important for libraries that work in cooperative programs such as CONSER. In response to a question about handling authority work, Judy Kuhagen pointed out that anyone can contribute an RDA authority record to the National Authority File after October 1, 2010. There were several questions about ILS vendors; the general response was to talk to one’s vendor directly about handling RDA records and the new MARC21 fields. Libraries without direct vendor contact were advised to raise issues and questions through their consortium.

The final question was how to start the informal testing process. Young responded that they will put out calls via electronic discussion lists after a task force is established. Christopher Walker then thanked the presenters and attendees and closed the session.

PVLR Forum

Beth Hoskins, Duke University Press

The session was moderated by Elizabeth Lorbeer, University of Alabama, PVLR Co-chair. The panel represented library, vendor, and publisher perspectives on patron-driven acquisition of e-books:

  • Nancy Gibbs, Duke University Libraries
  • Matt Nauman, YBP Library Services
  • Kari Paulson, Ebook Library
  • John Riley, Busca, Inc.

Gibbs discussed Duke University’s participation in ebrary’s patron-driven pilot program, in which the library spent $25,000 in one month without publicizing the pilot to patrons. She identified areas where library workflow must change to accommodate patron-driven selection; better matching routines and easier loading and removal of catalog records are two of the changes Gibbs felt were needed. Nauman described patron-driven acquisition as the future of e-books and discussed how it will change the roles of book vendors, publishers, and librarians. He also described an “ideal patron-driven model,” which would include approval plans, business rules, and short-term content loans.

Paulson spoke to EBL’s experience in the e-book industry and about tools EBL has created to assist librarians in administering patron-driven acquisitions. She pointed out that patron-driven acquisition should not be all-or-nothing, but rather a tool to maximize library budgets through return on investment. Riley ended the panel with the perspective of a small publisher new to the e-book market and spoke to how e-books will enhance opportunities for disseminating content overseas. An overarching theme of the panel was the definition of an e-book “use” and how further clarification in this area will make patron-driven acquisition an appealing model for all parties.
