These reports summarize ALCTS interest group activities up to and during the 2012 ALA Midwinter Meeting in Dallas, Texas, held January 19–23, 2012.
ALCTS Division Interest Group Reports
Automated Acquisitions/In-Process Control Interest Group
In her presentation “Regional Usage Trends in Shared E-book Collections,” Clare Appavoo summarized usage trends for a humanities and social sciences e-book collection hosted on MyiLibrary and purchased by the Canadian Research Knowledge Network (CRKN) on behalf of 67 member libraries. The collection was purchased over three years, and analysis revealed that of the 20,459 titles in the collection, only 11 were not accessed during the first 2.5 years. Subject-level usage data also generated considerable interest. Discussion followed.
Suggestions for topics for the Annual meeting were also discussed. They included designing workflows to ensure access to all electronic resources purchased, determining the content of purchased packages and keeping title lists and access up to date, and designing an acquisitions fund structure that meets the requirements of both vendors and libraries. A call for presenters will be made.
Creative Ideas in Technical Services Interest Group
Thirty-one librarians gathered for the Midwinter Meeting of the ALCTS Creative Ideas in Technical Services Interest Group. Several weeks before the meeting, the group’s chair, Libbie Crawford (OCLC), and chair-elect/vice chair Wyoma vanDuinkerken (Texas A&M University) selected discussion topics and prepared leading questions to facilitate the discussions. The topics were distributed to several electronic mailing lists and posted on the group’s space in ALA Connect.
Attendees spent the first hour of the meeting discussing, in small groups, the topics they found most interesting or challenging in their environments. Facilitators and recorders were chosen for each group. The meeting concluded with one member from each group reporting on its discussion. The reports follow.
Pros and Challenges of Rapid Cataloging, facilitated by Catherine Grove (Northwestern University). Recorded by Chris Case. The members of the group introduced themselves, revealing a mix of public and academic libraries. They defined rapid cataloging and found that the term has different meanings in different places. UCLA defines it as a process of bibliographic description that gets books into users’ hands as quickly as possible. For some institutions, it means shelf-ready materials received as part of approval or profile plans. Another set of institutions defines rapid cataloging as evaluating a small list of criteria in a record, rather than checking every part of the record. Finally, some institutions define it as the use of Bib Notification via OCLC.
Issues that were discussed include:
- Administrators want catalogers not to check anything in the records that are received or downloaded. This is generally not seen as practical, since the records often contain too many significant errors.
- Administrators do not understand record quality issues very well.
- Limited staffing needs to be addressed. Cataloging departments cannot have multiple people dealing with one book multiple times; this is too inefficient.
- Should cataloging be done in OCLC or in the local system? Many catalogers work in the local system, even though it often makes the most sense from a community standpoint to work within OCLC itself.
- What is the cost/benefit of checking call numbers? The tolerance for errors needs to be determined based on institution standards for access by users. If a user cannot find an item based on a call number that is wrong, the item is essentially lost.
Some possible solutions:
- One idea was to entirely remove e-journal records from the catalog and rely on next-gen user discovery and/or link resolvers to facilitate access to that content.
- Why do some formats lend themselves more readily to rapid cataloging?
  - Possibly confidence in the source of the records
  - More likely, because they are less visible (e.g., electronic resources, e-books)
- What kinds of materials should be candidates for rapid cataloging?
  - Monographs, some serials, and bulk-loaded e-resource bibliographic records
- The benefits of rapid cataloging include time savings and improved service to users.
- The benefits of rapid cataloging include time saving and improved service to users.
Finally, catalogers can meet these challenges by:
- Focusing on other benefits to users, such as the ability to reduce backlogs
- Addressing items that would not otherwise be addressed
- Focusing more effort on the materials that require original cataloging
- Enhancing cross-training opportunities within the institution
- Using this opportunity to evaluate and redistribute workflows
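The “small list of criteria” approach to rapid cataloging described above can be sketched as a simple rule set. This is a hypothetical illustration only; the field names and the criteria themselves are assumptions for the example, not any institution’s actual policy:

```python
# Hypothetical rapid-cataloging check: rather than reviewing every part of a
# record, evaluate only a short list of high-impact criteria. A record that
# passes is fast-tracked; a failure is routed for fuller review.

REQUIRED_FIELDS = ["title", "call_number", "isbn"]  # assumed criteria

def rapid_check(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing {field}")
    # A wrong call number effectively loses the item for the user,
    # so it gets a minimal sanity check even in rapid mode.
    call_number = record.get("call_number", "")
    if call_number and not call_number[0].isalpha():
        problems.append("call number does not start with a classification letter")
    return problems

record = {"title": "Example Monograph",
          "call_number": "Z693 .E93 2012",
          "isbn": "9780000000000"}
print(rapid_check(record))  # prints [] : record can be fast-tracked
```

The design point mirrors the discussion: tolerance for error is set by institutional standards for user access, so the criteria list stays short but targets the fields whose errors cost users the most.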
Patron Driven Acquisitions, facilitated by Wyoma vanDuinkerken (Texas A&M University). Recorded by Sheli McHugh (University of Scranton). Several members are looking at YBP and testing MyiLibrary. At Texas A&M, patrons used to e-mail requests and subject selectors made the purchase decisions. Now each request goes to the acquisitions department, and everything is purchased automatically up to a price limit of $150. Acquisitions staff must keep the collection in mind and consider whether each title fits within it.
The YBP agreement is a demand-driven acquisitions plan (for e-books) with a defined scope and cost: US$20,000–30,000 overall, with a US$200 per-item limit. Setting up a profile requires being very specific and exact about what is wanted. The profile includes sending the vendor, on a regular basis, lists of what has already been purchased, so that the library does not receive records for items it already owns. Texas A&M sent YBP a list of everything purchased from 2001 to the present, then specified that it wanted everything from 2009 forward. Specifications were defined for two types of records. Discovery records are loaded into the catalog and are basically vendor MARC records (quality ranges from decent to poor); these records are massaged prior to loading into the ILS. A purchase is triggered by a certain number of views, downloads, etc. Once a purchase is triggered, the vendor sends a second record, at a cost of $2, containing the invoice information; Texas A&M overlays the discovery record, now owns the item, and sets holdings in OCLC. This process cannot be incorporated into OCLC PromptCat, so copy catalogers must then go in and download an OCLC record.
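The trigger-and-overlay flow described above can be sketched roughly as follows. The trigger threshold, the record fields, and the function names are illustrative assumptions, not YBP’s actual parameters:

```python
# Rough sketch of a demand-driven acquisition trigger: usage events accumulate
# against a discovery record until a threshold is crossed; the purchase then
# fires (if the item is under the per-item cap), and the fuller invoice
# record overlays the brief discovery record in the catalog.

TRIGGER_EVENTS = 3     # assumed number of views/downloads that fires a purchase
PER_ITEM_CAP = 200.00  # per-item price limit from the profile

def record_use(title):
    """Count one use; return 'purchase' when the threshold is crossed."""
    title["uses"] = title.get("uses", 0) + 1
    if title["uses"] >= TRIGGER_EVENTS and title["price"] <= PER_ITEM_CAP:
        return "purchase"
    return None

def overlay(discovery, invoice_record):
    """Overlay the brief discovery record with the fuller purchased record."""
    merged = dict(discovery)
    merged.update(invoice_record)  # invoice data wins on conflicting fields
    return merged

title = {"id": "ebk001", "price": 95.00}
outcome = None
for _ in range(3):
    outcome = record_use(title)
print(outcome)  # prints 'purchase' on the third use
```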
For consortia, records are loaded from Baker and Taylor for print books that might be popular. Patrons can place holds, and once a title has a certain number of holds, an order is triggered.
It is a challenge for the consortium, because the consortium does not want to burden a library that may not be able to purchase the titles. The consortium would prefer to have a pool of money to buy on behalf of all members. It would basically buy everything patrons want, but would like to streamline the process. Much of the current process is tradition: selectors need to look at the titles, but that makes the process longer. There is no consortium-wide patron-driven acquisitions program.
When setting up the pilot, the institution brought in people from all departments (cataloging, reference, acquisitions, etc.) to set up the profile and to understand the mechanics of the processes behind this type of arrangement. It is important to have a pool of money rather than individual funds.
One library has its faculty select titles, but with no purchasing power; librarians have a small pool of money that is used for reference. This library hopes that will change if its pilot project succeeds.
Creating Efficiencies in the Copy Cataloging Department, recorded by Angela McKinney (Library of Congress).
More and more institutions are relying on automated efficiencies to streamline the copy cataloging process. Initial Bibliographic Control Records (IBCRs) are overlaid using a Z-processor. Additionally, shelf-ready cataloging is becoming more widely available from vendors and other entities. Verification is done primarily in the local system, or by editing records in MarcEdit and then uploading them to the local system. In some cases the acquisitions department does the copy cataloging and sends items for original cataloging only in certain instances.
FRBR Interest Group
Qiang Jin, Senior Coordinating Cataloger and NACO Coordinator at the University of Illinois at Urbana-Champaign Library, gave a presentation based on her upcoming book “Demystifying FRAD: Functional Requirements for Authority Data.” She explained the basic principles of the FRAD model and how it relates to RDA and to creating RDA access points.
Shawne Miksa, Associate Professor in the Department of Library and Information Sciences at the University of North Texas, presented her conception of how cataloger tasks relate to user tasks across all of the Functional Requirements models and invited discussion and comment.
There was a question-and-answer session at the end of the presentations.
Linked Library Data Interest Group (with LITA)
The co-chairs welcomed the approximately forty-five attendees, and pointed out the lightning talk sign-up sheets posted to the back of the room. During initial housekeeping and the circulation of the attendance sheet, five or six individuals signed up for lightning talks.
Jon Voss, the organizer of the initial June 2011 Linked Open Data for Libraries, Archives, and Museums (LOD-LAM) Summit, spoke to the group about the LOD-LAM community (http://lod-lam.net/) and about next steps as more cultural heritage institutions explore and use these techniques. Attendees came to the initial Summit from around the world, and follow-up events have been held in Washington, DC, London, and New Zealand. As this momentum continues to build, Jon urged the community to communicate with one another about what its members are doing and how they are doing it.
Jon discussed the need to publish proceedings from the Summit and the various follow-up activities. This summary would include information about tools for publishing and ingesting as well as information on copyright, licensing, and how to discuss LOD-LAM with institutions and gain support.
Additional next steps include fund-raising for quick implementations. World War I data was mentioned as one possible example, and there has been a subsequent thread about WWI linked data on the LOD-LAM list: https://groups.google.com/group/lod-lam/browse_thread/thread/5a0c2a10059e8c1d
Jon also asked whether anyone had an archive of the #lodlam hashtag, as it was lost when Twapperkeeper discontinued archives.
Reports and Announcements
- Rachel Frick, Director of Digital Library Federation (DLF), mentioned that DLF is continuing to engage in linked data, and has launched an interest group: http://www.diglib.org/community/groups/linkeddata/
- Corey Harper mentioned the Codecademy’s Code Year project, and pointed the group to the listing of more than seventy catalogers learning about programming through this project. http://catcode.pbworks.com/w/page/49329140/Participants
- Becky Yoose has since launched a Q&A page based on the Stack Exchange model to provide a space for the “catcode” community to work together: http://www.libcatcode.org/
- Corey announced the LOD-LAM-NYC event happening on February 23 at the New York Public Library: http://lod-lam.net/summit/2012/01/27/lodlam-nyc/
- Corey also announced the formation of a Linked Open Data interest group within the International Group of Ex Libris Users (IGELU). This is hopefully the first of many vendor-specific efforts to encourage library system developers to begin incorporating linked data principles into their products: http://igelu.org/special-interests/lod
Tools and Resources Subgroup Report
Laura Akerman presented the ALA Connect prototype work to collect tools and resources about Library Linked Data: http://connect.ala.org/node/159885
Laura indicated that there are problems with using ALA Connect to manage this kind of resource. It’s difficult to edit, requires a login, is slow to update, is static, and doesn’t provide a mechanism for commenting on specific tools. A variety of other options were discussed, and Rachel offered the DLF sponsored LOD-LAM Zotero Group: http://www.zotero.org/groups/lod-lam
The Tools & Resources subgroup agreed to try using the Zotero Group and to report whether this seems more manageable than the ALA Connect page.
Robin Wendler and Suzanne Pilsk reported on the workshop at the DLF Fall Forum linked data session: http://www.diglib.org/forums/2011forum/schedule/moving-forward-examining-the-need-to-re-tool-lams-data/
Notes from this workshop have been posted on the “LAMs Metadata” blog: http://lamsmetadata.blogspot.com/ They asked the group whether this blog might have a life beyond the workshop, and whether it might be useful to the Interest Group.
Robin and Suzanne also led a discussion on outcomes of the CURATEcamp Coder / Cataloger session at DLF: http://www.diglib.org/forums/2011forum/schedule/curatecamp-catalogers-coders/
Corine Deliot reported on the British National Bibliography linked data efforts. In August, a subset of 2.6 million records was converted into over 80 million triples. Ongoing cleanup and remediation work is now taking place. They have also published the RDFS classes and properties used in the published data. Next steps include refreshing the data, continuing to improve normalization, extending the data model, and beginning to look at serial and multipart modeling.
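As a rough illustration of why millions of records fan out into tens of millions of triples, each field of a bibliographic record yields one or more subject-predicate-object statements. The URI pattern and predicate names below are invented for the example; the BNB uses its own published RDFS vocabulary:

```python
# Toy record-to-triples conversion: one record produces several statements,
# so a 2.6-million-record file plausibly yields tens of millions of triples.
# All URIs and predicates here are hypothetical.

def record_to_triples(record):
    subject = f"http://example.org/bib/{record['id']}"  # assumed URI pattern
    triples = [(subject, "ex:title", record["title"])]
    for creator in record.get("creators", []):
        triples.append((subject, "ex:creator", creator))
    for topic in record.get("subjects", []):
        triples.append((subject, "ex:subject", topic))
    return triples

record = {
    "id": "0123456",
    "title": "Linked Data for Libraries",
    "creators": ["Smith, A."],
    "subjects": ["Semantic Web", "Library science"],
}
triples = record_to_triples(record)
print(len(triples))  # prints 4: four statements from a single brief record
```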
Karen Coyle brought up the topic of schema.org and Microdata. A discussion covered where there were gaps in the vocabulary, who in libraries was using it, and whether it meets any of the needs emerging in the linked library data space.
Jenn Riley presented a call to action for libraries to ramp up efforts to become consumers of linked data. We need to be able to get data from multiple places, evaluate it, identify good data sources, and track and collect provenance information. We need a community-wide definition of what consuming linked data means and what user interfaces would look like. She pointed to the outcomes of the Stanford Linked Library Data Workshop, in particular the Technology Plan, as a good example of what we should be discussing: http://www.clir.org/pubs/abstract/reports/pub152
The remainder of the session was spent on a business meeting, planning for the IG’s preconference at Annual (Friday, June 22, 8:30 am–4:30 pm), and a call for volunteers to take over as co-chairs following the Annual meeting in Anaheim.
MARC Formats Interest Group (with LITA)
The MARC Formats IG held a two-hour discussion meeting at which three invited presenters spoke on desiderata and potential solutions in a post-MARC environment. Speakers were Kelley McGrath, Jennifer Bowen, and Diane Hillmann. Chew Chiat Naun moderated. The meeting room was at capacity and approximately ninety people signed the attendance sheet.
Kelley McGrath spoke on limitations of the MARC standard that its successor should aim to overcome. Jennifer Bowen described how eXtensible Catalog handles MARC-sourced data in its FRBR-compliant schema, and talked about plans to support linked data. Diane Hillmann spoke on her project to present MARC as a set of RDF properties in the Open Metadata Registry.
A lively discussion followed the presentations, covering such issues as the coexistence of multiple ways of coding similar data, and the ability to preserve granular data in a mixed general environment.
The presentation slides are available via http://connect.ala.org/node/163477. Mark Ehlert, who was originally slated to speak at this meeting but was unable to travel, also contributed a written paper on the relationship between RDA and MARC.
Metadata Interest Group
The 2012 Midwinter Metadata Interest Group meeting featured two presentations on metadata for digital video resources and one on metadata repurposing. Jason Kovari, Metadata Librarian at Cornell University Library, presented “Video Metadata @ Cornell: Implementing Kaltura”; Amy Rushing, Head Librarian, Digital Access Services, University of Texas Libraries, presented “Preservation and Access Metadata for Born-Digital Video”; Maureen P. Walsh, Metadata Librarian, Ohio State University Libraries, presented “Repurposing Metadata for an Institutional Repository.” All three presentations are available at our ALA Connect site: http://connect.ala.org/node/65847
The business meeting included updates from the MLA, CC:DA, and LITA liaisons and from the programming chairs. The group discussed programming for Annual 2012. One program, on the changing role of the cataloger, had to be converted from a program into the content of the MIG meeting at Annual because of a miscommunication with another group with which MIG was going to collaborate. The other program, on data sets, will be planned in concert with LITA; invitations will be sent to suggested speakers soon. This program will take place on Monday at 8am because of a conflict with the LITA President’s program at the original time.
New Members Interest Group
The ALCTS New Members Interest Group met on January 21, 2012 at ALA Midwinter. Amy Jackson, ANMIG chair, welcomed new members and gave an overview of ALCTS. Mary Mastaccio, chair of CaMMS, provided detailed information about CaMMS and how new members could get involved in the section. Betsy Simpson also welcomed new members and talked about the importance of volunteering. Following these introductions, new members were paired with veteran members for mentoring. Pairs discussed individual goals and navigation of the conference. Several mentor/mentee pairs had lunch together following the meeting. Yoko Kudo stepped down as Web Coordinator and was replaced by Emily Sanford.
Public Libraries Technical Services Interest Group
E-materials management in the public library was presented as a topic for discussion. Questions discussed were:
- Does the library load bib records into the ILS, or depend solely on a vendor’s web site?
- If bib records are loaded, are items attached as well?
- How much detail is input into the bib and item records (if added), such as a web link, call number, or pricing?
- Who gets their records from vendors (admittedly mostly OverDrive, although one TLC client was present), and who catalogs through OCLC?
- Has anybody signed with a vendor other than OverDrive? (One library uses Recorded Books, and one will be working with B&T’s Axis360.) Some general discussion about processing issues also occurred.
Ideas for topics for the summer ALA meeting were also tossed around. The one that generated the most interest was RDA: not cataloging training per se, but how to manage the implementation in the library as the date gets closer, with subtopics such as getting administration and other staff on board. One person volunteered the name of a speaker and also said she would organize the presentation (secure a commitment, etc.) for Anaheim.
Publisher/Vendor/Library Relations (PVLR) Interest Group
The Publisher/Vendor/Library Relations Interest Group (PVLR IG) forum panel discussion was sponsored by the Association for Library Collections & Technical Services (ALCTS) and took place at ALA Midwinter on Monday from 8am to 10am. The topic was “What Does Electronic ILL Mean to You?” The session was well attended, with between sixty and one hundred attendees present.
Kim Steinle, the PVLR co-chair, moderated the discussion and introduced the speakers. The panel of four experts included two librarians and two industry representatives.
- Nora Dethloff, Assistant Head of Information and Access Services, University of Houston
- Cherié Weible, Acting Head of Central Access Services, University of Illinois at Urbana-Champaign
- Katie Birch, Portfolio Director - Delivery Services, OCLC (UK) Ltd
- Trina Wilson, Product Manager, MyiLibrary, Ingram Library Services
Nora started off with a simple rubric that her ILL team uses to determine how to handle an interlibrary loan request. They ask themselves: 1) do we own it, 2) can we lend it, and 3) where is it. Once these answers are determined, they print, scan, and send. The M.D. Anderson Library handles 20,000 ILL borrowing requests per year, which works out to more than 50 per day. They judge how well they are doing by their fill rate and their turnaround time. Because of this, they want to handle requests as quickly and as cheaply as possible; this means the less they handle a request, the better their stats are.
They track licenses using an Excel spreadsheet. Determining ownership and license compliance for ILL can be onerous, particularly for e-journal content. Licensing language is often convoluted and difficult to understand, so her ILL team will sometimes opt to avoid using e-journal content because it is more complicated than an “ordinary” print request. They basically consider there to be no ILL for e-books: even when it is allowed (as with Springer), they do not do it, because the high-profile legal cases in the news involving academic libraries and copying have had a chilling effect on doing any ILL that is not the norm. She briefly mentioned Odyssey and the trusted-sender feature in the ILLiad system.
Effectively, electronic ILL for Nora’s team is nonexistent; they are still working in a print world. Even if they own an e-journal article, they have to print it, scan it into an image, and then send that image as a PDF to the requestor. ILL librarians want an easier workflow and wonder why they signed away those rights for electronic content.
Nora shared a list of her e-ILL wishes:
- let’s figure out how to share e-books
- let’s transmit electronic content electronically
- publishers should respect our rights to copy content we own
- don’t make librarians jump through hoops
- let’s have the same rules apply to everything, everywhere.
Cherié Weible opened her presentation by stating that the University of Illinois at Urbana-Champaign is a net lender, with 65,954 lending and 25,723 borrowing requests. High volume is what’s important to her team, since they are handling so many requests. When they retrieve an OCLC ILL request, they print the article, scan it, and deliver it via Odyssey to the borrowing library. There are too many limitations on loaning e-content, whether it’s e-journal articles or portions of e-books. There are too many different types of e-reading devices, so lending any e-book content is problematic. Their current workflow is quite cumbersome, because of all the restrictions and decision trees. Cherié mentioned many of the same concerns that Nora brought up.
Academic library users have different ILL needs than public library patrons. They need longer loan periods than the three weeks the OverDrive system provides; the ability to renew a check-out easily; an expanded purchase option for e-books after the initial loan; the ability to export notes and marginalia rather than having them tied to a specific e-book; e-books in PDF format; and the ability to search and discover e-content via the local library catalog and the Internet.
Katie Birch talked about WorldCat. There are seven thousand member libraries worldwide, and they handle ten million requests per year. Everyone would like to get users what they want in a timely way: in minutes and hours rather than days and weeks. Katie also mentioned that the interlibrary loan system was first conceived in 1550 by several Italian libraries that wanted to loan materials to one another.
Electronic ILL to her means being able to really lend electronic content electronically without resorting to clunky systems of printing, scanning, and sending as an image. True e-ILL would allow for short-term access to e-resources and just-in-time purchasing, so that librarians can make quick decisions “on the hoof.”
OCLC’s Article Exchange provides a cloud-based document delivery tool. Large files can be uploaded to a dropbox in the cloud for lenders and users to access via URL and password. Once a file has been picked up for the first time, it will remain available on this site for five days. After five days, the file is removed. A file can be picked up a maximum of five times for each URL/password combination. Files that are never picked up are removed after thirty days. This allows for an easy, three-step enhanced sharing of articles with built-in rights management. The license management tool provides clearly defined decision trees to indicate which collections and titles are licensed for ILL, and any instructions or restrictions for lending licensed content. Workflows for articles held electronically by a lending library are simplified.
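The pickup and expiry rules described above (a file expires five days after its first pickup, after five pickups total, or after thirty days if never retrieved) can be modeled as a small availability check. This is an illustrative sketch of those stated rules, not OCLC code:

```python
from datetime import datetime, timedelta

# Sketch of Article Exchange-style availability rules, per the session:
# a posted file expires five days after its first pickup, after five
# pickups in total, or after thirty days if it is never picked up.

MAX_PICKUPS = 5
DAYS_AFTER_FIRST_PICKUP = 5
DAYS_IF_NEVER_PICKED_UP = 30

def is_available(posted, first_pickup, pickups, now):
    """Return True if the file can still be retrieved at time `now`."""
    if pickups >= MAX_PICKUPS:
        return False
    if first_pickup is None:
        return now - posted < timedelta(days=DAYS_IF_NEVER_PICKED_UP)
    return now - first_pickup < timedelta(days=DAYS_AFTER_FIRST_PICKUP)

posted = datetime(2012, 1, 1)
# Picked up once on Jan 3; still within the five-day post-pickup window on Jan 5.
print(is_available(posted, datetime(2012, 1, 3), 1, datetime(2012, 1, 5)))   # True
# Never picked up; removed well after the thirty-day limit.
print(is_available(posted, None, 0, datetime(2012, 2, 15)))                  # False
```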
Katie showed a slide with several milestones for OCLC’s Marketplace, which will let libraries make just-in-time “buy it” decisions rather than placing an ILL with a lending library. By February 2012, staff will have the option to buy an item rather than place an ILL; by May 2012, they will be able to modify their ILL workflows to support buy-it; by August 2012, the ability to define a buy-it profile will be added to the service configuration; by November 2012, the buy-it profile will be implemented; and by February/May 2013, buy-it will be added to the WMS acquisitions module. In the future, Marketplace also plans to provide a workflow for e-book ILL between Marketplace partners.
Trina Wilson talked about the MyiLibrary system. They have more than 55,000 titles available for loan today and more publishers are joining. MyiLibrary provides: the e-book to loan to the patron, an easy-to-use interface for the library, and compensation to publishers for the ILL usage. Publishers see ILL as a lost sale, but don’t seem to realize it’s an upsell opportunity. Publishers are concerned about the “breaking” of the licensing model. The issue is complex.
Role of the Professional Librarian in Technical Services Interest Group
Co-chair Erica Olivier welcomed participants and introduced her co-chair, Shoko Tokoro, and the two featured speakers, Jane Smith and Eugenia Beh, who would speak on “Perpetual Access: Peaks and Pitfalls.” After the vice-chairs and last year’s chairs were introduced, Olivier asked the approximately forty-seven attendees to introduce themselves.
Jane Smith, Coordinator of Electronic Resources at Texas A&M University, and Eugenia Beh, Electronic Resources Librarian at Texas A&M University, presented on perpetual access and the associated licensing issues with publishers. They first discussed what perpetual access is, briefly covering ownership vs. subscription models and current practice vs. the historical view. The main part of their presentation focused on a review of nineteen of the Texas A&M University Libraries’ current licenses for major journal packages. The speakers examined the licenses to see whether publishers offered perpetual access rights, and additionally surveyed the publishers to clarify license clauses regarding access. Handouts were provided that included the survey questions, examples of actual perpetual access license clauses, and a table of the study’s data. The table recorded whether perpetual access was granted; in what format(s); any known fees; whether postcancellation access was allowed through Portico or other archiving services such as LOCKSS or CLOCKSS; who hosts content when titles are transferred to another publisher; and any publisher replies.
The presenters discovered that seventeen of the nineteen publishers in the study did allow perpetual access. However, Beh and Smith noted that while most publishers are willing to offer perpetual access, they don’t always put it in the licenses. Even if perpetual access is mentioned in the license, the license may not contain all of the pertinent details, and not all librarians are willing to ask the publishers about it. Additionally, some licenses are several years old, and so may no longer be completely accurate in regards to perpetual access.
Several of the most common perpetual access terms were: 1) the publisher allows electronic, non-searchable files in CD/DVD/hard drive formats; 2) the publisher charges a fee for ongoing online access only if the institution does not subscribe to any of the publisher’s other titles; and 3) there is no cost to access previously subscribed content in Portico. However, proof of paid content may be needed for third-party hosting, such as what Portico provides. This is often a challenge given retention schedules for financial files; for example, the state of Texas requires public institutions to retain past invoices for only three years and then discard them.
Beh and Smith then described how the Texas A&M Libraries manage the perpetual access workflow. They enter package codes into their ILS, maintain title lists for previous and current subscriptions, add back file information to the holdings records in their ILS, and use an Access “to-do” list database to track needed changes in SFX and to document title changes and transfers. A diagram of Texas A&M’s e-journal cancellation workflow was shown, and the presenters discussed the impact of perpetual access on the Cataloging, Acquisitions, and Collection Development units. In the future, Texas A&M plans to use CORAL, an open-source ERM, to track perpetual access for individual titles in packages by creating resource records for each title instead of only for licensed packages.
The speakers concluded by asking some questions to prompt audience discussion:
- How to budget for perpetual access in terms of fees and staff time?
- Should lack of perpetual access be a deal breaker in license negotiation?
- Is perpetual access worth it?
- How do you handle perpetual access?
The discussion that followed also covered how audience members were providing access to non-searchable CDs/DVDs at their institutions (many said that the discs were simply stacked on a desk or in a filing cabinet), whether it was possible to put those files in an institutional repository or host them on the library’s web site, and how to deal with the preservation needs of older formats. Some attendees noted that the “access” currently provided is more of an archive than real perpetual access. One audience member stated that her institution has used the fact that it has no way to host or provide access to the files as a reason to avoid cancelling titles. The audience also discussed how to keep other library staff, such as ILL staff, informed of the pertinent information; it was suggested to use the ERM and a link resolver to display license clauses in the OPAC. Another attendee suggested checking the contents of archive files, as she had discovered that the files provided by one publisher contained only tables of contents, and the library would have to pay a fee to get the full text. The session came to a close at 11:26 am.
Betsy Appleton (George Mason University) and Eugenia Beh were invited to join the business portion of the meeting.
Shoko, Erica, Allison, Charles, Wanda, Jack, Betsy, and Eugenia met after the presentation to discuss topic ideas for the Annual meeting. The possible topics suggested mainly focused on the reorganization and restructuring of technical services, with the following possible subtopics mentioned:
- using consultants such as R2
- where do metadata and digitization fit in best organizationally?
- professional vs. paraprofessional
- reclassification of job positions, including how to work with HR and sell to administration
- workflow design
It was decided among the group that an e-mail be sent out to assess interest in the suggested session topics and to solicit possible presenters.
The chairs shared that the overall ALA session schedule will likely be changed in the future, with sessions becoming shorter in length.
Scholarly Communications Interest Group
The ALCTS Scholarly Communications IG held a panel discussion on “Identifiers, Citation, and Linked Open Data as Part of the Scholarly Communications System: New Developments and New Developments from Old Work.”
Joan Starr, EZID Service Manager and Manager of Strategic and Project Planning at the California Digital Library; Suzanne Pilsk, Metadata Librarian at the Smithsonian Institution Libraries; and Chris Freeland, Technical Director of the Biodiversity Heritage Library and Director of the Center for Biodiversity Informatics at the Missouri Botanical Garden, presented on new developments in this space. Identifiers and the ability to cite reliably are key pieces of the underlying infrastructure for the scholarly communications system. With the growing importance of research data as a discoverable and citable piece of the scholarly record, two gaps exist: there has been no means for citing data, and no method for retrieving data from the historical scientific record. Efforts such as DataCite and EZID are addressing the first of these challenges, while the Smithsonian Libraries and the Biodiversity Heritage Library are looking toward the use of linked data to approach the second. A lively discussion followed on the issues raised by all three presenters.
Suzanne Pilsk’s slides are available at: http://www.slideshare.net/SCPilsk/smithsonian-libraries-partnering-in-research
Chris Freeland’s are available at: http://www.slideshare.net/chrisfreeland/bhl-assigning-dois-other-identifiers-to-legacy-literature
Technical Service Managers in Academic Libraries Interest Group
The Technical Services Managers in Academic Libraries Interest Group program drew thirty-nine participants for roundtable discussions led by members of the IG’s steering committee. The general theme of the meeting, “Technical Services in Evolution: Looking into the Future,” was organized around the following six topics:
- RDA
- Acquisitions and Serials in Evolution
- Changes in Technical Services staffing, training, and workloads as electronic and digital resources gain more prominence
- How we as managers develop the necessary mindset to lead change
- New skills current cataloging staff will need in the next few years and how we can provide training
- DDA, social tagging, and other end-user driven trends affecting our work, and their implications for the future of technical services
Table 1 (Facilitator Shannon Tennant) discussed RDA: the preparation, training, and resources cataloging staff will need; the education of other departments within the library as well as library leadership; and conversations with vendors regarding RDA.
Table 2 (Facilitator Judy Garrison) discussed Acquisitions/Serials: how acquisition departments’ organization, workflows and staffing levels have changed as a result of increased availability of e-books and e-journals, PDA, WCP and shelf-ready processing; how relationships have changed with subject specialists, Systems, and Cataloging; and speculation about how acquisitions departments will look in five years.
Table 3 (Facilitator Annie Wu) discussed how workloads in Technical Services change as more focus is put on electronic and digital resources and more physical materials are outsourced, and how staffing and staff training are affected.
Table 4 (Facilitator Roberta Winjum) discussed leading change: how to frame change in a positive way as part of the library’s and university’s strategies and goals; how to achieve staff buy-in through understanding and participation in planning; and the importance of acknowledging that change doesn’t mean past work was not meaningful.
Table 5 (Facilitator Jack Hall) discussed the new skills cataloging staff will need: creating and providing access to digital resources; batch processing and providing access to electronic theses and dissertations; providing non-MARC and non-LC metadata; and transitioning from AACR to RDA.
Table 6 (Facilitator Bruce Evans) discussed end-user trends affecting our work: DDA, social tagging, Just In Case vs. Just In Time collection development, keyword searching, and the future role of catalogers.
Technical Services Workflow Efficiency Interest Group
Our informal, roundtable discussion featured a panel of Ann Ellis, Metadata Librarian at Stephen F. Austin State University; Morag Boyd, Head of Special Collections Cataloging at The Ohio State University; Teressa Keenan, Metadata Librarian at The University of Montana; and Vicki Sipe, Catalog Librarian at the University of Maryland, Baltimore County. The discussion centered on issues surrounding traditional cataloging staff transitioning into creating metadata.
Each of our panelists introduced themselves and their institutions, providing a brief description of their metadata workflow, including the makeup of their staff. Questions were taken from the audience, as well as from the IG co-chairs, and covered a number of topics:
- Are your digitized collections created in-house or outsourced? What types of collections do you create metadata for?
- How long does it take staff to catalog a MARC item? How long does it take for a metadata item?
- Do you teach metadata schema, or only the work that needs to be done? How is the metadata created? Directly into CONTENTdm, Excel, Access or some other input system?
- What were the reactions from your staff to the metadata work? Is everyone who was trained at the beginning of the process still creating metadata? Why or why not?
- What obstacles have you encountered along the way? (Did any extra technology training have to be done? Did staff have any fear for their jobs because of these new workflow directions?)
- Does the cataloging staff do quality control of images when they are creating the metadata?
- What collections were most challenging for your staff?
- Does anyone do quality control checks of the metadata that is created in-house? Who? (Staff or librarian?)
- What were the reactions from others in the library to this work? Did you do any publicity about the collections or the work amongst other library staff?
- What were the lessons learned? What would you do differently if you were to start this process over again?