All Those Programs You Missed: Annual 2014 Programs

Each year, ALCTS members volunteer to cover a program they attend at ALA Annual Conference. These efforts enable the rest of us to benefit from their presentations. Whether you attended the conference but missed the program, or couldn’t attend at all, these articles provide a great way to learn about what was covered.

Linked data was a hot topic this year, and people are continuing to digest the new BIBFRAME standard. Many other common topics emerged as well, including the ever-present issues of wearing many hats and working virtually as a team.

Preconferences

Statistics & Reports: Data Driven Decision Making

By Ginger Williams, Wichita State University

This preconference was held Friday, June 27, from 8:30am to 4pm.

Michael Levine-Clark, University of Denver, discussed how his library used data to resolve a campus controversy over just how much of the collection should be stored to increase seating. The library’s goals included establishing clear guidelines for storage, identifying criteria that could be used for future storage decisions, and providing an on-site collection to support all disciplines. The library discussed a variety of data with the faculty committee, including usage by classification and item age, ILL requests, and survey information. As the committee reviewed data, librarians continually asked about the goal of the on-site collection. Levine-Clark pointed out that data was essential, both to convince the Provost and to help faculty reach a compromise. For example, some faculty initially argued that the on-site collection should be humanities oriented, but were surprised to learn that humanities monographs were not the highest circulating part of the collection. Slides are available at http://www.slideshare.net/MichaelLevineClark/levineclark-michael-analyzi....

Beth Picknally Camden, University of Pennsylvania, discussed using technical services production data to make decisions about a fast-track replacement program, serials check-in, and returning books to vendors. Camden emphasized the need to consider multiple data sources, both quantitative and qualitative, when making decisions. She encouraged librarians to consider using samples when working with large data sets, to consider very short point-of-use surveys to collect data for specific needs, and to consider collecting some data for brief periods during the year to minimize the burden of data collection.

Beth Bernhardt, University of North Carolina at Greensboro, talked about using data to prepare for substantial materials budget reductions. Based on state budget projections, UNC-Greensboro prepared for three scenarios: a 15, 20, or 25 percent cut to the materials budget. The library began with target amounts for cutting books, serials, databases, and miscellaneous formats. After identifying potential cancellations from cost-per-use (e-journals & databases), overlap analysis (databases), reshelving (print journals), and circulation (continuations) data, library liaisons reviewed titles and requested faculty input; some titles were kept to support accreditation needs. The lack of usage statistics from some smaller publishers made justifying those titles difficult, while some high cost-per-use titles were kept because they were essential for small programs.
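
Cost-per-use analysis of the kind UNC-Greensboro ran boils down to simple arithmetic over exported usage data. As a rough illustration only (not the library's actual workflow; the titles, figures, and the $50 threshold below are invented), a few lines of Python can flag candidates for review:

    # Flag titles whose cost per use exceeds a review threshold.
    # All titles, costs, use counts, and the threshold are invented.
    journals = [
        {"title": "Journal A", "annual_cost": 4200.00, "uses": 310},
        {"title": "Journal B", "annual_cost": 1800.00, "uses": 12},
        {"title": "Journal C", "annual_cost": 950.00, "uses": 0},
    ]

    THRESHOLD = 50.00  # dollars per use

    for j in journals:
        # Guard against division by zero for titles with no recorded uses
        cpu = j["annual_cost"] / j["uses"] if j["uses"] else float("inf")
        flag = "REVIEW" if cpu > THRESHOLD else "keep"
        print(f"{j['title']}: cost per use = {cpu:.2f} ({flag})")

As the speakers stressed, a flagged title is a starting point for liaison and faculty review, not an automatic cancellation.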

Jeanne Harrell, Texas A&M, reviewed some of the library literature on data-driven decision making, then led a group exercise. Each group was given a brief scenario and challenged to identify ways that librarians could use data to assist in making decisions. Throughout the exercise, participants discussed the challenges of collecting data when both money and staffing are tight. While some use tools such as Scholarly Stats or WorldCat Collection Analysis, many participants said they teach staff to download data into a spreadsheet for analysis by librarians.

Throughout the preconference, speakers emphasized that data can help identify needs and persuade others, but that data must be analyzed in context.

Practical Linked Data with Open Source

By Jane Rosario, University of California, Berkeley

Everything You Ever Wanted to Know about Linked Data*

*But were afraid to ask

OK, maybe not everything, but plenty of practical, useful, and inspirational information was imparted by this workshop at ALA Annual. Organized by Theodore Gerontakos, University of Washington, and Sarah Quimby, Minnesota Historical Society, of the Linked Library Data Interest Group, this day-long workshop held Friday, June 27 from 8:30am-4pm promised, and delivered, a practical, real-world curriculum. It covered execution, implementation, and problem-solving through coding exercises, demonstrated uses of linked data with open source integrated library systems, introduced design patterns for problem-solving, and showed examples of how linked data is being used. You too can enhance your website, legacy data, and much, much more using linked data with open source software. It's entirely possible!

Dan Scott, Systems Librarian, Laurentian University, presented “Structured Data for Libraries: RDFa and schema.org,” an overview of using Resource Description Framework in Attributes (RDFa) and schema.org to enhance HTML description, and led the group in coding exercises. Some relatively simple upgrades to coding can make a website more findable and useable. (For slides, please see http://stuff.coffeecode.net/2014/lld_preconference/)
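
As a minimal sketch of the idea (not one of Scott's exercises; the record URI and values are invented), the statements that schema.org markup encodes are ordinary RDF triples, which Python's rdflib can model directly:

    # The kind of statements schema.org RDFa markup adds to a page.
    # The record URI and values below are invented for illustration.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    g.bind("schema", SCHEMA)

    book = URIRef("http://example.org/catalog/record/123")
    g.add((book, RDF.type, SCHEMA.Book))
    g.add((book, SCHEMA.name, Literal("Linked Data Basics")))
    g.add((book, SCHEMA.author, Literal("A. Librarian")))

    # In RDFa, the same statements live in the page's HTML via attributes
    # such as vocab="http://schema.org/", typeof="Book", property="name".
    print(g.serialize(format="turtle"))

The payoff Scott described is that search engines read those same attributes, which is what makes the page more findable.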

Galen Charlton, Manager of Implementation, Equinox Software, Inc., gave a talk entitled, “Hybridizing MARC: Using Linked Data in Your ILS Now,” urging us to “embrace incrementalism” by using linked data to enhance legacy MARC records. The shelf-life of a MARC record is years, if not decades, and conversion/improvement is a gradual process. He demonstrated integrating information in the Virtual International Authority File (VIAF) with Evergreen and Koha, both open source software for integrated library systems. This is a practical way of approaching implementation of linked data in the great mass of library metadata that currently exists. (For slides, please see http://connect.ala.org/node/225971)
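
A hedged sketch of that incremental approach, using Python's pymarc library (the file name and VIAF URI are placeholders, and whether $0 is the appropriate subfield is a local policy decision):

    # Enrich legacy MARC in place with linked-data URIs rather than
    # converting wholesale. The file name and VIAF URI are placeholders.
    from pymarc import MARCReader

    with open("records.mrc", "rb") as fh:
        for record in MARCReader(fh):
            for heading in record.get_fields("100"):  # personal name main entry
                if not heading.get_subfields("0"):    # no authority URI yet
                    # In practice the URI would come from a VIAF lookup
                    heading.add_subfield("0", "http://viaf.org/viaf/0000000")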

Richard Urban, Assistant Professor, Florida State University, introduced the concept of design patterns for Linked Open Data in Libraries, Archives, and Museums (LODLAM) in his talk, “Linked Data Patterns for Libraries, Archives, and Museums.” Design patterns are optimized solutions for common problems. In basic steps, one approaches design patterns by (1) giving the pattern a meaningful name, (2) stating the problem that this pattern resolves (this can be stated as a question), (3) describing the problem, then (4) describing the forces that shape the solution to this problem, and finally, (5) stating the solution to the problem. (For slides, please see https://dl.dropboxusercontent.com/u/3881880/LODpreConf.pdf)
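
Those five steps amount to a fixed template. Here is a rough sketch of the template as a Python data structure, filled in with an invented example pattern (not one of Urban's):

    # The five parts of a LODLAM design pattern, following the steps above.
    from dataclasses import dataclass

    @dataclass
    class LODPattern:
        name: str         # (1) a meaningful name
        problem: str      # (2) the problem it resolves, often as a question
        description: str  # (3) a fuller description of the problem
        forces: str       # (4) the forces that shape the solution
        solution: str     # (5) the solution itself

    # An invented example for illustration:
    literal_to_uri = LODPattern(
        name="Literal to URI",
        problem="How do we link free-text values to shared identifiers?",
        description="Legacy records store names and subjects as plain strings.",
        forces="Reconciliation is imperfect; vocabularies overlap.",
        solution="Reconcile each string against a vocabulary and store its URI.",
    )
    print(literal_to_uri.name, "->", literal_to_uri.solution)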

Jodi Schneider, ERCIM Marie Curie Fellow, Inria Research Centre, gave a talk entitled, "How Can Structured & Linked Data Serve Users?" She discussed the growth and wider impact of linked data, showing examples of linked data already in use in public libraries, personal email and calendars, and more. (For slides, please see http://connect.ala.org/node/226233)

The audience was encouraged to participate and contribute examples and experiences throughout. Along with the speakers, facilitators for the workshop included Kevin Ford, Development and MARC Standards Office, Library of Congress; Richard Wallis, Technology Evangelist, OCLC; and Sylvia Southwick, Metadata Specialist, University of Nevada, Las Vegas.

Saturday

International Developments in Library Linked Data: Think Globally, Act Globally

By Alice Pearman, YBP Library Services

Part one of this program, held from 8:30-10am on Saturday, June 28, featured three speakers. Approximately 160 attendees crowded the seats, the back wall, and the floor.

International Relations Committee Chair David Miller introduced the program.

Richard Wallis, Technology Evangelist from OCLC, spoke about entity-based data. He showed examples of different search engine results for the term "Mt. Everest" to demonstrate how search engines use knowledge graphs to pull together descriptive data on a search term.

For libraries, there’s great potential for users to find library resources from search engines, not from the OPAC. European libraries that have opened up library data using semantic web technology have found that up to 80 percent of traffic can come from a search engine.

Sharing our materials is the essence of library services. When we add our materials to WorldCat, WorldCat then publishes to syndication partners like Google Scholar, EBSCO, etc., and those partners link back to the libraries. But libraries should also consider linking out to authoritative hubs on their own. Those hubs are recognized and identified on the web.

What can a librarian do? Get your resources out there in the web of data. Contribute to WorldCat, and start adding WorldCat Works URIs in your data. Register your institution’s websites on schema.org.
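
As a hedged sketch of that last step (both URIs below are placeholders), schema.org's exampleOfWork property can assert that a local description is an example of a WorldCat Work entity:

    # Link a local catalog record to a WorldCat Work. Placeholder URIs only.
    from rdflib import Graph, Namespace, URIRef

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    g.bind("schema", SCHEMA)

    local_record = URIRef("http://library.example.edu/record/456")
    worldcat_work = URIRef("http://worldcat.org/entity/work/id/0000000")

    g.add((local_record, SCHEMA.exampleOfWork, worldcat_work))
    print(g.serialize(format="turtle"))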

Next, librarian Jodi Schneider provided a short tour of Europe through linked data. In Belgium, we can find FreeYourMetadata.org, as well as the recently published book Linked Data for Libraries, Archives and Museums (both EU and US editions are available). Both are excellent resources.

The Oslo Public Library in Norway uses RDF linked data as its core metadata format. But they haven't given up MARC; learn more about what they're doing on their blog.

The Digital Repository of Ireland developed a linked geographic authority resource for locations in Ireland, so that other libraries can link to those locations and the information can be mapped. They also offer thesaurus construction guidelines: apps.dri.ie/motif

And in France, an excellent French-language introduction to the semantic web, Le Web sémantique en bibliothèque, was recently published. The IFLA conference will be held in Lyon, France this year, and will include a satellite meeting in Paris titled "Linked data in libraries: let's make it happen."

The third speaker was Neil Wilson, who described the British Library experience with linked open library data. They began to offer open data as a service in 2010. Neil noted that government requirements for open public data have grown, both in Europe and in the U.S.

The British Library provides data in MARC via Z39.50 for libraries; for researchers, it can provide RDF/XML- and CSV-formatted data, so they can manipulate the data as they wish. More than 1,090 user organizations from 105 countries have used these services. The great benefit, of course, is that opening up data in this way improves access to knowledge and culture; the data links out to give the material greater scope and increase discoverability.

Neil also presented some lessons learned:

  • Communicate what the data is about. Explain what the data is and what it might be used for; provide examples.
  • Document and identify entities within your data: places, people, dates, etc.
  • Partner with researchers and developers - this builds a community of interest and is very effective.
  • Be aware that data conversions often highlight old issues and create new ones.
  • Know how to capture usage of the data and be able to demonstrate its value. You must do this to maintain funding and show the value in your work.
  • Make sure that ever-evolving expectations are met. Government requirements, etc. tend to change.

Slides from part one are now linked in the Conference Scheduler: http://ala14.ala.org/node/14393

Part two of the program was held in a different room, from 10:30-11:30am. While the room wasn't as jammed, it was relatively full, mostly with attendees from part one of the program.

Gordon Dunsire continued the linked data discussion with a presentation about RDA. He noted that libraries have many bibliographic standards, all compatible with a linked data format.

The RDA Registry site includes many resources: an element set; unconstrained properties for describing objects outside of the FRBR world; different data structure maps; and examples of their use. It can be difficult to make sense of it all. For example, many different schemas have an "audience" element. Unconstrained schemas, such as unconstrained RDA, can create a machine-actionable map among all the similar elements of the different schemas. (His presentation includes an excellent flow chart of the concept.) It's exciting, but not without risks; we need to do a better job of defining the terminology. A resource intended for an "adult" audience might mean one thing in one country, with its associated laws and norms, and something quite different in another.

These issues need to be addressed, and generally there are two ways to do this: top down, or bottom up. In the top-down method, the top organization examines the local schemas, identifies common elements, and refines the global element set to connect down to specific local elements. The issue is that common elements may not cover the same space as local elements; geographic understanding and terminology might be very different, and can be very hard to reconcile. Dunsire believes IFLA attempted this method, and that it failed.

The bottom-up method: publish your local schema(s) in RDF, then map your local elements from the different schemas to the lowest common elements, as best you can. This method also has its challenges; chief among them, it produces an enormous multiplicity of element sets.
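
A minimal sketch of that bottom-up mapping (all namespaces here are invented examples): declare each local "audience" element a subproperty of an unconstrained common element, and a single SPARQL 1.1 property-path query can then reach data recorded in either schema.

    # Map two local "audience" elements up to an unconstrained element,
    # then query across both at once. All namespaces are invented.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDFS

    A = Namespace("http://example.org/schemaA/")
    B = Namespace("http://example.org/schemaB/")
    U = Namespace("http://example.org/unconstrained/")

    g = Graph()
    g.add((A.intendedAudience, RDFS.subPropertyOf, U.audience))
    g.add((B.targetReader, RDFS.subPropertyOf, U.audience))
    g.add((URIRef("http://example.org/item/1"), A.intendedAudience, Literal("adult")))
    g.add((URIRef("http://example.org/item/2"), B.targetReader, Literal("juvenile")))

    q = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?item ?aud WHERE {
      ?item ?p ?aud .
      ?p rdfs:subPropertyOf* <http://example.org/unconstrained/audience> .
    }
    """
    for row in g.query(q):
        print(row.item, row.aud)  # finds both items despite differing schemas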

Reinhold Heuvelmann, German National Library, closed the session with a practical discussion about BIBFRAME. He sees BIBFRAME as the point where MARC and linked library data meet. He displayed graphs of where WEMI (work/expression/manifestation/item) meets BIBFRAME, a WEMI BIBFRAME profile.
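
For a sense of the vocabulary's shape, here is a minimal sketch (not from the talk) of a BIBFRAME 1.0-style Work and Instance pair; the URIs are placeholders and the property choices are illustrative:

    # A BIBFRAME 1.0-style Work/Instance pair, using the 1.0 vocabulary
    # namespace. URIs and property choices are illustrative placeholders.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    BF = Namespace("http://bibframe.org/vocab/")
    g = Graph()
    g.bind("bf", BF)

    work = URIRef("http://example.org/work/1")
    instance = URIRef("http://example.org/instance/1")

    g.add((work, RDF.type, BF.Work))
    g.add((instance, RDF.type, BF.Instance))
    g.add((instance, BF.instanceOf, work))  # the Instance realizes the Work
    g.add((instance, BF.title, Literal("An Example Serial, Print Edition")))
    print(g.serialize(format="turtle"))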

The Library of Congress released BIBFRAME Vocabulary 1.0 in January 2014. It should be stable this year, and will be updated in 2015. A great many papers and specifications are being written: BIBFRAME Resource Types (June 2013) (http://bibframe.org/documentation/resource-types/); The Relationship Between BIBFRAME and OCLC's Linked-Data Model of Bibliographic Description (June 2013) (http://oclc.org/content/dam/research/publications/library/2013/2013-05.pdf); and also papers on authorities, relationships, and profiles that were published later in 2013.

But, how to use it?

1. LC has released an open source BIBFRAME Editor; as it stands, a programmer may need to help you get this running.

2. There is a "BIBFRAME Prototype @ DNB": basically a way of entering your data in a traditional way and then, with a click, converting that data to the BIBFRAME format. It's not perfect by any stretch; the prototype misses some relationships.

3. Project Libhub (libhub.org): a group of institutions working together to convert their data to BIBFRAME collectively rather than individually. This could be advantageous, since everyone is learning and working together.

He then discussed the caveats of BIBFRAME. Do we need yet another standard? Do we seriously need another change so soon after implementing RDA? How does BIBFRAME relate to other initiatives, like schema.org? What are the applications for end-users? And is there a standardization process?

Reinhold closed with an amusing prediction of the future that drew many chuckles from the audience, including a 2025 proposal to re-insert ISBD punctuation and a new schema developed for olfactory description. By 2060, the last MARC-based ILS is finally shut down, followed by Earth's first warp flight by Zefram Cochrane, and BIBFRAME is able to capture both reason and emotion, data both human and Vulcan: a funny shout-out to all Star Trek fans.

Slides from part two are now linked in the Conference Scheduler: http://ala14.ala.org/node/14381

Metadata and Indicators for Open Access

Two perspectives offered by Kelsey Brett and Wendolyn Vermeer

By Kelsey Brett, University of Houston Libraries:

As more materials are made openly available on the web, the need to standardize metadata practices for open access resources grows increasingly important. This program, hosted by the Metadata Interest Group on Saturday, June 28, 2014 at 10:30am, sought to educate the audience about principal initiatives from Jisc and the National Information Standards Organization (NISO) to standardize the description of open access resources.

Ben Showers, head of scholarly and library futures at Jisc (formerly known as JISC, the Joint Information Systems Committee), spoke on two recent Jisc initiatives to standardize metadata and develop a controlled vocabulary for research output. RIOXX is a metadata framework that encourages consistency in the metadata used for research. Vocabularies for Open Access (V4OA) is a project to establish a standard vocabulary for use in an open access context. V4OA focuses on five areas: embargoes, rights/licenses, open access indicators, article processing charges, and versions. Once V4OA is fully developed, it will be included in the RIOXX framework. Consistent metadata is necessary in order to facilitate systems interoperability, track and assess research outputs, and report information from institutions to funders. Additionally, these initiatives will help researchers comply with funder mandates and monitor publication charges, which are increasingly important as institutions invest more in open access publishing. More information can be found at rioxx.net and v4oa.net.

Nettie Lagace, associate director for programs at NISO, spoke on a recent NISO initiative to develop recommended practices for open access metadata and indicators. Open access indicators are needed due to the increase in open access materials, research funder mandates, the variety of different rights associated with open access resources, and the increase in hybrid journals (subscription journals with open access portions). The NISO working group, made up of librarians, publishers, and systems vendors, recommended including two data tags for all open access items. One is an open access indicator with the label <free_to_read>. The other is a stable URI which points to human-readable license terms, with the label <license_ref>. The NISO recommendations were released for public review in January 2014. The group is currently addressing comments and finalizing the forthcoming recommended practices. The responsibility for implementing the recommended practices will fall on publishers, aggregators, and content providers. Implementing these practices will benefit readers, authors, publishers, search engines, academic libraries, and research funders as they navigate the complexities of open access resources.

Organizations like Jisc and NISO are responding to a complex, and oftentimes frustrating, open access environment. Researchers and funders are coming to value open access publishing, and more information is being made freely available on the web. Future compliance with initiatives such as these will enable libraries to effectively provide access to this new model of information.

By Wendolyn Vermeer, California State Polytechnic University, Pomona:

Presenters Ben Showers, Head of Scholarly and Library futures at Jisc, and Nettie Lagace, Associate Director for Programs at the National Information Standards Organization (NISO), introduced the important tandem initiatives of RIOXX and V4OA.

Ben Showers spoke about the need for a consistent metadata schema around research papers produced in the UK (RIOXX), and a standardized language to populate those fields. UK institutions are accountable for tracking and reporting on research funded by grants, and a number of them have been customizing schemas around their own needs rather than using the out-of-the-box schemas provided with various Current Research Information Systems (CRISs). With the increasing customization of such systems, it has been difficult to track outputs across different scholarly systems, and for institutions and grant funders to "capture and assess the nature and scale of Open Access 'transactions'."

Enter V4OA – Vocabularies for Open Access. The project received input from key stakeholders around OA metadata spaces, including UKSG, SPARC Europe, CrossRef, ARMA, and many other important UK organizations (see v4oa.net for the complete listing). The consultation resulted in five key areas for the vocabulary to address:

  • Embargoes (derived from License)
  • Rights/Licenses (LicenseRef)
  • Open access identifier (NISO – “Free to Read” tag)
  • Article Processing Charges (APCs)
  • Versions (NISO)

The embargo element strives to be as pragmatic as possible, capturing the publication date and available date. Rights capture the URI that points to the terms and conditions. The OA indicator tag is to be maintained by publishers, and is part of the work that NISO is currently undertaking. APCs are critical in the UK to meet government reporting mandates. Lastly, versions are still undergoing work, and the group hopes to implement NISO standards for this area.

The RIOXX Application Profile will be the framework for V4OA, with the next version to be released at the end of July 2014. Software plugins for ePrints and DSpace are in development, and repositories hope to implement them in 2014/15. Jisc will provide implementation support in the UK, as research institutions are mandated to demonstrate their research impact by 2020. Finally, the Jisc Monitor tool (http://jiscmonitor.jiscinvolve.org/) will be a shared research reporting service to facilitate efficiencies and interoperability between researchers, institutions, publishers, and funders.

Nettie Lagace spoke to NISO’s involvement with the project. The growth of open access, paired with increasing funder reporting mandates, and more frequent instances of hybrid OA and traditionally licensed works, has resulted in “lots of OA papers with different associated rights and responsibilities.” This landscape can be confusing for researchers, authors, institutions, publishers, funders, search engines, libraries, and readers alike.

A NISO Working Group was formed about a year ago to achieve several objectives, including:

  • Develop a metadata format to describe the readership rights associated with a single scholarly work (a chapter, an article, or other single entity)
  • Recommend mechanisms for publishing and distributing this metadata
  • Report on the feasibility of including downstream re-use rights and, if feasible, include them in the project
  • Report on how the adoption of the outputs would answer (or not) specific use cases, to be developed by the Working Group

Membership of the group is well-rounded, comprising OA initiatives, libraries, publishers, and research organizations (see http://www.niso.org/workrooms/oami/ for the complete roster). Early on, the group abandoned the politically fraught label "open access" for the <free_to_read> tag, which requires minimal metadata (a "yes" or "no" to the question of whether the work can be read or viewed by any user without payment or authentication), and can optionally include embargo dates. Such terminology is mechanical and factual, and avoids the pitfalls of OA modes or "flavors." The group briefly considered a logo for this tag, but decided that the design and implementation could be left up to the user to display based on the indicative metadata.

The group also looked at the <license_ref> tag, which would be a stable identifier in the form of an HTTP URI. The terms would be human and/or machine readable, and the tag repeatable if different terms are in effect at different periods of time. There was some discussion about machines making decisions based on license tags (e.g., what to display to the user based on the tag).
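
As a rough sketch of how the two tags might sit in an article's metadata (the attribute names, dates, and license URI are illustrative only; consult the final NISO recommended practice before implementing):

    # Attach the two indicators to a stand-in article metadata element.
    # Attribute names, dates, and the license URI are illustrative.
    import xml.etree.ElementTree as ET

    item = ET.Element("item")  # stand-in for a publisher's article record

    free = ET.SubElement(item, "free_to_read")
    free.set("start_date", "2015-01-01")  # e.g., end of an embargo period

    lic = ET.SubElement(item, "license_ref")
    lic.set("start_date", "2014-07-01")   # when these license terms take effect
    lic.text = "http://creativecommons.org/licenses/by/4.0/"

    print(ET.tostring(item, encoding="unicode"))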

It was determined that publishers, aggregators, and content providers would be responsible for providing and distributing the metadata as a regular part of the editorial process. Such metadata could even be included in tables of contents and journal alerts.

When the working group's draft report went to the public for review and comments, it received more responses than any NISO document Ms. Lagace has ever seen, with 130 individual points of input from the community (the usual count being 20 or 30). Changes have been made since the initial draft; auxiliary statements have been added about who has the legal responsibility of maintaining the license terms. Some technical work is underway to create a namespace and sample code to educate publishers and others who will implement the data.

NISO approval is expected in the next few weeks, at which point the oversight committee is expected to sign off. The document is not a NISO standard but a recommended practice, allowing it to be used in fast-moving, fluid areas, and to be adopted in full or in part.

Discussion with audience members included the evolution of systems to read the new tags, examples of the new vocabulary on the web, and the desire for link resolvers to click through to OA articles in a hybrid journal instance.

Taking Action: Linked Data for Digital Collection Managers

By Zora Breeding, Vanderbilt University

On Saturday, June 28 at 1pm, Cory Lampert, Head, Digital Collections, and Silvia Southwick, Digital Collections Metadata Librarian, both at University of Nevada, Las Vegas (UNLV), gave a very well-received presentation on their implementation of linked data for their digital collections at UNLV. Through the use of visualization tools and discussion of methodology, they showcased their library’s effort to transform their CONTENTdm metadata into linked data. Their goal was to encourage and inspire audience participants to take away ideas on how to work to “free” our own data.

Our data is currently encapsulated in records, which are contained in "silos" such as our OPACs or digital collections databases. Links between collections have to be created manually and do not express relationships. Our catalogs and digital collections might be on the web, but the data within them is trapped, our rich metadata is being lost, and our local data is not exposed to the web for searching. Linked data is the future: we must make our metadata machine-readable and available on the web. Cory and Silvia used the 5 Star Data model (http://5stardata.info/) to evaluate their data. They showed how the use of triples (subject-predicate-object) and URIs (uniform resource identifiers) had transformed searching of their Costume Designs for Showgirls Collection, allowing the searcher to make connections in their data that would never have been anticipated or possible with static metadata.

There is no formula for starting a linked data project – you might as well just do it! The starting point is to deconstruct your metadata as triples. Adding URIs to controlled vocabularies is essential for creating triples and making data searchable on the web. UNLV used OpenRefine (http://openrefine.org/) to clean their existing data, applying shared controlled vocabularies across their collections and adding URIs. For assigning vocabularies to express triples, they started with the Europeana Data Model as their framework. They used multiple sources for URIs, including Library of Congress. Once triples were created, they exported their RDF (Resource Description Framework) files to Mulgara triplestore (http://www.mulgara.org/) and used SPARQL query language for searching. Another open source tool they used for extracting and visualizing relationships in RDF data was RelFinder (http://www.visualdataweb.org/relfinder.php). Cory and Silvia stressed the importance of using existing sources of URIs when possible and publishing your data as open linked data in a shared data space so that you improve discoverability and connections with other related data sets on the web.
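
A minimal sketch of those basic moves in Python with rdflib (the item, vocabulary, and authority URIs are invented; UNLV's actual vocabulary choices follow the Europeana Data Model): deconstruct a record into triples with URIs, then query it with SPARQL.

    # A digital collection record deconstructed into triples, then queried
    # with SPARQL. All URIs here are invented placeholders.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC

    EX = Namespace("http://example.org/vocab/")
    design = URIRef("http://example.org/showgirls/design/42")

    g = Graph()
    g.add((design, DC.title, Literal("Costume design, finale number")))
    # A URI from a shared authority file replaces a free-text name
    g.add((design, EX.designer,
           URIRef("http://id.loc.gov/authorities/names/n00000000")))

    q = """
    SELECT ?title ?who WHERE {
      ?d <http://purl.org/dc/elements/1.1/title> ?title ;
         <http://example.org/vocab/designer> ?who .
    }
    """
    for row in g.query(q):
        print(row.title, row.who)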

Real Leaders in a Virtual World: Tools and Strategies for Success

By Alice Pearman, YBP Library Services

Approximately 40 people attended this session, held Saturday, June 28 at 1pm. The program was organized around a panel of speakers with experience working on virtual committees and teams, including Anne McKee, Program Officer for Resource Sharing, and Executive Director Joni M. Blake, both of the Greater Western Libraries Alliance (GWLA); Betsy Appleton, Electronic Resources Librarian, George Mason University; and Jennifer Duncan, Head of Collections, Merrill-Cazier Library, Utah State University.

Joni Blake explained that GWLA comprises 33 research institutions, nearly all located west of the Mississippi River. Most people who work on committees together never have the chance to meet in person. Over the years, GWLA has developed a variety of techniques for encouraging people to work together virtually, which helps move projects forward. Many of the suggestions are applicable to any working group, whether in person or virtual, but because virtual groups can be so much more difficult to manage, it makes a big impact when the group follows best practices. The suggestions below come from all the members of the panel:

  • Have a clear agenda, with responsible parties for each section clearly noted.
  • When leading a virtual meeting, remember that participants will have a lot of distractions around them. The meeting needs to hold the attention of the attendee.
  • If you’re a participant in a virtual meeting, find something to fidget with so that your hands are occupied, but your mind is clear and focused on the meeting.
  • The project leader should talk informally with group members before formalizing meetings. Gauge their interest level; is there strong interest? Does the project provide value to this person’s organization? Knowing this information before getting everyone together is really helpful, since you won’t get the non-verbal cues during your group meetings that you would normally see face to face.

Project management can happen outside the virtual meeting room in a variety of ways; discussion lists and Basecamp were both mentioned as common tools. Anne McKee noted it has also proved valuable to socialize virtually outside of the meetings. Let other members of the group know about life events, like marriages, babies, and so on. Become Facebook friends. Getting to know the team on a personal level really helps, especially when you'll be working together for an extended period of time.

Ultimately, if it’s at all possible to meet virtual team members in a socialized, face-to-face setting, do it. Getting to know people face to face makes a huge impact on how you work together virtually.

The presentation closed with a short discussion about technology. Practically speaking, virtual meetings rely heavily on the attendees having up-to-date hardware and software. Technical mishaps continue to be the norm, not the exception. Most face-to-face virtual platforms such as Skype or Google Hangouts work well for up to four or six participants, but beyond that, the quality degrades. Jennifer Duncan noted that audio-only meetings can be improved greatly simply by investing in a good microphone.

It was clear that while technology has made it possible to communicate in a virtual face-to-face environment, there are still a lot of common pitfalls. Until technology improves to lessen these issues, it’s important to rely on the basics of a well-organized meeting, and be aware that a lot of work can be done via email or other messaging between meetings.

Technical Services Collaboration through Technology

By Rebecca Nous, University at Albany, State University of New York

Technical Services Collaboration through Technology was held on Saturday, June 28 at 1pm, and featured three speakers discussing their experience with using various technologies to facilitate communication and collaboration.

Jesse Koennecke, Cornell University, described using several different programs in Cornell's 2CUL partnership with Columbia University. Through the 2CUL partnership, Cornell and Columbia are collaborating on collection development, cataloging, and managing electronic resources. They have begun a number of collaborative initiatives, including coordinating approval plans, shared cataloging in Korean, a POOF (pre-order online form) for selectors, RDA training, and a joint working group for managing electronic resources called LERWG (Licensed Electronic Resources Working Group), and they have started implementing some common tools to facilitate these projects.

Cornell and Columbia are about four hours away from each other, so maintaining good lines of communication can be a challenge. WebEx and Polycom are two services they use to hold virtual meetings in addition to their actual face-to-face meetings. They also use the Basecamp project management program to keep track of ongoing projects, as well as Confluence, a wiki, to share documentation, meeting notes, and project proposals.

For more information on 2CUL, see: http://2cul.org/.

Debra Andreadis, Denison University, discussed using several tools to facilitate collaboration with multiple Ohio library consortia, including CONSORT and Five Colleges of Ohio. Their joint projects include shared ordering and cataloging for monographs, shared institutional repositories (DSpace and CONTENTdm), and shared grant-funded projects.

In addition to meetings and email lists, they are using WordPress to manage a Mellon Grant-funded digital projects initiative. Google Hangouts provides a chat forum, and they share a wiki and Google Drive documents across multiple projects. The consortia also use Basecamp for project management.

Charlene Rue, BookOps, detailed many tools that New York Public Library and Brooklyn Public Library use for their collaborative efforts in their BookOps initiative. They are coordinating many aspects of their library operations, including selection, cataloging, physical processing, and sorting. In their first year, BookOps has an impressive record of accomplishment, including winning the 2013 ULC Top Innovator Award, realizing over $3 million in cost savings, and facilitating the Governor’s Island Uni Project.

To foster collaboration, they are using Google Drive to share documents and presentations, as well as Google Hangouts to interact with each other. Training and meetings are held using Skype. To help coordinate material selection, they are sharing carts from their vendor websites with each other, as well as CollectionHQ, a collection management tool. For troubleshooting, they are using ServiceNow, an online ticketing system. The sorting process has been automated using a Lyngsoe Systems material sorting system.

For more information on BookOps, see: https://sites.google.com/a/nypl.org/bookops/

Holdings Information Forum

By Selina Lin, University of Iowa Libraries

The 2014 ALA ALCTS/CRS Holdings Information Committee Forum was held on Saturday, June 28 at 3pm. Approximately 44 people attended.

There were three speakers: Terry Reese (reeset@gmail.com), creator of MarcEdit and head of Digital Initiatives at Ohio State University; Regina Reynolds, Director, U.S. ISSN Center, Library of Congress; and Joshua Pyle, CTO and President of DublinSix Digital Library Service.

Terry Reese gave a virtual presentation with a pre-recorded slide show on MarcEdit and OCLC integration. MarcEdit has been around since 2000, but has since moved beyond working with MARC data. Collaborations with OCLC include: a classify service for automatic call number generation; integration with OCLC's FAST API for subject assignments; a plugin that allows MarcEdit to extract and reimport data from OCLC Connexion local save files for easier batch editing; and integration with OCLC's Metadata API.

Terry walked the audience through a step-by-step demonstration of how to use MarcEdit with OCLC Connexion to create, read, and update bibliographic records, to update and delete institutional holdings, and to retrieve holdings code information about an institution.

All the operations happen in real time, but some keys have limits on the type of data they can edit. Moreover, MarcEdit does not offer record validation, nor does it involve authority data.

To use MarcEdit's OCLC integration, one must get a key from the OCLC Developer Network (http://oclc.org/developer).

Regina Reynolds talked about BIBFRAME and the future of continuing resources. She believes the future of continuing resources is still evolving: more databases (purchased packages, open repositories), blogs, and data sets. On the other hand, print newspapers and magazines are disappearing rapidly, replaced by a greater variety of online sources.

So one might ask: Will social media become part of the scholarly record? Will most web resources simply become data streams? What has NOT changed--yet? (The academic environment: “publish [in a journal] or perish”). All these forces of change will act on continuing resources, and therefore on ISSN.

She then talked about ROAD (Directory of Open Access scholarly Resources: http://road.issn.org/), a service under development by the ISSN International Centre (ISSN IC) to provide free access to a subset (journals, conference proceedings, and academic repositories) of the ISSN Register. Currently, ISSN data is documented in individual MARC records, and therefore does not interoperate on the web; ROAD, however, will output RDF triples, a "web language." Compared to MARC, now more than 40 years old, BIBFRAME is designed for a linked data environment and uses RDF triples, which enable library data elements to interact on the open web and express relationships.

When it comes to BIBFRAME and serials, there are more questions than answers, such as: Can serials be treated as a form of aggregate work? Can serials be treated as collections? How to handle serial title changes? How to handle other serial relations? But with BIBFRAME and linked data, there are many future possibilities.

Because the FRBR model does not work for serials, and BIBFRAME has not yet developed a workable model for them, a working group consisting of representatives of the ISSN IC, the ISSN Review Group, and the Bibliothèque nationale de France (BnF) has developed PRESSoo, an extension of FRBRoo (the object-oriented formulation of Functional Requirements for Bibliographic Records), to propose answers to issues related to serials and continuing resources. PRESSoo has modeled all the serial relationships in the ISSN manual (see http://www.issn.org/wp-content/uploads/2014/02/PRESSoo_1-0.pdf).

Expressing optimism, Reynolds summed up her presentation with a JFK quote: “Change is the law of life. And those who look only to the past or present are certain to miss the future.”

Further information on BIBFRAME: www.loc.gov/bibframe ; http://bibframe.org/

Josh Pyle discussed The Auditor, the online circulation management software that provides usage logging, holdings verification and performance monitoring for libraries’ e-resources. Mr. Pyle used the example of Claremont College Library’s online catalog to demonstrate how the Auditor works. Installed on an institution’s network, the Auditor passively monitors all network traffic and makes libraries aware of every form of content that is being used and presents this information in COUNTER-style reports. The Auditor’s reports include content from every publisher on every platform, and also reports on content from open access or free information sources. Sophisticated click-stream analysis infers library holdings in real-time. Unexpected access denials, including broken links, are detected immediately. For more information, see: http://www.dublinsix.com.

Metadata Beyond the Library: Consultation and Collaboration with Faculty, Staff and Students

By Linda Smith Griffin, Louisiana State University Libraries

On Saturday, June 28, at 4:30pm, two presenters discussed Metadata Beyond the Library. Jason Kovari, Web Archivist and Metadata Librarian, Humanities & Special Collections, Cornell University, and Lisa McFall, Metadata and Catalog Librarian, Hamilton College, were both speakers.

Throughout the history of libraries, catalog and metadata librarians have used their knowledge and expertise to provide bibliographic access to the world's information. This work is traditionally performed in non-public areas of libraries, commonly referred to as "the back room." Today, with the explosion of digital resources and mounting initiatives to provide access to them, technical services librarians are expanding beyond their traditional roles by serving as consultants and collaborators on university scholarly communications and digital projects initiatives. Kovari's and McFall's presentations are excellent demonstrations of how technical services librarians, particularly metadata librarians, are playing significant roles in working with faculty, students, and clients outside of the university. Outreach efforts and collaborations such as theirs have greatly increased their libraries' institutional worth and heightened their visibility within and outside of the university.

Lisa McFall began the session by discussing her role as metadata consultant supporting the Digital Humanities Initiative (DHI) at Hamilton College, a small liberal arts college in central New York. She conveyed how libraries can use their current infrastructures to make a case for involvement in digital humanities initiatives on campus. For twenty minutes, McFall walked attendees through the DHI metadata process at Hamilton College. As one of six members of the DHI Collection Development Team, she works collaboratively with the team to brainstorm creative solutions that support innovation in faculty research and analysis. As a consultant, she meets with the faculty, students, and DHI co-director to help develop their understanding of the role of metadata in their archives. She poses questions, such as who the primary target audience for a collection is and how experts in the subject are envisioned interacting with it, as a way to broaden the faculty's knowledge of various metadata schemas (Dublin Core, MODS, etc.), controlled vocabularies (AAT and LCSH), and the use of customized controlled vocabularies that will make their collections discoverable by various web-based programs. Other services McFall offers to support metadata creation include wikis and built-in support on best practices for metadata, and she is available to field questions from faculty and students. She fosters connectivity beyond the DHI by conducting classroom lectures and holding one-on-one consultations with students, and she has plans to implement metadata training modules. McFall concluded her presentation with several slides of useful web resources to consult for additional information and training.

Jason Kovari shared Cornell's pioneering history of providing services that address the entire digital life cycle. He acknowledged the similarities between his metadata and consulting work and McFall's, but he fashioned his talk to include examples of collaborations with clients not affiliated with the university. This work, as well as work with campus partners, is performed by embedding metadata services into Cornell's Digital Consulting & Production Services (DCAPS). According to Kovari, DCAPS is a forward-facing service group composed of a team of experts that provides a single point of service for all needs surrounding digital media production, copyright, metadata, web services, and scholarly publishing in general. DCAPS charges fees for some of its services on a cost-recovery basis. Kovari referenced two digital humanities projects outside of the university. The first was Freedom on the Move, a database of fugitives from North American slavery. This project identifies runaway slave ads found in newspapers, which provide a great deal of information about the period, and some elements of the project allow crowdsourcing, enabling the general public to participate in creating the database. The second example was Kathmandu, the first cross-country analysis of community groups in Nepal. Kovari's role as metadata librarian on both projects gave him the opportunity to collaborate with institutions and individuals outside of the university and the country. He travelled to Nepal on behalf of DCAPS to provide consultative services, lending guidance on preserving the digital materials that came out of the Kathmandu project, and he provided the description and structuring of the directories. He ended by mentioning his involvement in the 2CUL collaboration between Cornell and Columbia, which is designed to see how academic libraries can work together to provide content, expertise, and other services. McFall's and Kovari's work has launched them from invisible technical services roles performed in the back rooms into the forefront of library services, thereby increasing the overall institutional worth of their libraries.

ALCTS Affiliates Showcase

By Arthur F. Miller, Princeton University Library

On Saturday, June 28, at 4:30pm, ALCTS held another excellent Affiliates Showcase. The Affiliates Showcase gives outstanding programs presented by ALCTS affiliate members at state, regional, or other local conferences and meetings a second life at ALA.

The first presentation was by Amber Billey, Catalog/Metadata Librarian at the University of Vermont, Bailey/Howe Library, presenting “Time Management for Wearing Many Hats.” She explained that like most librarians she has to wear more and more hats as time goes on, and everyone is asked to do more with less. Among other duties, Amber mentioned that she serves on seven committees in addition to her regular library work.

To manage all of her different responsibilities, she created a time-management agenda organized around practical tips and work-life balance.

PRACTICAL TIPS (some of these are specific to cataloging needs)

  • PRIORITIZE – This requires contextual decision making: needs of users; resource type; length of time in backlog; fun factor (what the cataloger enjoys).
  • ORGANIZE – Create to-do lists (lots of them) and organize them into quarterly, weekly, and daily lists, then sort them with her wonderful quadrant chart:

                        IMPORTANT    NOT IMPORTANT
        Due soon            1              2
        Not due soon        3              4

  • REMAIN FLEXIBLE – Reactive procedures; proactive responses; listen; adapt.
  • KNOW YOUR LIMITS – Know what is your job; know what is NOT your job; know when to say no; know when to say yes.

WORK-LIFE BALANCE – We work to live.

Cyrus T. Ford, Special Formats Librarian, University of Nevada, Las Vegas, gave the second presentation, "Streaming Videos: A Quick View from Ordering to Cataloging and Access." He started by giving a brief explanation of streaming media as moving images or audio that are compressed so they can be played without being downloaded and saved first. He went on to discuss live streams (Internet television) versus on-demand streams (Films on Demand).

Cyrus argued that, properly used, on-demand streaming has many advantages for libraries: no wear and tear on equipment or DVDs/CDs; better pricing; intellectual property protections built in; and suitability for distance education, among others.

He then discussed points to be considered by any library that is moving toward streaming media. First is the licensing. Check it carefully! You do not want to agree to restrictions that undercut your use. Then there is the quality of the files and the hosting; problems here can undercut usability. Capacity and duration should be examined as well.

Cyrus also went over the services you need to examine before signing a contract. Among them are their licensing, file backup, technical support and of course maintenance. Also consider if the files are delivered from a remote server or from a local server.

He then moved on to a quick discussion of licensing models, ranging from Flat Fee In-Perpetuity licensing through Pay-Per-View Model (his preference). Cyrus briefly reviewed what he felt were the strengths and weaknesses of these models.

He then listed several sources of free streaming media, such as Oyez (www.oyez.org) and YouTube EDU (www.youtube.com/edu), as well as several subscription sites, Alexander Street Press (www.alexanderstreet.com) and Lynda (www.lynda.com) among them.

He finished with a sample catalog record for a Frontline program on President Obama.

Sunday

Care of Borrowed Special Collections: Playing Nice with Other People’s Toys

By Jessica Phillips, University of North Texas

On Sunday, June 29, at 8:30am, a team of speakers from Wayne State University and the University of Michigan presented some of the complexities of borrowing and exhibiting Special Collections from other institutions, using their own collaborative experiences as a case study. In 2011, three institutions, Wayne State University Library System, the Cohn-Haddow Center for Judaic Studies at Wayne State University, and the University of Michigan’s Special Collections Department, began discussions to exhibit materials from the University of Michigan’s Jewish Heritage Collection at Wayne State University. Presenters were Cynthia Krolikowski, Special Collections Coordinator for Wayne State University, Martha O’Hara Conway, Director of Special Collections at the University of Michigan Library, Rachael Clark, Wayne State University Library, and Mike Hawthorne, Associate Director of the Library at Wayne State University.

There had been interest in creating a reading room for Special Collections in the library at Wayne State University since 2008, but it did not come together until hosting this exhibit, Judaism in the American Home, made it a necessity. Wayne State University Libraries overcame many challenges in preparing a space suitable for exhibiting the objects on display. These included creating detailed environmental reports, ensuring security and insurance for the materials, arranging display cabinets to best allow for traffic flow through the room, providing UV filters for lights, and supplementing existing lighting with LEDs placed within the display cabinets.

For the Judaism in the American Home exhibit, Wayne State University had three people with different areas of expertise travel to the University of Michigan to begin selecting appropriate materials. As items from the University of Michigan's collection were still being cataloged and conserved, there was some flux in terms of what materials were available to Wayne State University. The staff members selecting materials from Michigan's Jewish Heritage Collection would photograph and document dimensions for each piece they considered putting on display, and would later present their recommendations to a selection committee at Wayne State.

The University of Michigan had eleven pages of documents, in addition to a request-to-borrow form and pre- and post-loan condition evaluation forms, which needed to be completed in order to ensure a successful loan of their materials. As part of the pre- and post-loan condition evaluation forms, each item was photographed and any pre-existing damage noted. Staff from Wayne State University personally oversaw the transfer of materials to their library, and many photographs were taken again throughout the unpacking process as Wayne State University Libraries staff checked the objects against the condition evaluation forms to ensure that no damage had occurred during transit. Materials were photographed and checked against the evaluation forms yet again before their return to the University of Michigan.

The speakers highlighted some lessons learned from this collaborative project: recognizing that libraries probably do have space (even if it isn't realized yet) to display special collections materials, asking for all of the lender's rules, regulations, and forms in advance, allowing the experts to do their jobs, measuring twice for space requirements, considering all environmental hazards in the display area, being prepared to change course as necessary, documenting physical condition, and planning ahead.

A LibGuide for this exhibit is online at http://guides.lib.wayne.edu/JudaicaExhibit?hs=a

Ebooks: Discovering the Virtual Backlog

By Elise Y. Wong, Saint Mary's College of California

The speakers for this program, held Sunday, June 29 at 10:30am, were Sommer Browning, Head of Electronic Access & Discovery Services, Auraria Library, University of Colorado Denver, and Rhonda Glazier, Director of Collections Management, Kraemer Family Library, University of Colorado Colorado Springs. Their presentation elucidated the complexity of ebook workflows and identified the problems of the hidden virtual backlog of ebooks. The presentation also offered tips for catalogers on how to fix these problems and create an efficient workflow in technical services.

Some of the problems resulting in the virtual backlog are inaccurate title lists, front files, unavailable or poor-quality MARC records, and pre-published chapters. One common cause of these problems is the murky waters between acquisitions and cataloging. The workflow for ebooks differs from that for traditional books: the handoff between acquisitions and cataloging is often not clear, and there can be a lot of back and forth between the two departments. The immediate availability of ebooks is delayed when cataloging needs to go back to acquisitions for further clarification. Ebook access issues (e.g., notification, licensing, ownership, tracking, and troubleshooting) also affect the workflows of both departments. In some cases, there is no clear understanding as to which responsibilities are delegated to each staff member. Miscommunication invariably impacts productivity in technical services.

Several strategies to resolve the virtual backlog and improve communication (and collaboration) were presented. Communication should be addressed at every level: selectors, acquisitions, electronic resources, cataloging, public services, and patrons. Documentation of ebook processing should be established, with each workflow and the cross-departmental responsibilities clearly assigned, from ordering, tracking, notification, receiving, and cataloging through live access. To avoid or minimize backlog and delay, technical services staff need to create to-do checklists and schedule reminders on their calendars. Tracking of order notifications, access issues, and problems should be centralized to minimize confusion. To increase efficiency, cataloging could develop procedures to handle various types of ebook packages. In a proactive environment, technical services should master the four areas of ebook processing: handoff, access, role, and knowledge. Beyond their own roles, staff members should know what actions to take when encountering problems and understand the roles of other staff in order to communicate irregularities with one another. The seamless management of ebooks requires recognition of their complex workflow and the processes involved, including defining ebooks as core to the collection, highlighting the work, emphasizing circulation, and marketing.

During the Q & A, the audience responded with additional suggestions on facilitating communication and workflow within technical services. The audience agreed that documentation that includes procedures, flowcharts, checklists, spreadsheets, and linking forms is essential to streamlining the workflow. No department operates in a silo; for instance, it is helpful for cataloging to be notified in advance about major ebook purchases. Regular technical services meetings to discuss improvements and maintenance projects were also recommended.

Successful Outreach: Celebrating Five Years of Preservation Week

By Jeanne Drewes, Library of Congress

On Sunday, June 29 at 1pm, the audience convened to celebrate what has been a very successful ALCTS outreach program. Jessica Phillips, University of North Texas, introduced the four speakers. The program started with Jeanne Drewes of the Library of Congress. She recounted how Preservation Week was conceived and developed in 2009 by Library of Congress and IMLS (Institute of Museum and Library Services) staff. A conference of aligned associations and organizations helped to set the parameters for the week and garnered partners in advertising and funding for the program. IMLS provided staff for organizing and developing materials for the website and handouts. The Library of Congress provided staff and funding for materials, which IMLS also supported the following year. The New Jersey State Library assisted with developing workshops on Preservation Week outreach for cultural institutions across the state. The materials developed for the workshops were then mounted on the ALCTS website. 2010 saw great success, and two examples of successful programs were provided by the next two speakers.

Ruth Shasteen spoke about two intergenerational collaborative grants in Illinois involving students (one K-12 public school and one college) and senior citizens. In 2009, Shasteen was the library media specialist at Central A&M High School and collaborated with other community partners on the project “Mining More in Moweaqua.” High school students interviewed older people in the community to produce a multimedia DVD containing the story of the town’s coal mine disaster and biographies of the miners, along with photos of the community from those years. This was a truly collaborative effort among the Moweaqua Public Library, Moweaqua Coal Mine Museum, Moweaqua Historical Society, and Central A&M School District.

Shasteen also shared a project from Eureka, Ill., where her sister, Nancy Scott, director of the Eureka Public Library District, collaborated with Maple Lawn Homes and Eureka College on an intergenerational memoir project spanning three years: “From Freshmen to Seniors: Capturing Memoir in the Web of Technology: Partnering Eureka College, Eureka Public Library District, and Maple Lawn Homes” (2000-2003).

Patricia Selinger, Preservation and Inventory Management, Virginia Commonwealth University Libraries, presented a history of preservation outreach at her library and in the university community. Selinger closed with these words: “Understanding your clientele and your organization’s mission/vision/values is key to the successful outreach effort. The slogan that I use during Preservation Week is: The family treasures of today are the special collections of tomorrow. My goal for Preservation Week outreach is to help my target audience understand the ‘why’ of preservation in the academic library setting, how it makes their university experience better and will help them enjoy their academic memories in the future.”

Nancy E. Kraft, Head of Preservation & Conservation, University of Iowa, and Preservation Week Committee co-chair, discussed the future of Preservation Week. She stressed the firm foundation that has been established, highlighting the event map and speaker locator, sponsorships, the preservation tool kit, webinars, and websites. Areas where the committee would like to expand include the “Share Your Story” webpage (adding more examples of preservation projects), the Dear Donia column, booths, and publicity. She shared what survey participants said they would like to see, noting that many of the ideas mentioned could happen at the local level. Kraft also stressed that Preservation Week activities can be held at any time, not just during Preservation Week, and that past webinars can be used for current programming.

The next two Preservation Weeks are April 26-May 2, 2015, and April 24-April 30, 2016. Sponsors for 2014 were Archival Products, Gaylord, The Media Preserve, Backstage Library Works, Hollinger Metal Edge, ACME Bookbinding, and George Blood Audio and Video.

CaMMS Forum: Translating BIBFRAME, or What is all this #$%!?: Making its potential mutually intelligible to catalogers and coders alike

By Elyssa M. Sanner, Northern Michigan University

The CaMMS Forum was held on Sunday, June 29 from 1-2:30pm to a standing-room-only crowd. The forum was smartly named, and its title aptly captured the feelings of more than one cataloger. The three presenters strove to distill BIBFRAME into terms that were practical and clearly understood.

The first presenter, Dorothea Salo, Faculty Associate, School of Library and Information Studies at the University of Wisconsin-Madison, was unable to attend the conference but had recorded her presentation in advance. Salo focused on computer-friendly data, pointing out that MARC records are not computer-friendly and that libraries need to migrate to another method to be a part of the future of the web. She argued that MARC was created when our universe had bounds, but the proliferation of the internet has made it boundless. To make library data more computer-friendly, Salo suggested using unique identifiers that never change or disappear, consistent data, and controlled vocabulary. She suggested that a “quick win,” or instant satisfaction, would be to include microdata, as described at http://www.schema.org, in the HTML of an ILS.
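
To make that suggestion concrete, here is a minimal sketch of what schema.org microdata for a catalog display page might look like, generated with a small Python helper. The itemtype and itemprop names come from schema.org’s Book type; the helper function, the sample record, and the placeholder ISBN are hypothetical illustrations, not anything shown at the forum.

    from html import escape

    def book_microdata(title, author, isbn):
        """Render one catalog record as a schema.org Book microdata fragment."""
        return (
            '<div itemscope itemtype="http://schema.org/Book">\n'
            f'  <span itemprop="name">{escape(title)}</span>\n'
            f'  by <span itemprop="author">{escape(author)}</span>\n'
            f'  <meta itemprop="isbn" content="{escape(isbn)}">\n'
            '</div>'
        )

    # Hypothetical record; an ILS would fill these values from its own data.
    print(book_microdata("A Study in Scarlet", "Arthur Conan Doyle",
                         "978-0-00-000000-0"))

Because the markup lives inside ordinary catalog HTML, search engines can pick it up without any change to the underlying records, which is what makes it a quick win.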

Philip Schreur, Head, Metadata Department, Stanford University, spoke about LC’s MARC-to-BIBFRAME converter. He made the conversion from a MARC record to a linked data BIBFRAME record look effortless: the converter simply took URLs for MARCXML records from Stanford’s catalog and transformed the data. The BIBFRAME records looked simple to read, and Schreur pointed out how easy it was to identify which MARC field the data originally came from. He encouraged the crowd to experiment with the converter and report any issues; for example, edition statements and holdings information still need to be incorporated into the BIBFRAME elements somehow.
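
For readers who want a feel for what such a transformation involves, the toy sketch below (emphatically not LC’s converter) fetches a MARCXML record and emits a few BIBFRAME-style triples. The pymarc and rdflib libraries are real; the catalog URL, the one-field title mapping, and the minted URIs are illustrative assumptions only.

    from io import BytesIO

    import requests
    from pymarc import parse_xml_to_array
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    BF = Namespace("http://bibframe.org/vocab/")  # 2014-era BIBFRAME vocabulary

    def marcxml_to_bibframe(marcxml_url, base_uri):
        """Turn one MARCXML record into a tiny BIBFRAME-style graph."""
        record = parse_xml_to_array(BytesIO(requests.get(marcxml_url).content))[0]
        g = Graph()
        g.bind("bf", BF)
        work = URIRef(base_uri + "#work")
        instance = URIRef(base_uri + "#instance")
        g.add((work, RDF.type, BF.Work))
        g.add((instance, RDF.type, BF.Instance))
        g.add((instance, BF.instanceOf, work))
        titles = record.get_fields("245")  # MARC 245 carried over as bf:title
        if titles:
            g.add((instance, BF.title, Literal(titles[0].value())))
        return g

    # Hypothetical URL; any endpoint returning raw MARCXML would work.
    # g = marcxml_to_bibframe("https://example.edu/record/123.xml",
    #                         "https://example.edu/bf/123")
    # print(g.serialize(format="turtle"))

The real converter maps far more fields, but the shape is the same: one MARC record fans out into linked Work and Instance resources that other systems can point at.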

Nannette Naught, Vice President Strategy & Implementation, Information Management Team (IMT), Inc., was the final presenter and reiterated some of the same points as the other presenters: authority control is needed, and an experimental attitude is essential. She drew a comparison between the MARC/RDA environment and Woody and Buzz in the Toy Story films, highlighting the changing models and attitudes in bibliographic data over the past decade.

Overall, this forum confirmed my belief that authority control is an essential part of the future of library bibliographic data and provided thoughts and ideas for a way forward.

Fifth Annual ALCTS CMS Collection Management and Development Research Forum

By Heath Martin, University of Kentucky Libraries

The Fifth Annual Collection Management and Development Research Forum, sponsored by the Publications Committee of the ALCTS Collection Management Section (CMS), included two presentations focused on recent research in the areas of collection development and assessment. Approximately 40 people attended this year’s forum, held at 4:30 p.m. on Sunday, June 29.

In the forum’s first presentation, entitled “Sustainable Data-Driven Collection Assessment of Print Monographs in an Academic Library,” Amy Ward, Director of Technical Services at Gettysburg College’s Musselman Library, discussed the library’s recent attempts to build an assessment process focused on collection data and liaison input. Ward began by providing background information on the college, the library, and the collections, including past collection assessment practices and lessons learned through their application. This retrospection and the need to develop a new assessment process led to a fundamental question: “Can we develop and implement a sustainable collection assessment process of our print monographs that would combine data-driven decisions with liaison input?” The area of American Politics was selected to pilot the new process, involving 2,785 items in the JK classification.

After evaluating assessment criteria and their relative importance, Ward and her colleagues designed an assessment model that employed “automatic keep” criteria, range-level data collection, title-level data collection, and liaison input and recommendations. Materials qualifying under the first phase of the process, automatic keep criteria, included recently purchased titles not superseded by newer editions, items and topics with recent or high circulation or reserve use, and others. By the conclusion of this first phase, decisions had been made concerning 56 percent of the collection under review. Phase Two of the assessment process involved range-level data collection and review, which took into account age of collection, age of acquisition, rate of usage, and the Resources for College Libraries titles report. Data from this phase of the process was used as a starting point during discussions with the subject liaison for the area. The liaison then was able to assist in the next phase, title-level data collection, and provide recommendations on items remaining to be reviewed.
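
To give a rough sense of how a phase-one pass like this might be automated, here is a minimal sketch of an “automatic keep” filter in Python. The field names, thresholds, and sample items are invented for illustration and are not Musselman Library’s actual criteria.

    from datetime import date

    def automatic_keep(item, today=None):
        """Phase one: flag items that need no further review."""
        year = (today or date.today()).year
        recent = (year - item["acquired_year"]) <= 5  # hypothetical threshold
        high_use = item["circulations"] >= 3 or item["reserve_uses"] > 0
        return (recent and not item["superseded"]) or high_use

    # Two invented JK-classification items for demonstration.
    collection = [
        {"call_no": "JK274 .S55 2011", "acquired_year": 2011,
         "superseded": False, "circulations": 0, "reserve_uses": 0},
        {"call_no": "JK1021 .B47 1995", "acquired_year": 1995,
         "superseded": True, "circulations": 7, "reserve_uses": 2},
    ]
    undecided = [i for i in collection if not automatic_keep(i)]
    print(f"{len(undecided)} of {len(collection)} items go to range-level review")

Everything the filter cannot decide flows on to the range-level and title-level phases, where the data is discussed with liaisons rather than applied mechanically.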

In the end, Ward and her colleagues were able to reach decisions for the entire pilot collection, eliminating unneeded materials as new titles were added and bringing the age of the collection more in line with needs and expectations. The process resulted in greater understanding of the collection, less reliance on traditional weeding activities, and a way to address space concerns in an ongoing and sustainable way. After the pilot, the new process was expanded to several additional subject areas.

The second offering of the forum came from two colleagues at James Madison University: Genya O’Gara, Director of Collections, and Cheri Duncan, Director of Acquisitions and Cataloging. Their presentation, titled “Developing Holistic Approaches to Agile Collection Development and Assessment,” addressed the challenge of holistic, agile collection development and assessment amid the evolution of collection formats, content delivery, and scholarly communication. Their research focused on developing content development practices responsive to changing needs, piloting assessment methods inclusive of the diverse scholarly output made available through the JMU library collections, and articulating the value of those collections to library constituents.

O’Gara and Duncan began by describing the content development and assessment practices and workflows in use at JMU, using that background to trace the flow of scholarly communication among university stakeholders and identify potential for productive change. Among these changes were efforts to redistribute liaison work to more proportionately represent the diversifying demands on library expertise, making room for areas such as data management, digitized collections, new publication platforms, and streaming media.

To illustrate these priorities, the presenters described specific pilot initiatives intended to address these larger issues, including the collapsing of existing fund structures to consolidate content development activities and accommodate interdisciplinary curricular and research emphases requiring library support. Other pilot projects focused on “collection snapshot” reports made available to subject librarians and other stakeholders, examination of database and journal review factors, and the development of a flexible assessment rubric offering a suite of tools to better evaluate and articulate the holistic value of library collections to a variety of constituents. The presenters discussed the rubric in detail, outlining the key components and values informing its creation: a focus on university values and strengths, patron demographics, curricular priorities, local research outputs, budgetary constraints, scholarly communication practices, and consortial collecting.

PARS Forum

By Karen E.K. Brown, SUNY Albany

Two timely topics were planned for the PARS (Preservation and Reformatting Section) Forum, held Sunday, June 29 at 4:30pm: Midwinter Meeting attendance, and a discussion on work being done by ALA’s Committee on Legislation (http://www.ala.org/groups/committees/ala/ala-lg) towards a national preservation plan for the Federal Depository Library Program (http://www.gpo.gov/libraries/).

Becky Ryder, current Chair of PARS, led both discussions. With members of the PARS Executive Committee making up the panel, she and Jake Nadal, outgoing chair, provided some background on the issues surrounding participation at the Midwinter Meeting. The central question, after many years of discussion across ALA, is the value of holding two conferences per year, both to the organization’s coffers and to attendees. PARS is taking baby steps toward making Midwinter Interest Group (IG) meetings optional.

Except for the opening meeting and the Preservation Administrators Interest Group, which will run at both conferences, chairs of the other interest groups will be asked whether they plan to organize a session for Midwinter in time for early registration in September. This will give the membership plenty of advance notice about what to expect so they can plan accordingly. Several of the nearly 40 attendees felt that descriptions of meeting content should also be issued well ahead of time.

Many PARS members find it financially challenging to attend both the Annual Conference and the Midwinter Meeting, noting that doing so can take resources away from attending other professional meetings. For those who do come to Midwinter, one individual remarked, there is the chance to participate in meetings held by other units, bringing preservation interests to them and bringing their concerns and ideas back to PARS. The biggest shortcoming seems to be ALA’s lack of technology support to truly virtualize its conferences and connect those unable to attend; this should be a goal, even if some present thought it would not happen anytime soon.

The last part of the forum was a short update on the role PARS has played advising the ALA Committee on Legislation’s Federal Depository Library Program (FDLP) Task Force. In the week leading up to Annual, Ryder circulated the 2013 and 2014 (http://connect.ala.org/node/225622) Task Force reports, along with a copy of the PARS response. Her formal comments are generally very supportive of the work to date and suggest that the task force is on the right track in respecting preservation basics like avoiding duplication of effort and using recognized standards. A representative of the Government Printing Office in attendance was able to clarify which responsibilities belong to GPO versus the FDLP. Of special concern is the capture of digital, especially web-based, government content.

Library of Congress Bibliographic Framework Initiative (BIBFRAME) Update Forum

By Julie Renee Moore, Fresno State University

SPEAKERS:

  • Beacher J.E. Wiggins, Director for Acquisitions and Bibliographic Access, Library of Congress.
  • Sally Hart McCallum, Chief of the Network Development and MARC Standards Office, Library of Congress.
  • Kevin Ford, Digital Project Coordinator in the Network Development and MARC Standards Office, Library of Congress.
  • Phil E. Schreur, Head of the Metadata Department, Stanford University Libraries.
  • Andrea Leigh, Head of the Moving Image Processing Unit, National Audiovisual Conservation Center, Library of Congress in Culpeper, VA.

Beacher Wiggins introduced the panel of speakers and explained what the Library of Congress has been doing regarding BIBFRAME since the last update at Midwinter 2014.

Sally Hart McCallum reported that LC has reorganized the main website, loc.gov/bibframe, which links to bibframe.org; that is where downloadable tools and the vocabulary can be found. LC has also established a registry and a test bed for people who are actively experimenting with BIBFRAME, along with a listserv for those who are registered; the only membership requirement is active experimentation with BIBFRAME. In April, LC posted draft specifications for 1) authorities and how they are viewed in BIBFRAME; 2) how relationships are viewed in BIBFRAME; and 3) profiles. The draft profiles document was written by Zepheira. At the end of April, LC made the BIBFRAME Editor available. McCallum calls it the “editor interface” since it does not have a “back side” to it. People can experiment with the BIBFRAME Editor now and set up their own “data store.”

LC has been modeling recorded sound and moving image resources with the assistance of Audiovisual Preservation Solutions. It also worked with Andrea Leigh and the National Audiovisual Conservation Center (NAVCC), where all of the Library’s moving image and recorded sound materials are kept. That study, which looks at modeling sound recordings and moving images in BIBFRAME, is finished, and LC hopes to publish it by the end of July 2014.

Other research that will be continuing in 2014-2015:

  • Metaproxy – further examination of extending across LC’s relational databases
  • Request for Proposal (RFP) for BIBFRAME Editor profiles (an acquisitions assistant may not require the same amount of data that a map cataloger requires, for example)
  • RFP for a BIBFRAME search-and-display interface without the back store

McCallum emphasized that the software products from any research at LC will be made downloadable and open source. The coming year promises to be full of research on these numerous projects.

Kevin Ford talked about and demonstrated the BIBFRAME Editor, whose release was LC’s main focus over the last six months. Zepheira developed the prototype editor; LC wanted to develop its own editor that would be as “pluggable” as possible. LC has continued to work on the MARC-to-BIBFRAME transformation and has been enhancing and reorganizing the bibframe.org website, the vocabulary display, and the tools. Ford provided some technical examples, showing what has been done with the BIBFRAME Editor so far. The Editor is written in JavaScript, so it will run in any browser and requires no special software. Right now it does not write to a data store, but that is by design: the idea is that people will be able to plug it into a search-and-display system (an RFP is out for one). LC has started work on a very basic storage-and-retrieval application for the Editor. The Editor runs on profiles, and LC hopes to issue an RFP for a profile editor application; right now, Ford says, any developer who knows JavaScript Object Notation (JSON) can download the code and create their own profile editor application. All of the code is available on LC’s NetDev GitHub account, so attendees can take it back to their own software developers and experiment with the projects and products LC is working on.
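
As a rough picture of what a JSON profile might contain, here is a hypothetical, heavily simplified example rendered in Python. The key names and values below are illustrative guesses at the general idea of a profile (which resource types and input fields an editing interface should offer) and are not taken from LC’s draft specification.

    import json

    # Hypothetical, heavily simplified editor profile: different staff roles
    # (acquisitions assistant vs. map cataloger) could load different profiles.
    monograph_profile = {
        "id": "profile:monograph-brief",
        "description": "Brief monograph view for acquisitions staff",
        "resourceTemplates": [
            {
                "resourceType": "bf:Instance",
                "propertyTemplates": [
                    {"property": "bf:title", "label": "Title", "required": True},
                    {"property": "bf:isbn", "label": "ISBN", "required": False},
                ],
            }
        ],
    }

    print(json.dumps(monograph_profile, indent=2))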

Philip Schreur gave a presentation titled “Stanford and BIBFRAME: Big Data, Big Issues, Big Solutions.” In 2013 the Andrew W. Mellon Foundation awarded a two-year, $999,000 grant to Cornell, Harvard, and Stanford for Linked Data for Libraries. The project partners are working to 1) develop an ontology; 2) develop a collection of linked open data resources shared among the partners; and 3) create a suite of open source software tools to make the project reproducible for anyone else who is interested. Whenever possible they are building on existing work by existing projects, so as not to reinvent the wheel; for this reason they are researching the BIBFRAME ontology, and Stanford has taken the lead on evaluating BIBFRAME as a possible source for the ontology. The goals are to create an ontology and to share the semantic editing, display, and discovery system, making sure that each instance of the system will support the ingest of semantic data from multiple information sources. The project will use the Cornell, Harvard, and Stanford catalogs, converting as many as 30 million MARC records to BIBFRAME, as well as records in digital repositories and other bibliographic metadata. The other piece is authority data, which will be handled differently in BIBFRAME; the partners have devised a process to provide consistent identifiers for names, organizations, and subjects. Schreur provided some very interesting examples in BIBFRAME and encouraged interested institutions to experiment with it, as a variety of voices keeps the conversation from becoming too institution-specific.

Andrea Leigh discussed the fascinating types of materials housed at the National Audiovisual Conservation Center, Library of Congress, in Culpeper, VA. NAVCC undertook a research study on using BIBFRAME for its many types of moving images and sound recordings and has written a report on the study, which will be available at the end of July 2014. The report will discuss the special characteristics of AV material and will make some general recommendations for the BIBFRAME model and how the special attributes of these media might best be accommodated. AV information resources are becoming increasingly important, and so is the need to preserve them. The title of the report is “BIBFRAME AV Modeling Study: Defining a Flexible Model for Description of Audiovisual Resources”:

http://www.loc.gov/bibframe/pdf/bibframe-avmodelingstudy-may15-2014.pdf

Questions were fielded after the panel.

Beacher Wiggins let the audience know that the PCC office recently released a survey to help assess the community’s knowledge of BIBFRAME and determine where educational efforts could best be directed.

This program was videorecorded, and is available at: http://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=6323

Monday

The Sub-Librarians Scion of the Baker Street Irregulars in the American Library Association

By Alison M. Armstrong, Radford University

Marsha Pollak, Sub-Librarians, B.S.I., A.S.H., opened the 38th (Irregular) Annual Meeting of the Sub-Librarians Scion of the Baker Street Irregulars (B.S.I.) on Monday, June 30, by welcoming attendees and providing some background on the group: the Sub-Librarians are the oldest theme- or profession-oriented group interested in Sherlock Holmes.

It is tradition to raise a toast to characters from the Holmes stories. Since the meeting began at 8:30am, the beverage of choice was of the caffeinated variety. Lomax, the Sub-Librarian of the London Library, was toasted in absentia by Paula Perry. Next, glasses were raised to Sherlock Holmes, Bookman and Special Librarian, toasted by Gary Thaden. Third was Hill Barton (John H. Watson, M.D.), Library Patron, toasted in absentia by George Scheetz. Baron Gruner, Collector, was toasted by Joe Coppola. The last, Kitty Winter, Wronged Woman, was toasted by Elaine Coppola.

Someone mentioned that The Beacon Society offers grants for programs involving Sherlock Holmes and literacy.

Following the toasts, author Laurie R. King was introduced. King was inducted into the B.S.I. as “The Red Circle” in 2010. She is the author of the Mary Russell-Sherlock Holmes series and is celebrating the 20th anniversary of The Beekeeper’s Apprentice; since that first book, the series has grown by eleven more titles. King has also written books in two other series, stand-alone titles, and companion books, and has contributed to anthologies.

King began by telling herself stories and then sharing those stories with others. When she started writing about Mary Russell, she found that, despite writing about a feminist woman who insults Sherlock Holmes regularly, she was welcomed into the Sherlockian universe.

With a laugh, she said that she is surprised when she finds people who have not heard of her books, not because she is that pompous, but because she thinks her publishers are doing a great job of getting her name out there.

King has been holding a contest annually during National Library Week. This year’s essay theme is “How the Beekeeper Changed My Life.” She said that she wrote for all of the girls who finished a Holmes book and thought, “but he doesn’t mean me.”

Her next book, Dreaming Spies, will come out in February. Readers will find two different libraries featured in the book; King also admits to an ethical struggle with one of the things Russell does in a library there.

A question and answer period followed. Asked about other stories and other series, King said she is trying to fit them in, and noted that there are a lot of neat items for sale at the CafePress online store. In answer to a question about the phenomenon of Holmes, King said that Holmes is the modern archetype: he is a hero and shaman who leads us to believe that you and I can make a difference through careful thinking. This is powerful in an age when we don’t feel very powerful.

She mentioned that all of the Holmes stories are in the public domain except for the last ten, which remain under copyright. The appellate judge’s ruling, posted on the Free Sherlock! website, is well worth reading.

Toward the end, she said she considers herself a “recovering academic” and loves doing research. She shared a great story about checking out a 1923 book about pig sticking and, after leaving the library, found a live, wild boar in the road in an upscale neighborhood.

For more information about B.S.I.: http://elisanonline.com/sublibrarians/about.html

For more information about Laurie R. King: http://www.laurierking.com

For more information about the Beacon Society: http://www.beaconsociety.com

Free Sherlock! website: http://free-sherlock.com/

American Libraries Q&A with Laurie R. King: http://www.americanlibrariesmagazine.org/blog/qa-laurie-r-king

ALCTS Publisher/Vendor/Library Relations Interest Group Forum:

A Textbook Case: The Changing Textbook Ecosystem in Higher Education: New Opportunities, New Perspectives

By Anneliese Taylor, University of California, San Francisco

This forum, held Monday, June 30 at 8:30am, featured presentations from four speakers, followed by Q&A with an engaged audience. Kris Lange of Ingram kicked things off by talking about his experience selling used books through his company BookFool, recently sold to Ingram. Lange started BookFool while a student at the University of Mississippi, with the goal of selling affordable textbooks online, directly to students.

The key trends Lange noted related to textbooks and bookstores are:

  • An increase in piracy of textbooks online: the share of student workers at a large university bookstore who reported seeing pirated texts went from 20 percent last year to 100 percent this year.
  • Fewer courses are requiring paid content.
  • Schools are experimenting with an institutional model, paying the full cost of course materials for students (e.g., University of North Carolina Charlotte). Ingram’s VitalSource is a platform for this model.

Franny Kelly, product manager for eTextbooks at Wiley, gave a snapshot of where textbooks are today from a publisher’s perspective. Publishers are aware that the current textbook model is broken, and Wiley conducts focus groups to find out what students want. Today, the print textbook market is alive and well; most students still like using printed books because the book is a simple device, without distractions. They want ebooks to mimic the print environment, with some modifications. Custom-tailored textbooks and selected chapters are growing in popularity, and print rentals (e.g. Chegg and Amazon) are on the rise. Despite the thriving print market, Kelly predicts that 2016 may be the tipping point toward digital.

When it comes to digital textbooks, the definition of an etext is evolving from flat PDFs, to enhanced etexts, to entire course solutions. Colleges are leading the way in this arena, and integration with learning management systems is becoming a requirement; the shift in the K-12 market is much slower because of school boards. Digital rentals are gaining momentum, with companies like Amazon, Chegg, and Packback offering students a model of paying a flat rate per semester (around $8.99) for access to content.

What do students want in an etextbook?

  • Portability to different devices.
  • Notetaking and highlighting functionality, and study curation tools (index cards).
  • Social tools to communicate with other students through the interface.
  • Adaptive tools, whereby the book changes what’s presented to the student based on what they already know (via testing).
  • The option to buy a permanent download after the rental term expires.

SUNY (State University of New York) Geneseo has taken a different approach to online textbooks with the creation of Open SUNY Textbooks. Library Director Cyril Oberlander is also the Principal Investigator on the $20,000 grant obtained to establish this open textbook pilot project for SUNY. Open SUNY Textbooks has 15 open access textbooks written and reviewed by SUNY faculty. The titles were selected from among 46 proposals. More than 15,000 unique viewers accessed the texts in the first year.

Five libraries in the SUNY system are participating in the pilot, and it has been an opportunity for librarians to take on new roles and learn new skills. For the pilot to become a scalable, viable service, the library must train staff and partner with a publishing house. Oberlander is moving to the California State University system and wants to connect SUNY’s activities to his new institution; he welcomes exploring library-as-publisher initiatives with other groups as well.

Chuck Hamaker, Associate University Librarian for Collections and Electronic Resources at the University of North Carolina Charlotte (UNCC), made the final presentation. Libraries often say they don’t buy textbooks, even when they do; the UNCC Library is making a point of acquiring online textbooks selected by professors. The library created a database of 130,000 ebook titles and descriptions from multiple publishers (including Wiley, Elsevier, ABC-CLIO, Taylor & Francis, and Cambridge). Professors request ebooks through the database, which initiates an order and a course reserves listing.

The expense to the library has been reasonable; based on UNCC’s ebook pricing analysis, purchasing ebooks directly from publishers also works out to be less expensive. These platforms allow unlimited simultaneous users without digital rights management, which benefits users. The campus bookstore tags books available through this program, letting students know they are available for free online. The bookstore’s sales have not been hurt; the program instead keeps it from losing more textbook business to competitor bookstores. Publishers benefit from participating as the program exposes faculty to content they might not have discovered otherwise. Visit http://library.uncc.edu/etextbooks to see the texts currently available through this pilot project.

The Quiet Strengths of Introverts: ALCTS President’s Program featuring Jennifer Kahnweiler

By Shannon Tennant, Elon University

Despite being a self-confessed extrovert, executive coach and speaker Jennifer Kahnweiler gave a dynamic and interactive program about the strengths of introverts. Kahnweiler began by asking for a show of hands by all the introverts in the audience – and there were a lot! She clarified that introversion is not good or bad, but rather about energy. Introverts are energized from within, by thinking and spending time in their heads. Extroverts, by contrast, are energized by people. Even if they are not in positions of power, introverts can get their ideas across by using what Kahnweiler calls “quiet influence.” Kahnweiler organized her talk into three points:

  • Why quiet influence is right for now
  • How quiet influencers do what they do – 6 strengths and how to use them
  • The absolute impact of quiet influence

Kahnweiler demonstrated the power of quiet influence by asking the audience to think about someone who had made a big impact in their lives with very little fanfare. She compared these people to pebbles thrown in a pond, causing ripples without knowing it.

Kahnweiler identified five challenges that introverts face in the typical workplace:

  1. People exhaustion. An introvert needs time away from people to recharge.
  2. Fast decisions. The perceived need for instant decisions can be a challenge for reflective introverts.
  3. Teams. Many places now have work teams for everything, which do not offer introverts the solitude they need for reflection and creative thinking.
  4. Sell yourself. There is a culture of self-promotion in many workplaces, and introverts are reluctant to boast about their accomplishments.
  5. Put on a happy face. Introverts are not loudly cheerful like extroverts, but that does not mean they are depressed.

Kahnweiler stressed that there is a long learning curve for extroverts to learn what it’s like to be an introvert, and vice versa. But the key is awareness. Kahnweiler went on to list the six key strengths of introverts:

  1. Taking quiet time
  2. Preparation
  3. Engaged listening
  4. Focused conversations
  5. Writing
  6. Thoughtful use of social media

She then delved more deeply into two of these, engaged listening and the use of social media.

Engaged listening gives its practitioners a chance to build rapport and to understand people’s concerns at a deeper level. Kahnweiler organized a lively demonstration with Erica Findley and Daniel Lovins of the ALCTS planning committee to illustrate the incorrect and correct ways to listen. Engaged listening requires paying attention, making eye contact, and not making it all about the listener. Kahnweiler warned that overusing a strength can turn it into a weakness: if you listen too much and do not share your ideas, other people will not know where you stand and might overlook your input, and listening too much can also make you the recipient of too many confidences, making it hard to get your own work done. Her listening tips: focus when listening, turn off other thoughts and be present in the moment, and ask yourself what you can learn from the other person, even if it’s what not to do.

The second strength that Kahnweiler analyzed was the thoughtful use of social media. Social media is here to stay and can allow introverts to reach a broad and diverse audience; effective use shows people what they have in common, and humor can be a powerful tool. Kahnweiler gave some examples of librarians who use blogs, websites, and Twitter to share their ideas. Social media can be overused: there is a risk of addiction, and people often use it to express only their own thoughts, never engaging in dialogue with others. But anyone not using social media now is missing out. Kahnweiler’s tips for using social media thoughtfully: try it for 15 minutes a day, listen and learn, use tools to organize and filter the information, and post regularly.

In conclusion, Kahnweiler engaged with the audience by asking them to share a challenge and the quiet influence solutions they planned to take.

Continuing Resources Cataloging Committee Update Forum

By Erin Leach, University of Georgia Libraries

This program, held on Monday, June 30, at 1:30pm, consisted of updates from representatives of CONSER and the U.S. ISSN Center and from the Continuing Resources Section’s representative to CC:DA, followed by a panel discussion focused on the serials cataloging community’s response to the adoption of RDA.

Les Hawkins, CONSER coordinator, reported that many of the modules in the CONSER Cataloging Manual have been updated from AACR2 to RDA. These updates are posted on the CONSER website (http://www.loc.gov/aba/pcc/conser/more-documentation.html). The microform cataloging module review is on hold, however, pending the acceptance of policy decisions submitted to the PCC Standing Committee on Standards; it is expected that these decisions will be approved by the committee and incorporated into the LC-PCC Policy Statements by October. Next up in the review process will be the CONSER Editing Guide and the Serials Cooperative Cataloging Training Program documents.

Regina Romano Reynolds, Director of the U.S. ISSN Center and Head of the ISSN Section at the Library of Congress, gave updates on ISSN activities. Reynolds began by discussing ROAD (http://road.issn.org), a freely available directory of open access scholarly resources that includes records for download. Reynolds also reported that the Standing Committee for PIE-J (Presentation and Identification of E-Journals) created a template (http://www.niso.org/workrooms/piej/) for libraries to use when writing to publishers about how their e-journals are presented on online platforms.

Adolfo Tarango, Continuing Resources Section representative to CC:DA, reported that there are no serials-specific issues currently under discussion by the committee. CC:DA is currently discussing proposals regarding production statement transcription guidelines, the mark of omission in titles, and a suite of proposals from OLAC and MLA regarding AV cataloging. Tarango also reported that the committee continues to use its blog (http://alcts.ala.org/ccdablog/) as a clearinghouse for working documents.

After the updates concluded, the panel portion of the forum began.

Peter Fletcher (UCLA), Les Hawkins (CONSER), Ed Jones (National University), Kevin Randall (Northwestern University), and Regina Reynolds (U.S. ISSN Center/Library of Congress) took part in an “ask the experts” discussion moderated by Adolfo Tarango (University of California, San Diego).

Reynolds, Fletcher, Randall, Hawkins, and Tarango began with prepared statements addressing a wide range of issues, including RDA series training, local implementation of RDA, the nature of a serial “work,” and the way web-scale discovery interfaces obscure the work done by catalogers. Audience members then asked questions about RDA that were both conceptual and practical in nature, including the use of the ISSN in series authority records, dealing with the removal of the GMD when a system does not display the 33X fields, and preferences for recording information about online resources.

In both their prepared remarks and their answers to audience questions, panelists emphasized the importance of cataloger’s judgment. They agreed that RDA gives catalogers an unprecedented level of freedom to make decisions when creating records; cataloger’s judgment requires record creators to feel empowered to make the decisions they think best. Reynolds suggested that when exercising cataloger’s judgment, catalogers ask themselves: What harm will it do?

Articles on Demand: Library Perspectives

By Kristin E. Martin, University of Chicago

The program, Articles on Demand: Library Perspectives, sponsored by the ALCTS Continuing Resources Section and RUSA Sharing and Transforming Access to Resources Section, was held on Monday, June 30, from 1-2:30pm, and featured three different speakers. Articles on demand refers to a transactional method of acquiring journal content for library users at the article level, rather than the title level, and was alternately called pay-per-view or document delivery by different speakers. Articles purchased generally become available to an individual user only, and are not added to a library’s collection. For consistency, this write-up will use the terminology articles on demand.

Beth R. Bernhardt, University of North Carolina Greensboro (UNC Greensboro), began the program by providing historical background on the use of articles on demand at UNC Greensboro. The library began using the model when a sizable cut to its serials budget, combined with heavy growth in its student population, required a different method of content acquisition. The articles-on-demand model, instituted in fiscal year 2003/2004, made it possible to provide quicker access to more articles, both current content and backfiles. UNC Greensboro worked with FirstSearch, EBSCO, American Institute of Physics, Ingenta, ScienceDirect, Wiley, and Ovid. The library tried to provide access to all unsubscribed titles, backfiles, and current content when access via an aggregator included an embargo. Titles were added to the library catalog, and users received articles via an authentication mechanism that indicated the article was individually purchased. From 2003 to 2006, the cost per article averaged around $20. In 2006, UNC Greensboro shifted away from articles on demand when it had the opportunity to enter into multiple Big Deals through consortia, which proved more cost-effective per article than individual purchases. From the end of the 2000s to the present, however, the library has faced significant cuts to its collections budget, so it is considering pulling out of the Big Deals and returning to the articles-on-demand model.

Susanna Bossenga, Northeastern Illinois University (NEIU), described her library’s more recent move, in 2007, to experiment with and ultimately expand access to journal content using articles on demand. After NEIU predicted that several subject funds would run out of money due to rising serials subscription prices, the library decided to cancel over 50 titles and switch to an articles-on-demand model. Ultimately the library created a mediated program, working through traditional interlibrary loan channels, to control costs and gauge how popular the program would be; ILL staff also verified whether content was already owned by the library, preventing costly duplication. NEIU decided to use the British Library document delivery program, which resulted in around $90,000 in savings while costing only $22,000 for the individual articles. In 2011, NEIU switched to the Get It Now service from the Copyright Clearance Center and expanded the program to cover more titles. In the future, the library is investigating the option of giving faculty individual accounts to allow unmediated article purchases.

Mark England, University of Utah, ended the session with a description of the articles-on-demand program available for Nature Publishing Group (NPG) titles through ReadCube Access. ReadCube (https://www.readcube.com/) is owned by Digital Science, part of Macmillan Publishers Limited, as is NPG. ReadCube is a reference management tool, originally developed by scientists at Harvard University in 2007 and made publicly available in 2011. ReadCube Access, developed as a pilot project at the University of Utah, provides an unmediated article delivery service within ReadCube. The library pays for article access, originally for 29 NPG titles, now expanded to 37. Articles can be purchased under a limited rental model with heavy digital rights management (DRM) restrictions, as a cloud-only perpetual access purchase, or as an unrestricted-use PDF purchase. Not surprisingly, the PDF option is both the most expensive and the most popular, although Utah, as a cost-saving measure, restricted some titles to the DRM version. In its second year of using ReadCube, Utah had 455 users make 812 purchases for $9,900, roughly $12 per article, which was less expensive than other options for obtaining the articles, including ILL. ReadCube is in conversations with additional publishers to expand ReadCube Access beyond the NPG titles.

Slides are linked in the Conference Scheduler: http://ala14.ala.org/node/14379

Creating Sustainable AV Preservation in Academic Libraries

By Annie Peterson, Tulane University Library

On Monday, June 30, Howard Besser led a panel of audiovisual preservation professionals who presented different approaches to establishing audiovisual preservation programs in academic libraries.

The first speaker was Siobhan Hagan, audiovisual archivist at the University of Baltimore. Hagan presented two different models for establishing AV preservation in academic libraries, drawing on her experience at UCLA and the University of Baltimore. At UCLA, the model was to hire a consultant, get grant funding for a temporary position dedicated to AV preservation, and then turn that into a permanent position. At the University of Baltimore, Hagan’s work as an intern led to grants and collaborations, followed by increased interest in AV preservation, then restructuring within the library, and finally the establishment of the audiovisual archivist position. Hagan noted that in both situations, support from library administration was critical to the successful creation of new audiovisual preservation programs. Materials from the University of Baltimore’s AV collections are available through the Internet Archive at https://archive.org/details/ublangsdale.

The second speaker was Stefan Elnabli, moving image and sound preservation specialist at Northwestern University Library. Elnabli identified some key factors in establishing an AV preservation program, including advertising services to faculty and curators, collecting playback equipment for obsolete media, working collaboratively across the institution, and promoting all of that work within the field of AV preservation. Elnabli set up a film inspection station, as most of the other panelists had also done soon after starting in their positions. He noted that a preservation mission with limited resources is a perpetual balancing act: deciding what to preserve, and at what level of preservation, was a challenge seen in all the panelists’ work.

Steven Villereal spoke about starting an AV preservation program at the University of Virginia (UVa). UVa’s preservation program began in 2005, and in 2009 Villereal was brought in for a temporary position, which in 2012 became a permanent faculty position. Villereal started with small preservation projects to show people what is possible with AV preservation, and has since set up a film inspection station, an audio lab, and a video digitization lab. He mentioned the importance of hoarding audiovisual equipment as playback decks become increasingly hard to find.

Hannah Frost, manager of the Stanford Media Preservation Lab, represented the longest-running AV preservation program on the panel; Stanford University Library has been building capacity for AV preservation since 2000. Frost spoke about five challenges to sustainability: organizational position and staff funding, lab facilities, playback equipment, born-digital media, and technology choices. The other panelists clearly faced many of the same challenges, so her presentation was a good complement to the speakers who were earlier in the process of confronting them.

Besser wrapped up the panel with a brief Q&A session, during which the audience asked both general questions about building up capacity for AV preservation, and specific technical questions about digitization of AV materials.