All Those Programs You Missed: Annual 2013 Programs

Each year, ALCTS members volunteer to cover a program they attend at ALA Annual Conference. These efforts enable the rest of us to benefit from their presentations. Whether you attended the conference but missed the program, or couldn’t attend at all, these articles provide a great way to learn about what was covered.

RDA continued to be a major topic, with at least four programs focusing on RDA implementation. The 2013 Annual Conference also marked an increase in presentations relating to BIBFRAME. Consortial projects were also popular, whether shared demand-driven acquisitions (DDA) or shared print monographs. On the down side, many meetings acknowledged the rise of predatory open access publishers and discussed how best to manage their impact on faculty and publishing in general.

Thursday

Preconference: RDF and Ontologies for the Semantic Web

By Mary Beth Weber, Rutgers University

The primary focus of this preconference, held Thursday, June 27, and led by Steven Miller of the University of Wisconsin-Milwaukee, was ontologies. The intended audience was catalogers without information technology (IT) or computer science backgrounds who hoped to gain a basic understanding of the Resource Description Framework (RDF) and of ontology modeling concepts and terminology. Miller cautioned attendees that RDF, linked data, and related concepts are often overhyped; they will not lead to a utopian future and are still works in progress.

The full-day preconference was divided into six sessions (sessions 2-4 included exercises):

  1. Introduction to the Linked Data and Semantic Web visions
  2. RDF: The Resource Description Framework
  3. Semantic Modeling, Domain Ontologies, and RDF
  4. OWL: Web Ontology Language
  5. SKOS: Simple Knowledge Organization System
  6. Domain Ontology, Vocabulary, and Implementation

1. Introduction to the Linked Data and Semantic Web Visions

The objective of session 1 was to help attendees gain a better understanding of the Semantic Web and Linked Data. A comparison of the Web and the Semantic Web revealed that the Web is a web of linked documents: its data is unstructured, connected by hyperlinks, and typically searched using keywords. In contrast, the Semantic Web is a web of linked data. Its data is structured and has semantic meaning, which will make parts of the Web function more like a database and enable more sophisticated searching than keyword matching. The current Web is designed for use by humans; the Semantic Web will be suited for use by machines. The Semantic Web will work better due to explicit metadata; ontologies (complex, formal collections of terms); logic and inference; and agents, which are software programs that accomplish tasks for humans.

Miller provided historical context for the Semantic Web: Web 1.0 was the original Web; Web 2.0 added social media that enabled users to create mashups, comments, and reviews; Web 3.0 is the next stage in that evolution. The Semantic Web offers the ability to transform parts of the public World Wide Web with linked open data. Semantic Web technologies are also applied in closed environments within organizations, such as health care facilities and government agencies. Linked Data and the Semantic Web are not synonymous: Linked Data pursues a less ambitious vision and is a means of achieving the goal of the Semantic Web.

A slide titled “Tim Berners-Lee: ‘Linked Data Design’” outlined four rules of Linked Data design:

  1. use Uniform Resource Identifiers (URIs) as names for things,
  2. use HTTP URIs to enable people to look up those names,
  3. provide useful information following standards such as Resource Description Framework (RDF) or SPARQL Protocol and RDF Query Language (SPARQL), and
  4. include links to other URIs so that searchers can find other things.

Miller recommended books on the Semantic Web:

  • Publishing and Using Cultural Heritage Linked Data on the Semantic Web by Eero Hyvönen
  • A Semantic Web Primer by Grigoris Antoniou and Frank van Harmelen
  • A Developer’s Guide to the Semantic Web by Liyang Yu
  • Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL by Dean Allemang and James Hendler

2. RDF: The Resource Description Framework

The objective of session 2 was to give attendees a basic understanding of RDF as the data model that underlies Linked Data and the Semantic Web. Miller explained that ontologies on the Semantic Web are built on RDF, and the session covered its most important and basic aspects.

RDF uses a graph-based data model to structure data as statements; statements consist of a subject, predicate, and object. The subject is known as a resource, the predicate as a property, and the object as the value. Each statement is called a triple and has each of these three parts (resource, property, value). A slide about graphs showed that nodes represent things, while arcs connect nodes and show the relationships between them. RDF provides simple statements about resources. Subjects and predicates of RDF triples must be URIs; objects can be URIs or literals (raw text used instead of resources in triples).
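The triple model is easy to see in a few lines of code. Below is a minimal sketch using the open-source Python rdflib library; the namespace, resources, and values are invented for illustration and are not from the preconference.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")
    DCT = Namespace("http://purl.org/dc/terms/")
    g = Graph()

    # Subject and predicate are URIs; here the object is a literal.
    g.add((EX.hamlet, DCT.title, Literal("Hamlet")))
    # The object may instead be another resource, linking two nodes in the graph.
    g.add((EX.hamlet, DCT.creator, EX.shakespeare))

    print(g.serialize(format="turtle"))

Each g.add() call records one triple; serializing the graph shows the two statements sharing a subject node.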

Shared vocabularies are used to share and link data. Examples of shared vocabularies are the Dublin Core Metadata Initiative (DCMI) metadata terms, the Virtual International Authority File (VIAF), the Library of Congress (LC) Linked Data Service vocabularies, Friend of a Friend (FOAF), and Schema.org.

3. Semantic Modeling, Domain Ontologies, and RDF

The objective of session 3 was to enable attendees to understand ontologies as semantic models of specific knowledge domains. Attendees were also introduced to the basic building blocks of ontologies such as classes, subclasses, properties, subproperties, domain and range specifications, and the principle of inheritance. An additional goal was for participants to be able to create a beginning ontology using components covered in the preconference.

The concept of knowledge organization (KO) was introduced. Librarians, metadata specialists, taxonomists, ontologists and others bring structure to data, information, and knowledge resources through the creation of knowledge organization systems (KOS), which include metadata schemas, data and database models, controlled vocabularies, classification schemes, etc. KOSs provide controlled vocabularies through means that include lists, authority files, taxonomies, thesauri, and topic maps.

Ontologies and controlled vocabularies are not interchangeable. An ontology is “a formal model of the things that exist in a specified knowledge domain and the relationship among those things.” “Things” can be concepts, works, persons, places, objects, events, etc. Ontologies are similar to XML schemas.

There are three core components of an ontology:

  1. classes (which may include subclasses and superclasses): types of things
  2. properties (which may include subproperties and superproperties): provide the predicates in RDF statements and connect or relate resources to each other
  3. instances (also called individuals): specific entities or concepts of interest

The first two components make up the ontology proper; the third component provides the RDF data structured by the ontology.

Examples of ontology statements, also known as triples, were provided, consisting of class definition statements, property definition statements, and individual (instance) statements. Class definition statements might include Parent (isA Class), Mother (isA Class), Mother (subClassOf Parent), and Child (isA Class). Property definition statements could include isMotherOf (isA Property), isMotherOf (domain Mother), and isMotherOf (range Child). Individual/instance statements might include MarialTaylor (isA Mother), AdamJTaylor (isA Child), and MarialTaylor (isMotherOf AdamJTaylor).
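As a rough sketch, the same family example could be expressed in RDFS with rdflib as shown below; this illustrates the three statement types Miller described and is not code from the session.

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/family#")
    g = Graph()

    # Class definition statements: Mother is a subclass of Parent.
    g.add((EX.Parent, RDF.type, RDFS.Class))
    g.add((EX.Mother, RDF.type, RDFS.Class))
    g.add((EX.Mother, RDFS.subClassOf, EX.Parent))
    g.add((EX.Child, RDF.type, RDFS.Class))

    # Property definition statements with a domain and a range.
    g.add((EX.isMotherOf, RDF.type, RDF.Property))
    g.add((EX.isMotherOf, RDFS.domain, EX.Mother))
    g.add((EX.isMotherOf, RDFS.range, EX.Child))

    # Individual/instance statements.
    g.add((EX.MarialTaylor, RDF.type, EX.Mother))
    g.add((EX.AdamJTaylor, RDF.type, EX.Child))
    g.add((EX.MarialTaylor, EX.isMotherOf, EX.AdamJTaylor))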

The concept of a knowledge base was introduced: a knowledge base consists of the ontology proper plus the individuals (RDF instances). RDFS (RDF Schema) is a simple RDF-based language used to encode ontologies, which must be encoded in XML. Key RDFS elements include resource, class, literal, property, type, and domain.

Two of the most widely known ontology editors were discussed: TopBraid Composer (www.topquadrant.com/products/TB_Composer.html) and Protégé (http://protege.stanford.edu).

4. OWL: Web Ontology Language

Session 4 was devoted to OWL. One of the session objectives was to enable participants to understand the kinds of queries that an OWL ontology and knowledge base can answer for end users. OWL has two types of elements: classes and things (individuals). There are also two kinds of properties in OWL: object properties, which relate objects to other objects, and datatype properties, which relate objects to datatype values.
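The property distinction can be sketched in rdflib using OWL terms; the birthYear property and its value are invented examples, not material from the session.

    from rdflib import Graph, Literal, Namespace, OWL, RDF, XSD

    EX = Namespace("http://example.org/family#")
    g = Graph()

    # An object property relates a resource to another resource.
    g.add((EX.isMotherOf, RDF.type, OWL.ObjectProperty))
    # A datatype property relates a resource to a literal value.
    g.add((EX.birthYear, RDF.type, OWL.DatatypeProperty))
    g.add((EX.AdamJTaylor, EX.birthYear, Literal("2005", datatype=XSD.gYear)))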

5. SKOS: Simple Knowledge Organization System

Session 5 was devoted to SKOS. Its core constructs are the concept, the concept scheme, inclusion in a concept scheme, and the top concept. Examples of SKOS implementations include the National Agricultural Library’s Agricultural Thesaurus, VIAF, and LC’s Linked Data Service: Authorities and Vocabularies (both Subjects and Names).
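For readers who want to see those constructs concretely, here is a minimal, invented SKOS fragment in rdflib:

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import SKOS

    EX = Namespace("http://example.org/vocab#")
    g = Graph()

    # A concept scheme, a concept included in it, and a top concept.
    g.add((EX.thesaurus, RDF.type, SKOS.ConceptScheme))
    g.add((EX.agriculture, RDF.type, SKOS.Concept))
    g.add((EX.agriculture, SKOS.inScheme, EX.thesaurus))
    g.add((EX.agriculture, SKOS.topConceptOf, EX.thesaurus))
    g.add((EX.agriculture, SKOS.prefLabel, Literal("Agriculture", lang="en")))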

6. Domain Ontology, Vocabulary, and Implementation

Session 6 was devoted to domain ontology, vocabulary, and implementation examples. The session objective was to look at real-world examples of domain ontologies, models, and linked data such as FOAF, Europeana, VIAF, and LC’s Linked Data Service.

Preconference: Shared Print Monographs: Making It Work

By Susan L. Kendall, Dr. Martin Luther King Jr. Library, San Jose State University

On Thursday, June 27, librarians from all types of libraries, including special libraries and public and private academic institutions, convened for an ALCTS preconference about shared print monographs. Rick Lugg, partner at Sustainable Collection Services (SCS, http://sustainablecollections.com/), facilitated a panel of presenters representing a broad spectrum of experience with implementing shared print collections.

The opening presentation by Rick included a comprehensive review of the history and issues involved in sharing print monographs. He discussed libraries’ problem of growing print collections but diminishing space allocations: academic libraries are experiencing overcrowded stacks yet declining circulation statistics. A traditional approach to deselection of library materials is costly and time consuming. A shared print monograph collection within a consortium offers a possible solution, preserving collections and allowing libraries to withdraw print while establishing criteria that ensure the integrity of the shared collection.

Workshop panelists explored how a consortium can begin and sustain a shared print monograph collection. The presentations ranged from consortia in the initial implementation stage to libraries that have already put a shared print collection into practice.

The California State University (CSU) system is in the initial development stage of implementing a shared print collection, with six designated Southern California CSU libraries serving as beta sites for working out the details of a shared print program. Lessons learned so far include the complexity of communication within and across campuses, the importance of establishing criteria, and the need to preserve all communications, decisions, and criteria.

The Maine Shared Print Strategy (MSPS), a firmly established consortium, reiterated the importance of a strong communications network. MSPS is unique in that its shared print agreements are between public and academic libraries. After developing a statewide strategy for collaboration, MSPS received an IMLS grant to hire staff and cover the costs of starting the shared print program; the grant has allowed MSPS to parse out work to committees and hire contract staff. Sending out documentation before meetings has been crucial, keeping meetings focused and producing agreed action points and goals.

While MSPS exemplifies a paid-staff model for maintaining a shared print program, the CONNECT/NY project has relied on a volunteer model. CONNECT/NY is a consortium of several private colleges in New York, and its members agreed that a college’s decision not to participate in the shared print program would not present a problem. Coordinators were identified to keep the project focused. Once again, documentation for meetings and webinars proved critical to quality communication. The CONNECT/NY team cautioned the audience that a shared print program is a continuing work in progress that demands good communication and documentation.

Documentation has also been important in the success of the Michigan Shared Print Initiative (MI-SPI). The development of a strong Memorandum of Understanding (MOU) guarantees that there is a shared understanding between all participants. The MOU is a record of shared expectations and agreements that help shepherd this ongoing project. MI-SPI pointed out that the Center for Research Libraries is a beneficial resource, with sample MOUs that will help provide the wording necessary in beginning projects.

This ALCTS workshop was a successful introduction to the concept and administration of shared print programs; the on-the-ground experiences and recommendations from the panelists were very useful.

Friday

ALCTS 101

By Deborah Ryszka, University of Delaware

ALCTS 101, an annual event where new and potential members can learn about ALCTS, was held on Friday, June 28, 2013 from 7:30 to 9:30 p.m. in the Hyatt Regency McCormick Place. Librarians just starting out in the profession and library school students joined veteran ALCTS members for a fun-filled evening, organized and hosted by the ALCTS Membership Committee and the ALCTS New Members Interest Group (ANMIG).

The focus of ALCTS 101 is to provide recent and prospective ALCTS members with information about ALCTS, its structure, and what it has to offer its members in terms of professional involvement and continuing education opportunities.

Carolynne Myall, President of ALCTS, and Genevieve Owens, incoming President of ALCTS, opened the evening’s program by welcoming attendees to the event and to ALCTS. They briefly described the important work that happens with ALCTS and why ALCTS is such a dynamic professional organization. Both leaders encouraged attendees to consider becoming involved in ALCTS by volunteering to serve on a committee. Additionally, both described the committee appointment process within ALCTS and the different types of committee appointments, including virtual, available to those who fill out and submit a volunteer form.

The main portion of the evening was a speed-networking event where ALCTS 101 guests went from table-to-table to hear about ALCTS, its five sections, and other timely professional topics, including publishing in ALCTS.

True to a speed-networking format, attendees were encouraged to move from table to table every six or seven minutes, after a cue to change tables was given. Each table was staffed by several knowledgeable ALCTS members who enthusiastically welcomed the opportunity to discuss what ALCTS has to offer as guests visited with them throughout the evening. Because of the intimate nature of the event, veteran ALCTS members encouraged conversation and dialogue and were pleased to answer questions about ALCTS and its activities. Many personalized their tables by bringing candy or handouts, and each table had a bright, decorative sign as well as informative subject-specific handouts highlighting the meetings ALCTS and its sections were sponsoring and hosting throughout the conference in Chicago.

At the end of the evening, prizes were given to attendees who had filled out registration forms as they came into the event. Certificates for ALCTS webinars and ALCTS membership renewals were awarded to seven lucky guests, and library school students in attendance received Starbucks gift cards.

The evening concluded with a short ALCTS New Members Interest Group Business meeting, with several of the ALCTS 101 guests joining the meeting. For the coming year, Erin Boyd and Emily Sanford will serve as co-chairs of the interest group. Deana Groves and Elyssa Sanner have agreed to act as co-chair elects of the group.

After the event, attendees commented that they enjoyed the evening and the chance to interact and network with like-minded colleagues. Those who visited the AS, CaMMS, CMS, CRS, and PARS tables enjoyed the opportunity to have in-depth, personal conversations with section leaders and experts.

Saturday

The Research Footprint: Libraries Tracking and Enhancing Scholarly and Scientific Impact

by Tammy S. Sugarman, Georgia State University Library

As research and scholarship are increasingly produced and shared through non-traditional publishing and distribution mechanisms, scholars are looking for ways to assess the impact of their work that take these new online pathways into account. Seventy-five attendees got up early for this program, held Saturday, June 29, at 8:30 a.m. Four speakers described how they are developing and using alternative metrics to help researchers at their respective institutions assess the impact of research and scholarly activities.

Jason Priem of the University of North Carolina at Chapel Hill discussed how scholarly “conversations,” which are part of the research process and used to be ephemeral, now take shape on the web via blogs, social media, social bookmarking, and reference management systems, among others, and can be captured as they leave footprints (e.g., Twitter conversations following conferences). For these reasons, citations now tell only part of the impact story. The tracks of this “Web-native” research are too useful to ignore: they provide faster evaluation and can show broader impact by taking into account activity outside the narrow world of the academy. Alternative metrics can show the impact of significant but uncited work, as well as impact from non-peer-reviewed sources. The tools for showing research impact with new metrics based on the social web are still being developed, though some already exist (for a good list of altmetrics apps, see http://altmetrics.org/tools/). Priem shared that he had recently gone to Washington, D.C. to speak about altmetrics with NIH funders, which he sees as a positive sign that funders are beginning to recognize that new metrics have a role to play in measuring research impact.

The next two speakers, Cathy Sarli and Kristi Holmes of the Becker Medical Library, Washington University School of Medicine in St. Louis, each described their experiences working with biomedical researchers at their institution to investigate the impact of faculty research beyond publication citations and impact factor. Sarli worked for several months to determine the impact of one researcher’s six highly cited articles, talking with research investigators, healthcare providers, federal agencies, and others to “measure what matters.” The result was the Becker Medical Library Model for Assessment of Research Impact (https://becker.wustl.edu/impact-assessment/model), “a framework for tracking diffusion of research outputs and activities to locate indicators that demonstrate evidence of biomedical research impact.” The model gives examples of indicators of research outputs and activities (for example, a diagnostic tool that identifies a disease, developed as a result of a research study); other indicators could be in the areas of legislation and policy or community benefit. Holmes is focusing on putting the model to work by educating researchers on how to proactively curate their own work and understand the value of the data they produce so they can be strategic about enhancing its impact. Both speakers emphasized that the model is a work in progress and encouraged others to contribute improvements.

The final speaker was Rush Miller, Hillman University Librarian and director of the University Library System at the University of Pittsburgh. Miller spoke about his library’s decision to use the commercial product PlumX (http://www.plumanalytics.com) to measure the research impact of the university’s faculty. The product “harvests and aggregates…data exhaust” produced on the web through social media, captures, and mentions, and then provides impact metrics that complement traditional publication measures such as the h-index. The library is responsible for collecting publication data from faculty (any scholarly output with a standard identifier such as a DOI or ISBN) and entering the information in Pitt’s institutional repository, d-Scholarship@Pitt (http://d-scholarship.pitt.edu). PlumX then harvests the records and builds a profile for each researcher that includes online artifacts beyond what was collected in Pitt’s IR. Thirty-two researchers participated in the pilot project, and feedback was generally positive. Future plans are to roll out the product to all Pitt researchers.

E-book Data Evaluation through the Eyes of an Academic Librarian and a Public Librarian: A Tale of Two Libraries

by Julie Renee Moore, California State University, Fresno

In this well-attended program, presented Saturday, June 29, at 10:30 a.m. and sponsored by the ALCTS Acquisitions Section (AS), Adam Wathen, collection development manager, Johnson County Library, and Ellen Safley, director of libraries, University of Texas at Dallas, offered a dynamic look at e-book data from the public library and academic library perspectives.

Wathen, representing the public library perspective, explained that decisions about e-books, like many other public library collection development tasks and services, are increasingly data driven. He emphasized the importance of interpreting data through a lens that gives relevant context for users; data without the story surrounding it can be misinterpreted or meaningless. Wathen sampled his library’s data and found that the bottom-line cost per circulation over a five-month period was $9.98 for print books versus $13.58 for e-books. He warned that it is not always appropriate to compare print to electronic, because the user base may be different (or unknown) and not every checkout represents actual use: approximately 5 percent of their e-book help calls came from patrons who were able to check an e-book out but could not download it to their e-readers. The statistics are also skewed for various reasons, one being that the e-book collection is more current than the print collection; another is that the highest-circulating print books are children’s books, which are rarely published as e-books. Other e-book considerations Wathen noted include volatility in the e-book marketplace (publishing, distribution, consumption); uneven content; publishers that have not yet entered the e-book market; lack of ownership; and uneven cost models for libraries.

Safley represented the academic library perspective and reported that no one vendor meets all of her library’s needs. The library works with many vendors for subscriptions, patron-driven acquisitions, librarian purchases, databases of historical books, and e-books for e-readers. All titles are cataloged, though she has found that not all cataloging is created equal, and there are ongoing issues with access, ordering, and duplication. Safley suggested looking at usage statistics carefully, as each vendor prepares different statistics; most importantly, consider what the statistics say about your patrons. She has found that e-book usage is huge, and so are e-book turnaways. Statistics can be rendered in many ways: by database; by number of sessions, pages printed, or downloads; by publisher, LC classification, or publication date; and, of course, by cost per use. Safley is making an effort to move everyone to electronic; if a resource is available electronically, that is the preference. Well over half of her library’s collection is now electronic, most of it serials. A QR code to the library catalog on the library’s shelves allows easier access for people with smartphones and also advertises the e-books. Safley emphasized using data to defend electronic resources. She recommends being patron-focused rather than collection-focused and encourages moving library dollars from print to electronic resources.

Tools for Creating and Managing Embedded Metadata

by Stacie Traill, University of Minnesota

“Tools for Creating and Managing Embedded Metadata,” a program sponsored by the ALCTS Metadata Interest Group, was held on Saturday, June 29, 2013, from 10 to 11 a.m. Two presenters, Rachel Jaffe of Binghamton University and Kyle Banerjee of Oregon Health and Science University, shared their experiences working with embedded metadata in collections of digital photographs and images.

Jaffe described Binghamton University’s metadata management workflow for photos and images of campus events and places, focusing on the mapping process for photographer-assigned keywords and descriptive metadata originally created in Photoshop. At the beginning of any new project, the metadata librarian creates a template for that project’s metadata, deciding which fields to retain and how to reformat them. Common reformatting choices include inverting names originally entered in direct order and changing the case of terms originally supplied in all capital letters.

One of the more complex parts of the template development process is the mapping of embedded keywords to Library of Congress Subject Headings (LCSH). Not every keyword is mapped to a subject heading, since some are not helpful to users. One area of potential confusion in the mapping process is ambiguous terms: for example, “memorial” denotes a specific campus location rather than a more generic meaning of the word, and must be mapped appropriately.

Once the mapping is completed, it is saved as a file that a script consults as it processes the embedded metadata. The processed metadata file is then ingested into the Rosetta digital preservation system along with the photos themselves. Once the digital objects and their metadata are in Rosetta, they are published to the discovery system (Primo).
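Jaffe did not share her code, but the mapping step might look something like the Python sketch below; the CSV file name, its column names, and the sample keywords are assumptions made for illustration, not Binghamton’s actual script.

    import csv

    def load_mapping(path):
        """Read photographer-keyword-to-LCSH pairs from a two-column CSV."""
        with open(path, newline="") as f:
            return {row["keyword"].lower(): row["lcsh"] for row in csv.DictReader(f)}

    def map_keywords(keywords, mapping):
        """Return LCSH headings for mapped keywords; unmapped ones are skipped."""
        return [mapping[k.lower()] for k in keywords if k.lower() in mapping]

    # Ambiguous terms such as "memorial" would be mapped during template
    # development to the specific campus place they actually denote.
    mapping = load_mapping("project_mapping.csv")
    print(map_keywords(["HOMECOMING", "memorial"], mapping))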

Kyle Banerjee of Oregon Health and Science University presented on using facial recognition software to create embedded metadata in image files. Banerjee began by outlining why it is helpful to automate the assignment of some types of metadata: creating metadata manually is slow, difficult, error-prone, and incomplete, and it requires highly skilled and knowledgeable staff. Automating the process has several advantages: images may be processed in bulk, and embedding the metadata in image files greatly improves search engine visibility. Banerjee noted that about 70 percent of hits on OHSU’s digital images came from Google, which indexes embedded metadata rather than external metadata (such as HTML header metadata).

Banerjee then offered a detailed discussion of how to accomplish one specific and important metadata task: naming the persons in photographs through facial recognition software. He emphasized that this can be done with minimal technical resources and skills: free software tools and some modest scripting ability are all that is required to make facial recognition and automated name assignment possible.

Banerjee recommends Google’s free Picasa desktop client (http://picasa.google.com/) for facial recognition. Picasa analyzes image files in a designated directory and asks the user to confirm whether similar faces are the same person; its accuracy increases as more photos are matched. Initial processing is very fast, taking only a few minutes for thousands of images. Banerjee then uses ExifTool (http://www.sno.phy.queensu.ca/~phil/exiftool/), a free software tool for reading, writing, and manipulating metadata in audio, video, and image files, to embed the metadata created by Picasa into the original image files. ExifTool is available for all platforms and can handle many different metadata standards and file types. This matters because adequate image metadata requires three different standards: Exif (for technical metadata), IPTC (for many descriptive fields), and XMP (for archive- and library-specific information).
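As a small illustration of the ExifTool half of this workflow, the snippet below appends a confirmed name to two common descriptive tags by shelling out to ExifTool from Python; the tag choices and file path are assumptions, not Banerjee’s actual commands.

    import subprocess

    def embed_person(image_path, name):
        """Append a confirmed name to common keyword tags with ExifTool."""
        subprocess.run(
            ["exiftool",
             "-overwrite_original",       # skip ExifTool's "_original" backup copy
             f"-XMP-dc:Subject+={name}",  # XMP keyword list
             f"-IPTC:Keywords+={name}",   # legacy IPTC keyword list
             image_path],
            check=True)

    embed_person("photos/commencement.jpg", "Jane Doe")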

Banerjee cautioned the audience to test carefully before making any batch metadata modifications and to understand the limits of facial recognition, but strongly encouraged learning to use and manipulate embedded metadata. It is easy to do, reduces potential for error, and increases efficiency in creating and processing image metadata.

Taken together, Jaffe’s and Banerjee’s presentations offered complementary views of how embedded metadata can be used and enhanced to improve discovery of digital image collections, whether through a dedicated library discovery system like Primo or through general web searching via Google. Session attendees also received specific and practical information about software tools, processes, and workflows they might use to work with embedded metadata.

Developing NextGen Leaders in Your Library and the Profession: Grow Your Own

by Betsy Appleton, George Mason University Libraries

The ALCTS Leadership Development Committee sponsored the program “Developing NextGen Leaders in Your Library and the Profession: Grow Your Own” on Saturday, June 29, 2013, from 1 to 2:30 p.m. A panel of speakers consisting of Stanley Wilder, Jenny Emanuel Taylor, and Keri Cascio spoke about Millennials entering professional librarianship, particularly in leadership roles, and discussed how the distinctive characteristics of these librarians might influence succession planning, library leadership, and the future of the profession.

Wilder opened the discussion with an overview of the changing demographics of the profession and the demographic characteristics of Millennials, then shared statistics showing shifts over the past thirty years. In summary, reference is no longer the point of entry into the profession. Functional specialist positions, which have a market outside librarianship, tend to be the positions open now, and they may also be the best compensated. Wilder concluded with career advice for Millennials: take any opportunity to get more education; if you have a job, hold on to it; and if you don’t want to supervise, “find a place to hide.”

Jenny Emanuel Taylor reported preliminary findings of her qualitative research regarding new librarians, including some surprises. Her data showed that the profession has become more female, compared to ALA 1980 demographic studies. In contrast to Wilder’s finding that reference is no longer the entry point to the profession, 60 percent of the new librarians Emanuel Taylor surveyed worked in reference or public services; 30 percent worked in archives; less than 10 percent started their career in metadata and technical services. Not all the data were surprising: these new librarians were drawn to librarianship for a variety of familiar reasons, such as the desire to stay in the academic environment. Sixty percent of new librarians surveyed worked in academic libraries when they were students. She also noted findings that librarianship is viewed as a very socially responsible career choice.

Additionally, there was a recurring thread through the interviews Emanuel Taylor conducted: the new librarians had a lot to say about how they wanted to be managed. Many were expected to be the “technology people” in their libraries, regardless of whether they were prepared for that role. Their definition of “technical savvy” also differed from the understanding of their managers. There was a sense that these varying definitions of technical savvy resulted in more work required of the new librarians compared to their colleagues. While these new librarians feel little sense of entitlement, they do want to have their opinions heard, and want to be part of the decision process.

Emanuel Taylor closed her portion of the presentation with some advice to managers of Millennials: include these new librarians in your decision-making processes. Invite them to be part of project management—not just implementation. Ensure that they have a sense of purpose, since they are very disillusioned by thankless tasks.

Keri Cascio closed the presentation portion of the panel with practical advice for newer librarians, speaking as a mid-career professional and a member of the “bridge generation” between Boomers and Millennials. Millennials who want leadership roles that are not available in their jobs can find such roles in ALA. Additionally, mentoring flows both ways: more experienced professionals can learn tech savvy from newer librarians, and newer librarians can gain insight into the professional culture from their more experienced colleagues.

There are barriers, real and perceived, to taking on association leadership roles. Work-life balance and a shortage of committee appointments are perceived barriers: in reality, the work of the association is available to anyone who wants to do it, and it need not become an undue burden for a professional. The real barrier to association leadership, particularly for new librarians, is the cost of travel and training.

A lively question and answer session concluded the panel, and many managers and Millennials shared their experiences and contributed to the discussion.

Publish With ALCTS!

by Jeanne Drewes, Library of Congress

Approximately 50 people attended the “Publish with ALCTS!” program, held Saturday, June 29, 2013, from 1 to 2:30 p.m. The program was an opportunity to learn about publishing with the Association for Library Collections and Technical Services (ALCTS), which offers a wide variety of publishing options, including its peer-reviewed journal Library Resources and Technical Services, ALCTS Newsletter Online, ALCTS Monographs, and the Sudden Selectors’ Series. A panel of ALCTS publications editors provided potential authors with concrete advice for getting their work published with ALCTS.

Dina Giambi, Publications Committee chair, provided opening remarks. Dina has been active in ALCTS for many years, and she inspired the audience by stressing the importance of sharing experiences across organizations through publication. She spoke about the long-term value of publishing for the library community and the importance to the profession of documenting the here and now for the future.

Library Resources and Technical Services (LRTS) Editor Mary Beth Weber talked about the value of a juried journal in which all submissions are double-blind peer reviewed, often a requirement for publication by tenure-track librarians. She also cited the value of LRTS and its excellent standing as a library publication, and she spoke to the submission and recommendation process and the assistance offered to authors who wish to publish but may need some help or guidance.

ALCTS Newsletter Online Editor Alice Platt spoke about the ease in publishing in the newsletter, especially right after ALA meetings when reports on programs and meetings are welcomed by members. The newsletter is also a venue for topics of human interest that focus on the people in our profession.

Sudden Selectors’ Series Editor Helene Williams presented an overview of her publication. The series has served many librarians new to publishing, who may be paired with a more seasoned author to capture all aspects of selection criteria and ways to gain knowledge about special selection areas.

ALCTS Monographs Editor Jeanne Drewes quieted fears authors might have about being expected to write a monograph-length title; in truth, the most common publication in this area involves multiple authors writing chapters on a unified topic.

z687 Editor Pamela Thomas presented ALCTS’ newest avenue for publication. This online publication gives authors an opportunity to test the publishing waters, inviting a variety of topics and offering a venue for testing ideas that could develop into a more extensive, peer-reviewed article or chapter. z687 is not juried, but it offers new authors a venue for publishing on the processes and design of work in the areas ALCTS covers.

ALCTS Communications Specialist Christine McConnell closed the panel presentations. Christine likened the process of publishing to The Wizard of Oz, but clarified the mysteries with an excellent step-by-step explanation of the editorial production process. She explained the format, style guide requirements, proofing, and revision steps, demystifying the process and making the timeline for publications understandable.

There was time for questions and answers at the end of the session, and would-be authors had the opportunity to speak individually with any of the editors. To find out more about ALCTS publishing opportunities, visit http://www.ala.org/alcts/resources

Leveraging Your Linked Data: How Promotion of Linked Data Gives Small Projects Big Visibility, the Holdings Information Committee Forum

by Selina Lin, University of Iowa Libraries

The 2013 ALA Annual forum sponsored by the CRS Holdings Information Committee took place on June 29 from 3 to 4 p.m.; approximately forty people attended. The two speakers were Philip Evan Schreur of Stanford University Libraries and Richard Wallis of OCLC.

Schreur discussed the Stanford Community Academic Profiles (CAP), a virtual workspace created by Stanford University’s School of Medicine to support collaboration among its faculty, interns, graduate students, and postdocs. It features sections such as a general profile and contact information, a professional overview (administrative appointments, professional education), community and international work, current research interests, and publications. Name headings for individual faculty come from Stanford’s PeopleSoft system, guaranteeing that each name has a unique Stanford ID. Publications linked to profiles are generated from PubMed in an automated feed, so Medical Subject Headings (MeSH) serve as the lingua franca linking to individual members’ research interests.

The result was such a success that Stanford decided to implement it campus-wide by teaming up with the library, which used its Stanford Authority File (SAF) to cover all of Stanford’s faculty, complete with individual profiles and publications. Each authority entry has a URI so it can be used as linked data. There were internal concerns, however, about reconciling names in the SAF with those in the regular authority file and with headings provided by the traditional authority vendor, who supplied headings for journal authors only. An automated process was developed to match the Stanford Authority File against OCLC and the Virtual International Authority File (VIAF), and additional information from the SAF, such as birth dates and publication citations, was added to VIAF. The advantages of this new model for authority creation are threefold: staff can concentrate on authority work that takes more time; headings for journal authors are self-established; and Stanford faculty gain visibility through VIAF.

(The presentation is available at http://www.youtube.com/watch?v=jvCNFXbS0T4&feature=youtu.be)
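Schreur did not detail the matching code, but the flavor of such VIAF reconciliation can be sketched with VIAF’s public AutoSuggest service; the endpoint and the response keys used below reflect the service as publicly documented and may have changed since.

    import json
    from urllib.parse import quote
    from urllib.request import urlopen

    def viaf_candidates(name):
        """Return (display form, VIAF ID) candidates for a personal name."""
        url = "http://viaf.org/viaf/AutoSuggest?query=" + quote(name)
        with urlopen(url) as resp:
            data = json.load(resp)
        return [(hit.get("term"), hit.get("viafid"))
                for hit in data.get("result") or []]

    for term, viaf_id in viaf_candidates("Austen, Jane"):
        print(term, viaf_id)

A production process would of course apply matching rules (dates, affiliations, co-publications) before accepting a candidate.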

Richard Wallis, OCLC’s “Technology Evangelist,” used contrasting slides of past (card catalog) and present (web catalog) to illustrate how times and technology have changed and how we can reach our users today. Statistics show that the majority of our users turn to search engines to seek information, but our data is not accessible that way, because Google and other search engines do not understand our cataloging standards (MARC, ISBD, OAI-PMH, RDA, Z39.50, ONIX, etc.). That is where OCLC’s WorldCat Linked Data can help. Begun in June 2012, the experiment uses Schema.org (http://www.schema.org) and embedded RDFa (RDF in attributes) to provide links to Dewey, LCSH, LCNAF, DOI, VIAF, and FAST. Wallis demonstrated how linked data works with a real WorldCat example: each piece of linked data is represented by a URI distributed across the web. Schema.org, though not good enough for library data exchange, is almost good enough for sharing library data with the world. Additionally, LC’s BIBFRAME initiative is based on linked data and conceived as a web of data: “It is the foundation of the future of bibliographic description that happens on, in, and as part of the web and the networked world we live in.” Summing up, Wallis declared: “We are moving from cataloguing to catalinking.”

The presentation can be viewed on YouTube at http://www.youtube.com/watch?v=V8y3bwOFz6k
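In the spirit of the WorldCat experiment Wallis described, a book description using Schema.org terms with outbound links might be modeled as below; the record URI and identifiers are placeholders, not real WorldCat, VIAF, or FAST values.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    SCHEMA = Namespace("http://schema.org/")
    g = Graph()
    book = URIRef("http://example.org/book/1")  # stand-in for a record URI

    g.add((book, RDF.type, SCHEMA.Book))
    g.add((book, SCHEMA.name, Literal("An Example Title")))
    # Links into shared vocabularies are what make the description usable
    # by search engines and other consumers (placeholder identifiers).
    g.add((book, SCHEMA.author, URIRef("http://viaf.org/viaf/0000000")))
    g.add((book, SCHEMA.about, URIRef("http://id.worldcat.org/fast/0000000")))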

Multiple Identities: Managing Authorities in Repositories and Digital Collections

by Jeremy Myntti, University of Utah

The program “Multiple Identities: Managing Authorities in Repositories and Digital Collections” was held on Saturday, June 29, 2013 from 4:30 to 5:30 p.m. This program discussed two examples of how authority control is being used to maintain the forms of names in an institutional repository and MeSH subject terms in a digital collection.

David Palmer, associate university librarian for digital strategies and technical services at the University of Hong Kong, shared his experience rectifying names in the HKU Scholars Hub institutional repository. The challenge is that the repository generally contains many forms of a name, including the name in Chinese script, multiple romanizations (such as Wade-Giles and Pinyin), the published form or forms of the name, and the preferred form of the name.

In 2009, a project was undertaken to disambiguate names and link the multiple forms of each name to the correct person, funded by two grants from the HKU University Research Committee (URC) and Knowledge Exchange Office (KEO). An internal authority file was created in DSpace, holding the data that links multiple forms of names together through relational tables. This authority database includes citation and author data from Scopus, Web of Knowledge, Google Scholar, and a few other sources.
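An illustrative sketch of the relational-table idea, using SQLite rather than DSpace and invented names (this is not HKU’s actual schema), shows how several recorded forms can resolve to one identity:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE person (id INTEGER PRIMARY KEY, preferred_name TEXT);
        CREATE TABLE name_form (
            person_id INTEGER REFERENCES person(id),
            form TEXT,
            scheme TEXT  -- e.g. Chinese script, Wade-Giles, Pinyin, published
        );
    """)
    db.execute("INSERT INTO person VALUES (1, 'Chan, Tai Man')")
    db.executemany("INSERT INTO name_form VALUES (?, ?, ?)", [
        (1, "陳大文", "Chinese script"),
        (1, "Ch'en Ta-wen", "Wade-Giles"),
        (1, "Chen Dawen", "Pinyin"),
    ])
    # Every recorded form resolves to the same person record.
    for row in db.execute(
            "SELECT p.preferred_name, n.form FROM person p "
            "JOIN name_form n ON n.person_id = p.id"):
        print(row)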

Some of the outcomes of this project include being able to link advisors with their graduate students, displaying bibliometrics showing citation counts for a given author, linking people with the grants that they have been awarded, and creating visualizations in order to link co-authors together. The next step for this project will be to register the names in ORCID, which will allow them to add authority data for non-HKU authors that have items in the HKU Scholars Hub.

The second presentation was by Donald Brower, Banurekha Lakshminarayanan, and Natalie Meyers from the University of Notre Dame. They discussed the VECNet Digital Library project, a digital repository containing information and solutions for eradicating malaria, a preventable yet deadly disease that takes the life of a child every minute. To provide a method for browsing and searching items in the repository, the National Library of Medicine’s Medical Subject Headings (MeSH) are used to create a browsing tree as well as facets for identifying the information a patron needs. Sixteen MeSH trees are being used, with some descriptors assigned to more than one tree.

In addition to providing the MeSH terms for browsing, the team is incorporating synonyms and broader terms from the authority records to provide better access through cross references. These variant forms will help the collection’s diverse group of users better identify the information they seek.
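The browsing idea can be sketched in a few lines of Python; the tree numbers and synonyms below are examples for illustration, not an extract of actual MeSH data.

    # Illustrative data only: one descriptor can sit in more than one tree.
    mesh_trees = {
        "Malaria": ["C03.752.530", "C23.888.942"],
    }
    synonyms = {"paludism": "Malaria", "marsh fever": "Malaria"}

    def resolve(term):
        """Follow cross references, then return the descriptor and its trees."""
        preferred = synonyms.get(term.lower(), term)
        return preferred, mesh_trees.get(preferred, [])

    print(resolve("paludism"))  # the cross reference leads to the Malaria descriptor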

Future plans for this project include incorporating ORCID IDs for authors, creating a cartographic user interface to aid in the discoverability of items related to a particular geographic place, providing the data for export as RDF, and developing an auto-ingestion/auto-cataloging tool for adding new items to VECNet. The software for this project is set to be rolled out for users in September 2013.

Second Annual ALCTS Affiliates Showcase

by Shannon Tennant, Elon University, and Arthur F. Miller, Princeton University Library

The Affiliates Showcase, held Saturday, June 29 from 4:30 to 5:30 p.m., gives members of ALCTS-affiliated state and regional technical services groups a chance to present their local programming at ALA. This year’s presenters were Rebecca L. Mugridge, formerly of Penn State, and Patrick Roth and Doug Way of Grand Valley State University in Michigan. More than thirty-five attendees enjoyed this program, overseen by Elaine Franco.

Tech Services Assessment in Pennsylvania Academic Libraries

Rebecca raised the issue that while assessment is increasingly important to academic libraries, technical services departments are often left out. Technical services departments have an abundance of quantitative data, but qualitative data and anecdotal evidence are also useful. Rebecca compiled a list of the academic libraries in Pennsylvania suitable to be surveyed. She asked twelve questions about whether these libraries conducted assessment, how they did it, how they shared the results, and what actions were taken as a result of the assessment. Sixty-three libraries out of 120 responded. Most conducted some kind of assessment of technical services, ranging from gathering statistical and usage data to customer service surveys, focus groups, and benchmarking with other libraries. The goal of assessment was usually to improve efficiency and services, and these were the most common outcomes. Some libraries also were able to implement staff reallocation, change vendors, identify training needs, and offer new services as a result of their assessment. Results were usually reported by means of an annual report or other report to the library administration. Rebecca argued that technical services deserves the same attention that libraries give other departments and that publicly shared assessments can help technical services departments develop stories that will demonstrate our value to the administration, who may not understand what we do.

Anatomy of a Rules-based Weeding Project

Co-author Julie Garrison could not be present. Like many libraries, the Grand Valley library had too many books for its space. The weeding literature indicates that only a small percentage of a collection supports the majority of its use, and thanks to technologies such as digitization, purchase-on-demand, and interlibrary loan, ownership of any one title by multiple libraries is no longer critical. Since Grand Valley was building a new library, it decided to close its offsite storage facility, which held volumes with a 1 percent circulation rate. Before the books were moved, however, an aggressive weeding project was necessary. Typical weeding procedures would have been too expensive: estimated staff time alone was 240 work days. Instead, the librarians worked with Sustainable Collection Services to compile a list of weeding candidates using criteria such as use, age, positive industry reviews, and holdings in nearby libraries. When the list was complete, it was reviewed by the liaison (subject) librarians, and any selected book to be retained in the collection had to be justified; for example, the book was a classic, by a major author, or part of a set or series. Rather than resisting, the liaisons proved even more aggressive and added 9,000 titles to the deaccessioning list. Once the list was complete, 86 percent of the proposed titles were deleted using efficient batch processes in Innovative Interfaces’ Millennium system and OCLC’s WorldCat. The books were physically discarded by one temporary employee and specially trained student workers: more than 33,000 items, all without technical services staff touching a single book! To build on this work, an ongoing weeding program using the same criteria was set up, and joining the Michigan Shared Print Initiative expanded the criteria post-project.

Sunday

RDA Vendor Update Forum

by Rebecca Nous, University at Albany, State University of New York

The RDA Vendor Update Forum was held on Sunday, June 30 from 8:30 to 10:30 am. Representatives from five vendors (Innovative, The Library Corporation, Ex Libris, VTLS, and Follett) spoke about their products’ current support of RDA, as well as what they are currently working on regarding RDA compliance and plans for the future.

Georgia Fujikawa of Innovative said the company currently offers a free RDA service commitment: loading new fields and adjusting tables as libraries request them, validating RDA data, displaying RDA data in the OPAC and discovery layer, and supporting future development of the RDA standards. Innovative now supports the 040 $e that denotes an RDA record, as well as the 33X fields’ $a, $b, and $2 (which may be fully or partially visible to the public, or suppressed completely from public display). Innovative is currently working on RDA Toolkit integration within Sierra with context-sensitive help features, RDA-driven material type icons using data from the 33X fields, and an editor for 33X fields using drop-down lists and controlled vocabulary.
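For readers curious how the 040 $e marker works in practice, the sketch below uses the open-source pymarc library (not Innovative’s software) to spot RDA records in a file of MARC records and print their 33X values; the file name is an assumption.

    from pymarc import MARCReader

    with open("records.mrc", "rb") as fh:
        for record in MARCReader(fh):
            for f040 in record.get_fields("040"):
                if "rda" in f040.get_subfields("e"):
                    # 336/337/338 carry content, media, and carrier types.
                    for field in record.get_fields("336", "337", "338"):
                        print(field.tag, field.get_subfields("a", "b", "2"))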

Heather Powers of The Library Corporation (TLC) said that TLC’s products are currently compliant with all RDA standards, integrate context-specific help features, and support authority control and relator terms. TLC is testing a MARC record conversion product to transform existing records into RDA-compliant records while avoiding a mix of AACR2, hybrid, and RDA records in the catalog. When the prototype is complete, TLC will seek early adopters and will also begin testing a more FRBR-ized display and additional facets in its PAC.

Ex Libris’ Mike Dicus indicated that their products will continue to support a hybrid record environment into the future. Aleph, Voyager, and Alma all support creating RDA records as well as importing them from other sources, and Primo now provides more granular facets based on FRBR. Ex Libris is developing a link to the RDA Toolkit for Aleph, Voyager, and Alma, and expects to have context-sensitive help integrated within the next twelve months.

Robert Pillow of VTLS described several scenarios for libraries’ transitions to RDA. VTLS is providing an RDA sandbox through 2013, allowing librarians to test and practice with RDA. The company offers a record conversion service and uses fixed fields to create material type icons in the OPAC. Pillow suggested looking at the Jewish Public Library in Montreal for an example of an RDA-compliant VTLS OPAC display (suggested searches: Fifty Shades of Grey; Songs from the Road).

Follett’s Jim Bourassa treated the audience to five simple words: “Destiny 11 is RDA compliant.” Follett also offers a record conversion program that lets librarians select which records to convert, and Alliance Plus allows librarians to choose which record format they prefer to import.

Continuing Resources Standards Committee Forum

by Natalie Sommerville, Duke University Libraries

The Continuing Resources Standards Committee Forum took place on Sunday, June 30, 2013 from 10:30 to 11:30 a.m. and welcomed nine attendees. The Forum featured an update from the U.S. ISSN Center and a description of the Journal Article Versions (JAV) recommendations that the National Information Standards Organization (NISO) published in 2008.

Naomi Kietzke Young, Principal Serials Cataloger, University of Florida, George A. Smathers Libraries, provided an update on behalf of Regina Romano Reynolds, ISSN Coordinator at the Library of Congress U.S. ISSN Center. A recent initiative at the Center has been batch assignment of ISSNs; once this semi-automated process is in place, the Center will no longer have to assign ISSNs one at a time. The Center has also been working to ensure that changes associated with RDA do not cause new ISSNs to be assigned to earlier and later titles.

The U.S. ISSN Center has recently faced a sudden deluge of requests for ISSNs from predatory journal publishers. These publishers bill their journals as open access titles, but they do not have credible editorial boards; content is sometimes limited to one article per issue, and at other times these publishers take content from credible journals to produce a full issue. Jeffrey Beall’s Scholarly Open Access site provides information about these predatory practices as well as lists of predatory publishers and journal names, and Naomi exhorted attendees to work with faculty at their institutions to make sure they are aware of it. Due to the high volume of requests from predatory publishers, the U.S. ISSN Center now withholds ISSNs from these publishers until they show a published issue containing more than one article. This is the first time in its history that the U.S. ISSN Center has denied requests for ISSNs.

Nettie Lagace, Associate Director for Programs at NISO, discussed Journal Article Versions (JAV): Recommendations of the NISO/ALPSP JAV Technical Working Group, published in April 2008 in partnership with the Association of Learned and Professional Society Publishers. The JAV recommendations provide a simple method for describing versions of online journal articles at all stages of their lifecycle, from pre-publication to the final published version. Using the JAV terms clarifies the relationships among different versions of an article, allowing institutional repositories to provide a complete record of faculty members’ work and libraries to provide access to the version that best meets users’ needs.

A major component of the JAV recommendations is the Recommended Terms and Definitions for Journal Article Versions:

  • Author’s Original (AO)
  • Submitted Manuscript Under Review (SMUR)
  • Accepted Manuscript (AM)
  • Proof (P)
  • Version of Record (VoR)
  • Corrected Version of Record (CVoR)
  • Enhanced Version of Record (EVoR)

Currently, a group is working on an addendum to the JAV recommendations to provide a more specific scope and definition for the term Proof. Additionally, this group has been charged with revisiting the idea of incorporating the JAV terms into a standard metadata framework. If the recommendations in the addendum are approved, they will be incorporated into the published JAV recommendations.

Perspectives on Demand-Driven Acquisitions in a Consortial Environment

by Josephine Crawford, Kansas State Libraries

Approximately seventy librarians attended this ALCTS Continuing Resources Section program, held Sunday, June 30, from 10:30 to 11:30 a.m. in Chicago, to learn more about the advantages of demand-driven acquisitions (DDA) in a consortial environment. The three speakers provided background and insights on somewhat different DDA programs while bringing to light a few common trends and issues of concern.

OhioLINK is a large consortium serving ninety member libraries with 600,000 users and, at this point, 81,000 e-books. According to Dan Gottlieb, Associate Dean for Collections & Technical Services, University of Cincinnati, Ohio library collections have been considered a shared state resource for more than twenty years, with a mature program of patron-initiated circulation and delivery five days a week.

Dan described some early forays into e-book acquisitions, including rental programs from Safari, direct purchases from Oxford and Springer, and an important experiment with NetLibrary. Eventually the NetLibrary investment resulted in the acquisition of 15,000 e-books, and the collection was taken over by EBSCO.

Over time, concerns arose, including price trigger points that were set too low (due to lack of experience with the new acquisition method) coupled with a high level of print duplication, while some libraries worked independently with various content providers. In 2011, a new pilot combining community funds into a shared funding pool was organized with YBP Library Services on the ebrary platform.

Karen Wilhoit, Associate University Librarian for Collections, Wright State University, outlined the implementation steps of the YBP pilot. Three publishers (Ashgate, Rowman & Littlefield, and Cambridge) were selected because Ohio libraries acquired heavily from them and because the publishers had a track record of offering online access concurrently with print. Libraries with pre-existing arrangements decided independently whether to set the participating publishers to slip or to block.

From project start to roll-out, Karen reported, the process took a full year, longer than had been anticipated. She found that publishers want a certain level of assured sales to justify supporting the DDA business model. Although a formal evaluation has not yet been completed, she did mention that titles purchased from Rowman & Littlefield were seeing at least one use.

Michael Levine-Clark, Associate Dean for Scholarly Communication and Collections Services, Penrose Library, University of Denver, reported on the experience of the Colorado Alliance of Research Libraries (CARL). Like OhioLink, CARL saw advantages in a shared purchase program, whether spread across the whole consortium or clustered in a subset of interested libraries. In May 2012, CARL initiated a DDA pilot with YBP, using both ebrary and EBL, beginning with 2012 imprints from specific publishers, an “imperfect mix” due to publisher variations.

Using past purchasing data provided by YBP, the project methodology examines cost and usage and determines whether more publishers should be added over time. Of the twelve CARL libraries, nine opted to participate up front, while three opted out.

Michael came into this project supporting DDA for a single library but was not convinced that the effort would show clear benefits at the consortium level. He discussed the contract negotiation strategy used to set the short-term loan fees and the point at which a purchase is triggered. Each institution decided through its own profiles whether to block the print book; Michael noted that some libraries needed to guard against duplication as funds were redirected from print to electronic, whereas other libraries saw duplication as beneficial to users.

Michael determined that money was saved in the first year alone, and is confident that the cost savings will improve over time as usage mounts. He created formal “what if” analyses so that each participating library could compare actual costs to the estimated cost if they had acted independently. He is attempting to tailor his “what if” scenarios to implementation decisions such as applying print dollars to e-books or accounting for life-cycle savings when moving from print to electronic.

A member of the audience asked about usage patterns. Michael noted that he often sees a cluster of use with e-books over a short period of time, unlike print books where many are used just once, if that.

The one-hour session proved to be valuable for academic librarians contemplating a DDA program, whether joining with a consortium, or going it alone.

Staff Retooling: Adapting to Change in Technical Services

by Valentine K. Muyumba, Indiana State University

Approximately 168 people attended this program, sponsored by the Acquisitions Section and held Sunday, June 30, from 10:30 to 11:30 a.m. Anne C. Elguindi, associate director, Virtual Library of Virginia, and Kari Schmidt, electronic resources librarian and acting co-director of Information Delivery Services at American University Library, presented together; Jack Montgomery, professor and coordinator of Collection Services, Western Kentucky University Libraries, followed with his own presentation.

Elguindi and Schmidt described ten strategies of change management in technical services:

  1. Document, document, document: one can never document enough. Keep responsibilities clear, hold people accountable, and, when in doubt, revisit the documentation.
  2. Make strategic compromises: remember that “your people are your best resource.” Keep discussions friendly, positive, and constructive; promote change; and pick your battles wisely.
  3. Communicate with library administration in both directions: keep staff aware of organization-wide changes, give context, and be direct about the vision.
  4. Give everyone a voice in the changes being made, look for leadership at all levels among the people involved, and encourage naysayers to solve problems.
  5. Create capacity with an open view: be creative in redistributing work and consider blended positions, such as an e-resources cataloger whose work includes reserves.
  6. View change as a multi-year challenge and build it into the workflow.
  7. Acknowledge the burden and value of legacy print work: move print to offsite storage where possible, enable the creation of a research commons, and plan for an overall decrease in day-to-day print work while systematically integrating large legacy print collection projects.
  8. Visualize workflows: periodically draw out and review workflows with staff to highlight interdependencies; this helps avoid redundancies and manage workflows.
  9. Inter-unit collaboration is critical.
  10. Promote a project management culture by prioritizing projects with inter-unit dependencies and assigning roles and duties among units so that staff must work collaboratively.

Jack Montgomery’s presentation focused on “managing the transition and giving insights on managing external and internal change in Technical Services.” He presented two definitions of change:

  • Change: an external event or series of events that impacts us in some manner.
  • Transition, per William Bridges: “When we talk about change, we naturally focus on the outcome that the change will produce”; “Transition is different. The starting point for transition is not the outcome, but the ending that you have to make to leave the old situation behind.”

Montgomery went on to say that change, however well orchestrated, triggers change in other places. Our personal reactions to change are established in childhood and brought to work along with our lunch. They can produce:

  • Loss of a sense of identity
  • Feelings of disorientation and emotional shock
  • Heightened anxiety and self-doubt
  • Real grief for that which is past or “The Halo effect”
  • Denial behaviors, which create resistance

One has to pay attention to the “passive resistance” as well:

  • I just say yes and then I don’t do it
  • Feigned ignorance, “I just don’t understand” or manipulative avoidance
  • Procrastination beyond the normal…
  • Withholding information, and support
  • Standing by and allowing the effort to fail
  • Sabotage and malicious compliance

Planning for change/transition is critical. We need to:

  • Look at your organizational cultures and be honest
  • Look at and chart your people individually and their position within the formal/informal social groups
  • Ask yourself: What actually is going to change? What are the possible side effects or secondary reactions to this change? Who is going to gain and, more importantly, who is going to lose or have to give up something? What is over for everyone?

Montgomery also talked about the role of the manager during the transition: leadership and building confidence in others. Leadership also works toward building a sense of community and “We-Ness”, and creating a climate of values, such as dignity, civility, and the belief in the importance of one’s work.

Managing Projects: From Ideas to Reality

by Ginger Williams, Wichita State University

Jointly sponsored by ACRL and the ALCTS Acquisitions Section, the program was held Sunday, June 30, from 2 to 2:30 p.m. Robin Buser, supervisor of technical services at Columbus State Community College, introduced principles of project management that she learned while becoming a certified project manager. She explained that a project has a defined scope that is constrained by time limits, cost limits, and quality expectations. Project management focuses on managing change, so an ongoing activity is a process, not a project. Robin introduced the five stages of project management: initiate, plan, execute, control, and close. She emphasized that the initiation stage, where the scope of the project is defined, is critical to success. She also distinguished between traditional project management, where a detailed project plan is established early, and agile project management, commonly used in IT settings, where a basic plan is established, prototypes are developed, and the plan is updated based on the prototypes. Robin recommended A Guide to the Project Management Body of Knowledge (PMBOK Guide) as the AACR2 of project managers.

Boaz Nadav-Manes, director of acquisitions and automated technical services at Cornell University, discussed managing the people involved in a project. He suggested beginning by talking with others about project ideas, to be sure that the project is needed and does not duplicate existing tools, before proceeding with a project proposal. He identified three major roles: client, project manager, and implementers, pointing out that all of these groups need a clear understanding of project goals. Since most projects require collaboration, it is critical that everyone understands how their work contributes to the goal and fits into the entire project, to minimize delays. He also pointed out that the project manager’s major role is to communicate and to ensure that everyone is working on high-priority parts of the project, so that even if the project sponsor decides to end the project early, the group will have achieved the most important parts of the task.

Diane Marshbank, acquisitions director of Chicago Public Library, discussed the initial stages of CPL’s patron-driven acquisitions (PDA) pilot project. CPL’s project began when they were invited to apply for a grant to pilot PDA in a public library. During the initiation phase, they identified stakeholders, clarified parameters with the grantors, and defined the scope of the project: getting print books into patrons’ hands as quickly as possible, with the entire process seamless for patrons. During the planning stage, Marshbank used flowcharts extensively to diagram how existing processes could be modified to incorporate PDA; she pointed out that flowcharts are a great tool for if-then process planning. CPL is currently in the execution stage of the project, with the grant application submitted.

The audience of about seventy-five people had several questions, most focusing on tools for project management and ways to learn project management skills. The presenters recommended a variety of tools, ranging from simple and free to complex and expensive. Gantt charts are useful for outlining tasks and responsibilities; Buser mentioned Tom’s Planner, www.tomsplanner.com, as an easy and free or inexpensive option. Nadav-Manes mentioned that Excel spreadsheets can be set up in Google Docs for everyone to access as a free alternative, but emphasized that the project manager will still need to talk with everyone regularly to keep the project on track. BaseCamp, www.basecamp.com, is online project management software that allows an owner to be assigned to each task; as each task is completed, the owner of the next task is notified. BaseCamp offers free trials. The presenters also suggested searching for “project chart templates” or “work breakdown structure” to find additional tools and examples. Buser recommended acquiring the PMBOK Guide from the Project Management Institute to learn about project management and to keep as a reference; she also suggested that textbooks used in project management degree programs could be useful, although they focus on business rather than library settings.

RDA Vendor Update Forum

by Rebecca Nous, University at Albany, State University of New York

The RDA Vendor Update Forum was held on Sunday, June 30, from 8:30 to 10:30 a.m. Representatives from five vendors (Innovative, The Library Corporation, Ex Libris, VTLS, and Follett) spoke about their products’ current support of RDA, as well as what they are currently working on regarding RDA compliance and their plans for the future.

Georgia Fujikawa of Innovative indicated that they currently offer a free RDA service commitment that allows libraries to request loads of new fields and table adjustments as needed, validates RDA data, displays RDA data in the OPAC and discovery layer, and supports future developments of the RDA standard. Innovative now supports the 040 $e to denote an RDA record, as well as the 33X $a, $b, and $2 (which may be fully or partially visible to the public, or suppressed completely from public displays). Innovative is currently working on RDA Toolkit integration within Sierra with context-sensitive help features, RDA-driven material type icons using data from the 33X fields, and an editor for the 33X fields using drop-down lists and controlled vocabulary.

Heather Powers of The Library Corporation (TLC) said that their products currently comply with all RDA standards, integrate context-specific help features, and support authority control and relator terms. TLC is currently testing a MARC record conversion product to facilitate transforming existing records into RDA-compliant records while avoiding a mix of AACR2, hybrid, and RDA records in the catalog. When the prototype for this product is complete, TLC will seek early adopters and will also begin testing a more FRBR-ized display and additional facets in its PAC.

Ex Libris’ Mike Dicus indicated that their products will continue to support a hybrid record environment into the future. ALEPH, Voyager, and Alma all support the creation of RDA records as well as importing them from other sources. Primo now provides more granular facets based on FRBR. Ex Libris is currently developing a link to the RDA Toolkit for ALEPH, Voyager, and Alma, and expects to have context sensitive help integrated within the next twelve months.

Robert Pillow of VTLS described several scenarios for libraries’ transitions to RDA. VTLS is providing an RDA sandbox, available through 2013, that allows librarians to test and practice with RDA. VTLS also offers a record conversion service and uses fixed fields to create material type icons in the OPAC. Pillow suggested looking at the Jewish Public Library in Montreal for an example of an RDA-compliant VTLS OPAC display (suggested searches: Fifty shades of grey; Songs from the road).

Follett’s Jim Bourassa treated the audience to five simple words: “Destiny 11 is RDA compliant.” Follett also offers a record conversion program that allows librarians to select which records to convert, and Alliance Plus allows the librarian to choose which record format they would prefer to import.

The “Twilight” of AACR2 and the “Breaking Dawn” of RDA

by Linda S. Geisler, Library of Congress

The ALCTS Cataloging of Children's Materials Committee (CCM) was pleased to sponsor the program, The Twilight of AACR2 and the Breaking Dawn of RDA, which was held from 3 to 4:30 p.m. on Sunday, June 30. CCM Chair Linda Geisler introduced the program and recognized program organizer Patricia Ratkovich. Approximately 180 people attended.

The program was presented by Barbara Schultz-Jones, director of the school library program in the College of Information, University of North Texas, and RDA implementation committee member; and Richard L. Hasenyager, director of Library Services for the New York City Department of Education and the New York City School Library System.

Participants received handouts containing worksheets for creating two RDA records: one for a board game (Twilight Saga Scene It) and one for an ebook (Twilight, by Stephenie Meyer). As the presenters covered each data element of a record, the audience was encouraged to complete that element on their bibliographic records.

The presenters provided a brief historical overview of RDA. They reviewed terms that by now may have been familiar to the audience, such as core elements, relationship designators, WEMI, and related works. The presenters noted the online RDA Toolkit’s ease of use: it is not organized by type of resource; rather, the instructions apply to all types of resources. They emphasized that much of the data in RDA is not yet fully usable by current library systems, and that the benefits of RDA will become more apparent as library systems increase their capabilities.

The presenters explained how RDA data is encoded in MARC21. In this context, they pointed out the following facts relevant to their audience:

Fictitious authors are now allowed in RDA (under AACR2 rules, they were allowed only as subject headings). Miss Piggy, for example, would be coded as:

100 0 $a Miss Piggy, $e author

Another example is an added access point for the cat, Socks, who was the addressee for the letters in the work, Letters to Socks:

700 0 $a Socks $c (Cat), $d 1989-2009, $e addressee

The General Material Designation (GMD), 245 $h, has been replaced by three new MARC fields: 336 (content type), 337 (media type), and 338 (carrier type). These fields are more specific in describing the physical item and are intended to be machine actionable. The most common 336-338 combination will be for printed books, given as “text” (336), “unmediated” (337), “volume” (338). For an e-book, it is given as “text” (336), “computer” (337), “online resource” (338). A library may not want to display the term “unmediated” to users; the fields are encoded one way, but they do not have to display that way to users.
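As a minimal sketch (the $b codes and $2 source values shown here follow common practice and were not part of the presentation), the fields for a printed book would be encoded as:

336 __ $a text $b txt $2 rdacontent
337 __ $a unmediated $b n $2 rdamedia
338 __ $a volume $b nc $2 rdacarrier

and for an e-book as:

336 __ $a text $b txt $2 rdacontent
337 __ $a computer $b c $2 rdamedia
338 __ $a online resource $b cr $2 rdacarrier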

In RDA, there is no limit to the number of persons, corporate bodies, or families that can be listed in the statement of responsibility. There is also the option to omit names and indicate the number of names omitted: if three names are omitted, the statement in the RDA record reads “[and three others]” (see the example below). The librarian can make this decision based on either the needs of the library or the likelihood that the omitted names will be searched by users.
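For instance, a title statement with three omitted names might look like the following (the title and author here are invented for illustration):

245 10 $a Cataloging for everyone / $c Jane Smith [and three others].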

Relationship designators are used more frequently in RDA. The cataloger records one or more appropriate terms from the list in RDA Appendix I after an authorized access point, to show more specifically the nature of the relationship between the access point and the work. If none of the terms listed in Appendix I is appropriate or sufficiently specific, the cataloger adds a term designating the nature of the relationship as concisely as possible. For example:

700 1_ $a Kadushin, Ilyana, $e narrator

700 1_ $a Hardwicke, Catherine, $e film director

Publication and distribution information is now represented by the 264 field. It takes the information found in the former 260 field (publication, distribution, etc.; imprint) and separates it into production, publication, distribution, and manufacture statements plus copyright date. The field is repeatable; the copyright date for the work is given in a separate 264 field. Latin abbreviations such as “S.l.” and “s.n.” are no longer used when the place or publisher is unknown. If neither a known nor a probable location can be determined, record “[Place of publication not identified]”; if no publisher is named within the resource itself, record “[publisher not identified]”.
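A typical pair of 264 fields might look like the following (the place, publisher, and date are invented for illustration; a second indicator of 1 denotes a publication statement, and 4 a copyright notice date):

264 _1 $a Chicago : $b Example Press, $c 2013.
264 _4 $c ©2013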

There are new MARC21 fields for works and expressions that allow the addition of more specific details to records. These include 046 (special coded dates), 344 (sound characteristics), 345 (projection characteristics), 346 (video characteristics), 347 (digital file characteristics), 380 (form of work), 381 (other distinguishing characteristics), and 385 (audience characteristics); a sample 347 field appears below. Definitions for these fields are available at http://www.loc.gov/marc/bibliographic.
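As a hedged sketch, an e-book’s digital file characteristics might be recorded in a 347 field like this (the values are illustrative, not from the presentation):

347 __ $a text file $b PDF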

An important decision is the type of description the library will make. There are three types of descriptions in RDA: comprehensive, analytical, and hierarchical, defined in RDA 1.5. A comprehensive description describes the resource as a whole; for example, a kit consisting of a digital videodisc, a model, and an instruction booklet would be described by one record. An analytical description describes a part of a larger resource; in the example above, a separate record would be made for the videodisc. A hierarchical description describes a resource consisting of two or more parts, combining a comprehensive description of the whole with analytical descriptions of one or more of the parts.

Several questions were asked at the end of the presentation. One related to recording dimensions in the system of measurement preferred by the agency preparing the description: agencies can now use inches, if they wish, in the extent of item, field 300. The Library of Congress uses inches only for discs and audio carriers; it continues to use centimeters for the size of books. Another question related to providing the extent of item for an ebook: it would be recorded as “online resource,” and the number of pages would not be provided. Lastly, regarding subject instructions in RDA, it was pointed out that chapters for subject access will eventually be issued.

The complete PowerPoint presentation is available on the ALA Connect page for the Cataloging of Children’s Materials Committee at http://connect.ala.org/node/64303.

“Emerging Research in Collection Management and Development,” the Fourth Annual Collections Research Forum

by Heath Martin, University of Kentucky

The Fourth Annual Collections Research Forum, sponsored by the Publications Committee of the ALCTS Collection Management Section (CMS), included two presentations focused on recent research in the area of collection management and development. Approximately forty people attended the forum, held at the McCormick Place Convention Center at 4:30 p.m. on Sunday, June 30.

In the forum’s first presentation, entitled “Video Discoverability,” Jane Burke, vice president for Strategic Initiatives at ProQuest, discussed the company’s recent research in curating institutional video collections. Burke began by contrasting the lack of curation and discoverability of such collections with the increasing volume and complexity of content and the emerging role of institutions as media producers. She identified several problems hindering institutions from taking full advantage of their video content, including unmet user expectations for access, content siloing, insufficient cataloging and indexing, unavailability to discovery services, and rights management and use concerns. In response to these issues, Burke explained, ProQuest developed a research agenda aimed at analyzing the problems and investigating the possibility of providing an affordable service to address them.

The first phase of the research consisted of two primary components: a series of market surveys conducted by professional research firms and a focus group consisting of twelve experienced multimedia librarians who met on multiple occasions between 2011 and 2013. The results of the research confirmed the existence of the problems and suggested a possible market for third-party services to address them.

After forming a pilot group of academic institutions, the company tested the idea of “utilizing automated transcription to create indexing data that would promote discoverability and usability of institutionally created video.” The pilot partner institutions met four times over the period of one year and contributed 520 pieces of video to the test project. The elements of the potential service tested were deposit, digitization, transcript creation, metadata creation and indexing, discovery, and streaming and hosting. With the pilot partners contributing content and ProQuest managing the process and handling much of the indexing and metadata, two outside companies, Crawford Media Services and Ramp, were enlisted to handle the digitization and transcription work, respectively.

In the end, the test service confirmed the value and likely feasibility of an end-to-end third-party video curation service, while also revealing challenges associated with the process, such as the difficulty of working with certain types of materials less amenable to automated transcription and the need for manual editing capability at some points in the process.

The second offering of the forum came from two colleagues at the University of Wisconsin-Eau Claire: Stephanie Wical, periodicals and electronic resources librarian, and R. Todd Vandenbark, reference and instruction librarian. Wical and Vandenbark’s presentation, “Building a Stronger Collection: The Art of Combining Citation Studies and Usage Statistics,” examined areas and degrees of overlap between usage statistics analysis and citation studies, and how using the two approaches in conjunction can help libraries ensure that collections adequately support faculty research interests. Their research focused on three guiding questions: whether faculty members at their institution publish in the same journals from which they draw their information, whether faculty publish at similar levels in journals demonstrating high usage, and which resources are most suitable to support faculty research.

After situating their own study in relation to past research, Wical and Vandenbark described their methodology, which involved collecting and analyzing data associated with faculty members in the departments of nursing, chemistry, biology, and math. This research resulted in the compilation of two lists, one containing the journals in which the faculty members under study published and the other the journals cited in those faculty members’ own research. This data allowed the researchers to analyze the overlap among the journals faculty cited, the journals in which they published, and the journals included in the library’s collections. Usage statistics were then gathered for the Wiley titles on the lists, revealing a trend of higher use for cited journals and, even more so, for journals in which the faculty had published.

Wical and Vandenbark cited several advantages of their research approach, including strategic and targeted examination of usage statistics, opportunities for meaningful visualization, and a productive interplay among separate types of collections analysis data. Drawbacks cited were the tedious, time-consuming nature of the work, the limited availability of usage statistics, and occasional inadequacies in usage and citation data. As next steps, Wical and Vandenbark hope to apply the same data collection and analysis to other journal packages and full-text databases, include additional departments, and share results indicating strong collections support for faculty interests with key faculty and administrators in the academic units under study.

“Discussion to Plan a Study on the Impacts of Regional Climates on Library Collections,” the PARS Forum

by Walter Cybulski, National Library of Medicine

On Sunday, June 30, from 4:30 to 5:30 p.m., facilitators Bogus and Clareson led a discussion of a potential project that would inform the evaluation of books under consideration for relocation to offsite storage or, if necessary, withdrawal from a collection. Many of the estimated fifty attendees contributed to a lively discussion that touched on a number of ongoing concerns regarding the preservation of printed works. Print collections keep growing, and many institutions face storage space shortages, though in the current economy funding for new construction is unlikely to be readily available. Humidity control poses additional challenges to libraries in the southeastern region, where a new facility would require a sophisticated heating, ventilating, and air conditioning (HVAC) system; such systems tend to be costly to design, operate, and maintain. Several participants commented that existing HVAC systems in structures throughout the Southeast were struggling to control seasonal moisture loads.

As the discussion evolved, it became clear that regional climate was only one of a number of factors to be considered in the evaluation of material to be stored offsite or withdrawn. Librarians seeking to reduce the number of locally held copies will need to find out who else holds identical copies and determine whether those copies are in better condition. This is especially true in cases where digital surrogates are discovered to be of unacceptable quality. Holdings statements may be incomplete, and item-level condition information is scarce. There was general agreement among attendees that items should not be withdrawn from collections without first reviewing detailed edition, pagination, and completeness data for copies held elsewhere. Additional factors include rarity, provenance, the impact of use on copy condition, and the amount of staff time required to complete a thorough assessment. The need to locate and evaluate copies of works still under copyright is probably greater, since publicly available digital surrogates are unlikely to exist for them.

One participant suggested looking at “Identical Books: The Condition of Books in Different Nationally Significant Libraries,” a project undertaken by the British Library and a number of other institutions in 2008 and funded by The Andrew W. Mellon Foundation.

Clareson and Bogus concluded that the next step will be to bring together a smaller group of preservation administrators and other stakeholders, including a statistician, a conservator, and representatives of faculty and library administration, to clarify the main issues and possibly develop a planning grant.

Confessions of a Digital Packrat: ALCTS President's Program with Erin McKean

by Matthew Beacom, Beinecke Rare Book and Manuscript Library, Yale University

In the ALCTS President’s Program at 10:30 Monday morning, July 1, Erin McKean, self-proclaimed dictionary evangelist, editor, author, blogger, lexicographer, and founder of Wordnik.com, delighted an audience of about 150 attendees with her entertaining and thought-provoking talk, “Confessions of a Digital Packrat.” Her talk was word-rich and clever, witty and insightful, intimate and impersonal, amusing and professionally pertinent. In her talk, McKean mined her twin obsessions with words and dresses—she is also the author of a novel, The Secret Lives of Dresses—to create a fun, informative, and provocative presentation on ideas and actions central to the technological and social revolutions we might call big data, linked data, and social media. Without explicitly mentioning any of these things, her examples from the distinct worlds of lexicography and dresses cleverly and subtly illuminated our own concerns with collections, catalogs, access, and use.

  • How do we mine large digital data sets to find the one piece of evidence we need?
  • How do we satisfy our need for completeness, for authoritativeness?
  • How do we deal with—gather, organize, access—so many different digital storehouses available to us but with varying degrees of difficulty?
  • How do we make transformative uses of big data sets, of social media, and linked data to create something new that is useful and valuable?
  • What makes a digital collection attractive to users?

McKean addressed such issues only indirectly, lightly and yet deeply, and she did so as much by her example and attitude as by her topics and references. Her examples came largely from her experiences as a lexicographer and as the founder of Wordnik.com. McKean grounded her work philosophically in Wittgenstein’s proposition that “the meaning of a word is its use in the language.” Thus, she explained that Wordnik.com is a dictionary that works by gathering examples of use—sentences—rather than grinding usage into formal definitions. The example of “fender,” a word meaning, in certain contexts, tiara, carried us through her methods for discovering and gathering examples of use and revealing—not asserting with an expert’s authority—the meanings of words.

McKean riffed on the meaning of packrat. Characteristically, she joked about the distinctions between “collector,” “packrat,” and “hoarder,” declaring that we would all be familiar with the spirit in which one would say, “I’m a collector; you’re a packrat; she’s a hoarder.” Her riff led to an understanding of collection building (what hoarding is called when a library does it) as an act grounded in scarcity and fear: the fear of missing out, the fear of incompleteness, the fear of insufficiency. “Archivists,” she declared, “are hoarders with institutional backing.”

In addition to her work as a lexicographer, as editor of Verbatim: The Language Quarterly, and as author of books such as Weird and Wonderful Words, More Weird and Wonderful Words, and Totally Weird and Wonderful Words, McKean writes about dresses—in her blog, A Dress A Day, and in her newest book, The Hundred Dresses. Her integration of what she does and who she is was best shown by the dress she wore, which she had made herself from a fabric with a pattern of tiny books. She hoards fabric as well as words, and said that there is never enough of either. In her presentation and her self-presentation, McKean embodied a remarkable confluence of characteristics: intellectual rigor, playfulness, wit, passion, a commitment to social collaboration online, innovation, entrepreneurial spirit, panache, and a DIY gusto that left the audience smiling.

“RDA and Serials Cataloging,” the Continuing Resources Cataloging Committee Update Forum

by Shana L. McDanold, Georgetown University

Cecilia Genereaux, current chair of the Continuing Resources Cataloging Committee, welcomed between sixty and seventy attendees to the forum, which was held Monday from 1 to 2:30 p.m.

Before the forum began, updates from the ISSN Center, CONSER, and CC:DA were provided. Regina Romano Reynolds, director of the U.S. ISSN Center and head of the ISSN Section at the Library of Congress, began the program with an update from the ISSN Center. She reported that the NISO Recommended Practice on the Presentation and Identification of E-Journals (PIE-J), published in early April 2013, is available for free from NISO. Reynolds also reported that the ISSN Center has changed its practice for digital reproductions of print journals: all online versions, both born-digital and digitized, will now share a single ISSN for the online version of the journal.

Reynolds spoke about the challenge ISSN centers around the world face in dealing with predatory open access publishers and their ISSN requests, and encouraged academic librarians to educate their faculty and administrations about them. Finally, she highlighted a few current activities and groups of the ISSN network and centers: the ISSN Review Group is reviewing and updating the ISSN Manual and exploring how to harmonize ISSN practices with RDA; the ISSN Network is actively promoting its newsletter and Facebook page as ways to keep abreast of ISSN news; and the ISSN Center is participating in a BIBFRAME subgroup working on modeling serials in BIBFRAME.

Library of Congress CONSER Coordinator Les Hawkins also provided an update. RDA training and the updating of the CONSER Cataloging Manual to RDA guidelines have been the recent focus. CONSER also provided feedback on a German National Library proposal regarding whether the latest issue should be (or can be) used for description on a bibliographic record in lieu of the earliest available issue. Finally, Hawkins highlighted a few remaining RDA-related serials issues, including reproductions, and CONSER’s participation in the PCC groups investigating options for these questions.

Adolfo Tarango provided an update from the Committee on Cataloging: Description and Access (CC:DA). CC:DA has discussed variant titles on access points in authority records; the report from the Task Group on Relationship Designators in RDA Appendix K; the Task Group on Recording Relationships; the Task Group on Machine-Actionable Data; the Task Force on Place Names, charged with simplifying the creation of place names; the recommendations of the American Association of Law Libraries (AALL) for the treatment of treaties under RDA; the need to clarify what should be included in the statement of responsibility for audiovisual materials; the treatment of pseudonyms for corporate bodies; and inconsistencies in how color is indicated in records and in the meaning of the terms being used. Tarango also highlighted ALA Publishing’s announcement that RDA will be published as an annual ebook, as well as the CC:DA website redesign, which includes posting minutes as blog posts.

The Forum

Once the committee updates were complete, the forum program, focused on RDA and serials cataloging, began. Becky Culbertson, electronic resources cataloging librarian at the California Digital Library, presented “Provider Neutral Serials and RDA.” The provider-neutral (p-n) model, which applies to online resources only, calls for including in the shared bibliographic record only the information about the resource that is common to all of its versions. It is a supplementary set of instructions for RDA, indicated in records by the presence of a second |e (always after the RDA designation) in the 040 field: 040 |a xxx |b eng |e rda |e pn |c xxx. Culbertson encouraged attendees to include relationship designators/terms for all relationships (not just the core requirement of the creator role). She highlighted the fast-track process being established through the PCC Standing Committee on Standards for users to add or revise relationship designators/terms in RDA. She also encouraged attendees to clean up existing records to make them RDA and p-n compliant by deleting fields that are not RDA or p-n compliant and adding other appropriate information and RDA fields. Culbertson highlighted some troublesome fields to pay attention to when editing p-n records, illustrated below: use the 050 for LC call numbers with a second indicator of 4, put package names in the |3 of the 856 field only, and watch for issues with the use of some genre headings.
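As a hedged sketch of the first two of those points (the call number, package name, and URL here are invented for illustration), a p-n record might include:

050 _4 |a PN4700 |b .E96
856 40 |3 Sample e-journal package |u http://www.example.com/journal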

Regina Romano Reynolds presented “ISSN and RDA: Bridging the Differences.” Reynolds spoke about how to customize the RDA rules to fit the needs of the ISSN Network by asking for some adaptations of RDA while simultaneously adjusting ISSN practices. The ISSN Network has no common cataloging code, which adds a layer of complication to the customization of the ISSN Manual (the common rule book for the network). She highlighted several areas where RDA and the manual conflict, including format changes, major and minor rules, the use of the key title, series, work/expression differentiation and the role of the ISSN, and the use of 33x fields in ISSN records.

Reynolds then discussed the larger role of the ISSN and the ISSN-L and where it fits into the Functional Requirements for Bibliographic Records (FRBR) model and how the ISSN will fit within the model proposed by the BIBFRAME working group. Finally, she highlighted PRESSoo as a possible ontology to highlight the changes over time and how it could be integrated into BIBFRAME to meet the needs of the ISSN network.

The last presentation, “Impact of RDA on Serials Cataloging,” was given by Shana L. McDanold, head of Metadata Services at Georgetown University. The presentation highlighted and summarized the changes to serials cataloging resulting from the implementation of RDA. McDanold began with a general overview of key points of change, including the new approach RDA takes to describing resources, its focus on relationships, the flexibility it offers, and changes that apply to cataloging of all materials, such as vocabulary changes, the end of abbreviations, and the use of the more granular 33x fields. Changes specific to serials cataloging were highlighted next, focusing on exceptions for transcription and on the fact that differentiation now occurs at the work and expression level rather than at the manifestation (format) level. This shift also affects the rules for when to make a new description (record) for a resource, as well as major and minor title change practices.

McDanold emphasized that hybrid records, which she identified as records containing elements from more than one content standard, will increase during the transition to RDA. She stressed that the goal of editing any record should be to improve user access via the FRBR user tasks, which for the foreseeable future will mean creating more hybrid records, but balanced that by cautioning against removing valid information or editing a record for purely stylistic reasons. Finally, McDanold concluded by mentioning several ongoing issues related to serials cataloging and RDA, including reproductions and the changes in major and minor title change and description practices.

A brief question-and-answer session for all the presenters followed, including discussion of the possibility of a repeatable 022 field with indicators or subfields to show the time frame during which each ISSN applied to the resource. The audience was largely in favor of exploring the possibilities.