ALCTS Program Reports, 2015 Annual Conference

ALCTS sponsored 15 programs during the 2015 ALA Annual Conference. Following are reports from 14 of those programs.

ALCTS 101

Reported by Art Miller

On June 26 the ALCTS New Members Interest Group (ANMIG) and the Association for Library Collections and Technical Services (ALCTS) Membership Committee hosted ALCTS 101, an orientation to the division for prospective and new ALCTS members. As always, the point of this event was to provide a chance for interested people to ask questions or just talk to experienced members of ALCTS in an informal, low-pressure environment: organizational musical chairs without the music or the competition. The current and incoming ALCTS officers, Mary Page, Norm Medeiros, and Vicki Sipe, opened the program with stories about their ALCTS journeys and encouragement to get involved in the association.

Each year there is some fine-tuning. For example, the time at each table was extended this year to ten minutes to allow more interaction among participants. The tradition of having some fun continued with a prize given away between sessions: as people got ready to move to their next table, a name was called from the check-in list and that person's prize was announced. The prizes ranged from Starbucks gift cards and handmade lanyards to an iPad donated by Springer.

Each table had at least two experienced ALCTS members who could talk about one of ten interest areas, ranging from interest groups to publishing to navigating your first Annual Conference. Many of the table leaders were ALCTS section chairs, interest group leaders, committee chairs or members, and others who have served in a wide variety of roles within ALCTS.

There was a good turnout, including about 60 newcomers, and most of the visitors stayed for the entire meeting. The prizes and the brownies may have helped with that. Immediately after the presentation there was a brief business meeting for ANMIG. Approximately eight people stayed to talk briefly to some of the officers, including Hayley Moreno and Alissa Hafele, about connecting with ANMIG and possibly becoming actively involved.

This year there was also another innovation: an After Party. Everyone was invited to gather afterwards at a nearby venue to socialize for a little while. It was very informal, with no agenda and no sign-in list, but around two dozen people took part. I admit I ended up taking a cab home after a little over an hour, leaving behind some of the … more energetic members. (Sounds less envious than younger.)

Overall, this was a successful program that we can look forward to repeating next year.

ALCTS Preservation Showdown

Reported by Anna Neatrour

The ALCTS Preservation Showdown offered a traditional debate-style forum for addressing issues in resource allocation, conservation, and digitization. Moderator Annie Peterson from Tulane University posed the statement “Funding to support rare book and manuscript collections should be entirely dedicated to digitization, not to conservation treatment of original artifacts” to two teams of three librarians, who argued the affirmative and negative sides. The affirmative side was represented by Tom Teper, Ellen Cunningham-Kruppa, and Emily Shaw, who argued that since the world is a dangerous place for cultural artifacts, the best use of scarce funding is digitization, which preserves rare materials for all to use. The negative side was argued by Tom Hyry, Michelle Cloonan, and Melissa Hubard. They made the argument that a digital surrogate is a poor substitute for the personal engagement with history that occurs when researchers are able to interact with well-conserved books and artifacts.

Responses from both sides delved further into the issues, with the affirmative side pointing out that digitization assists with preservation by minimizing the need to handle the physical object. Minimal conservation to stabilize items before or after digitization may be all that is necessary, and providing a digital surrogate is the best way to promote universal access to cultural heritage materials. The negative side countered that “analog is hot” right now, with people clearly wanting a connection to the physical object rather than the digital surrogate.

The debate remained lively when questions and comments were solicited from the audience. Audience members lined up with questions to further explore the issues articulated by the panel of debaters. Specific case studies and large-scale digitization programs like HathiTrust were discussed. The closing statements touched on the need for conservation and digitization to exist together in special collections departments: avoiding extreme funding decisions that undercut the access and new ways of interacting with cultural heritage artifacts that digitization allows, while still spending money on physical conservation of the most unique and precious items in library collections.

The unique debate-style format for this program allowed the panelists to take on pressing funding issues for digitization and preservation of Special Collections in a lively program, which resulted in a great deal of audience engagement and participation.

The program was co-sponsored by ALCTS, ALCTS Preservation and Reformatting Section, and ACRL Rare Books and Manuscripts Section.

Audio Digitization: Starting Out Right

Reported by Lee Sochay, Michigan State University

On Sunday, June 28, 2015, Pamela Vadakan (California Light and Sound), Maureen Russell (UCLA Ethnomusicology Archive), and Howard Besser (founding director of NYU’s MA program in Moving Image Archiving and Preservation) gave a content-rich presentation on planning and implementing audiovisual digitization programs. Best practices, the selection of tools and resources, and legal issues were highlighted in the presentation.

Pamela Vadakan started with the preservation needs assessment. The shape of an audiovisual digitization project depends on physical considerations such as the size of the collection. A rule of thumb: more than 500 items means designing the assessment at the collection level, while fewer than 500 items points to a box-level or item-level inventory. CALIPR is a free tool for collection-level assessment, the Archivists’ Toolkit works at the box level, and the IMAP Cataloging Tutorial works at the item level. For condition assessment, Pamela said to “allow the objects to speak for themselves”: the information around an item offers clues to what is on it. Pamela recommends the seven steps of condition inspection described by the Specs Bros here.

Pamela highlighted best practices for file formats, outsourcing, selecting the best sources, and accessibility. Lastly, to develop a long-term preservation strategy, pay attention to infrastructure, redundancy, file integrity and validation, and means of access.

Maureen Russell described the triage process for item selection. The highest priorities include uniqueness, format degradation, technical obsolescence, and cultural and historical value. Maureen highlighted the importance of subject knowledge by showing examples demonstrating the differences between non-specialist cataloging and descriptions developed by subject specialists. Digitization also requires technical expertise. Maureen uses the rule of 5,000 developed by Dietrich Schuller, head of the Phonogrammarchiv: invest in in-house digitization for 5,000 or more items, and outsource for anything less. Lastly, while covering the benefits of online access, Maureen pointed out that the most important benefit is that the voices of our ancestors will be heard by current and future generations.

Howard talked about the legal issues surrounding audio preservation. Preservation rights are unclear when reformatting analog to digital, and the library exceptions clause treats media differently from text. In addition, determining when copyright for audio expires depends on which law prevails, which is often unclear: audio recorded before 1972 falls under individual state laws, which results in constant litigation.

Underlying rights, the rights to the different pieces of a recording, are a pervasive and widespread issue. Often these underlying rights run out while the item is in our collections, resulting in inaccessibility. To deal with these complexities, Howard recommends minimizing risk rather than looking for absolutes, alleviating issues by obtaining releases or donor agreements, and trying to balance the creator’s needs with those of the users.

Howard’s recommended resources include the Fair Use Checklist, the Checklist on Copies for Private Study, and the Checklist on Copies for Preservation or Replacement.

All three speakers recommend the ARSC Guide to Audio Preservation for anyone interested in audio preservation.

Coming to Terms with the New LC Vocabularies: Genre/Form (Literature, Music, General), Demographic Groups, and Medium of Performance

Reported by Yoko Kudo

The program was held on Monday, June 29, 2015 at 1:00PM, featuring the following three speakers:

  • Janis L. Young (Library of Congress, Policy and Standards Division)
  • Adam Schiff (University of Washington Libraries)
  • Hermine Vermeij (UCLA Cataloging & Metadata Center)

Janis Young outlined the three new faceted vocabularies: Library of Congress Genre/Form Terms for Library and Archival Materials (LCGFT), LC Medium of Performance Thesaurus for Music (LCMPT), and LC Demographic Group Terms (LCDGT). She started by explaining that the Library of Congress Subject Headings (LCSH) encompass various kinds of information other than “true” topics. The problem is that a computer cannot distinguish among topics, genres/forms, and other facets when they carry identical coding, leaving users to sort them out manually. The three new vocabularies were developed to solve this problem: each describes a distinct aspect of a work, allowing for more flexibility and precision in searching.

LCGFT projects for cartography, law, moving images, and non-musical sound recordings are "finished," in the sense that they are implemented in LC’s original cataloging and new proposals are accepted. The LC Policy and Standards Division (PSD) is accepting proposals for LCMPT through the SACO Music Funnel.

Approximately 400 pilot terms were approved for LCDGT. The introduction and guiding principles for the pilot are available here. A manual similar to the Subject Headings Manual is under development and will be available through Cataloger’s Desktop. No immediate impact on LCSH is anticipated from the new vocabularies. All LCSH headings remain valid, but some headings that are never used as “true” topics (e.g., Waltzes) will eventually be canceled. All three vocabularies are available through Classification Web and LC’s Linked Data Service.

Adam Schiff summarized the work of the Subject Analysis Committee (SAC) Subcommittee on Genre/Form Implementation (SGFI). The projects undertaken by SGFI are LCGFT general terms and literature terms, associated MARC development, and policies and best practices. LCGFT general terms include terms that are not specific to a particular discipline (e.g., abridgments), terms for ephemera (e.g., calendars), and others. After negotiation with LC PSD, 190 terms were approved and distributed. Many of the terms have corresponding headings or form subdivisions in LCSH, but may not be identical to them. Ten top terms were established to gather the terms into broad categories. Explicit aspects such as audience and creator characteristics that are often included in LCSH are out of scope for LCGFT (e.g., Children's poetry). Such information can be recorded in the new and revised MARC fields: 385 (audience characteristics), 386 (creator/contributor characteristics), 388 (time period of creation), 046 (special coded dates), and 370 (associated place). LCDGT is being developed as a preferred vocabulary for these fields, but any other authorized vocabularies can be used.

Hermine Vermeij talked about the LCGFT music terms and LCMPT. Most LCGFT terms were taken from LCSH; 567 proposals were approved in February 2015, and there are 15 top terms under "Music." The full hierarchy of the terms is available on the MLA website (http://www.musiclibraryassoc.org/page/cmc_genremediumproj). For LCMPT, over 800 terms were approved, with three top terms (Ensemble, Performer, and Visuals). Some gathering terms were taken from the Sachs-Hornbostel instrument classification system. Best practices for LCGFT are available at http://www.musiclibraryassoc.org/page/cmc_genremediumproj, and those for LCMPT at http://c.ymcdn.com/sites/www.musiclibraryassoc.org/resource/resmgr/BCC_Resources/ProvisionalBestPracticesforU.pdf.
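
As a purely illustrative sketch of where these vocabularies land in MARC bibliographic records (the terms, indicators, and coding below are made-up examples and should be checked against current LC and MLA practice; underscores stand for blank indicators), each term carries a $2 code naming its source vocabulary:

    655 _7 $a Poetry $2 lcgft
    385 __ $a Children $2 lcdgt
    386 __ $a Librarians $2 lcdgt
    382 __ $a piano $2 lcmpt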

Data Clean-Up: Let’s Not Sweep It Under The Rug

Reported by Scott Piepenburg

“Data Clean-Up: Let’s Not Sweep It Under The Rug” was presented during the 2015 ALA Annual Conference on Saturday, June 27, with three speakers:

  • Kyle Banerjee (Digital collections and metadata librarian, Oregon Health and Science University)
  • Amy Rudersdorf (Assistant director for content, Digital Public Library of America (DPLA))
  • Terry Reese (Associate professor, head, digital initiatives, Ohio State University)

Kyle led off the program with a fast discussion of data migration and how it is a way of life for librarians: not just migrating MARC data from one system to another, but moving different types of data across different platforms. The biggest caveat Kyle gave was that under no circumstances should one use Excel, because of its formatting issues; OpenRefine (openrefine.org) is a much better choice.
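
As a small, hedged illustration of the kind of damage Kyle warned about (the file and column names are hypothetical): spreadsheet programs that silently coerce types can strip leading zeros from barcodes or turn long ISBNs into scientific notation, whereas reading the file as plain strings in a script, much as OpenRefine does, keeps the values intact.

    import csv

    # barcodes.csv is a hypothetical export with identifier-like columns.
    # csv.reader returns every cell as a string, so values such as "0023456789"
    # or "9780521865715" survive exactly as written, with no type coercion.
    with open("barcodes.csv", newline="", encoding="utf-8") as fh:
        for barcode, isbn in csv.reader(fh):
            print(repr(barcode), repr(isbn))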

Perhaps one of the biggest concerns is the difference between systems: not just between integrated library systems (ILS), but in migrating different types of content data between incompatible systems. Another challenge is how people have creatively used defined data fields for local purposes, which can have unintended consequences down the road. A warning was given not to fob off data analysis on technical people who do not understand library data, because they will not understand how the data is supposed to perform. During a migration, data should be examined manually; do not rely solely on a machine’s or someone else’s interpretation of that data.

Amy talked about tarnished data; that is, data that is “pretty good” but has some “rough edges.” The most important thing is to find data that “plays well with others.” A challenge for her organization (DPLA) is that they do not house the data, or even the metadata for the data; rather, they are a repository for finding aids to that metadata. Their goal is to ingest the data into their processes, normalize it, and then organize it in a useful form. An example might be data about photographs: they take the information in the data and try to create “normalized” data. For a picture of a church in a given location, they would bounce the location against Bing (and specifically not Google) to get coordinate and/or geospatial data that can be used in a search structure. They then place the normalized data into the DPLA portal so it can play well with the other data there.
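
A rough sketch of that kind of enrichment, not DPLA's actual pipeline: the geopy library's Bing geocoder (an assumption for this example, and it requires a Bing Maps key) can turn a place string from a metadata record into coordinates that are then stored as normalized data.

    from geopy.geocoders import Bing

    # Hypothetical key and place string; error handling and rate limiting omitted.
    geolocator = Bing(api_key="YOUR-BING-MAPS-KEY")
    place = "First Congregational Church, Las Vegas, Nevada"

    location = geolocator.geocode(place)
    if location is not None:
        print(place, "->", location.latitude, location.longitude)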

The biggest challenge they face is poor granularity, which hinders the ability to link ideas and concepts that are similar and related to other sources in the DPLA universe. Amy’s ultimate goal is to create a “Hydra in a box” product that, using a graphical user interface (GUI), would allow local users and others to do much of the initial and routine data transformation.

Terry made it very clear that he was not going to shill the ubiquitous MARCEdit product; rather, he wanted to talk about some of the tools that the software uses “behind the scenes” to automate data normalization and clean-up. While much data clean-up is fairly “routine,” there is also the “last mile problem,” in which the end of a project takes the most time because of the troublesome little pieces of data that remain. Along those lines, he also argued that we are making linked data harder than it needs to be and that we need to lose the “cataloger’s” mind-set.

Terry then went on to describe that, while MARCEdit is built for catalogers, many of the tools out there are being created by programmers who have no idea what the data looks like or how it should function. We need to make the tools more MARC-agnostic and not worry about the different flavors of MARC that exist. In closing, he urged the community to enable software packages to better talk to each other through APIs and not be so proprietary with their functionality.
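
MARCEdit itself is a compiled tool, but the kind of behind-the-scenes batch checking Terry described can be sketched in a few lines of Python with the pymarc library; the file name and the two rules below are hypothetical examples, not his actual tooling.

    from pymarc import MARCReader

    # records.mrc is a hypothetical file of binary MARC records.
    with open("records.mrc", "rb") as fh:
        for record in MARCReader(fh):
            if record is None:  # recent pymarc versions yield None for records they cannot parse
                continue
            # Rule 1: flag records with no 245 $a (title).
            titles = record.get_fields("245")
            if not titles or not titles[0].get_subfields("a"):
                controls = record.get_fields("001")
                print("Missing title:", controls[0].value() if controls else "(no 001)")
            # Rule 2: flag ISBNs that contain embedded spaces.
            for f020 in record.get_fields("020"):
                for isbn in f020.get_subfields("a"):
                    if " " in isbn.strip():
                        print("Suspect ISBN:", isbn)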

The question session brought out two interesting observations. The first is that the developers of our tools are often not working with us, but rather from a data programmer’s mindset. The other is that too much documentation can be a bad thing: if you can’t remember why the data is there, you probably don’t need it. The last statement was certainly food for thought.

Getting Started with Library Linked Open Data: Lessons from UNLV and NCSU

By Jeremy Myntti, University of Utah

This program, held on Saturday, June 28, was delivered to a large crowd in a nearly packed room. Eric Hanson (Electronic Resources Librarian at North Carolina State University Libraries) started off the program by speaking about the NCSU Organization Name Linked Data project. This project consisted of making the organization name authorities maintained at NCSU available as Linked Open Data (LOD). Hanson said that there were five main steps in transforming this data to LOD:

  1. Model – identify terms from existing linked data vocabularies that can be used to describe your source data. Examples include Dublin Core, RDF Schema, SKOS, OWL, and FOAF.
  2. Clean-up – before converting data from one format to another, it is best to clean up the data and make sure that it is in a consistent, structured format.
  3. Augment – links (URIs) to related information contained in other linked data sources should be added to your data in order to make it truly linked data. Some example sources that NCSU has used include VIAF, DBpedia, ISNI, and Freebase.
  4. Convert – there are multiple tools that can be used to transform the data into RDF; NCSU used XSLT. Part of this step also involves deciding which RDF serialization would be best for publishing the project (e.g., N-Triples, RDF/XML, Turtle). A minimal sketch of this step follows the list.
  5. Publish – make the linked data available on the web with an open license for others to re-use. As others use the data, there needs to be a plan in place for updating and maintaining the data.
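
A minimal sketch of the Convert and Publish steps, using Python's rdflib with made-up URIs and labels; the real NCSU data, vocabulary choices, and XSLT pipeline differ.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF, SKOS

    ORG = Namespace("http://example.org/organizations/")  # hypothetical base URI
    g = Graph()

    org = ORG["example-university-libraries"]
    g.add((org, RDF.type, FOAF.Organization))
    g.add((org, SKOS.prefLabel, Literal("Example University. Libraries", lang="en")))
    g.add((org, SKOS.altLabel, Literal("EU Libraries", lang="en")))
    # Augment: link to a matching identity in another dataset (the VIAF ID shown is a placeholder).
    g.add((org, SKOS.exactMatch, URIRef("http://viaf.org/viaf/000000000")))

    # Convert: pick an RDF serialization; Publish: expose the output on the web under an open license.
    print(g.serialize(format="turtle"))
    print(g.serialize(format="nt"))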

Cory Lampert (Head of Digital Collections at the University of Nevada, Las Vegas) then spoke about the Linked Open Data project that they have been working on for a couple of years. A goal of this project was to convert digital collection metadata records into linked data. There were three main phases to this project:

  1. Phase 1: clean data exported from CONTENTdm
  2. Phase 2: import, prepare, reconcile data, generate triples, and export as RDF, all using OpenRefine
  3. Phase 3: Import and publish data in a linked data triple store (Mulgara and Virtuoso)

This project has helped UNLV rethink how they create and maintain their metadata. Major points include using well-established controlled vocabularies that already follow linked data standards, following rigorous rules for data entry, creating local controlled vocabularies, and sharing those vocabularies across all collections.

After the two different projects were discussed, the two speakers showed how they exchanged data to make sure that their processes would work with data from other libraries. The results weren’t perfect when other data was used in either process, but they were able to identify ways to improve both projects by using data that they weren’t familiar with.

This program ended with the speakers giving a few pointers on how to get started with a linked data project at your own library. They recommended starting with a simple project to learn about linked data; the lessons learned will make a second project much easier. They also noted that librarians have been working with authority control for many decades, experience that can play a major role in how linked data is used.

Is Technical Services Dead? Creating our Future

Reported by Shannon Tennant, Elon University

Mary Beth Weber served as moderator at this program that took place on Sunday, June 28, 2015. Four speakers addressed this timely question.

Amy Weiss (Florida State University) began by posing the question, “Will traditional Technical Services survive?” Her first answer was “maybe.” Change is inevitable in all aspects of libraries, not just Technical Services; the reference desk, for example, has been transformed. Weiss’s second (humorous) answer was “kinda.” Traditional technical services tasks will continue to be performed, just not in the same ways. We will still be concerned with the procurement of materials, creating access points, preparing materials for use, and getting rid of outdated and damaged materials. Weiss then examined trends in traditional technical services. For example, serials have survived the transition from print to electronic format, and their management has only increased in complexity. E-resources departments perform the same basic tasks as traditional Technical Services departments do, and so utilize the same skills. Weiss concluded that though many things about technical services have changed, our values and the need for our skills remain constant.

Julie Moore (CSU-Fresno) spoke next. Her talk was punningly titled “Metadata, MARC, and Mo(o)re.” No one who attended this program will be able to forget the description (and illustrations) of how to catalog pig lungs! Moore focused on the future of cataloging. MARC (Machine Readable Cataloging) will continue to matter as libraries implement RDA (Resource Description and Access). RDA is a content standard, not an encoding or display standard, and it will not be a solution to all of cataloging’s issues. Moore reminded the audience that there is nothing fast or easy about recording human endeavor, which is what cataloging does. Though there are calls to abandon MARC and other formats, Moore asserted that consistency will continue to be an important factor. Moore concluded it is too early to tell whether the BIBFRAME (Bibliographic Framework) data model will prove viable. But whatever the future brings, we will continue to rely on catalogers’ judgment and expertise.

Elyssa Gould (University of Michigan Law Library) spoke about “Skills for the Future of Technical Services.” Gould suggested we learn from past transitions, such as the switch from print to online catalogs in the 1980s, which were driven by technical services’ needs. Gould expected the definition of technical services to expand beyond its current meaning to encompass other kinds of metadata creation. Staff will need to develop new skills. Gould suggested looking at job postings to determine which skills are in demand. She also recommended studying our own institutions and trying to help with existing projects. Acquiring new skills means using creativity, collaboration, initiative, communication, and time management, as well as a commitment to continuous learning. Technical services staff must continue the conversation by listening to each other, paying attention to written and spoken scholarship in our field, and teaching the future of technical services to our colleagues.

The final speaker was Erin Boyd (Irving Public Library). Boyd spoke about the need for technical services advocacy in our libraries. Technical services statistics do not yield the same impact as reference statistics, and our specialized processes are not clear to those in other departments. It is not enough to tell people what we do; we must communicate how it impacts the success of other activities in the library. For example, authority control seems unnecessary until you show that patrons cannot find materials without correct access points. Boyd recommended formulating an “elevator speech” that summarizes your points in a brief, memorable way. She suggested several ways to advocate in the library, including serving on public services committees, cross-training, getting involved in professional organizations, encouraging technical services education in library schools, and developing relationships with vendors. She urged technical services librarians to think of ourselves as library ambassadors.

Leading the Charge: Practical Management Tools and Tips for New Technical Services Managers

Reported by Wendy West, University at Albany, State University of New York

There were four speakers for this program, held Saturday, June 27. The program was a series of four lightning talks discussing topics such as the impact of management, modeling good relationships, and building relationships with staff and across the organization. The presenters offered a variety of ways to build rapport and gain your staff's trust, assess current procedures and workflows, and introduce staff to change in a positive light.

Tricia Mackenzie (George Mason University) spoke about the RDA staff training conducted at George Mason University Libraries. Resource Description & Metadata Services (RDMS), part of the Technical Services Group, developed a timeline for training that began in the summer of 2011. Planning and instruction included workshops, workbooks, hands-on practice, and post-training tools. Records were reviewed to track staff progress, provide feedback, spot trends and errors, and decide when to release staff from the review process. Challenges during the training included catalogers’ varying interest and skill levels, misunderstandings about what to upgrade, and the large amount of time spent on the review process. While backlogs developed, the quality of the records increased. Currently, all catalogers are enriching records with RDA content.

Teressa Keenan (University of Montana) described the process of conducting a time management study at the Maureen & Mike Mansfield Library. Ms. Keenan emphasized the importance of viewing the data from the study in the context of the total labor involved. She found the study would have benefited from more detailed instruction, inclusion of the student employees, and better communication and transparency, within and beyond the department, about how the data would be used. An important limitation of the study was the potential inaccuracy of the data being collected. Ms. Keenan stressed that future time management studies should include careful timing of data collection, consideration of limitations, combining the results with other data, and having a variety of potential uses for the data collected. Benefits from the study include a better understanding of the atmosphere and work of the department, finding improvement opportunities, allocating staff resources, identifying technology needs, documenting work trends, and benchmarking.

Dana Miller (University of Nevada, Reno) discussed the faculty experience of taking the Everything DiSC assessment. After taking the test, participants are given a DiSC profile and map. The results analyze behavioral tendencies in a given environment, in this instance the workplace, and how those tendencies affect one's ability to interact with people who have different profiles. The takeaway for the individual is learning to identify the cues that reveal another person's style, which provides insight when working with others. Being aware of this allows individuals to use different approaches to effectively direct, delegate, develop, and motivate others, and to recognize that their own style may not be received positively by people with differing styles. Ms. Miller emphasized that, while the assessment provides guidance, the profile is not fixed or static, and an individual's profile and behaviors can fluctuate.

The final speaker, Susan A. Massey (University of North Florida), presented on the importance of building rapport with staff. She underscored that middle management has a responsibility to create a stable work environment. To do this, managers need to demonstrate to staff that they are trustworthy, fair, loyal, and self-aware, and that they act with integrity. It is also important for managers to share information about themselves, communicate openly, and offer their skills, experience, and knowledge to staff. She emphasized the value of being authentic and real in communications. Ms. Massey discussed Stephen Covey’s circles of concern and influence as a model for managers. She stressed that managers should find their own leadership style, find ways to assist staff in excelling at their jobs and achieving their goals, and direct staff to focus on serving library users. To be effective, managers should believe in their staff and encourage them, express how much they value their staff, create opportunities for growth, and assist staff with their concerns. It is important for staff to see their manager as engaged and interactive, which can be demonstrated through humor, planning events, and celebrating.

Librarians Without Borders: International Outreach

Reported by Greg Borman

On Saturday, June 27, 2015, three presenters discussed “Librarians Without Borders: International Outreach.” Each speaker had their own unique take on the subject, and what follows are summaries of their presentations.

Recovering Liberia’s National Documents by Jacob Nadal (Executive Director, Research Collections and Preservation Consortium (ReCAP), Princeton University Library)

In 2005, while serving as Head of the E. Lingle Craig Preservation Laboratory at Indiana University, Nadal and his working partners gained valuable insights regarding archival records that had survived in Liberia following a civil war that began in 1999 and ended in 2003. Estimates show that 250,000 people were killed and 1,000,000 displaced during the strife. By 2008, all recovered material had been given conservation treatment and saved on microfilm. Nadal pointed out that the Liberian government’s Center for National Documents and Records Agency (CNDRA), Indiana University’s Liberian Collections project, and the Carter Center, founded by Jimmy and Rosalynn Carter, all played vital roles in locating and preserving Liberia’s national documents. These documents included the Executive Mansion Archives, government and civil records (including those relating to marriage and land conveyances), and the personal papers of Liberian politicians and writers. That Nadal and his cohorts were able to recover and preserve materials so central to Liberia’s identity, including the country’s original Constitution drafted in 1847, is nothing short of miraculous.

SLIS Service-Learning Projects Around the World by Jessica Phillips (Head of Preservation, University of North Texas)

Phillips discussed a University of North Texas Department of Library and Information Sciences program involving travels to a variety of countries to work on library projects. The majority of participants have been both graduate students and faculty involved with the department. The first study abroad program occurred in Thailand in 2003, where work was done at the Chiang Rai Montessori School. The program returned to Thailand from 2004-2006, focusing on K-12 schools. Work completed included cataloging and classification of materials, weeding and repair, developing library policies and procedures, and training librarians and faculty at the schools. A trip to Albania in 2008 involved extensive work at a school library, whereas 2010 saw participants going to Ukraine to completely redesign another school library. Since then, the program has sent students and faculty to Peru, Russia, Germany, and the Czech Republic. On each trip, participants also get to do a fair amount of traveling and sightseeing, adding to the experience of going abroad. All in all, it appears that the University of North Texas has created an excellent program that exposes those involved to an array of geographical locations and worthwhile library projects. Perhaps more library schools in the U.S. should follow suit.

International Outreach: Ecuador, Manipur by Becky Ryder (Library Director, Keeneland Library)

While Ryder’s slides covered work she did in both Ecuador and Manipur, for this presentation she focused on her consultant work in Manipur. At the time, Ryder was Head of Preservation Services at the University of Kentucky. Manipur is a small state in northeastern India with a history of seeking secession from India, and the project that Ryder and others undertook there was dubbed “Digitize Manipur.” The project’s primary initiator was L. Somi Roy, a New York-based media curator originally from Manipur. In the end, five individuals (including Roy and Ryder) involved with both the University of Kentucky and The British Library made the trip to Manipur during April 21-27, 2008. They focused on examining Manipur’s historic manuscripts, as well as preparing to digitize materials in order to establish access to them. Both the Manipur State Archive and private archives were investigated. The group of five also took part in A Colloquium on International Digital Preservation and Conservation & Scholarship In Manipur: An Exploration of an International Learning Community, which was organized by Manipur University. Additionally, those who made the trip attended a number of cultural events so that they might better understand the archival materials they were examining. Upon returning from the journey, the British Library assessed the condition of the materials through a written report and recommended next steps in the project. Overall, Ryder convincingly reported that “Digitize Manipur” participants appropriately immersed themselves in Manipuri culture while preparing key manuscripts for exposure through digitization.

Managing Transliteration of Bibliographic Data

By Glen Wiley, University of Miami

This program was held on Saturday, June 27 and sponsored by the ALCTS CaMMS (Cataloging & Metadata Management Section) Committee on Cataloging: Asian & African Materials. It was co-sponsored by the Africana Librarians Council, the Asian, African, Middle Eastern Section of ACRL, the Committee on Research Materials on Southeast Asia, the Middle East Librarians Association, and the Committee on South Asian Librarians & Documentation.

Margaret Hughes (Metadata Librarian, Stanford University) was the moderator of the program. She introduced the speakers and the program’s premise. The program covered the history and principles of producing and distributing bibliographic data in non-Roman scripts, as well as how this might change in the future.

Deborah Anderson (Research Linguist, University of California-Berkeley) gave a well-received presentation in three parts. The first part focused on the basics of Unicode, the international character encoding standard, giving an overview of the standard, the Unicode web site, and the reasoning behind why it is the backbone of multilingual text representation. She also introduced projects such as the Unicode Common Locale Data Repository project and the University of California, Berkeley Script Encoding Initiative. Some modern minority and historic script examples were shown, like the Japanese script and the 'Masaram Gondi' script. It is important for all scripts to get into the Unicode standard in order for them to be accessible and discoverable in the online world of today. The second part was about the advantages of transliteration tables for non-Latin scripts and common transliteration issues with bibliographic data. While the BGN/PCGN and ALA-LC romanization tables are extremely helpful, they only cover a limited number of languages. Many scripts are missing, and text representation can be hard with different fonts, keyboards, and software. Two transliteration tools that can be helpful are the Common Locale Data Repository tool and the Google transliteration input tool. The third part focused on the future of multilingual search and how hundreds of scripts have yet to be encoded. In general, scholar-supported scripts, scripts used in digitization projects, and more modern, commonly used scripts get higher priority in Unicode-focused projects.

Steven Loomis (Technical Lead, IBM Global Foundations Technology Team) explained the building blocks for accessing multilingual data through the Unicode Common Locale Data Repository (CLDR) project. The project collects language- and region-specific data from around the world to help build software. Coverage of individual languages varies widely, but close to a hundred languages are fully covered in the project. A demo of the CLDR project data in the CLDR survey tool was given to the audience; the survey tool is the voting process for deciding the transliteration of different languages. Ultimately, the CLDR data could be very useful for the discoverability of multilingual records and for building more transliteration tools. Lastly, the speaker briefly showed the International Components for Unicode (ICU) Transforms tool, which processes Unicode text from one script to another. Both of these tools could help automate the transliteration of multilingual data, support a future library linked data environment, and solve character encoding issues for libraries.
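
A small sketch of what the ICU Transforms tool does, written with the PyICU binding; note that ICU's built-in transforms are CLDR-based and are not the ALA-LC romanization tables, so library applications would need custom transform rules for ALA-LC output.

    from icu import Transliterator  # PyICU binding to the ICU library

    # "Any-Latin; Latin-ASCII" chains a script-to-Latin transform with an ASCII fold-down.
    to_latin = Transliterator.createInstance("Any-Latin; Latin-ASCII")

    for text in ["Российская государственная библиотека", "Βιβλιοθήκη", "図書館"]:
        print(text, "->", to_latin.transliterate(text))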

Open Source Software & Technical Services: Kuali OLE, GOKb and VuFind

Reported by Gina Solares, University of San Francisco

This Monday afternoon session highlighted technical services librarians’ roles in open source library system development. Presenters discussed their own experience working with Kuali OLE (Open Library Environment), GOKb (Global Open Knowledgebase) and VuFind, and emphasized the benefits of working with software that is built and shaped by libraries for libraries.

Beth Picknally Camden (Goldstein Director of Information Processing, University of Pennsylvania Libraries) started off the session by clarifying the difference between open source and community source. Community source builds on the practices of open source, in that some institutions commit resources (people and money) to developing the software. Picknally Camden suggested that technical services librarians might contribute to community source projects in the areas of governance, analysis, specifications, scoping, testing, implementation, training, and communications. She has been involved with Kuali OLE governance, including the functional council and communications team, and gave an overview of the development schedule and roadmap.

Kristin Martin (Electronic Resources Management Librarian, The University of Chicago Library) discussed her experience as a lead subject matter expert for the e-acquisitions team of Kuali OLE. In addition to crafting user stories to inform development, she was involved in developing a data model to support electronic resource management (ERM) functions within OLE. Martin described how she was able to use the ticketing system for bugs and enhancements, in conjunction with documentation of particular problems, to communicate directly with programmers. Martin admitted that it has taken a long time to get the OLE project’s ERM module up and running due to changing membership in the team, changes in desired functionality, and software limitations. Software development is a complex process, especially when library resources, acquisitions, and workflows are themselves complex.

Kristen Wilson (Associate Head of Acquisitions and Discovery, North Carolina State University Libraries) presented next, discussing her work with GOKb (Global Open Knowledgebase). The goal of GOKb is to be a community-managed knowledge base for e-resources. It will integrate with Kuali OLE and help librarians share the load of managing e-resources, taking advantage of the network effect. Wilson identified areas where librarians, as subject matter experts, can inform and test data models that are developed by technical experts. She invited attendees to request a guest login at http://gokb.kuali.org/gokb/ to see the knowledge base for themselves and get involved with the project.

The session concluded with Lisa McColl (Cataloging/Metadata Librarian, Lehigh University) talking about Lehigh’s experience implementing VuFind and Kuali OLE. With approximately 4,900 undergraduates and 2,100 graduate students, Lehigh University is the smallest of the Kuali OLE partners. Despite their smaller size, they have three developers working on VuFind, Kuali OLE, and their library systems. Her observation has been that implementing open source has given the staff a sense of empowerment. They are able to “look under the hood” of their systems and are working directly with people who can change things in the system. McColl discussed how the Describe portion of OLE allows her to manipulate and enhance batches of records in a very robust way.

Following the presentations, attendees were invited to ask questions. The first questioner asked the panelists if they knew of open source plans to move the underlying data infrastructure from MARC to linked data. Presenters suggested looking at the BIBFLOW project at the University of California, Davis, which is using a forked version of Kuali OLE built on a triple store for linked data. Presenters mentioned that in Kuali OLE, records are stored in MARC XML. There is also a prototype for Dublin Core data in Kuali OLE, and they will be looking closely at results from the BIBFLOW project to see what could be reintegrated into the shared Kuali OLE development.

The next question centered around the practice of combining or separating print and electronic holdings on a single record. Martin reported that at The University of Chicago Library, they are creating separate print and electronic records. This practice does not aggregate holdings for display to the user, but it does allow for more accurate statistics, record management, and supports the concept that electronic is not an “add-on” to print material, but a separate resource to be managed.

The next questioner asked panelists what skillsets they have developed from working on these projects. Wilson mentioned that she’s learned useful skills around writing functional specifications, how to communicate effectively with developers, software testing, and a broader view of the development process. She also mentioned that she has gotten more familiar with regular expressions, working with MARC XML, and using MarcEdit for data cleanup. Martin added that learning how to report problems so that they can be understood and replicated has been very useful. She has become more focused in the way that she gives feedback about the software and is better able to articulate problems and expectations. Picknally Camden cited her experience with group collaboration tools to support global teams such as WebEx, Skype, Google Drive, and Jira for issue tracking. These tools and experience have helped her become more efficient in communicating and working with a distributed group.

The last questioner inquired about the worldwide scope of these open source projects. Panelists mentioned interest in Kuali OLE from organizations in Germany, Finland, Australia, United Kingdom, France, Japan, and New Zealand. Picknally Camden suggested that interested parties could subscribe to the two open Kuali OLE e-mail lists for more information.

The presentation slides can be found here.

Three Short Stories about Deep Reading in the Digital Age: ALCTS President’s Program featuring Maryanne Wolf

Reported by Chelcie Juliet Rowell, Wake Forest University

Easily looking like a member of the American Library Association herself, Maryanne Wolf began the ALCTS President’s Program by directly addressing attendees, saying “I look at this audience as the guardians of knowledge, the group of people who will help others find ways of expressing what we think and what we feel.”

Wolf is the Director of the Center for Reading and Language Research at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. A developmental psychologist whose research centers upon the neurological underpinnings of reading and language, she’s articulate, energetic, and a natural ally of librarians and the library profession. Her talk for the ALCTS President’s Program was structured around three “short stories” about deep reading in the digital age.

The first short story: How in the world did this brain of ours ever learn to read? Whence the cognitive process of deep reading?

Here’s a mystery posed by Wolf. The human brain was never born to read. We have genetic programs for oral language, for example, but none for reading. How, then, does the brain learn to read with no genetic program or specific reading center? The answer lies in our brain’s plasticity. We take existing circuits of neurons—originally designed for vision, language, and cognition—and we forge new reading circuits.

Deep reading adds milliseconds to the reading process, milliseconds during which we connect our background knowledge to what we’re reading, take on perspectives different from our own, and visualize imagery. During this crucial slowing down, we make inferences and draw deductions and inductions. We reflect, and we generate novel thought.

When we expose ourselves to syntactic complexity—the slowly unfurling sentences of Henry James come to mind—we actually grow our capacity for cognitive density. As Wolf argued passionately before an audience that seemed to hang on her every word, if syntax reflects the shape and convolution of our thoughts, we diminish syntax at the risk of diminishing our own thinking.

Deep reading also enables us to experience the affective dimension of reading—the way Chimamanda Ngozi Adichie’s characters’ sense of being at home (or not) in their own skin gets under our own.

The second short story: Threats and opportunities of deep reading

What, then, are the implications for our reading brain in a digital culture that emphasizes speed? Remember, the circuits we have forged so that we can read, and read deeply, are malleable. Could practices of skimming or multitasking short-circuit our reading circuits?

Different media have different use qualities, developing some cognitive skills at the expense of others. Reading in print affords slower, concentrated reading processes, activating the astonishing array of cognitive processes that fill those few additional milliseconds. Reading in a digital medium, on the other hand, affords speed, efficiency, and multitasking.

For Wolf, the question is not reading in print versus reading new media. Instead, Wolf wants a “truly biliterate brain,” one that transitions easily from task demands to deep reading.

The third short story: Toward a democratization of knowledge

Wolf’s third and final story framed a problem and proposed a solution. The problem: There are 57 million children in the world who will never become literate and 200 million more with no functional literacy. The solution: Wolf leads a team that’s trying to create an experience on a tablet that helps children learn to read, even those children who live in remote parts of the world and have no access to education or even electricity. Using circuits of the reading brain as the basis of design, they are trying to create a learning experience that would allow children to be guided by choice, since they have no access to teachers.

Wolf sees literacy as our great imperative — to ensure full literacy for every child and every form of challenge. If we are both what we read and how we read, then librarians are in a position to advocate how readers read, as well as what readers read.

Quoting Anthony Grafton, who is referring to the entrance of the New York Public Library on Fifth Avenue, Wolf closed her talk with these words: “If you want deeper knowledge, you will have to take the narrower path that leads between the lions and up the stairs.”

If this year’s ALCTS President’s Program whetted your appetite for deep reading, as it did mine, Maryanne Wolf has two books forthcoming in 2016 — What It Means to Read: A Literacy Agenda for the Digital Age (Oxford University Press) written with Stephanie Gottwald for a scholarly audience, and Letters to the Good Reader: The Contemplative Dimension in the Future Reading Brain (Harper Collins) for a broad public audience. I look forward to immersing myself in both, curled up in my favorite reading spot with a steaming cup of tea.

To the MOOC and Beyond! Adventures in Online Learning, Copyright and Massive Open Online Courses

Reported by Rebecca Nous, University at Albany, State University of New York

“To the MOOC and Beyond! Adventures in Online Learning, Copyright and Massive Open Online Courses” was held on Sunday, June 28, 2015, and featured three speakers who discussed their experiences with Massive Open Online Courses (MOOCs).

Heather Staines (ProQuest SIPX), provided an introduction to MOOCs and discussed trends ProQuest SIPX has found through their experience supporting them. ProQuest SIPX helps to reduce the cost of MOOC course readings by checking to see if readings are available via library subscriptions or are open access, and facilitates licensing, purchasing, and invoicing where new access to copyrighted material is required. In the 35 MOOCs ProQuest SIPX has supported, they have found that the majority of those taking them are from the United States. Courses typically require between nine and twelve readings, and journal articles are slightly more common than books. Content created by the instructor is the most commonly accessed, but the way materials are presented by the instructor affects access rates as well. Content referred to directly in course videos or written notes is more likely to be accessed than content not specifically mentioned by the instructor. Publishers may provide free or reduced cost access to licensed content in exchange for data on student access and use, and engage in geography-based pricing by offering discounted access to those in developing nations.

Ronda Rowe (University of Texas at Austin Libraries), presented “Not Just MOOCing Around,” a discussion of her institution’s experience with MOOCs. University of Texas at Austin joined edX in 2012, and offered their first four MOOCs in 2013. The University of Texas at Austin Center for Teaching and Learning supports their MOOCs in collaboration with the Libraries. The Libraries have created a new Learning Commons, which incorporates a Media Lab, to support the University’s MOOCs. At University of Texas at Austin, the University owns all content created for the MOOC, and the instructor is permitted to use that content for the course. This differs from traditional courses, for which faculty-created content is the property of that faculty member. As of yet, the Libraries have not been asked to include MOOC content in the University’s digital repository, but Rowe expects that to happen in the future.

Mimi Calter (Stanford University Libraries), presented “Evolution of Online Learning Support,” and reminded the audience that “online learning is still learning.” Because of that, Stanford decided that MOOCs should transition from being under the purview of the head of online learning to the Vice Provost for Teaching and Learning, like all other courses. Offering MOOCs has impacted the Stanford University Libraries in several key ways. Though the Libraries do not act as a copyright clearance center for faculty, the Libraries encourage copyright education and compliance by distributing an annual copyright reminder to all faculty, staff, and students, including distance learning guidelines. The Libraries also work to ensure that licenses for online content are appropriate for supporting online learning needs. In addition, librarians are actively participating in MOOCs, which highlights library collections as well as raising the profile of the Libraries’ faculty.

There was lively discussion during the question-and-answer period relating to issues of course ownership. One attendee asked about copyright for student-created content in MOOCs. Both Rowe and Calter indicated that students own their own work but agree to make any online posts, chats, or other coursework public when they sign up for the MOOC. Another attendee raised the question of ownership in relation to non-MOOC online course content and course recordings, and pointed out that there are many issues related to online courses that need to be addressed, including whether faculty can take their online course content with them if they leave an institution and who owns recordings of online courses.

What Drives Collection Assessment: The “Why” That Brings You to “How”

Reported by Jianrong Wang, Stockton University

How is your collection assessment conducted? What factors are taken into consideration in this process? Three speakers each told their stories in this program, sponsored by the Association for Library Collections & Technical Services (ALCTS) Collection Management Section (CMS) and the Reference and User Services Association (RUSA), held on Monday, June 29, 2015. Approximately 300 people attended the program.

Betty Landesman, Chair of CMS Continuing Education Committee, from University of Baltimore, introduced the program.

With vivid images, Michael Leach (Head of Collection Development, Harvard University) described their collection assessment experience in weeding materials during a renovation of the library between 2013 and 2014. With faculty on board, they focused on collection usage, analyzing usage statistics by subject and patron type. Circulation data collected over twenty-five years indicated a slow decline in the use of print materials. User needs assessments were also conducted via surveys, focus groups, and observations of students' usage patterns in the library, with a good return rate on the print survey forms. Three scenarios were developed as material retention strategies; based on the collected data, they adopted Scenario Two, in which 80% of the collection was weeded. Leach summarized the key lessons learned in the process as:

  • More data is better than less.
  • Visuals are critical when working with faculty.

Stephanie Schmitt (Assistant Technical Services and Systems Librarian, University of California Hastings College of the Law) spoke about their project to repurpose library space, with the goal of clearing one floor and removing 40% of the shelving space. Although the project operated with limited time and budget, as well as misunderstandings by faculty, they completed it successfully. The end result was an empty floor and a reduction of over 162,000 volumes from the collection. The key factors that ensured success were:

  • Insistence on getting needed support from the library administration
  • Effective project management in coordinating library staff and the operations
  • Smooth communication flow to college administration, faculty and students
  • Functional collection analysis criteria
  • Sufficient gluten-free pizzas

Schmitt’s presentation slides are available here.

Bleue Benton (Oak Park Public Library) talked about their collection evaluation project, which applied a collection- and user-centered approach to determine their collections’ strengths, weaknesses, physical condition, and age. The goal was to get a genuine and honest view of their collection. The evidence-based assessment started with questions like:

  • What do we see?
  • How does it smell?
  • Does the collection seem to reflect what users ask for?
  • Who uses this area of the collection?

One example Benton gave was their evaluation of diversity. A library-wide open discussion on diversity, collection balance, and unintended bias was conducted. They tackled the “open, hidden, and institutionalized racism, sexism, and ageism” and trusted that their collection should serve, reflect and welcome everyone. “Don’t be afraid to try something out,” Benton advised. The presentation handouts are available at ALA’s conference website. The library’s project report can be found under their Library Toolkit for the Transgender Resource Collection.

This successful program ended with many questions from the audience. Some of them were:

Q: How did you work with faculty who were against weeding? A: Communicating.

Q: How do you balance the budget between electronic and print resources? A: Not to get multiple copies when e-format exists.

Q: What tools were used for non-fiction materials? A: No commercial tools. Look at what’s changing. Weed on an ongoing basis.

Q: [For Benton] What comes after the assessment? Did you find that diversity materials did not circulate well, and did you add them back? A: Balance with thoughtfulness. Children’s books should be current.