ALCTS Member Reports on Archiving 2005 Conference

Yvonne Carignan, University of Maryland, College Park

Archiving 2005, the second such conference, brought together librarians, conservators, archivists, curators, scientists, and vendors with the stated goal of building an international community with specialties in image archiving and preservation. Global it was, with speakers and attendees from all over the world; the program and abstracts of some of the papers are available online. The conference, sponsored by the Society for Imaging Science and Technology and held April 26–29, 2005 in Washington, D.C., opened with a day of tutorials and workshops, with tracks on “Formats and Metadata,” “Imaging Science and Archiving,” and “Media and Storage.”

Following the tutorials was a three-day conference of “technical papers.” One of the most rewarding aspects of this conference was its single-track format: everyone benefited from the questions and comments of the entire spectrum of specialists in attendance. Each day began with a keynote session and continued with several topical sessions, on themes including rethinking repositories, hard-copy permanence, imaging, digital archiving metadata, formats for digital archiving, and migration methods and tools.

The preservation community was well represented, with ALCTS Preservation and Reformatting Section (PARS) members Jacob Nadal presenting “Keeping the Bits in Place: A Case Study of Raster Image Migration,” Robin Dale chairing the session, “Digital Archiving—Metadata,” and Stephen Chapman presenting “Microfilm: A Preservation Technology for the 21st Century?”

Presentations ranged from practical advice, to high tech, to broad views. An example of practical advice came from Andrew Wilson of the National Archives of Australia (NAA). In a presentation titled “An Open Source Tool for Migrating Digital Records for Long-term Preservation,” Wilson described his institution’s policy of preserving Australian Government digital records by converting “born digital” documents originally created in proprietary software. To avoid losing access to documents whose formats are controlled by a commercial entity, the NAA has mandated conversion of digital objects created in proprietary formats to open, fully documented formats. For this purpose, the NAA has developed a tool called Xena, or XML Electronic Normalizing of Archives, which converts records from a wide variety of proprietary formats to open (free), fully documented formats for archival preservation. Wilson named specific open formats and a Web site with information on them, and stressed that data formats are the key to long-term preservation of electronic records. The NAA’s strategy provides a clearly conceived model for other institutions to emulate.
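The normalization idea behind a tool like Xena can be sketched in a few lines: wrap a source file’s bytes in a simple, fully documented XML envelope that records the original name and an intended open target format, so the archived object is self-describing. This is a toy illustration only; the element names, format mappings, and structure below are hypothetical and are not the NAA’s actual Xena schema.

```python
import base64
import xml.etree.ElementTree as ET

# Hypothetical mapping from proprietary source formats to open targets.
OPEN_TARGETS = {
    ".doc": "text/plain",   # word-processor file -> plain text
    ".bmp": "image/png",    # bitmap image -> PNG
}

def normalize(filename: str, data: bytes) -> str:
    """Wrap a file's bytes in a self-describing XML envelope."""
    ext = filename[filename.rfind("."):].lower()
    target = OPEN_TARGETS.get(ext, "application/octet-stream")
    root = ET.Element("normalized_object")
    ET.SubElement(root, "original_name").text = filename
    ET.SubElement(root, "target_format").text = target
    # Base64 keeps arbitrary binary payloads valid inside XML text.
    ET.SubElement(root, "payload", encoding="base64").text = (
        base64.b64encode(data).decode("ascii")
    )
    return ET.tostring(root, encoding="unicode")

xml_doc = normalize("report.doc", b"quarterly figures")
print(xml_doc)
```

The point of the envelope is that, decades later, the record can be opened with any XML parser and its provenance read without the original proprietary software.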

Other talks were highly technical. For example, “A Study on a Viewing System for Museum Collections using High-Definition Images,” delivered by Fumio Adachi of the National Museum of Japanese History, was at the level of technical specialists.

In contrast with the highly technical presentations, keynote sessions took the broad view. In the first day’s keynote address, Deanna Marcum (Library of Congress) placed digital archiving in the context of the entire gamut of preservation activities in which the Library of Congress engages.

Clifford Lynch (Coalition for Networked Information) provided the second day’s keynote remarks in a talk titled “Archiving, Stewardship, Curation: From the Personal to the Global Sphere.” Lynch described enormous investment in digital archiving by governments in Europe, Asia, and Canada, but only limited investment in the U.S. Noting how Google’s announcement of agreements with libraries to scan their collections captured the public’s imagination, Lynch argued that the result will be both more and less than the print collections. Lynch saw the digital surrogates as a substitute for those who otherwise do not have access to the great library collections: Google’s digitized books do not replace the paper originals, but are better than nothing for many users.

Lynch noted some public confusion about what Google is doing, with some expecting the material to be synthesized and repackaged like History Channel programs. Lynch also talked about the enormous challenges posed by the technology dependence of many fields’ research and documentation—use of simulation and visualization, data sets, and databases to advance scholarly work. Lynch posed questions including “Are we up to managing all the data?” and “What have we committed to?” Lynch cited a U.S. National Science Foundation Board report that identifies the need for a data management plan. Lynch then went on to explore issues including the quantity of data to save, the problem of files that cannot be migrated, and privacy and the question of “can you make it go away?” Finally, Lynch referred to the problem of hoarding too much data, and noted the archivists’ model for making organized decisions about what to discard.

Michael Keller (Stanford University) provided a final wrap-up called “Late Breaking News,” in which he addressed the question, “Does Google’s scanning of entire libraries mean the death or rebirth of libraries?” Keller pointed out the many unknowns about Google’s book scanning project: will there be “print on demand” for out-of-copyright scanned books, and if so, will there be a charge for that, or for viewing more than a snippet? He then described possible benefits to libraries of Google’s work: an index to books with links to libraries’ online public catalogs; files delivered to source libraries to store and use locally; and, ultimately, the identification of preservation needs.

Keller predicted that Stanford University Library might use the Google product for subtle searching (looking for ideas, not just words) and for recommendations of related material used by categories of people such as faculty or graduate students. Keller also identified challenges, including storage of all the new digital files, copyright issues, and accusations of cultural imperialism; the latter refers to objections by certain European countries that it is unfair for Google to make so many English-language titles freely available. Keller suggested, however, that if the Google project leads other nations to put their books on the Web, so much the better; in fact, Google is scanning collections that include many languages besides English. Keller concluded that he would like to see all books digitized to serve users better, but that Stanford will keep the physical objects and continue to buy printed books.