Midwinter 2009 Reports

Volunteer Reporters Cover ALCTS Forums and Events in Denver

ALCTS members who attended the ALA Midwinter Meeting 2009 in Denver provided the following summary reports. We thank the volunteers who covered a program or event sponsored by ALCTS or one of its units. Their efforts enable the rest of us to benefit from their presentations. We regret that volunteers were not available to report on all the forums.

Breaking Down the Silos: Planning for Discovery Tools for Library 2.0

An ALCTS Symposium

Robert Ellett, Ike Skelton Library, Joint Forces Staff College

Dina Giambi, ALCTS President and Assistant Director, Library Technical Services, University of Delaware, opened the symposium by noting that the online catalog is no longer the major, or the only, information-finding tool, since libraries now hold multiformat collections. The symposium was designed to explore what tools are available to improve access to those resources.

Robert Wolven, Associate University Librarian for Bibliographic Services and Collection Development, Columbia University Libraries, discussed the current issue in terms of silos, haystacks, and beaver dams. He used the analogy of finding a needle in a haystack, where the two resemble each other so closely, and noted that beavers build dams for two reasons: to keep unwanted elements such as predators out, and to keep nutrients and desired food sources in. Wolven indicated that information-finding tools must accomplish the same purpose with regard to the relevancy of search result sets. Search tools must isolate the pockets of information that users need to discover in answering their queries. Wolven defined information silos as metadata search tools that are built and maintained separately from each other. Silos can be institutionally based, commercially based, or professionally based (created for the profession). What are these silos designed to do? Library-based silos assist in the retrieval of physical objects, while commercially-based silos

Roy Tennant, Senior Program Officer, OCLC Research, addressed how federated search systems (also referred to as metasearch or cross-database systems) unify access to multiple electronic sources. Research indicates that users prefer the Google search model, with fewer places to search for all available content. Tennant noted that integrated library systems are still needed but in the future will only perform what he called “backroom” functions: acquisitions, cataloging, system administration and reporting, and serials management. These systems need an application programming interface (API) to satisfy users’ needs. APIs parse data so that relevancy ranking and alternative, personalized displays can be customized by the user. APIs also allow hyperlinks to be sent to OpenURL link resolvers so that full-text documents can be retrieved. He cited the University of British Columbia’s MetaLib system as an example.
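
Tennant’s point about APIs and OpenURL link resolvers can be illustrated with a short sketch. The following Python fragment builds an OpenURL 1.0 (KEV) link for a journal article citation; the resolver base URL and the citation values are hypothetical, and a real installation would use the library’s own resolver address.

from urllib.parse import urlencode

# Hypothetical resolver address; each library configures its own link resolver.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(atitle, jtitle, issn, volume, issue, spage, date):
    """Assemble an OpenURL 1.0 (KEV) link for a journal article citation."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL 1.0
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article metadata format
        "rft.atitle": atitle,
        "rft.jtitle": jtitle,
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.issue": issue,
        "rft.spage": spage,
        "rft.date": date,
    }
    return RESOLVER_BASE + "?" + urlencode(params)

print(build_openurl("Breaking down the silos", "Example Journal of Library Technology",
                    "1234-5678", "12", "3", "45", "2009"))

An API in front of the catalog can emit links like this for each search result, letting the resolver decide whether the library has the full text.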

Marshall Breeding, Director for Innovative Technologies and Research, Vanderbilt University Libraries, discussed options for next-generation library catalog interfaces. He cited studies indicating that college students begin their searches with search engines 89 percent of the time and begin with library websites only 2 percent of the time. He discussed the library’s competition, concentrating on Google Scholar, Amazon.com, Wikipedia, and Ask.com as information sources. Breeding lamented that integrated library systems have relatively weak keyword search engines and lack good relevancy sorting. He called for a redesign of library catalogs beginning with more powerful search engines, a more elegant presentation, and greater availability of digital resources within the catalog. He recommended incorporating more Web 2.0 tools into the catalog to encourage a more social and collaborative approach. Integrating weblogs, wikis, tagging, social bookmarking, user ratings, and user reviews will bring the user back to the catalog. Features such as “did you mean?” suggestions, validated spell checking, automated inclusion of authorized and related terms, and “more like this” recommendation services will help the system respond better to the user’s query.

Enhancements available for searching the library online catalog were discussed. New products that allow federated searching (searching across multiple databases, including the online catalog) were showcased. Experts from various library types discussed best practices for product implementation in their libraries.

Implementing an Institutional Repository: Benefits and Challenges

An ALCTS Symposium

Maggie Horn, SUNY System Administration

Greg Tananbaum, Consultant, ScholarNext, set the stage in his keynote address “Institutional Repositories: The Promises of Yesterday,” by discussing the six promises of Institutional Repositories (hereinafter referred to as IRs) and then rating them via his Sarah Hughes Gold Medal scale.

  • Promise 1, concrete response to scholarly communication crises, rated 3 Sarahs (out of 4).
  • Promise 2, expanded access to scholarly information, rated 2.5 Sarahs.
  • Promise 3, highlight depth and breadth of institution’s intellectual output, rated 2 Sarahs.
  • Promise 4, accelerating effect on “information wants to be free” movement, rated 3 Sarahs.
  • Promise 5, potential breeding ground for new generation of university-founded e-journals, rated 2.5 Sarahs.
  • Promise 6, low adoption costs for authors, rated the lowest at 1.5 Sarahs.

Georgia Harper, Scholarly Communications Advisor, University of Texas at Austin Libraries, then stepped up to the plate with a presentation on “Open Access and Digital Copyright.” She surprised some in the audience with her statement that copyright has little or nothing to do with promoting or protecting creativity, but is instead a state-enforced monopoly that places artificial limits on supply. In the world of open access, Harper believes that it will eventually cost publishers more to enforce copyright than enforcement returns. Since consumers and creators now have choices for how they obtain or create works, those choices fundamentally undermine the justification for the monopoly and thus will erode copyright.

Four of the remaining five speakers then spoke about their own IRs and the challenges that they met. Leah Vanderjagt, Digital Repository Services Librarian, University of Alberta Libraries, gave a presentation titled “Early Implementation Work for IR Management: Sifting through Choices for an Emerging Area of Service.” Marilyn Billings, Scholarly Communication and Special Initiatives Librarian, University of Massachusetts, Amherst, addressed “To Host or Not to Host, or, Decisions along the Way to a Successful Hosted Repository Investment.” Jessica Branco Colati, Project Director, Alliance Digital Repository, Colorado Alliance of Research Libraries, addressed the concerns of a consortial IR with “Constructing Consortial Digital Repository Services: A Look at the Core Components and Communities of the Alliance Digital Repository.”

Finally, Bob Gerrity, Director of Library Systems, Boston College Libraries, addressed “Moving from a Hosted to Local IR Platform: the Good, the Bad, and the Ugly.” Common to all the presentations was the need for faculty involvement. Faculty want to know “what’s in it for me,” since their loyalties appear to be first to themselves, second to their discipline, and third to the institution. The idea “if you build it, they will come” does not work. You must show that the repository is of benefit to the individual … and to the institution.

Robert Tansley, Engineer, Google Inc., gave a very entertaining and audience-muttering presentation titled “Pick Your Battles; Some Do’s and Don’ts When Building an Institutional Repository.” For the catalogers and metadata specialists in the audience, his statement “Don’t fret about metadata—get over it” was something of a pinprick. Another point that probably caused some angst in the audience was that no one is really that different, and we should not be reinventing and customizing our IRs to the point that we customize ourselves into a corner. A big “do” was to make sure that your IR can be crawled by search engines: you want to be discovered!

Finally, Tananbaum returned for a closing presentation, “Institutional Repositories: The Promises of Tomorrow.” He stressed that implementing an IR is quite different from the deliberative processes to which we have all become accustomed—you eventually have to leap into the void. He reiterated points made by earlier speakers and reminded attendees that the biggest challenges are faculty buy-in, coordination across the institution, and coordination across institutions.

Some of the presentations are available on the ALA Midwinter wiki.

Are We Ready for E-Book Approval Plans?

Acquisitions Section Midwinter Forum

Michael Wright, University of Iowa

Lynda Fuller Clendenning, Indiana University Bloomington, moderated and introduced the speakers: John Elliott, YBP; Carolyn Morris, Coutts; Ron Boehm, ABC-CLIO; Jay Henry, Blackwell Book Services; and Steven Bosch, University of Arizona. Fuller Clendenning posed the question “Are we ready? Maybe we’re past ready.”

The intent of the forum was to put the notion of e-book approvals into the larger context of e-book acquisitions. Some general issues include:

  • How do e-books differ from print? Are they the same?
  • What are the pricing models?
  • Building print and e-collections. How to profile e-book plans. How can duplicates be prevented?
  • What is the future of e-books?
  • Can e-book approval plans be handled in the same manner as print?
  • There are lots of new service models, including delivery to mobile devices.
  • Will there be a discount?
  • Will print and electronic business models differ?
  • The whole concept of collection building seems to be on the table.

Ron Boehm, ABC-CLIO, noted that there are two traditional ways libraries get materials: directly from the publisher and non-directly through a vendor. Direct from the publisher is a common way to acquire materials, and is a typical model for books and e-books. Publishers offer selection tools, ordering assistance, shipping and order authentication, hosting (for e-books), and billing, and can provide OPAC data.

With nondirect orders, a publisher supplies metadata about a title to a vendor, who decides whether or not to pick it up. He noted that most vendors use a just-in-time inventory plan. The publisher ships books and confirms orders, while the vendor may stock a warehouse from which it fulfills orders, bills libraries, and provides customer service.

E-books tend to complicate this process. Nondirect e-book orders involve files, metadata, and a third-party e-book host (EBL, MyiLibrary, etc.) that loads files, provides selection tools, and assists with authentication and troubleshooting. Orders can be placed either with a distributor (YBP, Coutts) or with the third-party host, and whichever is chosen must then work out all the details. Both publishers and third-party vendors provide technical and administrative support and usage statistics.

The question of why all e-books are not available on all platforms was raised. Boehm noted this is largely due to economic reasons. Distributors/vendors choose a business model which allows a range of commissions. Publishers may not be willing to give up as much of the price as the distributor wants. Sometimes it can take years for the two parties to agree.

Sometimes the reason an e-book is not available on a particular platform is logistics and timing: it takes time to work through a contract, transfer files, and load the e-books onto the platform. The reason may also be strategic: a publisher may not want its e-books on a particular platform for whatever reason.

There are also different types of business models: buy and lease. Pros to buying include ownership (long-term access is guaranteed). The cons are that there are fewer titles for users for a given cash outlay. Furthermore, while ownership is certain over the long term, access is not.

Pros for a leasing model include: libraries get more titles for a given annual cost (sometimes as many as four to five times more than under a purchase model), and collection development is simplified to a degree, since as the collection grows in numbers, the lease cost per title declines each year. The cons are that the collection may include titles the library might not have bought, and future budgets are uncertain, which may lead to a sudden loss of access.

In Boehm’s opinion, the long-term impact of buying versus leasing favors leasing. Given the same cash outlay, an institution generally has more books and content available to users through leasing than through purchase. The real problem is the uncertainty of budgets from year to year.

Another publishing model is the e-book database. These are carefully crafted defined sets of materials on a specific subject with few, if any, gaps. New content is added regularly and the sets often include discovery tools. Libraries can have more confidence in the buying decision because they have studied the scope of the database.

Boehm noted that publishers tend to prefer e-book collections in which a cross-searchable selection of books can approach database levels of comprehensiveness. Such collections may be static or have a growing number of titles, but the number of titles matters. Fewer platforms with larger numbers of titles may facilitate the success of future federated search tools. He noted that choosing only one platform is not necessary. Boehm mentioned that a collection may include purchased titles, leased titles, or a mix, and that publishers often choose a mix. He noted that a future option should include lease-to-own terms: X years of lease equals purchase.

Boehm believes that all available platforms should be captured by the distributor and that they should be selectable in the approval plan setup.

John Elliott, YBP, noted that through their GOBI online selection tool, YBP offers e-books from the aggregators NetLibrary, ebrary, and EBL. There were 392,744 aggregator e-books in GOBI as of January 2009, and the number is growing rapidly. Although these are from aggregators, they are usually listed as individual titles in GOBI. There are also publisher-direct e-books in GOBI from publishers such as IGI, SAGE, and Springer; ABC-CLIO and Greenwood will soon be available.

YBP has e-slip plans available, and they can mirror a customer’s print book approval plan, since they are auto-profiled from the matching print book plan. In other words, the work done on print book approval profiles informs the profiling process for e-books and the slips delivered via GOBI. YBP offers current publications and/or older titles. A library’s existing aggregator e-book holdings can be loaded into GOBI as a duplicate control measure.

Options for an e-book slip plan include sending slips for all aggregators, which allows libraries to pick and choose, or sending slips for specific publications. GOBI slips show whether an item is an e-book, and GOBI will also show whether a print edition of the same title was already shipped to the library, thus allowing duplicate control across formats, similar to the functionality for print books. YBP is working to integrate print and electronic approval plans for a summer 2009 rollout. This is a huge project for YBP, touching all aspects of the business, and includes changes to GOBI, the profiling system, and management systems. Their goal is to match the service level of the print plans while integrating electronic titles. Services will include giving libraries the option of buying the first available format, or print, or electronic. Titles will be profiled as they are now, with the book “in hand.” Substitution rules will determine the appropriate format, binding, aggregator, and so on. Titles will be profiled only once, when first available, regardless of whether they are an electronic or print edition.
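
To make the substitution-rule idea concrete, here is a minimal sketch, assuming hypothetical preference names and a simple print/e-book choice; it is not YBP’s implementation, only an illustration of the kind of per-title decision an integrated approval plan has to make.

# Hypothetical format-substitution rule for a single approval-plan title.
# The preference names and the decision logic are illustrative only.

def choose_format(preference, print_available, ebook_available):
    """Return 'print', 'ebook', or 'wait' for one profiled title."""
    if preference == "first_available":
        if ebook_available:
            return "ebook"
        return "print" if print_available else "wait"
    if preference == "ebook_preferred":
        if ebook_available:
            return "ebook"
        # e-editions often lag print, so fall back rather than lose months of access
        return "print" if print_available else "wait"
    if preference == "print_preferred":
        if print_available:
            return "print"
        return "ebook" if ebook_available else "wait"
    raise ValueError("unknown preference: " + preference)

print(choose_format("ebook_preferred", print_available=True, ebook_available=False))  # prints 'print'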

YBP profiles more print books than e-books, since e-books tend to lag the availability of print. YBP is doing a great deal of preparation, both in terms of development and programming, and is also working with buyers, catalogers, and profilers to revamp YBP’s internal processes. This will eventually include account structures, vendor relationships, editing existing approval plans and setting up new ones, and, of course, GOBI training.

Carolyn Morris, Director of New Business Development, Coutts, began by providing brief background about MyiLibrary, the aggregator purchased by Ingram (Coutts’ corporate parent) in 2006. Prior to this, MyiLibrary and Coutts had taken different approaches to e-materials in academic libraries. Coutts chose to create a proprietary platform because they wanted to control the customer experience with e-books rather than relying on another vendor, and she reported that they learned from the paths blazed by other companies. At this time, Coutts/MyiLibrary offers aggregated e-books online to simultaneous users, licensed in perpetuity. MyiLibrary is browser-based (no platform) and contains 175,000 titles and 20,000 audio e-books, and presently serves more than 1,000 customers.

Coutts has integrated e-books into Oasis, their online selection interface, and their print book approval plans include e-books as well. Libraries choose a format preference by subject or publisher. Essentially, Coutts treats e-books as just another binding, and workflows mimic those for print.

Their biggest discovery has been that libraries have not exactly lined up for e-books, and there has not yet been high demand for them. She suspects that part of this is availability, with the aforementioned lag between print and electronic. She believes libraries are not only waiting for more e-content (the lag is often sixty days for big publications) but that price is coming into play. She noted in particular the lack of discounts for e-books, since libraries pay list price plus an additional cost. Academic libraries have also expressed concern that their most important patrons (faculty) may not accept the format. Coutts/MyiLibrary has not sold a lot of e-books on approval, but has sold many through packages, including deals from Cambridge University Press, Oxford, Elsevier, and Springer, to name a few. There is a significant and growing number of new academic titles from these presses available as e-books each year. A substantial percentage of approval sales comes from these publishers, but package sales have also been noteworthy. However, she noted that publishers sold some packages directly, and this raises issues. If libraries are getting core collections via publisher packages, will approval plans still have value? Will vendors be able to support labor-intensive approval operations? Approvals require a lot of manual processes and are not cheap for vendors to manage. If publishers take revenue by selling packages on their own, it could affect approvals in the future.

In traditional collection development, the future is a gamble in terms of correctly selecting what patrons will use and results have never been more than mixed. The odds of getting it right worsen as patrons become increasingly aware of resources available outside of libraries such as Google. Coutts can use their approval plan structures to load files of bibliographic records into a library’s OPAC and allow patrons to select and purchase the titles they discover. Libraries only pay for the titles that get used. This offers access to the patron-driven universe via the OPAC.

Will the future of e-book purchases be title by title? This is the norm in the United Kingdom where libraries buy titles in multiple copies to support classroom instruction. In this situation, e-books are cheaper. On the other hand, in Canada, lots of e-book packages are negotiated via consortia.

Morris noted that in the United States there has been more caution and felt that economics will drive the future. Increasingly patrons are more demanding, while budgets and staff numbers are shrinking. This may tip the balance towards package purchases simply because they are more efficient than buying title by title. Indeed, title by title e-book selection may go by the wayside.

The landscape is becoming more complex: publication on demand; print; electronic and print; print and electronic on demand; and so on. Libraries are likely to look for support from vendors for aggregation, simplified licensing and invoicing, duplicate control, streamlined metadata, and bundled print and electronic, as well as e-only and print on demand. All of this will make libraries more reliant on vendors for collection development assistance.

Jay Henry, Manager of Online Products, Blackwell Book Services, began by noting that Blackwell differentiates between e-books and other print and e-products. Why is electronic different from print? E-books are treated like monographs and can be profiled, and libraries can be notified of their availability, accomplishing the same mission as print, but e-books also save the cost and time of delivering a physical book. E-books can be presented for review upon loading into the OPAC.

Henry believes that timing is the real challenge. The lifecycle of a book from the vendor’s perspective is that when publication of a print title is announced, the book is purchased by the vendor, which creates a profile; the book gets into the approval flow and is announced to libraries. The e-book version is typically released later. There is always a lag behind print, but there is a great deal of variation in how long. He encouraged publishers to move to a point where the e-version is available earlier in the publication cycle. Sometimes publishers (and thus Blackwell) can announce print and electronic titles at the same time, but as noted, there is usually an e-book lag. With approvals, Blackwell links the electronic profile to the print profile. In a perfect world, it would be preferable to start with electronic, link to other editions, and notify libraries of availability only once. Currently print drives the process because it comes out first. When electronic content is provided first, an item is loaded to aggregators right away and vendors can do more with the content. This would also allow vendors to consolidate workflows, which translates to better value for libraries. It would also make previews available for e-editions, rather than the traditional shipping of print approval titles. The work can then be evaluated by selectors, and possibly even by patrons. If a print version is desired, it could be delivered later. Ultimately, this could be an attempt at real-time content delivery. Henry predicts libraries will see greater input from patrons in regard to e-books.

While timing is important, to achieve a true e-book approval plan, vendors must expand the profile not only for the library but also for the book. For books, elements to profile might include platform availability, format availability, access options, and print/electronic availability and timing. These book-related profile elements could then be matched to the library’s needs, such as preferred platform, format preference, preferred access type, vendor awareness of already-purchased packages or subscriptions, and timing tolerance in terms of print/electronic availability.

The question “So what is the value of vendors?” was raised. Vendors are the aggregators of aggregations and are the one source that brings these materials and options together. They produce the best view across platforms, publishers, and formats. Vendors have developed workflows with libraries and have powerful search tools that allow similar content from multiple sources to be evaluated within streamlined processes that account for prior purchases. He noted that Blackwell has used Endeca to provide a faceted view of libraries’ purchases, including all manifestations of individual works.

So are we ready? Is now the time to perfectly duplicate the print approval plan across a mix of print and electronic titles? Henry noted that the answer is “no.” Are we ready for an electronic only approval plan? Henry suggested yes. If simultaneous release of print and electronic versions becomes the reality, it is possible to augment library approval profiles to identify each manifestation of a work and provide a mechanism for acquiring the item.

As an industry and as a supply chain, Henry thinks that e-book approvals will soon be the norm. He also believes that patron-driven purchasing will become larger, but will not supplant approval plans. Vendors are the leaders in this area, with content aggregators not far behind.

Steven Bosch, University of Arizona, led off with a question: Are libraries ready for e-book approval plans, from the library perspective? He noted that he was prefacing his remarks with caution, and that his statements may sound negative. While he likes e-books, and understands their pros and cons, libraries need to examine their uses before adopting new business models. He also mentioned that some publishers have e-book arrangements which are very user friendly (to libraries) while others are not.

Platforms and content display are very important. Flexible displays, including use on PDAs, handhelds, and the like, matter. Choosing not to make e-books available via portable technologies creates a barrier, as do limits on use (printing, downloading), which are very unpopular with users. Other unpopular limitations from some publishers include the inability to cut and paste and restrictions that do not allow for interlibrary loan. Bosch reminded attendees that if collections are changing at large libraries, one must bear in mind that small public libraries do not have the same access. This could create a have/have-not population in terms of access to information. Pricing is an issue, since e-books are typically more expensive than print.

E-books, Bosch noted, are great for text discovery, but readability is not terrific. If patrons need to read cover to cover, an electronic format is not good. Timing of publication for electronic editions is important, and when considering approval plans, time lags from print to electronic are not acceptable. E-books may be the preferred format, but losing six months of access while waiting for the e-edition becomes self-defeating. Interestingly, while e-books have many drawbacks, they are more heavily used, while print books typically are not. Still, caution in selection must be used. Another consideration is that if a library is buying print books that are not being used, it will not want to pay 160 percent of retail cost to buy e-books that also will not be used.

So despite it all, Bosch believes libraries are ready for this change. Vendors are ready to go, and are waiting for libraries to come on board. He noted that he has been looking forward to e-book approval plans for a long time. Since libraries are now making the choice to purchase e-books, it makes sense to use the same supply chain as for print.

Libraries must also consider the issue of archival access: e-books do not yet have the same safety nets as exist for e-journals. This will probably change as e-books become more prevalent, as was the case with serials.

Finally, is the same old supply chain the one libraries want to use? There may be business models better suited to delivering the content. As we adopt e-formats, do we want to fit them into the print supply chain?

A question and answer session followed. Carolyn Morris started off with a question to the audience: Is a sixty day time lag really a big issue? She noted that lots of selectors wait for various reasons. The audience’s answer was a resounding, “Yes it is a big deal!” Funds may be distributed as they become available and this is a use it or lose it situation. When selectors are ready to buy, they want to buy, not wait. Another attendee noted that selectors are working online, and want to be able to order when slips arrive and not wait for weeks or months for the e-version to be available.

Other questions that were raised and discussed include:

Why is print available first? Why not electronic? Documents are born digital, and ABC-CLIO publishes both versions simultaneously, so why don’t others? A publisher representative in the audience noted that content is born digital, then provided to aggregators to process, and this is what causes the delay. In addition, some publishers put electronic content on their own platforms rather than waiting for vendors or aggregators, but there are still lags; publishers often lack the data conversion ability to get electronic content into shape. Publishers with their own platforms are a little quicker to get e-books out to libraries.

The speakers were asked what guidance should be given to library selectors about the future of archiving. The panelists agreed that this was a difficult question to answer. Traditionally, there has been a perception that research libraries are repositories. Economics may show that it is a fallacy that libraries are archiving the scholarly record, since there have been many, many significant cuts. Research libraries are not doing that job well.

A comment was made that patron-driven profiles are still profiles, but rather than loading books, bibliographic records are loaded and patrons make the choices. Columbia found patrons bought a lot that selectors did not want in the collection.

It was noted that librarians are frustrated by the different platforms, print restrictions, and the like, and it is hoped that there will be some standardization in the future. Publishers seem to be more lenient than aggregators. This is complicated by the fact that the restrictions are often the result of system architecture (in the case of aggregators), and this needs to be addressed on the front end when buying the platform. Aggregator agreements with publishers are usually universal, since the restrictions are built into the aggregators’ systems. ABC-CLIO started working with NetLibrary years ago and saw no value in imposing single-user restrictions; ABC-CLIO wanted simultaneous users and agreed that restrictions on platforms are important to know up front. A blog titled No Shelf Required was suggested as a good resource for librarians and vendors. YBP often faces the same situation in that restrictions may be implicit in the aggregators’ architecture. Legal agreements can also come into play. Sometimes, when a customer wants something more and YBP approaches an aggregator, the aggregator in turn approaches the publisher, and the publisher is willing to expand access.

No interlibrary loan is a common restriction for e-books. From ABC-CLIO’s perspective it is important to articulate what interlibrary loan implies. Is it making a copy of an article? This is fine. Opening the cloud to fifty libraries is not acceptable and wreaks havoc on a vendor’s business model. ILL restrictions raise a concern for consortial environments: in such a situation, not allowing ILL goes against the consortial mission. Blackwell noted that they are working on this, but that it may be expensive and technically difficult. MyiLibrary has an e-book loan feature, but they pay the publishers for it. It was suggested that libraries need to sort this issue out with aggregators and publishers.

The speakers were asked if any of them were considering delivery of content to mobile devices. Blackwell finds this to be a challenging environment, since the various mobile platforms may not display e-books the way publishers want them to appear. NISO is working through this, and some aggregators have adopted its standard. Since content can be moved onto different devices, standardization has been difficult. It would be an error for platforms not to move in this direction; delivering content to mobile devices is something users want. Johns Hopkins has piloted some such service, but there is currently no platform that makes them want to invest in approval plans. Patrons are aware of their options in the nonacademic sphere (Kindle, etc.), so it is only a matter of time before the expectation becomes widespread. In addition, students want to interface with course management systems using mobile devices so they can choose content linked in the CMS.

FRBR and RDA: A Glimpse into the Future of Cataloging and Public Displays

Cataloging and Classification Section Forum

Anne Sleeman, ALCTS Paper Series Editor

Barbara Tillett, Chief of the Policy and Standards Division, Library of Congress, opened the forum with background information on RDA (Resource Description and Access), a content standard developed for the digital environment. It is built on the FRBR (Functional Requirements for Bibliographic Records) and FRAD (Functional Requirements for Authority Data) models as well as the IFLA International Cataloguing Principles. We can now integrate bibliographic data with anything else in the Internet environment; we have content but do not yet have delivery mechanisms. During the transition between what we currently have and the models for the future, we will continue to need authority, bibliographic, and holdings records. RDA will bridge the current and new systems with tables that map elements among RDA, MARC, Dublin Core, and other schemes. Authority control will need to include registries for controlled vocabularies; linking data is essential. Future systems will take full advantage of data mined from catalogers’ work. The vision for the future is linked data available on the web, done once and shared with the world.

Diane Vizine-Goetz, OCLC Research Scientist, reported on what users want. A survey of WorldCat users revealed that they want more subject information, summaries, abstracts, and tables of contents. She also reported on an OCLC research project, Fiction Finder, that expands the use of FRBR clusters in WorldCat and includes richer descriptions and navigation help to meet user needs. Researchers used data available in all of the records in each FRBR cluster to provide more avenues of access to each work. More information about the project can be found at http://www.oclc.org/research/researchworks and http://fictionfinder.oclc.org.
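
The FRBR clustering that Fiction Finder relies on can be sketched very simply: group bibliographic records into work-level sets using a normalized author/title key. The snippet below is a deliberately crude illustration, not OCLC’s actual work-set algorithm, and the field names are hypothetical.

import re
from collections import defaultdict

def work_key(record):
    """Reduce author and title to a crude, normalized work-level key."""
    def norm(s):
        return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (norm(record["author"]), norm(record["title"]))

records = [
    {"author": "Cather, Willa", "title": "My Antonia", "format": "print"},
    {"author": "Cather, Willa", "title": "My Antonia.", "format": "e-book"},
    {"author": "Cather, Willa", "title": "O Pioneers!", "format": "audiobook"},
]

clusters = defaultdict(list)
for rec in records:
    clusters[work_key(rec)].append(rec)

for key, members in clusters.items():
    print(key, "->", [m["format"] for m in members])

The two editions of the first title fall into one cluster, which is what lets a display say “this work is available in these formats” instead of presenting a flat list of records.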

John Espley of VTLS and Robert McDonald of Indiana University described current and future implementations of FRBR in integrated library systems. Espley maintained that FRBR is useful for filling hold requests as demonstrated by VTLS. McDonald described the OLE (Open Library Environment) Project ( http://oleproject.org). Indiana University is coordinating the project among members of the library community using grant funding and open source software and development methods to create a new ILS to work with the data in legacy systems and e-content.

Collecting Free Web Resources: Selection, Archiving, Metadata, Access

Collection Management and Development Section Forum

Cindy Schofield-Bodt, Southern Connecticut State University

A lively online discussion earlier this year led to this forum, which explored how web resources are selected for archiving, what harvesting tools are being used, and how bibliographic control is being achieved.

Melanie Wacker, Metadata Coordinator for Columbia University Libraries, introduced the topic by raising questions about issues such as finding and repairing broken links, keeping track of updated pages, and making use of constantly emerging and evolving technologies and appropriate tools and resources. She cited the National Library of the Netherlands and the periodic snapshot of the entire Austrian national web as good examples of web archiving, although it was agreed that these models may not translate well to the United States’ universe of web material. The two speakers followed with descriptions of their projects.

Tracey Seneca’s responsibilities as Web Archiving Service Manager at the California Digital Library grew out of the Web-at-Risk project funded by the National Digital Information Infrastructure and Preservation Program (NDIIPP). She discussed the merits of the Web Archiving Service and the workflow issues libraries need to resolve to move forward with similar projects. The top three issues driving her work are putting web archiving tools in the hands of librarians, integrating web archiving into existing workflows, and fostering collaboration. Three tools available for librarians to use in archiving websites are Archive-It, the OCLC Digital Archive with CONTENTdm, and the Web Curator Tool. Though most web-crawler tools use open source software, institutions still need to plan for storage infrastructure and servers from which to run the crawlers. Seneca likened a web archive collection plan to other familiar concepts: traditional subject-based resource collecting; serial collecting, in that variations of content are regularly repeated; and three-dimensional structural storage. She also noted some of the challenges of collecting local and state government documents.

Catalog librarian Alex Thurman described the web archiving workflow at Columbia University where he is responsible for collecting and organizing web resources related to human rights as part of the Columbia University Library’s Center for Human Rights Documentation and Research. The project is meant to “systematically capture and archive the websites of human rights non-governmental organizations based in the U.S. and in other countries.” Thurman uses web crawlers to capture sites on a regular basis and provides metadata with relevant access points to organize the findings and make them accessible to researchers. The frequency of recrawls is dependent on variables such as at-risk content, high research interest or knowledge of frequent updates. Access to captured material is guided by permission policies and negotiation. The Delicious page that was created to survey and tag existing web content related to human rights is available at http://www.delicious.com/hrwebproject. More useful resources follow.

Toolsets for Web Archiving

Additional Information

Holdings Information Forum: E-serials Holdings: Whether, Why, and How?

CRS Committee on Holdings Information Forum

Susan Thomas, Indiana University

This forum, composed of a four-member panel, presented and discussed different methods of and perspectives on the management and display of electronic serials holdings data.

Heather Staines, Springer, discussed use of the DOI (Digital Object Identifier) to identify and provide a stable link to content. She noted that DOI information can be used to identify any type of electronic content, will remain constant despite changes in ownership of the data, and makes the link from citation to content work transparently. She stressed the importance of depositing DOI and metadata information into CrossRef, the largest DOI registration agency, to make the content readily available.
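
The stable-link behavior Staines described comes from the central DOI proxy: a citation carries the DOI, and the proxy redirects to wherever the content currently lives. A minimal sketch follows; the DOI shown is simply the DOI Handbook’s own identifier, used here for illustration.

import urllib.request

def doi_to_url(doi):
    """Turn a DOI into a resolvable link via the central DOI proxy."""
    return "https://doi.org/" + doi

link = doi_to_url("10.1000/182")   # illustrative DOI
print(link)

# Following the redirect shows the current publisher location; the DOI itself
# never changes, even if the content moves or changes owners.
request = urllib.request.Request(link, method="HEAD")
with urllib.request.urlopen(request) as response:
    print(response.geturl())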

Rebecca Guenther, Library of Congress, discussed the advantages and limits of using ONIX SOH (Serials Online Holdings) to communicate detailed e-serials holdings information from the publisher end to MARC 21 systems.

Peter McCracken, co-founder and Director of Research for Serials Solutions, focused his discussion on the importance of enumeration displays in holdings lists and link resolvers, noting that chronology displays tend to be prevalent while enumeration in e-holdings displays occurs less frequently. He concluded by noting that, despite the demand and necessity for enumeration in e-serials holdings data, it rarely has any effect on users’ access to content, and that fully incorporating enumeration in holdings data is truly cost-prohibitive.

Myrtle Myers, OCLC, discussed the use and display of e-serials holdings data in WorldCat, noting how that information is being implemented for resource sharing and collection analysis purposes.

Questions from the packed audience focused on the importance of chronology and enumeration.

PowerPoint handouts are available on the ALA Midwinter wiki.

Standards Update Forum

Continuing Resources Section Forum

Gracemary Smulewitz, Rutgers University

Speakers: Peter McCracken (Serials Solutions); Karen Wetzel (NISO) substituting for Ted Koppel (Auto-Graphics)

Peter McCracken reported on the work of KBART, the Knowledge Bases and Related Tools Working Group. KBART is a partnership between UKSG (the United Kingdom Serials Group) and NISO. In 2006, UKSG commissioned a report to identify problems and inefficiencies in journal article data exchange in the OpenURL supply chain, with the hope of maximizing OpenURL linking. In response to the report, the KBART Working Group was established to identify specific problems, recommend best practices, and inform and educate all who participate in that supply chain.

The intent of KBART is to get better data for everyone: those who provide the data (publishers and aggregators), those who process the data (link resolvers and ERMS), those who present the data (libraries and consortia), and those who use the data.

Peter McCracken, Serials Solutions, and Charlie Rapple, TBI Communications (formerly Ingenta), co-chair the KBART Working Group. There are many group members representing research libraries, publishers, vendors, and link resolvers.

KBART is addressing the problems found in URLs that contain bad data and bad formatting, which produce bad results. These problems produce both false positives and false negatives. If the user is led through a resolver to a URL that suggests the full text is available when it is not (a false positive), or if the resolver finds no URL when one actually exists (a false negative), the process is not working well. Some of the incorrect data results from title changes or ceased publications that are not reflected appropriately; other errors are often the result of an improper data structure.
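
A small sketch shows why this matters to end users. It is not the KBART specification, just an illustration of how a resolver’s holdings check turns stale coverage data into the false positives and false negatives described above; the ISSNs and dates are invented.

knowledge_base = {
    # ISSN: (first year of coverage, last year of coverage, or None for "to present")
    "1234-5678": (1998, None),   # stale: the journal actually ceased in 2005
    "8765-4321": (2002, 2004),   # wrong: the library actually has 1995 to present
}

def full_text_claimed(issn, year):
    """What the link resolver believes, based only on its knowledge base."""
    if issn not in knowledge_base:
        return False
    start, end = knowledge_base[issn]
    return start <= year and (end is None or year <= end)

print(full_text_claimed("1234-5678", 2008))  # True: a false positive, the user hits a dead link
print(full_text_claimed("8765-4321", 1996))  # False: a false negative, available content stays hidden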

Some content providers are unaware of the process and do not know why it matters. It is essential to get the KBART word out to our profession. KBART may be able to solve the problems through education and advocacy. KBART can offer more and better examples of OpenURLs and better examples of accurately transferred content.

Adam Chandler is leading a project at Cornell to capture a massive number of OpenURLs and make them available in a database. He will examine them to see what is working, and this information will help in educating all parties. KBART hopes to incorporate this into a best-practices approach and will not attempt to develop a standard. Many large companies have already invested considerable money and time in the OpenURL process, so it would be very difficult to go back to the beginning to implement a standard.

KBART has been working for just over a year, and a draft report has been produced. The report will be available following Peter McCracken’s review.

KBART contact information: Peter McCracken (NISO co-chair), Serials Solutions; Charlie Rapple (UKSG co-chair), TBI Communications.

McCracken opened the floor to questions and the audience presented examples of problematic URLs including selective full text, e-books and print material.

Karen Wetzel of NISO reported on the progress of the CORE Working Group, announcing that she was filling in for Ted Koppel, who had a conflict and could not present. CORE stands for Cost of Resource Exchange. CORE was established to build on a white paper published by Medeiros et al. regarding acquisitions data elements to be exchanged among ERM systems, vendors, and ILS systems. ERMS customers want the ability to find acquisitions information while working in the ERMS.

Ed Riding of SirsiDynix, Jeff Aippersbach of Serials Solutions, and Ted Koppel surveyed various ERMS and ILS vendors to determine feasibility and then discussed the proposal with NISO, which agreed to partner. They then solicited members.

The first meeting of the CORE Working Group was on August 6, 2008. The group meets bimonthly, and the membership consists of librarians, vendors, and consortia. The group recognizes that there are broader applications beyond the ERMS and ILS exchange, and that the possibilities of vendor and consortial exchanges must also be considered.

Between August and September, the group developed use cases. In October, they analyzed those cases for common needs such as vocabulary, data elements and product information. From October through November, use cases were refined and core elements were identified. In November a message structure and transport mechanism were established. From December through January, concentration has been on drafting a document.

Some decisions that were reached:

  • Two levels of query have been identified:
      • Cost information only
      • Cost information and product information
  • The types of queries:
      • Send information for one particular transaction
      • Send all information
  • The XML structure has been developed; it is simple and compact and can be seen in the draft.

The group determined that the SUSHI protocol web service mechanism is a good general envelope for transport and can be used by CORE virtually without changes. It also leverages the vendors’ efforts with SUSHI.
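
The design choice is essentially a layered one: keep an existing request/response envelope and put the new cost payload inside it. The fragment below is purely illustrative; the element names are hypothetical and are not taken from the CORE draft or the SUSHI schema.

import xml.etree.ElementTree as ET

# Hypothetical envelope-plus-payload structure, in the spirit of reusing the
# SUSHI transport for a CORE-style cost query. Element names are invented.
envelope = ET.Element("RequestEnvelope")
ET.SubElement(envelope, "Requestor", id="ils.example.edu")
ET.SubElement(envelope, "Customer", id="library-001")

payload = ET.SubElement(envelope, "CostQuery")            # one transaction, cost only
ET.SubElement(payload, "Resource", identifier="issn:1234-5678")
ET.SubElement(payload, "QueryLevel").text = "cost-only"

print(ET.tostring(envelope, encoding="unicode"))

Reusing an envelope that vendors have already implemented means only the payload handling is new, which is the leverage the group is counting on.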

As of January 2009, the group has completed its first written draft. The group is reading, clarifying, editing, and adding illustrations and examples. The final draft publication is expected soon. Vendors will then write their applications to use SUSHI and CORE and, once they have tested them, will report on problems.

CORE Working Group contact information: Ted Koppel, Ed Riding, and Jeff Aippersbach.

Iowa Libraries Recover and Thrive in the Aftermath of the June 2008 Midwest Flooding

Preservation and Reformatting Section Forum

Stephanie Lamson, University of Washington

In June 2008, the Cedar and Iowa Rivers overflowed, reaching 500-year flood levels and forcing the evacuation of significant sections of Cedar Rapids (1,300 city blocks) and Iowa City, including the University of Iowa (UI) campus. Downtown Cedar Rapids was under nine feet of water. This forum centered on the most effective ways libraries and museums might respond to such potentially devastating flood conditions.

Nancy Baker (Director, UI Libraries) explained the University of Iowa Libraries’ efforts to evacuate 100 library staff within a six-hour period and to secure collections, servers, and buildings in anticipation of flooding. Nancy Kraft (Preservation Librarian, UI Libraries) recounted her work with the African American Museum of Iowa and the National Czech and Slovak Museum, both located in Cedar Rapids, in the recovery of their collections after the floods. Tracy McDonough (Regional Director of Marketing, Belfor) related her company’s past experiences with salvaging water-damaged materials in the aftermath of a number of natural disasters.

All three presentations stressed the importance of:

  • being prepared with up-to-date disaster plans, including contact information
  • knowing your salvage priorities in advance for efficient triage
  • applying best practices to house collections (proper storage improves recovery)
  • responding within 24-72 hours to control the ambient environment and begin salvaging materials
  • protecting all staff and volunteers to minimize health hazards
  • maintaining communication channels and staying visible and open for business
  • acknowledging the inevitability of setbacks, stress, and a long recovery period

Although floodwaters caused serious damage to the University of Iowa’s art and music library buildings and seeped into the main library’s basement (three inches), no collections were lost to water damage. The UI Libraries managed to maintain staff communications and provide library services despite power outages, long closures, and physical displacements.

The two Cedar Rapids museums suffered significant damage to their buildings and collections. They, however, were able to rescue many of their most important collections before the actual flooding and to salvage damaged collections and offer programs to the public afterward. For all of the institutions, the long recovery continues.

Presentations by Baker and Kraft are available online. See also the Preservation Beat blog and the list of resources at the end of Kraft’s presentation.

Who’s at the Wheel? What We’ve Learned about Patron-Driven Collection Development

Publisher-Vendor Library Relations Forum

Katharine Farrell, Princeton University

Patron-driven selection has become part of the collection development programs at many academic libraries for print and electronic resources. It is a component of the business strategy of some vendors, and is a development about which publishers should know more.

The purpose of the forum was to consider several case studies in patron-driven selection and to discuss these experiences with the panel and audience. Steven Bosch, University of Arizona, introduced the panelists and moderated the session.

Lynn Wiley, University of Illinois, Urbana-Champaign, spoke about the experience at UIUC in creating a patron-initiated selection model for print materials. She described some of the obstacles to successful delivery of patron requests for new material, including budget implications, policies of partner libraries on lending new titles, and so forth. UIUC has a local request and delivery system that is part of a statewide consortium offering twenty-four-hour turnaround time. This service includes an own-to-loan component, currently a pilot project, that allows patrons to request purchase of titles that meet certain basic criteria (not a textbook, computer manual, etc.). The library is making in-process data available in the local OPAC and is considering further exposing this data at the consortium level.

Rick Lugg, R2 Consulting, challenged the notion that patrons are not qualified to select material for the collection. He questioned the effectiveness of the long-cherished model of expert selection for collection development. Citing the Kent study*, he noted that only 37.5 percent of the sampled collections circulated. He reported on the experience at the University of Vermont, where the library loaded records from three e-book providers into its catalog and allowed users to select titles for purchase. One result was that the library spent $150,000 less over the course of a year than it expected to spend on this material. Lugg pointed out that this has implications for publishers: the possibility of fewer frontlist titles and reduced sales per title. Publisher income will be less predictable, which will likely result in higher per-unit costs.

Kari Paulson, eBook Library, offered background on the development of EBL, which launched with a demand-driven acquisition model. The company now has four years of data on how this model has been working, and that data indicates that patron-initiated purchases see higher continued circulation. She noted that Brown University loaded all the EBL MARC records into its catalog and gave patrons pay-per-view or purchase options. She echoed earlier comments about budget implications.

Jim Dooley, University of California, Merced, offered a case study of EBL and Coutts MyiLibrary implementations at UC Merced. He pointed out that this is a new UC campus that opened in 2005, and the library is intended to rely primarily on electronic resources. Patron-initiated acquisition with EBL was implemented in response to direct, immediate need; 30 percent of selected titles are accessed more than once. MyiLibrary began as a print book approval plan but morphed into a patron-driven plan for the sciences; 44 percent of selected titles are accessed more than twice. Dooley then outlined some of the implementation issues to consider, including timely loading of MARC records, usage statistics, and profiling the titles the library makes available for patron-initiated purchase. He stated that this approach has delivered a huge savings in staff time as well as satisfying patron needs.

A question and answer session followed.

* Kent, Allen. Use of Library Materials: The University of Pittsburgh Study. New York: M. Dekker, 1979.

RDA Update Forum

Yoko Kudo, Texas A&M University

Shawne Miksa, University of North Texas, Chair of the RDA Implementation Task Force, opened the forum by introducing four speakers: John Attig, Nannette Naught, Don Chatman, and Beacher Wiggins.

John Attig, ALA representative to the Joint Steering Committee (JSC), provided a brief update on the draft of Resource Description and Access (RDA). The draft was released on November 17, 2008, and responses from national constituencies were due on February 2, 2009. In the meeting to be held from March 12 to March 20, 2009, the JSC will review the comments received and make final decisions on all outstanding issues raised in them. The JSC will also begin considering maintenance issues during this meeting and plans to finalize the full text in early July.

Nannette Naught, RDA Document Manager from IMT, Inc., provided a quick demonstration of RDA Online. She began by introducing the three tabs (RDA, RDA Tools, RDA Resources) on the navigation panel, and demonstrated the different features and functionality available on each tab.

  • RDA: browsing, searching, and downloading RDA content; adding bookmarks and annotations to text; customizing the views of the document panel.
  • RDA Tools: entering the content through different contexts (such as the FRBR entities and cataloging workflow); creating and sharing custom examples.
  • RDA Resources: accessing RDA-related resources, including AACR2.

The online demo, including the help system, will be available in February 2009.

Don Chatman, ALA Publishing, shared provisional ideas for subscription and pricing options for the RDA product. Subscription options will be offered at different pricing levels, similar to other online products such as Cataloger’s Desktop. A one-time purchase option is also being considered to meet the need for rule browsing. A question was raised from the audience about the publication of a concise edition and translations into other languages; Chatman and Attig stated that these had yet to be determined. Naught, in response to another question from the floor, stressed that the product would be fully web based, not a client-server application.

Beacher Wiggins, Library of Congress, outlined the schedule and method of the RDA testing. The testing, led by the three U.S. national libraries (LC, the National Agricultural Library, and the National Library of Medicine), is scheduled to start in July, when the final draft of RDA should be ready for use. The testing period will be six months in total: the first three months will be a period for training, and the next three months will be spent on formal testing, in which about twenty testers will create records using both RDA and AACR2 in the current cataloging environment. The application form for test participation will be available in a few weeks. Wiggins emphasized the importance of selecting testers from different communities and a wide range of backgrounds. The three national libraries plan to decide, based on assessment of the test records, whether to recommend implementation of RDA; that decision is expected between January and March 2010. A public website will be set up to share all the documentation produced and other information exchanged during the testing process.

Ed.’s Note: We are fortunate to also have reports on the following programs, which took place during the ALA Midwinter Meeting in Denver.

Next-Generation Bibliographic Control: What Is the Brave New World?

Shana L. McDanold, University of Pennsylvania

The Ex Libris-sponsored program opened with a presentation from Diane Hillman on the role of metadata in our future. She framed her remarks around how to incorporate change, build value, and encourage user participation. Hillman reviewed what must be left behind for us to change successfully: the world of data in isolated silos, the view of metadata based on catalog cards, the search interfaces even librarians dislike, and the limited functions of current library software.

Hillman then moved into a review of the various data standards now available, discussing how they are changing the world of metadata. She encouraged attendees to explore the National Science Digital Library Metadata Registry and get involved in the sandbox. Hillman emphasized that we must build new frameworks that are flexible, in which MARC is only one of many possible metadata formats, and that are focused on the service without commoditizing the output of the data. She sees a new paradigm in which we make better use of machines to manage ever-increasing data, in which innovation is key, and in which we can accommodate data in a variety of packages.

Hillman stressed, however, that user participation is necessary to make all of this happen. This is not a challenge to privacy, but rather a call for an end to descriptive objectivity. Users will participate only if they get something back. Hillman cited the semantic web as an example, as it offers open linked data rather than packages; users are encouraged to play, and flexibility is rewarded. The semantic web also challenges vendors and users to examine past assumptions about the world of metadata and move forward.

The next speaker was Corey Harper. Harper continued where Hillman left off, discussing the importance of open environments and open data such as the semantic web. He again stressed the importance of user participation, describing how informal efforts feed into the formal development of relationships and structures. Grassroots experimentation and testing are needed to create the tools and infrastructure necessary for formal, institution-based specifications around data.

Harper also focused on the importance of linked open data in environments like the semantic web, encouraging us to open up the silos Hillman mentioned in her talk. Library data can add value to the community if we open it up to experimentation. He described several projects built around library data, including the Library of Congress Subject Headings in SKOS and the Bibliographic Ontology Specification worked on by the Library of Sweden. Harper ended his presentation with a plea to open up data to allow for experimentation. This open data model would create a distributed bibliographic control environment in which linked data is the support structure.
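To make the idea of subject headings as linked data more concrete, the following is a minimal sketch, in Python with the rdflib package, of how a single heading might be expressed as a SKOS concept. The record numbers, labels, and relationships shown are hypothetical placeholders for illustration, not actual LCSH data.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Base namespace for LCSH identifiers at id.loc.gov; the specific
# record numbers below are invented placeholders.
LCSH = Namespace("http://id.loc.gov/authorities/subjects/")

g = Graph()
g.bind("skos", SKOS)

concept = LCSH["sh00000001"]
g.add((concept, RDF.type, SKOS.Concept))  # the heading as a SKOS concept
g.add((concept, SKOS.prefLabel, Literal("Cataloging", lang="en")))
g.add((concept, SKOS.altLabel, Literal("Library cataloging", lang="en")))
g.add((concept, SKOS.broader, LCSH["sh00000002"]))  # link to a broader heading

print(g.serialize(format="turtle"))

Expressed this way, each heading becomes an addressable node that other data sets can link to, which is the "support structure" role Harper described.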

The final speaker of the session was Kathryn Harnish of Ex Libris. She described the new Unified Resource Management (URM) model being developed by Ex Libris and its new Metadata Management System (MMS). Together they promise new efficiencies within libraries, offering flexibility and the ability to take advantage of technologies to improve workflows. While the project is still under development, its goal is to provide an infrastructure governed by the community using it, creating a collaborative and flexible environment for experimentation built on a solid foundation.

Program for Cooperative Cataloging Participants Meeting

Shana L. McDanold, University of Pennsylvania

Chair David Banush opened the meeting, which began with brief updates from Banush and various groups on the activities of the PCC and its Standing Committees over the previous six months. All reports and updates are available on the PCC website.

Banush introduced Karen Calhoun of OCLC. The focus of the program was a presentation by Calhoun, written with Janet Hawk, titled “Online Catalogs: What End Users and Librarians Want.” The presentation reviewed the results of the recent 2008 market research study completed by OCLC on the quality of metadata in WorldCat. The final report is expected to be published in February 2009.

Calhoun began by discussing how the study of data quality in WorldCat can be generalized to online catalogs more broadly. Our catalogs are built on the rules for a dictionary catalog written by Charles Cutter, which define the library profession’s notion of catalog quality. Past studies of user behavior have concluded that keyword searching reigns. Today, end users’ definitions of catalog quality are shaped by environments such as Amazon and Google Books, whose full records differ from the full record as traditionally defined by the library profession.

The objectives of the metadata quality research study were to identify and compare the metadata expectations of librarians and end users. The research methodologies included focus groups, a pop-up survey, and a librarians’ survey. For end users, delivery is as important as, if not more important than, discovery; seamless integration from discovery to delivery is key. Users made no distinction among the FRBR-defined user tasks, treating them instead as a single process. End user recommendations included improved search relevance, more links to online full text (making it easier to connect), more summaries and abstracts, and more detail in the results.

The librarians’ survey results were slightly different. Discovery was of greater importance to librarians, and this was reflected in their enhancement recommendations. For all librarians, regardless of location, type of library, or library job, merging duplicates and adding tables of contents to records were the top recommendations. Other recommendations included a desire to move toward a “social cataloging” model, increased accuracy and currency of holdings, and education about what users want.

The differences between the end users’ and the librarians’ results came down to the goals of each group: librarians were typically searching for known items, while end users were searching with no defined end goal.

Calhoun ended her presentation with a few ideas for discussion. Catalogs serve many varied constituencies. Despite these variations, however, there are some commonalities across groups. The definition of quality differs for end users and librarians because of each group’s goals.

The session concluded with a brief question-and-answer and comment period. There were few questions. Comments included the observation that discovery can take place elsewhere, but delivery may still depend on the catalog. Enhancements such as standard numbers and other data-quality elements matter for the behind-the-scenes flow from discovery to delivery, serving as key elements for links and APIs.
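As an illustration of how standard numbers carried in catalog records support that discovery-to-delivery hand-off, here is a minimal sketch in Python of building an OpenURL (Z39.88-2004) link for a book from its ISBN. The resolver address and ISBN are hypothetical placeholders; actual link resolvers and their configured parameters vary by institution.

from urllib.parse import urlencode

# Hypothetical link-resolver address; each institution configures its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def book_openurl(isbn, title):
    """Build a minimal OpenURL 1.0 query for a book, using the ISBN
    from a catalog record as the key that enables delivery."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
        "rft.isbn": isbn,
        "rft.btitle": title,
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Example call with placeholder values.
print(book_openurl("9780000000002", "An Example Title"))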

OCLC Enhance Sharing Session

Shana L. McDanold, University of Pennsylvania

Jay Weitz, head of the Enhance Program, opened the session with a few remarks and introduced two guests, Glenn Patton and Karen Calhoun. Weitz highlighted several items from the News From OCLC handout, including the email address for questions and concerns about the OCLC Record Use Policy, and noted that the 2009 OCLC MARC update will include both MARC 21 update no. 8 (October 2007) and update no. 9 (October 2008) and is expected to be completed in the second quarter of 2009. Other news included updates on the project to control Personal Name headings and on the pre-population of the LC NACO authority file with non-Latin scripts; both projects have completed their first phase.

Weitz then introduced Glenn Patton. Patton reported on the WorldCat Quality Research project completed during the last calendar year, which asked “what is quality in a catalog?” with a specific focus on WorldCat and its various users. Over the past twenty years there has been growing recognition that the perception of quality depends on the individual (whether specialist, pragmatist, or end user). The objectives of the Metadata Quality Research project were to look at multiple perspectives (both end users and librarians) and to define a new WorldCat Quality Program from the results. The methodology involved multiple approaches: focus groups, a pop-up survey, and a survey for librarians.

The results from the end user surveys showed that delivery is as important as, if not more important than, discovery, and that there is a need and expectation for a seamless flow from discovery to delivery (as few steps as possible). End user recommendations common across the focus groups and the pop-up survey included improved search relevance, more links to full text, more summaries, abstracts, and details in search results, and professional reviews to assist discovery. Patton noted that there are some questions we cannot ask of end users because they are not fluent in the profession’s vocabulary. We can, however, extrapolate from the results that end users do like faceted browsing, FRBR groupings, and links such as “more like this” or “did you mean.”
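For readers less familiar with faceted browsing, the following is a minimal sketch in Python of the tallying behind a facet sidebar: counting how many records in a result set share each value of a field. The sample records are invented for illustration and are far simpler than real catalog data.

from collections import Counter

# Invented sample result set; real catalog records carry far more data.
records = [
    {"title": "Cataloging Basics", "format": "Book", "language": "English"},
    {"title": "Metadata in Practice", "format": "Book", "language": "English"},
    {"title": "Cataloguing Rules", "format": "eBook", "language": "French"},
]

def facet_counts(records, field):
    """Tally how many records fall under each value of a field,
    the numbers shown next to each facet in a browsing sidebar."""
    return Counter(r[field] for r in records)

print(facet_counts(records, "format"))    # Counter({'Book': 2, 'eBook': 1})
print(facet_counts(records, "language"))  # Counter({'English': 2, 'French': 1})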

The librarian survey focused on what metadata is needed to identify an item, which attributes are most valued, and recommended enhancements. Over half of the respondents reported multiple areas of responsibility. Enhancement requests from the librarian survey (so far) include merging duplicates; making it easier to correct bibliographic records (the purpose of the Expert Community Experiment); more emphasis on the accuracy and currency of holdings; more enrichment data added to records (tables of contents, summaries, and cover art, possibly using APIs and working with outside suppliers to link in data); and more education for end users.

In a comparison of the librarian and end user results, both groups identified the author as the most important component of discovery, though the comparison also reflected the different searching purposes of each group. Delivery showed the same rankings for both groups. The recommended enhancements, however, showed important differences. In terms of discovery, end users want evaluative data, while librarians want quality control that increases access (such as merging duplicates and correcting typos). Both groups want more links on the delivery side, but librarians also asked for improved holdings accuracy, reflecting interlibrary loan needs.

Patton noted that the full report of the survey results will be made available in February 2009 as part of the Perceptions series of reports. Patton then opened the floor to questions and comments, which touched on the scalability of enhancements, the relationship to the FRBR user tasks, and the potential impact of FRBR-based displays.

After the question and answer session, Patton described what OCLC is doing in response. Activities include merging more data from batch loads (such as classification numbers, subject headings, and non-Latin data); examining additional categories of information in incoming records to determine what to include and when including it is appropriate; and deploying the newly redesigned duplicate detection and resolution (DDR) software.
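OCLC’s DDR software is of course far more sophisticated, but as a rough illustration of the general matching idea behind duplicate detection, here is a minimal sketch in Python that clusters records on a normalized title plus ISBN key. The records and the matching rule are hypothetical and are not OCLC’s actual algorithm.

import re
from collections import defaultdict

def match_key(record):
    """Build a crude match key from normalized title plus ISBN.
    Real duplicate detection weighs many more fields and rules."""
    title = re.sub(r"[^a-z0-9 ]", "", record.get("title", "").lower()).strip()
    isbn = record.get("isbn", "").replace("-", "")
    return (title, isbn)

def find_duplicates(records):
    """Group records sharing a match key and return only the clusters
    that contain more than one record."""
    clusters = defaultdict(list)
    for rec in records:
        clusters[match_key(rec)].append(rec)
    return [group for group in clusters.values() if len(group) > 1]

# Invented sample records for illustration.
sample = [
    {"id": 1, "title": "Cataloging Basics.", "isbn": "978-0-00-000000-2"},
    {"id": 2, "title": "Cataloging basics", "isbn": "9780000000002"},
    {"id": 3, "title": "Metadata in Practice", "isbn": "9780000000019"},
]
print(find_duplicates(sample))  # records 1 and 2 cluster together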

The discussion then turned to the Expert Community Experiment. Patton outlined the history behind its development, its goals, and its timeline. The six-month experiment allows users with a Full-level authorization or higher to make additions and changes to almost all fields in almost all records and all formats. Currently excluded are all records coded PCC (both BIBCO and CONSER); LC records not coded PCC are included, however. It is important to note that existing capabilities will not change; the experiment only expands them. More information and links to upcoming webinars can be found online.

This was followed by a brief discussion and brainstorming session on possible changes to the Enhance program should the Expert Community Experiment succeed. Weitz promised to keep the Enhance community informed via the Enhance email discussion list and encouraged attendees to sign up for the list.