JOLA Volume 3, Number 1, March 1970
A concept for mechanized descriptive cataloging is presented, together with four areas of research to be undertaken.
Use of an IBM 1401 computer and a single keypunch operation is described for changing a college book collection from Dewey Decimal to Library of Congress classification; for acquisitions, accounting, and circulation procedures; and for producing a list of periodical holdings. A mark-sense reproducer is used in the circulation system.
A computer-based laboratory for library science students to use in class assignments and for independent projects has been developed and used for one year at Syracuse University. MARC Pilot Project tapes formed the data base. Different computer programs and various samples of the MARC file (approximately 48,000 records) were used for search and retrieval operations. Data bases, programs, and seven different class assignments are described and evaluated for their impact on library education in general and on individual students and faculty in particular.
A centralized data base of MARC II records distributed by the Library of Congress is discussed. The data base is operated by the Oklahoma Department of Libraries and is available to any library that can make use of it. The history, creation, operation, uses, advantages, disadvantages, costs, and future plans of the data base are included, as well as flowcharts (both system and detail) and sample outputs.
In the development of library systems, the movement today is toward the so-called "total" or integrated system. This raises certain design and implementation questions: which functions should be on-line and real-time, and which should be done off-line in batch mode? Should one operate in a time-shared environment, or is a dedicated system preferred? Is it practical to design and implement a total system, or is the selective implementation of a series of applications to be preferred? Although it may not be feasible in most cases to design and install a total system in a single operation, it is shown how a series of application programs can become the incremental development of such a system.
Key-to-address conversion algorithms which have been used for a large, direct access file are compared with respect to record density and access time. Cumulative distribution functions are plotted to demonstrate the distribution of addresses generated by each method. The long-standing practice of counting address collisions is shown to be less valuable in judging algorithm effectiveness than considering the maximum number of contiguously occupied file locations.
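The abstract's central claim can be illustrated with a small sketch. The following is not the article's method, and the division-remainder hash with linear probing is only an assumed stand-in for the algorithms compared; it shows how one could compute both metrics the abstract contrasts: the raw count of address collisions, and the maximum number of contiguously occupied file locations.

```python
def division_remainder(key: str, table_size: int) -> int:
    """Map a key to a home address by summing character codes mod table size.
    (A stand-in hash; the article's actual algorithms are not specified here.)"""
    return sum(ord(c) for c in key) % table_size


def load_file(keys, table_size):
    """Place keys into a direct-access file using linear probing.

    Returns the table and the total number of address collisions,
    i.e. the probes past an already-occupied home or overflow address.
    """
    table = [None] * table_size
    collisions = 0
    for key in keys:
        addr = division_remainder(key, table_size)
        while table[addr] is not None:
            collisions += 1
            addr = (addr + 1) % table_size  # probe the next location
        table[addr] = key
    return table, collisions


def longest_occupied_run(table):
    """Length of the longest block of contiguously occupied locations --
    the clustering measure the abstract argues is the better yardstick."""
    best = run = 0
    for slot in table:
        run = run + 1 if slot is not None else 0
        best = max(best, run)
    return best
```

Two algorithms can then be judged by loading the same key set at the same record density and comparing `longest_occupied_run` rather than `collisions` alone: long occupied runs lengthen probe chains and hence access time, even when total collision counts are similar.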