
       Volume 12, Number 2, May 2005

Technology Electronic Reviews (TER) is a publication of the Library and Information Technology Association.

Technology Electronic Reviews (ISSN: 1533-9165) is a periodical copyright © 2005 by the American Library Association. Documents in this issue, subject to copyright by the American Library Association or by the authors of the documents, may be reproduced for noncommercial, educational, or scientific purposes granted by Sections 107 and 108 of the Copyright Revision Act of 1976, provided that the copyright statement and source for that material are clearly acknowledged and that the material is reproduced without alteration. None of these documents may be reproduced or adapted for commercial distribution without the prior written permission of the designated copyright holder for the specific documents.


Contents:

EDITORIAL: TER Readership Survey
REVIEW OF: A Semantic Web Primer
REVIEW OF: Java Development with Ant
REVIEW OF: IT Ethics Handbook: Right and Wrong for IT Professionals
REVIEW OF: PayPal Hacks: 100 Industrial-Strength Tips & Tools
REVIEW OF: Applied Informetrics for Information Retrieval Research
About TER


EDITORIAL: TER Readership Survey

by Sharon Rankin, TER Editor

TER has just completed a ten-year publishing history, an achievement that former editor Tom Wilson discussed in the previous issue. Tom's editorial has prompted this editor to include an editorial column as a regular feature of future TER issues.

The purpose of the column will be to share with you, our readers, some of the work the Editorial Board is doing to keep this publication relevant to your professional work.

A few of you took the time to respond to the TER Readership Survey included in past issues. In this May 2005 issue, we have again included links to the survey. I encourage you to send us your feedback on the publication, the survey, or a particular review article.

The TER Editorial Board meets at each annual and midwinter American Library Association (ALA) meeting. At the coming meeting in Chicago we will discuss how to improve this publication as part of an annual review of achievements.

This month we also say goodbye and thank you to a number of TER Editorial Board members who have completed their terms: former editor Adriene Lim (Wayne State University); Paul J. Bracke, Publisher Relations Editor (Arizona Health Sciences Library); Linda Robinson Barr (Texas Lutheran University); Kathlene Hanson (California State University, Monterey Bay); and Stacey Voeller (Minnesota State University).

This issue contains five book reviews. Cindy Schofield-Bodt, a regular TER reviewer, provides insight into the Semantic Web with her review of A Semantic Web Primer. Brad Eden, a former TER Board member, reviews a very technical publication on applied informetrics. Two software books are reviewed, Java Development with Ant and PayPal Hacks, which will interest the software developers and programmers among you. The issue is rounded out by a review of the IT Ethics Handbook.

As the academic year comes to a close, many of us are preparing annual reports for our departments and ourselves and thinking about goals for the coming year. Consider adding a TER book review to those goals. If you cannot find a suitable title on the TER inventory, choose one of your library's titles and submit the suggestion to me. Think of TER as an e-journal where you can gain experience in writing reviews for publication. Choose a title and get in touch!


REVIEW OF: Grigoris Antoniou and Frank van Harmelen. (2004). A Semantic Web Primer. MIT Press: Cambridge, MA.


by Cindy Schofield-Bodt

A Semantic Web Primer is one of a series of books intended to guide designers of the evolving generation of "cooperative information systems" (p. xv). The editors' premise is that the newest need for information systems is evolving from independent database applications to demands for "information services that are homogenous in their presentation and interaction patterns, open in their software architecture, and global in their scope" (p. xv). As long as databases remain individualized and closed to all but the primary intended users, issues of data semantics remain irrelevant. The World Wide Web (WWW) moved databases, application programs and users onto a public and open platform that required a new presentation format. This book is about the emerging technology termed "Semantic Web" and discusses techniques that promise to dramatically improve the current WWW and its use.

By its authors' definition, A Semantic Web Primer is more textbook than reference work. It is organized into chapters and sections that cover current models and envisioned changes, with clear figures and diagrams, extensive bibliographic notes, references, and an in-depth index. The book could nevertheless usefully reside on a software designer's bookshelf, to be consulted during the course of designing application software.

The most important chapter is the first, "The Semantic Web Vision," as this is where the authors lay out the current web structure, outline its advantages and limitations, and define the Semantic Web vision.

The over-arching obstacle to achieving the highest level of information retrieval is that at present the meaning of web content is not "machine-accessible." Current search engines are keyword-based and as such are plagued by challenges: they cannot distinguish relevance among a plethora of retrieved pages, may not retrieve any results, are highly dependent on specific vocabulary, and display results as single web pages from which partial information units must be extracted manually. The authors refer to a new plan that would revolutionize web content by representing it in a "form that is more easily machine-processable and to use intelligent techniques to take advantage of these representations"; they call this the "semantic web initiative" (p. 3).

Chapter 1 describes how the current web is structured and how the Semantic Web will allow a much more advanced knowledge-managed system (p. 4). The examples include business-to-consumer and business-to-business electronic commerce cases as well as a "personal agent" scenario. All are greatly enhanced by Semantic Web applications.

One goal is to enable users to continue to use familiar tools to develop web pages rather than rely on computer science experts. This raises the question of why users should abandon HTML for Semantic Web languages. The answer seems to lie in a natural evolution toward better and more efficient tools. As large organizations adopt XML (extensible markup language) and RDF (Resource Description Framework) standards, both programming steps toward the Semantic Web vision, momentum will lead to more and more "tool vendors and end users adopting the technology" (p. 9).

Perhaps the most noticeable software addition necessary to enable the Semantic Web is the "agent." Agent software will act "autonomously and proactively to collect and organize information and present choices for the user to select from" based on a person's predetermined profile (p. 15). Agents will make use of technologies that already exist at various levels of development: metadata, ontologies and logic.

The final concept in the outline of Semantic Web development is a "layered" approach. As standards are established, two principles will be followed: downward compatibility and upward partial understanding. Researchers will need to recognize fixed points of agreement: each standard should be able to interpret and use information written at a lower layer, as well as anticipate and take advantage of higher layers that may still be in development. The base layer is fixed in XML, with other data models, such as RDF (which retains XML syntax), layered on top.

Chapters 2 and 3 discuss XML- and RDF-related technologies and schemas. XML is introduced as an application of SGML (standard generalized markup language), developed because HTML (hypertext markup language) has inherent shortcomings that limit its ability to describe the structure of the information represented. Both HTML and XML represent information using tags, and both are readable by humans, but XML documents are more easily accessible to machines.
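
For illustration only (the element names below are invented for this review, not taken from the book; in XML, authors define their own), a fragment describing a book might look like this:

    <book>
      <title>A Semantic Web Primer</title>
      <author>Grigoris Antoniou</author>
      <author>Frank van Harmelen</author>
      <publisher>MIT Press</publisher>
      <year>2004</year>
    </book>

Where HTML would wrap the same information in purely presentational tags such as <h2> and <p>, the XML tags name the meaning of each piece of data, which is what makes the document more accessible to machines.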

As this reviewer's background is in library technical services, with roots in and moderate allegiance to the MARC cataloging record, the example that describes a book in chapter 2 was of special interest. All of the problems of description in Semantic Web format were described, but there was no acknowledgement of the tag-based database management properties of the revolutionary MARC format, developed in the 1960s to describe bibliographic data.

Perhaps the most significant difference among HTML, MARC, and XML applications is that XML does not have a fixed set of tags; users define tags of their own. XML has become a preferred uniform data exchange format between applications, acting as the agreed-upon standard between companies and their customers and business partners (p. 26).

Just as XML has become the "de facto standard for the representation of structured information on the Web" and a major tool in supporting machine processing of information (p. 55), RDF provides the foundation for representing and processing metadata. RDF has an XML-based syntax and supports semantic interoperability; the two standards complement each other.
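
To give a concrete, if simplified, sense of how the two standards layer (the example.org URI is hypothetical; the namespace URIs are the standard RDF and Dublin Core ones), an RDF/XML fragment describing a book might read:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/books/primer">
        <dc:title>A Semantic Web Primer</dc:title>
        <dc:creator>Grigoris Antoniou</dc:creator>
        <dc:creator>Frank van Harmelen</dc:creator>
      </rdf:Description>
    </rdf:RDF>

Each rdf:Description makes statements (subject, property, value triples) about the resource named in its rdf:about attribute, all in ordinary XML syntax.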

Beginning with chapter 4, the reader is presented with the various propositions for standardizing and describing the semantics of language in a machine-readable way. OWL is emerging as the standard language. It is described by the W3C (World Wide Web Consortium) as "designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full" (World Wide Web Consortium Recommendation Abstract, http://www.w3.org/TR/2004/REC-owl-features-20040210/).
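
To suggest the flavor of that additional vocabulary, a minimal OWL fragment (class names invented for this example, and the usual owl, rdf, and rdfs namespace declarations assumed) might declare one class as a specialization of another:

    <owl:Class rdf:ID="Textbook">
      <rdfs:subClassOf rdf:resource="#Book"/>
    </owl:Class>

OWL Lite and OWL DL restrict how such vocabulary may be combined so that automated reasoning remains tractable; OWL Full removes the restrictions at the cost of computational guarantees.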

Chapters 6, 7, and 8 present rules, applications, and engineering principles that should be applied as the Semantic Web grows within the confines of accepted standards. These chapters continue the pattern of earlier chapters, including chapter summaries, suggested reading (or clicking), and exercises and projects. Suggested readings include published books, journal references (often available in electronic format), and relevant web sites.

If the book is to be used as a class text, then presumably the professor will be available to review the exercises and projects. There is no answer key to the exercises and projects and indeed some of the suggested projects are so open-ended that it would not be possible to print the "correct answer" in the text. All of the exercises are preceded by material in the chapter that outlines the structure or data treatment being requested, making it possible, albeit difficult, to use the book independently to expand one's understanding of the languages, terminologies and technological innovations that will lead to the Semantic Web.

One industry web site describes the Semantic Web as providing "a common framework that allows data to be shared and reused across application, enterprise and community boundaries. It is a collaborative effort led by W3C with participation from a large number of researchers and industrial partners" (W3C Technology and Society domain, www.w3.org/2001/sw/).

According to the authors of A Semantic Web Primer, "The Semantic Web will not be a new global information highway parallel to the existing World Wide Web; instead it will gradually evolve out of the existing Web" (p. 3). This book provides an overview of the Semantic Web and the software and database structures that will need to be in place for the vision to come to life. More advanced readers will appreciate the orderly review of concepts and the logical progression in the presentation of new material. This reviewer recommends the book for intermediate readers with some knowledge of XML and of current web page design and editing concepts.

Cindy Schofield-Bodt, Librarian, Southern Connecticut State University, is currently the Buley Library Technical Services Division Head. She has taught cataloging and acquisitions courses as an adjunct at the SCSU Graduate School of Library and Information Science.

Copyright © 2005 by Cindy Schofield-Bodt. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at schofieldbc1@southernct.edu

 


REVIEW OF: Erik Hatcher and Steve Loughran. (2003). Java Development with Ant. Manning: Greenwich, CT.

by Mark Cyzyk

As a long-time web application architect and developer, yet a novice at Java programming, I must say that server-side Java web application programming and deployment is incredibly complicated, particularly the deployment aspect. Other web application platforms, such as ASP, PHP, Perl, Python, and Cold Fusion, don't really have the same notion of application "deployment" as holds in the Java world. With these other platforms one simply places uncompiled files of code on a suitably configured web server, configures whatever data source names (DSNs) may be required for backend database connectivity, configures any other extraneous services (scheduled jobs, full-text indexing engines, third-party libraries, and the like), and the application is ready for use. With Java this is different. Very different.

First, Java requires not just an external language interpreter for compilation and execution of application code; it requires a full-blown external application server, something as large, complicated, and resource-intensive as a web server. The application server can either run in tandem with a web server or run on its own and serve up Java applications without any other intervening technology. Second, Java is not a scripting language like the languages listed above. Scripting languages are dynamically compiled; Java code, servlets at least, must be pre-compiled into bytecode before it can execute within the bounds of a Java application server. And third, the way this compilation and deployment to the Java application server happens is somewhat complicated, with a fairly complex series of conventions governing it, e.g., a directory tree containing suitably named folders and files for each individual application. Pre-compiled Java class files must be placed precisely into this directory tree to be properly executable by the parent Java application server.
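
For readers new to these conventions, the directory tree for a single servlet-based web application generally follows the shape below (application and file names invented; the WEB-INF layout itself is fixed by the servlet specification):

    mywebapp/
        index.jsp                 <- pages served to browsers
        WEB-INF/
            web.xml               <- the "deployment descriptor"
            classes/              <- pre-compiled servlet .class files
            lib/                  <- third-party .jar libraries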

Ant is a tool that was built to automate the compilation and deployment of code, not only for Java web applications but for standalone Java software as well. As I have learned more about Ant and read this nicely written book by Erik Hatcher and Steve Loughran, co-committers on the open source Ant project, I have come to think of Ant as a large batch file or system macro interpreter.

Ant works by reading in a structured XML file, build.xml, and responding to the commands ("directives") contained within.
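
A minimal build.xml, sketched here with invented project and directory names, shows the general shape: a project is divided into named targets, each target contains directives, and targets may depend on one another:

    <project name="myapp" default="compile" basedir=".">
      <target name="init">
        <mkdir dir="build/classes"/>
      </target>
      <target name="compile" depends="init">
        <javac srcdir="src" destdir="build/classes"/>
      </target>
    </project>

Running 'ant compile' against this file executes the init target first and then compile, creating the output directory and compiling every Java source file under src.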

As someone who is new to Java web application development and deployment, and not yet capable of generating custom Ant build files from scratch, I found three chapters particularly useful in bringing me quickly to an understanding of how it all works: Chapter 7: Deployment; Chapter 12: Developing for the Web; and Chapter 18: Production Deployment.

Chapter 7: Deployment illustrates how to deploy a Java application using Ant in four different scenarios: FTP-based distribution of a packaged application; Email-based distribution of a packaged application; Local deployment to Tomcat; and Remote deployment to Tomcat.

Copying a directory tree to a remote server via FTP seems, at first, to be a pretty easy task to do programmatically, until, that is, you stop and think about everything that could go wrong. Here the authors focus on how to connect to an FTP server using the <ftp> directive, how to first determine whether that server is even available using the <condition> directive, and how to pause the build process at appropriate times using the <sleep> and <waitfor> directives.
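
A sketch of such a target, with the server name and credentials invented for this example (Ant's <ftp> directive also requires an optional supporting library to be installed), might look like this:

    <target name="deploy-ftp">
      <!-- set the property only if the server answers on port 21 -->
      <condition property="server.up">
        <socket server="ftp.example.com" port="21"/>
      </condition>
      <fail unless="server.up" message="FTP server unreachable; aborting."/>
      <!-- upload the packaged application -->
      <ftp server="ftp.example.com" userid="deploy" password="secret"
           remotedir="/htdocs/app">
        <fileset dir="dist"/>
      </ftp>
    </target>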

Distributing a zipped-up application via email is straightforward. The <mail> directive contains all the email-related attributes you would expect, e.g., "from," "bcclist," "mailhost," and "subject."
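
A hedged sketch, with the addresses invented for this example:

    <mail mailhost="smtp.example.com" subject="Nightly build"
          from="build@example.com" tolist="team@example.com">
      <message>The packaged application is attached.</message>
      <attachments>
        <fileset dir="dist" includes="*.zip"/>
      </attachments>
    </mail>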

The heart of this chapter, however, is deploying web applications to local and remote Tomcat servers. Here the authors provide nice examples of using Ant to install a single WAR file into its appropriate location in the local Tomcat directory tree, or alternatively using Tomcat's own manager directives to do likewise on a remote Tomcat installation.
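
For the remote case, Tomcat distributions of this era ship Ant task definitions that talk to Tomcat's manager application over HTTP; a rough sketch (URL, credentials, and paths invented, and the exact task classes vary by Tomcat version) looks like this:

    <!-- requires Tomcat's catalina-ant.jar on Ant's classpath -->
    <taskdef name="deploy" classname="org.apache.catalina.ant.DeployTask"/>
    <deploy url="http://localhost:8080/manager" username="admin"
            password="secret" path="/myapp" war="file:dist/myapp.war"/>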

Chapter 12: Developing for the Web provides a nice illustration and explanation of how server-side Java web applications are structured, the directory conventions used, and the metadata files required for proper deployment of a web application to a Java servlet container. The chapter then delves deeper into automated pre-compilation of JavaServer Pages (JSP), including JSPs that rely on tag libraries; generation of "deployment descriptor" files, i.e., small XML files that contain metadata about the web application; and the use of HttpUnit or Cactus (two open-source projects) to automatically test the deployment.
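
For reference, the heart of a deployment descriptor is a set of servlet declarations and URL mappings; a minimal web.xml (class and path names invented, DOCTYPE omitted) looks like this:

    <web-app>
      <servlet>
        <servlet-name>search</servlet-name>
        <servlet-class>org.example.SearchServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>search</servlet-name>
        <url-pattern>/search</url-pattern>
      </servlet-mapping>
    </web-app>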

Chapter 18: Production Deployment tied everything together for me. The authors briefly address the challenges of deployment to different application servers/servlet containers, each of which may have slightly different implementations of official specifications, differing demands for underlying versions of Java, etc. The authors then proceed to locate and define how and where Ant fits into the development-to-production software engineering process. The thought here is that, in a production environment, responsibility for the software passes from developers and a test system to operators and a bona fide production environment. Ant can and should be used to streamline this movement of code out of development and testing and into full-blown production. The rest of the chapter provides a nicely detailed illustration of a sample Ant build-and-deploy scenario, moving code from test into production using Ant's various "Power Tools." It concludes with suggested techniques for verifying that the build and deployment processes were successful. Again, this chapter provided, for me, a nice finale to the whole Ant deployment process, despite the fact that it falls only about two-thirds of the way through this encyclopaedic book!

Although I have focused this review on these three chapters and on web application deployment techniques, both Ant and this book include much more. Other topics and capabilities of Ant broached in the book include:

  • Automated code testing with JUnit
  • Executing external programs
  • XDoclet (which appears to be some sort of code generator for something called "Attribute-Oriented Programming" in Java)
  • Ant and XML data
  • Ant and Enterprise Java Beans (EJB) development
  • Ant and Web Services
  • And a set of very useful appendices, including one addressing the integration of Ant with your local IDE.

Like O'Reilly's, the technical books published by Manning are in general excellent. I've read and reviewed Manning publications in the past and have always enjoyed their chatty style and superb typography and layout, qualities crucial to the usefulness of any technical book, in my opinion.

This book is recommended for the experienced Java programmer as well as for the novice wishing to tweak a pre-existing build file or simply attempting to gain an understanding of the culture and conventions of Java application programming and deployment.

Mark Cyzyk, formerly Head of Information Technology in the Albert S. Cook Library at Towson University, is the Web Architect at the Johns Hopkins University in Baltimore, Maryland.

Copyright © 2005 by Mark Cyzyk. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at mcyzyk@jhu.edu

 


REVIEW OF: Stephen Northcutt. (2004). IT Ethics Handbook: Right and Wrong for IT Professionals. Rockland, MA: Syngress Publishing, Inc.

by James L. Van Roekel

Stephen Northcutt's IT Ethics Handbook is an educational and entertaining volume covering everything from spying on employees to handling data to end-user license agreements and privacy. For each topic in his 600+ page book, Northcutt describes the ethics issue, offers "Conservative" and "Liberal" arguments, and gives a summary offering direction to the reader: "When your company wants to take an ethical risk of this nature, it is legitimate for you to go along with it. However, if doing so would violate your sense of honesty, by all means you should refuse" (p. 317). These issues are grouped into larger chapters of broader scope. Each chapter closes with a summary and a Frequently Asked Questions (FAQ) section. I particularly enjoyed the conservative and liberal arguments and found myself thinking more about where my own opinions fell than about the author's summary, which typically landed somewhere in the middle.

Chapter 1 deals with system administration and operations, including responsibilities for system implementers, day-to-day administration, and networking. Chapter 2 discusses auditing online, independent auditing, and conflicts of interest. Chapter 3 addresses vulnerability disclosure. Chapter 4 discusses issues the digital postmaster may encounter, and chapter 5 offers an interesting treatment of e-mail scams. Chapters 6 through 11 address information security officers, programmers and systems analysts, database administration, ISPs, dealing with coworkers, and end-user and employee computer security.

Chapters 12 to 16 cover customer ethics, the trusted assistant, contractors and consultants, mobile computer security, and personal computer users. Chapters 17 through 20 discuss penetration testing (determining an entity's IT security risk; i.e., from hackers and the like), content providing (data integrity and authority), privacy, and employer ethics.

Chapter 21, Conclusion, wraps the book up by including "The Ten Commandments of Computer Ethics" by the Computer Ethics Institute and "The IT Professional Code of Ethics." IT Ethics Handbook will be useful for IT professionals, librarians, educators, and interested readers in general. The book's layout is clear and well designed. I would recommend this book to those listed above, especially those in the IT education field. IT Ethics Handbook may be read from cover to cover, though it may be more helpful and interesting to use dictionary-style. The table of contents lists subheadings under each chapter's main headings, allowing for this type of browsing.

James L. Van Roekel is Director of Academic Instructional Technology and Distance Learning at Sam Houston State University, Huntsville, TX.

Copyright © 2005 by James L. Van Roekel. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at vanroekel@shsu.edu.

 


REVIEW OF: Sofield, Shannon, Dave Nielsen, and Dave Burchell. (2005). PayPal Hacks: 100 Industrial-Strength Tips & Tools. Sebastopol, CA: O'Reilly.

by Janet A. Crum

Founded in 1998 and acquired by EBay in 2002, PayPal has become the leading electronic payment processor, with over 65 million registered users (PayPal, 2005). PayPal has become so popular because it makes e-commerce easy for both buyers and sellers. Buyers with a bank account or credit card can send money via PayPal to nearly anyone with an e-mail address, with no service charges. Sellers can accept secure payments for goods or services sold online, including credit card payments, without special equipment or high monthly account fees.

Despite PayPal's growing popularity, until 2004 only one book devoted entirely to PayPal had been published. That book, PayPal in 30 Pages or Less (2003), appears to provide basic information for buyers and sellers new to PayPal (based on a review of its table of contents at http://www.amazon.com). Then, in 2004, O'Reilly published PayPal Hacks, the first book devoted to all things PayPal. In early 2005, Wiley published a competing title, PayPal for Dummies, which covers some of the same material. Both books reveal that there is much more to PayPal than what the casual EBay buyer or seller sees. It is in fact a sophisticated suite of e-commerce tools that can power everything from an EBay auction to an online garage sale to a high-volume online business. With either of these books, tech-savvy online entrepreneurs can learn how to make the most of these tools to--in the words of every over-hyped web marketing ad of the late 1990s--"maximize their e-commerce potential" without hiring expensive consultants or purchasing expensive software packages. PayPal Hacks, however, provides more power tools for power users, while PayPal for Dummies provides a better general overview of PayPal and its capabilities.

This review will focus on PayPal Hacks, providing comparisons to PayPal for Dummies where appropriate to help readers determine which book will better meet their needs.

While the preface defines a "hack" as "a 'quick-and-dirty' solution to a problem, or a clever way to get something done" (p. xvi), in reality many of the "hacks" explain basic and advanced features offered by PayPal. Others are examples of the second half of the definition, demonstrating clever ways to accomplish important e-commerce tasks using PayPal. Each hack is presented separately and intended to be self-contained. Readers will have trouble understanding the more advanced hacks, however, if they do not understand the basics presented in the first three chapters. An icon appears at the beginning of each hack to rate its complexity: beginner, moderate, or expert. Taken together, the 100 hacks provide an in-depth guide to using PayPal for e-commerce.

The hacks are divided into eight chapters. The first three chapters include tips and tricks for using PayPal as a buyer and seller and provide an overview of PayPal functionality. Chapter 1, Account Management (hacks 1-9), covers setting up and maintaining a PayPal account. Chapter 2, Making Payments (hacks 10-16), covers various ways to send money, while chapter 3, Selling with PayPal (hacks 17-27), addresses how to request money, get money, give refunds, handle disputes, etc. No coding is required to use any of the hacks in these first three chapters; all involve interacting with the PayPal site only. PayPal for Dummies covers much of the same information, along with more material tailored to novice sellers. Examples include entire chapters devoted to selling items on EBay and using PayPal and EBay tools to calculate shipping costs. Also, the narrative style of PayPal for Dummies, as opposed to the discrete hacks approach, may be friendlier for those new to PayPal and/or e-commerce.

Chapter 4, Payment Buttons (hacks 28-44), explains how to create payment buttons for web sites and e-mail messages. Chapter 5, Storefronts and Shopping Carts (hacks 45-60), moves beyond payment buttons to full-fledged online stores and includes many ideas for customizing payment options and the user's shopping experience. Chapter 6, Managing Subscriptions (hacks 61-64), explains how to charge for subscriptions with PayPal and limit online access to paid subscribers. The hacks in these chapters primarily involve working with HTML forms and a little JavaScript; no heavy-duty programming is required. PayPal for Dummies covers many of the same general topics but in less depth and with less-detailed code examples. It does include some excellent tables, however, outlining PayPal item variables, transaction variables, etc.
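
To give a sense of scale, a basic PayPal "Buy Now" button is nothing more than an HTML form posting to PayPal, with hidden fields identifying the seller's account and the item (the address and item details below are invented for this example):

    <form action="https://www.paypal.com/cgi-bin/webscr" method="post">
      <input type="hidden" name="cmd" value="_xclick"/>
      <input type="hidden" name="business" value="seller@example.com"/>
      <input type="hidden" name="item_name" value="Sample Widget"/>
      <input type="hidden" name="amount" value="9.95"/>
      <input type="hidden" name="currency_code" value="USD"/>
      <input type="submit" value="Buy Now"/>
    </form>

The hacks in these chapters build on forms of this kind.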

The hacks in chapters 7 and 8 are the most code-intensive. Chapter 7, IPN and PDT (hacks 65-86), covers the ins and outs of using PayPal's Instant Payment Notification and Payment Data Transfer tools to update a merchant site with payment information in real time. Chapter 8, The PayPal Web Services API (hacks 87-100), explains how to build desktop applications that interact with PayPal and automate various business processes to gain efficiency. Most of the hacks in these two chapters require substantial programming, but each hack includes sample code that readers may modify and use. PayPal for Dummies provides information on how to get started with PayPal Web Services, the PayPal Sandbox, and setting up IPN and PDT, but the information is very general and includes few code examples.

Code examples in PayPal Hacks are written in several combinations of languages and platforms. Client-side scripting examples are written in JavaScript. Most server-side scripting hacks are written in VBScript and designed to run on a web server that supports Microsoft Active Server Pages (ASP), while desktop applications are written in C# and require the Microsoft .NET Framework. Many hacks require a database and use Structured Query Language (SQL) to query it. Code examples "were tested against Microsoft SQL Server 2000 or better, but with some small modifications the examples will work with any popular RDBMS, such as MySQL or Oracle" (p. xx). Readers using scripting languages or platforms that differ from the examples will need some programming expertise to port the examples to their own systems. According to the preface, permission is granted to use code samples in programs and documentation. Users must contact the publisher for permission only when "reproducing a significant portion of code" (p. xxiii).

Overall, PayPal Hacks provides a comprehensive look at PayPal for current and aspiring web merchants. It is best suited for an intermediate audience; users without HTML or programming experience will have trouble using many of the hacks. PayPal for Dummies provides a comprehensive overview of PayPal, perfect for novice to intermediate users. The best choice for novice and intermediate users is to read PayPal for Dummies first, to develop broad knowledge of PayPal and its capabilities, and then read PayPal Hacks for in-depth tips, tricks, and code. If you must choose only one: buyers and casual sellers will get more out of PayPal for Dummies, while sellers who intend to use PayPal heavily or integrate it into e-commerce sites or applications should read PayPal Hacks. With either book, readers whose experience with PayPal is limited to casual buying and selling on EBay will be amazed at the many advanced features and services that PayPal offers. Maybe some of us will even be inspired to become full-fledged online entrepreneurs.

References:

Collingwood, David. (2003). PayPal in 30 Pages or Less. Timesaver Books: Champlain, NY.
PayPal. (2005). Retrieved April 27, 2005, from http://en.wikipedia.org/wiki/PayPal
Rosenborg, Victoria. (2005). PayPal for Dummies. Wiley: Hoboken, NJ.

Janet A. Crum is Head, Library Systems & Cataloging, at Oregon Health & Science University in Portland, Oregon.

Copyright © 2005 by Janet A. Crum. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at crumj@ohsu.edu.

 


REVIEW OF: Dietmar Wolfram. (2003) Applied Informetrics for Information Retrieval Research. Westport, CT: Libraries Unlimited. (New Directions in Information Management; 36).

by Bradford Lee Eden

This book is the first monographic work that focuses on the relationship between the fields of information retrieval and informetrics. The author acknowledges that a number of books on information retrieval have been written in the last four decades, but he notes that they have not specifically addressed how informetrics and information retrieval research interact and inform each other.

Information retrieval (IR) studies the processes involved in access to, storage of, and representation of information containers, along with the systems that implement IR concepts. Informetrics is the study of quantifiable regularities in recorded discourse. Bibliometrics and scientometrics are older names for this field. Researchers in informetrics are interested in understanding and discovering patterns in how information is used and produced. Examples of research in this area include popular search topics on Internet search engines and usage patterns of books in libraries.

After the introductory chapter, in which the author explains the various distinctions and differences between these two topics, Chapter 2 deals with key concepts related to information retrieval and information retrieval systems. Distinctions between IR systems and database management systems, the variety of IR system models (Boolean and related set-theoretic, vector space, probabilistic, and browsing), and visualization are just some of the topics dealt with in this chapter. Chapter 3 introduces readers to the sub-discipline of informetrics. Informetric models discussed include Lotka's law of author productivity, Bradford's law of scatter, and Zipf's law of word frequencies. Citation and co-citation analysis are discussed, along with science indicators and policy, to name a few of the topics covered. Chapter 4 examines data collection and model development techniques. This chapter gets into some pretty heavy mathematical formulas, including reverse J-shaped distributions, potential cumulative growth models, and minimum chi-square estimations. Chapter 5 moves into informetrics and IR system content, which the author calls collections of information production processes (IPPs). Term distribution, indexing exhaustivity research, term co-occurrence, and document citations and hyperlinks are covered here. Chapter 6 looks at informetrics and IR system use, discussing research related to user analysis and user studies. Chapter 7 moves into applications that use informetrics and IR to assist in the modeling, simulation, file design, space planning, process design, implementation, and evaluation of IR systems. Finally, Chapter 8 both summarizes and examines the future of informetric studies for IR systems, as well as the potential for information visualization research. An extensive reference section and index are included.
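
For readers unfamiliar with the informetric laws named above, their simplest textbook statements share a power-law character (empirical constants omitted):

    Lotka's law:  the number of authors who produce n papers each is
                  roughly proportional to 1/n^2
    Zipf's law:   a word's frequency is roughly inversely proportional
                  to its rank r in the frequency list, f(r) ~ 1/r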

This book is highly technical. Although the author attempts to keep the information understandable and concise, the book is written primarily for others involved in these disciplines, and it is therefore difficult for generalist librarians to follow. Most librarians would have a hard time following its lines of thought, as it is geared toward information scientists, computer scientists, and doctoral students in schools of library and information science. For those interested in a well-written, concise documentation and history of the fields of information retrieval and informetrics, however, this book fits the bill.

Bradford Lee Eden is Head, Web and Digitization Services at the University of Nevada, Las Vegas Libraries.

Copyright © 2005 by Bradford Lee Eden. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at beden@ccmail.nevada.edu.

 


TER Readership Survey

About TER

The TER Editor is Sharon Rankin, McGill University (sharon.rankin@mcgill.ca). Editorial Board Members are: Linda Robinson Barr, Texas Lutheran University (lbarr@tlu.edu); Paul J. Bracke, Arizona Health Sciences Library (paul@ahsl.arizona.edu); Kathlene Hanson, California State University, Monterey Bay (kathlene_hanson@csumb.edu); Adriene Lim, Wayne State University (ab7155@wayne.edu); Tierney Morse McGill, Colorado State University (tmcgill@manta.colostate.edu); Florence Tang, Mercer University, Atlanta (tang_fy@mercer.edu); Stacey Voeller, Minnesota State University (voeller@mnstate.edu); Laura Wrubel, University of Maryland (lwrubel@umd.edu); and Michael Yunkin, University of Nevada, Las Vegas (michael.yunkin@ccmail.nevada.edu).