TER Volume 6, Issue 3, May 1999



Telecommunications Electronic Reviews (TER) is a publication of the Library and Information Technology Association.

Telecommunications Electronic Reviews (ISSN: 1075-9972) is a periodical copyright © 1999 by the American Library Association. Documents in this issue, subject to copyright by the American Library Association or by the authors of the documents, may be reproduced for noncommercial, educational, or scientific purposes as granted by Sections 107 and 108 of the Copyright Revision Act of 1976, provided that the copyright statement and source for that material are clearly acknowledged and that the material is reproduced without alteration. None of these documents may be reproduced or adapted for commercial distribution without the prior written permission of the designated copyright holder for the specific documents.


REVIEW OF: Ari Luotonen. Web Proxy Servers. Upper Saddle River, NJ: Prentice Hall, 1998.

by Thomas Dyer

Proxy servers provide security and caching capabilities for networks. For a company that has an intranet and wants to control traffic into and out of the network, a proxy is the logical solution. With the information superhighway becoming more prone to traffic jams, proxy servers provide a means of caching resources so that multiple requests for the same information don't add to the congestion. Web Proxy Servers is loaded with examples and draws on Luotonen's wealth of experience with the subject; the book is well written and easy to read. Ari Luotonen has developed proxy servers and related software for CERN and Netscape. The book is divided into parts, each containing several chapters.
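The caching idea is simple enough to sketch in a few lines of Python (a toy illustration of the mechanism, not code from the book; the URL and page contents are made up): the proxy answers repeat requests from its cache, so the origin server is contacted only once per resource.

```python
# Toy illustration of a caching proxy's core logic (not from the book).
# fetch_count tracks how often the "origin server" is actually contacted.

fetch_count = 0

def fetch_from_origin(url):
    """Stand-in for a real HTTP fetch to the origin server."""
    global fetch_count
    fetch_count += 1
    return f"<html>contents of {url}</html>"

cache = {}

def proxy_get(url):
    """Return the resource, fetching from the origin only on a cache miss."""
    if url not in cache:
        cache[url] = fetch_from_origin(url)
    return cache[url]

# Two clients request the same page; the origin is contacted only once.
proxy_get("http://example.com/")
proxy_get("http://example.com/")
```

A real proxy must also decide when a cached copy has gone stale; HTTP's expiration and validation headers, which govern that decision, are among the protocol details the book covers.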

The first two parts of the book introduce the hardware and software that make up a proxy server. Part 1 covers Overview of Firewalls, Overview of Proxy Servers, and Internal Server Architecture. For the beginner the overview is easy to understand and to the point. Luotonen even defines key terms that bring the reader up to date on proxy lingo. You may want to grab a drink before starting part 2: everything you should need to know about HTTP (Hypertext Transfer Protocol), cookies, ICP (Internet Cache Protocol), and the other commonly used protocols is covered there. There is a wealth of information in chapter 4, which covers HTTP.
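One concrete example of the kind of HTTP detail treated in chapter 4 (the host name below is illustrative): under HTTP/1.0, a client talking through a proxy puts the full URL in the request line, because the proxy must know which origin server to contact, whereas a request sent directly to the origin server carries only the path.

```
GET http://www.example.com/index.html HTTP/1.0      (sent to a proxy)
GET /index.html HTTP/1.0                            (sent directly to www.example.com)
```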

Parts 3 and 4 cover the functionality of a proxy server. Topics such as caching, filtering, monitoring, and access control are covered here. I hadn't realized that caching raises legal issues regarding copyright, but Luotonen addresses the restrictions that each places on the other in chapter 9. Filtering is another important function of a proxy server, and it is covered in detail in part 4.

Security and performance are the topics of parts 5 and 6. Security is one of the main reasons for having a proxy server, and chapter 15 discusses encryption and authentication. Setting up the security is the focus of chapter 16. As anyone who is reading this book probably knows, the most important and usually the most complex part of installing large applications is the setup and configuration. A particularly nice feature of this chapter is that it addresses the risk of creating new security holes while tightening security in other areas. Performance is the focus of part 6. Since server software differs, part 6 doesn't go too deeply into specifics. It does, however, cover the basic ideas behind setting up a proxy with performance in mind.

Reading about it is one thing, but if you eventually want to set up a server, part 7 covers installation and configuration. I found this part of the book particularly interesting. Chapter 21 presents case studies of several mock companies of different sizes and demonstrates how a proxy server should be installed and configured for each.

Web Proxy Servers is very well written and is a good introduction to the topic of proxy servers. The chapters on performance, security, and the appendix on auto-configuration of the client should be a useful reference for the first-time administrator. For anyone who is new to proxy servers, this book should prove to be beneficial.

Thomas Dyer (tdyer@ghg.net) is studying computer science at the University of Houston Clear Lake.

Copyright © 1999 by Thomas Dyer. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at tdyer@ghg.net.

REVIEW OF: George Metes, John Gundry, and Paul Bradish. Agile Networking: Competing through the Internet and Intranets. Upper Saddle River, NJ: Prentice Hall, 1998.

by Diane Mayo

This is a book that focuses on the management issues of using technologies, not the technical issues of implementing them. While it describes an interesting and exciting environment for collaborative work, it ignores the very real issues of motivating staff, gaining commitment to new procedures, and keeping up morale when adopting and using new technologies. It is a book written for organizations that have accepted and embraced change as a way of life and are looking for suggestions on implementing and supporting new ways of working. It doesn't have much to offer those who still need to be convinced that the pain is worth the gain.

The first quarter of the book, part 1, is a not particularly well-done rewrite of 1990s business literature on creating a vision, putting the customer first, dealing with the pace of change, and empowering staff. This is all cast in the "agile" concepts of the Agility Forum in Bethlehem, PA. The overuse of the word "agile" is highly annoying; I nearly stopped reading the book. I'm glad I didn't.

Agility is defined as "an enterprise-wide strategy for being competitive in conditions of change." The authors believe in a team-oriented approach to solving problems and developing products and services. They also believe that a team structure is crucial to building a learning organization and that "agility" is the result of continuous learning on the part of every member of an enterprise.

The last three quarters of the book describe the use of communications technologies to support team work. Although the examples focus primarily on for-profit corporations, these technologies are equally appropriate for any organization trying to achieve productivity gains and stimulate collaborative work through its technology investments.

Part 2 describes the uses of groupware and intranet publishing for facilitating communication within an organization, or between an organization and its business partners. The two options are compared, focusing on the differences between "publishing" and "communicating" and offering suggestions on which approach is best for achieving specific types of results.

The discussion of groupware highlights the advantages of a technical environment in which group communications are organized and archived by topic with the same tools used to create them. Used to link team members separated by time, geography, or other commitments, groupware makes it possible for project participants to "pull" information as they want or need it, rather than receiving it as a "push" from an electronic mail system. Groupware can also be the foundation of a knowledge management system, capturing both "concurrent knowledge" as it is created and used by project teams, and storing project source data for "archival knowledge" management.

Part 3 addresses the operational results that can be achieved by implementing technologies that support new communications models. Project-oriented teams, purpose-driven and competency-based, are prime candidates for benefiting from network-based communications. Actual implementations are described, including those that cross traditional organizational boundaries to unite team members from different departments or organizations. The authors point out that the technology allows project teams to coalesce to achieve a specific strategic result and then disband. Members can then integrate into other results-oriented teams, spreading the knowledge gained in networked teaming to other parts of an organization. One example is the development of the California State University's Distributed Learning Services project.

An organization that has implemented technology-based communications and project-oriented teams will find that its organizational structure for supporting employees needs to change with the environment. Specific examples of the impact of changing communications on training departments and legal departments are provided to highlight the issues. The changing role of managers in a networked team environment is discussed as well. The authors make an interesting argument for a management structure in which management's raison d'être is to "help the work get done" and the manager's role is to support the needs of project teams.

Part 4 provides a framework for implementing multi-organization alliances or partnerships. Cross-functional work within an organization, outsourcing, and aggregation are all discussed. Specific examples are provided for both the commercial and the educational markets including a description of the technology enhanced planning process for the Distributed Learning Services initiative at the California State Universities.

This book won't help a technical staff install or manage anything, but it may help turn an administrator's thoughts to some of the ways an organization's technology investment could deliver additional benefits.

Diane Mayo (ipartner@ix.netcom.com) is a consultant with Information Partners in Cleveland, Ohio.

Copyright © 1999 by Diane Mayo. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at ipartner@ix.netcom.com.

REVIEW OF: Mark F. Komarinski and Cary Collett. Linux System Administration Handbook. Upper Saddle River, NJ: Prentice-Hall, 1998.

by Ray Olszewski

Linux is a version of the Unix operating system that is available for free over the Internet, and at low cost as part of several CD-ROM distributions. The name itself is a portmanteau, combining the first name of the system's creator (Linus Torvalds) with Unix. Developed in the 1990s to run on Intel-based computers, Linux has grown from its beginnings as an unstable plaything into a rock-solid operating system, and it is used worldwide on millions (the exact number is a matter of controversy) of Internet servers and a smaller number of desktop workstations. In recent years, versions have been developed for the Macintosh, the DEC Alpha, and some Sun hardware.

For libraries and schools, Linux is an important addition to the scope of choices for operating systems. For small networks, it is easily the most cost-effective choice for providing Internet services, offering good performance and reliability on inexpensive equipment. In many cases, it can be an effective replacement for Windows NT, NetWare, or AppleShare for providing file and print services to workstations. In some specialized cases, it can even be a good choice for desktop use.

Technically, Linux itself is only the "kernel," a single, complex program that provides access to system resources (keyboard, display, disk and tape drives, serial ports, etc.), manages memory, and schedules the use of CPU time for other processes. Like any operating system, though, an actual Linux distribution includes much more. All the standard distributions include a collection of Unix tools, commonly known as the GNU (GNU's Not Unix) packages, created by the Free Software Foundation; programming and scripting languages from GNU and elsewhere; a GUI (Graphical User Interface) called the X Window System, or X11; a Web server from the Apache Group; and a variety of user-contributed software. Taken together, the software adds up to a collection that provides a complete set of tools for running an Internet server (email, ftp, a Web site, Domain Name Service, and so on).

What Linux lacks is documentation. It doesn't lack this in quantity--if anything, there are too many kinds of Linux documentation. For example, there are standard Unix "man" pages; html documents; "How-To's" and "Mini-How-To's;" general guides to Linux setup, system administration, and networking; a variety of FAQs (Frequently Asked Questions, with answers); and a dozen or so newsgroups that regularly discuss Linux. Unfortunately, most of this stuff is written by people who are more interested in writing programs and improving Linux than in describing clearly how to use Linux. So, with the occasional sterling exception, most of this free documentation ranges from murky to downright impenetrable, especially for the beginning user (or "newbie").

Linux System Administration Handbook and similar books attempt to plug this hole, providing coherent explanations of how to maintain Linux servers and workstations. Authors Komarinski and Collett are experienced Linux hackers, and their intimate knowledge of the practical details of day-to-day operation of Linux shows throughout their writing. At about 320 pages (excluding appendices), the book is not overly long, and the authors seek to pack it full of useful advice while still covering at least the highlights of every important topic related to maintaining Linux systems.

Chapters touch on all the main issues involving use of Linux on servers--installation, user accounts, networking, file and printer sharing (including support for AppleTalk and the Windows "Network Neighborhood"), email, Usenet news, running ftp and Web servers, and security. They also cover system administration tools, shell scripting, scripting and programming languages such as C, Perl, Python, and Tcl/Tk, the X11 user interface, database engines, and installation of less common peripherals (scanners, tape drives, etc.). Although they are less focused on Linux desktop workstations, they include a chapter describing some desktop applications--word processors, drawing programs, and symbolic math packages.

In all, the range of topics covered by the book is impressive. The wannabe Linux system administrator will, from a look at the table of contents, get a good sense of the range of information he or she needs to learn. Unfortunately, in the rush to say at least a little bit about everything, clarity falls by the wayside in the actual presentation.

Material is presented in a confusing order. For example, the chapter on "Applications for Linux" discusses many X11-based programs, but X11 itself is not described until four chapters later. Chapter 2 begins with a detailed discussion (actually, the best I've ever found) of how to configure the LILO loader, but it doesn't include information about how to install the LILO loader or even explain very clearly what LILO is.
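For readers who have not met it, LILO (the LInux LOader) is the small program that boots the kernel from disk, and its behavior is driven by the file /etc/lilo.conf. A minimal configuration of the period looked roughly like this (device names and kernel version are illustrative, not taken from the book):

```
boot=/dev/hda                # install the loader in the MBR of the first IDE disk
prompt                       # show a boot prompt
timeout=50                   # wait 5 seconds (50 deciseconds), then boot the default
default=linux

image=/boot/vmlinuz-2.0.36   # kernel to boot
    label=linux              # name typed at the LILO prompt
    root=/dev/hda1           # partition to mount as /
    read-only                # mount root read-only so fsck can run at boot
```

Editing the file is only half the job: running /sbin/lilo re-reads it and reinstalls the loader, and forgetting that step after a kernel upgrade was a classic beginner's mistake.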

In other cases, the authors describe options so selectively as to bewilder. When discussing sendmail, the program that handles email connections to other Internet servers, they breeze through most configuration issues, touching on but a half dozen options, selected with no apparent rhyme or reason. Their description of the BIND (Berkeley Internet Name Domain) 8.1 package--a new version quite different from earlier versions, and as such unfamiliar to me--left me without a clue as to how to configure the new BIND.
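For orientation (this is background, not material from the book): BIND 8 replaced the old named.boot file with a C-like named.conf. A minimal configuration for a server authoritative for one zone looks roughly like this (the directory path and zone names are illustrative):

```
options {
    directory "/var/named";      // where the zone files live
};

zone "example.com" {
    type master;                 // this server holds the authoritative data
    file "db.example.com";       // zone file, relative to the directory above
};

zone "0.0.127.in-addr.arpa" {
    type master;                 // reverse zone for the loopback network
    file "db.127.0.0";
};
```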

At times, the authors seem more intent on showing how clever they are than on being helpful. For example, when discussing sendmail, they note the program's complexity and observe, "You may have seen that really thick book with a bat on it that gets you into the nitty-gritty of sendmail. It gets really nitty-gritty." (p. 86) This comment has a footnote, which reads, in its entirety: "If you have anything to do with setting up sendmail, get that book." Good advice, but what book? The cognoscenti will recognize the reference (to Costales et al. Sendmail. O'Reilly and Associates, 1993--the O'Reilly books are known for their animal covers), but the newbie will be lost.

Similarly, the authors at one point debate the relative merits of Perl and Python as scripting/programming languages. As an experienced Perl programmer, I could follow and sometimes appreciate this argument. But it was written for readers who already know at least one, and preferably both, of the languages, with far too little description of either Perl or Python to give someone new to Linux the background to follow the debate.

The book includes a CD-ROM that contains the "Lite" version of Caldera OpenLinux. One of the half-dozen major Linux distributors, Caldera mainly sells highly "commercialized" variants of Linux, versions that are "enhanced" by the addition of commercial packages including the Netscape Web server, a proprietary X11 GUI server, and more. The "Lite" version is limited to the standard freeware basics of Linux with only a few custom additions.

Caldera OpenLinux Lite is fairly typical of basic Linux distributions. It has a friendly front end that does a lot of the work needed to customize Linux for a particular computer: identifying hard drives, CD-ROM drives, Ethernet cards, and the like (but not, alas, mice and video cards--a serious failing). Compared to the other Linuxes I've installed, this one seemed slow to install but otherwise had only the usual sorts of problems. These problems are enough to stymie a beginner, though, and the book's appendices on installation are of no more help than the README files included on the CD. Once installed (choosing the "standard" install option), OpenLinux Lite has the normal mix of Unix commands, programming languages, and specialized tools. Out of the box, it would easily provide email, Web hosting, and other basic Internet services for a small to medium site (business, library, or school).

The CD does have the limitation of containing a version of Linux that is over a year old (the files are dated April 1997). This is often a problem with CDs that come with books, since the shelf life of a book is typically longer than that of a software revision. Readers interested in experimenting with Linux would do better to select from the many CD versions of Linux that are available separately from books. Prices range from a few dollars to several hundred depending on the version, but all are more up to date than this book's version. Look at http://linuxcentral.com/ for a wide range of choices.

In the end, I am reluctant to recommend this book. The beginning Linux sysadmin will find it too confusing and episodic in its coverage. The experienced sysadmin will be able to bridge the gaps readily but will find too little in the way of new, useful knowledge. Anyone who has no hands-on experience with Linux will find it simply incomprehensible.

Ray Olszewski (http://www.comarre.com/ray.html) is a consulting economist and statistician. His work includes development of custom Web-based software to support on-line research. He spent three years as Network Manager at The Nueva School, a private K-8 school in Hillsborough, California.

Copyright © 1999 by Ray Olszewski. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at ray@comarre.com.

REVIEW OF: Uyless Black. Residential Broadband Networks: XDSL, HFC, and Fixed Wireless Access. Upper Saddle River, NJ: Prentice Hall, 1998.

by Michael B. Spring

This book is included in the Emerging Communications Technologies part of the Advanced Communications Technologies series produced by Prentice Hall. The book addresses what is quickly emerging as the major bottleneck in our current telecommunications system--the local loop. For those who have access to high-speed Internet connections, it is clear that there is a long way to go before local-loop-based Internet connections measure up to the demands of current World Wide Web usage. As a treatment of design alternatives for the local loop, Black succeeds, as he almost always does on these topics, with clarity, brevity, and accuracy. Black provides a coherent statement of the problem and gives the reader a clear set of criteria for analysis of the issues.

Black assumes that some large percentage of organizations and the vast majority of homes will be connected to information services via a public utility such as the phone company or a cable company. The question addressed is what modifications need to be made in the wiring and signalling techniques used by these service providers to accommodate the new services. The book is very clear about the important decisions to be made in this design process. As just one example, the author addresses the differences between a cable television termination and a phone termination. The wiring plant for the phone delivers power as well as data signal to the termination point while the wiring plant for the cable system does not. When the power is out to a home, the phone continues to work while the cable TV does not. Will the local loop for new information services be modeled on the telephone or the cable connection?

Black sets out to address three major topics: wire-based and wireless schemes for the local loop, coding and modulation alternatives, and protocols that will manage the various services that will be delivered over the loop. He does a magnificent job of laying out the alternatives and describing the dependencies. The chapters on fixed wireless access and hybrid fiber coax solutions are particularly good. He shows the relationship between SONET backbones and various residential broadband technologies with particular clarity. His chapter on GR-303, the overview architecture specified by Bellcore for an Integrated Digital Loop Carrier (IDLC), provides a solid and simple framework within which to examine the various alternatives.

The book would be a stronger tool for some readers if the first chapter were expanded. At one point, the author cites statistics on Web traffic that are becoming well-known facts. For example, while the number of Web-related transfers between client and server is about equally divided in each direction, the number of bytes transferred from servers to clients is more than ten times the number from clients to servers. Most readers will have some appreciation for this as the driving force behind the development of Asymmetric Digital Subscriber Line (ADSL) technology, which replaces equal upstream and downstream bandwidth allocations with asymmetrical ones. In the case of the local loop, the asymmetry favors the downstream bandwidth. Black does an admirable job of describing ADSL and the related modulation and coding issues. The needs analysis in the opening chapter, however, provides a somewhat simplistic picture of the current and projected uses of the network that should define the precise nature of the local-loop architecture and capabilities.
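The asymmetry argument can be made concrete with a back-of-the-envelope calculation (the figures are illustrative, not from the book): if downstream bytes outnumber upstream bytes ten to one, splitting a fixed amount of loop bandwidth symmetrically leaves most of the upstream channel idle, while an asymmetric split matched to the traffic ratio nearly doubles the capacity available in the dominant direction.

```python
# Back-of-the-envelope comparison of symmetric vs. asymmetric allocation
# of a fixed local-loop bandwidth (illustrative figures, not from the book).

TOTAL_KBPS = 1100          # total usable bandwidth on the loop
TRAFFIC_RATIO = 10         # observed downstream:upstream byte ratio

# Symmetric split: half the capacity in each direction.
sym_down = sym_up = TOTAL_KBPS / 2

# Asymmetric split matched to the traffic ratio (ADSL's approach).
asym_up = TOTAL_KBPS / (1 + TRAFFIC_RATIO)
asym_down = TOTAL_KBPS - asym_up

# Symmetric downstream: 550 kbps; asymmetric downstream: 1000 kbps.
```

On the same loop, the traffic-matched split gives the downstream direction almost twice the capacity of the symmetric split, which is the case for ADSL in a nutshell.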

In defense of Black, no one knows for sure how the network will be used, given the rapid rate at which the technologies are changing. It was not long ago that telecommunications scientists and engineers were proud of saying that there was so much excess capacity in the network that we could never use it. A little software package called Mosaic and a trivial little protocol called HTTP ended that boast in a mere three-year period. The emergence of the World Wide Web has dramatically reshaped our projections for usage. This book would be stronger with some more careful analysis of the likely bandwidth consumers of the coming years. Will video on demand really become the driving force? Will it be video conferences? Will it be electronic commerce? With a firmer grasp of the possible or likely needs, it would be easier to assess the alternatives so well put forward by Black.

This book will not have a long shelf life, given the rapidity with which the solutions are evolving. At the same time, it is a must for engineers and researchers working on the design and deployment of the local-loop components of the next-generation network.

Michael B. Spring (spring@imap.pitt.edu; http://www.sis.pitt.edu/~spring) is an Associate Professor in the Department of Information Science and Telecommunications at the University of Pittsburgh.

Copyright © 1999 by Michael B. Spring. This document may be reproduced in whole or in part for noncommercial, educational, or scientific purposes, provided that the preceding copyright statement and source are clearly acknowledged. All other rights are reserved. For permission to reproduce or adapt this document or any part of it for commercial distribution, address requests to the author at spring@imap.pitt.edu.

About TER