Remote Observation Strategies for Usability Testing
Observation is the cornerstone of usability testing and an important strategy in evaluating library Web sites. Traditionally, test administrators have directly observed test users as they interact with the Web site interface. Remote observation offers an alternative that may facilitate the testing process and offer additional capabilities. Usability testing during the California State University San Marcos (CSUSM) Library Web site redesign used a simple remote observation strategy to view the test user's screen on another computer removed from the test location. The library investigated Timbuktu, NetMeeting, and Camtasia as potential software tools to assist in remote observation.
Usability testing has become an important component of Web site development for many libraries. Libraries are user-centered organizations. They provide an entire service--reference--just to help users find information. It is important that their Web sites also meet their patrons' information needs in a user-friendly fashion. The best way to improve a library Web site's usability is to observe users interacting with it and then incorporate their feedback into the site's design.
Norlin and Winters state, "the objective of usability testing is to evaluate the Web site from the user's perspective." 1 Usability testing uses a variety of methods to evaluate a Web site. Battleson, Booth, and Weintrop divide usability testing into three categories: (1) inquiry, which includes focus groups and questionnaires; (2) inspection, which includes heuristic evaluation (comparison of site elements with a list of usability design principles); and (3) formal usability testing, also known as formal observation. 2 Of the various usability testing techniques commonly used to evaluate Web interfaces, only a few, such as heuristic evaluation, solely depend on the Web developer's expertise. Most usability tests directly involve users in evaluating the interface. For example, card sorting asks users how they would organize the site; matching tests check whether users can correctly associate icons with their intended meanings; and questionnaires and focus groups solicit feedback on users' needs. 3
The best-known usability test is the formal observation of the user interacting with the product to be tested. It is the classic usability test. It is so central that the term "usability test" is often synonymous with user observation. It is so important that organizations are willing to hire full-time usability experts and build special laboratory environments to facilitate the observation process. Most usability experts value the feedback from observation more highly than that of other usability tests--so highly that they are willing to cut corners just to make sure it is done. Krug expresses it most passionately: "Testing one user is 100 percent better than testing none. Testing always works. Even the worst test with the wrong user will show you things you can do that will improve your site." 4
Libraries do not have special observation rooms or full-time experts, so they must make do with existing facilities and personnel to conduct usability observations. Doing usability testing on a budget has been a theme in the usability community since Nielsen's 1989 paper, "Usability Engineering at a Discount." 5 With the proliferation of networking technology and the advent of the Web, new ways of conducting usability observations have become possible. Just because most libraries' usability efforts are on a budget does not mean they don't have access to powerful tools to facilitate and enhance user observations.
This paper reviews the direct observation process typically used in library usability studies and introduces an alternative method--remote observation. It describes how the California State University San Marcos (CSUSM) Library applied remote observation to usability testing and examines the software tools used. The strengths and weaknesses of the three software packages as remote observation tools are compared against each other and against more traditional tools such as video. Finally, several new capabilities made possible by remote observation are discussed.
In formal usability observation, test users are observed interacting with the Web interface as they perform specific tasks. The tasks should represent real life situations. Hackos and Redish state that it is desirable "to observe users performing a task they ordinarily work on." 6 The tasks should indicate the specific results users should achieve and should be small enough to fit within the testing time frame. Nielsen suggests that the test should start with a simple task to increase test users' confidence and end with a task that produces a tangible result so users feel they have accomplished something. 7
Generally, it is considered important that the test subjects represent real users of the finished site. In cases where the site has several distinct audiences, Nielsen recommends testing additional users. 8 Norlin and Winters suggest identifying a target group that represents the site's primary users. 9 Krug, on the other hand, believes "it doesn't much matter who you test." 10 While it is desirable to find users representative of the target audience, he feels most site problems will be found by anyone with basic Web knowledge.
Many evaluators recommend testing only a small number of users, often referring to Nielsen's 2000 Alertbox article in which he explains that five users find 85 percent of the problems in a site. 11 Krug and Hackos and Redish suggest two or three users may be sufficient. 12 Spool, who had originally also endorsed a low number of test subjects, has recently challenged the idea that five users are sufficient, suggesting that it may take many more to find the majority of problems in a site. 13 One of the benefits of testing fewer users is the ability to create a more nimble, affordable process that can encourage iterative design phases and multiple usability tests.
It is important to conduct tests involving users in an ethical fashion. While usability testing is not physically dangerous, it can be distressing to users. It is the test administrator's job to minimize the psychological risk. 14 In addition to making sure the user feels as comfortable as possible during the test, the test administrator should also ensure the confidentiality of information obtained from the test. The principle of confidentiality and privacy is particularly important in libraries, which abide by the American Library Association's Code of Ethics. 15 Informed consent is a mechanism to provide information about the test to the user along with an explanation of their rights. In academic and research settings, a formal process is usually established which lays out the rules and procedures for conducting tests involving human subjects. 16
Typically, in a formal usability observation the test administrators include a facilitator and one or more observers. The facilitator gives the test participant instructions and assists the user through the test without actually directing them. She usually debriefs the user at the end of the test using a questionnaire or structured interview. Chisman, Diller, and Walbridge found that debriefing can be particularly helpful at getting at intangible interpretations of what constituted success for the user. One of the key roles of the facilitator is to provide reassurance and encouragement to the user. A common theme throughout the literature is the importance of reassuring the user that it is the Web interface that is being tested, not their skill in using the Web. 17 The other key role of the facilitator is to encourage the user to think aloud in order "to get at users' inferences, intuitions, and mental models as well as their reasons for the specific steps they take and decisions they make while doing the task." 18
The role of the observer is to take notes on the user's interaction with the interface. It can be very difficult for the facilitator to take notes and interact with the user at the same time. Observers can record significant remarks from the facilitator as well as the user's comments, on-screen actions, path through the site, and time to complete the tasks, leaving the facilitator free to interact with the user. Hackos and Redish recommend the observer "overdo the note taking" on the theory you can't have too much information. 19 Most studies used one observer in addition to the facilitator, but there can be as many observers as the room will hold. Krug and others advocate inviting stakeholders--decision makers and people involved in the development of the site--to observe. 20 Seeing how users actually interact with the Web site can be a powerful way to convince stakeholders and Web designers to make changes needed to meet users' needs.
The observer tries to stay out of the user's sight and be as unobtrusive as possible. Being observed can be intimidating to users, particularly when several people are watching them, and their performance and behavior may be affected. The tendency of users to change their behavior when they are aware their performance is being monitored is known as the Hawthorne effect. 21 Nielsen recommends conducting usability observations with as few observers as possible. 22 The observation process also has pitfalls for facilitators and observers. It is tempting to offer advice during the test or to get caught up in the drama and forget to take notes. Other methods of observation may provide more detachment for the observer and intrude less on the test user.
Some organizations have built special laboratories in which to conduct usability tests. The laboratories usually have a separate observation room with a one-way window overlooking the user test area. Video cameras transmit information on the test screen to observers in a special observation room. The separate room helps promote objectivity and allows multiple observers to freely discuss the test as it is happening.
Traditionally, usability observation studies use video cameras to record the test, whether or not a usability lab is available. One video camera tapes the screen while a second camera may be used to monitor the user's facial expressions and record comments. An advantage of videotaping is that it allows in-depth observation to be made later in a controlled setting, without the user present. The observer can review a particular behavior several times or share information with colleagues.
Libraries do not usually have access to a special usability laboratory or even necessarily video equipment. None of the library sites in the literature reviewed used a laboratory, although several appropriated a classroom or conference room for their testing. Only two of the twelve library case studies reviewed used video cameras. 23 Many libraries do not have ready access to this type of equipment, nor do they tend to have testing situations that easily accommodate setting up video equipment. In addition, the equipment itself can be intrusive and intimidating to the user. While acknowledging the advantages of laboratories and video equipment, Nielsen does not consider either essential for conducting practical usability tests. 24
In usability studies users typically are observed directly by the test administrators. However, observing users remotely is a viable alternative that is being explored. Ivory and Hearst describe remote testing as methods that allow you to test users in a different location. 25 The evaluator doesn't observe the user directly, but rather observes or gathers data over the network. Hartson et al. define remote usability testing more specifically to be "usability evaluation wherein the evaluator, performing observation and analysis, is separated in space and/or time from the user." 26
Same-time/different-place testing means the test administrator observes the user at the same time they are performing the test but from a remote location, usually by observing the user's screen over the network using such specialized software as PC Anywhere. The test administrator can communicate with and listen to the user via speakerphone or, in the networked environment, a computer microphone. This type of live viewing of a remote site is what remote testing most commonly means. Different-time/different-place testing means the test administrator observes the user's actions later, usually from some kind of recorded medium. Different-time/different-place testing may rely on the user activating special software on a computer and sending the results to the test administrator. The test administrator cannot directly observe and interact with users in different-time/different-place testing.
Remote observation offers several potential benefits over direct observation. A primary benefit is the ability to reach more users in more locations than traditional methods allow. In particular, it can make it easier to gain access to and schedule users, particularly in certain groups such as busy administrators or faculty. 27 Certain types of remote observation may also be less intrusive to test users, possibly reducing the Hawthorne effect. The network offers alternative methods to administer usability tests. Hammontree, Weiler, and Nayak state, "usability evaluators can now view computer networks and modem connections as frameworks upon which distributed usability labs can be constructed and all network or modem accessible machines as potential windows into remote test sites." 28
Hartson et al. describe seven types of remote evaluation:
- Portable evaluation involves taking a testing laboratory (basically such equipment as video cameras) to the user's location.
- Local evaluation at a remote site requires the test administrator to send the prototypes and interface to be tested to the user, who then evaluates them and sends the results back to the test administrators. Sometimes local evaluation takes the form of subcontracting with a local expert to do formal usability testing with a group of users and having the results sent back.
- Remote questionnaire/survey embeds a questionnaire in the application itself so that feedback is requested at appropriate places as the user works with the application.
- Remote control evaluation controls a local computer from another computer at a remote site. Software such as Timbuktu or PC Anywhere establishes the connection over the network. Audio capture can be made via phone or computer microphone.
- Video conferencing uses a computer-to-computer teleconferencing connection over the network to capture input from a video camera at the user's site.
- Instrumented remote evaluation, similar to remote questionnaire, embeds code in the application to be tested and logs significant events or actions by the user.
- Semi-instrumented remote evaluations require the user to identify significant events affecting their performance and trigger the logging mechanism. 29
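The instrumented approaches in Hartson's list hinge on a small logging layer embedded in the application under test. The Python sketch below is purely illustrative (the class and log format are our own invention, not taken from any tool discussed here); it shows the core idea: timestamp each significant user action so the evaluator can reconstruct the session later.

```python
import json
import time

class EventLog:
    """A toy illustration of instrumented remote evaluation:
    collect timestamped interface events for later analysis."""

    def __init__(self):
        self.events = []

    def record(self, event, **details):
        # Each entry carries a timestamp so the evaluator can later
        # reconstruct the sequence and pacing of user actions.
        self.events.append({"t": time.time(), "event": event, **details})

    def dump(self):
        # The serialized log is what would be returned to the
        # test administrator at the end of the session.
        return json.dumps(self.events)

log = EventLog()
log.record("click", target="Research Hub")
log.record("hover", target="Site Contents")
print(log.dump())
```

A semi-instrumented variant would simply expose a button or hotkey that calls `record()` only when the user flags an event themselves.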
Many of the researchers in remote usability testing are searching for ways to automate and thereby simplify data collection and, possibly, usability information analysis. A number of usability analytic tools have been developed. For example, WebSAT checks HTML code against a set of usability guidelines, and NetRaker helps create online surveys that gather feedback as users interact with the site. While some tools have been developed to facilitate gathering and analyzing data from usability observations, they are not particularly useful for remote observation. 30
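Tools such as WebSAT work by checking a page's markup against usability and accessibility rules. The following Python sketch illustrates the kind of rule such tools automate--flagging images that lack ALT text--and is our own simplified example, not WebSAT's actual implementation.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements that lack an alt attribute --
    one guideline an automated tool like WebSAT can check."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        if tag == "img" and "alt" not in attr_dict:
            # Record the offending image so the developer can fix it.
            self.missing.append(attr_dict.get("src", "(no src)"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.gif" alt="Library logo">'
             '<img src="hub.gif"></p>')
print(checker.missing)  # only the second image lacks alt text
```

Automated checks like this complement, rather than replace, observation: they catch rule violations, not the terminology confusion our users actually stumbled over.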
The most useful remote observation tools have actually been developed for other purposes. Preston lists nine tools that assist in remote observation over a network that are equivalent to Hartson's remote control type of evaluation. 31 These tools come from a variety of software backgrounds, including remote control, support desk and customer service, telecommuting, system administration, and video chat. Hammontree, Weiler, and Nayak recommend tools developed for cooperative computer-support work as being particularly well suited for remote usability testing. 32 They define these tools as being able to provide three capabilities: sharing window applications between two or more computers in real-time; sharing a common whiteboard to write on and paste screen shots; and displaying live video of the remote person on the screen. Hammontree's definition expands on the videoconference type of remote evaluation that Hartson describes. 33 CSUSM found that the most basic capability needed for remote observation was a screen viewer. Many of the tools Preston lists provide this capability in one form or another, particularly as part of remote control, support desk, and application sharing functions. 34
Remote observation over the network, particularly same-time/different-place testing, is vulnerable to technical difficulties. The test administrator must ensure not only that the Web site and underlying Internet technology are functioning, but also that the additional observation software layered on top is working. Before using remote observation software over the Internet, it is important to test it ahead of time outside the local network to check for problems with firewalls. Most non-Web-based software will not work with all operating systems. If the remote computer is a Macintosh, it may not be controllable by PC-based software. The Web is also not completely uniform in technology. Different browsers and different browser versions can cause unpredictable results when you test outside of a controlled environment. A final issue in working with remote test users that Hong et al. mention is the need to install software on the user's local computer and, in some testing situations, to return data from the user's machine to the test administrator. 35
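The firewall check described above amounts to confirming that a TCP connection can be opened to the observation software's port before the session starts. A minimal Python probe is sketched below; the hostname is a placeholder, and the port shown (407, Timbuktu's registered TCP port, to the best of our knowledge) should be replaced with whatever your own software actually uses.

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    a quick pre-test for firewall problems before a remote session."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False

# Placeholder host and port -- substitute your own observer machine
# and the port your screen-viewer software listens on.
if port_reachable("observer.example.edu", 407):
    print("screen viewer port open")
else:
    print("connection blocked -- check firewall rules")
```

Running a probe like this from outside the local network, as the text recommends, surfaces firewall blocks before a test user is ever scheduled.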
CSUSM Remote Usability Observation
In spring 1999 we initiated a project to redesign the CSUSM library Web site. The primary mission of the redesign was to address concerns from users and librarians on finding material on the site. Several evaluation strategies were used to guide redesign efforts. Early in the project, we conducted an informal observation of the old site to identify specific problems, particularly relating to navigation. We held focus groups and distributed questionnaires to better understand students' library Web needs and their opinions on Web site improvements. Finally, we examined logs of Web site activity to identify high-use areas of the existing Web site. We used this information to build the site navigation structure and develop the look and feel of the site.
Near the end of the project, we conducted a formal usability observation to evaluate the prototype of the new site. The goal of the observation was to determine how successful users were in navigating our new interface. We studied five volunteer users selected from student employees of the library. Three sets of questions were developed for the test. A questionnaire at the beginning of the test session gathered demographic information. The formal observation test consisted of eight typical student research tasks. At the end of the test, the facilitator was joined by the observer to debrief the user with a series of questions on their overall opinion of the site.
The test administrators consisted of a facilitator and an observer. The facilitator was the user's advocate and responsible for all interaction with the user. The facilitator established the initial contact with the user, explained the basic purpose of the test, and scheduled the test session. During the test, the facilitator gave users their tasks, waiting until one was completed before providing the next task. She encouraged users to think aloud, provided reassurance, and prompted users to keep them moving or to find out their thoughts. The facilitator was responsible for recording significant user comments and observations of the users' attitudes and behavior.
CSUSM developed a remote observation strategy. Rather than having the observer in the same room with the user, directly viewing the test screen, or using a video camera to record the screen for later evaluation, we used a screen viewer to allow remote observation. The screen viewer enabled the observer to sit in a separate room and view the user's interaction with the Web site interface on his or her own computer screen. Using Hartson's criteria, we conducted a same-time/different-place remote observation using a remote control evaluation tool to view the user's screen. 36
We chose to observe from a remote location to improve both the user's comfort and our concentration. We felt a silent observer behind the user's shoulder would be intimidating, unlike the facilitator, who could establish a friendly, interactive relationship with the user. Informal observations at the beginning of the development process indicated that it was easy for the observer to become caught up in the action and lose focus. In our formal observation, we wanted to provide the observer with a more detached, objective environment to facilitate focused observation and detailed recording. We accomplished this by using remote observation.
The observer was introduced to the user at the start of the test and participated in debriefing the user at the end. Otherwise, the observer adjourned to a separate room. In our case, the room was close enough for the observer to hear the conversation between the interviewer and the user, which helped to put the user's actions into context. We used Timbuktu computer support software as our screen viewer. We selected Timbuktu because it was already installed on our computers as part of the campus computer center's remote support efforts.
The role of the observer was to watch the user's interaction with the interface. The observer recorded users' selections, including button clicks and input in boxes, and noted their paths through the site. She also recorded less obvious cues such as hovers over buttons and dithering between areas of the page. We felt that a great deal could be learned about a user's thought process by his or her mouse behaviors before actually making a selection. This also helped compensate for not being able to see and perhaps hear the user. The observer found hovering could indicate confusion or indecision over the meaning of the button label. Dithering back and forth between different buttons seemed to indicate confusion as to which of two or more choices was the correct one. The facilitator's recorded observations of the user's behavior at these points, as well as overheard comments, helped confirm the observer's interpretation of the mouse behaviors on the screen.
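Our observers interpreted these mouse behaviors by hand, but the "dithering" pattern can be stated precisely: a rapid A-to-B-to-A alternation between two targets. The Python sketch below formalizes that heuristic as an illustration only (the function, its threshold, and the sample data are hypothetical, not part of our actual test protocol).

```python
def find_dithering(hovers, window=5.0):
    """Given (timestamp, target) hover events, report pairs of
    targets the user alternated between within a short window --
    the back-and-forth pattern we read as indecision."""
    flagged = []
    for (t0, a), (t1, b), (t2, c) in zip(hovers, hovers[1:], hovers[2:]):
        # An A -> B -> A sequence completed quickly suggests the user
        # is torn between two labels rather than reading sequentially.
        if a == c and a != b and (t2 - t0) <= window:
            flagged.append((a, b))
    return flagged

hovers = [(0.0, "Site Contents"), (1.2, "Research Hub"),
          (2.5, "Site Contents"), (9.0, "Library Services")]
print(find_dithering(hovers))  # [('Site Contents', 'Research Hub')]
```

Paired with a screen recording or an instrumented event log, a rule like this could pre-flag moments worth reviewing, leaving interpretation to the observer.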
Usability testing showed that students were indeed more successful in navigating the new site. One problem observed with the old site was that students would select the Site Search button as a first choice for conducting their subject search. No one made that mistake with the new Site Contents label. The Research Hub secondary navigation page significantly reduced confusion about where various resources were located within the site. In looking for books and journal articles, users were fairly evenly split on whether they used the site's menu structure or the quick link buttons that led directly to the heavily used library catalog and online indexes. Since users didn't clearly prefer one navigation structure over the other, we felt justified in offering both types of navigation. All of the students tested indicated a clear preference for the look and navigation structure on the new site.
Usability observation revealed that the most serious problems with the new library Web site concerned terminology. The librarians worked together to develop labels for the site but had a lot of difficulty identifying clear terms for some resources. For example, "online indexes" was a compromise that substituted library jargon, "indexes," for computer jargon, "databases," and used the generic "online" rather than "journals" because the sources included other types of materials. In recognition of the confusing nature of some labels, the final site included explanatory roll-overs on the Research Hub secondary navigation page where the resources were listed. The users tested made extensive use of these explanations. Their eyes caught such descriptive terms as "book," "journal articles," and "media," which contributed to their success in selecting the right resource to use. During the debriefing, several users were enthusiastic about the explanations and suggested the use of even more roll-overs, especially on the home page. As a result, supplementary explanations were added to all selections on the homepage, secondary navigation pages, and navigation elements on the content pages. Most of these additional explanations simply used ALT tags, which proved to be beneficial when we revised the site for ADA accessibility. While roll-over and ALT tag explanations helped compensate for inadequate terminology, creating more intuitive labels will be a major focus of the next Web site redesign.
Terminology also caused problems with navigating our basic menu. We designed the site to separate information about library services from the electronic resources, which we called the Research Hub. The idea was to create an area where users would be able to see all their research choices in one place. Once in the Research Hub navigation screen, we found that users were very successful in identifying the various resources available to answer their questions. Testing, however, showed several of the users had trouble getting to that page in the first place--they were selecting "Library Services" and ignoring the Research Hub button. We quickly determined the solution was to bring the Research Hub menu choices up to the main homepage. Unfortunately, by conducting a usability test at the end of the project, we had run out of time and resources to make such a major change to the site before it was released. If we had conducted our usability observation tests early in the site-development process it would have been much easier to incorporate the results in the final design.
Remote Observation Tools
CSUSM has tried three different types of software for remote observation: Timbuktu, Microsoft NetMeeting, and Camtasia. We used Timbuktu as our screen viewer for the usability test because it was already installed on our computers. Shortly after our testing was completed, the campus stopped using Timbuktu. As a result, we experimented with Microsoft's NetMeeting, which was readily available as part of Microsoft Windows on our newer machines. Although designed to facilitate meetings, the software includes the ability to view a screen on a remote computer. Both Timbuktu and NetMeeting are same-time/different-place screen viewers based on network technology. A third product, Camtasia, is a free-standing screen recorder. After seeing a colleague's transcript of an information literacy session recorded with Camtasia, we realized it would be an ideal method for recording usability observation tests.
Timbuktu Pro for Windows
Timbuktu by Netopia is a remote control technical support program. 37 The software is used by computer support personnel to remotely troubleshoot and maintain end-user computers. It can transfer files, load programs, run remote programs, or control the user's computer from the help desk. The program also allows you to simply observe the user's actions. Timbuktu must be installed on both the test user's computer and the observer's computer. To establish an observation session, the observer selects the Observe option and then contacts the user's computer using either the computer name or IP address. The software sends a message to the user's computer asking for permission to make the connection. As soon as the user accepts the connection, their screen appears on the observer's monitor. The observer can adjust the viewing area but cannot control anything on the user's computer. Figure 1 shows the Timbuktu connection window open on the observer's computer with the test user's screen visible in the Remote window with the black background. Timbuktu is one of the few products with versions for Macintosh computers. Timbuktu Pro for Windows costs about $170 for a twin-pack; Timbuktu for Mac OS twin-pack is about $190.
Microsoft NetMeeting
Microsoft's Windows NetMeeting is designed around a conference call metaphor and enables two or more people to communicate or meet online. 38 A session is started by calling another user's computer using the computer's name, IP address, or the user's network name. The called computer, as in Timbuktu, must accept the call in order to participate. The computers that are connected are participants. Upon acceptance, you have a connection to the computer and can talk if the microphone is enabled, but you must take further steps to see the user's desktop. The observed or remote computer must share its desktop. The Share function allows another computer to run programs on the remote computer. To simply observe, the Share function is activated with only the desktop specified. Figure 2 shows the NetMeeting connection window with the Share window open as it appears on the test user's screen.
Like Timbuktu, users can chat, transfer files, and run remote programs, but with NetMeeting they also have a whiteboard collaboration tool and options to simplify hosting meetings. One of the most significant differences is that NetMeeting allows you to activate the computer's microphone so that you can listen and even talk to the user. You can also add a video capture card and video camera to see the user and enable video conferencing. Windows NetMeeting 3 is included in Windows 2000 and XP and available as a free download for other Windows PCs.
Camtasia
TechSmith's Camtasia operates under a different principle than the two screen viewers, Timbuktu and NetMeeting. It is not intended to facilitate live interaction between two computers or two users. It is, instead, a screen recorder. 39 It records all the action that takes place on the screen with the expectation that it will be played back at a later time. As the name implies, Camtasia mimics the actions of a video camera. As a result, it shares many of the benefits of videotaping while avoiding some of the pitfalls. Camtasia is best used for different-time/different-place remote observation.
Camtasia consists of three pieces of software: the Camtasia Recorder, the Camtasia Player, and the Camtasia Producer. The Camtasia Recorder must be installed on the computer to be observed. Before the test begins, a test administrator sets the software to record. From that time forward, until it is stopped, Camtasia records all the action on the screen. Recordings are saved in the industry standard .avi file format. Sound can also be recorded using the computer microphone.
The Camtasia Recorder also offers a number of special features. In addition to recording the entire screen, the recording can be limited to a particular window or area of the screen. The recorder can be set to highlight the mouse clicks. For example, a highly visible red circle can appear around the mouse pointer as it clicks, making it easy for the observer to see significant actions on the screen. On the other hand, these highlights are occurring on the user's computer as they are using it. It may be too disruptive to use this feature while attempting to test a user in a natural setting.
The recorded file is viewed with the special Camtasia Player. The capture plays back in real-time, but the player includes standard controls to pause, fast forward, and rewind the recording. The quality of the playback can vary depending on how well the recording software was configured for the machine it was installed on. Without any configuration, the recorded image can be somewhat degraded and the cursor movements jumpy. After configuration, the quality becomes quite acceptable.
Camtasia comes with an editor, the Camtasia Producer, that trims and joins .avi clips and is much simpler to use than full-scale video editors such as Adobe Premiere. Long, dull stretches of inactivity or repetitious problems can be cut out, making it easier to focus on the primary problems. Recordings from several sessions can also be cut together; for instance, you could combine sequences from several tests to illustrate a problem common among several users or to show the diversity in user approaches. Audio can be added to the file during the editing process. The edited files can be saved in a variety of formats, including RealMedia, QuickTime, and animated GIFs. Figure 3 shows Camtasia Producer with several clips from different recordings of the NetMeeting setup process. The editing window includes a screen shot of the Camtasia Recorder setup window. Camtasia 3, including the recorder, player, and producer, is about $150 for a single-user license; educational and group license pricing are available.
Comparison of Remote Observation Tools
Screen viewer software allows the test administrator to observe the test user's screen from a remote location. Screen viewers such as Timbuktu and NetMeeting do not require any special equipment or testing room; in a way, they give test administrators the option to mimic the usability lab's one-way-mirror observation room using their normal facilities. The software provides an excellent quality image of the user's screen, better than that provided by most video recordings. Software with screen viewing capability is readily available and relatively inexpensive. On the other hand, screen viewing software needs to be installed on the test user's computer. This is not a problem when the computer is under the test administrator's control. However, if the test is run on the user's own computer, the test administrators must arrange for the user to acquire and install the appropriate software, making sure the viewer works with the particular operating system on the test machine. An advantage of NetMeeting is that it is already installed on Windows 2000 and later operating systems. Finally, screen viewers may encounter firewall issues when connecting to machines at other locations over the Internet.
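As a concrete illustration of the firewall issue, the checklist below summarizes the ports that NetMeeting's documentation lists as necessary for full functionality; it is compiled from Microsoft's published requirements rather than from the testing described here, and local firewall policies will vary.

```
TCP 389              Internet Locator Service (ILS) directory lookups
TCP 522              User Location Service (ULS)
TCP 1503             T.120 data conferencing (chat, whiteboard, program sharing)
TCP 1720             H.323 call setup
TCP 1731             Audio call control
TCP/UDP 1024-65535   Dynamically assigned H.323 call control and
                     audio/video streams
```

Because the audio and video streams use dynamically assigned ports, a firewall that passes only the fixed ports may allow a NetMeeting call to connect while still blocking sound and video.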
Removing the observer from the vicinity of the user can create issues that need to be addressed. It is difficult to facilitate the user's test experience if all test administrators are remote from the user. The role of the facilitator is very important, particularly in reassuring users and prompting them to think aloud. Voice technology such as speakerphones, or software such as NetMeeting, can help compensate for separation between test subjects and administrators.
Recording technology captures the user's screen actions as they occur. Videotape is the primary recording medium referred to in most studies; however, sessions can also be recorded with audiotape and with software like Camtasia. The primary advantage of recording is the ability to view the test session at a later time. The problem with same-time observation, whether direct or through a remote screen viewer, is that the observer has a single pass at data collection: the observer has to decide on the spot what is important to record and can't review alternative data later. With a recording, the observer controls the session, pausing to take notes or replaying a section to enhance understanding. The recorded observation can also be shared with colleagues for additional input or used to show stakeholders why certain features do not work. 40
Video recording offers several other advantages over screen viewers. Audio is always available as part of the recording, although it is not interactive. It is possible to capture the user's face, showing their reactions, as well as the screen activity itself. Finally, videotaping does not require control of the user's computer; no software needs to be installed or configured. On the other hand, video recording equipment can be as intrusive as the presence of human observers in the room with the users. The room itself needs to accommodate placement of the video equipment, and the library may need to acquire or borrow equipment such as video cameras, tripods, and VCRs. The image can roll because of the difference in frames per second between the camera and the monitor, resulting in poor playback quality of the videotaped computer screen; a scan converter can correct this problem but adds another piece of equipment to be acquired and set up. Evaluation can also take much longer when writing observations from videotape--up to ten times the duration of the original user test, according to some estimates. In many cases, no one bothers to view the tape at a later time, and it is never used. 41
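The "ten times" estimate cited above translates into substantial staff time for even a small study. A minimal sketch of the arithmetic (the session counts and durations here are hypothetical examples, not figures from the CSUSM study):

```python
def review_hours(sessions, minutes_each, review_factor=10):
    """Estimate the hours needed to write up observations from recordings,
    assuming review takes review_factor times the original session length."""
    return sessions * minutes_each * review_factor / 60

# Eight 30-minute test sessions, reviewed at ten times real time,
# would consume a full work week of evaluation effort:
print(review_hours(8, 30))  # 40.0
```

Even halving the review factor leaves a workload large enough to explain why many recorded tapes are never viewed at all.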
Camtasia shares many of the benefits of video cameras while escaping some of the problems. Since no equipment is involved, it is as unobtrusive to the user as the screen viewers and even allows unattended recording. It is much easier and quicker to scan a Camtasia file than a videotape, which may also increase the likelihood that the recording will be viewed later. The file is easy to edit, and the image quality is better than videotape's. Since Camtasia runs entirely on the user's machine, there are no issues with firewalls or communication over networks, and the completed file can be sent to the test administrator in a variety of ways. Disadvantages of Camtasia include the fact that, like the screen viewers, the software must be purchased and installed on the test user's machine. And unlike video cameras, Camtasia records only the action on the screen; it can't provide a view of the user's face.
Camtasia can also assist single test administrators. Trying to facilitate the test, observe users' behavior, and take meaningful and extensive observation notes at the same time is difficult. Camtasia allows a single test administrator to focus on facilitation without worrying about equipment or extensive note taking; later, he or she can go back to the recording and supplement the notes on the user's screen interactions.
Of course, libraries are not restricted to using just one remote observation tool. It is possible to combine them to benefit from various features. Camtasia can record a testing session that is remotely observed live using a screen viewer. For instance, when we tested the NetMeeting screen viewer, we also activated Camtasia to record the session. The recording was then used to demonstrate the use of NetMeeting in remote observation. 42
Other Possibilities for Remote Observation
Remote observation opens up new possibilities in library usability evaluation. Test users do not have to be physically proximate to the test administrator. In fact, test users do not even have to be aware that they are being observed, although this may raise ethical issues. Test administrators can more easily show developers and stakeholders how users actually interact with the Web site, making it easier to explain why changes need to be made.
Screen viewing technology allows usability experts to expand their access to users for testing. It is possible to see the user's screen whether that user is next door or in the next state. Whether or not the library wants to geographically expand the pool of users it tests, remote observation can also make scheduling test sessions easier. It is possible to test busy faculty in their offices and students off campus on their home computers; the only requirement is that they have the software installed and are connected to a network. Test administrators can facilitate the test session and listen to the user by using software that activates a computer microphone or by using the telephone.
Remote observation tools open up the possibility of anonymously watching users as they interact with the Web site in a natural setting. For instance, a screen viewer--or better yet, a screen recorder installed on a public reference machine--allows the usability expert to watch how walk-up users in the library behave during spontaneous searches. The advantage of anonymous observation is the ability to see completely natural search behaviors that are not influenced by the testing situation. However, the uncontrolled nature of the observation can make it difficult to interpret the results. Preece points out that the reason for the observation influences the method used. 43 If the test administrator needs feedback on specific tasks, same-time observation with controlled questions provides better results. On the other hand, if the test administrator wants to see how users utilize technology in general, observation in the natural environment may be more suitable.
A number of usability experts point out that the best way to convince a programmer or other stakeholder to make changes is to show them how successfully, or unsuccessfully, users interact with their site. Traditionally, the easiest way to show user interaction to a programmer or decision maker was to invite them to observe the testing from the observation booth in a usability lab. Libraries often do not have the option of using a special lab, but it is possible to set up a screen viewer on a computer away from the testing situation, either in an observation room or in the stakeholder's own office. Recording the test session allows the information to be shared with others after the test has been completed. However, stakeholders who are only peripherally involved with the project may not be willing to sit through hours of tapes; Camtasia's Producer makes it possible to edit test recordings into a highlights version that points out the most significant issues.
Remote observation opens up some additional questions on the ethical testing of users. For instance, when remotely observing the screen of a public machine, the user's identity is anonymous to the test administrator. Is it a violation of the user's privacy to observe their actions without permission if their identity is unknown? If we don't know who the user is, can we then assume confidentiality is assured? Is confidentiality a problem if the user is required to login to the computer or a record is kept of individuals who sign up to use the library computers? Do rules regarding informed consent and human subject research apply when the user is never directly engaged in the research? How are the user's rights and confidentiality maintained if people other than the test administrators are allowed to view the test session or a recording of it?
Web sites are no longer just the public relations face of the library. With the advent of Web-based catalogs and journal indexes, access to most library resources depends on the library's Web site, and this dependency tightens as libraries add actual content online, such as full-text articles and e-books. Users have come to expect to use these resources without the help of a librarian. A well-designed Web site has become essential for serving users.
The usability experts reviewed here all agree that observation is the most informative usability test for effective Web interface design. Remote observation tools can facilitate the observation process and enable libraries to accomplish more with their limited resources. Screen viewers can help compensate for ad hoc testing facilities by allowing observation to take place in another room, which also makes the observation process less intrusive to the test user. Software enables recording test sessions without acquiring additional equipment, which may be obtrusive to the test user, and helps solo test administrators conduct a more thorough usability test. Remote viewing and recording technologies can also expand a library's usability testing efforts by reaching users it might otherwise not be able to test, observing users searching in their natural environment, and sharing the results persuasively with stakeholders.
References and Notes
3. Battleson, Booth, and Weintrop, "Usability Testing of an Academic Library Web Site"; Nicole Campbell, Usability Assessment of Library-Related Web Sites: Methods and Case Studies, LITA Guide #7 (Chicago: ALA, 2001).
5. Jakob Nielsen, "Usability Engineering at a Discount," in G. Salvendy and M. J. Smith, eds., Designing and Using Human-Computer Interfaces and Knowledge-based Systems: HCI International '89: Third International Conference on Human-Computer Interaction (Amsterdam: Elsevier Science Publishers, 1989), 394-401.
8. Jakob Nielsen, Why You Only Need to Test with Five Users, Alertbox, Mar. 19, 2000. Accessed Feb. 5, 2002, www.useit.
13. Jared Spool, "Eight Is More than Enough," User Interface Engineering. Accessed Mar. 13, 2002, world.std.com/~uieweb/eight.htm. Jared Spool and Will Schroeder, "Testing Web Sites: Five Users Is Nowhere Near Enough," presented at CHI 2001: Conference on Human Factors in Computing Systems, Seattle, Wash., Mar. 31-Apr. 5, 2001. Accessed Mar. 27, 2002, www.winwriters.com/download/chi01_spool.pdf. In a conversation with Jared Spool on Mar. 21, 2002, he suggested testing two users a week for the duration of the Web development project to more accurately assess the usability of the site.
15. American Library Association, Code of Ethics of the American Library Association. Accessed Mar. 29, 2002, www.ala.org/alaorg/oif/ethics.html.
16. Oliver K. Burmeister, Usability Testing: Revisiting Informed Consent Procedures for Testing Internet Sites. Accessed Sept. 30, 2002, www.jrpit.flinders.edu.au/confpapers/
17. Janet Chisman, Karen Diller, and Sharon Walbridge, "Usability Testing: A Case Study," College and Research Libraries 60, no. 6 (Nov. 1999), 552-69; Battleson, Booth, and Weintrop, "Usability Testing of an Academic Library Web Site"; Hackos and Redish, User and Task Analysis; Norlin and Winters, Usability Testing for Library Web Sites.
21. Chisman, Diller, and Walbridge, "Usability Testing: A Case Study"; Susan McMullen, "Usability Testing in a Library Web Site Redesign Project," Reference Services Review 29, no. 1 (2001): 7-22; Jenny Preece, Human-Computer Interaction (Reading, Mass.: Addison-Wesley, 1994).
23. Campbell, Usability Assessment of Library-Related Web Sites; Battleson, Booth, and Weintrop, "Usability Testing of an Academic Library Web Site"; Chisman, Diller, and Walbridge, "Usability Testing: A Case Study"; McMullen, "Usability Testing in a Library Web Site Redesign Project"; Norlin and Winters, Usability Testing for Library Web Sites.
24. Jakob Nielsen, Usability Laboratories: A 1994 Survey, Useit. Accessed Feb. 5, 2002, www.useit.com/papers/uselabs.html; Nielsen, Usability Engineering.
26. H. Rex Hartson et al., "Remote Evaluation: The Network As an Extension of the Usability Laboratory," in Conference Proceedings on Human Factors in Computing Systems, Vancouver, British Columbia, 1996. Accessed Feb. 5, 2002, www.acm.org/sigchi/chi96/proceedings/papers/Hartson/hrh_txt.htm.
30. Jean Scholtz, "Adaptation of Traditional Usability Testing Methods for Remote Testing," presented at 34th Annual Hawaii International Conference on System Sciences, Maui, Jan. 3-6, 2001. Accessed Sept. 30, 2002, www.itl.nist.gov/iad/IADpapers/hicss2001-final.pdf; Andrew Chak, "Usability Tools: A Useful Start," New Architect. Accessed Mar. 27, 2002, www.webtechniques.com/archives/2000/08/stratrevu; Matthew Klee, "Fast, Cheap, and In Control: Exploring New Data Capture Techniques," presented at User Interface 6 West, San Francisco, Calif., March 18-21, 2002, UIE Research Forum (Bradford, Mass.: User Interface Engineering, 2002).
31. Alice Preston, "Remote Usability Testing Tools," reprinted from Usability Interface 5, no. 3 (Jan. 1999). Accessed Mar. 26, 2001, www.stcsig.org/usability/newsletter/9901-remote-tools.html.
37. Timbuktu Pro Remote Control Software, Netopia, 2002. Accessed Mar. 13, 2002, www.netopia.com/en-us/software/products/tb2/index.html.
38. Windows NetMeeting, June 25, 2001. Accessed Mar. 13, 2002, www.microsoft.com/windows/netmeeting/default.asp.
39. Camtasia, TechSmith, Nov. 5, 2001. Accessed Mar. 13, 2002, www.techsmith.com/products/camtasia/features30.asp.
Susan M. Thompson is Systems Coordinator, California State University San Marcos.