Libraries and User Interfaces

By Jason Griffey

During the past few months, I've been lucky enough to give a handful of presentations on a similar theme: What does the post-PC world mean for libraries? In the talks, I cover a lot of ground, ranging from why we should care about shifts in information consumption on devices, to how we should decide where to focus our attention when we have limited resources (and we always have limited resources in libraries), to what we can expect from the next three to five years when it comes to delivering information to our patrons.

One aspect of information delivery that libraries overlook is the user interface through which people interact with our digital resources, whether that is our website, our catalog, or the content we either create or purchase. The biggest shift in the past few years has been toward touch-based interfaces, driven by smartphones and tablets, and we've had exemplars to follow in this area for a half-decade now. Nearly every major website has a mobile-and-tablet version that makes touch a more pleasant experience for users, with features like larger touch targets, more scroll-based navigation, and other touch-specific flourishes. Amazon, Google, and the New York Times have all changed their websites to improve the experience for people using the iPad or other tablets.

How many libraries do you know that have designed a version of their website solely for patrons using a touchscreen?

Judging from the responses I get when I ask that question as I present across the country, the answer is very, very few. And this is only one small piece of the way user interfaces are changing, and will continue to change, during the next three to five years. Interacting through gestures (think of the Xbox Kinect) or voice (à la Apple's Siri) is growing in popularity and may soon be a normal part of the way we use our devices.

The natural progression of computers' integration into our lives seems to be leading toward invisibility. We now use computers, in the form of smartphones and tablets, on the go, whereas we once had to be stationary to interact with them. With experiments like Google's Project Glass, and with companies such as Valve looking into wearable computing, it seems to me that we are almost certainly less than a decade away from on-the-go, heads-up displays being a relatively normal part of our lives.

How will libraries be ready to deliver information that way, when we're already a half-decade behind?