Emma Boettcher

Bento-box searching

(comparative testing of three approaches to search)

the challenge

There are three major patterns for library search: maintaining separate search tools for different kinds of resources, blending everything into a single, Google-like list of results, and the Bento-box layout, which presents results in side-by-side panels grouped by type.

Bento-box style search results
An example of Bento-box search from the University of Michigan. I did not design this.

Maintaining separate discovery tools isn't necessarily the best thing for the user, but I didn't know which of the alternatives would actually be better, or whether they'd have usability problems of their own. I needed to see how users interacted with different styles of search.

approach

I'd tried to do that before, in a guerrilla testing style, by asking participants to search in one Bento-style interface and then in a different Bento-style interface. Unfortunately, the results were inconclusive: participants weren't spending enough time with the interfaces for me to see where they struggled. Plus, I wanted to see them interact with all three styles, not just Bento.

But prototyping each style would be costly, especially if I wanted to see participants engage deeply with each style. Instead, I tracked down library websites from other schools, one to represent each style of search. (While I could have used UChicago's for one of them, I thought that might bias the participants.) Though participants wouldn't have the necessary credentials to access all the functionality, they'd be able to get to enough.

After reviewing the literature, I came up with four types of tasks to use with each system.

Participants were asked to start on the homepage of each library site. They were free to use any aspect of the site, but from what I'd seen in web analytics, they were most likely to start by using the search tools, rather than looking for a guide or trying to consult a subject librarian. After they completed the tasks for each site, I asked them to evaluate it using the System Usability Scale.
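For context, SUS scores are computed from ten 1-to-5 Likert responses using Brooke's standard formula. The sketch below shows that published scoring procedure only; it is not the study's actual analysis code, and the example ratings are made up.

```python
# Minimal sketch of standard SUS scoring (Brooke's formula), shown for context.
# Each of the 10 items is rated on a 1-5 scale.
def sus_score(responses):
    """Compute a 0-100 System Usability Scale score from 10 item responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute (response - 1);
        # even-numbered (negatively worded) items contribute (5 - response).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical example: one participant's ratings for one library site.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```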

After they completed the tasks, I asked them open-ended questions about their experience. However, the participants usually wanted to talk about their overall preference first. If I were repeating the study, I'd reorder the questions to go with that natural inclination.

reporting and impact

Originally, my goal was to report usability problems that were unique to each interface. How often did people search for articles in a discovery tool that didn't even index them? Would the Bento search layout prevent users from seeing a target result? How would they adjust to seeing library search results in a Google-like interface?

In addition to coding the usability issues present in the data, I also presented quantitative measures, like the results of the System Usability Scale.

graph of System Usability Scale results
Screenshot of a graph from the report. The x-axis, which groups participants' scores by school, has been removed to protect the innocent.

As I was processing the data, I noticed other patterns that lent themselves to quantitative analysis. For each interface, participants had to find three resources on a topic, so I also reported which tools they used to look for those resources and the diversity of the resources they identified. The number of participants wasn't large enough to draw definite conclusions, but it prompted questions about whether increased usability would also expose students to a greater range of resources.
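The diversity measure itself isn't spelled out above, so as a purely hypothetical illustration, one simple way to summarize it is to count the distinct resource types each participant found per interface. The data structure, interface names, and type labels below are assumptions for the sketch, not the study's actual coding scheme.

```python
from collections import Counter

# Hypothetical records of the three resources each participant found per
# interface; participant IDs, interface names, and type labels are illustrative.
findings = {
    ("P1", "bento"):   ["book", "article", "database"],
    ("P1", "blended"): ["book", "book", "book"],
    ("P2", "bento"):   ["article", "article", "guide"],
}

for (participant, interface), resources in findings.items():
    types = Counter(resources)
    # "Diversity" here is just the number of distinct resource types out of three.
    print(f"{participant} on {interface}: {len(types)} distinct types -> {dict(types)}")
```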

I knew at the beginning that this project by itself wouldn't be enough to launch immediately into developing a Bento-style search, nor would it be enough to reject the idea completely. But watching participants engage deeply with three styles of search vastly increased the information the Library has about the pros and cons of each option. I can now bring a more complete UX perspective to the table when discussing discovery tools, rather than letting the discussion focus solely on development resources, and we've established a starting point for further inquiry.