(comparative testing of three approaches to search)
the challenge

There are three major patterns for library search.
- Separate discovery tools based on the type of resource. A library might have separate tools for journals, databases, and articles, plus its catalog for materials like books and media. Users have to search separately in each one, which requires understanding what each of them offers.
- At the other end, some libraries have adopted a single index covering all their resources, displaying search results in one column, much like Google or Amazon.
- Splitting the difference is a Bento-box search, which maintains separate indexes for each type of resource, but allows the user to search and see some results from all of them simultaneously.
Maintaining separate discovery tools isn't necessarily best for the user, but I didn't know which of these alternatives would be better, or whether they'd introduce usability problems of their own. I needed to see how users interacted with each style of search.
I'd tried to do that before, in a guerrilla testing style, by asking participants to search in one Bento-style interface and then in a different Bento-style interface. Unfortunately, the results were inconclusive: participants weren't spending enough time with the interfaces for me to see where they struggled. Plus, I wanted to see them interact with all three styles, not just Bento.
But prototyping each style would be costly, especially if I wanted to see participants engage deeply with each style. Instead, I tracked down library websites from other schools, one to represent each style of search. (While I could have used UChicago's for one of them, I thought that might bias the participants.) Though participants wouldn't have the necessary credentials to access all the functionality, they'd be able to get to enough.
After reviewing the literature, I came up with four types of tasks for each system:
- find a book with a full citation
- find an article with a full citation
- find some resource with an incomplete citation (e.g., a translation of the Odyssey published after 2000)
- find three resources on a general topic
Participants were asked to start on the homepage of each library site. They were free to use any aspect of the site, but from what I'd seen in web analytics, they were most likely to start by using the search tools, rather than looking for a guide or trying to consult a subject librarian. After they completed the tasks for each site, I asked them to evaluate it using the System Usability Scale.
After they completed the tasks, I asked them open-ended questions about their experience. However, the participants usually wanted to talk about their overall preference first. If I were repeating the study, I'd reorder the questions to go with that natural inclination.
reporting and impact
Originally, my goal was to report usability problems that were unique to each interface. How often did people search for articles in a discovery tool that didn't even index them? Would the Bento search layout prevent users from seeing a target result? How would they adjust to seeing library search results in a Google-like interface?
In addition to coding usability issues presented in the data, I also presented quantitative statistics, like the results of the System Usability Scale.
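The System Usability Scale follows a fixed scoring procedure: ten alternating positively and negatively worded items, each rated 1-5, are rescaled and summed to a 0-100 score. As a minimal sketch of that standard arithmetic (not code from the study itself):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The 0-40 sum is multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses, each from 1 to 5")
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # index 0 is item 1 (odd), index 1 is item 2 (even), ...
        for i, r in enumerate(responses)
    )
    return total * 2.5


# A participant who strongly agrees with every positive item and strongly
# disagrees with every negative one gets the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Note that the resulting score is not a percentage; SUS scores are usually interpreted against published benchmarks (around 68 is often cited as average).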
As I was processing the data, I noticed other patterns that lent themselves to quantitative reporting. For each interface, participants had to find three resources on a topic, so I also reported which tools they used to look for those resources and how diverse the resources they identified were. The number of participants wasn't large enough to draw definite conclusions, but it prompted questions about whether a more usable interface would expose students to a broader range of resources.
I knew at the beginning that this project by itself wouldn't be enough to launch immediately into developing a Bento-style search, nor would it be enough to reject the idea completely. But watching participants deeply engage with three styles of search vastly increased what the Library knows about the pros and cons of each option. I can now bring a more complete UX perspective to the table when discussing discovery tools, instead of the discussion focusing on development resources, and we've established a starting point from which to inquire further.