The online library user experience is complicated – you have clusters of different systems: your library website, a catalog, research guides, databases, Learning Management Systems, ILL, your proxy server, A-Z lists, etc. (and even with some form of next-gen/discovery layer you are only combining 2 or 3 of these systems). Plus, users are coming from and moving between these systems in all sorts of ways – the ways we think (and hope) they use them, but also from Google searches, other campus sites, and social media posts. And often these systems have different interfaces and terminology.
It is all very confusing!
So, how do we find out what users are doing? Where are the pain points? What are some things that we can do to help? Four quick options are:
· Analytics is an obvious and easy place to start, but look for some of the outliers – unexpectedly popular pages and landing pages. With a bit of extra work/coding you can also do event tracking to see not just which pages users are visiting, but where they are clicking and other scenarios.
· Another great thing to do is keep some sort of transaction log – from your reference/circ/tech desks, IM, and other service points. We use SharePoint for this, but there are other options. It's great for spotting recurring or timely problems. We watch it daily to identify and troubleshoot access problems, and longer term to pick up usage or problem trends that we can address with systems changes.
· Some form of in-person user testing is very helpful – give users tasks to complete and watch whether they are able to finish each task, how long it took, and what steps they took in the process.
· Finally, ask your staff! Reference librarians and student workers will often know the common problem spots.
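The event-tracking idea above can be sketched with a small helper. This is a minimal sketch, not our production code: the category/label scheme and the `.database-link` selector are made up for illustration, and the `ga()` call in the comment assumes an analytics.js-style setup.

```javascript
// Build the event payload we would send when a link is clicked.
// The 'outbound-link' category and the "text -> href" label format
// are illustrative assumptions, not an analytics requirement.
function clickEvent(linkText, href) {
  return {
    eventCategory: 'outbound-link',
    eventAction: 'click',
    eventLabel: linkText + ' -> ' + href
  };
}

// In the browser, the payload would be sent on click, roughly:
//   document.querySelectorAll('a.database-link').forEach(function (a) {
//     a.addEventListener('click', function () {
//       ga('send', 'event', clickEvent(a.textContent, a.href));
//     });
//   });
```

Once events like this are flowing, the analytics reports show not just page views but which links users actually click.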
After doing all four of these items, here are the major problems we saw:
· Using the wrong tool or wrong site for what they are looking for – confusion about the different systems and what each one is for. For example: citation and subject searches in A-Z Journals or in LibGuides. “Search box syndrome” – if there’s a search box, people will use it!
· Search problems – wrong search type, word order, spelling errors, etc. Different systems all handle these issues differently and give different results.
So, now what to do with the information you’ve gathered?
I break it down into 2 categories – static (or manual) changes and dynamic (API, jQuery) changes – and below I’ll give some examples of both.
· Manual changes
o Example: based on our analytics and question logs, we create home page ads, blog posts, and social media posts to respond to issues and events seen on our website over the last day.
· Dynamic changes
o Example: a popular databases list automatically populated from the previous day’s analytics using APIs
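The popular-databases example boils down to something like the following sketch. The row shape (`{name, hits}`) is an assumption about what your analytics export returns; adapt it to whatever your analytics API actually provides.

```javascript
// Turn yesterday's analytics rows into a "Popular databases" list.
// rows: [{ name: 'JSTOR', hits: 123 }, ...] -- an assumed export shape.
function popularDatabases(rows, topN) {
  return rows
    .slice()                                            // don't mutate the input
    .sort(function (a, b) { return b.hits - a.hits; })  // most hits first
    .slice(0, topN)                                     // keep the top N
    .map(function (r) { return r.name; });              // just the names
}
```

A nightly job can fetch the rows, run this, and write the resulting names into the home page list.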
Here are some examples of things I’ve done in our various systems:
· By making some manual style changes we made the page visually much simpler to use, establishing more hierarchy and sorting print/online holdings
· We also used APIs to pull in more dynamic details about a journal’s holdings: description, peer-review status, RSS feeds.
· Used the Summon API to dynamically test whether a title is indexed in Summon and, if it is, allow a scoped search. This has been very popular – used nearly 3,000 times this year, with the bounce rate down almost 10%!
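The indexed-in-Summon check reduces to inspecting the API response for matching records. This is a sketch only: the response shape (a `documents` array) is an assumption, so check the Summon API documentation for the actual field names before using it.

```javascript
// Given a parsed Summon API response for a title-scoped query,
// decide whether the title is indexed. The 'documents' field is an
// assumed response shape, not a documented guarantee.
function isIndexedInSummon(apiResponse) {
  return Array.isArray(apiResponse.documents) &&
         apiResponse.documents.length > 0;
}

// If this returns true, the page can render a search box scoped to
// that journal title; if false, it stays hidden.
```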
After looking at our A-Z journal stats we realized that a large percentage of our page hits (nearly 30%) were getting zero results! There were a variety of reasons why this happened, but here’s what we did –
· Added a “No results” message – DUH! To do this I used jQuery to search the page content for the words “Sorry, your search for returned no results.” If that text was found, I added in a special message. View an example
· Added a step-by-step help guide – did you mean to search using “title contains” rather than “begins with”, etc.? This uses a tool named JoyRide
· Created an auto-complete search input using the Summon and Ulrich’s APIs. The Ulrich’s API had the advantage of being faster (since it covers only journals, not articles, books, etc.), but the disadvantage of not being able to limit to our holdings. So we went with a Summon API search, limited to journals and to our holdings.
· For the case where someone was using the wrong tool (searching for a subject or a citation), we added helper links to Summon from the “No results” page in the journal portal. In many cases, if the person had been searching in Summon they would have found what they were looking for!
I took a similar approach with our Link Resolver (example):
· Manual changes to improve the design and readability
· Added dynamic visual helpers – “try this first, then this…” again using JoyRide
From talking to staff we found out that many ILL requests were for items that we actually own!
So we added a scoped Summon search box in the Article request process, a catalog search to the Book request process, and in the Dissertation request we added a Summon search box scoped to only dissertations. Each of these scoped search boxes is also automatically populated from the OpenURL request if it is present.
Since doing this, the number of requests for items we own has dropped by nearly half in each of the last 3 years!
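Prefilling the scoped search boxes from the OpenURL can be sketched as below. `rft.atitle` (article title) and `rft.jtitle` (journal title) are OpenURL 1.0 keys, but which keys your link resolver actually sends is worth verifying; the fallback order here is an assumption.

```javascript
// Pull a title out of an incoming OpenURL query string so the
// request form's search box starts out populated.
function titleFromOpenURL(queryString) {
  var params = new URLSearchParams(queryString);
  return params.get('rft.atitle') ||   // article title, if present
         params.get('rft.jtitle') ||   // else the journal title
         params.get('rft.title')  ||   // else a generic title key
         '';                           // else leave the box empty
}

// On the request page, roughly:
//   document.querySelector('#summon-search').value =
//     titleFromOpenURL(window.location.search.slice(1));
```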
Through analytics and testing we found that many users weren’t going past the first page of a guide. We also heard from our staff and the LibGuides email list that students often don’t seem to see the navigational tabs. So we changed our page design – simplified the header by moving things to the footer, also made design changes to make the tabs/pages more obvious.
We created boxes for popular databases and guides, ILLiad, Summon, catalog searches, etc. that our staff could re-use in their own guides.
Using APIs from LibGuides we created a “Related guides” dynamic link that gets added to each guide to cross-link to other guides in the same subject area.
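The “Related guides” cross-linking logic is essentially a filter over the guide list. This sketch assumes a guide shape of `{id, name, subject}`; the real LibGuides API response fields will differ, so treat the names as placeholders.

```javascript
// Given all guides and the guide being viewed, return other guides
// in the same subject area for the "Related guides" box.
// The {id, name, subject} shape is an assumed simplification of
// what the LibGuides API returns.
function relatedGuides(allGuides, currentGuide) {
  return allGuides.filter(function (g) {
    return g.subject === currentGuide.subject &&  // same subject area
           g.id !== currentGuide.id;              // but not this guide
  });
}
```

The resulting list gets rendered as links appended to each guide page.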
We created a dynamic mapping tool so that once a user found an item in the catalog they could find it on the shelf. This feature is very popular, even with staff and student workers. Traffic comes from catalog and journal pages, not from our site – the tool needed to be in the flow of the user. From this example page, click the location link (St Thomas – O’Shaughnessy Frey Library Stacks).
Your campus search engine is another important tool.
· Auto-complete enabled, based on actual search usage data
· Even if you don’t have the level of access to make these sorts of changes directly, you should be able to submit requests for keymatches/ads to your administrator.
· Also, you can make sure your content is indexed/findable by submitting URLs for your sites to the administrator. You’ll want to do this for all domains/sub-domains where your content is located to make it searchable – this would include your website, research guides, repositories, digital collections, etc.
We created a custom landing page on our website from Blackboard.
· Removed the normal page header, since the page displays inside a frameset, which made it look strange.
· Set all links to open in new window so the user doesn’t lose their place in Blackboard when they click a link.
· Tracked usage
We’ve also created custom faculty and student content that displays after logging into Blackboard.
Finally, we built a custom Blackboard Building Block that allows an instructor to easily choose and embed links to research guides inside of a course.
· Simplified – less content and links
· Used tabs and accordions to cluster content rather than displaying it all at once.
· Auto-complete when possible (database list, search, Summon, journal title)
· Added help buttons
· Chat widgets across all sites/systems where it was possible.
· Look at your usage patterns and try to find problem areas
· Make changes/additions that make sense in the context of the system you are in.
· Try to guide the user rather than showing every option and link on a given page.
· Start small – what are some manual content additions and linking strategies that could help your users in a complex system?
· Make small changes, add analytics code to measure whether users are trying them.