This project was part of an assignment for a workshop in Usability Testing in my first year at the iSchool. I worked with a team of four other students to build, pilot, and run a small-scale usability test of the website for Toronto’s Centre for Social Innovation, which was then passed on to the organization.
Our test did not target a specific problem with the CSI site, so as not to bias results. Instead, we aimed to evaluate the site’s effectiveness, efficiency, and ease of use by observing users as they attempted to gain a basic understanding of what the CSI is. Our central questions in this test were as follows:
- Is the user able to find relevant membership options?
- Is the user able to find community events and workshops?
- Is the user able to find possible job postings?
- Is the user able to understand associated costs of involvement with the CSI?
We first developed a detailed usability test plan outlining our test goals and objectives, methodology, and logistical details for testing. We then piloted the test with several users and iterated on our plan based on the results, revising elements where user errors stemmed from problems with the test itself rather than with the site.
We next conducted the usability test with five users. Our test included a background questionnaire, five tasks, a card sorting activity, a post-test questionnaire, and a product reaction form.
We aggregated our test results and identified themes in our findings through several methods: small-scale statistical analysis using the System Usability Scale, and affinity diagramming to draw patterns out of qualitative statements, card sorting results, and product reaction forms.
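For readers unfamiliar with the System Usability Scale, its scoring is mechanical enough to sketch in a few lines: each of the ten 1–5 Likert responses is normalized (odd-numbered items score as response − 1, even-numbered items as 5 − response), and the sum is scaled to 0–100. The responses below are illustrative, not our actual test data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) score as response - 1;
        # even-numbered items score as 5 - response.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Averaging per-user scores across participants gives the overall SUS figure.
# (These response sets are hypothetical.)
users = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 3, 2, 4, 3, 3, 2],
]
mean_sus = sum(sus_score(u) for u in users) / len(users)
```

A mean SUS score above roughly 68 is conventionally read as above-average usability, which makes the scale a handy benchmark even for a five-user test.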
We compiled our findings into a detailed report, delivered to the CSI, outlining several usability issues with the site along with our recommendations for resolving them.
This experience made me a big fan of card sorting. It’s a cheap, easy way to generate a lot of highly useful data and evaluate the effectiveness of a navigation scheme.
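One simple way to mine card-sort data is a co-occurrence count: how often did participants group each pair of labels together? A minimal sketch (the card names are hypothetical, not from our test):

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards was grouped together across participants.

    `sorts` is one card sort per participant; each sort is a list of groups,
    and each group is a list of card names.
    """
    pairs = Counter()
    for sort in sorts:
        for group in sort:
            # Sorting the group makes each pair's key order-independent.
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical sorts of four navigation labels by two participants.
sorts = [
    [["Membership", "Pricing"], ["Events", "Jobs"]],
    [["Membership", "Pricing", "Events"], ["Jobs"]],
]
counts = co_occurrence(sorts)
```

Pairs that nearly all participants group together are strong candidates to live under one navigation heading; pairs that are never grouped probably should not share one.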
I also learned the importance of not bloating tests. We wanted to include several different methods to strengthen our findings, but as a result, several of our sessions ran over the allotted time. When piloting, it’s important to watch the clock and confirm you can complete every activity with every user.
I learned about the difficulty of forming mental models on sites that deal in unfamiliar concepts. A lot of the questions we heard from users during our testing were about the heart of the CSI’s work, such as: “What is social innovation?” It’s critical to help users fill in those knowledge gaps when designing IA, and to test sites to ensure that your conceptual model as a designer aligns with users’ mental models, or their actual experience of the product.