When to ask and when to shut up: How to get visitor feedback on digital interactives

Tanya Treptow, Centralis, USA, Kathi Kaiser, Centralis, USA

Abstract

While it is critical to talk to visitors about their experiences, getting constructive feedback on digital interactives is not as simple as just asking. When visitors encounter usability issues, they may know that they are confused, but not why or what to do about it. In this how-to session, you'll learn observational and interviewing techniques that go beyond what visitors can tell you to reveal what they know, how they think, and what they need. We'll begin by describing common sources of frustration in digital experiences, such as lack of clarity about purpose, difficulty navigating available options, ambiguous language, and challenges with physical manipulation. We'll use real-life examples from museums and other institutions to illustrate these difficulties and show how the causes of the problems may not be evident from simply watching visitors or asking them questions. Next, we'll demonstrate a series of techniques for observing and interviewing users that enable you to diagnose the underlying causes of their difficulties. Hypothesis testing, just-in-time probing, echoing, progressive disclosure, and other approaches can unlock the underlying causes of usability issues. Visitors come in all kinds, so we'll also share tips for drawing out people who are shy, focusing participants who stray off track, and gathering constructive criticism from those who are overly eager to please. You'll leave the session excited to gather feedback from your visitors and armed with new skills to do so!

Keywords: In-gallery evaluation, usability, visitor experience, mobile apps, design, interviewing

1. Introduction

While it’s critical to talk to visitors about their experiences, getting constructive feedback on digital interactives is not as simple as just asking. When visitors encounter usability issues, they may know that they are confused, but not why or what to do about it. In this how-to session, we’ll share observational and interviewing techniques that go beyond what visitors can tell you to reveal what they know, how they think, and what they need.

Museum professionals have reached consensus that in the mobile/digital age there is an "urgent need for user involvement in the design process" (Taylor, 2006). In the past decade, museums have increasingly conducted formative evaluation with visitors to help inform gallery design. Formative evaluation typically focuses on "identifying user needs and ensuring that the planned resource will meet these needs" (Dawson et al., 2004). A range of research techniques serve as a basis for formative evaluation, each with its own goals:

  • Interviews highlight visitor needs and considerations
  • Surveys measure preferences and opinions
  • Observation reveals visitor behaviors
  • Usability testing examines the cognitive process behind behavior

Museums typically employ some formative evaluation techniques more often than others. Interviews, observation, and surveys are often used to gain insight into visitors' engagement with digital interactives (Damala & Kockelcorn, 2006; Vavoula & Sharples, 2009; Pattison et al., 2014). Other methods, such as usability testing and cognitive walkthroughs, seem less common, despite the increased reliance on usability testing in designing user-focused museum websites (e.g., Peacock & Brownbill, 2007; Tasich & Villaespesa, 2013). This gap may exist because usability testing is viewed as a narrow method, focused only on heuristics such as button size and color, or as labor intensive and challenging to implement.

However, by asking the “right” users the “right” questions, usability testing has the potential to quickly provide broad feedback on visitor expectations, goals, and understanding. Usability testing can quickly determine the reasons why visitors may not use an interactive as intended and can immediately identify design solutions that can enable learning and exploration. Museum practitioners are beginning to promote the importance of the usability of technology as a foundation in a wider evaluation of the mobile learning experience (Vavoula & Sharples, 2009). It may be a no-brainer to ask visitors for their feedback, but what does that conversation ideally look like? How can we maximize the benefits of this research technique?

2. Test with “real users”

The first step in getting visitor feedback on digital interactives is to include “real users” in your research. “Real users” are individuals who are likely to use the interface in question; they should be recruited to match the demographic and psychographic profile of the target audience, and also have expressed an interest in gallery technology either through past behavior or statements about likely future use.

However, usability test participants do not need to be actual visitors in the midst of a live visit to the institution. Intercepting people in the gallery to gather their feedback in real time presents practical challenges that can limit the amount of data that can be gathered efficiently. Visitors may be willing to spare a few moments, but may not wish to interrupt their experience for enough time to fully test the interface. It may also be difficult to identify visitors who fit specific target segments among those who happen to visit the institution on a given day. Finally, as we'll describe below, usability testing often involves allowing users to become confused, lost, or otherwise struggle with an interface; this is likely not the type of experience an actual visitor would like to have while striving to enjoy their time at the institution.

For these reasons, we recommend pre-recruiting participants and inviting them to the institution for the specific purpose of testing the interface. In addition to matching desired demographic and/or psychographic characteristics, participants should not have previously used the interface to be tested, and should not be employees of the institution or agency that is developing the interactive. These “real users” serve as proxies for first-time users of the interactive, providing the greatest opportunity to identify and diagnose usability issues. Offering a modest incentive, such as a cash reward or free admission to the institution for the day, also helps motivate them to keep their appointment and provide insightful feedback during the session.
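If you screen many candidates, it can help to encode these eligibility criteria once so that every recruiter applies them the same way. The following is a minimal sketch in Python; the field names are hypothetical and not part of any recruiting instrument described here.

# Hypothetical screener check for usability test recruits.
# Field names are illustrative assumptions; adapt them to your own screener.
def is_eligible(candidate: dict) -> bool:
    matches_profile = candidate.get("matches_target_profile", False)        # demographic/psychographic fit
    interested_in_tech = candidate.get("interested_in_gallery_tech", False) # past behavior or stated interest
    new_to_interface = not candidate.get("has_used_interface", True)        # must be a first-time user
    independent = not candidate.get("is_staff_or_vendor", True)             # no museum or agency employees
    return matches_profile and interested_in_tech and new_to_interface and independent

if __name__ == "__main__":
    candidate = {
        "matches_target_profile": True,
        "interested_in_gallery_tech": True,
        "has_used_interface": False,
        "is_staff_or_vendor": False,
    }
    print(is_eligible(candidate))  # True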

Pre-recruiting “real users” to participate in facilitated usability testing is not meant to be a replacement for observational studies that explore actual visitors’ authentic experiences; these are complementary methods that yield unique data. Usability testing strives to identify and diagnose characteristics of the design that cause confusion, frustration, and/or abandonment of the interactive. To accomplish this, the method must be more like an experiment than an ethnographic study. The next step in the experiment is to devise tasks for the participants to complete.

3. Give people tasks

Digital interactives are typically designed to support a range of goals, such as educating visitors about a topic, helping them discover a broader range of materials in an exhibit, or engaging them in a new way of thinking. To accomplish these goals in a digital interface, users must successfully complete tasks that may seem more mundane but are critical to success: things like starting the interactive, navigating to specific pages, locating supplementary content, and determining how to move forward in the experience. These types of tasks form the basis of usability testing and serve as a framework for gathering visitor feedback on both the design and learning goals of the interactive.

The tasks that users complete in a usability study should be created to test hypotheses about problems visitors may encounter in actual use. For example, are you concerned that the map does not distinguish between the first and second floors? Ask the participant to find an object on the second floor while on the first floor. Are you worried that they won’t realize there are videos about an object? Ask them to find information that is only mentioned in a video. These types of tasks present the opportunity to determine if users can discover and effectively use elements of the interface.
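One lightweight way to keep tasks tied to the concerns they are meant to probe is to write the test plan as structured data, pairing each hypothesis with a task and a success criterion. The sketch below is a hypothetical illustration in Python using the two scenarios above; the format and wording are assumptions, not a prescribed template.

# A minimal, hypothetical test plan: each task records the design concern
# (hypothesis) it probes and what successful completion would look like.
TEST_PLAN = [
    {
        "hypothesis": "The map does not distinguish between the first and second floors.",
        "task": "While on the first floor, find an object located on the second floor.",
        "success": "Participant switches the map to the second floor and locates the object.",
    },
    {
        "hypothesis": "Visitors won't realize there are videos about an object.",
        "task": "Find a piece of information that is only mentioned in a video.",
        "success": "Participant discovers and plays the relevant video.",
    },
]

# Print a one-page task guide for the moderator.
for item in TEST_PLAN:
    print(f"Concern: {item['hypothesis']}\n  Task: {item['task']}\n  Success: {item['success']}\n")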

Watching participants succeed and enjoy their experience is fun and rewarding. The most fruitful moments in usability testing, however, are when participants encounter roadblocks and become frustrated. These are moments of opportunity for the researcher to learn why: what characteristics of the design are contributing to the problem, and how they can be fixed. Diagnosing usability issues requires artful interaction with the participant; we can’t simply ask them to explain why they’re confused, because they often don’t know. Through careful questioning and observation, participant and researcher can discover together the source of the problem.

4. Ask questions (carefully)

So you have your user, you have your tasks, and you are ready to evaluate whether an interface is meeting user needs. What types of data should you capture to improve the user's experience?

Usability testing requires different moderating skills than an observation-based study does. We typically use techniques of hypothesis testing, just-in-time probing, echoing, and progressive disclosure to unlock the underlying causes of usability issues. Visitors come in all kinds, so these techniques can be combined with ways to draw out people who are shy, to focus participants who stray off track, and to gather constructive criticism from those who are overly eager to please.

In advance of every session, we help participants get into the right mindset for success. In this context, success does not mean correct completion of tasks. There are no right or wrong answers in a usability test. An effective usability test is one in which a participant feels comfortable sharing their thoughts and explaining how they understand the world around them. At the beginning of sessions, we explicitly encourage participants to "think aloud" to help us understand ways to improve their experience. For example:

I’d like you to think aloud for me as you work. What I mean is that I’d like you to tell me what you see on the device, what you’re thinking about, what you’re trying to decide, what you like or don’t like, etc. These things help me understand your perspective.

Because of this "think-aloud" protocol, once a test begins, the researcher has opportunities to ask open-ended questions to better understand where a participant is coming from. Many resources are available to new researchers that describe these techniques (Dumas & Loring, 2008; Rubin & Chisnell, 2008; Krug, 2009; Barnum, 2011). We find that many question-based techniques are easier to understand through examples. Here are some common situations in usability tests and how we identify usability issues without biasing participant responses:

Participant: “I don’t know where to start.”

When participants express frustration at the beginning of a task, the interface typically lacks clarity of purpose or has too many options. The participant may be overwhelmed by choice. However, they may just be anxious about doing things right. You can use this moment as an opportunity to find out which it is. Reassure the participant that there are no wrong answers and ask them how they understand their task: "If you were in this situation for real, what would you need to know first?" "What kinds of things would be most important to you?" Often, a participant will volunteer important information about their own expectations and desires.

Participant: “I don’t see anything related to what I want to do.”

In this case, it is likely that a participant has missed available options, either because they are out of view or because terminology is ambiguous. To determine the source of the problem, take the interface away and ask the participant to describe what was on the screen. They will naturally omit items they did not see or understand. After a few moments, bring the interface back and ask them to review it with fresh eyes. Participants will often comment on areas they are seeing for the first time, or had seen but wrongly dismissed as irrelevant to their task.

Participant: “I was on the right track, but now I’m lost.”

When a participant is lost within an interface, a common reason is that the system does not provide enough feedback on where they are. Perhaps the user initially expected the device to work differently or misunderstood the options available or the meaning of a term. It's important not to praise a participant's action at this point ("You're doing a good job!"), as this can make them worry about being judged if they later make an error. Instead, encourage them to explore what they think happened ("Tell me more about that…"). If you've adequately prepped participants in advance that it is okay to encounter difficulty, they are more likely to feel comfortable articulating what caused them to feel lost and what could have helped them along the way.

Participant: “It doesn’t seem to be working.”

It's likely that the participant encountered difficulty physically manipulating the device, anticipated a different interaction, or missed an important step. In essence, the user has a different mental model of the world than the designers of the device intended. It is important to understand the severity of this kind of mismatch. By providing the minimum guidance needed to move forward (often called just-in-time probing), you can learn whether a participant can self-correct. Common questions we ask are: "Is this the only option?" and "What about this area of the page; what do you see here?"

Participant: “I just can’t find it!”

When a participant is stumped and frustrated, it’s important to acknowledge their feelings. When a participant articulates their emotions, they are less likely to shut down and instead can help you understand the reasons behind a strong reaction. We often ask things like: “How do you feel at this point? Tell me more about what happened.” With this method, you can note the severity of an issue and also engage a participant to reflect on how the interface could have helped them more along the way. If a participant never realized what caused a problem, consider taking time at the end of the session to explain the intended interaction and ask them to reflect on what they would have preferred.

Participant: “I should have paid more attention.”

Participants often blame themselves for difficulties or errors. It is important to reassure a participant and better understand the causes behind such a response. Remind the participant that they haven’t done anything wrong: “It’s okay if you found something challenging.” Then redirect them to focus on what they see as causes of the issue in the design: “What do you think kept you from recognizing this?”

Participant: “I would never do it this way in real life.”

Sometimes a participant has different priorities than suggested in a predetermined task, and this is an important moment to learn from. They may think the interface is inefficient or may even mistrust the purpose of the task. When a participant finds a task unrealistic, ask clarifying questions to understand their primary considerations: “Tell me more about how you would prefer to do this.” Usually, this type of conversation can help identify key user needs, functionality, or ways of learning that may not be addressed in a current design.

Participant: “This is all just fine.”

Sometimes participants make this comment when they worry about being rude by making a negative comment. Other times, a participant doesn’t realize that they missed something or never explored an area of an interface. If you think a participant is just being polite, encourage candidness by sharing that you did not design the site (even if you did!), so negative feedback won’t hurt your feelings. If a participant doesn’t realize the extent of an error, that’s okay also. Encourage them to verbalize their thoughts, but concentrate on their behavior to identify moments of confusion.

5. Prioritizing and iterating

Learning how to moderate usability sessions effectively is the first step in improving the user experience of digital interactives. After conducting sessions, it's also important to identify the most critical issues to address and to determine whether each issue is a quick fix or a more fundamental change needed to meet visitor needs. It can be useful to bring all observers of the usability sessions together as a group to review common patterns of behavior. What behaviors did most participants share? Did many people experience a problem and, if so, why? Was the problem serious for those who experienced it, or just an inconvenience? From these types of questions, you can quickly prioritize a rough list of major issues and build consensus on the design changes that will best support visitors' learning goals.
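To make that group discussion concrete, observed issues can be tallied and roughly ranked by how many participants encountered them and how severe the consequences were. The short sketch below is a hypothetical illustration of that kind of triage in Python; the issue names, counts, and severity scale are assumptions for the example, not findings from any study.

# Hypothetical triage of usability findings: rank issues by frequency
# (participants affected) weighted by severity (1 = inconvenience, 3 = blocks the task).
issues = [
    {"issue": "Floor selector on the map goes unnoticed", "participants_affected": 6, "severity": 3},
    {"issue": "Video icon mistaken for decoration", "participants_affected": 4, "severity": 2},
    {"issue": "Back button label is ambiguous", "participants_affected": 2, "severity": 1},
]

for item in issues:
    item["priority"] = item["participants_affected"] * item["severity"]

# Highest-priority issues first: a rough starting point for the group discussion.
for item in sorted(issues, key=lambda i: i["priority"], reverse=True):
    print(f"{item['priority']:>3}  {item['issue']}")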

Usability testing is a formative evaluation method that can be used at multiple points in developing a design, including with paper prototypes, digital works-in-progress, and final installations in the galleries. Usability testing can also provide insight for future initiatives by suggesting best practices that are sensitive to the experience of a specific museum space and a unique range of visitors.

References and Resources

Barnum, C. M. (2011). Usability Testing Essentials: Ready, Set… Test! Burlington: Elsevier.

Crawford, V. (2005). “Framework for Design and Evaluation of Mobile Applications in Informal Learning Contexts.” Proceedings of the Electronic Guidebook Forum 2005. San Francisco: Exploratorium, 46–48. Consulted January 27, 2015. Available at: http://www.exploratorium.edu/guidebook/eguides_forum2005.pdf

Damala, A., & H. Kockelcorn. (2006). “Evaluation strategies for mobile museum guides: a theoretical framework.” Proceedings of the Third International Conference of Museology: Audiovisuals as Cultural Heritage and their Use in Museums. Mytilene, Greece (in press). Consulted January 27, 2015. Available at: http://areti.freewebspace.com/pdf_files/avicom2006.pdf

Dawson, D., A. Grant, P. Miller, & J. Perkins. (2004). “User Evaluation: Sharing Expertise to Build Shared Values.” Museums and the Web 2004: Proceedings. Toronto: Archives & Museum Informatics. Consulted January 27, 2015. http://www.archimuse.com/mw2004/papers/dawson/dawson.html

Diamond, J. (2009). Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings, second edition. Lanham: AltaMira Press.

Dumas, J. S., & B. A. Loring. (2008). Moderating Usability Tests: Principles and Practices for Interacting. Amsterdam: Morgan Kaufmann/Elsevier.

Filippini-Fantoni, S., & J. P. Bowen. (2008). “Mobile Multimedia: Reflections from Ten Years of Practice.” In L. Tallon & K. Walker (eds.). Digital Technologies and the Museum Experience: Handheld Guides and Other Media. Lanham: AltaMira Press. 79–96

Galloway, S., & J. Stanley. (2004). “Thinking outside the box: galleries, Museums and evaluation.” Museum and Society 2(2). 125–146. Consulted January 27, 2015. Available at: https://www2.le.ac.uk/departments/museumstudies/museumsociety/documents/volumes/galloway.pdf

Goodman, E., et al. (2012). Observing the User Experience: A Practitioner’s Guide to User Research, second edition. Waltham: Morgan Kaufmann.

Korn, R. & M. Borun. (eds.). (1999). Introduction to Museum Evaluation. Washington, D.C.: American Association of Museums.

Krug, S. (2005). Don’t Make Me Think: A Common Sense Approach to Web Usability, second edition. Thousand Oaks: New Riders.

Krug, S. (2009). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. Berkeley: New Riders.

Leung, L. (2012). “Users as Learners: Rethinking Digital Experiences as Inherently Educational.” In L. Leung (ed.). Digital Experience Design: Ideas, Industries, Interaction. Bristol & Chicago: Intellect, 15–24.

Pattison, S., et al. (2014). Team-based inquiry: practical guide for using evaluation to improve informal education experiences, second edition. Nanoscale Informal Science Education Network. Published August 25, 2014. Consulted January 27, 2015. http://www.nisenet.org/catalog/tools_guides/team-based_inquiry_guide

Peacock, D., & J. Brownbill. (2007). “Audiences, Visitors, Users: Reconceptualising Users of Museum On-line Content and Services.” In J. Trant & D. Bearman (eds.). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics. Published March 1, 2007. Consulted January 27, 2015. http://www.archimuse.com/mw2007/papers/peacock/peacock.html

Rubin, J., & D. Chisnell. (2008). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, second edition. Indianapolis: Wiley Publishing.

Tasich, T., & E. Villaespesa. (2013). “Meeting the Real User: Evaluating the Usability of Tate’s Website.” In N. Proctor & R. Cherry (eds.). Museums and the Web 2013. Silver Spring: Museums and the Web. Published January 31, 2013. Consulted January 27, 2015. http://mw2013.museumsandtheweb.com/paper/meeting-the-real-user-evaluating-the-usability-of-tates-website/

Taylor, J. (2006). “Evaluating Mobile Learning: What are appropriate methods for evaluating learning in mobile environments?” In M. Sharples (ed.). Big Issues in Mobile Learning. Report of a workshop by the Kaleidoscope Network of Excellence Mobile Learning Initiative. Nottingham, University of Nottingham. 25–27. Consulted January 27, 2015. Available at: http://matchsz.inf.elte.hu/tt/docs/Sharples-20062.pdf

Taylor, S. (ed.). (1991). Try It! Improving Exhibits through Formative Evaluation. Washington, D.C.: Association of Science-Technology Centers.

Vavoula, G. & M. Sharples. (2009). “Meeting the challenges in evaluating mobile learning: a 3-level evaluation framework.” International Journal of Mobile and Blended Learning 1(2), 54–75. Consulted January 27, 2015. Available at: https://www2.le.ac.uk/Members/gv18/downloads/publicationpreprints/conference-proceedings/VavoulaSharples-mlearn2008.pdf


Cite as:
Treptow, T., & K. Kaiser. "When to ask and when to shut up: How to get visitor feedback on digital interactives." MW2015: Museums and the Web 2015. Published January 31, 2015.
https://mw2015.museumsandtheweb.com/paper/when-to-ask-and-when-to-shut-up-how-to-get-visitor-feedback-on-digital-interactives/