Designing mobile support technology for zoo interpreters

Brian Slattery, University of Illinois at Chicago, USA, Leilah Lyons, University of Illinois at Chicago; New York Hall of Science, USA, Priscilla Jimenez Pazmino, University of Illinois at Chicago, USA

Abstract

This paper reports on the Climate Literacy Zoo Education Network's (CLiZEN) work to design mobile technology to support interpreters (a.k.a. docents, explainers). Interpreters are tasked with supporting guest engagement and collaborative meaning making, and can benefit from tools that help them build on their existing interactions with guests around exhibit content. These tools are especially critical in interactive, technology-centered exhibits. However, introducing additional, complex tools is challenging, as interpreters are already busy maintaining conversations with guests. These tools need to have the appropriate functionality, content, and scaffolding so that they can support and improve interpreters' facilitation without hampering interpreters' existing strategies and goals. The CLiZEN team followed a user-centered design approach by working with adult and teen interpreters at Brookfield Zoo near Chicago, Illinois, to develop a tablet support tool that aligned with interpreters' needs. The tool is designed to provide alternate representations of the core interaction of "A Mile in My Paws" for visitor audiences, through dynamic data representations (e.g., graphs, maps) and additional static media. These representations can also be used by interpreters to support and structure their presentations. Our work focused on three core areas of interpreters' use of the tablet support tool: content mastery, pedagogical content knowledge, and reflection. We found that the tool can support interpreters' existing strengths of appraising visitor interest and knowledge, and adapting exhibit content based on these appraisals. It can also complement interpreters' interactions with visitors by providing additional functionality around exhibit multimedia, and support interpreters' professional development by facilitating reflection on past performance.

Keywords: design, technology, interpreters, collaboration, tablet, zoo

1. Introduction and background

Interpreters (also called docents, facilitators, or explainers) in informal learning environments are tasked with engaging a large, diverse museum-going audience in shared meaning making around exhibit content (Tilden & Craig, 1977; Beck & Cable, 2012). This form of instruction is most memorable for visitors when interpreters engage in dialogue that incorporates visitors’ background knowledge and interests (Ham, 1992; Knapp, 2007). However, interpreters face a range of challenges when attempting to enact this kind of rich interpretation at museum exhibits. They must work on attaining mastery of both exhibit content and pedagogical content knowledge (i.e., how to present content so learners can make sense of it), with the added complication of acquiring these types of mastery during their day-to-day interpretation, which requires incorporation of reflective professional development into their enactment.

Exhibit content can be defined as the range of core ideas and concepts targeted as learning goals at each exhibit. Interpreters work on mastering exhibit content by increasing the breadth and depth of knowledge they have about each exhibit’s subject and, when the exhibit involves interactive or dynamic components, by learning to recognize which concepts are illustrated by different states of the exhibit (Hsi, 2003). Interpreters generally learn exhibit content through formal training as well as on-the-job mentoring and observation (Diamond et al., 1987), although the process of both mastering a wide range of topics and knowing each topic in depth is ongoing throughout an interpreter’s career. Interpreters need to be flexible enough to ensure that they are matching their presentation to visitors’ preexisting experiences and interests. Furthermore, as different interpreters have different areas of experience and interest, they may disagree about what content should be emphasized at a given exhibit. These challenges make content “mastery” an elusive goal for interpreters.

Interpreters must also master how exhibit content should be presented so that it is comprehensible to visitors—what educational researchers term “pedagogical content knowledge” or PCK (Shulman, 1986). In contrast to general pedagogical strategies, such as asking learners to generate questions or predictions, PCK involves domain-specific strategies. For this reason, PCK generally requires instructors to have deep knowledge of the concepts they are teaching. Deep knowledge of exhibit content allows interpreters to appraise learners’ comprehension and alter instructional strategies and styles in response to what they discover. For example, an interpreter might encounter a visitor who asserts that holes in the ozone layer cause global warming. The interpreter would only encounter this misconception if they ask certain kinds of diagnostic questions, such as why visitors think global temperatures are changing. If the interpreter lacks content knowledge in climate science, for instance about the effect of carbon dioxide in the atmosphere, they might not be able to put known pedagogical strategies into use. But if their instructional approach is informed by their mastery of climate science content, the interpreter could ask visitors probing questions about their knowledge of different types of radiation, or have an easier time recognizing visitors’ incomplete remarks as revealing their emerging understanding of climate change mechanisms.

While interpreters receive some training in pedagogy, they often have to develop PCK on the job; interpreters have to learn what subset of content to present to a particular audience, and when that content should be presented, based on their conversations with visitors and the surrounding context. A lack of content mastery can hamper development of PCK, and so interpreters may avoid adapting their conversations to audience interests when those interests fall outside their expertise. Given the different types of exhibits and learning goals that interpreters must account for, interpreters are also tasked with mastering a range of PCK, rather than simply learning one set of pedagogical strategies that can be applied across a range of domains or content areas.

Given the relative lack of formal content and pedagogical training interpreters receive compared to other educators, interpreters must learn how to reflect on their own practice using their on-the-job experiences as fuel for professional development. While structured reflection typically takes place after an activity is complete, more expert practitioners can benefit from reflecting on their own actions in-the-moment, allowing them to adjust their performance based on their own self-appraisals (Schön, 1984). However, more novice practitioners can find it difficult to both engage in an activity and attend to their own actions during that activity. For this reason, while reflection can involve solo work like journaling, it also often incorporates others to help observe the practitioner. Given how often interpreters work in groups, either during interpretation itself (e.g., working as a pair to talk with visitors at an exhibit) or during training and meetings, finding ways for interpreters to help each other reflect is valuable. Interpreters need to be able to share instructional or procedural tips with other interpreters and keep apprised of what other interpreters know. That said, providing feedback to peers is a learned skill that requires its own practice; novice interpreters may need to learn how to share their observations and give pointers to other interpreters.

Establishing and maintaining a culture of continuous professional development can be difficult. While school teachers already struggle to balance day-to-day work demands with additional work that supports professional growth, interpreters face an even greater challenge, as they do not have regular curricula or on-site training programs to provide structured guidance. But we argue that mobile devices can act as “support technology” to provide an additional means that interpreters can use to improve their mastery of content, PCK, and reflection. Support technology consists of tools that are designed to scaffold and improve the existing practices of learners and instructors. We have observed a gradual integration of mobile devices such as smartphones and tablets into exhibit interpretation, largely used to provide additional multimedia content to enhance exhibit experiences, as in the early i-Guide (Hsi, 2003) and the more recent 21-Tech (Garibay & Ostfeld, 2013) programs. Mobile devices can provide much more than just a portal to multimedia, however. The main strength of support technology is that it can be “adaptive,” which is to say that it is able to respond and change based on instructors and learners’ needs and the surrounding context.

Adaptive technology has a number of features that can support interpreter professional development. It can support content mastery by acting as a source for a wide range of content, available for interpreters to peruse and rehearse while waiting for visitors to appear at an exhibit. Mastering this content could also take place during interpretation itself, as the technology would allow interpreters to summon new or more in-depth content in response to learners’ questions in a “just-in-time” fashion, allowing interpreters to learn alongside visitors (Jimenez Pazmino et al., 2013). Adaptive technology can be useful not just for accessing content, but also for altering the presentation of content to better suit a given context. Thus, it holds promise for supporting mastery of PCK by helping interpreters judge what content to present and when to present it, and by organizing and framing exhibit content to be compatible with known PCK strategies. Finally, adaptive technology can support reflection by reducing the amount of “extra work” interpreters must do, via automated and effort-free recording of parts of interpreters’ presentations to aid in reflection and post hoc discussion.

But to make use of these possible features, interpreters have to change how they normally approach their practice, ceding some of their autonomy and control to support tools. This raises a number of questions about whether using such tools is practical for interpreters in the real world. How comfortable are interpreters with relying on support technology and incorporating it into their dialogues with visitors? How will interpreters blend their own judgment and expertise with suggestions from a support tool, when deciding what to present and discuss? What records will be useful for interpreters to reflect on their instruction, and how will they make use of this information? While support technology is potentially helpful, introducing additional tools can be disruptive for interpreters’ existing practices (Engeström, 2001). To answer these questions and design effective support technology, it is imperative to work closely with interpreters through a user-centered, design-based research approach (Cobb et al., 2003). This will allow interpreters to identify for themselves which support features will actually be useful for their goals, instead of an imposition on interpreters’ existing approach.

Interpreters have rarely been the target population for mobile applications in informal spaces; most of these applications are designed for visitors, with interpretation provided (to some extent) by the application itself. For instance, mobile-tour guides are intended to be used only by visitors, although some tours incorporating these apps have been led by interpreters (Tebeau et al., 2014). In other cases, interpreters have appropriated apps designed for visitors—such as the Exploratorium’s Mobile Guidebook (Hsi, 2003) or Science on a Sphere’s companion app (ILI, 2010)—and made use of these apps’ functionality for their own support and professional development. These studies show the promise of mobile devices for supporting interpretation, but also indicate that interpreters could benefit from tools designed from the outset for their own needs.

One of the challenges of integrating technology into informal spaces is that interacting with mobile tools or other interactives can conflict with the essential social learning that occurs in conversations between people (Hall & Bannon, 2006; Hornecker & Stifter, 2006; Lyons, 2009; Cahill et al., 2011). For a support tool to be beneficial for interpreters, it should serve as a “mediational means” (Wertsch & Rupert, 1993) for their existing and new practices, especially around their social interactions with visitors. Manipulation of a support tool should not take away from an interpreter’s interaction with visitors, but rather afford new ways of interacting and learning at an exhibit.

2. Methods and research context

Over the course of multiple years, researchers from the University of Illinois at Chicago (UIC) worked closely with adult and teen interpreters and interpretive staff at Brookfield Zoo to design an interactive exhibit called “A Mile in My Paws” (“Paws”) as part of the Climate Literacy Zoo Education Network (CLiZEN) project. The exhibit is centered on a role-playing activity in which visitors pretend to be a polar bear in different eras of the Arctic (Lyons et al., 2012, 2013; Slattery et al., 2013), and it also includes a mobile tablet support tool that provides controls for the interactive as well as additional features (Slattery et al., 2014).

The research team followed a user-centered, design-based research approach (Cobb et al., 2003) with interpreters for the creation of “Paws” and the support tool. Rather than researchers and designers deciding how technology should be organized and used, user-centered design methods (Norman & Draper, 1986) involve foregrounding the goals, needs, and norms of the people who will be using the tool (in this case, zoo interpreters). The support tool went through a series of iterations as the UIC researchers refined the design of the tool—as well as their own goals and understanding—through cycles of brainstorming and feedback from interpreters.

Although the UIC researchers and interpreters were working together, they did not necessarily begin their collaboration with the same goals. For interpreters, their main goals involved ensuring that guests to the zoo were engaged, curious, and learning about ideas and relationships that they could not have seen by themselves. In contrast, the researchers’ main goals consisted of research questions that would help define the design space for interactive exhibit and mobile technology, as well as identifying and addressing issues that interpreters might have with support tools. While these goals were not incompatible, steps had to be taken to ensure that the research goals were properly informed by the interpreters’ goals for practice. Thus, early in the project, a researcher from the UIC team spent time embedded as an interpreter, attending training, interacting with visitors, and gaining firsthand experience in how interpreters at Brookfield approached their profession. This provided a bridge for further close collaboration between the researchers and interpreters, so that the UIC team could continue to learn more about the interpreters’ perspective through individual and group interviews, design sessions, and fast prototyping (i.e., with pen and paper interface designs).

The “Paws” exhibit was incorporated into the initial summer-season training for both the large group of volunteer teenaged “Youth Volunteer Corps” (YVC) interpreters and the smaller group of paid adult “Roving Naturalist” (RN) interpreters. This training is held for both new interpreters and returning staff, and provides a foundation in Brookfield’s approach to guest engagement and education that is expanded on with on-the-job experiences and regular (daily or weekly) team meetings. “Paws” was first discussed in interpreter training sessions and evaluated in a controlled meeting room where visitors were invited to participate in formative studies. The exhibit was then tested with regular visitor crowds as a temporary installation at the underwater polar bear viewing area, and later as a permanent installation. The number of interpreters involved in the study increased along with these changes in installation: initially, a small subset of the YVC group provided feedback, followed by a mix of YVC and RN interpreters, and finally the whole RN team once the exhibit was permanently installed. This incremental approach to the research setting was intended to gradually increase the complexity of the situational demands on interpreters, as well as the technical demands on the research and design team, allowing for design iterations on the “Paws” exhibit and support tool between each stage.

3. Design process and findings

This section reviews what our different design iterations revealed about the challenges of supporting content mastery, pedagogical content knowledge, and reflection for ongoing professional development.

Support tool design for content mastery

Since earlier support tools used by interpreters mainly consisted of multimedia reference materials, the initial design approach was to provide a library of multimedia content—text, images, audio, video—that interpreters could access during visitor interactions. This library would be organized hierarchically for ease of navigation, and would be structured similarly to the paper-based training and reference materials that interpreters were already familiar with. This would allow the support tool to supplement interpreters’ knowledge by providing at-hand examples of concepts they were already familiar with, as well as allowing novices to have a convenient reference for content they were still learning.

However, we found that the interpreters used the hierarchically organized library only as a resource to be memorized and then set aside during interactions with visitors. The model of professional development interpreters brought to the experience (namely, that reference materials were to be memorized in advance of visitor interactions) meant that our library approach for the support tool provided little in the way of active, in-the-moment scaffolding for interpreters’ content mastery. In debriefing interviews with the interpreters, we found that two additional barriers to “live” use of media lay in our design: navigating the interface required too much attention, and, in the interpreters’ judgment, the information provided was not engaging enough to justify showing it directly to visitors.

Navigation is a nontrivial problem. Even with the most well-organized hierarchical structure, interpreters won’t necessarily have the disciplinary familiarity needed to swiftly and accurately navigate through multiple fields of reference materials (at least without extensive practice). The well-structured mental model necessary to proficiently accomplish this task is generally reached by attaining expertise in a subject. Thus, a hierarchically organized tool could cause problems for novices, but could also be detrimental for experts, as even highly experienced interpreters are always expanding their content knowledge. Since there is so much cross-disciplinary information that interpreters have to be aware of—especially at exhibits focused on climate change, which draw on climatology, polar bear biology, behavioral ecology, marine sciences, etc.—they are always in the process of mastering or staying current with the state of scientific knowledge on various topics. Although the research team could have tried to change the interpreters’ norms with respect to reference materials and provided formal training on navigating complex multimedia libraries, this experience was instead taken to indicate that a new content organizational scheme was necessary.

We were surprised that the interpreters deemed the multimedia not engaging enough to be worth the trouble of presentation to visitors, since other museums have presented such content to visitors to good effect, and the media elements were chosen by senior interpretive staff. We realized, though, that such media has to compete for attention with the exhibit itself, which might be difficult when the exhibit is highly interactive. For interactive exhibits like “Paws,” we identified an alternative approach: delivering “dynamic content,” the live information and events that emerge from a visitor’s unpredictable (but bounded) manipulation of an interactive exhibit. In the case of “Paws,” since visitors were role-playing as polar bears, dynamic content includes information on their performance, such as recordings or data representations like graphs or maps. These provide exemplars of events that occur at the exhibit, which can be clearly highlighted for visitors. By re-representing the core roleplaying activity of “Paws,” the support tool itself generates additional content that can be shared with visitors at the exhibit, especially peripheral visitors such as children waiting to play, teenaged friends observing the exhibit, or adults watching from nearby. Displaying this dynamic content helps interpreters create additional engaging moments for visitors who might otherwise be disconnected from the exhibit.

Interpreting dynamic content offers new possibilities for facilitation, but also requires that interpreters be aware of the current state of the exhibit. This includes information such as: who is playing, details about the game “level” (i.e., for “Paws,” whether the player is traversing the past, present, or future of the Arctic environment), actions the player is currently taking (in “Paws,” this is simple—the visitor can either choose to walk or swim), and other performance details (like progress through the “level,” or details about what the player has or hasn’t encountered). In addition to being aware of these dynamic state details, interpreters also need to know why different state details may be important, how they relate, and how they connect to non-dynamic exhibit content (such as climate change, polar bear behavior, etc.). This additional overhead could easily overwhelm interpreters, but fortunately the mobile tool can be designed to streamline the interpretation of dynamic content, by making this information visible to interpreters via updates on the exhibit state, and re-representations that connect across multiple events going on at the exhibit (e.g., a graph showing how many calories the player’s virtual polar bear is expending when swimming versus when walking).
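
To make this concrete, the sketch below models (in Python) the kind of exhibit state a support tool would need to track for “Paws,” along with one derived re-representation: a comparison of calories expended while swimming versus walking. This is a minimal illustration under our own assumptions; the class and field names (e.g., PawsState, calories_swimming) are hypothetical and do not come from the actual exhibit software.

```python
# Hypothetical sketch of the exhibit state a support tool might track for "Paws".
from dataclasses import dataclass, field
from typing import List

@dataclass
class PawsState:
    era: str = "present"            # game "level": past, present, or future Arctic
    action: str = "walk"            # the player either walks or swims
    progress: float = 0.0           # fraction of the level traversed so far
    calories_walking: float = 0.0   # cumulative energy spent while walking
    calories_swimming: float = 0.0  # cumulative energy spent while swimming
    encounters: List[str] = field(default_factory=list)  # e.g., seals, open water

    def update(self, action: str, calories: float, progress_delta: float) -> None:
        """Fold one exhibit event into the running state."""
        self.action = action
        self.progress = min(1.0, self.progress + progress_delta)
        if action == "swim":
            self.calories_swimming += calories
        else:
            self.calories_walking += calories

    def calorie_comparison(self) -> dict:
        """Data for a live graph comparing the energy cost of swimming vs. walking."""
        return {"walking": self.calories_walking, "swimming": self.calories_swimming}


state = PawsState(era="future")
state.update("swim", calories=120.0, progress_delta=0.05)
state.update("walk", calories=40.0, progress_delta=0.05)
print(state.calorie_comparison())  # feeds the tablet's comparison graph
```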

Since the dynamic content displayed on the support tool comes from live interactions with “Paws,” it is more immediately relevant to discuss than pre-created reference materials. When redesigning the mobile tool, we recognized that it would be beneficial to organize media resources based on their relevance to the situation at hand rather than in a fixed hierarchy, transforming what would have been “static” media into “dynamic” media as well. For instance, the tool can automatically highlight media on swimming polar bears, and on the climate-driven thinning of ice, when the visitor playing “Paws” begins to swim their polar bear avatar through water.
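
A minimal sketch of this kind of situational-relevance reordering follows, assuming a hypothetical media catalog in which each item is labeled with topic keywords; neither the catalog nor the rank_media function reflects the tool's actual implementation.

```python
# Illustrative only: a tiny catalog of media items labeled with topic keywords,
# and a relevance ranking that reorders (never hides) them based on the
# player's current action and game era.
MEDIA_LIBRARY = [
    {"id": "swimming_polar_bear_photo", "topics": {"swim"}},
    {"id": "thinning_sea_ice_map", "topics": {"swim", "future"}},
    {"id": "seal_hunting_photo", "topics": {"walk", "present"}},
]

def rank_media(action: str, era: str, library=MEDIA_LIBRARY):
    """Sort media by how many topic keywords overlap with the exhibit context."""
    context = {action, era}
    return sorted(library, key=lambda item: -len(item["topics"] & context))

# When the player begins swimming through the future Arctic, swimming- and
# ice-related media rise to the top of the interpreter's palette.
for item in rank_media("swim", "future"):
    print(item["id"])
```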

Based on these insights, the researchers and interpreters worked together to iterate on the design of the support tool’s multimedia library. Rather than organizing information hierarchically, dynamic exhibit content highlighting the state of the exhibit was made central to the interface, with multimedia resources (limited to text and pictures) appearing periodically based on that state. This allowed interpreters to focus visitors’ attention on what was occurring at “Paws” at that moment. This design also significantly cut down the time interpreters spent manipulating the interface, as they could let the support tool handle displaying information appropriate to the current state of the exhibit.

Figure 1: Side-by-side image showing interpreters interacting with zoo guests at the temporary “Paws” installation, and the tablet support tool showing data representations and discussion questions that change based on the current state of the exhibit.

Support tool design for pedagogical content knowledge

The redesign of the support tool to include dynamic content raised new pedagogical challenges. The increasingly dynamic nature of content on the tablet meant that interpreters were ceding most of their conversational control to the support tool. To make content delivery relevant to “Paws,” the tool was designed to help interpreters identify moments when delivery of specific multimedia content would be salient, and to identify further content that would be relevant or engaging for visitors. However, interpreters are trained to make these decisions themselves, based on their PCK rather than scaffolding from a support tool. It was unclear which elements of decision making could be offloaded to a support tool, and which were necessary for the interpreter to maintain so that they could make use of their own PCK to facilitate the exhibit.

We found that, initially, the redesigned support tool did indeed obstruct interpreters from taking advantage of their PCK to interact with visitors. The high rate of dynamic change of the “Paws” exhibit’s state was preventing interpreters from leading conversations with visitors during interpretation. Since the tool would automatically display different content media as events unfolded within the exhibit, it caused interpreters to either rapidly shift their topics of conversation or be forced to struggle with the tool when it tried to display newly “relevant” information instead of what the interpreter was already in the middle of talking about.

Through further discussion with interpreters, the research team learned more about interpreters’ PCK, especially how they determined what exhibit content is relevant at a particular time. In this iteration of the tool, multimedia was being presented in a way that was “situationally relevant,” which is to say, corresponding to the state of the interactive exhibit. To a certain extent, the interpreters’ issues could be addressed by refining the ways the tool selected situationally relevant content (e.g., by slowing the rate of suggestions). But interpreters also attended to how their facilitation was “visitor relevant,” or connected to the visitors’ current interests, knowledge, motivation, experiences, and attention. Interpreters normally judge this at “Paws” by asking visitors what they know or are interested in, relative to exhibit content areas such as climate science, polar bear biology, and Arctic ecology. Interpreters also make use of their PCK of these different fields through moment-to-moment appraisals of how the visitor audience’s motivation or attention might be shifting. The support tool was not only blind to these contextual factors, but even hindered interpreters’ exercise of their PCK.
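
One simple way to “slow the rate of suggestions,” as mentioned above, is to rate-limit them and let the interpreter suppress them while a topic is under discussion. The sketch below illustrates this idea; the SuggestionThrottle class and its parameters are hypothetical, not part of the deployed tool.

```python
# Hypothetical sketch: rate-limiting situationally relevant suggestions so the
# tool does not repeatedly pull the conversation away from the current topic.
import time

class SuggestionThrottle:
    def __init__(self, min_interval_s: float = 45.0):
        self.min_interval_s = min_interval_s
        self.last_suggestion_at = float("-inf")
        self.pinned = False  # set True while the interpreter is mid-discussion

    def maybe_suggest(self, media_id: str):
        """Return the suggestion only if nothing is pinned and enough time has
        passed since the last suggestion; otherwise return None and stay quiet."""
        now = time.monotonic()
        if self.pinned or (now - self.last_suggestion_at) < self.min_interval_s:
            return None
        self.last_suggestion_at = now
        return media_id


throttle = SuggestionThrottle(min_interval_s=45.0)
print(throttle.maybe_suggest("thinning_sea_ice_map"))       # suggested
print(throttle.maybe_suggest("swimming_polar_bear_photo"))  # None: too soon
```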

This contrast revealed a fundamental difference between the strengths of the interpreters and those of the support tool. While the tool can easily keep track of situational relevance (all technology is state-based at its core), it is much more difficult to use a support tool to directly determine visitor relevance, as it cannot converse with visitors. On the other hand, although interpreters are proficient at evaluating visitors’ preexisting knowledge or interests, they do not have the training (or capacity) to keep track of unfiltered or unprocessed state-change information coming from an interactive exhibit.

We realized that the support tool must be redesigned to make space for interpreters to engage their PCK in conversations with visitors, while supplementing them with the option of using situationally relevant contextual information (so that the interpreters did not need to monitor the exhibit state). Thus, we had to work with interpreters to understand their pedagogical approach: how they make judgments about what content to present to visitors at different times. Challenges at the PCK level have a multiplicative effect on challenges associated with content mastery, since interpreters’ different approaches to conveying ideas are moderated by the breadth and depth of knowledge they have on different topics. Interpreters need considerable expertise in a content area to be able to respond dynamically to visitor needs and questions and select both situational- and visitor-relevant information to discuss.

We incorporated these findings into another iteration of the support tool. Rather than tying the available multimedia content solely to the state of the “Paws” exhibit, we created separate “collections” for dynamic information (e.g., live graphs and maps, which were always available) and a palette of “static” multimedia (which was fully scrollable but was populated with the most situationally relevant media options). Interpreters could choose from the palette of images and text by quickly dragging media thumbnails to a central display area. Instead of embedding clear references to the exhibit’s state in the multimedia elements, media were left more ambiguous so that interpreters could make use of their PCK to contextualize content in different ways based on who they were talking to. Interpreters could even show two pieces of media side by side to juxtapose or combine them in different ways. Interpretive text was reframed as open questions that interpreters could pose to visitors or ignore in favor of discussing something else (rather than text or images overlaying the whole interface, as they had previously). This allowed interpreters greater flexibility in exercising their ability both to pace interpretation and to connect visitors’ questions and experiences to exhibit content, whether that content was the preset multimedia or live representations of player activity (Slattery et al., 2014). This also made the tool more viable for both novice and expert interpreters to develop their PCK, as the tool no longer prevented them from exercising skills they had already gained.
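
The sketch below summarizes this redesigned interaction model as a simple data structure: always-available dynamic views, a relevance-sorted but unfiltered static palette, an interpreter-controlled display area holding at most two items, and an ignorable question prompt. The names (InterfaceModel, drag_to_display) are illustrative assumptions rather than the tool's actual code.

```python
# Hypothetical sketch of the redesigned interface model described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterfaceModel:
    # live representations, always available regardless of exhibit state
    dynamic_views: List[str] = field(default_factory=lambda: ["calorie_graph", "arctic_map"])
    # relevance-sorted but fully scrollable; nothing is hidden from the interpreter
    static_palette: List[str] = field(default_factory=list)
    # interpreter-chosen media, at most two shown side by side
    display_area: List[str] = field(default_factory=list)
    # open question the interpreter can pose to visitors, or simply ignore
    prompt: str = ""

    def drag_to_display(self, media_id: str) -> None:
        """Interpreter drags a thumbnail into the display; the oldest item slides out."""
        self.display_area.append(media_id)
        if len(self.display_area) > 2:
            self.display_area.pop(0)


ui = InterfaceModel(static_palette=["thinning_sea_ice_map", "swimming_polar_bear_photo"])
ui.drag_to_display("swimming_polar_bear_photo")
ui.drag_to_display("thinning_sea_ice_map")  # juxtaposed with the photo
print(ui.display_area)
```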

Figure 2: Side-by-side image showing the current “Paws” installation at the Brookfield Zoo polar bear exhibit, and an iteration of the tablet support tool allowing interpreters to choose multimedia (images as well as live data representations) to present alone or juxtaposed with other media.

Support tool design for reflection

By designing the support tool to scaffold content mastery and PCK, we initially expected that it would aid interpreters’ professional development without the need for any additional features or changes. On the contrary, we found that the support tool needed to be designed explicitly to support interpreters’ reflection. Under normal circumstances, practitioners may struggle to engage in their professional practice while also remaining metacognitively aware of their own strategies or shortcomings (Schön, 1984). Since manipulating the tablet demanded additional attention from interpreters, it took away time and attention that interpreters could otherwise use to reflect in the moment on their interactions with visitors.

Although there are many existing approaches for encouraging reflection, these are generally post hoc and require time commitments (such as reflection writing or group discussion) that don’t fit easily within current practices. Interpreters do not have the kinds of “prep time” teachers may have; their only opportunities for reflection occur during downtime or transit from one exhibit to another, which are generally more informal. While organized group meetings do occur, time must be shared with announcements and more generalized information, making it difficult to focus on individual performances at particular exhibits. In addition, interpreters themselves have a variety of needs when it comes to reflection and professional development, especially when considering different groups of interpreters (such as YVC or RNs) with differing levels of expertise. Interpreters who have been working longer have a larger repertoire of experience to draw on when making sense of their past and present performance, which is a resource that novices lack. But a tool designed to support reflection must be able to account for differing levels of experience and skill among users.

Our team found that the most productive approach for the “Paws” support tool was to work within and enhance existing norms and goals for reflection and professional development, rather than attempting to implement our own novel system. To this end, we implemented automatic back-end logging of events and interactions with the “Paws” exhibit and support tool.
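
As an illustration of how lightweight such logging can be, the sketch below appends timestamped exhibit and tablet events to a JSON Lines file. The event names and file format are assumptions for the sake of example, not the project's actual logging scheme.

```python
# Hypothetical sketch: effort-free back-end logging of exhibit and tablet events,
# giving interpreters a record to reflect on without extra work in the moment.
import json
import time

def log_event(source: str, event: str, detail: dict, path: str = "paws_session_log.jsonl") -> None:
    """Append one timestamped event from the exhibit or the tablet as a JSON line."""
    record = {"t": time.time(), "source": source, "event": event, "detail": detail}
    with open(path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Example events; no interpreter effort is required to capture them.
log_event("exhibit", "action_change", {"action": "swim", "era": "future"})
log_event("tablet", "media_displayed", {"media_id": "thinning_sea_ice_map"})
```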

In our current iteration cycle, we are also exploring the use of “tags” for flagging salient events or situations from previous interactions with visitors. This will allow interpreters to build a “folksonomy”-style organizational system aggregating input from interpreters across the YVC and RN programs. For example, a graph showing exponential growth that is engaging for younger children could be tagged #GoodForKids and #ExponentialGrowth, tags pertaining to both pedagogy and content. By making these judgments visible to the group, the system provides an opportunity for PCK growth. Interpreters can recognize correspondences between different tags, revealing previously unconsidered ways that different content is more or less accessible for various visitor audiences. This sort of tagging information is small-scale enough that individual tags could be added opportunistically as interpreters attend meetings or walk between exhibits. Collecting these tags allows for interpreters’ ephemeral insights to be persistent, searchable, and shareable with their peers. The goal of this system is to encourage peer collaboration and reflection by providing persistent information to enhance interpreters’ existing formal and informal sharing of previous experiences.
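
The sketch below illustrates one way such a folksonomy could be aggregated: counting how often tags co-occur on the same media item, which is the kind of correspondence that could reveal new PCK insights. The first record reuses the #GoodForKids / #ExponentialGrowth example above; the other tags and the tag_cooccurrence function are hypothetical, not features of the current system.

```python
# Hypothetical sketch: aggregating interpreter tags and surfacing co-occurrences.
from collections import Counter
from itertools import combinations

tag_events = [
    {"media_id": "exponential_growth_graph", "tags": {"#GoodForKids", "#ExponentialGrowth"}},
    {"media_id": "calorie_graph", "tags": {"#GoodForKids", "#EnergyBudget"}},
    {"media_id": "thinning_sea_ice_map", "tags": {"#GoodForTeens", "#SeaIce"}},
]

def tag_cooccurrence(events):
    """Count how often pairs of tags are applied to the same media item,
    surfacing correspondences between pedagogy tags and content tags."""
    pairs = Counter()
    for event in events:
        for a, b in combinations(sorted(event["tags"]), 2):
            pairs[(a, b)] += 1
    return pairs

for (a, b), count in tag_cooccurrence(tag_events).most_common():
    print(f"{a} + {b}: {count}")
```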

4. Contribution

Through close collaboration between researchers and interpreters, we designed and iterated on a mobile tool that has the potential to support interpreters’ facilitation of an interactive zoo exhibit. Introducing new technology into an existing professional practice is almost always disruptive, so it is imperative for researchers and designers to work closely with end users to minimize disruption that is detrimental to users, while capturing and understanding the more beneficial disruption that can occur, which drives learning and professional development. This paper illustrates three areas in which our tool’s design was revised to support productive disruptions of existing interpretive practice.

Our work focused on three core areas—content mastery, pedagogical content knowledge, and reflection—of the design of a tablet-based tool intended to be used during live interpretation. We found that supplying hierarchically organized information, which would ordinarily reduce the time needed to navigate and locate information, was a poor match for interpreters who might lack the expertise needed to navigate the hierarchy, and whose professional norms stressed that content expertise should be developed outside of times when interacting with visitors. We also found that including dynamic content (e.g., content dependent on the exhibit’s current state) helped interpreters see the value of incorporating multimedia into their interpretation, as it allowed interpreters to present “just-in-time” information that was immediately relevant to understanding the exhibit.

However, the question of what is “relevant” revealed fundamental differences between the strengths of the interpreters and those of the support tool. Technology can quickly and easily process incoming data, making it easy to, for example, have information on a tablet correspond with state changes at an installation, or be coherent with multimedia content displayed previously. But relying solely on this functionality can obstruct interpreters from exercising their own strengths, which include directly appraising visitors’ prior knowledge, experiences, interests, and motivation. These factors are central to interpreters’ decisions about what information is salient to discuss. Tools should not be designed to interfere with this aspect of pedagogical content knowledge, but instead should help support it. Finally, our latest design iteration seeks to streamline reflection on and sharing of insights (particularly PCK insights), so that interpreters can capture insights about media use during interpretation and make them easily searchable.

These findings arise from research on a single tablet-based tool used at a specific interactive exhibit, but our methodological approach of user-centered, design-based research can be a beneficial approach for any researcher/practitioner collaboration. By highlighting and privileging interpreters’ viewpoint, we were able to engage in a productive collaboration across multiple design iterations that will continue to provide important insights into the design of adaptive support tools. While the specific implementation details may differ if someone were to create a tablet support tool for another exhibit, it is likely that many of the issues we uncovered (the need to support continuous content mastery, the need to balance pedagogical “decision making” between the tool and the interpreter to capitalize on their relative strengths, and the need to fit ongoing professional development into existing work habits and constraints) will occur in many interpretive settings.

References

Beck, L., & T. T. Cable. (2012). The Gifts of Interpretation: Fifteen Guiding Principles for Interpreting Nature and Culture. Sagamore.

Cahill, C., A. Kuhn, S. Schmoll, W. Lo, B. McNally, & C. Quintana. (2011). “Mobile learning in museums: How mobile supports for learning influence student behavior.” In Proceedings of the 10th International Conference on Interaction Design and Children, Ann Arbor, MI.

Cobb, P., J. Confrey, A. diSessa, R. Lehrer, & L. Schauble. (2003). “Design Experiments in Educational Research.” Educational Researcher 32(1), 9–13.

Diamond, J., M. S. John, B. Cleary, & D. Librero. (1987). “The exploratorium’s explainer program: The long-term impacts on teenagers of teaching science to the public.” Science Education 71(5), 643–656.

Engeström, Y. (2001). “Expansive learning at work: toward an activity theoretical reconceptualization.” Journal of Education and Work 14(1), 133–156.

Garibay, C., & K. Ostfeld. (2013). “21-Tech: Engaging visitors using open-source apps.” Exhibitionist 32(2).

Hall, T., & L. Bannon. (2006). “Designing ubiquitous computing to enhance children’s learning in museums.” Journal of Computer Assisted Learning 22(4), 231–243.

Ham, S. H. (1992). Environmental interpretation: A practical guide for people with big ideas and small budgets. Fulcrum Publishing.

Hornecker, E., & M. Stifter. (2006). “Learning from interactive museum installations about interaction design for public settings.” Paper presented at the 2006 Computer-Human Interaction SIG of the Human Factors and Ergonomics Society of Australia (OzCHI).

Hsi, S. (2003). “A study of user experiences mediated by nomadic web content in a museum.” Journal of Computer Assisted Learning 19, 308–319.

Institute for Learning Innovation (ILI). (2010). Science On a Sphere: Cross-site summative evaluation. Available http://www.oesd.noaa.gov/network/SOS_evals/SOS_Final_Summative_Report.pdf

Jimenez Pazmino, P., B. Lopez Silva, B. Slattery, & L. Lyons. (2013). “Teachable mo[bil]ment: Capitalizing on teachable moments with mobile technology in zoos.” In Extended abstracts of the 2013 Conference on Human Factors in Computing Systems (CHI EA 2013). Paris, France.

Knapp, D. (2007). Applied interpretation: Putting research into practice. Fort Collins, CO: InterpPress.

Lyons, L. (2009). “Designing opportunistic user interfaces to support a collaborative museum exhibit.” Paper presented at Conference on Computer Supported Collaborative Learning. Rhodes, Greece.

Lyons, L., B. Slattery, P. Jimenez, B. Lopez, & T. Moher. (2012). “Don’t forget about the sweat: Effortful embodied interaction in support of learning.” Paper presented at the 6th international conference on Tangible, Embedded, and Embodied Interaction (TEI 2012). Kingston, Ontario, Canada.

Lyons, L., B. Lopez Silva, T. Moher, P. Pazmino Jimenez, & B. Slattery. (2013). “Feel the burn: Exploring design parameters for effortful interaction for educational games.” Paper presented at the 2013 conference for Interaction Design and Children (IDC 2013). New York, NY.

Norman, D. A., & S. W. Draper. (1986). User Centered System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates.

Schön, D. A. (1984). The reflective practitioner: How professionals think in action. New York, NY: Basic Books.

Shulman, L. S. (1986). “Those who understand: Knowledge growth in teaching.” Educational Researcher 15(2), 4–14.

Slattery, B., L. Lyons, B. Lopez Silva, & P. Jimenez Pazmino. (2013). “Extending the reach of embodied interaction in informal spaces.” Poster presented at the 10th international conference on Computer Supported Collaborative Learning (CSCL 2013). Madison, WI.

Slattery, B., L. Lyons, P. Jimenez Pazmino, B. Lopez Silva, & T. Moher. (2014). “How interpreters make use of technological supports in an interactive zoo exhibit.” Paper presented at the 11th International Conference of the Learning Sciences (ICLS 2014). Boulder, CO.

Tebeau, M., C. Hanson, & A. Harbine. (2014). “Strategies and techniques for mobile interpretation of landscapes and museums.” Paper presented at the 2014 National Association for Interpretation National Conference (NAI 2014).

Tilden, F., & R. Craig. (1977). Interpreting our heritage. Chapel Hill, NC: University of North Carolina Press.

Wertsch, J., & L. Rupert. (1993). “The authority of cultural tools in a sociocultural approach to mediated agency.” Cognition and Instruction 11(3/4), 227–239.

