The school as the crowd: Adventures in crowdsourcing with schools

Ally Davies, Museum of London, UK, Rhiannon Looseley, Museum of London, UK

Abstract

Heritage organisations have embraced the use of crowdsourcing in recent years, and the Museum of London has experimented with using this model specifically with a schools' audience. Two departments (Information Resources and Learning) piloted the "Tag London" project, each hoping to meet discrete needs: to improve online collections data and to engage a large number of schoolchildren. The museum was attracted to the idea of a project in which participants actively engaged with museum objects whilst also helping to improve data. The result was a website that invited eight- to fourteen-year-old children to categorise a selection of museum objects by object type and time period. The initial pilot with some three hundred students produced valuable and encouraging insights. A subsequent consultation with a much broader sample of teachers went on to highlight potential flaws in the project. Teachers reported a need for adjustments to the product that would have worked against the fundamental principle of crowdsourcing; this clear disconnect between the museum's and teachers' expectations required the museum to reflect on what constitutes an acceptable level of compromise in a project intended to benefit all parties. This paper explores the role of choice in participation (were the school students more digital conscripts than digital volunteers?) and the importance of the teacher as gatekeeper. Tag London suggests that assumptions made about one crowdsourcing approach cannot necessarily be applied to another, and the paper reflects on the use of crowdsourcing in a formal learning environment. The museum ultimately decided to pause its use of Tag London with schools, and this paper details what informed that decision.

Keywords: crowdsourcing, school, learning, project, children

1. Introduction

Crowdsourcing offers an enticing proposition for museums. It can provide an opportunity to have tasks completed that would otherwise be impossible to resource. At the same time, it can offer tangible benefits to project participants: engagement with a museum's collection and a feeling of having been part of something interesting and of having contributed or given something back (Ridge, 2013).

The Museum of London has experimented with three crowdsourcing models, one of which engaged a schools' audience. This project set out to test whether the museum could provide a meaningful and engaging experience for schools whilst simultaneously helping to improve collections data. The findings from this experimental project are rich. The data has stories to tell, as does the qualitative information gathered from session observations, teacher interviews, discussions with students, and a later focus group with primary and secondary school teachers. The museum has reflected at length on what failure and success look like in a project intended to benefit "both audiences and institutions" (Ridge, 2014).

This paper provides a history of why Tag London evolved, the questions raised during and after development, its reception by students and their teachers, and the museum’s subsequent decision not to extend the project. It looks critically at the challenges that arose and what the heritage sector can learn. The paper concludes with a reflection on the opportunities offered by the project and a summary of some key insights gained.

2. About the project

2.1 Why crowdsourcing?

Crowdsourcing was one of the areas of focus that formed part of a suite of digital activity funded by a major grant from Arts Council England (http://www.artscouncil.org.uk). The impetus to explore crowdsourcing was, in part, due to its perceived potential to advance the “fundamental change […] taking place in the relationship between the public and museums; a change towards a collaboration of joint interest, joint views, feelings and sensitivities” (Dodd, 1992). As Nina Simon (2010) notes, participation can “inform and invigorate” museums’ practice and offer, and crowdsourcing promises a natural digital extension to the participation movement.

The Museum of London, like many others, makes collections available through a searchable Web database (Collections Online: http://collections.museumoflondon.org.uk/Online). This is a useful step towards opening collections to the public, but nevertheless relies on a model whereby a user passively consumes knowledge created by the museum. Crowdsourcing offered a way in which the museum could invite audiences to become more actively involved in their online exploration of collections.

The museum also had a known requirement for digital collections to be categorised by object type and time period, and it was identified that this need could be met through crowdsourcing. The object-type descriptors used in the museum's databases are intentionally specific: for example, favouring "glass half plate" or "carbon print" over the generic "photography." This can leave them too varied and granular to facilitate ready retrieval by generic object descriptions. The broad time period from which an object dates ("Roman" or "Victorian," for example) is similarly not included for all records, but having objects categorised in this way would again permit more effective collections searching. Given the variability of the existing descriptions, human judgement would be needed for this categorisation, and, due to the scale of the potential undertaking, this project constituted the biggest of the museum's three crowdsourcing pilots.

The museum had identified a strategic objective to engage every schoolchild in London between 2013 and 2018, a challenging aspiration that the museum's physical capacity could only go partway to achieving. This made the Web an obvious channel through which to supplement the existing physical offer. The museum is keen that this online engagement goes further than a simple consumption of online content, and therefore taking advantage of the known benefits to both museums and learners of participatory projects had great appeal. It was envisioned that a specially designed crowdsourcing website for use by schools could address the requirement to reach large numbers whilst simultaneously eliciting valuable data for the museum.

2.2 What is Tag London?

Museum staff met with teachers to consult on the idea of using crowdsourcing for mutual benefit to schools and the institution, and their contributions informed the result: Tag London, a website that invited students aged eight to fourteen to categorise a sample of 1,169 objects by type and time period. The objects were drawn randomly from the museum’s collections database to provide a pool with which to pilot the website. Users were offered an object image and caption and asked to identify a time period and object type, selecting from predetermined lists (figures 1 and 2).

Screenshot of Tag London interface page showing an object and a list of time periods

Figure 1: time period selection

Image of Tag London interface page showing an object and a list of object types

Figure 2: object type selection

Students were able to review their progress in a "My account" section and could gather virtual badges. The website also attempted to indicate likely accuracy levels by showing users' "verification" rate: an object is "verified" when consensus is reached on both answers (time period and object type) by at least seven users (figure 3).
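This consensus rule lends itself to a brief illustration. The Python sketch below is a hypothetical model, not the museum's actual implementation (which the paper does not describe): it treats each element as reaching consensus when at least seven users choose the same value, and counts an object as verified when both elements do so, matching the "either element"/"one element only" breakdown reported later in this paper.

```python
from collections import Counter

CONSENSUS_THRESHOLD = 7  # the deliberately cautious level used in the pilot

def element_consensus(answers, threshold=CONSENSUS_THRESHOLD):
    """Return the winning value for one element (time period or object type)
    if at least `threshold` users chose it; otherwise return None."""
    if not answers:
        return None
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= threshold else None

def is_verified(period_answers, type_answers):
    """An object is 'verified' when both elements reach consensus."""
    return (element_consensus(period_answers) is not None
            and element_consensus(type_answers) is not None)

# Example: eight users agree on the time period, but only six agree on the
# object type, so this object reaches the threshold on one element only.
periods = ["Roman"] * 8 + ["Medieval"] * 2
types = ["container"] * 6 + ["eating and cooking tools"] * 4
print(is_verified(periods, types))  # False
```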

Image of Tag London interface page showing thumbnails of objects tagged and a bronze badge

Figure 3: “My account” page showing objects completed and bronze badge

Each class was given a set of anonymised logins, with a separate teacher login. The teacher login displayed all the objects tagged by the class and how many times each object had been tagged (figure 4). It also showed a list of all class users, identifiable by login ID, how many objects each user had tagged, and the percentage of their objects that had been verified (figure 5).

Image of Tag London interface page showing thumbnails of object images

Figure 4: teachers’ view of all objects tagged by their class

Image of Tag London interface page showing thumbnails of student avatars

Figure 5: teachers’ view of class members and number of objects tagged and percentage verified

The project was a collaboration between the museum’s Learning department and its Information Resources Section (IRS, the department responsible for the museum’s collections documentation), as both departments hoped to reap important benefits from the project. IRS was interested in the obvious benefits to the museum of gathering data. The Learning department ensured that the project also sought to aid classroom learning and identified key learning objectives accordingly. These included supporting student knowledge (“students have an awareness of the practical uses of databases in real-world settings”) and attitudes (“students are effective at coworking with a partner or group to share opinions on appropriate tags”).

Explicit links were made to the National Curriculum for England. For example, the increased focus on chronological understanding within the new History curriculum (Gove, 2013a) was supported by the requirement for students to identify object time periods, and components of the primary and secondary Computing curriculum were directly addressed through use of, and reference to, the website. The project also supported twenty-first-century skills acquisition, particularly technological literacy. It was envisioned that the website would offer a light-touch, easy means to introduce or complement these curricular themes and skills via a short classroom or homework activity.

2.3 Development discussions and decisions

Some of the decisions the museum faced during the development of Tag London are relevant to the analysis of the project that follows. The development of the taxonomy is a good example of the imperative for compromise inherent in this mutually beneficial project. The list of object types is based on the British Museum Object Names Thesaurus (1999). For the museum to gain data of sufficient granularity to meaningfully support future search needs, a detailed categorisation of object types would be required. However, the language of the categories in the object names thesaurus was designed for curatorial or research audiences: audiences thoroughly engaged with and immersed in a museological or archival register. Much of this language would be confusing to schools users. One listing included the word "instruments" to mean tools: intuitive enough for an adult or academic audience, but likely to carry specifically musical connotations for an eight-year-old. Similarly, the category for:

Stimulant/narcotic equipment: includes alcohol equipment, betel equipment, coffee equipment, lime equipment, opium equipment, smoking equipment, snuff equipment, tea equipment, tobacco equipment

felt intimidating and inaccessible, but proved challenging to faithfully reword. These objects became encompassed under “Assorted (other) equipment,” leading to some inevitable loss of detail (although the museum did ensure that the student-friendly taxonomy was mapped back to the original version to enable future use and to futureproof Tag London project data). Despite efforts to distill it, the list of object types still ran to thirty possible selections (figure 6). The process of translation required compromise from both schools and the museum.

Image of Tag London interface page showing full list of object types

Figure 6: full list of object types
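The mapping between the student-friendly taxonomy and the original thesaurus described above can be pictured as a simple lookup table. The Python sketch below is illustrative only (the museum's actual mapping is not reproduced in this paper); it shows how a crowd-applied tag such as "Assorted (other) equipment" could later be resolved back to the detailed thesaurus terms it absorbed, futureproofing the project data.

```python
# Illustrative only: a student-friendly category mapped back to the detailed
# British Museum Object Names Thesaurus terms it absorbed. The dictionary
# contents here are examples drawn from the paper, not the museum's mapping.
STUDENT_TO_THESAURUS = {
    "Assorted (other) equipment": [
        "alcohol equipment", "betel equipment", "coffee equipment",
        "lime equipment", "opium equipment", "smoking equipment",
        "snuff equipment", "tea equipment", "tobacco equipment",
    ],
    # ... one entry per student-facing category
}

def candidate_thesaurus_terms(student_tag):
    """Return the detailed thesaurus terms a crowd-applied tag may stand for,
    so crowdsourced tags remain usable against the full taxonomy."""
    return STUDENT_TO_THESAURUS.get(student_tag, [student_tag])

print(candidate_thesaurus_terms("Assorted (other) equipment"))
```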

After discussion, the museum opted to vet object images in advance of the pilot in order to prevent the inclusion of content that would be inappropriate for young audiences. This process meant checking the list of objects and manually removing problematic examples. The exercise was a worthwhile one, with a potentially sensitive image of a flyer from a London rubber and leather fetish club removed (figure 7). However, the requirement for staff time to undertake this exercise was noted as a potential threat to the project’s intended low demand on staff capacity and ultimate self-sufficiency, one of the main attractions of crowdsourcing.

Poster of man wearing leather fetish outfit

Figure 7: poster dating from 1999 advertising The Backstreet, a London fetish club, removed from Tag London as part of the manual review of images for appropriateness. Digital image © Museum of London

2.4 The in-school pilot

The in-school pilot activity took place in early 2014 in two primary schools (which cover ages four to eleven) and two secondary schools (covering ages eleven to eighteen). Most classes had two lessons around the project. The first, supported by a presentation created by museum staff, introduced data as a concept (this appears within the National Curriculum for Computing and was identified as a topic that Tag London could support) and the project itself. The second saw the students make use of the website. In all, some three hundred students used the website.

Museum staff observed all lessons and evaluated them on a scoresheet built around the questions the pilot aimed to test and the learning objectives (figure 8). Lessons were scored against a set of pre-agreed criteria, and further comments were captured. Teachers were also asked how easily the project integrated into existing schemes of work and how much support would be needed, and museum staff interviewed teachers and pupils after the lessons to ascertain their views. After the pilot, the scoresheets and interviews were analysed and a meeting was held to summarise the findings.

Image showing scoresheet including numbers and descriptions for interpretations of learning objectives

Figure 8: example of scoresheet used by staff observing lessons

The pilot project was well received, and the scoresheets consistently gave mid- to high scores against the achievement of the learning objectives. Teachers reported being particularly pleased with the experience, both in relation to how engaged students were and how their learning and confidence were enhanced. Amongst other comments, teachers reported a need for students to receive more instant and ongoing feedback on their progress. This request, and the inherent complexities within it, is discussed later.

Students perceived the project positively, particularly in primary schools, and were observed to be engaged and enthusiastic (for example, observing staff ranked all classes either 1 or 2 against the objective "Students showed a continuing enthusiasm for the task"; 1 indicated, "Students start the activity engaged and curious, and retain excitement levels as each new object appears," and 2, "Students show some excitement as each new object appears, but this diminishes over time or is dependent on the object"). The way in which the project supported coworking was repeatedly noted, with one observer reflecting of a primary group that "working in pairs was a big help for this age group and … led to a good deal of positive discussion about the task and the objects themselves." Observation revealed that, as suspected, the length of the object-type list caused some confusion, as did uncertainty about how to categorise objects whose function was unclear or which overlapped time periods.

2.5 Analysing the crowdsourced data

The crowdsourced data revealed a wealth of complex narratives. However, to summarise some top-line findings, of the 424 objects tagged:

  • 137 were counted as "verified," as seven or more users agreed on the same answer for both elements
  • 77 did not reach the consensus threshold of seven on either element
  • 210 reached the consensus threshold on one element only

For the pilot only, museum staff reviewed student selections. Of the 137 "verified" objects, the museum deemed 91 to be "correct" on both elements. Therefore, within this pilot, 66 percent of objects that achieved consensus on both elements were tagged wholly in line with museum staff selections; this represents 21 percent of all objects tagged.
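As a quick check, the headline percentages follow directly from the counts reported above:

```python
# Reproducing the pilot's headline figures from the reported counts.
tagged = 424              # objects tagged in total
verified = 137            # consensus reached on both elements
correct_of_verified = 91  # verified objects matching museum staff selections

print(f"{correct_of_verified / verified:.0%} of verified objects")  # 66%
print(f"{correct_of_verified / tagged:.0%} of all tagged objects")  # 21%
```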

It is worth mentioning that setting the requirement for consensus at seven identical responses was intentionally cautious, given the experimental nature of the project. The data suggests that this requirement could be lowered without negative impact on the data returned.

Of the small number of objects that were "verified" with an answer that differed from the museum staff member's selection, most attributions were nonetheless intelligent and thoughtful rather than grossly inaccurate or deliberately wrong. For example, understandable confusion occurred over whether Roman bowls should be categorised as "containers" or "eating and cooking tools." Many conflicting attributions occurred in instances where a single "right" answer was difficult to judge, with more than one reasonable categorisation available. Therefore, although the sample was small, the data suggests that the project could produce consensus data broadly in line with what the museum would deem the "right" answer. It is important to note that where differing selections could be made, "rightness" and who can judge it are, of course, loaded concepts. The requirement for museum staff to assess the "rightness" of student attributions in order to judge the efficacy of the pilot raised familiar questions around authority in the museum space. This notwithstanding, overall the data findings were heartening.

2.6 Further teacher consultation

The pilot project raised new areas for discussion. It was evident that the sample size of the pilot was too small to enable the museum to extrapolate how a large-scale rollout could work. Decisive findings to answer important questions (such as whether the project best suited primary or secondary schools, or whether it should support the History or Computing curriculum) were lacking. Before taking further action, it was important to ask further questions of teachers, engaging a neutral sample beyond the core of teachers who had agreed to participate in the pilot and who, through supporting the museum with the initiation and development of Tag London, were themselves invested in it.

In September 2014, the museum hosted a group of twenty-seven primary and secondary school teachers for a focus group about Tag London. Unlike those who participated in the pilot, these teachers were asked to use the site without introduction, mimicking the experience of an uninitiated user were the project to be rolled out on a large scale. This modelling of unmediated use proved valuable, immediately highlighting how much interpretation would be required to ensure understanding of the project. Teachers in this group reported feeling confused and unsure about why the task was being undertaken. Both primary and secondary teachers were insistent on the need for clear success markers for students and immediate feedback on whether answers were "right" or "wrong." Feedback on the website included greater concern than the pilot had implied about the accessibility of the language used in the captions and about the length of the lists of terms. Teachers demanded more interaction (such as making objects zoomable and three-dimensional) and greater gamification: changes that would call for complex and costly remodelling of the current site. The task's "real-world value" was, teachers reported, crucial to ensuring take-up of the project; the "the Museum of London needs YOU" angle should therefore be emphasised, with a call for help made a key feature of the website. Some of the requirements expressed perhaps reflect weaknesses in the website design and could have been addressed with remedial adjustments (for example, the lengthy lists of possible selections could have been made more accessible through a design intervention).

Others, however, would constitute a much more fundamental reframing of the project. There was a strong desire for a system that allowed the teacher to pre-filter objects, limiting the objects the students saw to a specific time period or type. One teacher noted: “As a history teacher, one of the big things is that I’d like to be able to pick what year they’re studying so if they’re learning Georgians, I want to be able to go in there and click on Georgians and all the students are doing Georgian activities.”

Even after museum staff explained that supporting such filtering would require one of the activity's own tasks to have been completed in advance, the call for this functionality remained insistent.

2.7 Decision not to extend the project

Ultimately, the museum has taken the decision not to continue with Tag London as a schools' project. This decision was based on evidence that the needs the project was intended to meet, for both schools (an engaging learning experience) and the museum (reaching large numbers and improving collections data), were not satisfactorily served by the existing design and approach. The adjustments required to adapt Tag London to better meet stakeholder needs would be significant in both time and budget. Other approaches to meeting these needs are being considered.

The technical infrastructure has been retained as a valuable digital asset and will be reused in the future for a project with a more general audience. The learning from the project is invaluable for future crowdsourcing and learning projects.

3. Can crowdsourcing work with schools?

The findings from the project go some way toward suggesting that crowdsourcing with an in-school audience presents a different set of challenges from crowdsourcing with an adult audience, and certainly that assumptions made for one cannot be assumed to hold for the other. A number of factors appear to be at play. Without further research, only speculation is possible about the influences at work within this new strand of crowdsourcing; it is hoped that by sharing this learning, new avenues for research will be opened.

3.1 Digital volunteers versus digital conscripts

Crowdsourcing has been likened to volunteering in the digital realm. Mia Ridge identifies one type of cultural heritage crowdsourcers as “digital volunteers” who are deliberately participating in the task (Ridge, 2013). Ridge quotes a number of researchers (e.g., Raddick et al., 2009; Oomen & Aroyo, 2011) to show that many participants in cultural heritage crowdsourcing projects find pleasure in the altruism involved and treat such projects as enjoyable hobbies from which they derive benefits of community, generosity, and sharing.

Tag London, as a user experience mediated by a teacher in the context of a formal educational environment, removes both the element of choice to participate and the sense of it as a satisfying use of leisure time. It could be argued that students are effectively digital conscripts rather than digital volunteers. This may explain why focus group teachers were skeptical about the project’s ability to motivate their students. What seems clear is that a project needs to work harder to engage audiences where the voluntary and informal aspect is not present. As explained above, in the case of Tag London, intended to be a relatively “light-touch” project, this harder work (represented by the complex design changes teachers suggested might make it more appealing) made proceeding untenable.

3.2 You scratch my back…

Tag London was heavily premised on a notion of mutual benefit to all the engaged parties: the museum, school teachers, and students. As the project evolved, it became clear that to achieve this aspiration of shared benefit would demand concessions of its stakeholders, requiring the museum to identify when the level of compromise became too great.

The simplification of the object-type list was deemed acceptable (indeed, necessary) within the bounds of the project. The requirement to manually check the list of objects gave rise to discussions about whether collections could ever be automatically surfaced to schools without human intervention, given the age of the audience and the diversity of the museum's social history collections. The greater the requirement for collections to be mediated by staff, the more constrained the potential benefit to the museum (in this case, the saving of staff time). Since the aim was to scale the project up to reach large student numbers, this issue would only have been heightened.

These issues incrementally constituted useful—but not project-threatening—learning. The comments from the teachers’ focus group, however, would suggest that the adjustments required in order for the project to meet the reported user needs would be so great as to fundamentally undermine the key premise of reciprocal benefit. For example, the object type list was too long to be accessible for young children, but to distill it would limit its capacity to provide the museum with useful data. The teachers’ desire to be able to pre-filter objects in the website is also telling; to enable filtering, the objects delivered to the user would need to have already been pre-categorised, thus negating the benefit to the museum of having this work done by the “crowd.” The teachers stressed the importance of the project having real-world value, and yet the implications of creating that value by ensuring benefit to the museum risked making the project an unattractive proposition for teachers.

Would it ever have been possible for Tag London to achieve mutual benefit? A contributing factor could be the demands on teachers' time in the United Kingdom and the culture within the education system ("Teachers of all types work around 12 hours a week around […] their normal working week" (UK Department for Education, 2014)). Tag London, in covering a range of time periods, would not neatly address a single history topic, nor precisely match the curriculum requirements for teaching about data within Computing. Secondary teachers in the focus group noted students' preoccupation with exam results (explored below) and disengagement with anything that will not obviously help them achieve good results. In a culture where teachers report that a large proportion of their time is spent on tasks that do not directly benefit children's learning (NUT, 2014), it is easy to understand why they resist activities that do not directly cater for a specific curriculum area. In this context, the focus group teachers were less inclined to be sympathetic to the museum's needs for the project.

It is also possible that the museum's agenda affected its capacity to make adjustments that might have made the project more palatable to teachers. Just as tester teachers reported attaching value to the project's shared benefits but were unwilling to accept a compromised product to facilitate them, museum staff may equally have felt unable to compromise organisational benefit to better support the user experience. Lynch writes of projects where

[c]hallenge to the organisation’s plans was typically averted or subtly discouraged. Thus, while an illusion of creative participation is on offer in such situations, decisions tend to be coerced, or rushed through on the basis of the organisation’s agenda or strategic plan… (Lynch, 2011)

Whether or not this was at play is hard to judge. Ultimately, the requirements expressed by focus group teachers so far exceeded what the museum could support, in terms of both organisational benefit and simple time and budget constraints, that the project could not proceed on the original terms. It became clear that the museum could deliver greater benefits to students by redirecting resources to other initiatives.

3.3 The gatekeeper’s voice

It is interesting to contrast the largely enthusiastic and accepting student response to Tag London with that of the focus group teachers. This comparison requires some disclaimers, as the experiences of each party cannot be compared like for like. The students had a mediated encounter with the product, preceded at least by an introduction from the teacher and in most cases by an additional structured lesson covering relevant concepts. The focus group, in contrast, was intentionally constructed so that participants were exposed to the website with a minimum of explication or support. The pilot also involved only four schools, whereas the focus group included staff from twenty.

Setting these disclaimers aside, the difference in responses is nonetheless marked. Pilot students appeared open and willing, rarely displaying concern on encountering an unknown word or object. They undertook the task with limited queries, and museum staff observing sessions saw no great evidence of anxiety about the language used in the supporting captions, despite its occasional complexity (the text, from Collections Online, was designed for an adult audience and includes specialist terms such as “Dupondius” for a Roman coin or “Dirk” for a dagger). In contrast, the focus group teachers were visibly confused, and rapidly disengaged, by the proposition. One, reflecting the mood, said, “I was wondering why I was doing it,” and many queried the task’s value.

This discrepancy between teacher and student responses is a reminder of the challenges in creating a resource intended for use by teachers with students. The museum was bound to test a product intended for student use with teachers, despite the differences in these two audiences. The voice of the teacher, as gatekeeper to the students, must be heeded seriously—even where, as here, it conflicts with that of the students.

It is possible that the nature of the project destabilised focus group teachers who were unfamiliar with the concept of crowdsourcing. For a profession used to being the holders and disseminators of knowledge, being asked to engage with an unfamiliar concept without interpretation could be unsettling (a Department for Education report on effective teaching notes that "[c]onfidence for many teaching practitioners stems from experience" (McBer, 2000)). This is somewhat borne out by an instance in one of the pilot schools where a teacher, having undertaken to use the website, called a museum staff member the day before the lesson professing a lack of preparedness and, therefore, an unwillingness to teach this new area.

A final conjecture on why this discrepancy may have occurred lies in the diversity of voices represented. For the purposes of testing, Tag London catered for both primary and secondary audiences, with a view to targeting one or other in future iterations depending on the pilot findings. In also testing and consulting with both students and teachers (of both History and Computing at secondary level), further layers of audience diversity were added. These are different audiences with very different priorities, and their wide-ranging responses are perhaps not surprising.

This is an inevitable challenge faced by any organisation when evaluating a product: how does one reconcile differing viewpoints and draw conclusions? What is clear is that this should serve as a reminder not to assume that teachers can always represent the experience of their students. Indeed, as noted in a discussion within the UK Group for Education in Museums (GEM): "It is not possible to make valid claims about learning impacts on students based on data collected from teachers" (Jensen, 2014).

For all that differing responses from students and teachers present a challenge for the project initiators, the openness of the students to this new and potentially challenging task was inspiring.

3.4 Wrong or right?

The educational climate in the United Kingdom (discussed above) is a useful context within which to reflect on a specific strand of feedback. During discussions around the potential teaching uses of Tag London and how it could be used to facilitate reflective discussions, one focus group teacher (who had museum education experience and personally favored discursive learning approaches) contrasted her own preference with the need for direct efficacy in supporting assessment: “My students are very much conditioned to ‘Is this going to get me an A or not? Is this going to get me the C or not?'”

The UK media and teaching networks frequently see discussions around the notion of teaching as limited by an insistent focus on assessment results. A 2010 report identified several risks precipitated by the focus on assessment, including "the marginalization of certain subjects," "an adverse effect on teaching and learning," and "the 'cramming effect'" (Bew, 2010). Perhaps it was this "adverse effect on teaching and learning" that was at play in the insistent call from teachers for greater clarity on how "success" was made visible within Tag London.

On experimenting with the website, one teacher at the focus group reported, "we were getting it wrong on purpose, just to see"—a reference to the expectation of immediate feedback from the website on the "rightness" of the selection. Teacher language in general was heavily inclined towards "right" or "wrong" answers, with one teacher, even after recognising the complexity of the museum predetermining and supplying answers and the likelihood of instances where a single "correct" classification was impossible, stressing: "… there needs to be more of a: boom, you got one right! … it needs to be in their face, not just the 'it might be verified, it might not be verified.'"

There are many examples of objects where a single "right" answer would be a subject of debate even for specialist curators. An early Bronze Age mace head (e.g., http://archive.museumoflondon.org.uk/collections-research/collections-online/object.aspx?objectID=object-1357&start=21&rows=1), for instance, may have been used ceremonially (arguably falling under either "religious or ritual equipment" or simply "Assorted (other) equipment") or as a battle axe (hence "weapons, armour and equipment for hunting"). Even if curators were making this assessment, their response, whilst originating from a more cultivated perspective, would potentially be no more "correct" than a valid attribution by a non-specialist user. Recent changes to the National Curriculum have shifted it further towards an E.D. Hirsch-influenced, knowledge-led model of learning, privileging the formal delivery of known facts ("We need teachers to actively pass on knowledge, organized in academic disciplines…" (Gove, 2013b)). This is perhaps an uneasy fit with the act of tagging, where feedback about "wrong" or "right" answers is neither known in advance (hence the need to crowdsource the data) nor constructive (as any well-judged answer can constitute a "right" attribution).

4. Conclusion

Ultimately, the museum took the strategic decision, based on the evidence available, not to extend its experimentation with Tag London beyond the pilot stages. Teachers in the focus group clearly did not support the model in its current form, and the alternative product they requested would constitute a radical reframing of the original project intentions. It was evident that the task the museum had set itself, premised on benefit to all, was a very challenging one.

However, it would be a mistake to view the decision to halt as failure. On the contrary, Tag London afforded the organisation vital insights not only into current teaching practice, but also its own priorities and practices. For an organisation that typically creates educational content solely to support schools’ users without expectation of return, this experiment in educational content yielding tangible gains to the museum was a worthwhile one.

Other project successes include gaining new insights into producing technology products with teachers and an awareness of the differences between voluntary crowdsourcing by adults and co-opted crowdsourcing by students in school. Any activity enabling a refreshed approach to, and discussion of, varied means of addressing key objectives must also be considered successful.

Furthermore, Tag London represented an important foray into the largely untested world of museum crowdsourcing in a formal learning environment. This, all achieved on a limited budget and affording the Museum of London a website interface that will be repurposed and pressed into service for future projects, represents a dexterous model for working with new technology: a model that the museum celebrates and intends to repeat.

Given the emphasis in Tag London, and in this paper, on openness to a variety of attributions as "right" and on learning through experience, to dismiss the project as "wrong" because some of the complexities it revealed proved irreconcilable would be to fail to practise what is preached. Rather, the wealth of questions it has raised yields rich possibilities for further exploration.

Acknowledgements

The Museum of London would like to thank all participating schools and focus group teachers who gave their time to develop the Tag London project.

References

Bew, Paul. (2010). "Review of Key Stage 2 testing, assessment and accountability: progress report." Consulted December 10, 2014. Available https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/180401/DFE-00035-2011.pdf

Black, Graham. (2007). The Engaging Museum: Developing Museums for Visitor Involvement. Abingdon: Routledge.

Dodd, Jocelyn. (1992). “Whose museum is it anyway?” In Eilean Hooper-Greenhill (ed.). The Educational Role of the Museum (2005). Abingdon: Routledge, 131–133.

Gove, Michael. (2013a). Written statement to Parliament: “Education reform: schools.” July. Consulted January 5, 2015. Available https://www.gov.uk/government/speeches/education-reform-schools

Gove, Michael. (2013b). Speech: “Michael Gove speaks about the importance of teaching.” September. Consulted December 23, 2014. Available https://www.gov.uk/government/speeches/michael-gove-speaks-about-the-importance-of-teaching

Jensen, E. (2014). “Methods of getting feedback from teachers.” Group for Education in Museums discussion. November 19. Consulted November 28, 2014. Available https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1411&L=GEM&F=&S=&P=91142

Lynch, Bernadette. (2011). “Whose cake is it anyway?” Consulted November 17, 2014. Available http://www.phf.org.uk/downloaddoc.asp?id=547

McBer, Hay. (2000). “Research into Teacher Effectiveness: A Model of Teacher Effectiveness.” Consulted December 12, 2014. Available http://dera.ioe.ac.uk/4566/1/RR216.pdf

Museum of London. (n.d.). Collections Online. Consulted January 5, 2015. Available http://collections.museumoflondon.org.uk/Online

National Union of Teachers (NUT). (2014). Teachers’ New Year Message. January. Consulted December 9, 2014. Available https://www.teachers.org.uk/files/final-yougov-nut-survey-report-10jan14.doc

Ridge, M. (2013). “From tagging to theorizing: deepening engagement with cultural heritage through crowdsourcing.” Curator: The Museum Journal 56(4), 435–450.

Ridge, M. (ed.). (2014). Crowdsourcing our Cultural Heritage. Farnham: Ashgate.

Simon, Nina. (2010). The Participatory Museum. Consulted November 27, 2014. Available http://www.participatorymuseum.org/chapter1/

Trustees of the British Museum and Collections Trust. (1999). British Museum Object Names Thesaurus. Consulted December 9, 2014. http://www.collectionstrust.org.uk/assets/thesaurus_bmon/Objintro.htm

UK Department for Education. (2013). Computing Key Stages 3 & 4, National Curriculum for England. Consulted December 23, 2014. Available https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/239067/SECONDARY_national_curriculum_-_Computing.pdf

UK Department for Education. (2014). Teachers' workload diary survey 2013: research report. February. Consulted December 23, 2014. Available https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/285941/DFE-RR316.pdf

Your Paintings Tagger. (n.d.). The Public Catalogue Foundation. Consulted January 5, 2015. Available http://tagger.thepcf.org.uk


Cite as:
. "The school as the crowd: Adventures in crowdsourcing with schools." MW2015: Museums and the Web 2015. Published January 15, 2015. Consulted .
https://mw2015.museumsandtheweb.com/paper/the-school-as-the-crowd-adventures-in-crowdsourcing-with-schools/