Connecting collection objects from museum deposits and reserve areas to the Web

Bernardo Uribe Mendoza, Universidad Nacional de Colombia, Colombia, José J. Martinez, Universidad Nacional de Colombia, Colombia, Henry Galindo, Universidad Santo Tomas, Colombia

Abstract

This paper presents work on the visualization of museum collections that displays the collections ‘live’ as a way to redesign the preservation areas of museums. To achieve this goal, we set out to integrate several virtual-studio techniques with bidirectional multicast communication on the Web. Museum architecture was also reviewed in order to turn this new kind of built-in infrastructure into what could become a new typology for museum reserve and curatorial areas. Dynamic lighting, mixed-reality (MR) Java 3D collision detection, and other software tools were tested and adapted to the real-time applications that support live navigation of the objects. The bidirectional man/object communication that was developed is set forth in this paper as a simple way to implement the Internet of Things (IOT) in the museological context.

Keywords: ASM Multicasting, 3D Visualization, Internet of Things, Bidirectional Multicasting, Reserve Areas

1. Introduction

Although the work presented here began in the 2000s, it is closely related to the current investigation and development of Internet of Things (IOT) applications, not least because it is mainly aimed at connecting objects of museum collections to the Internet. The IOT represents a new stage in the development of the Internet, following Web 2.0 and the explosive growth of mobile personal communication: after a period of mobile communication focused on man/man exchange, it explores alternatives for connecting “things” to the Internet, with the aim of incorporating such uses into an intelligent environment or attaining a wider range of utility.

Since the last decade, the Mundos Virtuales research group of the Universidad Nacional de Colombia in Bogotá has worked on the idea of creating a mixed-reality (MR) studio for the live capture, over the Internet, of objects from collections of the decorative arts. With that aim, it developed a bidirectional communications environment based on software we call JOrta. JOrta derives from ORTA (Strowes, 2006), a preexisting application for bidirectional multicast communication written in C++ and freely usable on the Linux operating system. It was redesigned for the project in the Java programming language so as to be compatible with the Windows operating system, and thus with the drivers of most multimedia devices, since the objective was to share the visualization of objects from collections that are guarded in storerooms so that experts and interested students may study them from afar, in groups (e.g., in academic centers located in countries other than the one where the museum that houses the collection is found).

This paper suggests that the experience developed in the course of this work may serve as a pilot project for what may become a valuable way to guard objects in museums, which would allow them to be visualized or visited at any time online by groups of scholars and researchers without any need to exhibit or handle them for that purpose. We propose a system for archiving the objects that is suitable for connecting the objects to the Web. The idea is that the objects, not their archives guarded in a digital repository, may turn into a live catalogue by means of their connection to the Web and become available for study and research in the manner of, for example, “real” books in libraries. In contrast with real, physical books, where all that matters in the end is the text, in a museum collection the particularity or “aura” of the object is important and is a crucial reference point for “reading” the artwork. (There may be different “readings” of the object and its “aura” by the members of a collaborative session.) In that regard, the development of the sensorial tools found in IOT technology opens exciting possibilities, insofar as it covers an important aspect of perceptual enhancement that enables us to “contemplate” such objects beyond natural methods.

Our experience has focused on high-definition (HD) visualization, an important factor in the “contemplation” and study of an object in a collection. As with the general guidelines for IOT applications, where embedded sensorial tools are important, enlarging the fields for perceiving the object beyond the visual one is useful in the case of objects connected to the Web, as it broadens the spectrum of real-time online information about them.

One of the most important characteristics being developed for the IOT involves the personalization of information systems through platforms that, in addition to connecting things to the Web, allow them to interact with the human factor. In our work, the connection of the object was implemented by using a bidirectional ASM (Any-Source Multicast, the classic IP multicast service model) platform, with one or several users and the possibility of joint use. The human factor intervenes in the ambit of the object and its contemplation with the possibilities that bidirectional communication opens, which makes it possible to communicate with the objects in the archives of a collection, and in this case with the environmental components of the system for archiving or curating the object. This feature is an example of an outstanding characteristic of the IOT: locatable information systems. In this project, it refers to the connection of objects kept in the reserve areas of the collections of a museum, where they are guarded, archived, or curated.

The experience gained with the JOrta bidirectional communications system developed by the research group exemplifies another important aspect of the IOT. While the information systems of Web 2.0 are designed for a single use, those of the IOT are versatile. The proposed system for archiving or curating can be used to visualize and study objects in collaborative sessions, as the project has shown, but above all it embodies a novel “tectonic” conception: with embedded interconnection systems as a possible future development in museum architecture, it can also be used to control and monitor the objects, since both purposes are acknowledged characteristics of the IOT, apart from the Web interconnection of objects as such. (Architects have taken the tectonic concept from geology, where it refers to layers of material and their arrangement in a geological formation, and extrapolated it to the idea of how, in buildings, the material, its arrangement (be it homogeneous or in layers), and its texture affect the spatial form of an architectural design. In the present paper, we propose to extend the term to include the infrastructure of interfaces, like those of a virtual studio, with the idea that this new layer of “material” may significantly transform the architectural form of the building.)

The level of collective or swarm knowledge that may be reached is an obvious concern when dealing with a collection of objects and the intrinsic relations among the objects that make it up. A novel tectonic layer integrated with the systems for interconnecting the objects on the Web as a way of guarding or archiving them opens these possibilities, given the current pace of the emergence of IOT technology, which is still thought to be in its infancy.

In view of the above, this paper is divided into two parts. One discusses the experiences with JOrta and the online virtual studio and with bidirectional communication with decorative-arts objects (cups, plates, chinaware), which allow us to confirm that it is possible to connect them (visually) to the Web. The other part is complementary and presents an extrapolation of these experiments: it proposes a tectonics (mixed architecture) that takes advantage of a technology like the IOT; in short, the embedded use of hardware (cameras, sensors) and communications software in a future system for archiving, guarding, and curating the objects in a collection.

2. From virtual mixed reality to IOT

This section presents a futuristic extrapolation, in the form of a new tectonics for museum deposit areas, and even for the building as a whole, of the ideas on the Web interconnection of objects of museum collections that have been examined in the course of our research.

In current augmented-reality (AR) interfaces, the architectural space is an important part of the applications. This is the space regarded by architects and designers as a topological “place,” together with its material characteristics, as defined by the concept of tectonics (i.e., layers of building materials organized in a three-dimensional, topological arrangement). The “tectonics” of built-in AR infrastructure involve different design disciplines.

Software: interactive and programmable iconic spatial boundary, defined by human-body interfaces.
Hardware (tectonics): material spatial boundaries, defined by a formal topological construction.

Table 1: built-in virtual studio infrastructure and tectonics

Such a typology would mean significant changes in tectonics and design, owing to the introduction of the infrastructure needed for virtual-studio techniques, embedded sensorial tools, and collaborative environments in the areas that house collections in the preservation and curatorial zones of museums, as proposed in this work.

This proposal is meant to allow for the live capture and dynamic visualization of these collection objects and thus open the way for bidirectional collaborative work (object/object and human/object) in real time on the Web. In the museum portals seen on the Web today, access to collections by the public or scholars is mainly gained by navigating an established menu of collections drawn from a repository of two- or three-dimensional images or clips, not “live” material (real objects).

Current museum buildings offer a continuity of space, with tectonic divisions between the areas that house objects viewed by the public and those that preserve objects not on display. Their public halls and galleries are specifically built for the physical contemplation of their public collections, and virtual reality (VR), AR, or MR complement those arrangements so that the public or scholars may study the objects. Digital objects, CAD tools, and other means of visualization or augmented reality appear mainly as a complement to the displays of actual “live” collections of selected or curatorial objects in such galleries. In many cases, the physical space of the gallery is photographed in advance, as in Google’s Art Project, whereas the collections stored for preservation are displayed through 2D or 3D databases and search-engine charts.

Preservation zones: physical objects.
Exhibition areas: physical objects; digital VR/MR/AR supplement to collections open to the public (stand-alone or Web) and databases of collections in the preservation zones.

Table 2: existing typology of architecture for museums and new media infrastructures

Few studies propose modifying these relations through an infrastructure like that of the virtual studio and bidirectional Web communication, used as a way to archive the objects in a collection so that they are available online live, with possibilities for communication in both directions.

The transformation would work in the following way:

Areas of preservation: physical objects in the collections, not open to the public.
Digital supplement: live display of objects from all collections.
Physical supplement: exhibition areas for selected objects or collections.

Table 3: proposed typology of architecture for museums, including built-in virtual studio infrastructure and embedded sensorial tools


Figure 1: (A)(B)(C) architectonic drawings of preservation and storage of collections using a built-in virtual studio and embedded sensorial techniques. (A)(B) Small remote-operated compact cameras navigate geodesic circular tracks around the objects in preservation vaults; other sensorial tools may be embedded in the tracks. (C) The basic software and hardware technologies required, such as synchronized media and hardware navigation, virtual studio, and multicasting techniques, were tested in the experimental project.

3. Man/object communication in a bidirectional multicasting platform

This section presents the development and applications implemented in the course of the work of the research group, with the aim of exploring the feasibility of the ideas proposed in the previous section.

To create a hardware/software platform for a man/object interaction environment in which the “live” examination and study of the object is possible, several multimedia and interface-control techniques were employed and adapted to an ASM multicasting application for an interconnected virtual studio. At the hardware level, this included live HD video capture; user-controlled camera motion for navigation and live capture, by means of a dynamic arm podium driven by a joystick; dynamic lighting for object extraction, using interactive microcontrolled RGB LED chroma-key scene lighting; and collision techniques for safe camera navigation. At the software level, 3D scenes for stereoscopic viewing were composed with real-time HD live video (Java 3D and VRML/X3D) through bidirectional multicast communications. This was adapted to the JOrta (2012) multicast software, based on the free ORTA multicast software of Stephen Strowes (2005) and the XORTA of 2009. The model used for the graphical user interface (GUI) was Skype, but the 2D video display of the main videoconferencing window was replaced either with the corresponding composite X3D scene of the object, with live 2D HD video embedded, or with the more basic 2D HD video source of the viewed object.

3.1 Hardware

Dynamic navigation. An automated arm podium was developed to support a handycam (Sony HDR-HC5). The design features include:

  • Slow, smooth motion for seamless, continuous video viewing of objects.
  • A 360-degree observation field over the object’s upper hemisphere. The camera’s approach to the object is unrestricted, allowing the use of a macro lens if desired.
  • The podium arm can be easily transported and assembled at different locations in the deposit and curatorial areas.
  • The motion implementation uses linear control algorithms with a basic double-reference system structure, since very smooth, continuous movement is a basic requirement for live-camera visualization from arbitrary viewpoints.
  • The reference signals for each control-loop command are generated from a predefined set, which produces constant, seamless motion paths for each of the podium arm’s four components (a minimal sketch of such a reference profile follows this list).
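As an illustration of this last point, the following minimal sketch (in Java, the project’s implementation language) shows one standard way to generate seamless position setpoints: a trapezoidal velocity profile for a single axis. The class and parameter names are our own assumptions and are not taken from the project’s code.

    // Hypothetical sketch: a trapezoidal velocity profile producing smooth
    // position setpoints for one of the podium arm's four axes.
    public final class ReferenceProfile {
        private final double vMax;   // maximum angular speed (deg/s), assumed
        private final double aMax;   // maximum angular acceleration (deg/s^2), assumed

        public ReferenceProfile(double vMax, double aMax) {
            this.vMax = vMax;
            this.aMax = aMax;
        }

        /** Position setpoint at time t (s) for a move of 'distance' degrees from 'start'. */
        public double setpoint(double start, double distance, double t) {
            double sign = Math.signum(distance);
            double d = Math.abs(distance);
            double tAcc = vMax / aMax;              // time to reach cruise speed
            double dAcc = 0.5 * aMax * tAcc * tAcc; // distance covered while accelerating
            if (2 * dAcc > d) {                     // short move: triangular profile
                tAcc = Math.sqrt(d / aMax);
                dAcc = 0.5 * aMax * tAcc * tAcc;
            }
            double tCruise = (d - 2 * dAcc) / vMax; // zero for triangular profiles
            double tTotal = 2 * tAcc + tCruise;
            double p;
            if (t <= 0)                      p = 0;
            else if (t < tAcc)               p = 0.5 * aMax * t * t;          // accelerate
            else if (t < tAcc + tCruise)     p = dAcc + vMax * (t - tAcc);    // cruise
            else if (t < tTotal) { double u = tTotal - t;
                                             p = d - 0.5 * aMax * u * u; }    // decelerate
            else                             p = d;                           // settled
            return start + sign * p;
        }
    }

Sampling this setpoint at the control loop’s fixed period yields the constant, shadow-free camera motion the design calls for, with no velocity discontinuities at the segment boundaries.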

The 3D GUI. This interface displays the simulated arm’s field of operation and the virtual camera viewpoint, which are synchronized to aid the user with joystick navigation and control. The interface was designed to allow arbitrary viewpoint navigation. The virtual model includes four navigation keys, set on the keyboard or on the joystick used to navigate the real model; four motion angles and their values; and an automatic viewpoint indicator (X, Y, Z).
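To make the synchronization concrete, here is a hedged sketch, assuming the Java 3D (javax.media.j3d) scene graph the project used, of how the virtual camera viewpoint might be updated each time angle feedback arrives from the arm. All names are hypothetical, and the view’s TransformGroup must have its ALLOW_TRANSFORM_WRITE capability set.

    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3d;

    // Hypothetical sketch: keep the virtual viewpoint in step with the real arm.
    public final class ViewpointSync {
        private final TransformGroup viewTg; // the view platform's transform group
        private final Transform3D pan  = new Transform3D();
        private final Transform3D tilt = new Transform3D();
        private final Transform3D back = new Transform3D();

        public ViewpointSync(TransformGroup viewTg) { this.viewTg = viewTg; }

        /** Called whenever a feedback string with the arm's angles is decoded. */
        public void update(double panDeg, double tiltDeg, double radius) {
            pan.rotY(Math.toRadians(panDeg));     // orbit around the object
            tilt.rotX(Math.toRadians(-tiltDeg));  // raise the camera on the hemisphere
            back.set(new Vector3d(0, 0, radius)); // back the camera away from the object
            pan.mul(tilt);                        // compose pan * tilt * translation
            pan.mul(back);
            viewTg.setTransform(pan);             // the scene view follows the real camera
        }
    }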

Implementation of collision-avoidance software

An objective of the project’s experiments was to avoid collisions, crashes, or impacts between the viewed objects (which can be fragile pieces such as vases, fossils, etc.) and the podium arm that carries the handycam.

Detecting collisions between different objects can be accomplished through the use of distance maps. The vertices of one model are compared with the distance map of the other object, and vice versa, so that together with the distance function a distance value is obtained. A collision is detected if D(p) < 0 for some point p, and the proximity of the object is detected if D(p) < ε. The positions P1 and P2 are determined geometrically from the angles α and β and the link lengths L1 and L2. Since it is thus relatively easy to compute P1 and P2, the object is surrounded with a virtual sphere, and a collision between the arm (and camera) and the sphere is avoided; noting the small variation in the dimensions of the objects viewed in the application, it was decided that three standard sphere sizes were needed: small, medium, and large. An almost identical method is used to avoid a collision of the arm (and camera) with the surface surrounding the scene, which can be represented by a cone: if P1 and P2 are within the cone, there is no danger of contact or of a collision trajectory.
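The following sketch illustrates our reading of this test in Java: a guard object that reports D(p) for the virtual sphere and checks that an arm point stays outside the sphere and inside the safe cone. The cone is simplified to one with a vertical axis through the sphere’s center, and all names and thresholds are illustrative assumptions.

    import javax.vecmath.Point3d;
    import javax.vecmath.Vector3d;

    // Hypothetical sketch of the sphere/cone collision test described above.
    public final class CollisionGuard {
        private final Point3d center;       // center of the virtual sphere around the object
        private final double radius;        // small, medium, or large standard size
        private final double coneHalfAngle; // half-angle of the safe cone (radians)

        public CollisionGuard(Point3d center, double radius, double coneHalfAngle) {
            this.center = center;
            this.radius = radius;
            this.coneHalfAngle = coneHalfAngle;
        }

        /** Signed distance D(p): negative inside the sphere, below eps means proximity. */
        public double distance(Point3d p) {
            return center.distance(p) - radius;
        }

        /** True if an arm point (P1 or P2) is outside the sphere and inside the cone. */
        public boolean isSafe(Point3d p, double eps) {
            if (distance(p) < eps) return false;  // collision or proximity with the object
            Vector3d v = new Vector3d(p.x - center.x, p.y - center.y, p.z - center.z);
            // angle from the vertical cone axis; beyond it, the arm risks the scene surface
            double angleFromAxis = Math.atan2(Math.hypot(v.x, v.z), v.y);
            return angleFromAxis <= coneHalfAngle;
        }
    }

Before each motion step, the controller would evaluate isSafe for the predicted P1 and P2 and stop or reroute the arm when either check fails.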


Figure 2: (A)(B)(C). (A) Collision-implementation diagram. (B) Virtual studio implementation for testing software and hardware technologies in real time: interactive chroma keying for lighting and background extraction, and camera navigation in a collaborative multicasting session; dynamic-lighting ceiling diagram. (C) The LED lighting installation is segmented into three sets of 120 LEDs each, which turn on and off according to feedback on the real camera position, thus preventing shadows on the viewed object.

Real-time interactive lighting system

Data retrieval for lighting control in real time: given the chroma-keying conditions that virtual studio lighting requires, the lighting control is mixed, using both automatic and manual controls. Chroma-keying techniques were chosen in order to afford an abstract object-viewing experience. The lighting system must control the light output so as to minimize the shadows caused by the movement of the handycam and the podium arm.

The system automatically turns off LED bulbs in the ceiling and thus avoids shadows on the chroma-keying surface. Additionally, the virtual studio provides ideal color conditions for a chroma-key extraction of the viewed object (manually controlled), thanks to the RGB lights mounted on the robotic arm. Automatic control is implemented by a microcontroller, the Microchip PIC 16F877A, which has five ports (A, B, C, D, E) and a manual dimming control. Calibration was done manually, with a lux meter, to find the appropriate values (in lux) for each object. The initial tests were made with bulbs placed on the acrylic ceiling of the virtual studio.
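As a hedged illustration of the automatic control loop (the actual PIC firmware and serial protocol are not published here), the following Java sketch shows server-side logic that could select which LED segment to switch off as the camera’s pan angle changes; the one-byte-per-segment command format is an invented placeholder.

    // Hypothetical sketch: switch off the ceiling LED segment nearest the camera
    // so the arm and handycam cast no shadow on the chroma-key surface.
    public final class LightingController {
        private static final int SEGMENTS = 3;    // three sets of 120 LEDs each
        private final java.io.OutputStream toPic; // stream to the PIC 16F877A board
        private int offSegment = -1;

        public LightingController(java.io.OutputStream toPic) { this.toPic = toPic; }

        /** Called with the arm's pan-angle feedback (degrees). */
        public void onCameraAngle(double panDeg) throws java.io.IOException {
            int segment = (int) ((((panDeg % 360) + 360) % 360) / (360.0 / SEGMENTS));
            if (segment != offSegment) {
                for (int s = 0; s < SEGMENTS; s++) {
                    // invented command format: "L<segment><0|1>", 0 = off, 1 = on
                    String cmd = "L" + s + (s == segment ? "0" : "1") + "\n";
                    toPic.write(cmd.getBytes("US-ASCII"));
                }
                toPic.flush();
                offSegment = segment;
            }
        }
    }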

The multimedia hardware includes:

  • Sony HDR-HC5 handycam
  • SE 800 DV/SDI digital video stream mixer
  • Flux 3550 video encoding and streaming card
  • IBM Blade Server HS22

3.2 Software

Bidirectional ASM multicasting software (JOrta)

The JOrta software runs on each multicasting node. It is a Java refinement of ORTA, originally programmed in C++ (Strowes, 2006), with two other important features: it creates a P2P network between nodes without the need for native IP multicast, and it implements, within JOrta itself, a protocol for packet prioritization in a real-time interactive environment, similar to the RTP-I functions of Free XOO (Dumoulin, 2008: 7), on which the work was also based.

The application consists of a central server or node, installed on an IBM Blade Server HS22 directly connected to the camera’s arm podium, which receives information from the microcontroller’s navigation system on the control board and thus also activates the dynamic lights for the chroma keying and the MR collision-software feedback. The video camera transmits data to each of the users’ nodes, which in turn interact with it. A chat module was also incorporated so that users stay in contact with one another. Although the central node acts as a server in many ways, the system is not a client-server model, since data is sent and received by each of the nodes through a multicast network; the central node (S) simply sends more information to the other nodes (R) than they send to it. The video signal from the camera is captured directly from the local port of the Flux 3550 capture card connected to the SE 800 video mixer, which provides the background and chroma-key functions. In the special case of 3D stereoscopic viewing, the video is not shown directly: it is first processed for extraction (with JMF), which removes the green background produced by the video mixer and leaves a transparent one. The source video is then converted to RGB and rendered as a 3D canvas surface embedded in a Java 3D environment. The Z composition includes converting the J3D scene to a VRML/X3D scene with a conversion plug-in. The resulting video is then embedded in the 3D scene, and the immersion in the composite 3D VRML/X3D scene is complete. In extracting the green background, video decoding was used; thus each frame is transmitted with a data buffer, and no processing needs to be done at the (R) nodes, where only the final Z composition is shown. JOrta sends the video in groups of frames in accordance with the clients’ multicast applications.
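The extraction step can be illustrated with a short, self-contained sketch that keys out near-green pixels of an ARGB frame by zeroing their alpha channel, in the spirit of the JMF processing described above; the per-channel distance metric and threshold are simplifications of what a production chroma keyer would use.

    // Hypothetical sketch: make green-screen pixels of one ARGB frame transparent
    // so the live video can sit on a 3D canvas inside the composite X3D scene.
    public final class ChromaKey {
        /** In-place keying: pixels near keyRgb become fully transparent. */
        public static void keyOut(int[] argb, int keyRgb, int tolerance) {
            int kr = (keyRgb >> 16) & 0xFF, kg = (keyRgb >> 8) & 0xFF, kb = keyRgb & 0xFF;
            for (int i = 0; i < argb.length; i++) {
                int p = argb[i];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                // Manhattan distance to the mixer's key color, an illustrative choice
                int dist = Math.abs(r - kr) + Math.abs(g - kg) + Math.abs(b - kb);
                if (dist < tolerance) {
                    argb[i] = p & 0x00FFFFFF; // zero the alpha channel: transparent
                }
            }
        }
    }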

Given the complexity of combining the applications of a virtual studio and bidirectional Web multimedia ASM multicasting into a single application, it was decided to use modules. This system, in which multiple users can view 3D scenes and operate the remote control card and the handycam, requires several types of server functions, connected in an ASM multicast network so that users may receive all distributed information.

  1. Joystick control module. A joystick connected by USB to a remote PC provides all the communication and control functions.
  2. Events created by the joystick at the remote PC produce a command string that is sent directly to the podium arm through the RS232 serial port; this is the input received by the server connected to the microcontroller card (node S). (A hypothetical string format is sketched after this list.)
  3. Serial-port module for sending data. As soon as the handycam is activated by the microcontroller card, a feedback string with the angles and position is sent back through the RS232 serial port to the application (node S).
  4. Reception and decoding of the feedback string at the serial port. The data are sent to the virtual scene and update the displayed position at the remote PC (node R).
  5. The GUI display includes two 3D scenes, the operating field of the robotic arm podium and the point of view of the camera, which are synchronized in real time with the received position.
  6. A further module (for 3D stereoscopic visualization) runs parallel to the data flow of the previous modules and interacts with the video source and its processing to extract the background. The resulting stream is always displayed in the user’s main scene, since it corresponds to a static 3D object (always located at the center of the moving 3D scene). Thus the final composition is formed.
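Since the real string format is not documented here, the following Java sketch shows a hypothetical encoding of the command and feedback strings exchanged in modules 2 through 4; both formats are invented for illustration.

    // Hypothetical sketch of the RS232 command/feedback strings.
    public final class ArmProtocol {
        /** Joystick event -> command string for the microcontroller card. */
        public static String encodeCommand(int axis, int direction, int speed) {
            // e.g., "M2+045": move axis 2 in the positive direction at speed 45
            return String.format("M%d%+d%03d\r\n", axis, direction, speed);
        }

        /** Feedback "A<a1>,<a2>,<a3>,<a4>" -> the four podium-arm angles. */
        public static double[] decodeFeedback(String line) {
            if (!line.startsWith("A")) throw new IllegalArgumentException(line);
            String[] parts = line.substring(1).trim().split(",");
            double[] angles = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                angles[i] = Double.parseDouble(parts[i]);
            }
            return angles; // passed to the GUI to update both synchronized 3D scenes
        }
    }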

JOrta provides a library, used by each application, that is the interface for sending and receiving data at each node. It also includes a separate application, a service or daemon, that creates the actual connection between nodes and forms the network between the multicasting nodes. Thus the applications are not directly connected to the network or to remote computers; they are connected locally, through the JOrta library, to the JOrta service, which creates the packets, prioritizes them, and puts them on the network. JOrta’s communication protocol is based on UDP multicast sockets. Since JOrta is a module external to the client applications, it supports any data type, and all nodes send and receive information in the multicasting environment.

These data types are:

  • Joystick data: sent from a user node to the central node
  • 3D scene data: sent from the central node to all display nodes
  • Data from chat: sent from one node to the other display nodes
  • Video data: sent from the central node to all display nodes
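To summarize the flow, the sketch below shows how a client application might tag one of these data types with a priority and hand the packet to the local JOrta daemon over UDP. The two-byte header, loopback port, and priority scheme are assumptions made for illustration, not JOrta’s actual wire format.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Hypothetical sketch of the library-to-daemon hand-off described above.
    public final class JOrtaClient {
        public enum DataType { JOYSTICK, SCENE_3D, CHAT, VIDEO }

        private final DatagramSocket socket;
        private final InetAddress daemon;
        private final int daemonPort;

        public JOrtaClient(int daemonPort) throws Exception {
            this.socket = new DatagramSocket();
            this.daemon = InetAddress.getLoopbackAddress(); // service on the same host
            this.daemonPort = daemonPort;
        }

        /** Prefix the payload with [type, priority] and pass it to the local daemon. */
        public void send(DataType type, byte priority, byte[] payload) throws Exception {
            byte[] buf = new byte[payload.length + 2];
            buf[0] = (byte) type.ordinal();
            buf[1] = priority; // e.g., joystick above video, video above chat
            System.arraycopy(payload, 0, buf, 2, payload.length);
            socket.send(new DatagramPacket(buf, buf.length, daemon, daemonPort));
        }
    }

The daemon would then queue packets by the priority byte before multicasting them, which is how interactive joystick traffic can stay responsive alongside heavy video frames.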


Figure 3: (A)(B)(C). (A) Multicast diagram. Multimedia video and hardware navigation use two nodes (an IBM Z Pro workstation and an IBM Blade HS22 server) because of the heavy information loads. The 3D scene is loaded locally by each PC participating in the multicast session. (B) The 3D scene in X3D is rendered using local archives and multicast video; the multimedia video and hardware navigation display are shared by three users in remote locations. (C) The application was tested by connecting the mechatronics lab at the Universidad Nacional de Colombia in Bogotá and the media lab of the Universidad Autónoma de Occidente in Cali over the high-speed academic network RENATA.

The communication platform has two applications. The first routes the information; that is, it sends and receives data through a multicast group using Java components. It also contains a traditional socket server to serve each application in turn: it retrieves that application’s specific data and places it in the multicast object, which then sends the data over the network to the other computers sharing the group and port. The second application manages the data that is sent through the routing application. It consists of a traditional client socket, which opens a port connecting the two applications and passes the information along for transmission. This application can send many types of information, including textual information in a chat-like interface as well as voice and video data.

4. Conclusions

The development of the IOT not only may help to bring about an important change in the relations between museums and the Web, in light of the explosive growth of mobile communication and its importance in disseminating the contents of museum collections, but may also cause a breakthrough in the approach to the IOT itself, which has generally been conceived as the handling of things in a massive way. Preserving an object’s “aura” requires a bidirectional IOT, since each piece in a collection is unique, as is the manner in which museums arrange and conserve their collections. In the next stage of our project, we hope to explore the field of embedded sensorial devices that may be applied to museum collections (of decorative arts), an important aspect of the current development and investigation of the IOT. This justifies supporting the development of these kinds of applications, as well as the exploration of new forms of enhanced perception (and contemplation).

Acknowledgments

The current study was undertaken at the mechatronics laboratories of the Universidad Nacional de Colombia and the Universidad Autónoma de Occidente, with funding from the Facultad de Artes of the Universidad Nacional de Colombia, CINTEL (Centro de Investigaciones en Telecomunicaciones), and the Ministerio de Educación Nacional de Colombia (MEN) from 2004 to 2012. An important part of the work was carried out by Alvaro López Pinilla, Olmedo Arcila Guzmán, Luis Miguel Méndez, Cesar Augusto Pantoja, Nicolas Balcero, Juan Pablo Ocampo, and Erick Ocando Sanchez. During 2014, the JOrta ASM multicasting software was tested by a joint team of the Universidad Nacional and the Renata Corporation that included Luis Fernando Caballero, Mauricio Oliviery, Andrés Sánz, José Luis Puerto, students from the Electronic and Computing Systems School at the Universidad Nacional de Colombia, and Carlos Alberto Ramirez, communications engineer at the Renata Corporation.

References

Benford, S. (1998). “Understanding and constructing shared spaces with mixed-reality boundaries.” ACM Transactions on Computer-Human Interaction (TOCHI) 5(3) (September), 185.

Darlagiannis, V., et al. (2010). Virtual Collaboration and Media Sharing using COSMOS. School of Information Technology and Engineering, University of Ottawa.

Delicato, F., et al. (2010). Middleware Solutions for the Internet of Things. Berlin: Springer Verlag.

Dressler, F. (2002). “QoS considerations on IP multicast services.” In Proceedings of International Conference on Advances in Infrastructure for Electronic Business, Education, Science, and Medicine on the Internet (SSGRR).

Dumoulin et al. (2008). “Introducing the Free XOO (X3D over Orta).” Canada: CRC.

Fernández, J. (2007). “Multicast Información Tecnológica.” CISCO, Enterasys Networks.

Floerkemeier, C., et al. (2008). “The Internet of Things.” First International Conference on the Internet of Things (IOT 2008), Zurich, March 2008. Berlin: Springer Verlag.

Fukuya, T., et al. (2003). “An Effective Interaction Tool for performance in the Virtual Studio.” Studio Systems.

Gubbi, J., et al. (2013). “Internet of Things (IoT): A vision, architectural elements, and future directions.” Department of Computing and Information Systems, University of Melbourne.

Johnson, S. (2004). “Beyond On-line Collections: Putting Objects to work,” Museums and the Web 2004 International Conference, Archives & Museums Informatics. Available www.archimuse.com.

Juniper Networks. (2014). “Understanding Junos OS Next-Generation Multicast VPNs.”

Kanade, T., et al. (1999). “Digitizing a varying 3D event as is and in real time.” In Mixed Reality: Merging Real and Virtual Worlds. Berlin: Springer.

Park, Jong-Il, & Seiki Inoue. (1998). “Real Image Based Virtual Studio.” First International Conference on Virtual Worlds, Springer.

Park, S. W., et al. (2000). “Real Time Camera Calibration for Virtual Studio.” Academic Press.

Sarac, K., & Kevin Almeroth. (2005). “Monitoring IP Multicast in the Internet: Recent Advances and Ongoing Challenges.” IEEE COMM Magazine.

Sardella, A. (2006). Video Distribution in a Hybrid Multicast Unicast World. Juniper Networks.

Schneider, J., et al. (2011). “A Testbed for Efficient Multicasting and Seamless Mobility Support.” Demonstration at the 11th Würzburg Workshop on IP: Joint ITG, ITC, and Euro-NF Workshop, “Visions of Future Generation Networks” (EuroView2011), Würzburg, Germany. August.

Strowes, S. (2005). Orta – an Overlay for Real Time Applications. University of Glasgow.

Uribe Mendoza, B., L. Mendez, A. Tovar, J. Charalambos, O. Arcila, & A. López. (2013). “Mixed Reality Boundaries in Museum Preservation Areas.” IJACDT 3(2): 63–74.

Joseph, Vinod, & Srinivas Mulugu. (2011). Deploying Next Generation Multicast-enabled Applications: Label Switched Multicast for MPLS VPNs, VPLS, and Wholesale Ethernet. Amsterdam, Boston, London: Morgan Kaufmann.

Williamson, B. (2000). Developing IP Multicast Networks: The Definitive Guide to Designing and Deploying Cisco IP Multicast Networks. Cisco Press.


Cite as:
Uribe Mendoza, B., J. J. Martinez, & H. Galindo. “Connecting collection objects from museum deposits and reserve areas to the Web.” MW2015: Museums and the Web 2015. Published January 14, 2015.
https://mw2015.museumsandtheweb.com/paper/connecting-collection-objects-from-museum-deposits-and-reserve-areas-to-the-web/