Mesched up: A First Set of Physical Components for the Creation of Interactive Exhibits

Embedding interactive technology into museum installations can provide many benefits for visitors as well as for curators. At the same time, it is a challenging task from both the design and the technical perspective. In this blog post the meSch team at the University of Stuttgart (USTUTT) presents a set of physical components it created that can be easily combined into interactive installations: a projector lamp, a circular plinth that senses objects in its vicinity, and an RFID reader / wrist-band combination. We describe a set of interesting use cases for interactive exhibitions that can be realized using these components. Further, we introduce the meschup platform, which allows an easy mesh-up of heterogeneous components, together with a programming approach that targets non-technical users and experts at the same time. Overall, this blog post should give an overview of and better insight into the work going on at USTUTT.


The use of technology in museums can create new, engaging and exciting forms of interaction with exhibits and entire exhibition environments. It can help connect physical exhibits with their related digital information available from closed and public sources (such as Europeana) and provide more active, involving and explorative ways to discover stories and information around exhibits, resulting in a better visitor experience.

However, creating such installations is a difficult task. Curators and their teams often outsource their idea for an interactive installation to external specialized companies due to a lack of expertise in computing, programming and electronics – designers and craftsmen, by contrast, are often already part of the team. Development lifecycles are long, and ideas often cannot be evaluated at an early stage with the intended target groups and within the desired context. This can result in expensive museum installations stuffed with technology but ultimately missing the intended user experience or educational effect.

Our vision is to provide curators and their teams with a toolbox of physical components that can be quickly and easily combined into an interactive environment directly on the museum premises. The environment can be re-arranged and modified, turned into a fixed installation, or disassembled and reused for other setups. A single user interface, similar to the "If This Then That" (IFTTT) web service, is provided to mesh up components and create the intended behavior of the interactive installation.


Figure 1: Setup with projector lamp positioned above sensor plinth with exhibit object on top

In this blog post we present a set of useful components for the creation of interactive installations. They have been rapidly built from widely known DIY low-level components such as Arduino boards, .NET Gadgeteer components and Raspberry Pis. In addition, enclosures have been laser-cut, 3D-printed or bought off the shelf, allowing easy reproduction. The components are a projector lamp intended to hang from the ceiling, a plinth on which exhibits can be placed and which senses the position and distance of people around it, and a wrist-band in combination with a palm rest that can read the RFID tag embedded in the wrist-band.


Figure 2: Adaptive caption displayed based on direction and distance of a person in vicinity

Scenario and Use Cases

The following scenario illustrates a possible combination of these components to create a specific museum user-interaction concept:

Juan is visiting Amsterdam for a student exchange semester. He is on his way to one of the large local history museums, which was recommended by a friend. In the queue he sees that the museum offers three different routes, each focusing on a different historical topic. While paying for entry he is asked by the ticket saleswoman which route he would like to take and which language he would prefer for written and spoken information. Juan is told that an interactive system can help him follow a certain tour. He chooses the Egyptian tour and his native language, Spanish. Before leaving the counter he is handed a wrist-band that he is instructed to put on before entering. In the entry area a desk with a rest for the palm of a hand attracts his attention, so he places his hand with the wrist-band on top of it, and a graphical route of the Egyptian tour with a Spanish description, as well as the direction towards the first exhibit, is projected on the desk beside it. When entering the first Egyptian exhibition room he finds another palm rest, puts his hand on top and suddenly sees one of the exhibition objects in the dimmed room being highlighted by the lamp above. He moves towards this artifact, which looks like a small figure of a pharaoh standing on top of a plinth. There are no labels around, but as he approaches, a large label titling it "Ramses IV." appears on the surface in front of him. As he comes even closer, a more detailed Spanish description appears in a smaller font. When walking around the effigy to see it from all sides, further information is projected on the surface in front of Juan, explaining details of the pieces the figure is wearing. On one side a projected arrow points sideways, saying "next". Juan takes a look in this direction and notices that another exhibit object near him is now highlighted. He now understands how the interactive system is routing him and how he can use the palm rests.
He continues his tour through the rest of the museum, returning the wrist-band at the exit.


Figure 3: More detailed text is displayed when a person is in reading distance

This scenario demonstrates multiple use cases created by combining our three components: tour routing, language adaptation and in-context information display. Firstly, the combination of an RFID-tag-equipped wrist-band and multiple palm rests deployed in the museum allows the visitor's profile (here: adult, male, Spanish language, Egyptian tour), programmed at the entrance, to be applied to the visitor's local context (e.g. content language, next stop). Projector lamps mounted above a palm rest can instantly display directions or contextualized information (e.g. what else is around). Palm rests in combination with projector lamps positioned at an exhibition room entrance can initiate an individual tour by sequentially highlighting preselected exhibition objects. Secondly, the user's profile can be used to adapt the content projected around the artifacts, e.g. the language of labels, the extent of the information, or its presentation (e.g. different content for children). Thirdly, by combining the projector lamp with the circular distance-sensor plinth, not only can the objects themselves be illuminated when persons are detected, but the information presentation can also be adapted to the distance and direction of persons around an exhibit. Static labels in current museums are usually written in only one or two languages and are often placed on only one side of the object, resulting in a crowd of people peering over others' shoulders trying to read the tiny text. Here, labels can adapt to distance (e.g. large captions when a few meters away, smaller font and more text when in reading distance) and direction: information can follow the visitor around, or multiple visitors can read the information at the same time from multiple directions. In another mode, the information displayed in each direction can directly relate to things visible on that side of the artifact. Even visual lines can be projected, connecting a piece of information with the exact position on or in front of the object that it relates to (Figures 1-4).
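As a rough sketch, the distance-adaptive label behavior described above boils down to a small decision function mapping the plinth's sensor reading to what the projector lamp should render. The thresholds and the result format below are illustrative assumptions, not values from the actual installation:

```javascript
// Sketch of the adaptive-label logic: given the distance (in meters) and
// bearing of the nearest visitor reported by the plinth's ring of distance
// sensors, decide what the projector lamp renders on that side of the exhibit.
// All thresholds and field names are illustrative assumptions.
function labelFor(distanceM, bearingDeg) {
  if (distanceM > 4.0) {
    // Nobody close enough: keep the room dimmed, show nothing.
    return { spotlight: false, content: "none" };
  }
  if (distanceM > 1.5) {
    // A few meters away: highlight the object and project a large caption.
    return { spotlight: true, content: "caption", fontSize: "large", facing: bearingDeg };
  }
  // Reading distance: smaller font, detailed description on the visitor's side.
  return { spotlight: true, content: "details", fontSize: "small", facing: bearingDeg };
}
```

Because the projection direction follows the reported bearing, several visitors on different sides of the plinth can each be served their own label at the same time.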


Figure 4: Further related information is shown while walking around the artifact

Further, sensor data can be stored persistently for analysis by the curators. For example, the distance-sensor data of multiple plinths could be used to visualize a heat map or the flow of visitors through exhibition rooms, allowing curators to draw conclusions such as which exhibits are less attractive to visitors or whether the routing strategies provided by the interactive system actually work.
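To illustrate, logged plinth readings could be aggregated into a simple per-exhibit dwell count, the raw material for such a heat map. The log entry format here is a made-up assumption for the sketch; the real data layout may differ:

```javascript
// Sketch: aggregate logged plinth readings into per-exhibit dwell counts.
// Each log entry is assumed to look like { plinthId, distanceM }; only
// readings closer than maxDistanceM count as a visitor "dwelling" there.
function dwellCounts(log, maxDistanceM) {
  const counts = {};
  for (const entry of log) {
    if (entry.distanceM <= maxDistanceM) {
      counts[entry.plinthId] = (counts[entry.plinthId] || 0) + 1;
    }
  }
  return counts; // e.g. { pharaoh: 42, vase: 3 } suggests the vase draws few visitors
}
```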

Three use cases have been shown that are based on our three components. Countless other use cases could be created by curators and user experience designers; even more with a larger set of components.

Centralized Control with the meschup Platform

The three proposed physical components are built in the context of meschup – a platform developed in parallel at USTUTT that significantly reduces the complexity of programming and interconnecting components across different platform types and communication technologies. Its core is a central unit that we provide in the form of a Raspberry Pi extended with USB dongles for Wi-Fi, Bluetooth (LE) and XBee. It is intended as an always-on device that acts as the central component where all data and events from the various communication interfaces come together and where the system logic is stored. On start-up it firstly provides Wi-Fi, Bluetooth and XBee access-point functionality, and secondly offers a web-based user interface for configuring external components as well as for programming the overall system behavior.

The platform strictly follows the master-slave model, where each associated external component is seen as a simple, remotely configurable source of sensor information and/or a sink for actuator commands. We have put effort into creating client firmware for a set of popular embedded platforms that implements exactly this functionality: firmware for the Arduino platform with XBee as the main wireless interface, firmware for the .NET Gadgeteer platform including Wi-Fi, Bluetooth and XBee communication interfaces, an SD card image for Raspberry Pi clients using Wi-Fi and Bluetooth for connectivity, and finally an app for Android devices that turns smartphones and tablets into generic sensor/actuator clients. All of these platforms can be used as physical material for the construction of interactive meshed-up systems without the need to program a single line of platform-specific code.
All programming happens in a web-based user interface that targets the non-technical user with a wizard for creating simple "if-this-then-that" rules, as well as an expert interface that uses JavaScript to realize complex behavior. All three components described in this blog post were built using this physical material and meshed up using the central programming interface.
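For illustration, an expert-mode rule on the central unit could look like the following sketch, wiring a palm-rest RFID read to the projector lamp above it. The event fields, the profile table and the `actuate` helper are hypothetical names chosen for this sketch – the actual meschup scripting API is not described in this post:

```javascript
// Hypothetical expert-mode rule: when a palm rest reads a wrist-band tag,
// look up the visitor's profile and ask the projector lamp above that rest
// to project the tour route in the visitor's language. All API names here
// (event fields, actuate) are illustrative assumptions.
const profiles = {
  "tag-1234": { language: "es", tour: "egyptian" }, // programmed at the ticket desk
};

function onRfidRead(event, actuate) {
  const profile = profiles[event.tagId];
  if (!profile) return; // unknown tag: do nothing
  actuate(event.readerId + "-lamp", {
    command: "projectRoute",
    tour: profile.tour,
    language: profile.language,
  });
}
```

The same if-this-then-that shape – a sensor event triggering an actuator command – is what the non-expert wizard produces, just without hand-written JavaScript.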

Conclusion and Next Steps

In this blog post we propose three physical components that can be meshed up to build a variety of interactive installations for museum exhibitions. Curators and user-interaction designers can rapidly prototype and evaluate new ideas before turning them into fixed installations, possibly reusing the same components. Built from popular DIY hardware platforms, laser-cut and 3D-printed enclosures and off-the-shelf materials, all three components can be easily reproduced. The whole software and inter-communication part is covered by the centralized meschup platform, which comes as an inexpensive Raspberry Pi providing wireless connectivity (Wi-Fi / Bluetooth / XBee access points) and a single web-based configuration and programming user interface. This allows rapid creation and exploration of interactive systems by persons without prior expert knowledge in (embedded) programming or electronics.


Figure 5. RFID reader based on Gadgeteer components in a laser cut enclosure

By combining just these three components, useful functionality such as user personalization, in-context adaptation and indoor navigation can already be realized. As technology progresses, cheap projectors that can essentially replace light bulbs are not far away, making everywhere-displays feasible and static labels history. Personalization, realized in our case through explicit user interaction via RFID, as well as proximity-sensitive installations, may soon be replaced or extended by low-cost wireless Bluetooth Low Energy technology. We are planning to integrate our first BLE devices very soon.


Figure 6. Wrist-band with RFID tag

As next steps we will evaluate the proposed components and the centralized platform with three stakeholder groups: our Work Package partners within the meSch project (first step), museum curators (second step), and museum visitors (third step). In the first two steps we will provide museum partners and their teams of creatives with starter packages consisting of most of the components described in this blog post, as well as an additional toolset containing parts that can be assembled into new sensor/actuator devices. The creation of new components and their mesh-up into interactive systems will be introduced in a hands-on co-design workshop in the near future. The resulting interactive installations can then be deployed in the corresponding museums and evaluated with real museum visitors. These insights will then be fed back into the project. Additionally, we are planning to release the whole platform as Open Source / Open Hardware and to provide schematics for easy recreation (e.g. laser-cutting templates and 3D models for enclosures).

More information

This blog post is based on a work in progress paper submitted to the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI’14).
