
Tangibilizing Space

Diego Maranan
School of Interactive Arts and Technology
Simon Fraser University
Surrey, BC, Canada
diegom@sfu.ca

Alissa Antle
School of Interactive Arts and Technology
Simon Fraser University
Surrey, BC, Canada
aantle@sfu.ca

Abstract
In this paper, we argue for the value of endowing empty space with tangible qualities based on the movement of a human body in live dance-theatre performance. To this end, we describe a low-cost, portable proof-of-concept prototype for detecting gestures and interpreting them as sculpting commands. We evaluate the usefulness of this prototype through the creation and realization of a short performance piece, and outline future directions for the development of the system.

Keywords
Gesture recognition, performing arts, dance, embodiment

ACM Classification Keywords


J.5. Arts and Humanities: Performing arts (e.g., music, dance); I.2.10 Vision and Scene Understanding: Motion

Introduction
To a dancer, space need not be a vacuum through which the body simply moves. Empty space can be usefully thought of as a medium that has physical properties: space can be imagined as being thick as honey, grainy as sand, or unrelenting as steel. Choreographer Dana Gingras has likened space to an elastic medium that is deformed according to the number, relative spacing, and speed of the bodies that occupy it [Gingras, personal communication], much in the same way that physics describes space-time as a field that is deformed by massive objects. One of choreographer William Forsythe's improvisational techniques, volume avoidance, calls for dancers to temporarily imbue the space with the physical properties of a well-defined solid (e.g., a cylinder) around which dancers must improvise while strictly respecting the physical properties of the solid [4]. Gingras's and Forsythe's conceptualizations of space parallel the observation of Williams et al. that space and social action are tightly intertwined [12], but also invert the notion that space is merely a substrate through which human movement is enacted. By imaginatively endowing space with specific physical qualities, the dancer's body reacts accordingly to the tangibility of space. In other words, the imaginative power and refined bodily skill of trained dancers allow them to regard space itself as tangible and to expose this tangibility to the viewing audience through their movement. Other areas of practice that endow space with tangibility include somatic and dance conditioning techniques, such as the Franklin Method and the Alexander Technique [5], which ask the dancer to generate visual imagery in, around, and on anatomical features such as muscles and bones that subsequently affects their movement quality. The authors call the strongly-coupled acts of endowing space with tangibility and of reacting to this tangibility in a dancerly way, with full conviction, "spatial tangibilization."

However, it takes a high level of skill to perform spatial tangibilization to a specific level of detail. Refined improvisational technique, kinesthetic sensitivity, and motor ability are required to spatially tangibilize the difference between, say, a clay cylinder that is 180 cm high and a metallic cylinder that is 170 cm high. In instances where the choreographer deems it desirable that the audience perceive the spatial tangibilization in a high degree of detail, we propose that motion sensing coupled with novel data display strategies could contribute towards the goal of tangibilizing space.

In this paper, we describe a proof-of-concept prototype system for tangibilizing space, called the Tangibilizer, that interprets movement as a reaction to tangibilized space and maps this interpretation to a set of visual parameters in real time. This system aims to augment subtle aspects of the performance by visually representing the relationship of the dancer to the space around them. We used it to construct a short live dance-theatre performance piece based on hand gestures. We describe the architecture of the system and explain the design decisions we made. We then describe the process of using the Tangibilizer to craft a performance and draw on our reflections on the creative process to suggest areas for further technical and artistic development.

We present our exploration as a form of artistic practice and investigation that is aligned with Fallman's notion of research-oriented design (as opposed to design-oriented research) [3]. We note at this point that although it is customary for authors to adopt a stance of objectivity when presenting research, we feel it is important to state that the primary investigator of this research is a practicing contemporary dance-theatre and new media artist based in the developing world, and some of the design decisions made during the process are informed by his subjective experiences of this context. We aim to create an artifact, an instrument, that can be put to use in artistic practice in resource-restricted contexts.

Copyright is held by the author/owner(s). CHI 2011, May 7-12, 2011, Vancouver, BC, Canada. ACM 978-1-4503-0268-5/11/05.

Our artistic practice is therefore shaped by our interest in do-it-yourself (DIY) cultures and an open source ethos that welcomes the repurposing of existing technological artifacts to suit the needs of small communities, such as small independent dance and theatre groups in the developing world who wish to incorporate new media in their artistic practice. By reporting on the development to date of both the instrument and the use of that instrument in developing live performance work for dance-theatre, we hope to contribute to the understanding of how digital technologies can augment and transform creation and production across artistic disciplines where the body's interaction with space is important.

Background
There are three separate problems that need to be addressed in the goal of creating a computational application for tangibilizing space: motion tracking, specifying the tangible properties of space, and representing the tangibilization of space.

Motion tracking
Motion tracking is a well-established research problem, and numerous approaches exist to address it. These approaches include different sensor types (infrared, electromagnetic, optical, acoustic, inertial, acceleration) and marker types (passive, active, or marker-less), among other factors. Since one of the goals of the project was to trial the use of substantially low-cost equipment that could easily be purchased by individuals or small groups of artists, we chose to use the infrared (IR) camera built into the Nintendo Wii Remote, which can interface with a computer via Bluetooth and can track up to four IR points. Projects that repurpose Nintendo Wii components are well-documented by a large community that shares information through websites such as www.wiihacks.com, making the Wii Remote a reasonable choice at this stage of the development process.

Specifying the tangible properties of the space
There are at least two ways in which the tangible properties of space can be specified. The first way involves the dancer explicitly stating their tangibilization. Two of Forsythe's multimedia projects, Improvisation Technologies [4] and Synchronous Objects [7], demonstrate this approach. Improvisation Technologies is a collection of videos where images of lines, planes, and solid volumes are overlaid on previously-captured footage of dancers to expose to the audience the dancers' process of spatial tangibilization. Synchronous Objects, on the other hand, overlays animated representations of movement impulses traveling across space from one dancer to another (like a light particle moving through a medium) to enrich the viewer's understanding of how dancers use space to communicate with each other across distances. However, these overlays are added in post-production and, to the best of our knowledge, have not been successfully applied to live dance performance. Another approach is to infer the tangible properties of space from expressive properties of the dancer's movement. We find this second, inferential approach to be a more interesting and more challenging problem that offers additional opportunities for artistic exploration. For the Tangibilizer, we thus chose to map qualitative aspects of the movement to properties of the space. We infer these qualitative aspects both from the motion tracking data and from data provided by an experimental system for detecting Laban Effort qualities.

Laban Effort analysis employs four factors (Space, Weight, Time, and Flow) to experience and describe properties of motion [6]. These belong to a larger set of analytical tools, collectively called Laban Movement Analysis, that can be applied towards a rigorous and systematic framework for understanding and categorizing movement [11,1]. Several prototype systems for computationally recognizing Laban Effort in real time from a moving body have been reported in the literature [10,14,15]. However, all of them require tracking between five and seven body parts, whereas for the Tangibilizer we were interested in detecting the movement quality of a single arm. To this end, we used a Laban Effort-recognition system called EffortDetect. EffortDetect was developed by the Institute for Advanced Computing Applications and Technologies at the University of Illinois and the University of Illinois Dance Department, with the expertise of movement analyst Sarah Hook from the Dance Department, and in collaboration with Dr. Thecla Schiphorst, a faculty member of the School of Interactive Arts and Technology at Simon Fraser University [9]. EffortDetect is composed of a wearable accelerometer-based sensor system attached to the right wrist. This sensor sends data to a computer running machine-learning algorithms that classify the acceleration data. Figure 1 shows EffortDetect's wearable sensor unit. This wearable acceleration unit was incorporated into the Tangibilizer prototype in order to provide the Tangibilizer with information on movement quality. While EffortDetect is still in an early development phase, pilot studies have shown that the current version can recognize Effort qualities about 65% of the time. (Though this recognition rate seems low, what is notable is that the recognition uses data from a single body part, the right wrist.) In this prototype of the Tangibilizer, we merely used it to detect when the user was performing a dabbing motion. We describe the use of dab recognition in the System Design section.
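EffortDetect classifies Effort qualities with machine-learning models trained on wrist accelerometer data; the details of those models are outside the scope of this paper. Purely as an illustration of the kind of signal a dab produces, the following Python sketch flags a short, sharp spike in acceleration magnitude. It is not part of EffortDetect, and the function name, window size, and threshold are hypothetical:

```python
import math

def detect_dab(samples, spike_threshold=2.5, window=5):
    """Return True if a short, sharp acceleration spike (a crude stand-in
    for a Laban 'dab': sudden, direct, light) occurs in the sample stream.
    `samples` is a list of (ax, ay, az) tuples in g units."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i in range(len(mags) - window):
        w = mags[i:i + window]
        # A spike is one sample well above the local baseline (window median).
        baseline = sorted(w)[len(w) // 2]
        if max(w) - baseline > spike_threshold:
            return True
    return False
```

A real recognizer would of course need to distinguish dabs from other sudden gestures (punches, flicks), which is precisely why EffortDetect uses trained classifiers rather than a single threshold.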

Figure 1. The Wearable Sensor Unit from EffortDetect. The EffortDetect system was incorporated in the design of the Tangibilizer.

Representing the tangibilization of the space
Once the tangible properties of space have been defined, a strategy for representing and communicating these properties to a viewing audience must be decided on. Our initial impulse was to avoid traditional 2D displays and to instead use a volumetric display [8], or to map the data to a physical, kinetic sculpture or a robotic output such as those described by Djajadiningrat [2]. While integrating volumetric displays in live performance is an interesting proposition, it has rarely (if ever) been done, to the best of our knowledge. Likely reasons include the prohibitive cost of most commercially-available volumetric displays and the fact that volumetric displays often need to be enclosed within a transparent case, rendering the display physically inaccessible to performers [8]. Augmented reality systems, such as head-mounted displays, represent another possible solution, although they add a mediating layer that potentially alters the viewing experience of live performance. Due to constraints on time and budget, we proceeded with displaying the tangibilization data on a traditional 2D display. We discuss an implication of this decision in the Discussion section.

Figure 2. The Tangibilizer glove.

System design
We used an agile development approach to designing and building a prototype that would sense shaping gestures made by the right hand and map each shaping gesture to a tangibilized representation of that gesture. Ideally, the performer would not have to wear cumbersome additional equipment, and the Wii Remote would be able to detect IR light reflected from reflective tape attached to the performer's fingertips. However, after experimenting with passive IR reflection with variable success, we developed a glove (shown in Figure 2) with IR LED emitters embedded in the tips of the middle finger and the thumb. This glove transmitted IR data to the Wii Remote more reliably. The LEDs were powered by a 3.3 V battery via a Funnel I/O board, which is part of the EffortDetect subsystem that we integrated into the Tangibilizer.

Infrared receiver
No hardware modification was needed to use the Wii Remote for tracking motion data. However, because of inaccuracies in tracking four unique IR points, we found that we could most reliably track a maximum of two unique points on the glove. Further experimentation with different types of LEDs may address this problem, but this was not explored in this iteration of the design.

Vision tracking for the z-coordinate
When used with DarwiinRemote, the Wii Remote IR camera tracked IR points very efficiently and with little noticeable delay. However, it could only track points in two dimensions, which we assigned to the x and y axes. To track the z axis, an additional sensing system needed to be implemented. A second Wii Remote positioned orthogonally to the first would have been a good candidate; however, in order to test other low-cost, easily-available motion tracking systems, we decided to experiment with computer vision-based tracking using a webcam. The webcam was positioned orthogonally to the Wii Remote and tracked a single blue LED using the JMyron library.

Software
We used version 0.2.1 of the software package DarwiinRemote, which implements Open Sound Control to communicate with the Wii Remote. The application that receives the positional data sent by DarwiinRemote was developed in Processing and is the heart of the Tangibilizer. We outline the hardware and software subsystems of the Tangibilizer in Figure 3.
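The fusion of the two sensing channels can be pictured as follows. This Python sketch is illustrative only; the actual application was written in Processing, and the function name, smoothing scheme, and parameters are our own invention. It combines the IR camera's (x, y) reading with the webcam's depth estimate, exponentially smoothing against the previous frame to damp sensor jitter:

```python
def fuse_position(ir_point, webcam_z, prev=None, alpha=0.5):
    """Combine an (x, y) point from the Wii Remote's IR camera with a
    z estimate from the orthogonally-placed webcam. If a previous fused
    position is available, blend toward the new reading by `alpha`."""
    x, y = ir_point
    raw = (x, y, webcam_z)
    if prev is None:
        return raw  # first frame: nothing to smooth against
    return tuple(alpha * r + (1 - alpha) * p for r, p in zip(raw, prev))
```

A smaller `alpha` gives steadier but laggier tracking; tuning this trade-off matters in performance contexts, where visible latency between gesture and rendered response breaks the illusion of tangibilized space.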

Figure 3. Components of the Tangibilizer.

Interacting with tangibilized space
During the development of the Tangibilizer, we continuously kept in mind the interactive and aesthetic outcomes we wished to achieve in staging Moving On, a 2-minute performance piece that we crafted as a test bed for illustrating the functionality of the system. Conceptually, Moving On explores shared physical spaces and maps them to shared emotional states: In Moving On, the performer sculpts two figures from empty space. These figures are visually presented as abstract forms, perhaps as spheres or moving clouds of points, but somehow it is clear that they represent individual human beings. The sculpting movement of the performer determines the qualities of the visual representations. The performer then sculpts the existing space to show that it is a nexus of emotional and experiential possibilities which our figures inhabit and move through as they age. Occasionally, the figures inhabit the same space. Though it lasts only briefly, this is a moment/place of emotional convergence, of sharing and empathy; in this moment, the entire space is transformed.

Based on this concept, we designed the interactions that we implemented for the Tangibilizer. We describe these interactions in the following sections.

SHAPING GESTURE
The system recognized a shaping gesture made in space by a hand equipped with the glove. The shaping gesture began when the fingers came within a certain distance of each other, creating a pinching effect. At this point, the system began to track the distance between the fingers. In the current implementation of the system, the distance between the fingers defined the diameter of a 3D sphere sprite whose center lies at the midpoint between the two fingers. Future implementations aim to track multiple fingers and thus allow the user to sculpt more complex shapes in space.

DABBING TO DETACH
EffortDetect was able to detect whenever the user performed a dabbing motion with their wrist, which we assigned as the signal for ending the shaping motion and detaching the sprite from the fingers.

SPRITE MOVEMENT
The performer could move the sphere sprite in space by positioning their fingers on or inside the sprite and moving up to a certain threshold speed. Beyond this speed, the movement was interpreted as a brushing off that detached the sphere from the fingers and sent it flying off at the same speed at which it was brushed off. Larger sphere sprites required a quicker, more forceful brushing off in order to be detached from the performer's fingers.

TANGIBILIZATION FROM THE SHAPING
We mapped the speed at which the performer shaped the sphere sprite to a visible property of the sprite: the quicker the sphere was shaped, the redder it was rendered. In future versions of the system, the speed of the shaping could be used to determine some tangible property of the space, such as density, viscosity, or surface texture. Figure 4 demonstrates x, y, and z tracking of a sphere using a test setting in the Tangibilizer. Notice the two smaller spheres within the sphere; these indicate the IR LEDs. Figure 5 shows two frames from Moving On, which operates with z-detection disabled in order to optimize speed.
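The interaction rules above can be condensed into a small state machine. The following Python sketch is an illustrative reconstruction, not the actual Processing implementation; the class name, thresholds, and scaling factors are hypothetical:

```python
class SpriteShaper:
    """Sketch of the interaction rules: a pinch (fingertips closer than
    `pinch_threshold`) starts a shaping gesture, the fingertip distance
    sets the sphere diameter, and a hand speed above a size-scaled
    threshold 'brushes off' (detaches) the sprite."""

    def __init__(self, pinch_threshold=30.0, base_brush_speed=200.0):
        self.pinch_threshold = pinch_threshold
        self.base_brush_speed = base_brush_speed
        self.shaping = False
        self.diameter = 0.0

    def update(self, finger_distance, hand_speed):
        if not self.shaping and finger_distance < self.pinch_threshold:
            self.shaping = True            # pinch begins a shaping gesture
        if self.shaping:
            self.diameter = finger_distance
            # Larger spheres need a quicker, more forceful brush-off.
            if hand_speed > self.base_brush_speed * (1 + self.diameter / 100.0):
                self.shaping = False       # sprite detaches and flies off
                return "detached"
            return "shaping"
        return "idle"


def shaping_speed_to_redness(speed, max_speed=400.0):
    """Map shaping speed to a 0..1 red channel: faster shaping, redder sprite."""
    return min(1.0, max(0.0, speed / max_speed))
```

In the real system these updates would run once per frame, fed by the fused finger positions; separating the gesture logic from the rendering in this way is also what would make it straightforward to swap the 2D display for a volumetric one later.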

Figure 4. Screenshot of the Tangibilizer in test mode.

Figure 5. Two frames from Moving On.

Discussion
System performance
We found that the Wii Remote detected the IR LEDs with remarkable responsiveness and accuracy, and we were therefore able to track x and y coordinates for two fingertips with substantial success. However, when the two IR LEDs came within about 5 degrees of arc of each other relative to the Wii Remote, spatial resolution was lost and DarwiinRemote reported only one LED. We also found that while computer vision tracking produced reasonably accurate depth information, it slowed the system down considerably. Furthermore, we had to ensure that the performance venue contained no objects visible to the camera that could interfere with the motion tracking.

We found that the current version of EffortDetect reliably detected certain gestures. For example, as mentioned, dabbing was used to trigger the detachment of the sphere sprite from the hand. However, the detection did not work as well when coupled with the sphere-shaping gesture. Nevertheless, we expect further refinements of EffortDetect to produce results that are more easily adaptable to the goals of the Tangibilizer. Furthermore, more information could be extracted from the IR sensors and used to infer motion qualities.

Effects of tangibilizing on the creative process
We found that the physical act of shaping the sphere sprites using different movement qualities (for example, using a lot of muscular tension, or applying a light and easy effort) evoked different images in our minds about the personalities of the sphere sprites. The amount of muscular tension (which we expect will be reflected in micro-fluctuations in positional data) could be used to alter a "spikiness" parameter of the sphere sprites, for example. We also felt that seeing the results of our movements in tangibilized form connected us emotionally to our movement, as if they somehow felt more precious. In general, we found that there was a tight relationship between developing the Tangibilizer and developing the narrative of the performance piece. We feel that by working with an artistic concept as the framework for developing the system, we were inspired to explore other types of mappings between gestures and tangibilized space.
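As a thought experiment, such a tension-to-spikiness mapping might look like the following Python sketch. This is entirely speculative: the Tangibilizer does not implement it, and the function, window size, and gain are hypothetical:

```python
import statistics

def spikiness(positions, smooth_window=5, gain=10.0):
    """Speculative sketch: estimate muscular tension as the spread of
    micro-fluctuations (residuals after a moving-average smooth) in a
    1D positional trace, mapped to a 0..1 'spikiness' parameter."""
    if len(positions) < smooth_window:
        return 0.0
    residuals = []
    for i in range(len(positions) - smooth_window + 1):
        w = positions[i:i + smooth_window]
        # residual = center sample minus the window's moving average
        residuals.append(w[smooth_window // 2] - sum(w) / smooth_window)
    spread = statistics.pstdev(residuals)
    return min(1.0, spread * gain)
```

A smooth, easeful gesture (small residuals) would yield a round sphere, while a tense, trembling one would render it spiky.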

Further work
The Wii Remote and its repurposing represent a development in digital technology that has made motion tracking available to a larger audience. Recent developments, notably the release of the Xbox Kinect [13], have made full-body tracking technology accessible. We aim to explore the growing body of Kinect-based repurposing and apply it to our research and artistic practice.

We felt rewarded to see aspects of our movement quality tangibilized. However, we felt that displaying the tangibilizations on a 2D display that was not collocated with the body forced both performer and viewing audience to continuously split their attention between the performer and the display. While this is not always unwanted, we feel that the next step in this exploratory research is to explore displays that enable the body and the tangibilizations to be collocated. One promising, low-cost, and open source solution is Lumarca, a low-fidelity volumetric display that uses cotton strings as extended pixels and any commercially-available projector [8]. This system could be extended and adapted to large-scale performance venues.

We also note that the development of the Tangibilizer should be coupled with the development of artistic work suited particularly to the system. We expect the development of the Tangibilizer to proceed to a point at which we can release it (or some portion of it) to a larger community of dance-theatre artists to test, use, and adapt to their needs. Documenting and analyzing how other people apply the system can inform its analysis and further development.

Acknowledgements
We thank the members of the Expressive Motion Research Project, the Tangible Computing Class, and the Performance-Research Group in the School of Interactive Arts and Technology at Simon Fraser University. We thank our international collaborators at the Institute for Advanced Computing Applications and Technologies at the University of Illinois and the University of Illinois Dance Department, and Dr. Karl-F. Bergeron for his insightful comments. We also thank our funders, MITACS and the New Media Initiative funded through the Canada Council for the Arts (CCA) and the Natural Sciences and Engineering Research Council (NSERC). We also acknowledge the University of the Philippines Open University.

Bibliography
1. Bouchard, D. and Badler, N. Semantic segmentation of motion capture using Laban Movement Analysis. Proc. 7th Int. Conf. Intelligent Virtual Agents, Springer-Verlag (2007), 37-44.
2. Djajadiningrat, T., Matthews, B., and Stienstra, M. Easy doesn't do it: skill and expression in tangible aesthetics. Personal and Ubiquitous Computing 11, 8 (2007), 657-676.
3. Fallman, D. Design-oriented human-computer interaction. Proc. SIGCHI Conf. Human Factors in Computing Systems, ACM (2003), 225-232.
4. Forsythe, W. William Forsythe: Improvisation Technologies. Hatje Cantz Publishers, 2000.
5. Franklin, E. Dance Imagery for Technique and Performance. Human Kinetics, 1996.
6. Laban, R.V. Effort. Macdonald & Evans, London, 1947.
7. Palazzi, M., Shaw, N.Z., Forsythe, W., et al. Synchronous Objects for One Flat Thing, reproduced. ACM SIGGRAPH 2009 Art Gallery, ACM (2009), 1-1.
8. Parker, M. Lumarca. ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation, ACM (2009), 77-77.
9. Pietrowicz, M., Garnett, G., McGrath, R., and Toenjes, J. Multimodal gestural interaction in performance. Whole Body Interfaces Workshop, CHI 2010.
10. Santos, L., Prado, J., and Dias, J. Human Robot interaction studies on Laban human movement analysis and dynamic background segmentation. (2009), 4984-4989.
11. Schiphorst, T. Bridging embodied methodologies from somatics and performance to human computer interaction. Ph.D. Thesis, School of Computing, Faculty of Technology, University of Plymouth, 2008. http://www.sfu.ca/~tschipho/PhD/PhD_thesis.html.
12. Williams, A., Kabisch, E., and Dourish, P. From interaction to participation: Configuring space through embodied interaction. UbiComp 2005: Ubiquitous Computing, (2005), 287-304.
13. Wortham, J. With Kinect Controller, Hackers Take Liberties. The New York Times, 2010. Retrieved December 6, 2010 from http://www.nytimes.com/2010/11/22/technology/22hack.html?_r=1&scp=3&sq=xbox%20kinect&st=cse.
14. Zhao, L. Synthesis and acquisition of Laban movement analysis qualitative parameters for communicative gestures. Ph.D. Thesis, University of Pennsylvania, 2001.
15. Zhao, L. and Badler, N.I. Acquiring and validating motion qualities from live limb gestures. Graphical Models 67, 1 (2005), 1-16.
