Abstract. This paper discusses the pieces Estilhaço 1 and 2, for percussion,
live electronics, and interactive video. It covers the conception of the pieces
(including the artistic goal and the metaphor used), the context in which they
were created, their formal structures and their relationship, the technologies
used to create both their audio and visual components, and the relationships
between the sounds and the corresponding images.
1 Introduction
¹ Project undertaken with the support of the Brazilian federal agency CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
…which the performer is free to improvise. Estilhaço 1 also exploits the
melodic potential of the kalimba. The two works dialogue with videos
created by Eli Stine. In the first, the video is created in real time, following
the performer’s sounds and gestures. In the second, the same video is
projected and used as a guide for the performer’s improvisation. Thus the
two works play with the relationships between image and sound. (program
note written by Fernando Rocha for the premiere of the pieces on Feb 5th,
2016)
The title comes from the Portuguese word ‘estilhaço’, meaning fragment, splinter, or shard.
It relates to the process of breaking the long metal resonances into sharp fragments in
Estilhaço 2. Similar processes also occur in Estilhaço 1. Both pieces use custom-made
patches in Max. The video component of the pieces utilizes Jitter and Processing. In
this paper we will present how the pieces were conceived and explain some details of
their audio and video components. We will also discuss how the relationship between
sound and video was explored.
The main compositional idea in both pieces was to explore the dichotomy between
long and short sounds; we extended the instruments’ capabilities to create, in real
time, extremely long resonances as well as short attacks and complex, dense rhythmic
structures. Estilhaço 2 was composed first, and its instrumentation asks for triangles,
crotales, bells and gongs (or other metals of long resonances). The objective here was
to shift the attention of the audience from the natural, long resonances of the
instrument to dense structures, created by a series of short attacks. Formally, the idea
was to create a kind of ABA’ structure, beginning with a sparse, soft and relaxed
texture, moving gradually to a section of greater density, volume, and tension, and
then going back to another relaxed structure, somewhat modified from the beginning
as a reflection of the natural trajectory of the piece.
In terms of technology, all the processes used in the piece were created by
recording samples (from 90ms to 450ms, depending on the effect to be achieved) after
every instrumental attack detected by the system. This made it possible to create a
system with no need to touch the computer during the performance. As stated by
violinist and electronic music performer Mari Kimura, “in order to convey to the
audience that using a computer is just one of many means to create music, I wanted to
appear to use the computer as seamlessly as possible on stage.” [1]. The ‘bonk~’
object [2] was used to detect attacks. After an attack is detected, a gate closes for
300ms and no new attack can be detected during this time. A sequence of effects is
built into the patch, and each attack sends a sample to a corresponding effect
according to this sequence.
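The gate-and-sequence logic described above can be sketched in Python (class and method names here are our own; the piece itself implements this inside a Max patch, with ‘bonk~’ providing the attack detection):

```python
class AttackRouter:
    """Sketch of the detection/routing logic: after each detected attack a
    300 ms gate suppresses further detections, and detected attacks are
    routed to effects following a fixed, pre-composed sequence."""

    GATE_MS = 300

    def __init__(self, effect_sequence):
        self.effect_sequence = effect_sequence  # e.g. [1, 1, 2, 3, ...]
        self.index = 0
        self.last_attack_ms = None

    def on_attack(self, now_ms):
        """Return the effect number for this attack, or None while gated."""
        if (self.last_attack_ms is not None
                and now_ms - self.last_attack_ms < self.GATE_MS):
            return None  # gate still closed: this attack is ignored
        self.last_attack_ms = now_ms
        effect = self.effect_sequence[self.index % len(self.effect_sequence)]
        self.index += 1
        return effect
```

For example, with the sequence [1, 1, 2], attacks at 0 ms, 100 ms, 400 ms, and 800 ms would yield effect 1, nothing (gated), effect 1, and effect 2.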
There are six types of effects (each one with 6 to 8 variations), all based on
different ways of playing back the recorded samples. The first effect is created by
looping three versions of the sample with cross fades, thus creating smooth
resonances. Effects 2 to 4 create resonances that are less and less smooth, until they
are heard as a series of attacks. This is accomplished first by using imperfect cross
fades, and then by using amplitude modulation. Effects 5 and 6 are created with
shorter samples (90 to 150ms) that are not looped, with envelopes that make the
attacks very clear. Each sample is played using different random rhythmic lines at
different tempi, and some of the samples are played at different speeds, resulting in
variations of length and pitch.
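The first effect's crossfaded looping can be illustrated with a small sketch (hypothetical code, not the concert patch; the choice of a raised-cosine crossfade is our assumption). Three phase-offset copies of the recorded buffer loop continuously, each under an envelope that is zero at the loop edges, so their overlap produces a smooth, click-free resonance:

```python
import math

def crossfade_loop(sample, n_out):
    """Three copies of the sample loop with offsets of a third of the loop
    length; each is shaped by a raised-cosine envelope so that no loop
    point is audible and the summed envelopes stay constant."""
    n = len(sample)
    out = [0.0] * n_out
    for voice in range(3):
        offset = voice * n // 3
        for i in range(n_out):
            pos = (i + offset) % n
            # envelope is 0 at the loop edges, 1 at the loop centre
            env = 0.5 - 0.5 * math.cos(2 * math.pi * pos / n)
            out[i] += env * sample[pos]
    return out
```

With the loop length divisible by three, the three envelopes sum to a constant (1.5), which is what keeps the resonance smooth; degrading exactly this property (imperfect crossfades, amplitude modulation) is what turns effects 2 to 4 into audible series of attacks.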
The sequence of these events in the patch guarantees the formal structure of the
piece, which comprises three sections: (A) a long and gradual crescendo in volume,
density and complexity beginning with natural sounds and then adding artificial
resonances that are increasingly modified as the piece moves on; (B) a climactic
section in which all the different effects occur at the same time. A complex texture is
created by using many short samples with sharp attacks; (A’) a gradual decrescendo,
moving back to the original state, while keeping some elements of the climax
(specifically, two layers with short and articulated sounds from effect 6).
The performer is free to improvise during the piece without necessarily altering the
larger structure. This illustrates another layer of dichotomy in the piece: the action of
the performer versus the result of the system. For example, the performer can
continue playing only long notes throughout the piece, but even in this case the
climax section (between 1/2 and 3/4 of the length of the piece) will create a busy,
complex texture that is full of short attacks. However, there are ways to interfere with
the opening and closing crescendo and decrescendo. In the first sections of the piece,
the system creates only long resonances without generating any rhythmic structure.
But if two attacks are played with an interval of less than 300ms, the second one will
be recorded into the buffer created by the first one. Thus the loop of this sample will
create a texture full of short attacks instead of one stable resonance.
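This ‘dirty buffer’ behaviour can be simulated with a short sketch (the function name, the 450 ms recording window used for illustration, and the logic are our reconstruction, not the patch itself): a second attack inside the gate is not detected as a new event, but its sound still falls within the buffer being recorded for the first attack.

```python
def classify_buffers(attack_times_ms, record_len_ms=450, gate_ms=300):
    """Return 'dirty' or 'clean' for each detected attack's buffer.
    An attack within the 300 ms gate is undetected, but if it lands inside
    the open recording window it contaminates the previous buffer, so the
    looped resonance becomes a rhythmic texture instead of a smooth tone."""
    buffers = []  # one entry per detected attack: onsets recorded into it
    last_detected = None
    for t in attack_times_ms:
        if last_detected is None or t - last_detected >= gate_ms:
            buffers.append([t])        # new detection, new buffer
            last_detected = t
        elif buffers and t - buffers[-1][0] < record_len_ms:
            buffers[-1].append(t)      # undetected attack lands in open buffer
    return ["dirty" if len(b) > 1 else "clean" for b in buffers]
```

For instance, attacks at 0 ms, 200 ms, and 1000 ms produce a dirty first buffer (the 200 ms attack is recorded into it) and a clean second one.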
Estilhaço 2 also uses a surround system in order to better engage the audience and
to highlight each of its multiple rhythmical lines. While most of the samples recorded
are sent only to one or two speakers, the samples that are played at different speeds
from the original move around the audience. This is particularly effective when two
very long resonances are played midway through the piece. At this moment, one of
the resonances is slowly pitch shifted to a major third below, while the other goes to a
tritone below. Each of these two processes takes about 20 seconds, during which the
sounds circle around the audience. This is a very noticeable effect that helps to mark
the beginning of the climax section of the piece.
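A minimal sketch of the circling movement, assuming equal-power panning between adjacent speakers of a four-channel ring (the actual spatialization in the patch may differ):

```python
import math

def ring_gains(angle, n_speakers=4):
    """Gains for one source on a ring of speakers: the source angle (radians,
    0 <= angle < 2*pi) is panned equal-power between the two nearest
    speakers. Sweeping the angle from 0 to 2*pi over ~20 s makes the
    pitch-shifted resonance circle the audience."""
    gains = [0.0] * n_speakers
    sector = angle * n_speakers / (2 * math.pi)  # position in speaker units
    left = int(sector) % n_speakers
    frac = sector - int(sector)
    gains[left] = math.cos(frac * math.pi / 2)            # equal-power pair
    gains[(left + 1) % n_speakers] = math.sin(frac * math.pi / 2)
    return gains
```

The cosine/sine pair keeps the summed power constant while the sound travels, which is what makes the slow 20-second circuits perceptually seamless.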
Estilhaço 1 was composed for hyper-kalimba, a thumb piano extended by the use
of sensors.² While the piece similarly explores the dichotomy of long vs. short sounds,
it also highlights another contrast: noise vs. melody. The kalimba is traditionally a
melodic and harmonic instrument. In this piece its sound palette was expanded in
order to create a dialogue between the melodic aspect and noisier, less harmonic
sounds (in both the acoustic and the electronic domains).
For the performance of Estilhaço 1, a new mapping for the hyper-kalimba was
created. This mapping kept some features explored in earlier pieces for the
instrument, such as pitch shift (controlled by a combination of tilt and applying
pressure to the right pressure sensor), while adding new possibilities. Two of the new
features are essential for the piece, since they help to explore the two dichotomies
(long vs. short notes and melodic vs. noisy sounds): (1) the possibility of creating
artificial resonances of the acoustic sounds; (2) the transformation of the traditional
‘melodic’ sounds of the instrument into ‘noisy’ and unpredictable sounds.
² The hyper-kalimba was created by Fernando Rocha with the technical assistance of Joseph
Malloch and the support of the IDMIL (Input Devices and Music Interaction Laboratory),
directed by Prof. Marcelo Wanderley at McGill University. It consists of a kalimba (a
traditional African thumb piano) augmented by the use of sensors (two pressure sensors, a
three-axis accelerometer, and two digital buttons) which control various parameters of the sound
processing. An Arduino Mini microcontroller board was used for sensor data acquisition,
and data were communicated to the computer over USB. The instrument has been used in
concerts since October 2007, both in improvisational contexts and in written pieces. [3]
In order to prolong the sounds of the kalimba a modified version of an object
called ‘voz1’ (created by Sérgio Freire and used in his piece, Anamorfoses) was used
[4]. As in Estilhaço 2, this object creates artificial resonances by recording a fragment
of the natural resonance and playing it back in three loops, mixed with crossfades. In
this mapping, pressing the right button attached to the instrument opens or closes a
gate which allows for the recording of the excerpts. When this gate is open, each
attack detected by the ‘bonk~’ object is recorded into a buffer, and the new resonance
is played. To avoid recording and reproducing part of the attack of the sound, the
recording begins 100ms after the attack is detected. As in Estilhaço 2, the performer
can also create ‘dirty’ buffers (by playing two consecutive attacks with a short
interval between them: the second attack will be recorded into the buffer created for
the resonance of the first one). By using the left pressure sensor, the performer is able
to cut the artificial resonances into shorter fragments that are played randomly in
different speakers of the system at different tempi.
To be able to change the characteristic, melodic sound of the kalimba, the granular
synthesis object ‘munger1~’ was used [5]. There are two ways of turning on or off
the munger effect: turning the instrument upside down or shaking it vigorously for a
few seconds. Shaking stimulates a more dramatic change, since it also adds some
distortion to the sound.
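How these two toggles might be read from the instrument's three-axis accelerometer can be sketched as follows (the thresholds, window, and function name are hypothetical; the actual mapping lives in the Max patch):

```python
def classify_motion(accel_samples, shake_threshold=2.0, min_peaks=6):
    """Classify a short window of (x, y, z) accelerometer readings, in g.
    'inverted' when gravity shows up negative on the z axis (instrument
    upside down); 'shaking' when the acceleration magnitude repeatedly
    exceeds a threshold within the window."""
    peaks = 0
    z_sum = 0.0
    for (x, y, z) in accel_samples:
        z_sum += z
        if (x * x + y * y + z * z) ** 0.5 > shake_threshold:
            peaks += 1
    if peaks >= min_peaks:
        return "shaking"
    if z_sum / len(accel_samples) < 0:
        return "inverted"
    return "rest"
```

Requiring several over-threshold peaks (rather than one) is what distinguishes vigorous shaking, sustained for a few seconds, from a single accidental jolt.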
Compared to Estilhaço 2, the form of this piece was expanded in order to leave
room for melodic exploration and for transformations with pitch shift. However, one
could still consider the form of the piece to be ABA’, where A corresponds to sections
1 and 2, B to sections 3 and 4, and A’ (or C) to sections 5 and 6 (fig. 1). The following
is a brief description of the material explored in the six sections of Estilhaço 1:
1. Exploration of the natural noisy sounds of the instrument (scratching and hitting)
and of artificial ‘noisy’ sounds created by the munger1~ object (high-pitched overtones).
2. Exploration of melodies played in the original range of the instrument. At the same
time, a layer of long resonances of the noise from the first section is chopped into
shorter and shorter envelopes until it disappears, and a new layer of long
resonances of notes is created, generating a harmonic background texture.
3. Dramatic change in texture using the munger1~ object with a preset that allows for
great variation of grains and pitch.
4. The munger effect is turned off, but the performer explores the pitch shift feature
of the mapping, generating a wide variation of pitch around the sounds produced.
5. The munger effect comes back with a preset that showcases high overtones, similar
to the first section of the piece.
6. A dramatic sequence of gestures (shaking, followed by a punching gesture and then
placing the kalimba upside down) triggers the last part of the piece. This consists of two layers
made by long and short sounds, similar to the last part of Estilhaço 2. One layer is
created with the munger1~ object, which continues to process the last sound
produced and generates different fragments that contrast in terms of length,
separation, and pitch. Long, artificial resonances of high-pitched notes which are
recorded into the buffers in the previous section create a second layer of sound;
both layers fade out to the end.
The video component of Estilhaço utilizes Cycling 74’s Jitter and the Processing
programming language. The system is as follows: data from the Max patch that is
being used to process the sounds of the hyper-kalimba and also data extracted from
the audio analysis are sent via OSC messages (over UDP) either remotely (using two
laptops) or locally (from program to program) to a Jitter patch, which parses the data
and sends it to Processing (again over UDP). The data is used to affect a program in
Processing (a flocking simulation) whose frames are sent back to Jitter using the
Syphon framework. The video is then processed inside Jitter (applying colors,
background videos, etc.) and displayed on a projector beside the performer.
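The OSC-over-UDP link between the programs can be illustrated with a standard-library sketch (the address "/kalimba/amp" and the port are invented for this example; the real patches use the OSC support built into Max, Jitter, and Processing):

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message carrying float arguments.
    OSC strings are null-terminated and padded to 4-byte boundaries;
    floats are big-endian 32-bit."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

def send(value, host="127.0.0.1", port=9000):
    """Send one control value over UDP, e.g. from the audio machine running
    Max to the machine running the Jitter/Processing video system."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/kalimba/amp", value), (host, port))
    sock.close()
```

Because UDP is connectionless, the same code works for both configurations described above: remotely between two laptops or locally from program to program on one machine.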
In order to create the video for Estilhaço 2, one can either record the video created
during the performance of Estilhaço 1 and play it back for the final piece, or record
the data from the hyper-kalimba during the performance of Estilhaço 1 and play it
back through the system for the final piece. Because of the randomization in the
flocking simulation, the latter method will reproduce the exact timings of the gestures
of the first piece, but not the exact same visual gestures.
Lastly, in the creation of the video component a parallel was made between the
dichotomy of the accompanying acoustic instruments being manipulated through
digital means and the use of a completely synthesized, animated image (the flocking
algorithm) being combined with real-world video recordings.
4 Connecting Audio and Video in a Collaborative Project
Fig 2. Images of the video of Estilhaço in (a) section 1; (b) section 2; (c) section 3
³ As stated by Wessel and Wright, in the creation of interactive works, “metaphors for control
are central to our research agenda” [7].
⁴ “Starboard (2016) is a collaboration between Fernando Rocha and instrument
creator/multimedia artist Peter Bussigel. Sensors hidden within the instrument allow for the
creator/multimedia artist Peter Bussigel. Sensors hidden within the instrument allow for the
creation of two different textures… one dense and messy, like a cloud of sounds, and the
other dancing and rhythmic. Moving in and out of time with sounds that may or may not
return, creating at some point a feeling of groove that is ready to dissolve, Starboard is an
instrument for navigating between the promise of the computer and the truth of nails sticking
out of a board.” (program note written by Fernando Rocha for the performance of the piece
on July 5th, 2016 at CMMR)
Fig 3. Image of Starboard performance.
References
1. Kimura, M.: Creative Process and Performance Practice of Interactive Computer Music: a
Performer's Tale. In: Organised Sound, Cambridge University Press, Vol. 8, Issue 03,
pp. 289--296. (2003)
2. Puckette, M., Apel, T., and Zicarelli, D.: Real-time Audio Analysis Tools for Pd and MSP.
In: Proceedings, International Computer Music Conference. San Francisco: International
Computer Music Association, pp. 109--112. (1998)
3. Rocha, F., and Malloch, J.: The Hyper-Kalimba: Developing an Augmented Instrument
from a Performer’s Perspective. In: Proceedings of 6th Sound and Music Computing
Conference, Porto - Portugal, pp. 25--29. (2009)
4. Freire, S.: Anamorfoses (2007) para Percussão e Eletrônica ao Vivo. In: Revista do III
Seminário Música Ciência e Tecnologia, pp. 98--108. São Paulo. (2008)
5. Bukvic, I.I., Kim, J.-S., Trueman, D., and Grill, T.: munger1~: Towards a Cross-platform
Swiss-army Knife of Real-time Granular Synthesis. In: Proceedings, International Computer
Music Conference. Copenhagen: International Computer Music Association. (2007)
6. Cook, N.: Analysing Musical Multimedia. Oxford University Press. (1998)
7. Wessel, D. and Wright, M.: Problems and Prospects for Intimate Musical Control of
Computers. In: Computer Music Journal, vol. 26, no. 3, pp. 11--22. (2002)