
A Live Performance System in Pure Data: Pitch Contour as Figurative Gesture

Richard Graham B.Sc., M.Mus.
School of Creative Arts, University of Ulster
Foyle Arts Centre, Magee Campus
Londonderry, Northern Ireland
graham-r9@email.ulster.ac.uk

Abstract

The implementation of real-time music technology within an improvisatory performance practice raises a series of issues concerning perceptual relationships between the performer, musical gesture and resultant sound structure. The following paper presents a live performance system constructed in Pure Data which exploits a polyphonic or multi-channel audio output of an electric guitar in order to create dense polyphonic music structures. The system provides pitch data for each string in order to construct relevant pitch-class contour (figurative gesture) information in real-time. The contour information is then interpolated and assigned to a series of spatial and timbral synthesis parameters in order to create a series of new gestural relationships between the performer and the resulting sonority. Overall, the presentation will highlight the salience of contour in the construction and perceptual auditory organisation of polyphonic music structures.

Keywords

Polyphonic Guitar, Melodic Contour, Septar Board, Figurative Gesture, Pure Data.

1 Introduction

A series of performance-related issues concerning the perception of gestural interaction between the performer, their instrument and the resultant acoustic event are commonplace in electronic music practice. The computer undoubtedly extends the compositional palette of the solo performing musician beyond pitch primacy conventions, allowing the development of dense polyphonic textures over time. However, the retention and comprehension of such a "polyphony of polyphonies" [1] becomes increasingly difficult in proportion to the density of acoustic events. Unfamiliar sound structures created in real-time may also challenge audiovisual schemata associated with relevant compositional or instrumental (melodist) concepts, such as linear completion and other forms of musical or instrumental expectancy1. One may consider envelopes as a means for thematic evolution, to be treated as an open gesture in order to create a memorable shape [2], perceivable as a meaningful scheme at various hierarchical levels of musical structure. Thus, one may increase structural percepts in dense polyphonic music by applying contours amongst concurrent transformative real-time synthesis processes.

Such an approach may potentially allow further comprehensibility of complex polyphonic structures as they unfold in real-time performance [3]. One may develop more meaningful, or at least more relevant, envelopes by utilising the contour structure of the performer's melodic figuration, constructed in real-time practice. A further benefit of utilising a constant incoming stream of performance data is the potential mapping of relevant performance data to control concurrent real-time morphological transformations, rather than a transformation of sound structure occurring only after a conditional algorithm is satisfied. The following paper presents a theoretically grounded live performance system which exploits relevant schemata in order to evoke conscious, consistent evaluation and interaction between the performer and the available patterning resources2. It would seem logical, then, to develop a strong theoretical underpinning based on relevant musical, technological, and psychological research to function as the crux of a computer-based performance practice, in order to address matters of interactivity, density, polyphony and comprehensibility across rehearsals and performances [4]. By extending our understanding of the cognitive constraints surrounding the perception of musical structure, one creates an opportunity to formulate useful compositional strategies which may be utilised within a live performance system and practice [5]. Methods of auditory (scene) analysis3 will be considered in order to improve comprehension of how the listener may establish and retain musical structure in memory. One thereby creates a suitable foundation for the implementation of a performance system which successfully encourages structural percepts in dense polyphonic music. This paper also considers compositional structures relevant to the instrument, which may be manipulated algorithmically to further encourage musical structure in real-time practice.

1 (cf. Lerdahl, 2001; Iazzetta 2000, pp. 265-266)
2 (cf. Bregman, 1990)
3 (cf. Bregman in McAdams and Bigand, 1993)

The following section discusses project background and motivation, the development and exploitation of available hardware, and relevant theoretical research concerning the perceptual organisation of sequential pitch structure. The implementation of a live performance system in Pure Data follows in the latter section of this paper.

2 Beyond Instrumental Convention

Pitch primacy in Western art music tends to overshadow the potential utilisation of timbre and spatial location cues beyond the role of nuance [6]. Western musical instrument design is based on the premise whereby timbre has been "streamed in specifically acoustical-refined instruments and adapted to the logic of pitch/duration lattice structure" [7]. As a result, acoustic instruments acquire homogenous or less distinctive timbres [8] due to a standardised source vibration and resonance shaper [9]. However, modern interactive technologies and techniques provide an opportunity to foreground timbre and spatial location cues beyond such nuance [10]. The traditional electric guitar is treated as a single (monaural) sound source due to the standardisation of pick-up systems in conventional electric guitar design. Monaural amplification of all available registers makes it difficult to exploit multiple timbral structures in real-time. As a result, electric guitar performers who choose to explore timbral and spatial location configurations in musical practice are presented with limited physical interaction between the electric guitar and the selected means of real-time signal processing. The performer's (physical) inability to convincingly control and apply real-time processing technology in conjunction with an unmodified instrument results in an arduous, time-consuming performance approach. Such an approach may present a series of issues concerning causation between the performer and the resulting sonority; that is to say, the relationship between the physical (effective) gesture of the performer and the resulting musical structure may not be clear. Future interrogation of available musical gesture may therefore prove most useful in addressing issues of causation. However, if independent audio streams per string are utilised concurrently, the performer has the potential to create multiple configurations of musical stimuli, including complex timbral and spatial location structures. The physical spatial independence of each individual pole or sensor reduces interference between strings, which may in turn improve the extraction of pitch data in real-time. Accurate extraction of performance information may allow musical structures to be created more convincingly in real-time practice by developing a series of interactive algorithms which reflect cognitive constructs relevant to real-time musical figuration [11], potentially negating the aforementioned interaction issues between the performer and computer.

Given the potential found in multi-channel audio technology to foreground musical cues that are otherwise nuanced, it is pertinent to examine the traditional definition of polyphony against a more contemporary, technological definition. The Harvard Dictionary of Music defines polyphony as: "Music that combines several simultaneous voice-parts of individual design, in contrast to monophonic music, which consists of a single melody, or homophonic music, which combines several voice-parts of similar, rhythmically identical design" [12]. Such individual design is limited by conventional monaural pick-up technology. The inventor and musician Matthias Grob observes: "Polyphony has its specific meaning in [sic] music history, but the word itself simply means that there are several sounds and we understand several voices or notes. Of course this is the case for any ordinary guitar, but conventional technology treats all notes as a single sound source [which often does not make sense to the musician]" [13]. It would seem logical, then, to treat audio hardware and applications in the manner of traditional polyphony, and not as a single monaural audio output.

2.1 Beyond MIDI Technology

The polyphonic guitar pick-up is an extension of electric guitar pick-up technology utilised in conjunction with the note-message-based protocol MIDI (Musical Instrument Digital Interface), standardised in 1982 and adopted by general industry in 1983 [14]. A MIDI guitar pick-up utilises a magnetic or piezoelectric pick-up system to capture the vibration of each string.

Fig 1. 13 Pin DIN Connector

A thirteen-pin DIN (Deutsches Institut für Normung) connector has been incorporated into the standardised MIDI guitar pick-up design to allow multiple streams of voltage to be utilised concurrently, or polyphonically. Each pin may correspond to a particular function; for example, pins one to six of the thirteen-pin DIN connector correspond to strings one to six of the electric guitar. Traditionally, the independent data streams would be converted into event messages by MIDI conversion hardware or software4. The converted data could then be utilised to trigger various computer-based processes, with the potential to reflect additional musical expression.

2.1.1 Hardware: The Septar Board

Fig 2. Septar Board Schematic

The Septar board is a prototype break-out audio board, implemented in collaboration between the author and the engineer John Harding at the University of Ulster in Northern Ireland. The board functions as an audio break-out for multi-channel audio pick-up systems for electric guitar, providing audio for each string or any other combination of strings. The prototype comprises a newly designed circuit to improve the audio output of magnetic or piezoelectric-based pick-up systems, notably the GK-3 by Roland5 and the Hexpander by Graphtech Guitar Labs6.

By constructing a simple break-out audio board, one is able to extract audio for each string of the electric guitar for processing in real-time music performance. Pins one to six on the thirteen-pin DIN connector correspond to the audio signals of each string; pin seven may be utilised to accommodate a seven-string electric guitar. Each (string) pin is connected to a mono jack socket and grounded accordingly. The polyphonic system is powered directly by the twelfth and thirteenth pins, depending on the pick-up system utilised [15][16].

The completed kit typically provides seven independent audio channels (dependent upon user specification). Each channel consists of an independent circuit, based on vintage vacuum tube topology, which provides impedance matching by employing low-noise and spectrally accurate Junction Field Effect Transistors [16].

Notable Effects

A series of early tests noting potential integrative and segregative effects between a polyphonic or multi-channel (piezo) pick-up system and conventional monaural (magnetic) pick-up technology were made by exploiting available musical patterning resources [17]. The physical separation of each pole or sensor reduces harmonic interference between notes produced on different strings, particularly when high gain structures are applied, allowing a clearer harmonic partial structure per complex tone, per string. More dissonant tone combinations may therefore be utilised with reduced roughness, due to the physical spatial separation of each pole or sensor. This phenomenon may significantly impact conventional guitar practice and composition [18]. In addition, spatial configurations may ultimately influence perceptual integration and segregation. Grouping effects may occur between musical stimuli that are proximate in spatial location, while differences in spatial location may encourage segregation effects, potentially allowing the listener to delineate more clearly between rhythmically contrasting stimuli [19]. However, it is important to note the dichotomy between physical interaction in the acoustic domain and the perception of acoustic events. For example, while two tones close in log frequency or partial structure may result in psychoacoustic dissonance, spatialising the two tones apart may ultimately create the perception of tonal resolve through (spatial) segregation. The contour of the acoustic phenomenon of beats or beating [20], resulting from the physical interaction between two complex tones close in log frequency on adjacent strings, nevertheless remains intact. Further experiments concerning the perception of acoustic interaction between musical stimuli in real-time performance will be part of future work.
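The beating phenomenon referenced above is straightforward to verify numerically. The following sketch (Python with NumPy is used here purely for illustration; the frequencies are invented) sums two near-unison tones and exposes the slow amplitude contour at the difference frequency:

    import numpy as np

    sr = 44100                                # sample rate in Hz
    t = np.arange(sr) / sr                    # one second of time values
    tone_a = np.sin(2 * np.pi * 440.0 * t)    # fundamental of one string (illustrative)
    tone_b = np.sin(2 * np.pi * 443.0 * t)    # near-unison tone on an adjacent string
    mix = tone_a + tone_b

    # The summed waveform's amplitude envelope rises and falls at the
    # difference frequency, |443 - 440| = 3 Hz. Spatialising the two tones
    # apart changes the percept, but this physical interaction remains
    # wherever the two signals are acoustically combined.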
4 (cf. Axon AX-50, 2008; Keith McMillen's Stringport, 2008)
5 (cf. Roland Corporation, 2003)
6 (cf. Graphtech Guitar Labs, 2009)

2.1.2 Hierarchies in Auditory Scene Analysis

It can be asserted that "a melody is not provided through individually determined intervals and rhythms, but is a structure (Gestalt) whose individual parts possess a free variability within characteristic limits" [21]. An approach whereby the resulting sonority is analysed and organised in terms of auditory grouping by similarity or difference in pitch, timbre, or spatial location is analogous to a form of Gestalt psychology, which often concerns the perception of a visual aggregate by the analysis of its parts. Analysis concerning scenes of auditory stimuli may be referred to as auditory scene analysis7. The Gestalt theorist Max Wertheimer insisted that gestalten possess attributes which cannot be described merely as a sum of their parts. By analogy, auditory scene analysis is "the process whereby all the auditory evidence that comes, over time, from a single environmental source is put together as a perceptual unit" [22]. Similarities and differences of musical parts ultimately influence the organisation of hierarchical musical structures, such as interval distances between tones in sequence forming a hierarchical melodic structure [23]. However, the resulting auditory gestalt is much more than a mere summation of musical intervals. The concept of a melodic gestalt or hierarchical musical structure is arguably synonymous with the perceptual aggregate [24] of musical stimuli, similar to the concept of the unity assumption, whereby "humans are motivated to maintain congruence in their perceptual world such that any physical discrepancies (of a reasonable magnitude) are reduced in order to attain an integrated and unitary percept"8 [25]. Furthermore, the conceptual fusion of cognitive (musical) percepts is particularly pertinent in musicology research concerning melodic attraction and the asymmetry of sequential tonal relationships [26]. That is to say, the listener may base the perceptual organisation of musical cues on familiar cognitive musical constructs, such as the circularity of pitch within an equal-tempered pitch system working systematically within the octave [27]. Both conceptual and perceptual regularities form an average or aggregate musical event or percept over time, relative to concurrent patterning resources such as pitch, rhythm, timbre, spatial location and so on. Thus, the perceptual auditory gestalt is arguably the resulting aggregate percept in relation to both the acoustic and cognitive constructs available to the listener.

While one must exercise caution in the application of abstract principles devised specifically for the visual domain, several Gestalt principles may directly apply to various forms of musical organisation, specifically concerning sequential pitch structure. Gestalt principles concerning temporal and spatial proximity in relation to tone duration, and the principle of closure in relation to melodic cadential points, are arguably relevant to the perceptual organisation of sequential pitch structure [28]. The principle of similarity may also pertain to auditory groupings based on similarities in spatial location or in the harmonic partial structure (or timbre) of a complex tone. The Gestalt principle of good continuation, whereby one has the perceptual tendency to group items based on their continuity in following a consistent, lawful direction, is also arguably relevant to rules of melodic motion [29], given the general musical protocol commonplace in pedagogy, composition, and practice [30][31][32][33]. Thus, as research would suggest, the listener processes sequential pitch information in hierarchical form, usually creating a metaphorical contour relevant to the hierarchical auditory structure in memory.

Melodic contour is a hierarchical visuospatial construct which may potentially be a result of axial or arch-based melodic structure [34]. Contour metaphorically relates to objects in physical space, and accounts for both micro- and macro-structural musical motion between temporally sequential pitch events. The listener may often reflect on a melodic structure in aggregate form due to the limitations of human (short-term) memory systems, particularly in relation to non-tonal pitch structures. Contour, as a result of melodic motion, may also provide greater comprehensibility when processing melodic motion which does not proceed in interval steps, for example the spectral glide produced by a fretless stringed instrument or instrument accessory. Interestingly, the speed at which a linear contour is perceived may also contribute to perceived melodic tension: in speech, a fast-sliding motion tends to produce tension, while slower-sliding motion is more relaxed and leads to resolution [35]. Bartlett and Dowling have demonstrated the effect on the perceptual delineation between two unfamiliar melodies heard for the first time, which vary in interval skips but maintain the same melodic contour. The results highlight the salience of contour as a cognitive melodic schema in the perceptual organisation of unfamiliar melodic events, as the subjects relied on a representation of the melody's contour in order to delineate between melodic stimuli [36]. The salience of contour, particularly continuity, is further evidenced by how the listener may perceive transpositions of melodies which exhibit the same melodic contour as identical [37]. Since the listener establishes musical structure in hierarchical form, it would seem logical to extract and interpolate melodic contour information in real-time. One may then apply contour data and its relative attributes amongst concurrent musical cue configurations in order to establish new structural ties between the performer, instrument and the resultant sound structure.

7 (cf. Bregman, 1990; 1994)
8 (cf. Welch 1999, pp. 371-387)
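Contour equivalence of the kind tested by Bartlett and Dowling is easy to state computationally. The sketch below (Python is used for illustration; the melodies are invented) reduces a pitch sequence to its pattern of ups and downs, under which melodies with different interval sizes, or transpositions of the same melody, compare as identical:

    def contour(pitches):
        """Reduce a pitch sequence to its up/down/same contour (a Parsons-like code)."""
        return tuple((p2 > p1) - (p2 < p1) for p1, p2 in zip(pitches, pitches[1:]))

    melody_a = [60, 64, 62, 67, 65]      # MIDI note numbers, invented example
    melody_b = [60, 65, 61, 68, 63]      # different interval sizes, same direction pattern
    transposed_a = [p + 7 for p in melody_a]

    print(contour(melody_a) == contour(melody_b))      # True: same contour, different intervals
    print(contour(melody_a) == contour(transposed_a))  # True: transposition preserves contour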

The following section takes into account the salience of aggregation and contour in the perceptual organisation of melodic structures, and details the implementation and mapping of both aggregates of pitch-space quantification and interpolations of melodic contour within a live performance system designed in Pure Data.

3 Live Performance System in Pure Data

Fig 3. Pd2Live - Live Performance System

The initial premise of the live performance system was to design a modular interactive performance system in Pure Data which exploits the polyphonic or multi-channel audio output of an electric guitar, whilst extracting relative performance data in order to (re)establish structural similarities amongst concurrent cue configurations. The approach is similar to the real-time interactive improvisation software [38] by Christopher Dobrian in Max/MSP9. The extraction of pitch information in real-time may be utilised to convey expressive performance attributes (such as pitch, dynamics and rhythm). Data may then be interpolated and assigned to a variety of synthesis parameters in real-time, thus, in the opinion of the author, creating a more interactive performance process between the performer, instrument, and computer in real-time practice.

DSP Signal Chain in Pure Data

Pd2Live provides a series of simple synthesis abstractions, available per string. These abstractions form a combined or chained synthesis relationship within the completed digital signal processing system, similar to Puckette's patch for guitar [39]. Each synthesis abstraction is interchangeable in relation to chain order.

[filter~] is a simple filter abstraction which utilises the state-variable filter object [svf~] to provide the user with simple frequency and resonance parameters per string.

[dino~] is a simple distortion-to-noise abstraction which permits the user to interrogate the perceptual effects of (timbral) morphologies between a series of simple synthesis processes. [dino~] includes an octave-up distortion implemented using the [clip~] object, truncating regions of periodic waveforms and thereby highlighting (upper) harmonic partial structure. [dino~] also includes the Soundhack external [+decimate~], which provides bit depth and sample rate reduction for added aliasing and decimation noise [40]. The [line] object provides an effective crossfade between each process.

[partch~] is another filter abstraction available per string, used to explore approximations of harmonic partial relationships per complex tone. The abstraction does not deal with individual sinusoids as such, but a reasonable approximation is attainable by applying a high Q factor, producing narrow frequency bands at specified multiples of the fundamental frequency per tone in sequence.

[mygrain~] is a granular sampler abstraction available per string, used to explore the effect of time manipulation on the perception of sequential pitch structures in terms of fusion (integration) and streaming (segregation) effects. It assumes a linear synchronous granular synthesis approach consisting of two overlapping grains controlled by a series of [vline~] objects. The grain engine provides parameters which allow the user to manipulate playback speed, grain position and grain size. Future development of this abstraction may involve exploring the effect of time manipulation on echoic and short-term memory systems10.

[freeze~] is a simple reverb simulator abstraction which utilises the [rev2~] object, providing independent reverberation per string in order to explore integration and segregation effects of simulated resonances in relation to concurrent patterning resources.

[ambipanner~] is an ambisonic panner abstraction available per string, allowing the user to explore the relationships between dynamic spatial trajectories and additional patterning resources. The abstraction utilises the [ambilib] library ported to Pure Data by Matthew Paradis in 200211.

Fig 4. DSP Control Signal Chain

9 (cf. Dobrian, 2004)
10 (cf. Snyder, 2000)
11 (cf. University of York Website, 2011)
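As a signal-level illustration of the [dino~] stages described above, the following sketch (Python with NumPy standing in for Pd's signal objects; the threshold, bit depth and hold values are illustrative assumptions) applies hard clipping followed by bit-depth and sample-rate reduction:

    import numpy as np

    def clip_distort(signal, threshold=0.3):
        # Hard-clip the waveform, analogous to something like [clip~ -0.3 0.3]:
        # truncating the peaks emphasises upper harmonic partials.
        return np.clip(signal, -threshold, threshold) / threshold

    def decimate(signal, bit_depth=8, hold=4):
        # Reduce bit depth and sample rate for aliasing and decimation noise,
        # loosely following what [+decimate~] provides.
        levels = 2 ** (bit_depth - 1)
        crushed = np.round(signal * levels) / levels
        return np.repeat(crushed[::hold], hold)[: len(signal)]

    sr = 44100
    t = np.arange(sr) / sr
    guitar_string = np.sin(2 * np.pi * 196.0 * t)  # G-string fundamental, illustrative
    processed = decimate(clip_distort(guitar_string))

In the patch itself these stages are realised with [clip~] and [+decimate~], with [line] crossfading between them.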

The [ambilib] externals are based on the ambisonic encoder and decoder VST plug-ins developed by David Malham at the University of York [41]. The [ambipan~] object provides a first-order ambisonic representation (B-format), decodable for two-dimensional and three-dimensional multiple speaker arrays using the [ambidec~] object. Ambisonics has a series of limitations related to the early reflections and reverberation which may be present in an acoustic space, and amplitude panning is often recommended for music in which horizontal localisation is a critical parameter. Nevertheless, Ambisonics would seem the more appropriate spatialisation technique for incorporating dynamic spatialisation trajectories as part of a real-time acoustic event, particularly when using multiple speaker arrays beyond a stereo configuration [42].
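For reference, the standard first-order horizontal B-format encoding equations which objects of this kind implement can be sketched as follows (Python for illustration; the distance roll-off shown is an assumption, not a description of [ambipan~]'s internal model):

    import numpy as np

    def encode_b_format(signal, theta, radius=1.0):
        """First-order (horizontal) B-format encoding of a mono signal.

        theta is the azimuth in radians; radius is used here only as a
        naive distance attenuation, which is an assumption of this sketch.
        """
        gain = 1.0 / max(radius, 1.0)              # simple distance roll-off
        w = signal * gain * (1.0 / np.sqrt(2.0))   # omnidirectional component
        x = signal * gain * np.cos(theta)          # front-back component
        y = signal * gain * np.sin(theta)          # left-right component
        return w, x, y

A decoder such as [ambidec~] then derives each speaker feed as a weighted sum of W, X and Y according to the speaker's position in the array.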

Extracting Performance Data

Fig 5. Prototype Pitch Tracker Abstraction

The analysis stage of the live performance system utilises a pitch tracking system in order to record incoming pitch data, which is then subject to further quantification, organisation and interpolation. Pitch data is then mapped in order to interact with a series of synthesis processes in real-time. The system utilises the [sigmund~] object in order to track pitch (continuously, using the pitch argument) and amplitude information. The [threshold~] object is utilised to improve the accuracy of pitch tracking per string, whilst also taking into consideration peak amplitude values, which may be obtained using the [peakamp~], [bonk~] and [specFlux~] objects12. The user may set the threshold of each instance of [sigmund~] in relation to individual performance style and technique. Pitch data is further organised into pitch-class form (assigning values 0 to 11 to each pitch within the octave of an equal-tempered pitch system), providing a simple modulo-arithmetic quantification per tone. This quantification proves useful in future work involving the implementation of cognitive abstractions relating to melodic attraction, linear completion, and linear expectancy13.

Each tone event produced by the pitch tracker abstraction is then organised together with additional algorithmic information, including pitch-class, pitch-class distance, interval class, string number, amplitude, register and tonal context (basic space), per tone. This data is collected as a tonal set and stored in a [coll] object, allowing the information to be recalled by interactive algorithms at any point during a performance. The system operates at an average of seven tones per set, relative to instrumental scalar structures typical in repertoire and to the maximum number of events retained in short-term memory at one time (seven plus or minus two) [43]. The system currently operates best with a [coll] size of between four and seven notes, attending to both tetrachordal and modal structures relevant to current instrumental vocabulary. The user may change the size of the [coll] object in relation to instrumental vocabulary at any time.

Considering an Algorithmic Approach

There are numerous mapping strategies to consider in the development and implementation of algorithmic abstractions in Pure Data. A synthesis parameter may be driven by one form of musical gesture, one musical gesture may drive many synthesis parameters, or one synthesis parameter may be driven by two or more musical gestures. A combination of these mapping strategies is also feasible, and may be the most plausible representation of the mapping of a traditional instrument [45]. One may consider the potential causal percepts of each strategy in relation to available forms of musical gesture: effective and accompanist gestures, pertaining to the physical motion of the performer, and figurative gestures, pertaining to structures conveyed by the musical structure itself [44]. The system therefore strives to provide flexibility between mapping strategies, whilst considering various forms of musical gesture. The following example demonstrates a convergent14 strategy concerning macro-structural pitch events relevant to the instrument.

12 (cf. Dobrian, 2004; Puckette, 2007; Brent, 2009)
13 (cf. Lerdahl, 2001)
14 (cf. Iazzetta 2000, p. 265)
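The quantification and storage stage described above might be sketched as follows (Python for illustration; the field names and the windowing policy are assumptions, standing in for the patch's [coll]-based tonal set):

    from collections import deque

    PC_SET_SIZE = 7  # upper bound chosen to match short-term memory span (7 +/- 2)

    class TonalSet:
        """Rolling store of analysed tone events, loosely mirroring the [coll] object."""

        def __init__(self, size=PC_SET_SIZE):
            self.events = deque(maxlen=size)

        def add_tone(self, midi_pitch, string, amplitude):
            pitch_class = round(midi_pitch) % 12   # modulo-12 quantisation, values 0 to 11
            register = round(midi_pitch) // 12     # octave number as a simple register index
            if self.events:
                prev = self.events[-1]["pitch_class"]
                interval_class = min((pitch_class - prev) % 12, (prev - pitch_class) % 12)
            else:
                interval_class = None
            self.events.append({
                "pitch_class": pitch_class,
                "interval_class": interval_class,
                "string": string,
                "amplitude": amplitude,
                "register": register,
            })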

Fig 6. [expr] with Tetra and Modal Structures

Given the ability to track incoming pitch data, one is able to adopt a machine listening approach whereby the [expr] object may be utilised to actively listen to an incoming sequence of notes and determine whether or not a preset structure has been performed at a particular point in an analysed pitch-class set. Once an expression is satisfied, a trigger event may result, which may then be assigned to a particular synthesis process, arguably providing a clear causal relationship between a scalar structure performed by the instrumentalist and a resulting synthesis process. Whilst the causal relationship may be clear, this is not a wholly interactive or dynamic approach, but rather a re-active approach15, whereby the performer creates an event and the computer reacts after a condition has been fulfilled. One may instead interrogate the real-time extraction, interpolation, and mapping of additional figurative structure, such as melodic contour, potentially creating a clear and concurrent association between the performer, musical figuration, and resultant sound structure.

15 (cf. Dobrian, 2004)
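The kind of condition the [expr] object evaluates can be paraphrased as a simple test over the analysed set. In the sketch below (Python for illustration; the target tetrachord and the transposition-invariant normalisation are illustrative assumptions), a trigger fires when the most recent tones realise a preset structure:

    MINOR_TETRACHORD = (0, 2, 3, 5)  # illustrative target, in pitch classes relative to the first tone

    def matches_structure(pitch_classes, target=MINOR_TETRACHORD):
        """Return True when the last len(target) tones realise the target
        structure, transposed to start on any pitch class - the kind of
        condition tested by the patch's [expr] expressions."""
        if len(pitch_classes) < len(target):
            return False
        tail = pitch_classes[-len(target):]
        root = tail[0]
        normalised = tuple((pc - root) % 12 for pc in tail)
        return normalised == target

    # e.g. A, B, C, D played in sequence: [9, 11, 0, 2] normalises to (0, 2, 3, 5)
    print(matches_structure([4, 9, 11, 0, 2]))  # True -> trigger

Once the function returns True, the resulting trigger could be routed to any of the synthesis abstractions described earlier.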

Considering Contours

There is a multitude of contours embedded throughout all performance domains, from the physical gestures of the performer (both effective and ancillary gestures) to the figurative musical hierarchies in the perceived sound structure. Following on from the discussion of visuospatial contours and tonal hierarchies above, one must recognise the vital role of contours in the perceptual organisation of musical structure, and their potential role in electronic music performance. Simple (linear) interpolation of incoming sequential pitch-class data is arguably a suitable approach, given the frequent hierarchical perceptual organisation of melodic structure into a perceptual whole. If one considers a pitch-class set as a regular data set, one may interpolate the data in terms of an average or central tendency within the analysed pitch-class set over time. A hierarchical aggregation of sequential pitch structure is achieved quite successfully with a number of simple objects in Pure Data: the [average] and [line] objects produce a smooth interpolation of a quantified sequential pitch structure. As such, the rate of output may be quite relevant to melodic motion and the Gestalt principle of spatial proximity16. For example, if the performer repeats a tone, the contour of melodic motion ceases to change (location) in perceptual pitch space; a simple combination of the [average] and [line] objects mimics such a cognitive construct, repeating the (central tendency) value relative to the repeating (average) tone over time.

Fig 7. Solo and Global (Temporal) String Contours

Melodic contours may be plotted as incoming pitch or interval class per note in a [table] object, providing a visualisation of incoming melodic motion within the context of a pitch-class set.

Fig 8. Plotted Contour of Pitch-Class Set

Interpolated melodic contour data is available per string (solo), as well as for all strings (global). A [line] object may then interpolate between selectable points within a table, also allowing the user to stipulate the interpolation time relative to the desired perceptual tension17, attending to both micro- and macro-structural elements of the incoming melodic context. The interpolation times may also be controlled by the note onsets and offsets between each tone in sequence, arguably creating additional interaction between the performer and the algorithmic process. Random contours may also be assigned, to encourage the randomisation of musical parameters at points designated by the user during the course of a performance. Aside from the interpolation and scaling of incoming pitch data, contours also provide a number of other useful real-time mapping approaches. Simple switches may be created by establishing thresholds at certain points in the contour; in the example patch, the record function of the granular sampler abstraction is controlled by a defined pitch-class threshold.

16 (cf. Lerdahl, 2001)
17 (cf. Snyder, 2000)
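A rough stand-in for the [average] and [line] pairing described above might look as follows (Python for illustration; the window length and ramp resolution are assumptions, where the patch would specify a ramp time in milliseconds):

    from collections import deque

    class ContourInterpolator:
        """Running average of recent pitch values, ramped towards each new
        target - a rough analogue of the [average] and [line] pairing."""

        def __init__(self, window=4, ramp_steps=20):
            self.history = deque(maxlen=window)
            self.ramp_steps = ramp_steps   # control steps per ramp (assumed resolution)
            self.current = 0.0

        def new_tone(self, pitch_class):
            self.history.append(pitch_class)
            target = sum(self.history) / len(self.history)  # central tendency of the set
            step = (target - self.current) / self.ramp_steps
            ramp = [self.current + step * i for i in range(1, self.ramp_steps + 1)]
            self.current = target
            return ramp  # successive control values, analogous to [line]'s output

    # A repeated tone leaves the average - and therefore the contour - static,
    # mirroring melodic motion that ceases to change location in pitch space.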

Again, this is a simple yet effective mapping strategy for switch-orientated processes in real-time performance. The shape of each contour may also be manipulated and applied directly to control amplitude or wave-shaping processes.

Tonal Pitch Space vs. Physical Space

Given the visuospatial associations between sequential pitch structure and perceived melodic motion in physical (geometric) space, an obvious and arguably useful association is to create a relationship between contour data and the azimuth and distance parameters of the [ambipanner~] ambisonic abstraction, creating a useful compositional narrative between pitch-class space and the angle (theta) and distance (radius) of each spatial gesture in two-dimensional space.
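One plausible form of that association, with all scaling constants treated as illustrative assumptions, wraps the circular pitch-class space once around the listener:

    import math

    def contour_to_polar(pitch_value, max_radius=1.0):
        """Map an interpolated pitch-class value (0-11, possibly fractional)
        onto a 2D spatial gesture. The radius curve is a hypothetical choice."""
        theta = (pitch_value % 12.0) / 12.0 * 2.0 * math.pi        # azimuth in radians
        radius = max_radius * (0.5 + (pitch_value % 12.0) / 24.0)  # assumed distance scaling
        return theta, radius

The returned theta and radius values would then drive the azimuth and distance inlets of [ambipanner~].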

Fig 9. Assigning Pitch Contour to Spatial Gesture in 2D Space

An interpolation of contour data may be triggered each time a tone is played, in order to create a more interactive algorithmic process. This association between pitch and spatial configurations may increase perceived structure, particularly in relation to Tonal Pitch Space abstractions18, thus creating the potential for greater retention of more dense polyphonic music structures in memory. The compositional narrative between cue configurations may be exploited further, particularly in relation to motion and sequential pitch structure. Given that the pitch tracking system only produces data for tones executed in sequence, one may establish a gestural association between figurative melodic motion and dynamic spatialisation; that is to say, simultaneous harmonic structures performed in real-time may assume an inertial state.

18 (cf. Lerdahl, 2001)

Discussion

The live performance system maintains the ability to establish a series of clear and interactive gestural relationships between the musical figuration constructed by the performer and the synthesis processes constructed in Pure Data. In relation to the application of contours, one is able to encourage integrative and segregative effects by applying global (interpolated pitch data for all strings) and solo (interpolated pitch data per string) contours amongst spatial and timbral processes, thus encouraging grouping and stream segregation based on similarities and differences in pitch, timbre and spatial location cues per string. By utilising independent melodic contours per string, musical structures assume contrasting trajectories, which arguably produce more interesting musical results. However, by encouraging stream segregation, one may argue that such an approach negates musical structure rather than establishing it. The application of a global contour to musical events, by contrast, produces more integrated results which tend to establish clearer structure between concurrent musical cues, such as incoming pitch and the resultant spatial configurations. Further research in relation to memory, contour, and the time domain may produce more interesting and comprehensive interactive algorithms to be utilised within the presented live performance system. Whether such an approach to musical gesture allows the (audience) listener to establish the same musical structure based on melodic contour is a matter for further work. The system, whilst in its infancy, maintains a strong theoretical underpinning, and through practice one will be able to establish how to improve upon the current implementation in Pure Data. Given the primary emphasis on the extraction of sequential pitch structure, the accuracy of the pitch tracking abstraction could be improved by including a comparative cross-talk function in relation to adjacent strings, an approach similar to that proposed by Puckette19. Further work may also include the dynamic development of macro-structural expressions using the [expr] object, in order to provide a more comprehensive machine listening approach whereby the system can identify a specified structure at any point in a sequential pitch structure, rather than at a limited point within an analysed sequence of tones.

19 (cf. SMECK, 2007)

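A simplified version of such a cross-talk check, offered as a sketch under assumed per-string pitch and amplitude detections rather than a description of Puckette's SMECK implementation, might compare each detection against its neighbouring strings:

    def gate_cross_talk(detections, ratio=4.0):
        """detections: list of (midi_pitch, amplitude) tuples, index = string number.
        Discard a detection when an adjacent string reports (near) the same pitch
        at much higher amplitude - a plausible, simplified cross-talk test."""
        kept = []
        for i, (pitch, amp) in enumerate(detections):
            neighbours = [detections[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(detections)]
            leaked = any(abs(pitch - n_pitch) < 0.5 and n_amp > amp * ratio
                         for n_pitch, n_amp in neighbours)
            kept.append(None if leaked else (pitch, amp))
        return kept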

Conclusion

This paper has detailed the implementation of an interactive live performance system to be utilised with the polyphonic or multi-channel audio output of an electric guitar. The system includes a series of interactive algorithms based on theoretical research in the fields of music, technology, and psychology, in order to address a series of perceptual concerns relating to the gestural relationships between the performer, instrument, and resultant sound structure. It is the hope of the author that the time invested in developing a strong theoretical foundation for the performance system will ultimately lead to a more convincing electronic music performance through creative practice. The extraction of (expressive) performance data provides hierarchical melodic information, which may be manipulated and mapped to a series of synthesis processes in Pure Data in order to (re)establish gestural relationships between the performer and the resultant musical structure. Future work will include the development of a cognitive model based on the Tonal Pitch Space abstractions presented by Fred Lerdahl20, not only to further develop the compositional narrative between musical cue configurations, but also to interrogate audience expectation and perception of sequential musical structure in highly dense polyphonic music.

20 (cf. Lerdahl, 2001)

Acknowledgements

I would like to thank Mike Moser-Booth and the users of the Pure Data online community for their help in the development of this live performance system. I would also like to extend my thanks to my wife, Laura Graham; to the Bauhaus-Universität, Weimar; to my supervisors, Brian Bridges and Professor Frank Lyons; and to my colleagues Brendan McCloskey and John King at the University of Ulster, Northern Ireland.

References

[1] S. Emmerson: Living Electronic Music, Ashgate Publishing (Hampshire) 2007, p. 114.
[2] O. Karamanlis: How Something Is Born, Lives and Dies: A Composer's Approach for Thematic Evolution in Electroacoustic Music, CEC (Montreal) 2010.
[3] T. Winkler: Composing Interactive Music: Techniques and Ideas Using Max, MIT Press (Mass.) 2001.
[4] S. Emmerson: Living Electronic Music, Ashgate Publishing (Hampshire) 2007, p. 114.
[5] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, p. xvi.
[6] R. Erickson: Sound Structure in Music, Berkeley (Cal.) 1975, p. 108.
[7] T. Wishart: On Sonic Art, Harwood (Amsterdam) 1996, p. 22.
[8] A.S. Bregman: Auditory Scene Analysis: The Perceptual Organisation of Sound, MIT Press (Mass.) 1994, p. 489.
[9] A.S. Bregman: Auditory Scene Analysis: The Perceptual Organisation of Sound, MIT Press (Mass.) 1994, p. 99.
[10] C. Fales: Short-Circuiting Perceptual Systems: Timbre in Ambient and Techno Music. In P.D. Greene & T. Porcello (Eds.), Wired for Sound: Engineering and Technology in Sonic Cultures, Wesleyan (Connecticut) 2005, pp. 156-180.
[11] C. Dobrian: Strategies for Continuous Pitch and Amplitude Tracking in Realtime Interactive Improvisation Software, UCI (Cal.) 2004.
[12] W. Apel (Ed.): Harvard Dictionary of Music, Second Edition, Revised and Enlarged, Harvard (Mass.) 1969, p. 687.
[13] M. Grob Website (2008): http://matthiasgrob.com/pParad/WhyPolyDistortion.html
[14] J. Rothstein: MIDI: A Comprehensive Introduction, A-R Editions (Wisconsin) 1995, p. 11.
[15] J. Berg Website (2004): http://www.unfretted.com
[16] R. Graham Website (2011): http://rickygraham.com/pages/research
[17] R. Graham: The Effects of Polyphonic Technology on Contemporary Electric Guitar Performance, Proceedings of the SMI Postgraduate Conference, DIT (Dublin) 2010, pp. 6-8.
[18] M. Grob Website (2008): http://matthiasgrob.com/pParad/WhyPolyDistortion.html
[19] A.S. Bregman: Auditory Scene Analysis: The Perceptual Organisation of Sound, MIT Press (Mass.) 1994, p. 522.
[20] T.D. Rossing: Springer Handbook of Acoustics, Springer Science+Business Media (New York) 2007, p. 214.
[21] D.B. King & M. Wertheimer: Max Wertheimer and Gestalt Theory, Transaction Publishers (New Jersey) 2005, p. 80.
[22] A.S. Bregman: Auditory Scene Analysis: Hearing in Complex Environments. In S. McAdams & E. Bigand (Eds.), Thinking in Sound: The Cognitive Psychology of Human Audition, Clarendon Press (Oxford) 1993, pp. 10-36.
[23] D. Deutsch: Effect of Repetition of Standard and Comparison Tones on Recognition Memory for Pitch, Journal of Experimental Psychology, Vol. 93, No. 1, pp. 158-159, April 1972.
[24] J. Tenney: John Cage and the Theory of Harmony, http://www.plainsound.org/JTwork.html 1983, p. 15.
[25] M.G. Boltz et al.: Audiovisual Interactions: The Impact of Visual Information on Music Perception and Memory, Music Perception: An Interdisciplinary Journal, Vol. 27, No. 1, pp. 43-59, September 2009.
[26] F. Lerdahl: Tonal Pitch Space, Oxford University Press (New York) 2001, pp. 167-168.
[27] D. Deutsch: Pitch Circularity from Tones Comprising Full Harmonic Series, Journal of the Acoustical Society of America, Vol. 124, No. 1, pp. 589-597, June 2008.
[28] C. Neuhaus et al.: Processing of Rhythmic and Melodic Gestalts - An ERP Study, Music Perception, Vol. 24, No. 2, pp. 209-213, December 2006.
[29] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, pp. 146-149.
[30] D.B. King & M. Wertheimer: Max Wertheimer and Gestalt Theory, Transaction Publishers (New Jersey) 2005.
[31] A.S. Bregman: Auditory Scene Analysis: The Perceptual Organisation of Sound, MIT Press (Mass.) 1994, pp. 155, 346.
[32] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, pp. 42-43.
[33] F. Lerdahl: Tonal Pitch Space, Oxford University Press (New York) 2001, pp. 170-171.
[34] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, p. 152.
[35] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, pp. 149-153.
[36] R.G. Crowder: Auditory Memory. In S. McAdams & E. Bigand (Eds.), Thinking in Sound: The Cognitive Psychology of Human Audition, Clarendon Press (Oxford) 1993, p. 132.
[37] R.G. Crowder: Auditory Memory. In S. McAdams & E. Bigand (Eds.), Thinking in Sound: The Cognitive Psychology of Human Audition, Clarendon Press (Oxford) 1993, pp. 128-132.
[38] C. Dobrian: Strategies for Continuous Pitch and Amplitude Tracking in Realtime Interactive Improvisation Software, UCI (Cal.) 2004.
[39] M. Puckette Website (2007): http://crca.ucsd.edu/~msp/
[40] Soundhack Website (2010): http://soundhack.henfast.com/freeware/the-boneyard/
[41] University of York Website (2010): http://www.york.ac.uk/music/mrc/software/objects/
[42] E. Bates: The Composition and Performance of Spatial Music, Trinity College (Dublin) 2009, pp. 209-211.
[43] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, p. 140.
[44] E.R. Miranda & M.M. Wanderley: New Digital Musical Instruments: Control and Interaction Beyond the Keyboard, A-R Editions (Wisconsin) 2006.
[45] F. Iazzetta: Meaning in Musical Gesture. In M.M. Wanderley & M. Battier (Eds.), Trends in Gestural Control of Music, IRCAM (Paris) 2000, pp. 265-266.
[46] B. Snyder: Music and Memory, MIT Press (Mass.) 2000, pp. 154-155.
