[It’s my pleasure to present Leo Hofmann as the tenth and final guest writer in this series. Leo is a Swiss artist working in the fields of performance and media art, often combining the two to create a distinctive body of work. I first met Leo at the Hochschule der Künste Bern where we both studied, and where he is just finishing his Masters Degree in Contemporary Arts Practice. - Tobias]
After reading the previous posts about different strategies and goals in live electronic music on Tobias’s blog, I’d like to present two of my performances, which go slightly beyond the realm of live electronic music in the strictest sense. For two years, I’ve been experimenting with performances which work as interactive radio dramas. In these performances I use gestures as an element of my compositions. An important inspiration was the wealth of (body) sensor-based works in experimental and contemporary music, which deal with the question of how movements/gestures and music are related and how to set up that relationship. My experiments resulted in two performances („An die verehrte Körperschaft“, „A. wie Albertine“), in which words, sounds and gestures are combined through the use of a sensor I have fixed on my forearm.
Interface – concept
Although the pieces “An die verehrte Körperschaft” and “A. wie Albertine” are played live with a sensor interface on my forearm, I don’t create or interpret any detailed aspects of the music itself live. Instead, the interface manages the synchronization between the gestures and movements which I perform and the musical tracks and samples. My use of the gesture interface does not aim to exploit its full data bandwidth. There are no fixed combinations of movement parameters and musical parameters; the position in a linear timeline determines which data control the events and parameters in the program. This time-based implementation follows a distinctly different conception than the majority of live electronic pieces, which utilize body interfaces as instruments with precise control over certain musical parameters. My approach does not attempt to undo the digital separation of movement and sound, which would only reproduce the same obsolete instrumental concept. Instead, the digital division of input/output is used to create a new combination of movement and sound for each moment, giving each of them full autonomy. Each one of these combinations is a creative decision.
Interface – construction
The interface is based on an Arduino Fio board, to which I added four sensors, a battery, and an XBee wireless module.
pressure sensor (worn on the thumb)
infrared distance sensor
The small, wireless construction allows the performer to move freely and doesn’t impede the gestures while performing. A small vibration motor in the interface provides haptic feedback.
<|> : distance sensor (0-127, 7-bit)
Ox : globe, x axis position (0/1, boolean)
Oy : globe, y axis position (0/1, boolean)
X : tilt, x axis (0-127, 7-bit)
Y : tilt, y axis (0-127, 7-bit)
Z : tilt, z axis (0-127, 7-bit)
Plug : pressure sensor (0-127, 7-bit)
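To make the time-based mapping idea described above a little more concrete, here is a minimal sketch in Python (not the actual performance patch): the position in a linear timeline decides which sensor value is allowed to control which parameter. The cue times and parameter names are invented for illustration; only the 7-bit value range is taken from the list above.

```python
# Minimal sketch of timeline-based sensor routing (hypothetical cues/parameters).
# The interface itself is unchanged; only the position in the piece decides
# what a given sensor value is allowed to control.

CUES = [
    # (start_seconds, end_seconds, sensor, parameter)
    (0.0,   45.0, "distance", "sample_volume"),     # <|> fades a sample in/out
    (45.0,  90.0, "tilt_x",   "playback_position"), # X scrubs through a recording
    (90.0, 140.0, "pressure", "filter_cutoff"),     # thumb pressure opens a filter
]

def route(elapsed: float, sensor: str, value: int):
    """Map a 7-bit sensor value (0-127) to a parameter, depending on the timeline."""
    for start, end, cue_sensor, parameter in CUES:
        if start <= elapsed < end and sensor == cue_sensor:
            return parameter, value / 127.0   # normalise to 0.0-1.0
    return None  # outside its cue, the sensor controls nothing

# Example: 50 seconds into the piece, tilt_x scrubs the recording,
# while the pressure sensor is inactive in this section.
print(route(50.0, "tilt_x", 64))
print(route(50.0, "pressure", 64))
```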
A. wie Albertine
In the performance “A. wie Albertine” (“A., as in Albertine”), the thought world of French writer Marcel Proust imbues a collage of words, gestures, and sounds, which a performer presents in a staged situation. The text is made up of restructured excerpts from the seven volumes of In Search of Lost Time that deal with the character Albertine. The words, in parts spoken live and in parts recorded, are accompanied by countless sound fragments and musical miniatures that make up a network of associations and fragments. The collage aims to adapt the individual style of Proust’s writing, such as the notoriously long subordinate clauses or the highly analytical tone.
The collage also includes the choreographed gestures of the performer. With the help of a sensor attached to his wrist, the performer controls the audio through movements. Words, gestures, and sounds form a dense transmedial symbiosis in which Albertine, the vanished one, is present in her conspicuous absence.
The text for the piece consists of excerpts from two volumes of Marcel Proust’s “In Search of Lost Time”: “Sodom and Gomorrah” (volume four) and “The Captive” (volume five). While listening to the entire work as an audio-book (which took me a little more than a year), I began writing down passages that appealed to me. With time, I realized that the excerpts all dealt with the same topic: the piercing (self-) analysis of the narrator with regard to his lover Albertine. With an exacting eye, all the nuances and reciprocal actions of his relationship are dissected and analyzed. The behavioral patterns of the narrator and Albertine are set up like functions in a game theory experiment. As is often the case in Proust, his intensified sensibility penetrates deep psychological structures, only to find negative feelings such as egoism, idleness or cowardice at the root of complex emotional states or actions. Since Proust retains his immaculate language skills at every moment and the entire work reflects his composure, the tone always ranges from neutral to positively interested.
The fundamental theme that Proust develops in the sentences I compiled is that of projection. Again and again, he illustrates the lack of any connection between the object of love and love itself. On the basis of this assumption, he explains various patterns of behavior or actions whose allegedly irrational character is made plausible in light of his system of projection. This exacting development of a philosophical theorem (for which Proust drew inspiration from the philosopher Henri Bergson) in the monumentally constructed, highly artificial arrangement of the Recherche is set up as an experiment in which all the characters function as exemplary elements of a depiction of the world. Albertine’s role in this is that of the loved person (Proust’s chauffeur Alfred Agostinelli, with whom he had a relationship, is often considered to be a model for her character).
Any reader of Proust continually comes across endless passages with exhaustively detailed descriptions: life in Paris salons, descriptions of nature, a certain mood, a piece of clothing; anything can become the object of an expansive descriptive potency. Even though he always remains focused on the artistic or psychological moment, certain passages can become long-winded, when, for example, they aim to illustrate traits in a character that are already sufficiently clear. For this reason, the passages in which the philosophical foundation of the book emerge explicitly stand out all the more clearly. The passages I chose contain numerous such culmination points of the Proustian world-view. In this dense concentration, I emphasized one particular sentence in the performance: it is the only complete sentence that is spoken live, not as a recording, and without being accompanied by live-electronics.
In my selection, I reorganized the sentences from five paragraphs which totaled about half a page in length. In some cases, I made significant changes to the text: removing parts of sentences, changing personal forms, or adapting tenses. To accent the universality of the principle of projection, I made Albertine’s name “anonymous” and replaced it with an “A.” in the three cases where it appears: any person can be a substitute for the object of projection, as in the case of Alfred for the historical and Albertine for the literary Marcel.
Voice improvisation & arrangement
On the basis of this new text arrangement, I recorded the voice material for the sound collage. To leave enough room for notes and compositional structures, I kept ample space between the lines. With these papers, I went to the studio to record. The notes were a kind of musical score: they dictated the expression and dynamics of the voice and made preliminary formal choices with regard to the text. These instructions were not planned in advance, but were developed in a back-and-forth process with the voice improvisation.
I edited a preliminary version from the voice recordings. While the voice material remained unchanged throughout the following process, I made a collage by combining it with a great number of samples. These samples, which accompany the voice as a counterpoint, come from various sources and stages of treatment and have only their very brief length in common. The deciding factors for their integration into the composition were certain patterns such as rhythmic interaction or similarities in density, pulse behavior, or frequency spectrum. By closely linking voice and sound material, I tried to merge the two elements seamlessly. Speech rhythms and flow were to be retained and complemented by collage sounds and thus molded into a new whole without sacrificing the text’s comprehensibility. The most important creative tool in this process is the use of envelopes (changes in volume which are graphically represented as curves). If I set very short envelopes on the sound of a vacuum cleaner being turned off – so that only an impulse of 300 milliseconds remains – the sound’s origin can no longer be traced. Nonetheless, many components of the complex sound (the pitch sinks, the motor decelerates, the airflow in the hose dies down) remain and retain the materiality of their source. I developed this method of sound collage before the Albertine-piece and it seemed appropriate for this piece as well, since it weakens the strict boundaries between abstract and concrete sounds and explores tensions similar to the work with gestures: their artificial character is constantly on a tipping point, since they contain traces of conventional gestures, sign language or material suggestion.
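As an illustration of the envelope technique (not the actual editing session), the sketch below cuts a 300-millisecond impulse out of a longer sound by applying a steep amplitude envelope, so that the source is no longer recognisable while its materiality remains. The source array and decay shape are assumptions made for the example.

```python
import numpy as np

SR = 44100  # sample rate in Hz

# Stand-in for a recorded sound (e.g. a vacuum cleaner being switched off);
# here just decaying noise so the sketch runs on its own.
rng = np.random.default_rng(0)
sample = rng.standard_normal(SR * 2) * np.linspace(1.0, 0.2, SR * 2)

def impulse(sound: np.ndarray, length_ms: float = 300.0) -> np.ndarray:
    """Keep only the first length_ms of the sound, shaped by a fast decay,
    so only an impulse remains of the original recording."""
    n = int(SR * length_ms / 1000.0)
    envelope = np.exp(-np.linspace(0.0, 6.0, n))   # steep exponential decay
    return sound[:n] * envelope

short = impulse(sample)            # 300 ms fragment (13230 samples at 44.1 kHz)
print(len(short), short.dtype)
```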
Parallel to the development of the mise-en-scène (gestures, facial expressions, movements, set), I adapted the sound collage in the editing program and compiled the approximately 80 samples that make up the sound material in the piece and which are programmed to be played or modulated by the gesture-interface.
Development of gestures and programming of patches
I find it difficult to describe how I work out the gestures for my performance, since they emerge intuitively. I’d like to explain this multifaceted process by describing the situation in which they were developed.
I isolate myself in a room and make sure that nobody can intrude unexpectedly, since I often contort myself in ways that I could find embarrassing out of context. Sometimes I work with a mirror in the room. I play a short section of the sound collage from my computer, oftentimes in a continuous loop. Through a process of improvisation, preliminary gestures are formed. I always try to take up an aspect of the musicality. In this process, very individual associations and movement patterns come about that my body derives from the music. Some terms that could put these qualities into words: consistency, density, grip, materiality, extension, inertia, weight, texture, vitality, pressure
This intuitive search generates the most artificial gestures. I can immediately tell when a gesture or series of movements feels right. When this method fails, I focus on the text and search for elements that inspire me to movements and gestures (be they pantomimic, symbolic, scenic, or thematic). Oftentimes, sections in which I can’t find a clear solution lead to later areas for improvisation. In later analysis, I have been able to distinguish different “gesture conceptions” that I utilized. The borders remain in flux. These systematic categories could be used in a dedicated strategy to develop new gesture choreographies in the future:
Gesture as symbol: 1. artificial gesture (symbol), 2. gestural pantomime (icon), 3. material suggestion (index)
Gesture as plot: 4. theatrical gesture (“body language”, nonverbal communication), 5. functional movement
Programming is done at the same time as the development of the gestures. Once I’ve found a connection from gestures to the sound collage, I work out how to integrate the musical cues into the gestures while listening to the respective section in a loop. Since movements often have to be adapted to the requirements of the sensor integration, the choreography can go through fundamental changes at this stage. The development process can also go in the other direction, if an idea at the programming stage leads, through the use of the interface, to a certain movement or gesture. Usually, a back-and-forth process takes place.
[This was the tenth and final post in an almost weekly series of guest posts on the topic of “live performance with computer technology” by a range of exciting music makers and thinkers. Please post any comments or questions for Leo in the comment section below and share his essay with anyone who you think might want to read it. Make sure to also read the previous posts by Alex McLean (on live coding), Marcel Saegesser (on composing in a digitized world), Andi Otto (on his work with a sensor-equipped cello bow), Jeff Swearengin (on improvising [with] the setup), Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks).
This is the end of this current series - thank you for reading and sharing the articles over the past three months, and a big Thank You to all the writers for their contributions. I’m already thinking about a second series, so check back often or subscribe via RSS feed to learn about new posts. I’d appreciate any feedback, comments and suggestions you might have, please send them to tobias at tobiasreber dot com or just post them in the comment section below.]
[Please welcome Alex McLean as this week’s guest writer. Alex is a musician from Yorkshire, UK who specializes in live coding of music performances. He has written a PhD on “Artist Programmers and Programming Languages for the Arts” and is a co-founder of TOPLAP (Temporary Organisation for the Promotion of Live Algorithm Programming). Alex is @yaxu on Twitter and his website is at http://yaxu.org. - Tobias]
Music is an aspect of Human life which touches almost everyone, present within all hearing cultures. The field of music psychology is an exciting place, but music is still little understood. One recent, straightforward way of defining music is in terms of recordings; music is something you can buy, and then listen to. Digital music-making technology has grown up in this context, where tracker, sequencer, and even live production software has maintained a dichotomy between producer and consumer. Music is something that is made, perhaps alone in the stereotypical bedroom studio, for someone else to listen to later.
If we think of music not in terms of product, or even sound, but just in terms of Human activity, then music technology begins to take a different shape. Christopher Small reframed music activity as Musicking, an expansive definition which takes in not only the playing of instruments, but also ticket purchases, the gathering in a concert hall atrium, and all the other music rituals. Against this background, the iTunes music store is not a way of acquiring music, but is actually music itself. The wider experience of music reduced to a tabular interface centred around consumer choice. The same applies to Ableton Live, Cubase, Cakewalk Pro, and Buzz tracker; if the production and distribution of these software systems is musical activity, then their programmers are the master musicians of our times. Where they are anonymous, we could call them folk musicians, making music which is not for people but of the people.
Generative music is perhaps the purest of music technologies, and is based on the idea of autonomous music. Wind-chimes are the classic tangible example, played by the wind, beyond the realms of Human activity. But if music is Human activity, then the music of wind-chimes is not created by the wind, but by the windchime maker, and by the passerby taking the time and imagination to find music in the experience of listening to it.
The ideology of autonomous generative music has leaked into music technology in surprising ways, and left us in some strange situations. As deadmau5 points out, there is a huge disparity between pressing play on a computer and the frenzied, engaged reaction of a large crowd. This situation seems unfair, not to the audience, who are having a great time, but to the performers, who are left looking for self-justification (or in the case of deadmau5, honourable self-deprecation). Perhaps there is a better way of approaching technology.
The principle of technological determinism might suggest that the music technology we currently have is inevitable, but that isn’t the case. Computers are language machines, for writing text about text, and yet through great, painful and difficult work we use them to construct simulations resembling something like the world outside. If software *did* determine music technology, then perhaps it would look more like the fringe activity of live coding. Live coders write and manipulate text, interpreted live into sound, which is experienced as music. This is a form of improvised algorithmic music, where a piece of music is described using higher-order, abstract constructions, while that description is interpreted live by a computer. In this way live coders have blended music and software-making, both often considered as end-products, into a single activity.
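As a toy illustration of that idea (not any particular live coding language), the sketch below re-reads a pattern description on every cycle, so rewriting the description while the loop runs immediately changes what is “played”. The note values and cycle length are arbitrary.

```python
import time

# A "pattern" is a higher-order description: a function that, given the
# cycle number, returns the events for that cycle. In a live coding session
# this definition would be rewritten while the loop keeps running.
def pattern(cycle: int):
    notes = [60, 63, 67, 70]                  # a minor-ish arpeggio (MIDI note numbers)
    return [notes[(cycle + i) % len(notes)] for i in range(4)]

def play(note: int):
    print(f"note {note}")                     # stand-in for actually sending sound

def run(cycles: int = 4, cycle_length: float = 2.0):
    for cycle in range(cycles):
        events = pattern(cycle)               # re-interpret the description live
        step = cycle_length / len(events)
        for note in events:
            play(note)
            time.sleep(step)

if __name__ == "__main__":
    run()
```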
Live coding is not really a leap up, but rather a return to roots, establishing new foundations of human-computer interaction, using programming languages in end-user interaction. You can find example videos, and freely-downloadable live programming languages, through the TOPLAP website.
Live coding is a family of approaches to music-making, used within a wide range of musical genres. It is also applied to live video animation such as VJing (for example using Fluxus or LivecodeLab). Classically, a live coding performance will involve one or more improvising musicians with a live audience. An audience is not necessary, though; we might live code as part of a private drumming circle, or even alone, either as a solo exploration or as part of a compositional process towards a fixed recording.
If an audience is involved, they are generally able to see the live coders’ screens, in accordance with the TOPLAP manifesto. This is often achieved by projecting the screens. The audience are not expected to read or understand the code in order to enjoy the music however, any more than we need to understand guitar-playing technique to enjoy guitar music. Showing screens simply opens up the Human activity of live coding, so that people feel included and have a way into the music.
While there is on-screen movement, it is also true that live coding is generally not the most active of performances. The difference between the audience and performers can be stark; everyone is engaged, but on two very different levels:
The disparity between the livecoder, fixed on their screen, and a dancing audience before them is strange enough. Everyone in a state of creative flow, and everyone somehow engaged in the dance as a group. The emerging Algorave movement emphasises this activity of the audience in algorithmic music, including (but not only) through live coding performance. Just as the original rave movement was driven by anonymous producers and DJs, the contemporary algorave encourages creative listening. Not the still, deep listening of Pauline Oliveros, but listening with the whole body. Establishing a ground in complex, polyrhythmic output of musical algorithms is easy if you dance to it.
So this has been a bit of a ramble, but hopefully goes some way towards motivating embedding the expressive power of computer language in an active music experience. It’s been a great pleasure sorting through some of these thoughts, but there is a lot more sorting through to do… This is of course the function of blogging, and I hope you’ll contribute some of your thoughts to the mix below!
Thanks for reading, and thanks also to Tobias for having me as a guest.
An electronic computer isn’t strictly necessary in live coding performances; for example, Nick Collins and Kate Sicchio have both used Human dancers to take the role of a computer, as a form of live algorithmic choreography.
[This was the ninth in an almost weekly series of guest posts on the topic of “live performance with computer technology” by a range of exciting music makers and thinkers. Please post any comments or questions for Alex in the comment section below and share the essay with anyone who you think might want to read this. Make sure to also read the previous guest posts by Marcel Saegesser (on composing in a digitized world), Andi Otto (on his work with a sensor-equipped cello bow), Jeff Swearengin (on improvising [with] the setup), Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks). Check back on April 29th for the next installment in the series.]
Picture by Markus Reuter
I arrived in Denver, Colorado yesterday for a week of work on Markus Reuter’s “Todmorden 513” orchestral project. We’re staying at conductor and orchestrator Thomas Blomster’s place and are currently working out all communication and “to do” items for the coming week, which will include the following events: Rehearsals with the Colorado Chamber Orchestra on Monday, Tuesday and Wednesday evenings, which are open to the public (see below). The performance itself is on Thursday, with a CD/DVD recording session on Saturday morning. Friday evening we’ll have an exclusive dinner with guests who purchased this option from our successful PledgeMusic campaign to fund the project (also see below), and Sunday we’ll see the Youth Orchestra of the Rockies perform Thomas’ arrangement of Markus’ piece “Mariola”.
Concert and rehearsal info with directions courtesy of Thomas Blomster:
15655 Brookstone Dr.
Parker CO 80134
King Center Concert Hall (Auraria campus)
PledgeMusic campaign - still running
The “Todmorden 513 World Premiere Recording” PledgeMusic campaign will be running until June 22. The amount we raised only covers the recording costs - the whole project costs a lot more money, so we appreciate any support you can give.
If you’re interested in staying up to date about the project as the week progresses, follow Markus (Facebook, Twitter), Thomas (Facebook) and me (Facebook, Twitter). There’s also a Facebook event page for the concert. If you write about the project on social media please use the hashtag #tm513.
„Material creation from the word is an idea central to magic in all cultures; it is precisely what magic spells perform. Magic therefore is, at its core, a technology, serving the rational end of achieving an effect, and being judged by its efficacy. […] The technical principle of magic, controlling matter through manipulation of symbols, is the technical principle of computer software as well.“
Florian Cramer - Words made flesh - Code, Culture, Imagination, p. 42 (pdf)
[Please welcome Marcel Saegesser to the series. Marcel is a composer and musician from Bern, Switzerland who has steadily built a body of performative and installation work over the past few years, often blurring the boundaries between the two. Marcel’s work is being performed internationally and his piece “The last place (left)” was released on Tonus Music Records in 2011. His work and activities are documented at www.marcelsaegesser.com. - Tobias]
Being of the “postmodern generation”
I’m one of those composers / sound artists who were born into a mediatized world that was being completely digitized within a few years. The digital era has brought forth new methods of composing and creating music, new definitions of the traditional “roles” and professions, new forms of education to enter professional digital music creation and, notably, new aesthetic ideals. From a technical point of view, everything has become possible, and the postmodern way of thinking has allowed nearly everything to be done. Due to this paradigm change, it has become obvious that every single step in the process from composing to performing has to be re-thought from scratch. Every single decision is either a quotation or a negation of the past.
In this essay, I will offer some personal observations and points of critique on performance in the field of electronic music. Then I’m going to chronologically describe the development of my method of composing and performing, in particular with respect to the importance of “liveness” compared to that of “performativity” in my own work. (Note: it is important not to equate liveness with performativity. By “liveness” I mean the immediacy and authenticity caused by a given uncertainty and the fact that some parameters always remain uncontrollable when performing live; that’s why each representation is unique. “Performativity”, for me, means the visual and theatrical (or even catchpenny) content in the act of performing digital music on stage, that is to say the size and visibility of hand or body movements while handling electronic controllers and interfaces.)
Performativity and musicality as substitutes?
Many performances in the field of electronic music bore me; guys fiddling around with immense technical equipment on stage, operating some control dials in an irreproducible way, generating weird and noisy sound clusters, which to me seem arbitrary and not necessarily musical. Such artists came up with haptic interfaces and large-dimensioned controllers for the sake of comprehensibility and performativity – often at the expense of musical quality and musical form. Others came up with great music without any “performativity”, that is to say without any visual-theatrical content. A few artists succeeded and still succeed in creating great music and great performances at once.
Yet another aspect of the same thought on the “evolution” of live performance in the field of electronic music: I observe that many artists place great interest in inventing and constructing amazing experimental interfaces while investing less in the subsequent process of learning to handle these interfaces and in the development of good quality music. Instead, a large amount of the existing digital music is no more than demonstrations of (new) technologies and experimental interfaces; many works remain studies. On this point I absolutely agree with Markus Reuter (cf. post of March 4th, 2013). Actually, it is not the skill of “playing the computer” that I miss (since I do not believe that the computer is an “instrument” that can be played as traditional instruments can) – I’m rather criticizing the lack of musicality within the field of digital music.
Furthermore, at a certain point I discovered that, contrary to Jeff Swearengin’s observation (cf. post of March 18th, 2013), the sound palette of digital media is quite limited (later on in this essay I will claim that the library of sound samples today is endless; nonetheless I insist that the resulting sound is often no more varied than that of classical instruments, or even less so).
All these observations and experiences stimulated me to search for further sounds, means and methods. On the one hand, I became interested in the discussion of the performativity aspect of non-performative arts and I follow the developments in this field with interest; on the other hand, I do not seek to pursue performativity in my work. What I do seek to maintain is liveness on stage, since I’m still convinced that liveness endows electronic music performances with additional value.
Combining studio and live
With digitalization, the traditional sequence of composing, producing and performing music has become obsolete. As Adrian Benavides reports (cf. post of February 18th, 2013), music creation, the studio environment and live performance have moved closer to one another. In the early stages of my personal musical development, I came up with the idea of bringing techniques and methods out of the studio environment onto the stage. I realized that the simplest studio strategies used by sound engineers as well as sampling artists, namely the use of prerecorded sound material which is cut into small pieces (“samples”), could become interesting when applied on stage, in real time. Instead of using such samples, I started working with a kind of self-programmed real-time sampler based on real human instrumentalists (mostly professional classical musicians) who act on stage during the performance. By producing mainly long sustained notes, they provide organic sound material that is edited in real time by a software tool (note that I favor the term “tool” over “instrument” in order to accent its difference from traditional music instruments). This tool, similar to a “noise gate”, is basically a switch which is able to open and close the instruments’ amplification. As the opening and closing process is controlled by repetitive rhythmic patterns (with a smooth algorithmic variation), strong rhythmic textures arise. This computer-generated rhythmic microstructure does not become audible until the instrumentalists start to provide sound; this is the basic concept of my “real-time sampling tool”. Works originating from this tool usually consist of a sequence of several such “textures”. Afterwards, I decide the order of these sequences, that is to say the macrostructure or the musical form.
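A minimal sketch of that gate idea (in Python rather than the actual tool): a repeating open/close pattern, with a small random variation of each step, is applied to a sustained input, so that a rhythmic microstructure only appears once the musician provides sound. The pattern, step length and input buffer are assumptions made for the example.

```python
import numpy as np

SR = 44100
rng = np.random.default_rng(1)

# Stand-in for a long sustained note from a live instrumentalist.
t = np.linspace(0, 2.0, SR * 2, endpoint=False)
sustained = 0.5 * np.sin(2 * np.pi * 220 * t)

def rhythmic_gate(signal: np.ndarray, pattern=(1, 0, 1, 1, 0, 1, 0, 0),
                  step_ms: float = 125.0, variation: float = 0.1) -> np.ndarray:
    """Open and close the amplification following a repeating pattern,
    with a slight algorithmic variation of each step's level."""
    out = np.zeros_like(signal)
    step = int(SR * step_ms / 1000.0)
    for i in range(0, len(signal), step):
        gate = pattern[(i // step) % len(pattern)]
        level = gate * (1.0 - variation * rng.random())  # smooth variation
        out[i:i + step] = signal[i:i + step] * level
    return out

texture = rhythmic_gate(sustained)   # silent until the instrument actually sounds
```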
The real-time sampler or human sampler
The first version of this real-time sampling tool was born in 2009. It’s not an instrument and it has no name, though it has most probably become the essential tool in my work since then. It combines a contemporary notation tool, algorithmic modules (used for the rhythmic microstructure), a sound recording and playback engine, a sampler, a simple six-step sequencer and a digital mixing console with sound equalization and sound improvement tools. Both the software itself and my role when handling it have become complex and sometimes even overlap. When creating new work I often find myself simultaneously an improviser, musician, composer, producer and sound engineer. Later on, when performing on stage, I am mixing the volumes and improving the sound in three-dimensional space – which is engineering work rather than musical work. I’ve never really been interested in searching for experimental interfaces in order to render the software tool “performative” or “playable”; rather, I’ve always looked for the most musical way of controlling it. In the majority of cases, the most musical solution wasn’t at all “theatrical”. I controlled and still control it using the laptop keyboard and the cursor, sometimes extended with some very basic faders and knobs. As I’m not looking for an “instrument”, I don’t need my software to be interactive or to offer a “feedback loop” (1) as classical instruments do; I’m fully satisfied with a one-directional control over it.
The decision not to work with prerecorded samples might also be based on the observation that, through digitalization, the entirety of this world’s sounds has become available and playable as artistic footage in global digital media networks at any time (2), and a majority of music producers have started to use this footage as the artistic material on which they base their work. This footage is quite attractive in the sense of its unlimited availability and its variety. However, since I have long been looking for a way of bringing back liveness, or at least some last fragments of liveness, into live performances of electronic music – without being conservative – the use of the “human sampler” brought that liveness and immediacy onto the stage. Clearly, there are several more categorical differences between working with a sound archive and working with real musicians, but I will not treat them in this essay.
“südwärts” was the first work born out of the new real-time sampler in 2009. After a long research phase, which took place in collaboration with the baritone violinist Katryn Hasler, I fixed the final form as well as the final score for the violinist.
Short video documentary by Katharina Bhend on “südwärts”, 2009
The work “südwärts” is in all respects representative in terms of the clarity of the concept: the violinist generates the sound spectrum while the software tool creates rhythmic patterns out of it. Since the software switch acts without crossfading or otherwise “smoothing” the violin sound, the sonic result is quite rough and direct. What is more, the simple step sequencer becomes clearly audible as it repeats one and the same rhythmic pattern (with variations) for the whole duration of the piece. With “südwärts”, I defined the musical language I’ve been working with for the last five years.
What is “composing”?
When working with my real-time sampling tool, the whole process of “composing” itself has become complex in a way: it is no longer clear which part of this entire process is the real composing. Is it the initial experimentation and improvisational search for rough, unshaped sound textures? Is it the following process of structuring and fixing sequences out of these textures, the creation of musical form? Or is it the definition of the score for the instrumentalist who will later provide the sound material? Or is it, in the end, the preceding creation of the software tool itself, the “system” which already consists of several fixations of diverse musical parameters?
The last place (left)
Several pieces followed. I’d like to speak about “The last place (left)”, which is probably the most important work originating from this real-time sampling tool and the associated composing system. This work might even represent the climax of what is possible at all with my real-time sampler, in the sense that the simplistic concept and its consistent realization have led to a musical quality and maturity (compared to “südwärts”, for instance) which in a way cannot be developed much further. The very first idea of bringing studio methods onto the stage has reached its summit. “The last place (left)” is mainstream because of its use of ostinato patterns and sample loops that bring frozen fragments of the real world into digital music (3), and at the same time not mainstream because the majority of the sample loops and ostinatos used are not samples but are generated in real time on stage. Certainly, when this music is recorded on CD, it might be mistaken for sampling music, but it is not (this holds for almost all my works).
THE LAST PLACE (LEFT), 2010, performed by Hasler/Juvet/Saegesser
The living sound library
Being aware that my real-time sampler was almost fully developed and that I had learned how to handle all its parameters by using it for several years, I have kept it more or less as it is since 2010 and have instead taken a little change of direction. By reflecting again and again on the primary concept of my human sampler, I became aware of one conceptual aspect: a DJ, during the performance, is able to exchange his vinyl disks over and over; he acts and reacts spontaneously. A sampling artist, when performing live on stage with some commercial or experimental interface, is able to navigate huge sound archives instantly; he too chooses and mixes his sounds spontaneously, following his own or the audience’s mood. I dreamt of developing my software tool towards a real-time sampler whose sounding content can be controlled instantly and no longer follows a predefined form or score. Of course I could have changed the concept and started to work with archive samples – but it was too important for me to keep the human musicians. I wanted to have a “living sound pool”.
Cut low from above
With two recent works I’ve introduced a kind of computer-based “guide system”, which is based on a very simple concept: by pressing keys, I instantly announce to the musicians what notes to play, whether to play or not, and the dynamic range. My role has become totally comparable to that of a DJ or a sampling artist. For the latest work, “cut low from above” (the music for the theatre piece “Die Wand”, Theater am Gleis, Winterthur, Switzerland, February 2013), I wrote about 80 short fragments for the musicians: rough pitches, sounds and some more elaborated phrases. This was the living sound library from which I could choose and combine fragments over and over. In this manner, and by adding some purely electronic sounds, I created the resulting music on stage in parallel to the actor speaking. This concept of real-time control over a living sound library is highly fascinating and versatile; I can’t wait to develop it further.
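The guide system can be sketched very simply; the fragment names, dynamics, key assignments and display mechanism below are hypothetical stand-ins for the actual cueing setup, just to show the principle of a keypress announcing a fragment to the musicians.

```python
# Toy sketch of a "living sound library" guide system: each key announces
# a written fragment and a dynamic to the musicians (printed here; in a
# real setup this would go to a display or cue light in front of them).

FRAGMENTS = {
    "a": ("fragment 01: sustained low G", "pp"),
    "s": ("fragment 02: tremolo cluster", "mf"),
    "d": ("fragment 17: short ascending phrase", "f"),
    " ": ("tacet", "-"),                      # stop playing
}

def announce(key: str):
    fragment, dynamic = FRAGMENTS.get(key, ("unknown key - keep playing", "-"))
    print(f"-> {fragment}  [{dynamic}]")

if __name__ == "__main__":
    for key in ["a", "s", " ", "d"]:          # a simulated sequence of cues
        announce(key)
```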
Some concluding notes
At this point I choose to end my discussion, since I am neither able nor aiming to provide an all-embracing scientific paper on electronic music performance; please note that what I’m writing here can be considered neither “universal” nor the “common way” of doing digital music. What I’m describing here is my very personal approach. Instead of conclusions, I will add some last notes about my work.
First, I’d like to say that when I’m working in an installative context (e.g. sound installations in museums) I often try to add some performative elements, for instance a traditional musician creating sound that is algorithmically transformed into a sound installation (cf. my work “prairies”). On the other hand, when working in the field of music performance, I always attempt to include installative elements in order to render the traditional format of the concert more complex and interesting. The classical instrumentalist performing on stage in my works is no longer the 19th-century virtuoso; rather, he is present as a kind of machine-like sculptural human body that represents this very last fragment of the real world of which I spoke previously. I like this emerging ambivalence between the installative and the performative.
Secondly, I’d like to say that all the decisions I take throughout the whole process from composing to performing are based on the consistency of the conceptual idea and – equally important – on good sounding results and artistic quality.
And finally, as I feel that liveness is an important question for artists performing with computers on stage, I’m happy to see the existence of this forum, stimulating the discussion on this topic.
(1) Cf. for instance: Fischer-Lichte, Erika (2004). Ästhetik des Performativen. Frankfurt am Main: Suhrkamp.
[This was the eighth in a weekly series of guest posts on the topic of “live performance with computer technology” by a range of exciting music makers and thinkers. Please post any comments or questions for Marcel in the comment section below and share the essay with anyone who you think might want to read this. Make sure to also read the previous guest posts by Andi Otto (on his work with a sensor-equipped cello bow), Jeff Swearengin (on improvising [with] the setup), Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks). Come back on April 22nd for the next installment in the series.]
[Please welcome Andi Otto as this week’s guest contributor. Andi is a composer, electronic musician and researcher from Hamburg, Germany. He performs internationally under the name of Springintgut, his third album “Where We Need No Map” (CD/LP) will be released on April 15th on the Pingipung label. Here’s his artist profile on the label’s website, and these are his Soundcloud and Facebook pages. - Tobias]
Photo by Robin Hinsch
Write about myself? Thanks for the invitation, Tobias, never done that… except for some project applications, of course. You asked me to reflect on my sensor-equipped cello bow, Fello, with which I perform my music. I’ll gladly try.
Some background first: I’m a trained musician on cello and drums, producing my own electronic music under the name of Springintgut since 2002. In 2005 I was invited to develop a stage instrument for my music at STEIM in Amsterdam, and that’s where I have been developing my “Fello” instrument over the past few years. I was impressed with the skills and expertise in sensor-based music collected within the STEIM facilities, both among the engineers and the performers around. Michel Waisvisz, the STEIM director from 1981 until his passing in 2008, had already used ultrasound and tilt sensors to perform with racks of DX7 synths in the early 80s! When everybody else was hailing MIDI as the new opportunity to hook up one “master keyboard” to several other devices, he used audience-embracing gestures to shape new digital FM sounds, and later pioneered live-sampling performances on stage:
Ever since I touched base at STEIM I have on the one hand pursued research in their history (which no one before me ever really cared about) and on the other hand worked on my own musical interface for the Fello. I find that looking back and looking ahead goes hand in hand (there’s no new without the old) – both in my academic PhD project on the history of STEIM’s sensor-based music and in my artistic works.
After going through a draft period in 2005 in which I tried to use the cello sound as control input in a complex improvisation system (together with Florian Grote, who is a brilliant PD programmer), I ended up with an accelerometer on the bow’s frog. It was a leftover device sitting on a shelf in STEIM’s workshop - Joel Ryan had built it out of a Wii Nunchuck for a dance piece. In the first mapping tests it was connected to a delay, the two axes of the bow changing the time and feedback of the echoes. I remember the magical moment when I first lifted the bow off the strings and activated the delay with a foot switch: the gesture in the space around the cello would produce sounds like little rolls (fast delays, high feedback), single repetitions (slow delay, low feedback) or looping structures (slow delays, high feedback), gradually changing, derived only from my sampled cello sounds, depending on the moves I would make with the bow. I spent hours there the first night, then days, just exploring the possibilities with the delay mapped to these gestures. It was as if a magical dimension had just opened, and I have rarely played the cello without the sensor since.
It took me one month to work out the entire mapping of the “Fello”, which I have left untouched until today. Learning the instrument contradicts the urge to constantly rewrite it. Hardly ever changing the patches anymore makes the system feel like an instrument which I have to master, not an arbitrary set of variables I can adjust at any time for more efficient ergonomics (whatever that may be). The input is mapped to control 8 delays, 1 filter, 1 complex live-sampling patch, 2 freeze reverbs and a crossfader.
Here’s the hardware setup I use on stage today:
1 Piezo Mic, David Gage “The Realist”
1 accelerometer and 1 pressure sensor, Li-battery powered
1 XBee wireless sensor data transmitter
1 Behringer BCR2000 MIDI controller
1 NI Maschine
2 foot switches
1 RME Fireface UC
1 MacBook Pro
Software: JunXion, LiSa, Ableton Live
I use the BCR to activate MIDI channels 1-16 with its 16 toggle switches. If no button is pressed, there will be no data going from the bow to the software. Each button / MIDI channel represents another process. In the delays, the two foot switches combined with the toggle button will activate the time and feedback parameters for the bow. This means that I can “freeze” a certain setting by releasing the foot. Otherwise a delay which constantly changes its micro-timing would generate much noise. STEIM’s engineers Byung-Jun Kwon and Marije Baalman skillfully engineered the XBee sensor modules. A force sensitive stripe on the wood of the bow has finally been added in 2010.
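A rough sketch of that routing logic follows (my reading of the description, not the actual junXion/LiSa patches): a toggle enables a process, and while the foot switch is held, the two bow axes control delay time and feedback; releasing the foot freezes the current setting. The parameter ranges are invented for illustration.

```python
# Sketch of the toggle + foot-switch logic described above (hypothetical ranges).

class DelayControl:
    def __init__(self):
        self.enabled = False     # toggle for this process / MIDI channel
        self.foot_down = False   # foot switch held -> bow controls parameters
        self.time_ms = 250.0     # current (possibly frozen) delay time
        self.feedback = 0.3      # current (possibly frozen) feedback

    def bow_data(self, x: int, y: int):
        """x, y: accelerometer axes as 7-bit values (0-127)."""
        if not (self.enabled and self.foot_down):
            return               # no data flows, or the setting stays frozen
        self.time_ms = 20.0 + (x / 127.0) * 980.0    # e.g. 20 ms .. 1000 ms
        self.feedback = (y / 127.0) * 0.95           # up to near self-oscillation

delay = DelayControl()
delay.enabled = True
delay.foot_down = True
delay.bow_data(100, 90)          # bow gesture shapes the echoes
delay.foot_down = False
delay.bow_data(10, 10)           # ignored: the setting is frozen
print(round(delay.time_ms), round(delay.feedback, 2))
```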
The first public appearance of the “Fello” system took place in a dance theatre piece by the choreographer Victoria Hauke at Kampnagel (Hamburg) in early 2009. There was no control in this controller; it felt like taming a wild beast. I was doing lots of brain-worked gestures to move the bow into the right positions to come up with (or mute) a certain sound. The ability to reproduce a sound was demanded by the dancers – a huge challenge which eventually made me play only very simple structures in this first performance.
In these early days I found out that one important feature is to be able to switch the sound off. It’s so simple that it’s easily ignored in the development. There must be a shortcut to silence in all situations, especially when performing with others.
This piece, “()else”, was then invited to a dance festival in India. After the show in Hyderabad I met DJ Prashant, who uses south Indian percussion recordings in his digital DJ setup, and we played a quick session at a jazz club. Someone from the Goethe Institut saw this, and the result was that at the end of 2010 we were invited to play a three-month tour through India under the name of PandA System.
I went to Kyoto, Japan, in 2011 as artist-in-residence at the Villa Kamogawa. Three months of intense work in a very inspiring and beautiful environment – it was a blissful gift – let me rehearse, record, and extend the “Fello” system. After shows in Japan I sometimes got the response from the audience that I “do nice dancing to my music”. Some obviously couldn’t see the interaction between bow and sound, because the sensor was small and almost invisible in my hand. The double interface function for both the cello and the computer is something which the audience doesn’t necessarily see at first. That’s why I decided to add a visual element to emphasize the bow gestures. So during the Kyoto residency, I tried to generate video with Max4Live’s V-Module tools. It proved difficult to create interesting visuals while performing the music. After some concerts I decided not to use my own live visuals on stage, unless there is another artist taking care of it (who could of course also use my sensor data).
Back at STEIM I told them about my idea, and this was Marije Baalman’s beautifully simple solution: she wired an RGB LED to the sensor, so that the bow glows in different colours depending on the gesture. It now looks like a piece of technology, even from 50m distance - that was a major piece of advice I brought home from Japan. That green swoosh in my hands can affect all the electronic sounds and still play the cello traditionally; everybody can see that ambiguity at once now.
Photo by Mike Wilfinger
In April 2013, I will finally release my third album as Springintgut on the Pingipung Label. It’s called “Where We Need No Map” and is available for pre-order here.
(c) Pingipung Records. Artwork by Jochen Ruderer
It’s the first album to feature plenty of Fello recordings, and if you’ve read through to here, you’ll know that the title could refer to the software mappings, which are no longer in the brain-knowledge but in the body; playing the instrument is like swimming, not like reading a manual.
I’ve edited and produced the recordings in the studio, played drum machines and synths over them, added field recordings from Japan and India and received fantastic voice tracks from Sasha Perera. She’s the singer for the band Jahcoozi (BPitch Control, Berlin) and we spent two weeks together in Sri Lanka in the Soundcamp South Asia in 2012. Here’s another short clip, from a solo performance during that stay:
It’s funny that I initially made the Fello to be able to play a veritable live concert with my electronic music, and now it’s fixed on this record which I can’t perform as it is, because every sound has been processed and re-built in the studio for hours and days. This means that I have to come up with new interpretations of the tracks on stage, which will likely be much more reduced and raw than the studio music. The common element is the Fello, exposed in its simplicity on stage as the source material for the album’s sonic identity.
[This was the seventh in a weekly series of guest posts on the topic of “live performance with computer technology” by a range of exciting music makers and thinkers. Please post any comments or questions for Andi in the comment section below and share the essay with anyone who you think might want to read this. Make sure to also read the previous guest posts by Jeff Swearengin (on improvising [with] the setup), Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks). Come back on April 8th for the next installment in the series.]
[It’s my pleasure to welcome guest writer Jeff Swearengin to this weekly series of writings on “live performance with computer technology”. I’m particularly excited because I heard about Jeff through the recommendation of our mutual friend and previous guest writer Adrian Benavides. Jeff is an electronic musician, drummer and producer residing in Los Angeles. His SoundCloud page is at soundcloud.com/sleep-clinic. - Tobias]
When I was asked about participating in this project I was really unsure of what I would talk about. Tobias gave me a list of great questions and prompts to run along with, so I have treated this more as a conversation/interview, but will preface it with some background about myself. I’ve been working with electronics in music since the mid 90s. I come from a drumming background, so originally I had a slew of drum machines and two synths (an Oberheim Matrix 6 and a Kawai K4). After working with Kurzweil K-series and Ensoniq samplers I really liked the idea of having everything in one place versus MIDI-ing everything together. I eventually bought an MPC2000xl (much more affordable than the Kurzweil at the time) and that really opened me up to the world of sound design through sampling and processing audio. When the computer arrived in my home I obtained a copy of Reaktor 2.0 and was thoroughly confused and fascinated by it all at once. This was probably my first real encounter and experimentation with granular synthesis, and I was hooked even though it was all very trial and error for me. At that time Reaktor had elements that were, for me, like using a synth with Martian text for all the function labeling. I think that turned out to be an advantage in some ways: not having a reference point or context, as I did when using subtractive, FM, or additive synths. Shortly after this point I also discovered things like Bidule, AudioMulch, Max/MSP, SuperCollider, Pure Data, Ableton Live etc. It took some time to really find ways to use these programs musically. I would spend a great deal of time making setups to do improvisations with, or even just to design one-shot samples that would all eventually be edited, processed, combined, and then thrown into my MPC, where I would create tracks in which the sound design and my sense of rhythm dictated the arrangement more than doing things on purpose. This kind of software/hardware fusion, in conjunction with all my musical influences from the time, really made me re-evaluate what I considered to be music or musical. In the case of the MPC2000xl, I would load rather long samples into programs. Where most people would have chopped them and assigned them to pads to fit the timing, I would just allow the sample to run and overlap on itself, letting its own rhythmic identity form and then building around this for further structure. These early experiments with the sampler would later translate really well into software environments once my understanding of algorithmic and generative composition developed further.
Coming from an instrumental background, how do you approach improvising with gear/software?
At a very young age I was completely fascinated and obsessed with drums. Drum kits looked like these complex machines and I was fascinated by the way one must interface with them. From ages 6 to 15 I spent many hours a day playing drums and being content with a straightforward kit. I was inspired by everything from the classic rock my Dad listened to all the way over into death metal and everything in between. Around the age of 15 I was CD shopping and came across Godflesh’s Slavestate EP. This album was a major pivotal change in what I would begin to listen to and also in how I played music. I could not afford an electronic drum kit, so I would instead detune my drums and play with different configurations that would force me to approach my playing differently. In time, as I was falling deeper down the rabbit hole of electronic music and its endless sub-genres, I grew very frustrated by the limited sound palette of acoustic instruments. This is where my obsession with synthesizers, drum machines, samplers, effects processors and eventually computers starts.
Having this instrumental background, my approach to improvising with hardware/software varies quite a bit depending on the set up that is in front of me. What is constant is that I use my improvisation sessions as a form of meditation or channeling. A ritual, designed to articulate and manifest feelings, memories, fantasies, moods, dreams, and other subconscious percolations into sound - an exercise into reflection and projection.
When you set up for an improv, do you have a musical vision, a structural idea or a setup you want to try, or curiosity about combining certain pieces of gear, or something else?
Any time that I am creating music it is from an improvisational approach. The idea of having a thumbnail sketch of a song in mind and wanting to attempt to translate that into reality is an alien concept to me. I do sometimes have aspects of the improv predetermined, depending on the current mood of the set and setting, or the source of the inspirational trigger.
Outside of that, my only vision is to transport myself and the listener to another place, and to fully experience immersion. My curiosity in combining particular pieces of gear is as endless as it can be though. Fortunately, most of the hardware synths I have also have audio inputs. This gets great results both for compositions and for sample harvesting and developing some unique effects chains. It’s also been a great way to get software out of the box and colored by different pieces of hardware that can add so much unique character to the sound of software.
Which things do you prepare, how do you prepare them? Is there a fixed routing or can that be part of the improvisation as well?
The prep work is pretty involved. Since a lot of what I work with is sample based, I spend large amounts of time creating, editing, and processing the samples. Generally what I will do is take my source material – which varies from field recordings, material from movies (I try to grab things without voice or music), and synths to long-form improvisations – and edit that first. Then the processing portion can be a number of things, from typical effects processing, to using granular synths to transform the sample, to harmonic extraction/destruction, all to varying degrees. This process itself is an improvisation. Once I have built up a good-sized library of sounds (64-100 samples), these will then get dumped into the MPC2000xl, the Elektron Octatrack, or both, for sequencing, further processing, modulation, and actual composition for live use. Within a studio environment it’s the same approach, but the material generally will end up in Ableton Live, Reaktor, or Max/MSP/Jitter, where it will be composed into an assemblage of versions or takes; those pieces will then be exported out to hardware where they are reworked again. Routing definitely is part of the improvisation as well. I think routings are a crucial and integral part of the improvisation and of determining a trajectory for it to head toward. It is the deciding factor of the setup’s language. Much of the detail and nuance is created via the routing and re-routings, particularly when combining software and hardware performances. The individual[s] involved in the improv are also a type of algorithmic element in the improvisational system that determines the outcome, rather than just a controller or composer - you become a part of the circuitry of the system you are interfacing with, in a way.
What parameters are being played manually, and what is algorithmically decided?
This is a really tough question because it changes so much depending on what is taking place in that particular session. I used to like having a majority of the decisions made algorithmically, with manual interaction used more to determine changes rather than to control or produce the actual composition. Now it’s more the opposite: I like to have just a few things being done algorithmically – almost like having an invisible player with you that you are able to vibe and feed off of, and vice versa. You recently had Ben Carey on your blog. I was completely blown away by his use of, and setup for, algorithmic composition. He mentions in his writing wanting a system that is more interactive than reactive. I couldn’t agree more. I was really inspired by that entry.
In this example the melody is being played manually, with algorithms controlling scale properties. The beat is fired off from a sample pool that the algorithm can choose from. The samples are assigned to a keymap, with zone 1 being kicks, zone 2 being snares, and zone 3 being clicks, giving some degree of control over what type of drum is played for each synth note. The tempo is randomly generated.
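As a rough illustration of that routing, a small Python sketch might look like the following; the scales, zone mapping and tempo range are assumptions made for the example, and the printout stands in for actual sound output.

import random

SCALES = {"minor": [0, 2, 3, 5, 7, 8, 10], "lydian": [0, 2, 4, 6, 7, 9, 11]}
SAMPLE_POOL = {1: ["kick_a.wav", "kick_b.wav"],      # zone 1: kicks
               2: ["snare_a.wav", "snare_b.wav"],    # zone 2: snares
               3: ["click_a.wav", "click_b.wav"]}    # zone 3: clicks

def quantize(note, scale):
    # Pull a manually played MIDI note onto the algorithmically chosen scale
    octave, pitch_class = divmod(note, 12)
    nearest = min(scale, key=lambda s: abs(s - pitch_class))
    return octave * 12 + nearest

tempo = random.randint(70, 160)                  # randomly generated tempo
scale = SCALES[random.choice(list(SCALES))]      # algorithm picks the scale

for played_note in [60, 63, 67, 70]:             # stand-in for the manual melody
    note = quantize(played_note, scale)
    zone = 1 + (played_note % 3)                 # each synth note selects a drum zone
    drum = random.choice(SAMPLE_POOL[zone])      # beat fired from the sample pool
    print(f"{tempo} BPM  note {note}  drum {drum}")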
Is the result of an improv the finished piece? If not, how do you proceed with the generated material?
A lot of times I will do one-take improvs that I consider finished. The improvs that don’t seem finished are either treated as a stem in a multi-track or sliced and diced for the samplers, where they are transformed into something else.
Here is an example of an improv that was done as a single take. This is a Waldorf XT being fed into a sequenced reverb effect and then back into itself again. The reverb is basically several snapshots set at different settings; it then runs through the snapshots as the audio comes in.
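The actual patch isn’t described in detail, but the snapshot idea can be sketched roughly as follows; note that this stand-in uses a plain feedback delay in place of the reverb, and the block size, snapshot values and switching rate are assumptions.

import numpy as np

SNAPSHOTS = [{"delay": 2400, "feedback": 0.3},   # assumed snapshot settings
             {"delay": 800,  "feedback": 0.7},
             {"delay": 4800, "feedback": 0.5}]
BLOCK = 1024
buffer = np.zeros(48000)                         # one second of delay memory at 48 kHz
write_pos = 0

def process_block(dry, snapshot):
    # Run one audio block through the delay using the current parameter snapshot
    global write_pos
    out = np.empty_like(dry)
    for i, x in enumerate(dry):
        read_pos = (write_pos - snapshot["delay"]) % len(buffer)
        wet = buffer[read_pos]
        out[i] = x + wet
        buffer[write_pos] = x + wet * snapshot["feedback"]   # output is fed back in
        write_pos = (write_pos + 1) % len(buffer)
    return out

audio = np.random.uniform(-0.1, 0.1, 48000)      # stand-in for the synth signal
blocks = [audio[i:i + BLOCK] for i in range(0, len(audio), BLOCK)]
processed = [process_block(b, SNAPSHOTS[(n // 8) % len(SNAPSHOTS)])  # new snapshot every 8 blocks
             for n, b in enumerate(blocks)]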
In the next example there are several algorithmic takes and also improvs using synths chained by audio inputs. The rhythmic material was programmed; once everything was set in place in Ableton Live, it was improvised again using the clip mode to get the arrangement.
I like how you wrote in an email that “[t]his kind of software/hardware fusion in conjunction with all my musical influences from the time really made me re-evaluate what I considered to be music or musical.” I’d love it if you could expand on that. It’s something I’ve observed with myself as well - algorithmic composition is ear training - you get accustomed to hearing very weird, previously unimaginable stuff that really expands your musical imagination.
I agree with you completely on this. It does start to open your ears and mind up to atypical elements that in most cases would probably seem undesirable, and makes them desirable - from broken or angular rhythms to amelodic passages, and so on. I’ve always been a huge fan of bands like Coil, O.S.T., Autechre, Nurse With Wound, Arovane, Proem, Throbbing Gristle, etc. One thing that really turned me on to Autechre early on (other than the incredible sound design) was the use of repetition to the point of hypnotic effect. When they began to use generative sequencing and algorithmic composition I was really blown away at what could be achieved using these methods to make music. Between them taking us to the outer limits of abstract electronics and deconstructed forms of techno and hip-hop, and the techniques such as musique concrète and found sound employed by artists like O.S.T. and Nurse With Wound to create their improvisations, I found myself really drawn into and fascinated by the way these artists were creating really strong, intense works, and slightly disenchanted with more conventional arrangements and compositions. I eventually found my way back to a balance in my listening habits, but my tack out into experimental forms of electronic music has had the biggest impact on me as a listener and collector of music, and as a producer and artist as well… it really redefined the meaning of what a song could be and what music is.
[This was the sixth in a weekly series of guest posts by a range of exciting music makers and thinkers. Please post any comments or questions for Jeff in the comment section below, and share the essay with anyone who you think might want to read this. Make sure to also read the previous guest posts by Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks). The series will be taking a break next week - please come back on April 1st for the next installment.]
A couple of weeks ago I researched libraries in Switzerland that would accept my CDs for inclusion in their catalogue. I also offered to send anyone on Facebook and Twitter two CDs for free if they volunteered to bring one of them to their local library. I liked the idea because it was about making gifts to each other and to the public, and because it built on collaboration and trust.
Ten people got in touch that day, and so I got to ship packages to Germany, UK, USA, Iceland, South Korea and Switzerland in addition to the ones I sent to Swiss libraries myself. The costs for sending all the CDs amounted to 111.- Swiss Francs (90€, $118). That is a significant amount of money and the offer to send free CDs is currently on hold, but here’s an idea to make the project come full circle:
I offer two exclusive recordings (17 minutes of music) from my recent composition practice to anyone who donates 10 Swiss Francs (8€, $11) or more towards the project.
Send donations via Paypal to tobias(at)tobiasreber.com.
The offer expires on April 1st, 2013.
Any money raised beyond 111 Francs will go towards shipping more CDs to libraries.
Within 3 days of your donation I will send you a link to two high resolution MP3 files.
I may choose to repeat the whole project at a later time, offering different sound files.
What you give:
You support libraries by giving them music they wouldn’t otherwise get.
You help make my music available to people who in turn will give a copy to their library.
You support me by helping me make my music available.
You give us both the pleasure of sharing something unusual. :)
What you get:
First of all, my thanks! Also, 17 minutes of music that is exclusive to this project, recorded directly from my custom software playing software synthesizers. The recordings are not mixed or mastered. They are sketches, not finished compositions, and they offer a glimpse into the structures and sounds that I’m interested in at the moment. The two recordings on offer will not be released at any later time.
I’m very happy to announce that I’ll be teaching at the Hochschule der Künste Bern (University of the Arts Berne) from April to June. Media and performance artist Lara Stanic and I will be standing in for a faculty member on maternity leave. I will be working with students from the BA programme in Music and Media Arts and the MA in Contemporary Arts Practice, and classes will include electronic music ear training and composition as well as project mentoring in personal meetings. I’m very much looking forward to this!
[Please welcome Samuel Gfeller as this week’s guest contributor. It’s my big pleasure to have Samuel here as I’ve seen him produce very interesting work for a number of years now, much of which fits this series perfectly. He’s applying his interest in interactive and generative composition in such diverse areas as music composition, performance, installations and audio drama. Samuel is on Facebook and his work is documented on his website at www.samuelgfeller.ch - Tobias]
As a musician I started working with electronics and the computer as an instrument a few years ago. After many experiments with straightforward samplers and – in my opinion – rather interface-controlled sound effects, I got in touch with algorithmic composition. One of the main inspirations for me was the work of Austrian composer Karlheinz Essl. In 2010 I wrote my bachelor thesis on the subject of “algorithmic composition and live-electronics”.
One of the questions I asked myself when first working with algorithmic composition was: how do I deal with the wide range of possibilities between total organization and complete chaos?
Using the computer and software-based algorithms to produce music led me to the following notion: as a composer, I used the algorithm as a kind of “agent” which follows my intentions in the digital world, i.e. it brings order into the chaos. My challenge was to find a way to translate my musical and compositional ideas into concepts I could use in software.
It’s obvious that working with random operations, chance and probabilities leads to more complex sounds and musical forms. I didn’t have to control every single parameter of the sound any more when I was playing with an algorithmic instrument. Rather, I controlled the meta-structures: ranges of pitches or note durations. Always keeping in mind that I could get lost in arbitrariness, I had to adapt and limit the algorithmic operations very carefully. So, finding the right interface to play with was the next crucial step. Fader boxes and game controllers were the first things I tried out, but very soon I realized that the design of the interface influenced the way I performed on stage. Being limited by and bound to hardware concepts borrowed from mixing consoles, I looked for more freely configurable interfaces. The ones I’m working with now are the monome 64 and arc 2 from monome.org.
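A minimal sketch of what controlling meta-structures rather than individual notes can look like, assuming a simple random generator; the ranges and the printed output are illustrative, not Samuel’s actual patch.

import random

def generate_phrase(pitch_range, duration_range, length=8):
    # Fill a phrase with events chosen inside the current ranges;
    # the performer only moves the ranges, the algorithm picks the events
    return [(random.randint(*pitch_range), round(random.uniform(*duration_range), 2))
            for _ in range(length)]

# Narrow, short material versus wide, slow material - only the meta-parameters change
print(generate_phrase(pitch_range=(48, 60), duration_range=(0.1, 0.4)))
print(generate_phrase(pitch_range=(60, 84), duration_range=(0.5, 2.0)))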
„Performing music with intelligent devices tends towards an interactive dialogue between instrument and instrumentalist.“
(Sergi Jordà, Interactivity and live computer music, in: Nick Collins & Julio d’Escriván, The Cambridge Companion to Electronic Music, Cambridge University Press, 2007, p. 90)
As Markus Reuter wrote in last week’s guest post, I’m convinced that there is a dialogue between the instrument and the instrumentalist, especially when working with software. I’d like to write about two instruments I have played improvised music with in the last two years and the way they influenced my improvisational practice, as well as the design and configuration of the interfaces for performing on stage.
1 - Cut Asunder (2011)
Montage of an improvised set with Cut Asunder:
The main concept of this instrument is quite simple. I play with different sound files that are loaded into six channels of a sampler. The samples are sliced into small parts which I can play with during the performance. As a further complication there is an underlying system of rules which interacts with me and triggers sounds on its own.
The physical interface I used was the monome 64 with two control layers. One was the main layer, with which I could trigger the sounds, activate the auto-playing mode and control the time structure of the algorithmic system. A second layer I simply used to control the levels of the six samplers and the main volume. I also used the built-in tilt function to control a low pass filter and the amount of feedback in the effects chain. The computer was hidden whenever possible so I could play with only the controller.
This was the allocation of the buttons:
[Diagram: button allocation on the monome 64]
As you can see, each row of the monome has just eight buttons. When I was designing the software I knew that I didn’t want to have direct access to the sounds. That’s why the samples were randomly cut into 8 to 64 slices. When playing, I could only play with the ranges of a sound and left the detailed control up to the algorithm. Furthermore, when I activated the auto-play mode, the sliced samples were triggered by a random time structure. This time structure was divided into rhythmical and arrhythmical parts, with accelerandos and ritardandos.
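These two random layers could be sketched roughly as follows, with the number of events, timing ranges and the shape of the accelerando assumed for the example; the printed numbers stand in for the actual slice playback.

import random

def slice_points(sample_length, n_min=8, n_max=64):
    # Cut a sample (given as a length in samples) into 8 to 64 random slices
    n = random.randint(n_min, n_max)
    cuts = sorted(random.sample(range(1, sample_length), n - 1))
    return [0] + cuts + [sample_length]

def time_structure(n_events=16):
    # Generate onset times: rhythmical sections are evenly spaced, arhythmical
    # ones random, with a gentle accelerando applied across the section
    onsets, t = [], 0.0
    rhythmical = random.random() < 0.5
    base = random.uniform(0.1, 0.6)
    for i in range(n_events):
        step = base if rhythmical else random.uniform(0.05, 1.0)
        step *= 1.0 - 0.03 * i
        t += max(step, 0.02)
        onsets.append(round(t, 3))
    return onsets

print(slice_points(sample_length=44100))
print(time_structure())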
When performing with the instrument I could interact with the slices played by the time structure, imitating them or trying to create musical contrast. This was quite challenging since the software also registered the sequence of pushed buttons and, in turn, changed the transition probabilities for the sample playback.
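That interaction can be read as a small Markov-style system; the sketch below is an assumed reconstruction of the idea, with the eight-slice fallback and the example button presses invented for illustration.

import random
from collections import defaultdict

transition_counts = defaultdict(lambda: defaultdict(int))

def register_presses(button_sequence):
    # Update transition counts from the performer's recent button presses
    for a, b in zip(button_sequence, button_sequence[1:]):
        transition_counts[a][b] += 1

def next_slice(current):
    # Pick the next auto-played slice according to the learned transitions
    counts = transition_counts[current]
    if not counts:
        return random.randrange(8)                 # fall back to a uniform choice
    buttons, weights = zip(*counts.items())
    return random.choices(buttons, weights=weights)[0]

register_presses([3, 5, 3, 5, 2, 3, 5])
print(next_slice(3))   # 5 has become the most likely follower of 3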
Solo performance with Cut Asunder:
On the compositional side, it was very important to choose the material with care. I mostly worked with divergent sounds: some percussive ones, drones, some with more high frequencies or with many gaps. I arranged the sounds in time before playing, so I knew roughly when I would use a sound or when I would make a transition to another one.
I realized that in addition to the algorithms, it was helpful to play with structures too. By playing the monome I developed gestures I could use and repeat at any time – just as a pianist knows where to put his fingers when he wants to play a triad, for example. The gestures also made the performance interesting for me in the confrontation with the interface’s own algorithmic “inner life”. It gave me the sensation of being creator and listener at the same time, and it was my choice when and how I would interfere with the instrument.
Improvised lounge set (duo with Tobias Reber)
Samuel Gfeller: Transit IV
2 - Arc____ular
The second instrument I want to present is an ongoing project I recently developed for the soundtrack of a silent movie commissioned by the Institute of Incoherent Cinematography (IOIC); I also use it in an improvisation trio with a flutist and a clarinetist.
Arc____ular improvisation with flute and clarinet
The fact that I was preparing to improvise to a 90-minute silent movie changed many things when I developed my instrument.
I knew that I had to follow a different strategy, one that allowed me to play for a very long time without getting lost in material that is too abstract and difficult to understand. I was also looking for consistency in sound. So I limited the sound sources to just a few small percussion instruments (in the end: kalimba, singing bowls, FM3 Buddha Machine, Korg Monotron synthesizer) and focused more on processing the sounds (e.g. by using a Revox A77 for tape delays and feedback in an external effects chain). The interface was a monome 64 combined with the monome arc 2 and a fader box for controlling the levels of the different sound modules of the software.
The instrument was built up of a granular synthesizer and a sample recorder which again cut the recorded samples into slices. Only this time I decided to have direct control over the material I recorded. I also added the possibility to resample internal sounds, to have more ways to process material once it was recorded. I got rid of most of the algorithmic stuff because it was hard to control and to make fit with the soundtrack I had in mind for the silent movie. The only module still based on algorithms was an analyzer that tracked all the pitches played by the software and used them to produce slowly evolving sine tone drones.
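In outline, such an analyzer/drone module could look roughly like this, assuming the tracked pitches arrive as frequencies in Hz; the fade time, drone length and example pitches are illustrative, not the actual software.

import numpy as np

SR = 44100

def render_drone(recent_pitches_hz, seconds=8.0, fade=3.0):
    # Sum sine tones at the recently tracked pitches and apply slow fades,
    # so that successive drone chunks evolve gradually rather than jump
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    envelope = np.minimum(1.0, np.minimum(t / fade, (seconds - t) / fade))
    drone = sum(np.sin(2 * np.pi * f * t) for f in recent_pitches_hz)
    return envelope * drone / max(len(recent_pitches_hz), 1)

# Pitches collected from the last stretch of playing (illustrative values)
chunk = render_drone([110.0, 164.8, 220.0, 277.2])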
When I was preparing for the concert I tried to find different strategies and different ways to play with the whole setup. I had to find ways to move very slowly from one sound to the next, to develop musical material over a long time but then to change it very fast. This flexibility was necessary so that I wouldn’t get bored or fall into the same archetypes I had learned while rehearsing again and again.
I think when improvising with a software-based instrument it is obvious that you act or react in the same manner a “traditional” instrumentalist would. What makes the difference for me is a kind of inertia I feel when playing with live electronics (you can’t simply move to a C major chord from one second to the next). That’s why I learned to think – or better, to listen – ahead in time, to be ready with a sound I imagine when it’s time to play it. For that it was very helpful to have some kind of timeline I could hold on to during the movie – a cue list that told me which section came next and when.
Another idea was to create some kind of play-along that I could start at the beginning of the movie and that would help me have clearer formal segments. This didn’t work because, in the end, I had only very few prefabricated sounds I could trigger.
“La passion de Jeanne d’Arc” – Teaser
Coming to the end of this field report I can say that playing to the silent movie was a great experience for me. What surprised me was the fact that I completely lost all sense of time after the first 10 minutes and the movie was over sooner than I thought.
Having more control over what happens within the instrument was a good decision. I forced myself to really be in the moment, to be very present and to have confidence in the strategies I had rehearsed previously.
“La passion de Jeanne d’Arc” – Full video
[This was the fifth in a weekly series of guest posts by a range of exciting music makers and thinkers. Please post any comments or questions for Samuel in the comment section below, and share the essay with interested people. Make sure to also read the previous guest posts by Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks), and come back next Monday for the next guest entry.]