[Please welcome Alex McLean as this week’s guest writer. Alex is a musician from Yorkshire, UK who specializes in live coding of music performances. He has written a PhD on “Artist Programmers and Programming Languages for the Arts” and is a co-founder of TOPLAP (Temporary Organisation for the Promotion of Live Algorithm Programming). Alex is @yaxu on Twitter and his website is at http://yaxu.org. - Tobias]
Music is an aspect of Human life which touches almost everyone, present within all hearing cultures. The field of music psychology is an exciting place, but music is still little understood. One recent, straightforward way of defining music is in terms of recordings; music is something you can buy, and then listen to. Digital music-making technology has grown up in this context, where tracker, sequencer, and even live production software has maintained a dichotomy between producer and consumer. Music is something that is made, perhaps alone in the stereotypical bedroom studio, for someone else to listen to later.
If we think of music not in terms of product, or even sound, but just in terms of Human activity, then music technology begins to take a different shape. Christopher Small reframed musical activity as Musicking, an expansive definition which takes in not only the playing of instruments, but also ticket purchases, the gathering in a concert hall atrium, and all the other rituals of music. Against this background, the iTunes music store is not a way of acquiring music, but is actually music itself: the wider experience of music reduced to a tabular interface centred around consumer choice. The same applies to Ableton Live, Cubase, Cakewalk Pro, and Buzz tracker; if the production and distribution of these software systems is musical activity, then their programmers are the master musicians of our times. Where they are anonymous, we could call them folk musicians, making music which is not for people but of the people.
Generative music is perhaps the purest of music technologies, based on the idea of autonomous music. Wind-chimes are the classic tangible example, played by the wind, beyond the realms of Human activity. But if music is Human activity, then the music of wind-chimes is created not by the wind, but by the wind-chime maker, and by the passerby taking the time and imagination to find music in the experience of listening to them.
The ideology of autonomous generative music has leaked into music technology in surprising ways, and left us in some strange situations. As deadmau5 points out, there is a huge disparity between pressing play on a computer and the frenzied, engaged reaction of a large crowd. This situation seems unfair, not to the audience, who are having a great time, but to the performers, who are left looking for self-justification (or, in the case of deadmau5, honourable self-deprecation). Perhaps there is a better way of approaching technology.
The principle of technological determinism might suggest that the music technology we currently have is inevitable, but that isn’t the case. Computers are language machines, for writing text about text, and yet through great, painful and difficult work we use them to construct simulations resembling something like the world outside. If software *did* determine music technology, then perhaps it would look more like the fringe activity of live coding. Live coders write and manipulate text, interpreted live into sound, which is experienced as music. This is a form of improvised algorithmic music, where a piece of music is described using higher-order, abstract constructions, while that description is interpreted live by a computer. In this way live coders have blended music and software-making, both often considered as end-products, into a single activity.
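To make this idea of "higher-order, abstract constructions" a little more concrete, here is a minimal, purely illustrative Python sketch. It is not code from any actual live coding language; the `euclid` helper and its step formula are this sketch's own invention. The point is only the shape of the activity: the musician types a short abstract description of a pattern, and the machine expands it into concrete events, re-evaluating the description whenever the text changes.

```python
# Illustrative sketch only: a rhythm is *described* as a short expression,
# and the machine expands it into a concrete sequence of events. In a live
# coding system this description would be re-evaluated while sound plays;
# here we simply print the result.

def euclid(pulses, steps):
    """Spread `pulses` hits as evenly as possible over `steps` slots
    (a Euclidean rhythm, a construction often heard in live-coded music)."""
    return [1 if (i * pulses) % steps < pulses else 0 for i in range(steps)]

def render(pattern, hit="x", rest="."):
    """Turn the abstract pattern into a printable step sequence."""
    return "".join(hit if p else rest for p in pattern)

# Three hits against eight steps: the familiar "tresillo" distribution.
print(render(euclid(3, 8)))   # x..x..x.
```

Changing `euclid(3, 8)` to, say, `euclid(5, 16)` and re-running is, in miniature, the live coder's gesture: editing the description is the performance.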
Live coding is not really a leap up, but rather a return to roots, establishing new foundations of human-computer interaction, using programming languages in end-user interaction. You can find example videos, and freely-downloadable live programming languages, through the TOPLAP website.
Live coding is a family of approaches to music-making, used within a wide range of musical genres. It is also applied to live video animation such as VJing (for example using Fluxus or LivecodeLab). Classically, a live coding performance will involve one or more improvising musicians with a live audience. An audience is not necessary, though; we might live code as part of a private drumming circle, or even alone, either as a solo exploration or as part of a compositional process towards a fixed recording.
If an audience is involved, they are generally able to see the live coders’ screens, in accordance with the TOPLAP manifesto. This is often achieved by projecting the screens. The audience are not expected to read or understand the code in order to enjoy the music, however, any more than we need to understand guitar-playing technique to enjoy guitar music. Showing screens simply opens up the Human activity of live coding, so that people feel included and have a way into the music.
While there is on-screen movement, it is also true that live coding is generally not the most active of performances. The difference between the audience and performers can be stark; everyone is engaged, but on two very different levels.
The disparity between the live coder, fixed on their screen, and a dancing audience before them is strange enough. Everyone is in a state of creative flow, and everyone is somehow engaged in the dance as a group. The emerging Algorave movement emphasises this activity of the audience in algorithmic music, including (but not only) through live coding performance. Just as the original rave movement was driven by anonymous producers and DJs, the contemporary algorave encourages creative listening. Not the still, deep listening of Pauline Oliveros, but listening with the whole body. Establishing a ground in the complex, polyrhythmic output of musical algorithms is easy if you dance to it.
So this has been a bit of a ramble, but hopefully goes some way towards motivating embedding the expressive power of computer language in an active music experience. It’s been a great pleasure sorting through some of these thoughts, but there is a lot more sorting through to do… This is of course the function of blogging, and I hope you’ll contribute some of your thoughts to the mix below!
Thanks for reading, and thanks also to Tobias for having me as a guest.
An electronic computer isn’t strictly necessary in live coding performances; for example, Nick Collins and Kate Sicchio have both used Human dancers to take the role of a computer, as a form of live algorithmic choreography.
[This was the ninth in an almost weekly series of guest posts on the topic of “live performance with computer technology” by a range of exciting music makers and thinkers. Please post any comments or questions for Alex in the comment section below and share the essay with anyone who you think might want to read it. Make sure to also read the previous guest posts by Marcel Saegesser (on composing in a digitized world), Andi Otto (on his work with a sensor-equipped cello bow), Jeff Swearengin (on improvising [with] the setup), Samuel Gfeller (on his live electronic instruments), Markus Reuter (on making the computer vibrate), Adrian Benavides (on playback engineering), Ben Carey (on his interactive music software) and Erik Schoster (on the installations of artist Khristian Weeks). Check back on April 29th for the next installment in the series.]
Here’s “A Listener’s Guide to centrozoon” that artist Whitney Leary compiled.
It’s made up of tour photos and excerpts from the round table discussion that Bernhard Wöstheinrich, Erik Emil Eskildsen, Lee Fletcher and I recorded for DEGEM’s web radio this summer. The background music is “Field 2” from our Lovefield album.
You can listen to or download the full one hour discussion here.
I’ve been very busy these past two weeks, working on a documentary feature film mix, composing and recording for my collaborative project with guitarist Alex Dowerk, and working on other less interesting things, so I didn’t get a chance to play around with my new Steven Slate drums until today. Below you’ll find a short improv recording on which I play SSD’s Jazz Sticks kit with a spontaneously cobbled-together Max patch. This is a one-take recording without any postproduction.
On another note, I’ll be posting a lot of content for free download over the whole next week, so please check back often and share the links!
centrozoon is releasing “Fire” today, an improv session from June 2011. It’s available for free listening or as a 6€ download at http://iapetus-store.com/album/fire.
All revenue will go towards the Bonestarter campaign, which I’ve written about in the blog post “This is how we do it”. In addition to the audio release there is free video footage for Fires 2, 3, 4 and 8 on the centrozoon Youtube channel.
liner notes and credits
“Fire” documents the complete evening of centrozoon recordings at the foyer of Gütersloh’s town hall in June 2011.
Taking place five months ahead of the “We Will Tongue You 2011/2012” Tour, the Fire session was scheduled with the primary intention of recording audio and video material to promote centrozoon’s renewed intention of presenting their music live, and to acquire more gigs for the tour. It also marked the band’s first time playing together in over 18 months, so while there was no audience present, the recordings served both as a “sound check” and as a way to shorten the wait until the tour.
The revenue from this release will go to support Bonestarter, our homemade crowdsourcing campaign to finance our latest studio album “Boner” which was released on May 9, 2012.
centrozoon - Fire
Recorded and filmed on June 14, 2011 at Rathausfoyer, Gütersloh, Germany
Mastered by Lee Fletcher, Paignton UK
Thank you: Rathaus Gütersloh, Wolfgang Hein, Mia Plaßmann
Released July 14, 2012
Several times over the past few weeks I’ve heard or read the phrase “expanding my vocabulary as a musician” or some slight variation thereof. It is a common thing to say among musicians, one that I hadn’t really paid attention to since my teenage years when I was still thinking about trying to pursue an education and career as a guitar player. And now, suddenly, it caught my attention.
I wonder if this is because the algorithmic setups that I work with tend to be systems with which I interfere, rather than instruments that I play? Or because there is no set background to navigate, as in many jazz contexts for example - the context where I’ve heard it being used the most? The concept of “playing as speaking” does not appeal to me any more than the notion of “composing as communicating” (leading to that very awkward but very common question of “What is the composer trying to tell us?”). So usually I find virtuoso pop/rock/jazz improv/soloing to be - talkative. I suddenly find myself quite certain that this might have something to do with the prevailing metaphor of language.
In making and performing music we are inevitably expressing ourselves in one way or another - all kinds of momentary circumstances have an effect on the way we play. But in my view, using music to emulate language is not just a rather ineffective form of communication, but simply misses the point - I feel that whatever music does is quite different from what language does, and thinking of it in terms of language and speech will only limit one’s capability to study it on its own terms.
Or is it simply because I currently see myself more as a composer than an improviser, even when coming up with never-played-before music on stage? The complex relationships and the form of musical events here being just as much of interest as a single “utterance”, to keep using the metaphor of language.
It seems to me, then, that at this point I am not that interested in “expanding my vocabulary as a musician” - if pushed to stick with the analogy of language I’m much more interested in questioning, testing, manipulating and yes, expanding my grammars - the internal workings of music on all levels of form, structure and events.
A snapshot from this week’s rehearsal of the Tönstör Laptop Ensemble, the new electronic music ensemble for teenagers that I’m building up for Tönstör in Bern.
Featured in the background is Till Hillbrecht, DJ and media artist and my co-teacher for the current performance project, the Ensemble’s second after last year’s music for contemporary dance (documentary video here). It will see its completion on July 6 with a performance at the Lange Nacht der elektronischen Musik at Dampfzentrale Bern where the Ensemble will open for Asmus Tietchens, Christoph Heemann, Scanner and Maja Ratkje.