
Crossing Over

I played a concert as part of Simon Whetham’s “Active Crossover” project a couple of weeks back, as I mentioned a few posts ago. I really enjoyed getting back out to perform in public; there’s something about a real concert environment that tests the material in a way that can’t be duplicated otherwise.

I always love collaborating with other people, so the best part of the performance for me was the 10-minute “cross over” section where I improvised with Phil Julian/Cheapmachines. Phil was working with a tone generator and a telephone pickup amplifying electronic sounds from inside his laptop, while I made lo-fi loops on my old Casio SK1 and used my electronic theremin with some LED finger torches.

I’ve just made an episode of the “Gene Pool” radio show/podcast from the concert mix that Simon sent me, and you can hear it over on the Digital Media Centre Podcast page.

Active Crossover

Simon Whetham: Active Crossover

I’ve been invited to perform as part of Simon Whetham’s “Active Crossover” touring installation project on 1st April at the Rising Sun Arts Centre, Reading.

I’ll be there alongside Jonathan Coleclough, Cheapmachines, Mark Durgan, Felicity Ford & Simon Whetham.

Mr. Whetham’s gallery project showcases an interesting process where he creates audio pieces in response to the exhibition locations, then collaborates with a range of artists from that locality and contrasts his pieces with recordings of improvised performances which top and tail the show. The Active Crossover web site says:

In the other chamber are pieces by artists that Simon has performed and collaborated with through running the project, and through their ongoing cultural exchanges, recordings and events. In the main exhibition space you can hear the sound bleeding out and mingling from both spaces, forming a new and evolving communal work.

Integral to the project is a series of performance evenings which investigate improvisation and collaborative working methods…

It’s been such a long time since I did any performing that I’ve got to get my practice schedule rolling. I’m planning to try a new format for the concert, so it should be good.

Bindu Point

Martin Franklin

Motion-triggered audio/video performance

For 18 months, from mid-2004 through to the end of 2005, I worked with performer Lee Adams on this project after being awarded a research and development grant from Arts Council England.

The basic concept was to use video motion tracking to produce a responsive sound field, and as we developed the project, this became a sound source that I could improvise with and shape using various software and hardware controls.

Lee Adams

Rather than treating this as a purely academic exercise, we sought to produce an engaging, dynamic performance piece as well as explore some of the possibilities of using digital media in the performance arena.

This was my first large project using Max/MSP to build a software system, but before we go into the technical steps, here’s a movie of the performance to give an idea of the experience.

Video Downloads:

Performance at Sonorities Festival, SARC, Belfast (QuickTime MOV, 7.9 MB)
Early performance video edited for ePerformance & Plugins Festival, Sydney, Australia (QuickTime MOV, 28 MB)

Generative Output

This image shows the interface of the final application, combining the generative output principle of the original system with some live performance controls that I could use to modify the sound output.

I used the Cyclops external from Cycling ’74 to provide the motion tracking, with one of the Apple iSight FireWire cameras viewing the performance space. The hotspots provided by Cyclops are configured to output a MIDI note number when movement is detected in any of five zones. Each pair of note values is then converted to its binary equivalent, and these become ‘parent’ values. The parents are fed to a logical operator that returns 1 wherever either input is nonzero, and zero otherwise. The outcome is a unique ‘child’ value, which is then fed to a Korg Triton synthesiser and the software Wavestation built into the app.
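As a rough illustration of that parent/child step, here’s a minimal sketch in Python. Treating the logical operator as a bitwise OR over 7-bit MIDI note numbers is my reading of the description above, not the literal contents of the Max patch:

    # Minimal sketch of the generative note logic described above.
    # Assumption: the 'logical operator' is modelled as a bitwise OR
    # over the binary forms of each pair of detected note numbers.

    def child_note(parent_a: int, parent_b: int) -> int:
        """Combine two 'parent' MIDI notes into one 'child' note.

        Each parent is treated as a 7-bit binary value; the child has
        a 1 in every bit position where either parent does, clamped to
        the valid MIDI note range 0-127.
        """
        return (parent_a | parent_b) & 0x7F

    # Example: movement in two hotspot zones emits notes 60 and 67.
    # 60 = 0b0111100, 67 = 0b1000011, OR = 0b1111111 = 127.
    print(child_note(60, 67))  # -> 127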

Control screen for the “Bindu Point” Max patch

I wanted to be able to get my hands on and mix the sound, so the outputs from the two sound sources were run through a small mixer with an effects unit patched in. The new performance controls enabled me to add sustain, apply pitch shifting and change the velocity of each note in real time. As the performance grows, the sound builds into an evolving field of layered drones that the performer simultaneously creates and responds to.
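For a feel of what those controls did to each note event, here’s a simplified sketch; the names and structure are purely illustrative, since the real processing lived in the Max patch and the outboard mixer/effects chain:

    # Simplified sketch of the live performance controls described above:
    # sustain, pitch shifting and velocity change applied per note event.
    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        pitch: int        # MIDI note number, 0-127
        velocity: int     # MIDI velocity, 0-127
        duration_ms: int  # how long the note sounds

    def apply_controls(note: NoteEvent, sustain_ms: int = 0,
                       pitch_shift: int = 0,
                       velocity_scale: float = 1.0) -> NoteEvent:
        """Return a modified copy of a note, as the live controls would."""
        return NoteEvent(
            pitch=max(0, min(127, note.pitch + pitch_shift)),
            velocity=max(0, min(127, int(note.velocity * velocity_scale))),
            duration_ms=note.duration_ms + sustain_ms,
        )

    # Example: lengthen a note, shift it up a fifth, soften it slightly.
    print(apply_controls(NoteEvent(60, 100, 500),
                         sustain_ms=2000, pitch_shift=7,
                         velocity_scale=0.8))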

Lee Adams and video projection

During rehearsals, we added video feedback to the system by projecting the trigger image back into the space and tuning the camera position to the point where it would respond to the changing light levels with chaotic, swirling images of the performance. This is probably no revelation to video artists, but in performance it provided a spectacular visualisation of the audio output.

With this system, I wanted to address the issue of mediatised performance, so there were no screens or artificial barriers between us and the audience. It’s a purely experiential performance that makes use of technology to augment the human experience, and although the audio output certainly shaped what I could do, the system allows the human element to take the lead and initiate each stage, as well as respond to the audio and video outputs.

I began the first stage of this project in 2003, when I was studying for my MA, and made a movie of Lee Adams using the first version of the system at Bow Arts Trust in East London. The movie is packaged as a skinned QuickTime called “Telecommunication” (20 MB ZIP).