Bindu Point

Martin Franklin

Motion-triggered audio/video performance

For 18 months, from mid-2004 through to the end of 2005, I worked with performer Lee Adams on this project after being awarded a research and development grant from Arts Council England.

The basic concept was to use video motion tracking to produce a responsive sound field, and as we developed the project, this became a sound source that I could improvise with and shape using various software and hardware controls.

Lee Adams

Rather than treating this as a purely academic exercise, we sought to produce an engaging, dynamic performance piece as well as explore some of the possibilities of using digital media in the performance arena.

This was my first large project using Max/MSP to build a software system, but before we go into the technical steps, here’s a movie of the performance to give an idea of the experience.

Video Downloads:

Performance at Sonorities Festival, SARC, Belfast (QuickTime MOV, 7.9 MB)
Early performance video edited for the ePerformance & Plugins Festival, Sydney, Australia (QuickTime MOV, 28 MB)

Generative Output

This image shows the interface of the final application, combining the generative output principle of the original system with some live performance controls that I could use to modify the sound output.

I used the Cyclops external from Cycling '74 to provide the motion tracking, with one of the Apple FireWire iSight cameras viewing the performance space. The hotspots provided by Cyclops are configured to output a MIDI note number when movement is detected in any of five zones. Every two note values are then converted to their binary equivalents and become ‘parent’ values. These ‘parents’ are fed to a logical operator that returns 1 wherever either input is nonzero and 0 otherwise. The result is a unique ‘child’ value, which is then fed to a Korg Triton synthesiser and the software Wavestation built into the app.
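
To make that note-generation chain a little more concrete, here is a minimal Python sketch of the idea (the actual system is a Max/MSP patch, so this is only an illustration). The zone-to-note mapping and the clamp to the 0–127 MIDI range are my own assumptions.

```python
# Sketch of the parent/child note logic (not the original Max/MSP patch).
# The zone-to-note mapping and the 0-127 clamp are assumptions for illustration.

ZONE_NOTES = {0: 36, 1: 43, 2: 50, 3: 57, 4: 64}  # hypothetical MIDI note per hotspot


def combine_parents(note_a: int, note_b: int) -> int:
    """Convert two 'parent' notes to binary and merge them bit by bit:
    each output bit is 1 if either input bit is set, 0 otherwise."""
    bits_a = format(note_a, "08b")
    bits_b = format(note_b, "08b")
    child_bits = "".join("1" if a == "1" or b == "1" else "0"
                         for a, b in zip(bits_a, bits_b))
    return min(int(child_bits, 2), 127)  # keep the 'child' inside the MIDI note range


def zones_to_child(zone_a: int, zone_b: int) -> int:
    """Take two triggered hotspot zones and return the generated 'child' note."""
    return combine_parents(ZONE_NOTES[zone_a], ZONE_NOTES[zone_b])


if __name__ == "__main__":
    # e.g. movement detected in zones 1 and 3 produces one new note
    print(zones_to_child(1, 3))
```

In the sketch, two triggered hotspots yield one new ‘child’ note, which in the real system is what gets sent on to the Triton and the software Wavestation.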

Control screen for the “Bindu Point” Max patch

I wanted to be able to get my hands on and mix the sound, so the outputs from the two sound sources were run through a small mixer with an effects unit patched in. The new performance controls enable me to add sustain and pitch shifting, and to change the velocity of each note in real time. As the performance grows, the sound builds into an evolving field of layered drones that the performer simultaneously creates and responds to.
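
As a rough illustration of how those live controls might act on each generated note, here is a hedged Python sketch; the control names, value ranges, and the idea of treating a note as a (pitch, velocity, duration) tuple are assumptions for the example, not a transcription of the actual patch.

```python
from dataclasses import dataclass


# Hypothetical performance controls; names and ranges are illustrative only.
@dataclass
class PerformanceControls:
    pitch_shift: int = 0         # semitones added to each note
    velocity_scale: float = 1.0  # multiplier applied to note velocity
    sustain: float = 1.0         # multiplier applied to note duration (seconds)


def apply_controls(note: int, velocity: int, duration: float,
                   controls: PerformanceControls) -> tuple[int, int, float]:
    """Apply the live controls to one generated note before it reaches the synths."""
    shifted = max(0, min(127, note + controls.pitch_shift))
    scaled_vel = max(1, min(127, round(velocity * controls.velocity_scale)))
    sustained = duration * controls.sustain
    return shifted, scaled_vel, sustained


if __name__ == "__main__":
    controls = PerformanceControls(pitch_shift=-12, velocity_scale=0.8, sustain=4.0)
    print(apply_controls(59, 100, 0.5, controls))  # e.g. a 'child' note from the tracker
```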

Lee Adams and video projection

During rehearsals, we added video feedback to the system by projecting the trigger image back into the space and tuning the camera position to the point where it would respond to the changing light levels with chaotic, swirling images of the performance. This is probably no revelation to video artists, but in performance it provided a spectacular visualisation of the audio output.

With this system, I wanted to address the issue of mediatised performance, so there were no screens or artificial barriers between us and the audience. It’s a purely experiential performance that makes use of technology to augment the human experience, and although the audio output certainly shapes what I can do, the system allows the human element to take the lead and initiate each stage, as well as respond to the audio and video outputs.

I began the first stage of this project in 2003, when I was studying for my MA, and made a movie of Lee Adams using the first version of the system at Bow Arts Trust in East London. The movie is packaged as a skinned QuickTime called "Telecommunication" (20 MB ZIP).

Transition

After an initial foray into WordPress development for this year’s Sound:Space symposium web site, I’m starting to transition the whole Codetrip site over to the new platform.

I’ve just finished up a whole series of projects, so I’ll add news and restore some of the downloads from the old site as I go on.