I did some recording with the artist Janet Cardiff a couple of weeks back as part of a project for Modern Art Oxford. She is currently showing her work in the gallery, and does a lot of recording using the technique of binaural recording. This is sometimes done with a dummy head to collect sound reflections from the head, ears and shoulders which, when played back on headphones, create an uncanny 3D effect of sound coming from locations in the space around you.
In this case, we mounted her small binaural mics on a pair of headphones, to be worn by some of the children in the school where we worked, and then had the “actors” positioned slightly behind them for the recording.
The final assembled project will be a story, with sound effects and characters played by the school children.
I worked on the MyWorld strand of this major project for Oxfordshire Education Department for a year from September 2007. The final outcomes of my collaborative residency, working with two primary schools, students from Wood Green School in Oxfordshire, and Carver Centre for Arts & Technology, Baltimore, were shown at Modern Art Oxford during September 2008.
I approached this project as a developmental process, only knowing that I wanted to collect material and use it to contrast the online and physical environments that surrounded the project hosts in Oxfordshire and Baltimore – but not knowing exactly how.
As a “resident” artist in the schools, I tried to introduce as many new techniques and possibilities to the students as I could, leaving them to pick up and develop those techniques that were of most interest to them.
Practically, we went out on several field trips to significant locations in the lives of the students and recorded audio, video and still images from these spaces.
Two distinct works came out of this material, the first being a large two-channel moving image piece, assembled from panoramic still photographs and manipulated location recordings from equivalent places in the UK and USA. We set up video conferences between the students from Wood Green and Carver College to discuss the work, and finally swapped the material that we had gathered. I worked on the huge task of assembling the material with four students from Wood Green School, who elected to put in extra time to complete the necessary steps.
The second piece used similar source material, but was constructed as two “sound-maps” where visitors could listen through headphones by plugging an audio cable into one of several sockets cut into a map of our route, each socket providing access to recordings taken at one of the significant locations.
Here are a couple of audio examples from the Boar’s Hole, a resonant brick tunnel and stream, running under the main rail line to London in Cholsey, West Oxfordshire:
Inside the Boar’s Hole
The Underground Stream
For 18 months from mid-2004 through to the end of 2005, I worked with performer Lee Adams on this project after being awarded a research and development grant from Arts Council England.
The basic concept was to use video motion tracking to produce a responsive sound field, and as we developed the project, this became a sound source that I could improvise with and shape using various software and hardware controls.
Rather than being a purely academic exercise, we sought to produce an engaging, dynamic performance piece as well as to explore some of the possibilities of using digital media in the performance arena.
This was my first large project using Max/MSP to build a software system, but before we go into the technical steps, here’s a movie of the performance to give a sense of the experience.
This image shows the interface of the final application, combining the generative output principle of the original system with some live performance controls that I could use to modify the sound output.
I used the Cyclops external from Cycling ’74 to provide the motion tracking, with one of the Apple iSight FireWire cameras viewing the performance space. The hotspots provided by Cyclops are configured to output a MIDI note number when movement is detected in any of five zones. Each pair of note values is then converted to its binary equivalent, and the two become ‘parent’ values. These ‘parents’ are fed to a logical operator that flips the output to 1 wherever either input is nonzero, and otherwise returns zero. The outcome is a unique ‘child’ value, which is fed to a Korg Triton synthesiser and to the software Wavestation built into the app.
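The original logic lives in the Max/MSP patch, but the note-combination step can be sketched in a few lines of Python. This is a minimal illustration only, assuming the logical operator acts across the bits of the two parents (a bitwise OR); the function name and example notes are hypothetical.

```python
# Hypothetical sketch of the 'parent'/'child' note step.
# Assumption: the logical operator works per bit, setting each output
# bit to 1 where either parent has a 1 in that position (bitwise OR).

def child_note(parent_a: int, parent_b: int) -> int:
    """Combine two 'parent' MIDI note numbers into one 'child' value."""
    return (parent_a | parent_b) & 0x7F  # keep within the 0-127 MIDI range

# Movement in two Cyclops hotspots might emit notes 60 and 67:
# 60 = 0b0111100, 67 = 0b1000011, so the child is 0b1111111 = 127.
print(child_note(60, 67))  # -> 127
```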
I wanted to be able to get my hands on the sound and mix it, so the outputs from the two sound sources were run through a small mixer with an effects unit patched in. The new performance controls enable me to add sustain, apply pitch shifting and change the velocity of each note in real time. As the performance grows, the sound builds into an evolving field of layered drones that the performer simultaneously creates and responds to.
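As a rough sketch of how those three controls might act on a single note event before it reaches the synths (the names, ranges and structure here are assumptions, not the actual Max patch):

```python
# Hypothetical sketch: sustain, pitch shift and velocity scaling
# applied to one note event in real time.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int        # MIDI note number, 0-127
    velocity: int     # MIDI velocity, 0-127
    duration_ms: int  # how long the note sounds

def apply_controls(note: NoteEvent, transpose: int = 0,
                   velocity_scale: float = 1.0,
                   sustain_ms: int = 0) -> NoteEvent:
    """Return a copy of the note with the live controls applied."""
    return NoteEvent(
        pitch=max(0, min(127, note.pitch + transpose)),
        velocity=max(1, min(127, round(note.velocity * velocity_scale))),
        duration_ms=note.duration_ms + sustain_ms,
    )

# e.g. shift a child note up a fifth, soften it, and let it ring longer
print(apply_controls(NoteEvent(pitch=64, velocity=100, duration_ms=500),
                     transpose=7, velocity_scale=0.8, sustain_ms=1500))
```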
During rehearsals, we added video feedback to the system, by projecting the trigger image back into the space and tuning the camera position to the point where it would respond to the changing light levels with chaotic, swirling images of the performance. This is probably no revelation to video artists, but in performance, it provided a spectacular visualisation of the audio output.
With this system, I wanted to address the issue of mediatised performance, so there were no screens or artificial barriers between us and the audience. It’s a purely experiential performance that uses technology to augment the human experience, and although the audio output certainly shapes what I can do, the system allows the human element to take the lead and initiate each stage, as well as respond to the audio and video outputs.
I began the first stage of this project in 2003, when I was studying for my MA, and made a movie of Lee Adams using the first version of the system at Bow Arts Trust in East London. The movie is packaged as a skinned QuickTime called “Telecommunication” (20MB ZIP).