imago - an audiovisual triptych
the original idea was that the voice of a single singer/performer should be the sole generator of everything going on musically and visually. the audio processing was therefore split into two parts:
first, the singer's utterances were analysed and the cooked data sent to the visual engine via osc and to the stage lights via a lanbox. second, the voice itself was transformed through several processes. the programs employed for these tasks were max/msp and kyma on the capybara sound design engine.
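the analysis data travelled as osc messages. a minimal sketch of how such a message is encoded on the wire (plain python, stdlib only; the address /voice/brightness is a hypothetical example, not our actual address space):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    # osc strings are null-terminated and padded to a multiple of 4 bytes
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    # encode a minimal osc message carrying float32 arguments
    tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode("ascii")) + osc_pad(tags.encode("ascii"))
    for v in floats:
        msg += struct.pack(">f", v)  # big-endian float32, per the osc spec
    return msg
```

sending it to the visual engine is then a single `sock.sendto(msg, (host, port))` on a udp socket.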
for analysis in max/msp we used tristan jehan's objects to track the brightness and loudness of the singer's voice. in kyma we used peak followers. due to the nature of the sounds produced by the singer, pitch trackers were not adequate.
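the two tracked features correspond to standard measures: loudness as rms amplitude and brightness as the spectral centroid. a rough numpy sketch of that analysis (an illustration of the measures, not the actual max/msp patch; frame size and sample rate are arbitrary):

```python
import numpy as np

def analyse_frame(frame: np.ndarray, sr: int = 44100):
    """return (loudness, brightness) for one audio frame.
    loudness: rms amplitude; brightness: spectral centroid in hz."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    total = float(np.sum(spectrum))
    centroid = float(np.sum(freqs * spectrum)) / total if total > 0 else 0.0
    return rms, centroid
```

a pure sine at 1 khz yields an rms near 0.707 and a centroid near 1000 hz; noisy, bright vocal sounds push the centroid upward.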
sound processing in max/msp mainly meant sampling portions of the singer's voice and then reading those samples out in different ways: pitching, stretching/squeezing, freezing and granulating the voice, then applying filters and vst effects, and eventually feeding the signal back to the input to start the process again...
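a toy granulator illustrating the sample-readout idea (numpy; grain count and length are arbitrary, and the real-time feedback path into the live input is omitted):

```python
import numpy as np

def granulate(buffer, n_grains=200, grain_len=1024, out_len=44100, rng=None):
    """scatter hann-windowed grains read from `buffer` across an output
    buffer by overlap-add; reading the grains at other rates would give
    the pitching/stretching variants."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = np.zeros(out_len)
    env = np.hanning(grain_len)  # fade each grain in and out to avoid clicks
    for _ in range(n_grains):
        src = rng.integers(0, len(buffer) - grain_len)   # where to read
        dst = rng.integers(0, out_len - grain_len)       # where to write
        out[dst:dst + grain_len] += buffer[src:src + grain_len] * env
    return out
```

freezing is the special case where `src` stops advancing, so the same grain region is read over and over.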
with its 12 dsp cards (out of 24 possible), the capybara was dedicated to the more processor-intensive cross-synthesis and vocoding routines.
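cross synthesis can be sketched as imposing one signal's magnitude spectrum onto another's phase spectrum. this crude single-frame version only illustrates the principle, not kyma's far more elaborate routines:

```python
import numpy as np

def vocode_frame(carrier, modulator, eps=1e-12):
    """crude single-frame cross synthesis: keep the carrier's spectral
    phase, impose the modulator's magnitude envelope."""
    win = np.hanning(len(carrier))
    C = np.fft.rfft(carrier * win)
    M = np.abs(np.fft.rfft(modulator * win))
    # normalize carrier bins to unit magnitude, then scale by modulator magnitude
    out = np.fft.irfft(C / (np.abs(C) + eps) * M, n=len(carrier))
    return out
```

a real vocoder would run this frame by frame with overlapping windows, and usually smooth the modulator magnitudes into bands.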
at some point we decided to spice up the sound with material not produced by the singer. those rhythmical patterns were generated within kyma, and every time a trigger occurred it was also sent to the visual engine. to compensate for network lag, the trigger was actually sent a few milliseconds before the sound was triggered in kyma. since the same data thus drove both sound and visuals, the synaesthetic link between them is quite strong.
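the early-trigger trick can be sketched as a tiny scheduler: every rhythmic event yields two timestamps, the osc send (early by the measured lag) and the local sound trigger. the 12 ms lag here is a made-up placeholder, not a value we measured:

```python
LAG_MS = 12  # assumed one-way network latency to the visual machine (placeholder)

def schedule(pattern_times_ms, lag_ms=LAG_MS):
    """for each rhythmic trigger time, emit the osc send (early by lag_ms)
    and the local sound trigger, sorted for a single event loop."""
    events = []
    for t in pattern_times_ms:
        events.append((t - lag_ms, "osc->visuals"))
        events.append((t, "sound"))
    return sorted(events)
```

by the time the osc packet has crossed the network, the sound fires locally, so both sides land together for the audience.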
finally, all the audio was fed into max/msp and, with the help of ville pulkki's vbap object, distributed dynamically over the 6-channel PA through an edirol fa-101 interface.
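vbap places a virtual source between the pair of loudspeakers enclosing it by solving a small linear system for the gains. a 2-d sketch for one speaker pair (pure python; the real vbap object also handles pair selection across all 6 speakers):

```python
import math

def vbap_pair(source_deg, spk1_deg, spk2_deg):
    """amplitude gains placing a virtual source between two loudspeakers
    (2-d vector base amplitude panning), power-normalized."""
    a = math.radians(source_deg)
    p = (math.cos(a), math.sin(a))                # unit vector to the source
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    # solve [l1 l2] * g = p by cramer's rule
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)                     # keep total power constant
    return g1 / norm, g2 / norm
```

a source exactly between two speakers gets equal gains of 1/√2; a source on a speaker gets gain 1 on that speaker and 0 on the other.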