Re: [microsound] live performance
At 8:42 PM +0000 12/8/02, wesley m wrote:
>describe your hardware/software setup, the process of producing and
>processing sounds in a live performance context, and the sonic
>content of the performance in relation to your setup.
Below is the typical sound setup used in recent performances. In past
years certain particulars have varied, but the basic approach is
similar.
Sound sources:
  Two (2) CD players

Sound processors:
  Eventide Orville (dual processor)
  Eventide DSP4000
  Eventide H3000

Mixing:
  Mackie 1604

Control:
  Niche Automation Station MIDI fader box
Performance is entirely improvised, though based on several hours of
practice with the selected source materials. The CD players are patched
into the Mackie mixer, and the Eventide processors are fed via auxiliary
sends. Typically each of the four processors is fed a mono signal.
Processor outputs return to channel inputs on the mixer, so that the
output of any processor can be re-routed to the inputs of any others.
Control of certain processing parameters is done with the Niche MIDI
faders.
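For anyone curious what the fader control amounts to on the wire, here
is a minimal Python sketch (using the mido library) of the kind of
control-change traffic a MIDI fader box like the Niche streams to a
processor. The port name, channel, and controller number are
placeholders for illustration, not the actual mapping I use; the
parameter assignments themselves live inside the Eventides.

import mido

# Open a MIDI output port (the name here is hypothetical; list the
# ports your interface exposes with mido.get_output_names()).
port = mido.open_output('MIDI Interface Port 1')

# Moving one "fader" from bottom to top is just a sweep of
# control-change messages. CC 12 on channel 1 is an arbitrary choice;
# the processor decides what parameter it drives (delay mix, pitch
# shift amount, and so on).
for value in range(0, 128, 8):
    msg = mido.Message('control_change', channel=0, control=12, value=value)
    port.send(msg)

port.close()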
Performance strategy is based on the recognizability of most source
recordings. These range from spoken voice (William Burroughs, Noam
Chomsky, Alvin Lucier, James Joyce, et al.), to environmental sounds
(surf, rain, insects, short wave radio, etc.) to music (Satie,
Xenakis, the Who, Dick Dale, Conlon Nancarrow, Bach, et al.).
Juxtapositions of sounds are sometimes musically motivated and
sometimes theatrically motivated. The particular choice of material
can be haphazard or meaningful, depending on my mood, and the number
of different sources can vary from just two to a dozen or more. For
instance, my last performance was on the campus of UC Santa Cruz.
This is where I first heard a recording of Conlon Nancarrow's player
piano music (in Jim Tenney's computer music class) and it is also
where I met my violist partner of the past 16 years. I selected a
recording of one of her recitals, playing a Bach Cello suite, and I
chose a Nancarrow CD (Nancarrow happens also to have been influenced
by Bach).
This particular performance was based mainly on capturing source
material in a 4-track looping program on the Orville and then
postprocessing the looping tracks with the other Eventides. Short
snippets of the source would be captured into the loops without my
being able to audition the source in advance, so everything was based
on an initial serendipity. Whatever popped up was what I had to
make music out of. Some of the postprocessing was extreme, so that
the sonic identity of the sources could be radically obscured.
Sometimes the original looping sounds would be suppressed in favor of
the processed sounds, then brought back in some form later in the
performance.
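The blind-capture idea is simple enough to sketch in a few lines of
Python (numpy only). This is not how the Orville's looping program is
written, just an illustration of grabbing unauditioned snippets from a
source into a fixed number of loop tracks; the sample rate, loop
length, and noise "source" are stand-ins.

import numpy as np

SR = 44100          # sample rate
LOOP_SECONDS = 4    # length of each loop track
NUM_TRACKS = 4      # four loop tracks, as in the Orville program

# Stand-in for a CD source: noise here, a decoded audio file in practice.
source = np.random.randn(SR * 60).astype(np.float32)

rng = np.random.default_rng()
loops = np.zeros((NUM_TRACKS, SR * LOOP_SECONDS), dtype=np.float32)

# Capture a snippet into each loop track from a random, unauditioned
# position in the source -- whatever pops up is what you work with.
for track in range(NUM_TRACKS):
    start = rng.integers(0, len(source) - SR * LOOP_SECONDS)
    loops[track] = source[start:start + SR * LOOP_SECONDS]

# Playback then cycles each track while the other processors
# postprocess it; np.tile(loops[track], repeats) gives the raw loop.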
I've used this general approach with live performers as well,
starting in the early 1980s when I was working regularly with
Diamanda Galas, then over a 10-year period with bassist Robert Black
and others, and more recently with the ensemble Cosmic Debris (AKA
Alias Zone in its studio incarnation). At various times the
processing kit has included a TC2290 delay, MIDI controlled from a
Max patch, and assorted other digital processors. Playback has
occasionally come from cassette as well as CD, with the TASCAM Porta
One being a key playback device (handy for its 4-track feature, which
also allows backwards playback of cassettes).
--
______________________________________________________________
Richard Zvonar, PhD
(818) 788-2202
http://www.zvonar.com
http://RZCybernetics.com