
Re: [microsound] input + software = output



>In one sense, albeit a completely simplistic one, the formula is
>correct. However, it does leave out the all important element of human
>input...

what other kind of input does the software have? randomized elements
aren't exactly the zenith of articulation, so while there's a whole world
of abstract input and composition tools and techniques (spatial response
triggers, or wires hooked up to a plant, or whatever), it's still the human
who sets up the criteria for inclusion in the output.

it is definitely the case that "input + software" is a feedback loop of
multiple stages (inspiration, coding, image preparation, post-processing,
etc.), but in the case of the "musical actions" of this music - the
performance method basically mirrors the compositional gestures that we
see the wire reacting against - there's really just no way around the
fact that there are no guitars or drums involved, and music criticism is
going to have to adapt to this kind of death of visual virtuosity. as an
attempt to work through this (or perhaps it was just an anachronism),
brian reinbolt's performance at mills on saturday was largely an exercise
in wind-controller usage that provided no additional insight into the
piece at hand. for all i could tell, it was some guy walking around
sucking on a star trek toy sax. the music is so abstracted from standard
methods of "play" that a visual sense of what's being heard
is basically impossible, however much the chosen input device facilitates
the performance. this is where the value of beanbags increases
exponentially.

probably the most direct way to short-circuit the dismissal of push-button
music would be to project the laptop's display onto a screen that the
audience can see. it's the closest thing to a visual representation of
what's coming out of the speakers: "yeah, he's moving the slider up!"
<headbanging starts>

eric