
Re: [microsound] soundDesign in max/msp



Hi Michal,

isn't Max/MSP supposed to be able to do *anything*, sound-wise? please excuse my ignorance, but from what I understand:
1. sound is made up of sine waves.
2. max can produce and alter sound waves in any way.

That explanation is a bit too simplified for me. It is like saying that, since we can represent any naturally occurring sound as 1s and 0s (by sampling), it must be possible to generate any naturally occurring sound by synthetic means: just figure out which process results in that arrangement of 1s and 0s. Both claims are true in theory, but implementation never works out that way.
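To make point 1 above concrete: the claim that "sound is made up of sine waves" is really Fourier's theorem, and the gap between theory and practice shows up as soon as you try it. Here is a minimal sketch (in Python rather than Max, just for illustration) that sums odd harmonics to approximate a square wave; in theory infinitely many sines give you a perfect square, in practice you truncate the series and live with the error.

```python
import math

def square_approx(t, f0, n_harmonics):
    """Fourier series of a square wave: sum of odd harmonics, each scaled by 1/k."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k + 1) * f0 * t) / (2 * k + 1)
        for k in range(n_harmonics)
    )

# One cycle at 100 Hz, sampled at 8 kHz: more harmonics -> squarer wave.
sr, f0 = 8000, 100.0
samples = [square_approx(n / sr, f0, 20) for n in range(int(sr / f0))]
```

With 20 harmonics the waveform is recognizably square but still rings near the edges (the Gibbs phenomenon), which is exactly the "implementation never works out that way" part.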


Different interfaces result in different compositions and sound design. I've found that the music one writes in Csound is different from the music one writes with Max, SuperCollider, tape, what have you. So I think it is entirely possible for someone's piece to sound Max-ish.


2 and 3 are with respect to:
a. synthesis of the human voice

as far as I can tell, Max is not the *best* place to do this. You are looking for an environment that can handle mathematical vocal-tract models, Linear Predictive Coding (old school), or FOF synthesis; MSP would not be my first choice for any of those.


c. synthesis of non-electronic instruments

the Karplus-Strong plucked-string algorithm should be very simple to implement, since it is just a delay line feeding back through a filter.
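To show just how simple: here is a minimal Karplus-Strong sketch in Python (the same structure maps directly onto a delay line and a filter in MSP). A burst of noise circulates through a delay line whose length sets the pitch, and a two-point average acts as the lowpass filter that makes the tone decay.

```python
import random

def karplus_strong(freq, sr=44100, dur=1.0):
    """Plucked string: a noise burst circulating through a delay line + averaging filter."""
    n = int(sr / freq)                                 # delay-line length sets the pitch
    line = [random.uniform(-1, 1) for _ in range(n)]   # excitation: white noise burst
    out = []
    for _ in range(int(sr * dur)):
        s = line.pop(0)
        out.append(s)
        line.append(0.5 * (s + line[0]))               # two-point average = lowpass + decay
    return out

tone = karplus_strong(220.0, dur=0.5)                  # half a second of a plucked A3
```

The averaging filter has gain below 1 at every frequency except DC, so each trip around the loop damps the high partials faster than the low ones, which is what gives the characteristic plucked-string decay.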


I would suggest looking into waveguides, which model many instruments with delays and feedback. However, some of the more complicated models will be difficult to realize in a graphical language such as MSP.

(i.e. Are there any existing "learning" algorithms that accept input much like the adaptive filters in digital cellphones?)

I'm not sure if I understand you correctly. But I'll try to answer.
Classification of tones by pattern recognition is actually rather difficult.
That said, I think the question you are asking is whether a pattern classifier can take in a signal in the time (or frequency) domain and then construct a model of components that would recreate the sound. That would be very difficult at this point, but it would be very beneficial; it is a goal of many researchers, though not necessarily in connection with musical problems.
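For reference, the adaptive filters you mention from digital cellphones are usually LMS (least-mean-squares) filters: they "learn" by nudging their tap weights in the direction that shrinks the error between their output and a desired signal. A minimal sketch (the test rig at the bottom is hypothetical: a made-up 2-tap "unknown" system the filter has to discover):

```python
import random

def lms_identify(x, d, n_taps=4, mu=0.05):
    """LMS adaptive filter: adjust weights so the filter output tracks d."""
    w = [0.0] * n_taps
    for i in range(n_taps, len(x)):
        frame = x[i - n_taps:i][::-1]                  # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))   # current filter output
        e = d[i] - y                                   # error drives the update
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
    return w

# Hypothetical test rig: the "unknown" system is a 2-tap FIR [0.5, -0.3].
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5000)]
d = [0.0, 0.0] + [0.5 * x[i - 1] - 0.3 * x[i - 2] for i in range(2, len(x))]
w = lms_identify(x, d)                                 # should converge toward [0.5, -0.3, 0, 0]
```

This kind of adaptation works because the target system is linear; the hard problem described above (recovering a *synthesis model* of an arbitrary sound) is much harder precisely because no simple linear structure is guaranteed.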
I have done some research using recursive PDP algorithms to represent musical data types (in the time domain) implicitly (as opposed to explicitly, where every byte is stored), as a form of data reduction. The network can reconstruct the original sound files from a minimal amount of input data, somewhat like associating someone's name with all of the variables that make up the pattern of their face.

Not sure if this is interesting or not, but I hope I've answered some of your questions, at least in part.


take care,

-Bobby.


_one thought : many forms_

----------------------------------------------------------
http://www.cSounds.com
----------------------------------------------------------