
Re: [microsound] pixels per minute?



i was thinking along similar lines a while back.. wanted to play around with
graphical generation within frequency analyzers..

this is just a sketch idea.. maybe someone who has more experience with this can
comment. i might be off on a tangent, but perhaps this is interesting to someone
anyway.

i think pixelation is perceptual.. the most basic audio pixelation would be
something like a line of single samples.. each represented at the bit depth of
the digital audio format.. and playing with them would mean writing a program
to generate patterned or random streams of samples, completely disregarding the
concept of waveforms, and seeing what happens.
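a minimal sketch of that idea, assuming 16-bit mono WAV output and a made-up
square-ish pattern (the file names, pattern width, and amplitudes are all just
examples):

```python
# sketch: generate one second of patterned and random raw samples,
# ignoring any notion of a waveform, and write each as a 16-bit mono WAV
import random
import struct
import wave

SR = 44100  # sample rate

def random_stream(n):
    # every sample is an independent random 16-bit value -> white noise
    return [random.randint(-32768, 32767) for _ in range(n)]

def patterned_stream(n):
    # a crude "pixel" pattern: 100 samples high, then 100 samples low
    return [20000 if (i // 100) % 2 == 0 else -20000 for i in range(n)]

def write_wav(path, samples):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16 bits per sample
        w.setframerate(SR)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

write_wav("random.wav", random_stream(SR))
write_wav("pattern.wav", patterned_stream(SR))
```

the patterned stream ends up sounding like a buzzy pulse around 220hz, which
shows how quickly "pixels" collapse back into perceived waveforms.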

since pixels are discrete and sound is analogue.. that's where the major problem
lies.. but that's ok, there's still fun to be had

here's something to create a frequency-based pixelated grid.. i don't know if it
works however; just theory

anyway, it depends on sample rate..

let's say the sample rate is 44100hz and the nyquist is 22050

let's let the maximum representable frequency influence the pixel size..

so hmm, we can make 22050 represent 10 pixels.. depending on the resolution we
want..

2205 hz per pixel or so.. now we can turn the audio spectrum into a 10 pixel
grid from 0hz to the maximum (nyquist) hz.. we can take the midpoint of each
band to produce 10 actual values using something like

pixel value = final - ((final - initial) / 2)

or heck, a better way to do it is to synthesize and sum sines of all
frequencies that fall within our init and final range.. that's more like pixel
blocks..

init	final
-----------------
[19845 - 22050] 
[17640 - 19844]
[15435 - 17639]
[13230 - 15434]
[11025 - 13229]
[8820 - 11024]
[6615 - 8819]
[4410 - 6614]
[2205 - 4409]
[0 - 2204]

(our pixel is either the midpoint or the sum of all frequencies in between)
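a quick sketch of the band split and midpoint formula above (function names are
mine; note the table's top band runs one hz wide, to 22050, because it includes
the nyquist itself, while this version stops at 22049):

```python
# sketch: split 0 hz .. nyquist into N frequency "pixels" and take the
# midpoint of each band using the formula from the post:
#   pixel value = final - ((final - initial) / 2)
SR = 44100
NYQUIST = SR // 2           # 22050
N_PIXELS = 10
BAND = NYQUIST // N_PIXELS  # 2205 hz per pixel

def bands(n=N_PIXELS, width=BAND):
    # [(init, final), ...] from low to high, matching the table above
    return [(i * width, (i + 1) * width - 1) for i in range(n)]

def midpoint(init, final):
    return final - (final - init) / 2

for init, final in reversed(bands()):
    print("[%5d - %5d]  midpoint %.1f" % (init, final, midpoint(init, final)))
```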

so now we have 10 or more pixels represented in a second of audio. in theory we
can also increase the time resolution by dividing the nyquist while also
dividing time..

so if we want 10 pixels represented in 0.5 seconds of audio at 44100hz we can
use a top range of 0.5*22050.. (11025).. hrm, it also seems to squish our grid
a bit.. oh well..

so now it's a matter of taking the pixel grid and converting it to frequency
data, which is a bit more difficult.. but synthesizing sines based on your data
values in the calculated time frame should do the trick..

i guess we can run into problems if we want "colour" though.. we then have to
set up a colour range.. that's ok.. we can produce these frequencies at varying
amplitudes.. use amplitude to set up the colour range.. the only problem i see
with this colour stuff is sample clipping.. oh well
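one way around the clipping problem is to normalize per frame: map colour to
amplitude, then rescale whenever the summed amplitudes would exceed full scale
(a sketch, assuming 8-bit greyscale values; the function names are mine):

```python
# sketch: map a "colour" value (0..255 greyscale) to an amplitude, then
# normalize the per-frame sum so summed sines can't clip
def colour_to_amp(value, max_value=255):
    return value / max_value            # 0.0 .. 1.0

def safe_amps(frame_values, max_value=255):
    amps = [colour_to_amp(v, max_value) for v in frame_values]
    total = sum(amps)
    if total > 1.0:                     # summed sines could exceed full scale
        amps = [a / total for a in amps]
    return amps
```

the tradeoff is that a bright frame gets quieter per pixel than a dark one, so
"colour" becomes relative within each frame.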

anyone played more with this stuff? i was motivated to do some experiments with
it.. but that motivation no longer exists.

you could probably do some neat stuff with a simple application that generates
csound files.. use csound for your synthesis and the application for your data input
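for instance, a little script that writes a csound score from a pixel grid..
this assumes an orchestra where instr 1 reads amplitude from p4 and frequency
from p5 (e.g. a plain table-lookup oscillator); the layout and names are just
a sketch:

```python
# sketch: emit a csound score from a pixel grid; one "i" event per lit
# pixel, at the band's midpoint frequency, one event row per time frame
SR = 44100
NYQUIST = SR // 2

def grid_to_score(grid, frame_dur=0.5, max_amp=20000):
    band = NYQUIST / len(grid[0])
    lines = ["f1 0 8192 10 1"]                  # sine wave table
    for f, frame in enumerate(grid):
        start = f * frame_dur
        for j, v in enumerate(frame):
            if v > 0:
                freq = j * band + band / 2      # band midpoint
                lines.append("i1 %.3f %.3f %d %.1f"
                             % (start, frame_dur, int(v * max_amp), freq))
    lines.append("e")
    return "\n".join(lines)

print(grid_to_score([[1, 0], [0, 1]]))
```

then csound does the synthesis and the script is just your data input, like
the post says.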

hours of entertainment

Quoting Kassen <kassen@xxxxxxxxxxxxxx>:

> graham miller wonders;
> 
> 
> > i know this may sound like a ridiculous question, but how many pixel per
> > minute of audio is processed in real time? i'm not talking visual
> > waveforms here, but rather 'pixels' of sound (as in the smallest measure
> > of computer detail in the visual realm).
> 
> I think it's perfectly valid to look at it that way but I also think it's
> heavily context dependent. Many audio programs process audio in blocks of a
> set of samples (sample as in a single value) where each block will have the
> same values for controlling signals. You could look at those blocks as pixels
> or as clusters of pixels, both are valid, I think, but the size of such a
> block will depend on the buffer of your soundcard or your program's settings
> or.....
> 
> It gets harder if you chain up several operations in some modular system.
> It's clear that everything that arrives at the output was processed but
> suppose we have a chain that goes like; VCO, VCF, envelope, chorus (heresy
> on this list!). Clearly everything that got out of the chorus was processed
> but you could argue that the filter and the envelope are processing just as
> many "pixels". Worse yet, some packages will use internal processing at a
> higher bit rate than the processed sample is. That would mean that arguably
> many more pixels are processed than will ever be heard. Also; what exactly
> is "one process"? A reverb might be seen as one process but your homebuilt
> contraption that uses a reverb after a granulator might be "one process" to
> you.
> 
> If we could agree on what standard to adhere to when talking about "pixels
> per second" it might be an interesting unit, for example to compare
> workstations and digital processors, but practically speaking I think the
> only thing that really matters is the percentage of cpu time (and perhaps
> memory) a certain operation or patch will cost on your tool of choice.
> 
> I think it's an interesting question to contemplate but I fear it won't
> result in a very useful standard of measurement.
> 
> Kas.
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: microsound-unsubscribe@xxxxxxxxxxxxx
> For additional commands, e-mail: microsound-help@xxxxxxxxxxxxx
> website: http://www.microsound.org
> 
> 




