[music-dsp] Look-ahead & buffering

Citizen Chunk citizenchunk at nyc.rr.com
Sun Jan 25 18:32:23 EST 2004

Hi David. thanks for replying. your advice on buffering will come in 
handy when i'm ready to tackle that.

actually, my question had more to do with the theory of look-ahead 
dynamics. i brought it up because, upon searching the archives (i 
always search before i open my big mouth), the few references that i 
found seemed to be overly complicated approaches to look ahead. i 
wanted to make sure that i wasn't missing anything.

to me, all it seems to be is using the current envelope sample to 
modulate a previous input sample. like i wrote earlier:

envelope = f(x)
gR = g(envelope) = g(f(x))

out(x - lookahead) = in(x - lookahead) * gR

if the time constants are set right, it should compensate for the 
delayed reaction of the attack time. does this make sense?
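in code terms, a minimal sketch of that idea might look like the following (the peak envelope with exponential release, the hard-knee gain curve, and all parameter values are just my assumptions for illustration, not a finished design):

```python
# look-ahead gain sketch: the envelope follows the *current* input,
# but the gain it produces is applied to a sample `lookahead` samples
# in the past, pulled from a delay line.

from collections import deque

def lookahead_limit(signal, threshold=0.5, lookahead=4, release=0.999):
    delay = deque([0.0] * lookahead, maxlen=lookahead)
    env = 0.0
    out = []
    for x in signal:
        # peak envelope with exponential release: f(x)
        env = max(abs(x), env * release)
        # hard-knee gain curve: g(envelope)
        gain = threshold / env if env > threshold else 1.0
        # out(x - lookahead) = in(x - lookahead) * gR
        delayed = delay[0]
        delay.append(x)
        out.append(delayed * gain)
    return out
```

by the time a peak emerges from the delay line, the gain has already been pulled down, so the envelope's reaction lag never lets the peak through at full level.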

after more thought on the other techniques (sorting, etc.), i believe 
they were aimed at a different type of processor. maybe an 
auto-attack/release limiter/maximizer, or something like that. here's a 
guess:

if you use sorting to find the greatest sample in the buffer, you can 
adapt your envelope around the peak, and so on. that's my best guess as 
to why one would choose to do that, unless of course i am missing the 
big picture. which is why i posted.
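for what it's worth, a full sort isn't required just to find the greatest sample in a window; a running maximum does the same job more cheaply. a sketch (the monotonic-deque approach and window size here are my choices for illustration, not necessarily what those archived posts used):

```python
from collections import deque

def windowed_peak(signal, window=8):
    """Running maximum of |x| over the last `window` samples, using a
    monotonic deque: O(1) amortized per sample instead of a sort."""
    buf = deque()   # holds (index, magnitude) pairs, magnitudes decreasing
    peaks = []
    for i, x in enumerate(signal):
        m = abs(x)
        # drop anything smaller than the new sample; it can never be the max
        while buf and buf[-1][1] <= m:
            buf.pop()
        buf.append((i, m))
        # drop the front if it has slid out of the window
        if buf[0][0] <= i - window:
            buf.popleft()
        peaks.append(buf[0][1])
    return peaks
```

the front of the deque is always the peak of the current window, which is all an envelope shaper would need from the buffer.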

you bring up some good points about buffering. to tell the truth, i 
hadn't even considered HOW i was going to go about buffering yet. in 
order to achieve a reasonable lookahead, i will either need to 
extrapolate/predict, like you suggested, or perhaps just rely on 
auto-latency compensation to bail me out. (i'm always looking for an 
easy way out. :) )

thanks again.

On Jan 23, 2004, at 1:03 PM, David Olofson wrote:
> Well, buffering means increased latency, so for real time processing,
> you would want to keep the delay minimal, and make the best possible
> use of the look-ahead time it provides.
> Just an idea: You could try implementing a short look-ahead using
> buffering, and extend it using some extrapolation/prediction method.
> The latter should probably be a bit on the conservative side (that
> is, not too aggressive with the gain reduction), and the former would
> make the final adjustment. The delay should be sufficient for
> avoiding clicks, but if the delay is too short, there would still be
> low frequency thuds occasionally - so the prediction based stage
> tries to get close enough that the thuds are minimized.
>> The reason I ask is that, when I search for "look ahead" in the
>> archives, I get a bunch of what seems like, to me, overly
>> complicated solutions to the problem.
> It is a hard problem - if you need to keep the latency minimal and
> still get good results. To get the "thuds" outside the audible range
> with a naive algorithm, you need a delay of at least 50 ms, which
> would mean your "magic 3 ms" system gets useless for real time
> monitoring all of a sudden.
>> Like one solution uses
>> sorting to find the greatest level in the buffer. Is that really
>> necessary?
> Can't say, but I suspect the sorting is about more than just finding
> the peak value. Sorting gives you all the samples ordered by
> magnitude, which might be useful for some statistical analysis. (Only
> guessing, as I'm not sure which algorithm you're referring to. Do you
> have a URL?)
> //David Olofson - Programmer, Composer, Open Source Advocate
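david's two-stage suggestion above could be sketched roughly like this (the linear envelope extrapolation, the halved slope as the "conservative" part, and every name and parameter are my guesses at one way to realize it, not his actual design):

```python
# two-stage gain: a conservative prediction stage starts pulling the
# gain down as the input rises, and a short buffered look-ahead stage
# makes the final, exact correction. final gain is whichever stage
# demands more reduction.

from collections import deque

def two_stage_gain(signal, threshold=0.5, lookahead=4, horizon=8):
    delay = deque([0.0] * lookahead, maxlen=lookahead)
    prev = 0.0
    out = []
    for x in signal:
        delayed = delay[0]
        # exact stage: true peak over the buffered look-ahead window
        peak = max([abs(x)] + [abs(v) for v in delay])
        exact = threshold / peak if peak > threshold else 1.0
        # prediction stage: linearly extrapolate |x| `horizon` samples
        # ahead, kept conservative by halving the projected overshoot
        projected = abs(x) + 0.5 * (abs(x) - prev) * horizon
        predicted = threshold / projected if projected > threshold else 1.0
        prev = abs(x)
        delay.append(x)
        out.append(delayed * min(exact, predicted))
    return out
```

the exact stage alone already guarantees the delayed output never exceeds the threshold; the prediction stage just eases the gain down earlier so the reduction is less abrupt.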

More information about the music-dsp mailing list