[music-dsp] About volume ramping
david at olofson.net
Sat Apr 1 15:24:49 EST 2006
On Saturday 01 April 2006 21:47, Didier Dambrin wrote:
> Everyone probably ramps volume in a linear way; it works well and
> uses little CPU.
> However, if CPU didn't matter, what would be the best, ideal method
> to ramp in the minimum time possible? Would this require filtering,
> as high freqs wouldn't require as much time to ramp as lower ones?
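The linear ramping mentioned above is usually just a fixed step per sample toward the target gain. A minimal sketch (function and parameter names are mine, not from any particular codebase):

```c
#include <stddef.h>

/* Linear volume ramp: move *gain toward target by a fixed step per
 * sample, clamping at the target, and apply it to the buffer. */
void ramp_linear(float *buf, size_t n, float *gain,
                 float target, float step)
{
    for (size_t i = 0; i < n; ++i) {
        if (*gain < target) {
            *gain += step;
            if (*gain > target)
                *gain = target;
        } else if (*gain > target) {
            *gain -= step;
            if (*gain < target)
                *gain = target;
        }
        buf[i] *= *gain;
    }
}
```

The ramp time is simply (gain change) / step samples, which is what makes the CPU cost negligible: one add and one compare per sample.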
There is no single best way of doing it, because different
applications have different requirements. Sometimes, getting a quick
response is more important than minimizing distortion.
Anyway, if you really want minimum response times without audible
distortion, you're probably going to have to make it frequency
dependent one way or another. How about basically implementing a
multiband compressor, but without the envelope tracking part?
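A crude sketch of that idea, assuming a simple one-pole crossover (gentle, 6 dB/oct) and per-band one-pole gain smoothing; all names and the struct layout are illustrative, not a real multiband design:

```c
/* Frequency-dependent gain smoothing: split into low/high bands with
 * a one-pole crossover, then ramp the gain faster in the high band
 * than in the low band. */
typedef struct {
    float lp_state;         /* crossover lowpass state          */
    float xover_coef;       /* crossover filter coefficient     */
    float gain_lo, gain_hi; /* per-band smoothed gains          */
    float coef_lo, coef_hi; /* per-band gain ramp coefficients  */
} FreqDepGain;

void fdg_process(FreqDepGain *s, float *buf, int n, float target)
{
    for (int i = 0; i < n; ++i) {
        /* Crossover: one-pole lowpass; high band is the residue,
         * so lo + hi reconstructs the input exactly. */
        s->lp_state += s->xover_coef * (buf[i] - s->lp_state);
        float lo = s->lp_state;
        float hi = buf[i] - lo;
        /* Smooth each band's gain toward the target; the high band
         * uses a larger coefficient, i.e. a shorter ramp time. */
        s->gain_lo += s->coef_lo * (target - s->gain_lo);
        s->gain_hi += s->coef_hi * (target - s->gain_hi);
        buf[i] = lo * s->gain_lo + hi * s->gain_hi;
    }
}
```

Because the high band is defined as input minus lowpass output, the two bands sum back to the original when both gains are equal, so the split itself is transparent; only the differing ramp speeds introduce a (brief) spectral tilt during the transition.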
> I'm concerned about ramping time because quick volume changes,
> needed for a synth's envelope, or fast LFO's, need the minimum time
> not to sound too smoothed up. But what's the ideal time?
Envelope control is different from smoothing volume control changes. A
volume control is normally expected to just respond "reasonably
fast", but without any audible artifacts whatsoever. Zipper noise in
slow fades must be totally eliminated for the volume control to sound
right.

Envelope control, OTOH, is part of the core DSP of the synth. That is,
whatever it does becomes part of the synth's "soul", and the only
thing that really matters is that it sounds appropriate for the type
of synth you're building/coding. Very fast envelopes in analog
modelling synths are (generally) expected to, and *supposed* to
click. If the sound programmer doesn't want clicks, he/she slows down
the envelope. Pitch scaling of envelope times can come in handy here.
(Obviously, this requires fairly high parameter resolution for
very short envelope times! Minor adjustments can be used to color
desired clicks, so the accuracy must be pretty close to audio rate
resolution for full control.)
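Pitch scaling of envelope times could be as simple as scaling a segment's duration by the ratio of a reference pitch to the note's pitch, so higher notes get proportionally shorter segments. A sketch (names and the linear scaling law are my assumptions, not a prescription):

```c
/* Scale an envelope segment time by note pitch: a note one octave
 * above ref_hz gets half the segment duration. */
float scale_env_time(float seconds, float note_hz, float ref_hz)
{
    return seconds * (ref_hz / note_hz);
}
```

In practice you'd probably want the scaling amount itself to be a sound parameter (0 = no tracking, 1 = full tracking), the same way filter keyboard tracking is usually done.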
> My method is to compute this ramping time based on the source &
> target level. Ramping from 0 to 100% takes 2x longer than from 50%
> to 100%.
> However, this only works for normalized sources, and fails miserably
> when you think it's 50%, but it's actually much higher because of a
> post-amp later in the signal (and not predictable).
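The quoted scheme, ramp time proportional to the size of the gain change, might be sketched like this (function and parameter names are mine):

```c
/* Ramp time proportional to the gain change: ramping 0 -> 100% takes
 * twice as long as 50% -> 100%. full_ramp_samples is the duration of
 * a full 0 -> 1 swing. */
int ramp_samples(float current, float target, int full_ramp_samples)
{
    float diff = target - current;
    if (diff < 0.0f)
        diff = -diff;
    return (int)(diff * (float)full_ramp_samples + 0.5f);
}
```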
This doesn't make sense. Why would scaling down the chain make any
difference to the quality of the ramping? (Unless you're talking
about non-linear effects, that is.)
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'