[music-dsp] Saturation vs Distortion

Dave Gamble signalzerodb at yahoo.co.uk
Mon Mar 3 09:02:00 EST 2003

> Rate changes are linear, but aren't time invariant.
> Think of a signal that
> is zero for all odd-numbered times, and one for all
> even-numbered times.
> Then, if you downsample by two, you get a signal
> that is equal to one for
> all time. If you then shift the input signal by one
> time unit and
> downsample, you get a signal that is always zero. So
> downsampling is
> time-variant.
Yes and no.

The signal you describe is a Nyquist-frequency sine
with an amplitude of 0.5 (peak-to-peak of 1) plus a DC
component with level=1/2. Obviously, if we downsample,
the Nyquist sine will alias down to DC, and its phase
at the retained samples decides whether it adds to or
cancels the DC component - hence the two different
results.
For this reason, classical downsampling must assume
that the signal has been bandlimited before the
downsampling process.
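The decomposition above is easy to check numerically. A minimal sketch (numpy assumed; variable names are my own):

```python
import numpy as np

# Decompose the alternating 0,1 signal: x[n] = 1/2 + (1/2)*cos(pi*n),
# i.e. a DC offset of 1/2 plus a Nyquist-frequency cosine of amplitude 1/2.
n = np.arange(8)
x = 0.5 + 0.5 * np.cos(np.pi * n)      # 1, 0, 1, 0, ...

# Naive downsampling by two keeps every other sample:
even_phase = x[::2]                    # all ones
odd_phase = np.roll(x, 1)[::2]         # shift by one sample first: all zeros
```

Without prior bandlimiting, the outcome depends entirely on which phase of the Nyquist component the decimator happens to sample.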

Downsampling causes aliasing - but aliasing only folds
existing components to new frequencies; it does not
create new intermodulation products the way a
nonlinearity does.
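To illustrate that distinction: a single tone pushed past the new Nyquist folds to a single new frequency, with no harmonic series. A sketch (parameters chosen by me so the alias lands exactly on an FFT bin):

```python
import numpy as np

fs = 1024                              # original sample rate (Hz)
n = np.arange(1024)                    # one second of samples
x = np.sin(2 * np.pi * 400 * n / fs)   # a clean 400 Hz tone

y = x[::2]                             # decimate to 512 Hz, no anti-alias filter
# 400 Hz is above the new Nyquist (256 Hz), so it folds to 512 - 400 = 112 Hz.
spec = np.abs(np.fft.rfft(y))
# The spectrum still contains exactly one line - an alias, not harmonics.
```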

'Perfect' fourier downsampling (this is my own
opinion) of the form of Stilson+Smith's paper (which
uses sinc interpolation before downsampling) is quite
linear, because the signal has been bandlimited before
the sample rate is reduced.
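A crude stand-in for that bandlimit-then-decimate scheme, using a Hamming-windowed-sinc FIR rather than the paper's ideal sinc interpolation (`sinc_lowpass`, `decimate2`, and all parameters here are my own sketch, not their method):

```python
import numpy as np

def sinc_lowpass(num_taps=63, cutoff=0.25):
    # Windowed-sinc FIR lowpass; cutoff in cycles/sample
    # (0.25 = half the old Nyquist = the new Nyquist after decimating by 2).
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    h *= np.hamming(num_taps)
    return h / h.sum()                 # unity gain at DC

def decimate2(x):
    # Bandlimit first, then keep every other sample.
    return np.convolve(x, sinc_lowpass(), mode='same')[::2]

# The alternating signal again: the filter removes its Nyquist component,
# so both input phases now decimate to (approximately) the same DC value 1/2.
x = 0.5 + 0.5 * np.cos(np.pi * np.arange(256))
a = decimate2(x)
b = decimate2(np.roll(x, 1))
```

With the Nyquist component filtered out, the shift-dependence of the earlier example disappears, which is the sense in which the bandlimited process behaves linearly.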


