[music-dsp] FFT filtering...
Lee Kong Aik
EKALee at ntu.edu.sg
Mon Aug 4 19:28:00 EDT 2003
Modifying the frequency response to the shape we want will cause the length of the impulse response to change (in most cases it becomes longer if we want a sharp cut-off). A longer impulse response means that we need to increase the number of DFT points to avoid circular convolution. This implies that if we modify the frequency response to the shape we want while the DFT length remains the same, the output of the filter is distorted by circular (time-aliasing) effects.
So the difficulty is: what is the appropriate length? We will of course want the impulse response to be as short as possible, for two reasons:
1. A longer filter gives longer latency.
2. A longer filter requires higher computational complexity.
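The circular-convolution point is easy to demonstrate in a few lines of numpy (the block size, filter length, and names here are my own illustration, not from the posts): multiplying two N-point DFTs gives circular convolution, and the wrapped-around tail corrupts the start of the block unless the transforms are zero-padded to at least the linear-convolution length.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # one signal block
h = rng.standard_normal(16)   # FIR impulse response

# Linear convolution (length 64 + 16 - 1 = 79): the desired output.
linear = np.convolve(x, h)

# Naive frequency-domain product at N = 64: circular convolution.
# The 15 tail samples wrap around and corrupt the start of the block.
circular = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))

# Pad both transforms to at least 79 points: now the frequency-domain
# product equals the linear convolution exactly.
N = 128
padded = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))[:79]

print(np.allclose(padded, linear))         # True: padding fixes it
print(np.allclose(circular, linear[:64]))  # False: wraparound distortion
```

This is exactly the trade-off described above: a longer impulse response forces a longer DFT (and so more latency and computation) if the wraparound is to be avoided.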
K. A. Lee
From: Sampo Syreeni [mailto:decoy at iki.fi]
Sent: Tuesday, August 05, 2003 2:55 AM
To: music-dsp at aulos.calarts.edu
Subject: Re: [music-dsp] FFT filtering...
On 2003-08-04, rob conde uttered:
>I remember hearing/reading that zeroing FFT components to do filtering
>was bad/wrong - but i can't remember why and can't find a reference.
The reason is that windowed FFT followed by meddling with the coefficients
and reconstruction is not a shift-invariant operation. Without any
transformation it would always keep the signal constant, sure, but as soon
as you touch one of the coefficients, not all samples will be treated equally.
Think about an impulse (a single sample at full scale, the rest zero), and
50% overlap with normalized windows. Let the impulse hit a unity value in
one window. You do something to the coefficients, and the impulse
typically spreads out to the whole window. We assumed 50% overlap and
normalization, so the impulse won't spread to the neighbouring windows.
Then let the impulse hit directly between two windows. The same processing
will now, most of the time, spread the sample over two windows. It's easy
to see that different samples are treated differently.
This means that the operation will cause different kinds of modulation
artifacts, and not just filtering. More overlap can help, as can better
windows. But the fact remains that your system introduces new time
structure to your signals.
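The thought experiment above can be reproduced numerically. Here is a minimal numpy sketch (the Hann window, 64-sample blocks, and the crude bin-zeroing "lowpass" are my assumptions): the same impulse, shifted by half a hop, produces an output that is not just a shifted copy, so the operation is not shift-invariant.

```python
import numpy as np

def block_lowpass(x, size=64, keep=8):
    """Windowed-FFT 'filter': zero all but the lowest bins, 50% overlap-add."""
    hop = size // 2
    win = np.hanning(size + 1)[:-1]   # periodic Hann: 50% overlap sums to 1
    y = np.zeros(len(x))
    for start in range(0, len(x) - size + 1, hop):
        spec = np.fft.rfft(x[start:start + size] * win)
        spec[keep:] = 0               # crude brick-wall "lowpass"
        y[start:start + size] += np.fft.irfft(spec)
    return y

n = 256
a = np.zeros(n); a[96] = 1.0          # impulse landing mid-window
shift = 16                            # half a hop: changes window alignment
b = np.zeros(n); b[96 + shift] = 1.0  # the same impulse, shifted

ya = block_lowpass(a)
yb = block_lowpass(b)
# A time-invariant filter would give yb equal to ya delayed by `shift`.
print(np.allclose(yb[shift:], ya[:-shift]))  # False: shift-variant
```

Note that with no bin modification (keep all 33 rfft bins of a 64-point block) the analysis windows sum to one and the interior of the signal passes through unchanged; it is only touching the coefficients that breaks shift-invariance, as described above.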
>To do a lowpass filter in the time domain, you convolve with a windowed
>sinc function. Taking the FFT of the windowed sinc should give a lowpass
>frequency response with a finite transition bandwidth. But what if you
>just multiply by the ideal lp response in the frequency domain?
It works wonderfully, if you take a DFT of the whole signal (say, minutes
or hours worth of data) and multiply that by the DFT of the impulse,
stretched to the same length. (You'd need n extra samples at the end to
guard against wraparound, where n is the length of the support of the impulse response.)
Piecewise processing with windowing is very different.
>Is there really a problem to this approach or am I confusing this issue
>with warnings about the erroneous belief of dsp newbies that FFT is *THE*
>way to do filtering?
It is, but you need to be very careful in how you use it. Piecewise
processing with overlapping windows isn't the same as
overlap-add/overlap-save. The latter implement what I described above in a
piecemeal fashion, and *do* treat each sample equally. It's just that you
cannot change the coefficients on the fly without distortion, which is even
nastier than what you'd get by windowing.
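Overlap-add as described here can be sketched in a few lines of numpy (the block size and the windowed-sinc design are my own choices, for illustration): each block is zero-padded past the linear-convolution length before transforming, and the tails are added into the next block, so every sample is treated identically and the result matches direct convolution exactly.

```python
import numpy as np

def overlap_add_filter(x, h, block=64):
    """FFT-based FIR filtering via overlap-add: exact linear convolution."""
    m = len(h)
    nfft = 1
    while nfft < block + m - 1:       # pad so no circular wraparound occurs
        nfft *= 2
    H = np.fft.rfft(h, nfft)          # filter spectrum, computed once
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        out = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        y[start:start + len(seg) + m - 1] += out[:len(seg) + m - 1]
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
h = np.hamming(31) * np.sinc(0.25 * (np.arange(31) - 15))  # windowed sinc
print(np.allclose(overlap_add_filter(x, h), np.convolve(x, h)))  # True
```

Unlike the windowed bin-zeroing scheme, this is plain FIR filtering computed block by block, which is why it is free of the modulation artifacts described earlier; the caveat about changing the coefficients mid-stream still applies.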
Sampo Syreeni, aka decoy - mailto:decoy at iki.fi, tel:+358-50-5756111
student/math+cs/helsinki university, http://www.iki.fi/~decoy/front
openpgp: 050985C2/025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book reviews, dsp links http://shoko.calarts.edu/musicdsp/