[music-dsp] fixed point fft?
signalzerodb at yahoo.co.uk
Tue Apr 22 06:10:00 EDT 2003
> I'm working on lossless sound compression and one
> thing that's occurred to
> me is that for a lossless decompressor to work across
> multiple platforms all
> of the arithmetic has to be exactly reproducible.
> If one implementation uses Intel 80-bit double
> arithmetic, for instance, and one
> uses 64-bit arithmetic it will fail.
Quite; feel free to note the proliferation of maths
libraries that swim around alongside modern platforms.
> Or if one machine handles denormalized numbers or
> rounding slightly
> differently, once again I'm screwed.
In a word: Yes.
> Does anyone know if even the apple and pc have any
> perfectly compatible
> floating point modes? If I can make regular double
> and single precision
> arithmetic portable (but still fast) that would be
Ha! Not damn likely.
It might be possible, if the wind(ows) blows in the
right direction.
> If that's not possible then the obvious answer is to
> use fixed point arithmetic.
It'd not be a bad idea to choose that option in any
case. Even though we /can/ afford it, floating point
maths is still a decadence.
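To make the reproducibility point concrete, here is a minimal sketch of bit-exact fixed-point arithmetic. The Q1.30 layout is a hypothetical choice (the post doesn't fix a word format); the point is that every platform doing the same integer operations gets identical bits, with no rounding-mode or denormal surprises.

```python
Q = 30  # fractional bits (assumed Q1.30 format, coefficients in [-1, 1))

def to_q(x):
    """Quantize a float in [-1, 1) to a Q1.30 integer."""
    return int(round(x * (1 << Q)))

def qmul(a, b):
    """Multiply two Q1.30 values; truncate back to Q1.30."""
    return (a * b) >> Q

half = to_q(0.5)
quarter = qmul(half, half)
print(quarter == to_q(0.25))  # True -- exact on every platform
```

The truncating right-shift is one of several possible rounding conventions; whichever one is picked, it must be fixed in the format spec so encoder and decoder agree.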
> Basically I'm going to need:
> 1. filtering with convolutions (filter coefficients
> range from -1 to 1,
> while the data being convolved over is maybe 30 bit
> ints). For this some
> sort of fixed point FFT should work. I hope. The
> filters are too long to
> do directly or with the knuth_multiplication.
Before you say that, have you pulled Art of Computer
Programming off the shelf?
Mine is at home, so I can't say aye or nay just now,
but Knuth is no novice (he's probably the most
significant computer scientist to ever have lived).
He's probably got an extension to the technique that
covers this case.
If not, and you feel the filters are too large, just
cut them up and use knuthmult on the smaller blocks.
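The cut-them-up suggestion works because convolution is linear in the filter: split a long filter into blocks, convolve each block separately (by whatever fast method, "knuthmult" included), and sum the pieces back at their delays. A sketch, with plain integer convolution standing in for the fast kernel:

```python
def conv(x, h):
    """Direct integer linear convolution."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def conv_blocked(x, h, block):
    """Split h into blocks of size `block`, convolve, recombine."""
    y = [0] * (len(x) + len(h) - 1)
    for start in range(0, len(h), block):
        part = conv(x, h[start:start + block])
        for k, v in enumerate(part):
            y[start + k] += v  # re-align each block by its delay
    return y

x = [3, 1, 4, 1, 5, 9, 2, 6]
h = [2, 7, 1, 8, 2, 8]
print(conv_blocked(x, h, 2) == conv(x, h))  # True
```

Being all-integer, the blocked and direct forms agree bit for bit, which is exactly what a lossless codec needs.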
> I don't think number theoretic FFTs work for fixed
> point,
Number theoretic transforms ONLY work on fixed point!
That's exactly why they're interesting: you can
implement them WITHOUT multiplication, in pure integer
arithmetic.
> so it would require twice the bits, so I guess we
> can forget that idea -
> anyway modulo is a slow instruction.
Please please don't!
Look at pages 419-433 of Rabiner+Gold to read Rader's
section on number theoretic transforms.
I'll try my best to help you out if you can't get hold
of a copy.
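For flavour, here is a number theoretic transform in miniature: a DFT over the integers mod a prime instead of over the complex numbers, so it is exact, with no rounding anywhere. This is the O(n^2) textbook form, not Rader's fast algorithm, and the prime and length are toy values chosen for illustration. (The "without multiplication" remark above refers to Fermat-number variants, which pick 2 as the root so the twiddle multiplies become shifts; this sketch uses a general root for clarity.)

```python
P = 257  # toy prime; 8 divides P - 1, so 8th roots of unity exist mod P
N = 8    # transform length, must divide P - 1

def find_root(n, p):
    """Brute-force an n-th primitive root of unity mod p."""
    for w in range(2, p):
        if pow(w, n, p) == 1 and all(pow(w, k, p) != 1 for k in range(1, n)):
            return w
    raise ValueError("no root found")

W = find_root(N, P)

def ntt(x, w=W):
    """O(n^2) number theoretic DFT mod P."""
    return [sum(x[j] * pow(w, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(X):
    """Inverse transform: use w^-1 and divide by N (mod P)."""
    inv_n = pow(N, P - 2, P)        # modular inverse via Fermat
    x = ntt(X, pow(W, P - 2, P))
    return [(v * inv_n) % P for v in x]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
# Pointwise product in the transform domain = exact cyclic convolution.
conv = intt([(x * y) % P for x, y in zip(ntt(a), ntt(b))])
# conv == [5, 16, 34, 52, 45, 28, 0, 0]
```

The catch the poster anticipates is real: all intermediate values live mod P, so P must exceed the largest true convolution output, which is where the generous wordlengths come in.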
> It occurs to me that one could fake it by doing two
> number theoretic FFTs to different bases and then
> lift the results... But once
> again I have the impression that this would also be
> impractical and I don't
> even know how you do that chinese remainder lifting
> thingy thing... Anyone
> know any practical tricks?
Messy messy messy. Rader's text gives you everything
you need. It's not a nice technique to implement (you
have to use rather generous wordlengths, and be a
little careful with the maths - but this shouldn't be
a surprise: you're implementing a complex technique to
gain real performance improvements). You will,
however, need to quantize your filter coeffs - but
30 bits should easily suffice for almost any
application.
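The "Chinese remainder lifting" the poster asks about is less mysterious than it sounds: run the same exact computation mod two coprime primes, then reconstruct the true integer result, which lets each pass use a narrow wordlength. A sketch with toy moduli (requires Python 3.8+ for `pow(x, -1, m)`):

```python
P1, P2 = 257, 263  # toy coprime moduli; P1 * P2 bounds the true result

def crt(r1, r2, p1=P1, p2=P2):
    """Lift residues (r1 mod p1, r2 mod p2) to the unique value mod p1*p2."""
    inv = pow(p1, -1, p2)                   # p1^-1 mod p2
    return r1 + p1 * ((r2 - r1) * inv % p2)

true_value = 50000                          # any value < P1 * P2 = 67591
lifted = crt(true_value % P1, true_value % P2)
print(lifted)  # 50000
```

In a real codec the residues would come from two number theoretic transforms over the two moduli; the lift recovers the exact wide result without ever computing with wide words inside the transforms.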
> 2. LPC. Straight forward fixed point arithmetic
> should work here, though
> I've noticed in the past that it's possible to have
> useful coefficients that
> require higher precision - I've also noticed that
> constraining the LPC
> filter to be close to stable seems to get rid of the
> need for higher
> precision. I don't think I need help with
> that part, but any insights
> are welcome.
You might find mileage in using double-precision
integer values to store the output values of your
filters.
> Any code or pointers to code would be helpful.
This all sounds interesting. I'm interested.
I might say the word 'Wavelets' out of the corner of
my mouth, to see if you pick up on it... number
theoretic wavelet transforms could be a blindingly
interesting can of worms...
I have to confess to not understanding why you're
using LPC in the context of lossless compression; but
I can see that it might lend you greater sparsity of
data for compression.
At the end of the day though, for testing, you can use
direct integer convolution implementation - just bear
in mind that you'll want a wordlength of W+F+L where W
is the input wordlength, F is the filter coefficient
wordlength and L is log_2(filter length) if you want
complete accuracy. That'll run anywhere, but, of
course, you'll probably end up implementing your own
double-length integer maths functions. Which would be
no bad thing, in all honesty.
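The W+F+L rule above is just a sum bound: each output sample adds up to 2^L products of a W-bit value and an F-bit value, so it fits in W + F + L bits. A one-liner to check a configuration:

```python
import math

def output_bits(W, F, length):
    """Bits needed for an exact integer convolution output sample:
    W-bit input, F-bit coefficients, `length`-tap filter."""
    return W + F + math.ceil(math.log2(length))

# e.g. 30-bit samples through a 1024-tap filter with 16-bit coefficients:
print(output_bits(30, 16, 1024))  # 56 -- hence the double-length integers
```

Anything past 64 bits is where the home-rolled double-length integer maths mentioned above becomes unavoidable on most machines.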
Still confused by the LPC.
Take care, and have fun. Feel free to email me.