[music-dsp] Formant shifting
svante_dsp at hotmail.com
Fri Oct 26 15:16:07 EDT 2001
Analysis and resynthesis is usually done with LPC, which finds the inverse
filter that turns a short piece of the signal into white noise -- i.e. it
models the signal as a white-noise-driven AR process. This is closely
related to the source/filter model of the voice, which (usually) assumes
that the voice excitation is a pulse train and the human articulation
system is an all-pole transfer function. This works pretty well, at least
for non-nasal sounds.
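To make the LPC step concrete, here is a minimal sketch in Python (my own illustration; the function name `lpc` and the numpy dependency are assumptions, not anything from the original post). It implements the standard autocorrelation method with the Levinson-Durbin recursion and checks itself against a known white-noise-driven AR(2) process:

```python
import numpy as np

def lpc(x, order):
    """Return coefficients a[0..order-1] of the all-pole model
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order] + e[n],
    so the inverse filter A(z) = 1 - sum a[k] z^-(k+1) whitens x."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation r[0..]
    a = np.zeros(order)
    err = r[0]                                        # prediction error power
    for m in range(1, order + 1):                     # Levinson-Durbin recursion
        acc = r[m] - np.dot(a[:m - 1], r[m - 1:0:-1])
        k = acc / err                                 # reflection coefficient
        prev = a[:m - 1].copy()
        a[:m - 1] = prev - k * prev[::-1]
        a[m - 1] = k
        err *= 1.0 - k * k
    return a

# Sanity check: analyse a known white-noise-driven AR(2) process.
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros(20000)
for n in range(2, 20000):
    x[n] = 1.3 * x[n - 1] - 0.8 * x[n - 2] + e[n]
a_hat = lpc(x, 2)   # should land near the true coefficients [1.3, -0.8]
```

A real analyser would also window the frame and maybe pre-emphasise, but the recursion above is the core of it.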
The standard formant preservation technique, used by Steinberg and others,
is a pitch-tracking algorithm that finds the individual FOFs (formant wave
packets) and changes the interval between them, thus changing the pulse
train but keeping the articulatory transfer function. Obviously it only
works with monophonic signals; otherwise your pitch-tracking algorithm will
fail.
I have done some work on formant preservation for arbitrary signals, mainly
with LPC analysis, but with little success. Algorithmically separating
'tones' from 'resonances' is problematic. If someone solves this problem
(i.e. me) I think it could be the basis for some really cool effects.
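For what it's worth, the mechanical part of the LPC split is easy; here is a sketch (my names, numpy assumed). The inverse filter A(z) strips the resonances and leaves the excitation, and the all-pole filter 1/A(z) puts them back; the coefficients `a` would come from an LPC analysis:

```python
import numpy as np

def inverse_filter(x, a):
    """Excitation e[n] = x[n] - sum_k a[k-1]*x[n-k]  (FIR, A(z))."""
    e = np.array(x, dtype=float)
    for k, ak in enumerate(a, start=1):
        e[k:] -= ak * x[:-k]
    return e

def all_pole_filter(e, a):
    """Resynthesis y[n] = e[n] + sum_k a[k-1]*y[n-k]  (IIR, 1/A(z))."""
    y = np.zeros(len(e))
    p = len(a)
    for n in range(len(e)):
        acc = e[n]
        for k in range(1, min(n, p) + 1):
            acc += a[k - 1] * y[n - k]
        y[n] = acc
    return y
```

Round-tripping through both filters reconstructs the signal exactly; the hard, unsolved part is deciding how much of the spectrum belongs in `a` (resonances) versus the excitation (tones).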
>I'm interested in approaches to applying the characteristics of one voice
>to another, perhaps through the use of analysis and resynthesis techniques.
>What research has been done in this area? - any references to books/web
>pages/papers/etc would be much appreciated. One could for example use
>formant shifting techniques - could anyone tell me about resources on this?
>Is there any source code available anywhere on the web?
>Thanks very much,
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links