HiFi-Glot: High-Fidelity Neural Formant Synthesis
with Differentiable Resonant Filters

Yicheng Gu1,2 Pablo Pérez Zarazaga3 Chaoren Wang2 Zhizheng Wu2
Gustav Eje Henter3 Zofia Malisz3 Lauri Juvela1

Abstract

Formant synthesis aims to generate speech with controllable formant structure, enabling precise control of vocal resonances and phonetic features. However, while existing formant synthesis approaches permit precise formant manipulation, they often yield an impoverished speech signal, failing to capture the complex co-occurring acoustic cues essential for naturalness. To address this issue, this letter presents HiFi-Glot, an end-to-end neural formant synthesis system that achieves both precise formant control and high-fidelity speech synthesis. Specifically, the proposed model adopts a source-filter architecture inspired by classical formant synthesis, in which a neural vocoder generates the glottal excitation signal and differentiable resonant filters model the formants to produce the speech waveform. Experimental results demonstrate that HiFi-Glot generates speech with higher perceptual quality and naturalness while offering more precise control over formant frequencies, outperforming industry-standard formant manipulation tools such as Praat.

What is HiFi-Glot?

HiFi-Glot is an end-to-end neural formant synthesis system that achieves high perceptual quality and precise formant control using a source-filter architecture with differentiable resonant filters. It addresses the unnatural timbre and imperfect source-filter separation often found in legacy tools by utilizing a neural vocoder to generate glottal excitation and a fully differentiable all-pole filter to model vocal tract resonance, enabling interpretable manipulation of parameters such as formants, spectral tilt, and energy. To illustrate the effectiveness of HiFi-Glot, we conduct evaluations on speech parameter manipulation and vocal tract length simulation, demonstrating that it outperforms industry-standard tools like Praat in both speech naturalness and manipulation accuracy.
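To make the source-filter idea concrete, the sketch below models each formant as a second-order all-pole resonator whose pole angle sets the center frequency and whose pole radius sets the bandwidth, and filters an excitation signal through a cascade of such sections. This is a toy numpy/scipy illustration of the general technique, not the paper's differentiable implementation (which would run inside an autodiff framework); the function names and parameter values are ours.

```python
# Toy sketch of a formant filter: a cascade of second-order all-pole
# resonators, one per formant. Not the paper's implementation.
import numpy as np
from scipy.signal import lfilter

def resonator_coeffs(freq_hz, bandwidth_hz, sr):
    """Denominator coefficients of one second-order resonant section."""
    r = np.exp(-np.pi * bandwidth_hz / sr)   # pole radius from bandwidth
    theta = 2 * np.pi * freq_hz / sr         # pole angle from center frequency
    return np.array([1.0, -2 * r * np.cos(theta), r * r])

def formant_filter(excitation, formants_hz, bandwidths_hz, sr=16000):
    """Filter an excitation signal through resonators at each formant."""
    y = excitation
    for f, bw in zip(formants_hz, bandwidths_hz):
        y = lfilter([1.0], resonator_coeffs(f, bw, sr), y)
    return y

# Example: shape white noise with rough /a/-like formants (illustrative values)
rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)
shaped = formant_filter(noise, [700, 1200, 2600], [80, 100, 120])
```

In HiFi-Glot the analogous filter parameters are exposed to gradient-based training; here the cascade simply shows how pole placement turns an excitation into a formant-structured signal.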

Speech Parameter Manipulation

The figure above shows the manipulation errors for different speech parameters across scaling factors. For Tilt, Centroid, and Energy, HiFi-Glot consistently outperforms the baselines, with lower median errors at every scaling factor. For F0, HiFi-Glot outperforms the baselines at most scaling factors; the only exception is the 0.7 scale, where it performs comparably to NFS-HiFiGAN while remaining superior to Praat. For F1, F2, and F3, HiFi-Glot shows clear advantages at scaling factors below 1.0, and at factors of 1.0 and above it performs on par with the NFS-HiFiGAN baseline while consistently outperforming Praat. Overall, these results validate the effectiveness of the proposed HiFi-Glot model, highlighting its accuracy in downward scaling and its competitive performance in upward scaling.


💡
Note that the Praat baseline benefits from access to the original excitation signal and the LPC envelope: at the unit-scale (1.0) modification, it simply performs LPC copy-synthesis. The neural methods, in contrast, must generate the signal directly from the speech parameters, so a performance bias in Praat's favor exists at that setting.
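For context, LPC copy-synthesis at unit scale can be sketched as follows: inverse-filter the signal through the LPC polynomial A(z) to recover the residual (the "excitation"), then run the residual back through the all-pole envelope 1/A(z), which reconstructs the signal essentially exactly. This is a toy example with a synthetic signal standing in for speech; `lpc_coeffs` is an illustrative autocorrelation-method helper, not Praat's implementation.

```python
# Toy LPC copy-synthesis at unit scale (illustrative, not Praat's code).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coeffs(x, order):
    """Autocorrelation-method LPC: solve the Toeplitz normal equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz(r[:order], r[1:order + 1])  # predictor coefficients
    return np.concatenate(([1.0], -a))             # A(z) = 1 - sum_k a_k z^-k

sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
# Synthetic stand-in for a speech frame: two sinusoids plus a little noise
x = (np.sin(2 * np.pi * 200 * t)
     + 0.3 * np.sin(2 * np.pi * 1100 * t)
     + 0.01 * rng.standard_normal(sr))

A = lpc_coeffs(x, order=10)
residual = lfilter(A, [1.0], x)        # inverse filtering -> residual
resynth = lfilter([1.0], A, residual)  # copy-synthesis through 1/A(z)
```

Because the inverse filter and the synthesis filter cancel exactly at scale 1.0, this baseline is error-free by construction there, which is the bias the note describes.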

Vocal Tract Length Simulation

We conduct global formant manipulation to illustrate the robustness of the proposed system. In global formant manipulation, all formants are shifted by the same scaling factor, chosen from 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, and 1.3. We use Praat and NFS-HiFiGAN as baselines; representative samples are shown below.
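What global scaling does to an all-pole vocal-tract filter can be sketched as follows, under the simplifying assumption that each formant corresponds to a pole's angle: multiply every pole angle by the scaling factor while keeping the radii (roughly, the bandwidths) fixed. This is a minimal illustration of the manipulation itself, not the paper's or Praat's implementation.

```python
# Toy global formant scaling on an all-pole filter (illustrative only).
import numpy as np

def scale_formants(a, factor):
    """Scale every pole angle (center frequency) by a global factor,
    keeping pole radii (roughly, bandwidths) unchanged."""
    poles = np.roots(a)
    scaled = np.abs(poles) * np.exp(1j * np.angle(poles) * factor)
    return np.real(np.poly(scaled))

sr = 16000
r, theta = 0.98, 2 * np.pi * 1000 / sr        # one resonance near 1000 Hz
a = np.array([1.0, -2 * r * np.cos(theta), r * r])
a_scaled = scale_formants(a, 1.2)             # shift the resonance to ~1200 Hz
new_freq = np.max(np.angle(np.roots(a_scaled))) * sr / (2 * np.pi)
```

Shrinking the factor below 1.0 lowers all resonances (a longer simulated vocal tract); raising it above 1.0 does the opposite, which is exactly the sweep the sample table covers.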

[Audio sample table: rows are scaling factors 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, and 1.3; columns are GT, Praat, NFS-HiFiGAN (Interspeech 2023), and HiFi-Glot. The embedded audio players are not reproduced in this text version.]