Papers

Effective Spectral and Excitation Modeling Techniques for LSTM-RNN-Based Speech Synthesis Systems

International Journal
2016~2020
Posted by: 이진영
Posted on: 2017-11-01 22:07
Views: 6087
Authors : Eunwoo Song, Frank K. Soong, Hong-Goo Kang

Year : 2017

Publisher / Conference : IEEE/ACM Transactions on Audio, Speech, and Language Processing

Volume : 25, issue 11

Page : 2152-2161

In this paper, we report research results on modeling the parameters of an improved time-frequency trajectory excitation (ITFTE) and spectral envelopes of an LPC vocoder with a long short-term memory (LSTM)-based recurrent neural network (RNN) for high-quality text-to-speech (TTS) systems. The ITFTE vocoder has been shown to significantly improve the perceptual quality of statistical parameter-based TTS systems in our prior works. However, a simple feed-forward deep neural network (DNN) with a finite window length is inadequate to capture the time evolution of the ITFTE parameters. We propose to use the LSTM to exploit the time-varying nature of both trajectories of the excitation and filter parameters, where the LSTM is implemented to use the linguistic text input and to predict both ITFTE and LPC parameters holistically. In the case of LPC parameters, we further enhance the generated spectrum by applying LP bandwidth expansion and line spectral frequency-sharpening filters. These filters are not only beneficial for reducing unstable synthesis filter conditions but also advantageous toward minimizing the muffling problem in the generated spectrum. Experimental results have shown that the proposed LSTM-RNN system with the ITFTE vocoder significantly outperforms both similarly configured band aperiodicity-based systems and our best prior DNN-trained counterpart, both objectively and subjectively.
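The pipeline sketched in the abstract (an LSTM mapping linguistic features to excitation and filter parameters, followed by LP bandwidth expansion) can be illustrated with a toy NumPy example. All dimensions, weights, and names below are hypothetical placeholders, not values from the paper; a real system would use trained weights and the paper's actual feature sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only; not taken from the paper)
D_LING, D_HID, D_LSF, D_EXC = 300, 256, 40, 32
D_OUT = D_LSF + D_EXC  # LSF (spectral) + ITFTE-like (excitation) parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised LSTM weights; a trained acoustic model learns these
W = rng.standard_normal((4 * D_HID, D_LING + D_HID)) * 0.01
b = np.zeros(4 * D_HID)
W_out = rng.standard_normal((D_OUT, D_HID)) * 0.01

def lstm_acoustic_model(ling_feats):
    """Map a (T, D_LING) linguistic feature sequence to (T, D_OUT) acoustic params."""
    h = np.zeros(D_HID)
    c = np.zeros(D_HID)
    outputs = []
    for x in ling_feats:
        # One LSTM step: gates computed from current input and previous state
        z = W @ np.concatenate([x, h]) + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        outputs.append(W_out @ h)  # joint spectral + excitation prediction
    return np.stack(outputs)

def bandwidth_expand(lpc, gamma=0.98):
    """LP bandwidth expansion: scale a_k by gamma**k to widen formant bandwidths
    and improve synthesis-filter stability (gamma value here is illustrative)."""
    return lpc * gamma ** np.arange(len(lpc))

T = 10
params = lstm_acoustic_model(rng.standard_normal((T, D_LING)))
lsf_like, exc_like = params[:, :D_LSF], params[:, D_LSF:]
```

The key design point mirrored here is that a single recurrent model predicts both parameter streams per frame, so their trajectories are modeled jointly rather than by a fixed-window feed-forward network.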