Papers

PARAN: Variational Autoencoder-based End-to-End Articulation-to-Speech System for Speech Intelligibility

International Conference
Posted by
이지현
Date posted
2024-06-13 11:26
Views
1107
Authors : Seyun Um, Doyeon Kim, Hong-Goo Kang

Year : 2024

Publisher / Conference : INTERSPEECH

Research area : Speech Signal Processing, Speech Synthesis, Multi-modal Signal Processing

Presentation/Publication date : 2024.09.03

Presentation : Poster

Deep learning-based articulation-to-speech (ATS) systems designed for individuals with speech disorders have been extensively researched in recent years. However, conventional methods struggle to model the latent-space transformation between the speech and electromagnetic articulography (EMA) domains, resulting in low speech quality. In this paper, we propose PARAN, a variational autoencoder (VAE)-based end-to-end ATS model that efficiently produces high-fidelity speech waveforms from EMA signals. Using a normalizing flow, our model aligns the prior distribution of latent representations obtained from EMA signals with the posterior distribution derived from speech waveforms. To further enhance the clarity and intelligibility of the synthesized speech, we incorporate an additional loss that predicts phonetic information from EMA signals. Experimental results demonstrate that our model outperforms previous methods in terms of speech quality and intelligibility.
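
Below is a minimal PyTorch sketch of the objective suggested by the abstract, assuming a VITS-style conditional VAE: an EMA encoder parameterizes the prior (plus a phoneme head for the auxiliary phonetic loss), a posterior encoder operates on the target speech spectrogram, and a volume-preserving coupling flow maps posterior samples toward the prior space before a KL term aligns the two distributions. All module names, dimensions, the shift-only flow, and the frame-level cross-entropy phoneme loss are illustrative assumptions rather than the published PARAN implementation; the waveform decoder and adversarial losses are omitted.

```python
# Illustrative sketch only; layer choices and the phoneme-loss formulation
# are assumptions, not the published PARAN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EMAPriorEncoder(nn.Module):
    """Predicts the prior p(z | EMA) and frame-level phoneme logits."""
    def __init__(self, ema_dim=12, hidden=192, latent=64, n_phones=40):
        super().__init__()
        self.backbone = nn.Conv1d(ema_dim, hidden, kernel_size=5, padding=2)
        self.stats = nn.Conv1d(hidden, 2 * latent, kernel_size=1)     # -> mean, log-std
        self.phone_head = nn.Conv1d(hidden, n_phones, kernel_size=1)  # auxiliary phonetic loss

    def forward(self, ema):                          # ema: (B, ema_dim, T)
        h = F.relu(self.backbone(ema))
        mean, log_std = self.stats(h).chunk(2, dim=1)
        return mean, log_std, self.phone_head(h)


class SpeechPosteriorEncoder(nn.Module):
    """Predicts the posterior q(z | speech) from the target spectrogram."""
    def __init__(self, spec_dim=513, hidden=192, latent=64):
        super().__init__()
        self.backbone = nn.Conv1d(spec_dim, hidden, kernel_size=5, padding=2)
        self.stats = nn.Conv1d(hidden, 2 * latent, kernel_size=1)

    def forward(self, spec):                         # spec: (B, spec_dim, T)
        h = F.relu(self.backbone(spec))
        mean, log_std = self.stats(h).chunk(2, dim=1)
        z = mean + torch.randn_like(mean) * log_std.exp()   # reparameterization trick
        return z, mean, log_std


class ShiftCouplingFlow(nn.Module):
    """Volume-preserving coupling step mapping posterior samples toward the prior space."""
    def __init__(self, latent=64, hidden=192):
        super().__init__()
        self.half = latent // 2
        self.net = nn.Sequential(
            nn.Conv1d(self.half, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, self.half, kernel_size=1),
        )

    def forward(self, z):
        z0, z1 = z.split(self.half, dim=1)
        return torch.cat([z0, z1 + self.net(z0)], dim=1)    # shift-only coupling, log-det = 0


def kl_to_prior(z_f, post_mean, post_logs, z, prior_mean, prior_logs):
    """Monte-Carlo KL( q(z|speech) || p(z|EMA) ); constants cancel and the flow is volume-preserving."""
    log_q = -post_logs - 0.5 * (z - post_mean) ** 2 * torch.exp(-2.0 * post_logs)
    log_p = -prior_logs - 0.5 * (z_f - prior_mean) ** 2 * torch.exp(-2.0 * prior_logs)
    return (log_q - log_p).mean()


# One hypothetical training step with random tensors standing in for real data.
B, T = 2, 100
ema, spec = torch.randn(B, 12, T), torch.randn(B, 513, T)
phone_labels = torch.randint(0, 40, (B, T))                  # frame-aligned phoneme ids (assumed)

prior_enc, post_enc, flow = EMAPriorEncoder(), SpeechPosteriorEncoder(), ShiftCouplingFlow()
prior_mean, prior_logs, phone_logits = prior_enc(ema)
z, post_mean, post_logs = post_enc(spec)
z_f = flow(z)                                                # push posterior sample toward the prior

loss_kl = kl_to_prior(z_f, post_mean, post_logs, z, prior_mean, prior_logs)
loss_phone = F.cross_entropy(phone_logits, phone_labels)     # auxiliary phonetic loss
loss = loss_kl + loss_phone                                  # waveform/adversarial terms omitted
print(loss.item())
```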
176 International Journal Hyewon Han, Xiulian Peng, Doyeon Kim, Yan Lu, Hong-Goo Kang "Dual-Branch Guidance Encoder for Robust Acoustic Echo Suppression" in IEEE Transactions on Audio, Speech and Language Processing, 2024
175 International Journal Hyungseob Lim, Jihyun Lee, Byeong Hyeon Kim, Inseon Jang, Hong-Goo Kang "Perceptual Neural Audio Coding with Modified Discrete Cosine Transform" in IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2025
174 International Conference Juhwan Yoon, Hyungseob Lim, Hyeonjin Cha, Hong-Goo Kang "StylebookTTS: Zero-Shot Text-to-Speech Leveraging Unsupervised Style Representation" in APSIPA ASC, 2024
173 International Conference Doyeon Kim, Yanjue Song, Nilesh Madhu, Hong-Goo Kang "Enhancing Neural Speech Embeddings for Generative Speech Models" in APSIPA ASC, 2024
172 International Conference Miseul Kim, Soo-Whan Chung, Youna Ji, Hong-Goo Kang, Min-Seok Choi "Speak in the Scene: Diffusion-based Acoustic Scene Transfer toward Immersive Speech Generation" in INTERSPEECH, 2024
171 International Conference Seyun Um, Doyeon Kim, Hong-Goo Kang "PARAN: Variational Autoencoder-based End-to-End Articulation-to-Speech System for Speech Intelligibility" in INTERSPEECH, 2024
170 International Conference Jihyun Kim, Stijn Kindt, Nilesh Madhu, Hong-Goo Kang "Enhanced Deep Speech Separation in Clustered Ad Hoc Distributed Microphone Environments" in INTERSPEECH, 2024
169 International Conference Woo-Jin Chung, Hong-Goo Kang "Speaker-Independent Acoustic-to-Articulatory Inversion through Multi-Channel Attention Discriminator" in INTERSPEECH, 2024
168 International Conference Juhwan Yoon, Woo Seok Ko, Seyun Um, Sungwoong Hwang, Soojoong Hwang, Changhwan Kim, Hong-Goo Kang "UNIQUE : Unsupervised Network for Integrated Speech Quality Evaluation" in INTERSPEECH, 2024
167 International Conference Yanjue Song, Doyeon Kim, Hong-Goo Kang, Nilesh Madhu "Spectrum-aware neural vocoder based on self-supervised learning for speech enhancement" in EUSIPCO, 2024