Papers

Spectrum-aware neural vocoder based on self-supervised learning for speech enhancement

International Conference
Posted by: Doyeon Kim
Date: 2024-05-23 18:42
Views: 1494
Authors : Yanjue Song, Doyeon Kim, Hong-Goo Kang, Nilesh Madhu

Year : 2024

Publisher / Conference : EUSIPCO

Research area : Speech Signal Processing, Speech Synthesis, Speech Enhancement

Presentation : Poster

We consider a speech enhancement setup where neural speech embeddings, obtained from pre-trained self-supervised learning (SSL) models applied to the noisy signal, are subsequently input to a neural vocoder to synthesize the underlying clean speech. The key innovation is in enhancing these latent neural embeddings to mitigate the distortions due to noise and reverberation, resulting in a superior quality of the synthesized signal. Separating the task into distinct embedding enhancement and speech generation phases thus permits increased flexibility in network design. We further explore the benefit of combining the hidden states from the SSL model in a learnable manner, to produce a more robust embedding as the vocoder input. Finally, we also investigate different loss functions for training the neural vocoder. Experimental results validate the effectiveness of our proposed approach, particularly in scenarios characterized by concurrent background noise and reverberation.
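The "learnable combination of hidden states" mentioned above is commonly realized as a softmax-weighted sum over the SSL model's layer outputs. The following is a minimal PyTorch-style sketch of that idea only; the class name, layer count, and tensor shapes are illustrative assumptions and not the authors' implementation.

    import torch
    import torch.nn as nn

    class LearnableLayerCombination(nn.Module):
        """Combine SSL hidden states from all layers with learnable softmax weights.

        Hypothetical sketch of the learnable layer-combination idea; not the
        authors' module.
        """

        def __init__(self, num_layers: int):
            super().__init__()
            # One scalar weight per SSL layer; softmax keeps them normalized.
            self.layer_weights = nn.Parameter(torch.zeros(num_layers))

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # hidden_states: (num_layers, batch, frames, dim)
            w = torch.softmax(self.layer_weights, dim=0)
            # Weighted sum over the layer axis -> (batch, frames, dim)
            return torch.einsum("l,lbtd->btd", w, hidden_states)

    if __name__ == "__main__":
        # Example sizes only (e.g., a 13-layer base-sized SSL encoder).
        num_layers, batch, frames, dim = 13, 2, 50, 768
        hidden = torch.randn(num_layers, batch, frames, dim)
        combiner = LearnableLayerCombination(num_layers)
        embedding = combiner(hidden)  # (batch, frames, dim)
        # In the setup described in the abstract, this combined embedding would
        # then pass through an embedding-enhancement network and a neural
        # vocoder (not sketched here).
        print(embedding.shape)
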
Total 368
358 International Conference Woo-Jin Chung, Hong-Goo Kang "Speaker-Independent Acoustic-to-Articulatory Inversion through Multi-Channel Attention Discriminator" in INTERSPEECH, 2024
357 International Conference Juhwan Yoon, Woo Seok Ko, Seyun Um, Sungwoong Hwang, Soojoong Hwang, Changhwan Kim, Hong-Goo Kang "UNIQUE: Unsupervised Network for Integrated Speech Quality Evaluation" in INTERSPEECH, 2024
356 International Conference Yanjue Song, Doyeon Kim, Hong-Goo Kang, Nilesh Madhu "Spectrum-aware neural vocoder based on self-supervised learning for speech enhancement" in EUSIPCO, 2024
355 International Conference Hyewon Han, Naveen Kumar "A cross-talk robust multichannel VAD model for multiparty agent interactions trained using synthetic re-recordings" in Hands-free Speech Communication and Microphone Arrays (HSCMA, Satellite workshop in ICASSP), 2024
354 International Conference Yanjue Song, Doyeon Kim, Nilesh Madhu, Hong-Goo Kang "On the Disentanglement and Robustness of Self-Supervised Speech Representations" in International Conference on Electronics, Information, and Communication (ICEIC) (*awarded Best Paper), 2024
353 International Conference Yeona Hong, Miseul Kim, Woo-Jin Chung, Hong-Goo Kang "Contextual Learning for Missing Speech Automatic Speech Recognition" in International Conference on Electronics, Information, and Communication (ICEIC), 2024
352 International Conference Juhwan Yoon, Seyun Um, Woo-Jin Chung, Hong-Goo Kang "SC-ERM: Speaker-Centric Learning for Speech Emotion Recognition" in International Conference on Electronics, Information, and Communication (ICEIC), 2024
351 International Conference Hejung Yang, Hong-Goo Kang "On Fine-Tuning Pre-Trained Speech Models With EMA-Target Self-Supervised Loss" in ICASSP, 2024
350 International Journal Zainab Alhakeem, Se-In Jang, Hong-Goo Kang "Disentangled Representations in Local-Global Contexts for Arabic Dialect Identification" in Transactions on Audio, Speech, and Language Processing, 2024
349 International Conference Hong-Goo Kang, W. Bastiaan Kleijn, Jan Skoglund, Michael Chinen "Convolutional Transformer for Neural Speech Coding" in Audio Engineering Society Convention, 2023