Papers

Online Speech Dereverberation Algorithm Based on Adaptive Multichannel Linear Prediction

International Journal
2011~2015
Posted by : 이진영
Date : 2014-03-01 21:56
Views : 1468
Authors : Jae-Mo Yang, Hong-Goo Kang

Year : 2014

Publisher / Conference : IEEE/ACM Transactions on Audio, Speech, and Language Processing

Volume : 22, Issue 3

Pages : 608-619

This paper proposes a real-time acoustic channel equalization method based on adaptive multichannel linear prediction. In general, multichannel equalization algorithms can eliminate reverberation only under specific conditions: the channels must be co-prime and the equalization filters must be sufficiently long. They also require accurate channel information, which is difficult to estimate in a practical system. The proposed method builds on a theoretically perfect channel equalization algorithm while accounting for the problems that arise in a real system. Linear-predictive multi-input equalization (LIME) is a theoretically well-grounded approach to blind dereverberation, but it incurs a huge computational cost because it must compute and invert a high-dimensional covariance matrix. The proposed equalizer is instead formulated as a multichannel linear prediction (MLP) structure with a new adaptation formula optimized for time-varying acoustic room environments. Experimental results show that the proposed method works well even when the channel characteristics of the microphones are similar. Experiments with various room impulse response (RIR) models, covering both synthesized and real room environments, show that the proposed method outperforms conventional methods.
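The abstract describes the equalizer as a delayed multichannel linear prediction: late reverberation at a reference microphone is predicted from past samples of all microphones and subtracted, with the prediction filter adapted online instead of inverting a large covariance matrix as in LIME. The minimal sketch below only illustrates that structure with a plain NLMS update; the function name adaptive_mlp_dereverb and the parameters order, delay, and mu are hypothetical, and the paper's actual adaptive update rule is not reproduced here.

```python
import numpy as np

def adaptive_mlp_dereverb(x, order=20, delay=2, mu=0.1, eps=1e-8):
    """Sketch of adaptive multichannel linear prediction (MLP) dereverberation.

    x     : (num_mics, num_samples) reverberant multichannel signal
    order : per-channel prediction filter length (assumed value)
    delay : prediction delay so the direct path / early reflections are not predicted away
    mu    : NLMS step size (stand-in for the paper's optimized adaptive update)
    """
    num_mics, num_samples = x.shape
    w = np.zeros(num_mics * order)      # stacked prediction coefficients for all channels
    out = np.zeros(num_samples)         # prediction residual = dereverberated estimate (mic 0)

    for n in range(num_samples):
        # Stack delayed past samples of every microphone into one regressor vector.
        u = np.zeros(num_mics * order)
        for m in range(num_mics):
            for k in range(order):
                idx = n - delay - k
                if idx >= 0:
                    u[m * order + k] = x[m, idx]
        # Predict the late reverberation at the reference microphone and subtract it.
        prediction = w @ u
        out[n] = x[0, n] - prediction
        # Normalized LMS update of the stacked prediction filter.
        w += mu * out[n] * u / (u @ u + eps)
    return out


# Example usage on random data (real use would pass multi-microphone speech):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mics = rng.standard_normal((4, 16000))
    enhanced = adaptive_mlp_dereverb(mics)
```

The prediction delay keeps the direct path and early reflections out of the regressor, so only late reverberation is predicted and removed from the reference channel.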