Papers

Stacked U-Net with High-level Feature Transfer for Parameter Efficient Speech Enhancement

International Conference
2021~
Posted by: 김화연
Date posted: 2021-09-01 14:48
Views: 4281
Authors : Jinyoung Lee and Hong-Goo Kang

Year : 2021

Publisher / Conference : APSIPA ASC

Research area : Speech Signal Processing, Speech Enhancement

In this paper, we present a speech enhancement algorithm based on a stacked U-Net structure that reduces the number of parameters and supports real-time processing. To significantly reduce the number of network parameters, we propose a stacked structure in which several shallow U-Nets with fewer convolutional layer channels are cascaded. However, simply stacking the small-scale U-Nets cannot sufficiently compensate for the performance loss caused by the lack of parameters. To overcome this problem, we propose a high-level feature transfer method that passes all the multi-channel output features, obtained before the intermediate output layer, to the next stage. Furthermore, our proposed model can process short analysis frames because its down-sampling and up-sampling blocks are much smaller than those of the conventional Wave U-Net; these smaller layers make our model suitable for low-delay online processing. Experiments show that our proposed method outperforms the conventional Wave U-Net on almost all objective measures while requiring only 7.21% of its network parameters. In addition, our model runs in real time in both GPU and CPU environments.
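The abstract describes a cascade of shallow U-Nets in which each stage hands its pre-output multi-channel features to the next stage. As a rough, illustrative sketch only (not the authors' implementation), the wiring could look like the following PyTorch code; all layer sizes, kernel choices, stage counts, and names are assumptions.

```python
# Minimal sketch of a stacked shallow U-Net with high-level feature transfer.
# NOT the paper's implementation; channel counts, depths, and layer choices
# are illustrative assumptions only.
import torch
import torch.nn as nn


class ShallowUNet(nn.Module):
    """A small 1-D U-Net stage with few channels and shallow depth."""

    def __init__(self, in_ch=1, hidden_ch=16, feat_ch=16):
        super().__init__()
        self.down1 = nn.Conv1d(in_ch, hidden_ch, kernel_size=15, stride=2, padding=7)
        self.down2 = nn.Conv1d(hidden_ch, hidden_ch * 2, kernel_size=15, stride=2, padding=7)
        self.up1 = nn.ConvTranspose1d(hidden_ch * 2, hidden_ch, kernel_size=16, stride=2, padding=7)
        self.up2 = nn.ConvTranspose1d(hidden_ch * 2, feat_ch, kernel_size=16, stride=2, padding=7)
        self.act = nn.LeakyReLU(0.1)
        # Intermediate output layer: maps the multi-channel features to a waveform.
        self.out = nn.Conv1d(feat_ch, 1, kernel_size=1)

    def forward(self, x):
        d1 = self.act(self.down1(x))                          # (B, H, T/2)
        d2 = self.act(self.down2(d1))                         # (B, 2H, T/4)
        u1 = self.act(self.up1(d2))                           # (B, H, T/2)
        u2 = self.act(self.up2(torch.cat([u1, d1], dim=1)))   # (B, F, T)
        # u2 is the high-level multi-channel feature obtained BEFORE the
        # intermediate output layer; it is what gets passed to the next stage.
        return self.out(u2), u2


class StackedUNet(nn.Module):
    """Cascade of shallow U-Nets; each later stage receives the previous
    stage's pre-output features alongside the noisy input."""

    def __init__(self, num_stages=3, feat_ch=16):
        super().__init__()
        stages = [ShallowUNet(in_ch=1, feat_ch=feat_ch)]
        for _ in range(num_stages - 1):
            # Later stages see the noisy input (1 ch) + transferred features.
            stages.append(ShallowUNet(in_ch=1 + feat_ch, feat_ch=feat_ch))
        self.stages = nn.ModuleList(stages)

    def forward(self, noisy):
        outputs, feats = [], None
        for stage in self.stages:
            inp = noisy if feats is None else torch.cat([noisy, feats], dim=1)
            est, feats = stage(inp)
            outputs.append(est)
        # Intermediate estimates can also be supervised; the last one is the
        # final enhanced waveform.
        return outputs


if __name__ == "__main__":
    model = StackedUNet(num_stages=3)
    frame = torch.randn(1, 1, 1024)    # short analysis frame: (batch, ch, time)
    enhanced = model(frame)[-1]
    print(enhanced.shape)              # torch.Size([1, 1, 1024])
```

Because each stage is shallow and narrow, the total parameter count stays small, and the short receptive path per stage is what makes short analysis frames (and thus low-delay processing) feasible in this kind of design.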
Total: 372
164 International Conference Byeong Hyeon Kim, Hyungseob Lim, Inseon Jang, Hong-Goo Kang "Towards an Ultra-Low-Delay Neural Audio Coding with Computational Efficiency" in INTERSPEECH, 2025
163 International Conference Stijn Kindt, Jihyun Kim, Hong-Goo Kang, Nilesh Madhu "Efficient, Cluster-Informed, Deep Speech Separation with Cross-Cluster Information in AD-HOC Wireless Acoustic Sensor Networks" in International Workshop on Acoustic Signal Enhancement (IWAENC), 2024
162 International Conference Yeona Hong, Hyewon Han, Woo-jin Chung, Hong-Goo Kang "StableQuant: Layer Adaptive Post-Training Quantization for Speech Foundation Models" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
161 International Conference Sangmin Lee, Woojin Chung, Hong-Goo Kang "LAMA-UT: Language Agnostic Multilingual ASR through Orthography Unification and Language-Specific Transliteration" in Association for the Advancement of Artificial Intelligence (AAAI), 2025
160 International Conference Juhwan Yoon, Hyungseob Lim, Hyeonjin Cha, Hong-Goo Kang "StylebookTTS: Zero-Shot Text-to-Speech Leveraging Unsupervised Style Representation" in APSIPA ASC, 2024
159 International Conference Doyeon Kim, Yanjue Song, Nilesh Madhu, Hong-Goo Kang "Enhancing Neural Speech Embeddings for Generative Speech Models" in APSIPA ASC, 2024
158 International Conference Miseul Kim, Soo-Whan Chung, Youna Ji, Hong-Goo Kang, Min-Seok Choi "Speak in the Scene: Diffusion-based Acoustic Scene Transfer toward Immersive Speech Generation" in INTERSPEECH, 2024
157 International Conference Seyun Um, Doyeon Kim, Hong-Goo Kang "PARAN: Variational Autoencoder-based End-to-End Articulation-to-Speech System for Speech Intelligibility" in INTERSPEECH, 2024
156 International Conference Jihyun Kim, Stijn Kindt, Nilesh Madhu, Hong-Goo Kang "Enhanced Deep Speech Separation in Clustered Ad Hoc Distributed Microphone Environments" in INTERSPEECH, 2024
155 International Conference Woo-Jin Chung, Hong-Goo Kang "Speaker-Independent Acoustic-to-Articulatory Inversion through Multi-Channel Attention Discriminator" in INTERSPEECH, 2024