Papers
On Fine-Tuning Pre-Trained Speech Models With EMA-Target Self-Supervised Loss
International Conference
Author: 김병현 | Date: 2023-12-14 16:35 | Views: 2171
However, fine-tuning can degrade the general knowledge originally built up during pre-training, knowledge that could help prevent the model from overfitting when fine-tuning data is sparse or help bridge gaps between domains.
We hypothesize that preserving this general knowledge in pre-trained models is crucial for improving performance on downstream tasks.
Based on this idea, we propose a novel method for fine-tuning self-supervised speech models that keeps a self-supervised loss active over the course of fine-tuning. An Exponential Moving Average (EMA) technique is then applied to smoothly transition the model's domain from the general one to the task-oriented one.
We evaluate the proposed method on various downstream tasks and find that it improves performance on most of them. The results show that our method helps the model retain its generalization ability without compromising downstream task performance.
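The abstract does not include implementation details, but the core mechanics it describes, a self-supervised loss computed against an EMA-updated teacher and combined with the downstream objective, can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the paper's actual code: the function names (`fine_tune_step`, `ema_update`), the MSE feature-regression term standing in for whatever self-supervised loss the paper uses, the cross-entropy downstream head, and the `ssl_weight` and `ema_decay` values.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, decay: float = 0.999) -> None:
    """Move the teacher toward the student: theta_t <- decay * theta_t + (1 - decay) * theta_s."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)


def fine_tune_step(student, teacher, head, batch, optimizer,
                   ssl_weight: float = 0.1, ema_decay: float = 0.999) -> float:
    """One fine-tuning step combining a downstream loss with an EMA-target self-supervised loss."""
    feats_s = student(batch["inputs"])          # task-adapted (student) representations
    with torch.no_grad():
        feats_t = teacher(batch["inputs"])      # EMA-teacher targets, no gradient

    # Downstream objective; plain cross-entropy over time-pooled features as a placeholder.
    logits = head(feats_s.mean(dim=1))
    task_loss = F.cross_entropy(logits, batch["labels"])

    # Self-supervised term: regress student features onto the EMA-teacher features.
    ssl_loss = F.mse_loss(feats_s, feats_t)

    loss = task_loss + ssl_weight * ssl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    ema_update(teacher, student, ema_decay)     # teacher drifts slowly toward the fine-tuned model
    return loss.item()
```

In this sketch the teacher would be initialized as a frozen copy of the pre-trained encoder (e.g., `teacher = copy.deepcopy(student)` with gradients disabled), so that as the EMA decay pulls it toward the student, the self-supervised targets transition smoothly from the general pre-trained representation to the task-adapted one.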
| No. | Type | Publication |
| --- | --- | --- |
| 38 | International Conference | Stijn Kindt, Jihyun Kim, Hong-Goo Kang, Nilesh Madhu, "Efficient, Cluster-Informed, Deep Speech Separation with Cross-Cluster Information in AD-HOC Wireless Acoustic Sensor Networks," in International Workshop on Acoustic Signal Enhancement (IWAENC), 2024 |
| 37 | International Conference | Yeona Hong, Hyewon Han, Woo-jin Chung, Hong-Goo Kang, "StableQuant: Layer Adaptive Post-Training Quantization for Speech Foundation Models," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025 |
| 36 | International Conference | Sangmin Lee, Woojin Chung, Hong-Goo Kang, "LAMA-UT: Language Agnostic Multilingual ASR through Orthography Unification and Language-Specific Transliteration," in Association for the Advancement of Artificial Intelligence (AAAI), 2025 |
| 35 | International Conference | Juhwan Yoon, Hyungseob Lim, Hyeonjin Cha, Hong-Goo Kang, "StylebookTTS: Zero-Shot Text-to-Speech Leveraging Unsupervised Style Representation," in APSIPA ASC, 2024 |
| 34 | International Conference | Doyeon Kim, Yanjue Song, Nilesh Madhu, Hong-Goo Kang, "Enhancing Neural Speech Embeddings for Generative Speech Models," in APSIPA ASC, 2024 |
| 33 | International Conference | Miseul Kim, Soo-Whan Chung, Youna Ji, Hong-Goo Kang, Min-Seok Choi, "Speak in the Scene: Diffusion-based Acoustic Scene Transfer toward Immersive Speech Generation," in INTERSPEECH, 2024 |
| 32 | International Conference | Woo-Jin Chung, Hong-Goo Kang, "Speaker-Independent Acoustic-to-Articulatory Inversion through Multi-Channel Attention Discriminator," in INTERSPEECH, 2024 |
| 31 | International Conference | Juhwan Yoon, Woo Seok Ko, Seyun Um, Sungwoong Hwang, Soojoong Hwang, Changhwan Kim, Hong-Goo Kang, "UNIQUE: Unsupervised Network for Integrated Speech Quality Evaluation," in INTERSPEECH, 2024 |
| 30 | International Conference | Yanjue Song, Doyeon Kim, Hong-Goo Kang, Nilesh Madhu, "Spectrum-aware neural vocoder based on self-supervised learning for speech enhancement," in EUSIPCO, 2024 |
| 29 | International Conference | Hyewon Han, Naveen Kumar, "A cross-talk robust multichannel VAD model for multiparty agent interactions trained using synthetic re-recordings," in Hands-free Speech Communication and Microphone Arrays (HSCMA, satellite workshop in ICASSP), 2024 |