Papers

On Fine-Tuning Pre-Trained Speech Models With EMA-Target Self-Supervised Loss

International Conference
Authors : Hejung Yang, Hong-Goo Kang

Year : 2024

Publisher / Conference : ICASSP

Research area : Speech Signal Processing

Presentation/Publication date : 2024.04.19

Presentation : Poster

Representation models pre-trained on self-supervised objectives are often fine-tuned for solving downstream tasks.
However, fine-tuning can degrade the general knowledge acquired during pre-training, knowledge that helps the model avoid overfitting when fine-tuning data are sparse and helps bridge gaps between different domains.
We hypothesize that preserving this general knowledge in pre-trained models is crucial for improving performance on downstream tasks.
Based on this idea, we propose a novel method for fine-tuning self-supervised speech models that keeps a self-supervised loss active over the course of fine-tuning.
An Exponential Moving Average (EMA) of the model provides the targets for this loss, so that the model's domain transitions smoothly from the general one to the task-oriented one.
We evaluate the proposed method on various downstream tasks and find that it improves performance on most of them. The results show that our method retains the model's generalization ability without compromising downstream task performance.
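As a rough illustration of the recipe the abstract outlines, the sketch below combines a downstream loss with a self-supervised loss whose targets come from an EMA copy of the model, and updates that copy after every step. It is PyTorch-style and entirely illustrative: the encoder call, the pooled classification head, the MSE-based self-supervised term, and all names are assumptions, not the paper's actual objective or code.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Move each teacher parameter toward the corresponding student parameter.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def fine_tune_step(student, teacher, task_head, batch, optimizer,
                   ssl_weight=1.0, ema_decay=0.999):
    wav, labels = batch                  # raw waveforms [B, T] and downstream labels [B]
    feats = student(wav)                 # frame-level representations [B, T', D]

    # Downstream (task) loss on pooled features.
    logits = task_head(feats.mean(dim=1))
    task_loss = F.cross_entropy(logits, labels)

    # Self-supervised loss: regress student features onto targets from the EMA teacher.
    with torch.no_grad():
        targets = teacher(wav)
    ssl_loss = F.mse_loss(feats, targets)

    loss = task_loss + ssl_weight * ssl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Update the teacher as an exponentially decaying average of past student weights.
    ema_update(teacher, student, ema_decay)
    return loss.item()

# Typical setup before the training loop (illustrative):
#   teacher = copy.deepcopy(student).eval()
#   for p in teacher.parameters():
#       p.requires_grad_(False)
```

In this sketch the teacher starts as a frozen copy of the pre-trained model, so early targets reflect the general pre-trained domain; as the EMA tracks the fine-tuned weights, the targets gradually shift toward the task-oriented domain.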