Light-Weight Speaker Verification with Global Context Information
In this paper, we propose a light-weight speaker verification (SV) system that utilizes the characteristics of utterance-level global features.
Many recent SV systems employ convolutional neural networks (CNNs) to extract representative speaker features from input utterances. However, the receptive field available during feature extraction is inherently limited by the localized structure of the convolutional layers.
To effectively extract utterance-level global speaker representations, we introduce a novel architecture that combines a CNN with a self-attention network, which exploits the relationship between local and aggregated global features. The global features are continuously updated at every analysis block via point-wise attentive summation of the local features.
We also adopt a densely connected CNN structure (DenseNet) to reliably estimate speaker-related local features with a small number of model parameters. Our proposed model achieves an equal error rate (EER) of 1.935% with only 1.2M parameters, a 16% reduction in model size compared to the baseline models.
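The point-wise attentive summation described above can be sketched as follows. This is an illustrative NumPy toy, not the authors' implementation: the attention scores, the dot-product scoring function, and the residual-style update are all assumptions made for clarity. Each local (frame-level) feature is scored against the current global feature, the softmax-weighted sum of local features is computed, and that summary is added point-wise to the global feature at every analysis block.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D score vector
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def attentive_global_update(local_feats, global_feat):
    """One hypothetical update of the utterance-level global feature.

    local_feats: (T, D) frame-level features from one CNN analysis block.
    global_feat: (D,) running global feature.

    Scores come from the dot product between each local frame and the
    current global feature; the softmax-weighted sum of local frames is
    then added point-wise to the global feature.
    """
    scores = local_feats @ global_feat   # (T,) relevance of each frame
    weights = softmax(scores)            # (T,) attention weights, sum to 1
    summary = weights @ local_feats      # (D,) aggregated local evidence
    return global_feat + summary         # point-wise attentive summation

# Toy example: an utterance of 4 frames with 3-dim features,
# passed through two successive "analysis blocks".
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
g = np.zeros(3)
for _ in range(2):
    g = attentive_global_update(X, g)
```

Note that with a zero-initialized global feature the first update reduces to a plain average of the local frames (all attention weights are uniform); later updates re-weight frames by their similarity to the accumulated global context.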
| No. | Type | Publication |
|-----|------|-------------|
| 144 | International Conference | WooSeok Ko, Seyun Um, Zhenyu Piao, Hong-goo Kang, "Consideration of Varying Training Lengths for Short-Duration Speaker Verification," in APSIPA ASC, 2023 |
| 143 | International Conference | Miseul Kim, Zhenyu Piao, Jihyun Lee, Hong-Goo Kang, "BrainTalker: Low-Resource Brain-to-Speech Synthesis with Transfer Learning using Wav2Vec 2.0," in The IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), 2023 |
| 142 | International Conference | Seyun Um, Jihyun Kim, Jihyun Lee, Hong-Goo Kang, "Facetron: A Multi-speaker Face-to-Speech Model based on Cross-Modal Latent Representations," in EUSIPCO, 2023 |
| 141 | International Conference | Hejung Yang, Hong-Goo Kang, "Feature Normalization for Fine-tuning Self-Supervised Models in Speech Enhancement," in INTERSPEECH, 2023 |
| 140 | International Conference | Jihyun Kim, Hong-Goo Kang, "Contrastive Learning based Deep Latent Masking for Music Source Separation," in INTERSPEECH, 2023 |
| 139 | International Conference | Woo-Jin Chung, Doyeon Kim, Soo-Whan Chung, Hong-Goo Kang, "MF-PAM: Accurate Pitch Estimation through Periodicity Analysis and Multi-level Feature Fusion," in INTERSPEECH, 2023 |
| 138 | International Conference | Hyungchan Yoon, Seyun Um, Changhwan Kim, Hong-Goo Kang, "Adversarial Learning of Intermediate Acoustic Feature for End-to-End Lightweight Text-to-Speech," in INTERSPEECH, 2023 |
| 137 | International Conference | Hyungchan Yoon, Changhwan Kim, Eunwoo Song, Hyun-Wook Yoon, Hong-Goo Kang, "Pruning Self-Attention for Zero-Shot Multi-Speaker Text-to-Speech," in INTERSPEECH, 2023 |
| 136 | International Conference | Doyeon Kim, Soo-Whan Chung, Hyewon Han, Youna Ji, Hong-Goo Kang, "HD-DEMUCS: General Speech Restoration with Heterogeneous Decoders," in INTERSPEECH, 2023 |
| 135 | International Conference | Zhenyu Piao, Miseul Kim, Hyungchan Yoon, Hong-Goo Kang, "HappyQuokka System for ICASSP 2023 Auditory EEG Challenge," in ICASSP, 2023 |