Papers

Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation

International Conference
2021~
Posted by
Hyewon Han
Posted on
2021-07-01 17:05
Views
1668
Authors : Jiyoung Lee*, Soo-Whan Chung*, Sunok Kim, Hong-Goo Kang**, Kwanghoon Sohn**

Year : 2021

Publisher / Conference : CVPR

Research area : Audio-Visual, Source Separation

Presentation : Poster

In this paper, we address the problem of separating individual speech signals from videos using audio-visual neural processing. Most conventional approaches utilize frame-wise matching criteria to extract shared information between audio and video signals; thus, their performance heavily depends on the accuracy of audio-visual synchronization and the effectiveness of their representations. To overcome the frame discontinuity problem between two modalities due to transmission delay mismatch or jitter, we propose a cross-modal affinity network (CaffNet) that learns global correspondence as well as locally-varying affinities between audio and visual streams. Since the global term provides stability over a temporal sequence at the utterance-level, this also resolves a label permutation problem characterized by inconsistent assignments. By introducing a complex convolution network, CaffNet-C, that estimates both magnitude and phase representations in the time-frequency domain, we further improve the separation performance. Experimental results verify that the proposed methods outperform conventional ones on various datasets, demonstrating their advantages in real-world scenarios.
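The abstract describes learning locally-varying affinities between the audio and visual streams so that the two modalities can be aligned even when their frame rates or timing differ. The paper's exact formulation is not given here, so the following is only a minimal sketch of the general idea, where the scaled dot-product affinity, the softmax alignment, and all function and variable names are illustrative assumptions rather than the authors' method:

```python
import numpy as np

def cross_modal_affinity(audio_feats, visual_feats):
    """Illustrative sketch (not the paper's architecture): soft-align
    visual features to audio frames via a pairwise affinity matrix.

    audio_feats:  (Ta, D) audio embedding sequence
    visual_feats: (Tv, D) visual embedding sequence
    """
    d = audio_feats.shape[1]
    # Affinity between every audio/visual frame pair (scaled dot product,
    # an assumed similarity measure for this sketch).
    affinity = audio_feats @ visual_feats.T / np.sqrt(d)      # (Ta, Tv)
    # Softmax over the visual axis: each audio frame attends to the
    # visual frames, so the alignment need not be strictly frame-wise.
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    aligned_visual = w @ visual_feats                         # (Ta, D)
    return w, aligned_visual

rng = np.random.default_rng(0)
audio = rng.standard_normal((100, 64))    # e.g. 100 spectrogram frames
visual = rng.standard_normal((25, 64))    # e.g. 25 video frames
w, aligned = cross_modal_affinity(audio, visual)
```

Because each audio frame draws on a weighted mixture of all visual frames rather than a single synchronized one, this kind of soft alignment is robust to the transmission-delay mismatch and jitter the abstract mentions.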


Notes
*Jiyoung Lee and Soo-Whan Chung contributed equally to this work.
**Hong-Goo Kang and Kwanghoon Sohn are co-corresponding authors.