Papers

Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation

Category : International Conference (2021~)
Posted by : Hyewon Han
Date : 2021-07-01 17:05
Views : 3219
Authors : Jiyoung Lee*, Soo-Whan Chung*, Sunok Kim, Hong-Goo Kang**, Kwanghoon Sohn**

Year : 2021

Publisher / Conference : CVPR

Research area : Audio-Visual, Source Separation


Presentation : Poster

In this paper, we address the problem of separating individual speech signals from videos using audio-visual neural processing. Most conventional approaches utilize frame-wise matching criteria to extract shared information between audio and video signals; thus, their performance heavily depends on the accuracy of audio-visual synchronization and the effectiveness of their representations. To overcome the frame discontinuity problem between the two modalities caused by transmission delay mismatch or jitter, we propose a cross-modal affinity network (CaffNet) that learns global correspondence as well as locally-varying affinities between audio and visual streams. Because the global term provides stability over the temporal sequence at the utterance level, it also resolves the label permutation problem characterized by inconsistent assignments. By introducing a complex convolution network, CaffNet-C, that estimates both magnitude and phase representations in the time-frequency domain, we further improve separation performance. Experimental results verify that the proposed methods outperform conventional ones on various datasets, demonstrating their advantages in real-world scenarios.
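To make the two ideas in the abstract concrete, below is a minimal sketch, not the authors' implementation: the tensor shapes, function names, and pooling choices are all assumptions for illustration. It shows (1) a locally-varying affinity that lets each audio frame softly attend over all video frames, so exact frame-wise synchronization is not required, alongside an utterance-level score standing in for the global correspondence term, and (2) a CaffNet-C-style complex mask that modifies both magnitude and phase of the mixture spectrogram.

```python
# Minimal sketch of the cross-modal affinity idea (illustrative only).
# Assumed shapes: audio_emb (B, Ta, D), one embedding per spectrogram frame;
#                 visual_emb (B, Tv, D), one embedding per video frame.
import torch
import torch.nn.functional as F

def cross_modal_affinity(audio_emb, visual_emb):
    """Return visual features re-aligned to the audio time axis and an
    utterance-level correspondence score."""
    d = audio_emb.size(-1)

    # Locally-varying affinity: each audio frame softly attends over all
    # video frames, so a fixed frame-wise alignment is not required.
    affinity = torch.bmm(audio_emb, visual_emb.transpose(1, 2)) / d ** 0.5  # (B, Ta, Tv)
    aligned_visual = torch.bmm(F.softmax(affinity, dim=-1), visual_emb)     # (B, Ta, D)

    # Global correspondence: comparing time-pooled, utterance-level
    # embeddings keeps the speaker assignment consistent over the whole
    # sequence, which is what counters the label permutation problem.
    global_score = F.cosine_similarity(audio_emb.mean(dim=1),
                                       visual_emb.mean(dim=1), dim=-1)      # (B,)
    return aligned_visual, global_score

def apply_complex_mask(mixture_stft, mask_real, mask_imag):
    # CaffNet-C-style complex masking (sketch): a complex-valued mask
    # modifies both magnitude and phase of the mixture spectrogram,
    # unlike a real-valued mask that only scales magnitudes.
    return mixture_stft * torch.complex(mask_real, mask_imag)
```

The design point of the sketch is that the softmax over the visual time axis absorbs small desynchronizations between the streams, while the pooled global score is what ties the whole utterance to one speaker identity.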


Notes
*Jiyoung Lee and Soo-Whan Chung contributed equally to this work.
**Hong-Goo Kang and Kwanghoon Sohn are co-corresponding authors.
Total : 370
370 International Conference Yeona Hong, Hyewon Han, Woo-jin Chung, Hong-Goo Kang "StableQuant: Layer Adaptive Post-Training Quantization for Speech Foundation Models" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
369 International Conference Sangmin Lee, Woojin Chung, Hong-Goo Kang "LAMA-UT: Language Agnostic Multilingual ASR through Orthography Unification and Language-Specific Transliteration" in Association for the Advancement of Artificial Intelligence (AAAI), 2025
368 International Journal Hyewon Han, Xiulian Peng, Doyeon Kim, Yan Lu, Hong-Goo Kang "Dual-Branch Guidance Encoder for Robust Acoustic Echo Suppression" in IEEE Transactions on Audio, Speech and Language Processing (TASLP), 2024
367 International Journal Hyungseob Lim, Jihyun Lee, Byeong Hyeon Kim, Inseon Jang, Hong-Goo Kang "Perceptual Neural Audio Coding with Modified Discrete Cosine Transform" in IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2025
366 International Conference Juhwan Yoon, Hyungseob Lim, Hyeonjin Cha, Hong-Goo Kang "StylebookTTS: Zero-Shot Text-to-Speech Leveraging Unsupervised Style Representation" in APSIPA ASC, 2024
365 International Conference Doyeon Kim, Yanjue Song, Nilesh Madhu, Hong-Goo Kang "Enhancing Neural Speech Embeddings for Generative Speech Models" in APSIPA ASC, 2024
364 Domestic Conference Woongjip Choi, Byeong Hyeon Kim, Hong-Goo Kang "Non-blind Bandwidth Extension of Speech Signals Using Self-Supervised Learning Features" in IEIE Summer Annual Conference, 2024
363 Domestic Conference Yeona Hong, Woo-jin Chung, Hong-Goo Kang "Optimization of DNN-based Speaker Recognition Models through Efficient Quantization Techniques" in IEIE Summer Annual Conference, 2024
362 Domestic Conference Byeong Hyeon Kim, Hong-Goo Kang, Inseon Jang "Deep Neural Network-based Speech Compression under Low-Latency Conditions" in KIBME Summer Conference, 2024
361 International Conference Miseul Kim, Soo-Whan Chung, Youna Ji, Hong-Goo Kang, Min-Seok Choi "Speak in the Scene: Diffusion-based Acoustic Scene Transfer toward Immersive Speech Generation" in INTERSPEECH, 2024