Papers

Enhancing loudspeaker-based 3D audio with room modeling

International Conference
2006~2010
Posted by
한혜원
Date
2010-10-04 23:47
Views
6591
Authors: Myung-Suk Song, Cha Zhang, Dinei Florencio, Hong-Goo Kang

Year: 2010

Publisher / Conference: IEEE International Workshop on Multimedia Signal Processing (MMSP)

For many years, spatial (3D) sound using headphones has been widely used in a number of applications. A rich spatial sensation is obtained by using head-related transfer functions (HRTFs) and playing the appropriate sound through headphones. In theory, loudspeaker audio systems would be capable of rendering 3D sound fields almost as rich as headphones, as long as the room impulse responses (RIRs) between the loudspeakers and the ears are known. In practice, however, obtaining these RIRs is hard, and the performance of loudspeaker-based systems is far from perfect. New hope has recently been raised by a system that tracks the user's head position and orientation and incorporates them into the RIR estimates in real time. That system made two simplifying assumptions: it used generic HRTFs, and it ignored room reverberation. In this paper we tackle the second problem: we incorporate a room reverberation estimate into the RIRs. Note that this is a nontrivial task: RIRs vary significantly with the listener's position, and even if one could measure them at a few points, they are notoriously hard to interpolate. Instead, we take an indirect approach: we model the room, and from that model we obtain an estimate of the main reflections. The positions and characteristics of the walls do not vary with the user's movement, yet they allow us to quickly compute an estimate of the RIR for each new user position. Of course, the key question is whether the estimates are good enough. We show an improvement in localization perception of up to 32% (i.e., reducing average error from 23.5° to 15.9°).
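The core idea above, that a fixed room model lets one quickly recompute main reflections for any listener position, can be illustrated with a classic image-source sketch. This is not the paper's implementation; the shoebox geometry, single reflection coefficient, and all names below are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def first_order_rir(room_dims, src, mic, beta=0.8, fs=16000, length=2048):
    """Toy RIR: direct path plus the 6 first-order wall reflections
    of a shoebox room, computed via the image-source method."""
    room = np.asarray(room_dims, dtype=float)
    src = np.asarray(src, dtype=float)
    mic = np.asarray(mic, dtype=float)

    # The direct path, plus one image source per wall (reflect the
    # source across each of the 6 planes of the shoebox).
    images = [(src, 1.0)]
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = src.copy()
            img[axis] = 2.0 * wall - src[axis]
            images.append((img, beta))  # beta: wall reflection coefficient

    # Each (image, coefficient) pair contributes one attenuated,
    # delayed impulse; only geometry changes when the listener moves.
    h = np.zeros(length)
    for img, refl in images:
        dist = np.linalg.norm(img - mic)
        delay = int(round(fs * dist / SPEED_OF_SOUND))
        if delay < length:
            h[delay] += refl / max(dist, 1e-6)  # 1/r spreading loss
    return h
```

Because the wall positions are fixed, tracking the listener only changes `mic`, so the estimate can be refreshed at interactive rates; a full system would extend this to higher-order reflections and per-wall absorption.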