Papers

Scalable Multiband Binaural Renderer for MPEG-H 3D Audio

International Journal
2011~2015
Posted by: 이진영
Posted on: 2015-08-01 22:05
Authors : Taegyu Lee, Hyun Oh Oh, Jeongil Seo, Young-Cheol Park, Dae Hee Youn

Year : 2015

Publisher / Conference : IEEE Journal of Selected Topics in Signal Processing

Volume : 9, Issue : 5

Page : 907-920

To provide immersive 3D multimedia services, MPEG launched MPEG-H, ISO/IEC 23008, "High Efficiency Coding and Media Delivery in Heterogeneous Environments." As its audio part, MPEG-H 3D Audio has been standardized based on multichannel loudspeaker configurations (e.g., 22.2). Binaural rendering is a key application of 3D audio; however, previous studies have focused on low-complexity binaural rendering, such as IIR filter design for HRTFs or pre-/post-processing to resolve in-head localization or front-back confusion. In this paper, a new binaural rendering algorithm is proposed that supports a large number of input channel signals and provides high quality in terms of timbre; parts of this algorithm were adopted into MPEG-H 3D Audio. The proposed algorithm truncates the binaural room impulse response (BRIR) at the mixing time, the transition point from the early reflections to the late reverberation. The two parts are processed independently by variable order filtering in frequency domain (VOFF) and parametric late reverberation filtering (PLF), respectively. Further, a QMF-domain tapped delay line (QTDL) is proposed to reduce complexity in the high-frequency band, based on human auditory perception and codec characteristics. The proposed algorithm also adopts a scalability scheme that covers a wide range of applications by adjusting the mixing-time threshold. Experimental results show that the proposed algorithm matches the audio quality of binaural rendering with full-length BRIRs. A scalability test further shows that the proposed scheme smoothly trades off audio quality against computational complexity.
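The split-and-process idea from the abstract can be sketched in a few lines: truncate the BRIR at the mixing time, filter the early part directly, and replace the measured tail with a parametric model. This is a minimal illustration, not the standardized algorithm — the function names are hypothetical, plain convolution stands in for VOFF, and decay-shaped noise stands in for PLF.

```python
import math
import random

def split_brir(brir, mixing_time_ms, fs=48000):
    # Truncation point: samples before the mixing time form the
    # early-reflection part; the remainder is the late reverberation.
    split = int(mixing_time_ms * fs / 1000)
    return brir[:split], brir[split:]

def convolve(x, h):
    # Plain FIR convolution; the paper instead uses block-wise
    # frequency-domain filtering (VOFF) for efficiency.
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def parametric_late(late, seed=0):
    # Stand-in for PLF: replace the measured tail with noise shaped
    # by its backward-integrated energy envelope (Schroeder integration).
    rng = random.Random(seed)
    acc, edc = 0.0, []
    for s in reversed(late):
        acc += s * s
        edc.append(acc)
    edc.reverse()
    return [rng.gauss(0.0, 1.0) * math.sqrt(e) for e in edc]
```

Lowering the mixing-time threshold shortens the early part handled by exact convolution and pushes more of the response into the cheap parametric tail, which is the scalability trade-off described above.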