Papers

Scalable Multiband Binaural Renderer for MPEG-H 3D Audio

International Journal
2011~2015
Posted by : Jinyoung Lee
Date : 2015-08-01 22:05
Views : 218
Authors : Taegyu Lee, Hyun Oh Oh, Jeongil Seo, Young-Cheol Park, Dae Hee Youn

Year : 2015

Publisher / Conference : IEEE Journal of Selected Topics in Signal Processing

Volume : 9, Issue 5

Pages : 907-920

Abstract : To provide immersive 3D multimedia services, MPEG launched MPEG-H, ISO/IEC 23008, "High Efficiency Coding and Media Delivery in Heterogeneous Environments." As part of its audio component, MPEG-H 3D Audio has been standardized based on multichannel loudspeaker configurations (e.g., 22.2). Binaural rendering is a key application of 3D audio; however, previous studies have focused on low-complexity binaural rendering, such as IIR filter design for HRTFs or pre-/post-processing to mitigate in-head localization or front-back confusion. In this paper, a new binaural rendering algorithm is proposed that supports a large number of input channel signals and provides high quality in terms of timbre; parts of this algorithm were adopted into the MPEG-H 3D Audio standard. The proposed algorithm truncates the binaural room impulse response (BRIR) at the mixing time, the transition point from the early-reflection part to the late-reverberation part. The two parts are then processed independently, by variable-order filtering in the frequency domain (VOFF) and by parametric late reverberation filtering (PLF), respectively. In addition, a QMF-domain tapped delay line (QTDL) is proposed to reduce complexity in the high-frequency band, exploiting human auditory perception and codec characteristics. A scalability scheme is also adopted to cover a wide range of applications by adjusting the mixing-time threshold. Experimental results show that the proposed algorithm matches the audio quality of a binaural signal rendered with full-length BRIRs. A scalability test further shows that the proposed scheme smoothly trades off audio quality against computational complexity.
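As a rough illustration of the split described above (not the MPEG-H reference implementation), the sketch below partitions a BRIR at the mixing time and renders the two parts separately: direct convolution of the early reflections, and a decaying-noise tail as a crude parametric stand-in for the late reverberation. The function render_binaural and its arguments are hypothetical; in the paper the VOFF, PLF, and QTDL stages operate in the QMF domain with band-wise variable filter orders.

import numpy as np
from scipy.signal import fftconvolve

def render_binaural(x, brir, mixing_time, fs):
    # x: mono input signal, brir: (N, 2) binaural room impulse response, fs in Hz.
    split = int(mixing_time * fs)                 # transition sample index (mixing time)
    early, late = brir[:split], brir[split:]
    n_out = len(x) + len(brir) - 1
    out = np.zeros((n_out, 2))

    # Early reflections: exact per-ear convolution (stand-in for VOFF).
    if len(early):
        for ch in range(2):
            y = fftconvolve(x, early[:, ch])
            out[:len(y), ch] += y

    # Late reverberation: shaped-noise approximation of the tail (stand-in for PLF),
    # delayed by the mixing time so it lines up with the truncation point.
    if len(late):
        env = np.sqrt(np.mean(late ** 2, axis=1) + 1e-12)   # per-sample RMS across ears
        tail = np.random.default_rng(0).standard_normal((len(late), 2)) * env[:, None]
        for ch in range(2):
            y = fftconvolve(x, tail[:, ch])
            out[split:split + len(y), ch] += y
    return out

Lowering the mixing-time threshold in such a scheme shortens the exactly convolved early part and pushes more of the response into the cheap parametric tail, which is the quality/complexity trade-off the scalability test in the paper evaluates.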