In this paper, we propose a deep learning-based multi-channel speech enhancement algorithm. The proposed system consists of three sub-modules: a magnitude estimation module, a phase estimation module, and a spatial filtering module. To minimize the distortion between the target speech and the enhanced signal waveform, we adopt an end-to-end modeling architecture whose training objective combines a time-domain reconstruction loss, magnitude and phase spectrum losses, and spatial information between microphones. The experimental results show that the proposed model substantially outperforms the conventional algorithm in terms of noise reduction, intelligibility, and speech quality.
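The combined training objective described above can be sketched as a weighted sum of a time-domain term and spectral magnitude/phase terms. The following is a minimal illustrative sketch, not the paper's actual implementation; the function name, the single-frame FFT (in place of a framed STFT), and the weights `alpha`, `beta`, `gamma` are all assumptions for illustration.

```python
import numpy as np

def combined_enhancement_loss(est, ref, n_fft=64, alpha=1.0, beta=1.0, gamma=0.5):
    """Hypothetical combined loss: time-domain + magnitude + phase terms.

    est, ref: 1-D waveforms (enhanced and target speech).
    Weights alpha/beta/gamma are illustrative, not from the paper.
    """
    # Time-domain reconstruction loss (mean absolute error on waveforms)
    time_loss = np.mean(np.abs(est - ref))

    # Single-frame spectra for illustration (a real system would use a framed STFT)
    E = np.fft.rfft(est, n=n_fft)
    R = np.fft.rfft(ref, n=n_fft)

    # Magnitude spectrum loss
    mag_loss = np.mean(np.abs(np.abs(E) - np.abs(R)))

    # Phase spectrum loss via cosine distance between phase angles
    phase_loss = np.mean(1.0 - np.cos(np.angle(E) - np.angle(R)))

    return alpha * time_loss + beta * mag_loss + gamma * phase_loss
```

In practice such terms are computed per STFT frame and backpropagated jointly, so the network is optimized for waveform fidelity and spectral accuracy at the same time.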

Winner of the Samsung Electro-Mechanics Paper Award