Page | |
---|---|
Publisher | Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2020) |
Year | 2020 |
Month | 12 |
Link | https://ieeexplore.ieee.org/abstract/document/9306384 |
Authors | Hyeon-Kyeong Shin, Hyewon Han, Kyungguen Byun, and Hong-Goo Kang |
When people become stressed in nervous or unfamiliar situations, their speaking style and acoustic characteristics change. These changes are particularly pronounced in certain regions of speech, so a model that automatically computes temporal weights for the components of the speech signal that carry stress-related information can effectively capture the speaker's psychological state. In this paper, we propose an algorithm for detecting psychological stress from speech signals using a deep spectral-temporal encoder and multi-head attention with domain adversarial training. To capture long-term variations and spectral relations in speech under different stress conditions, we build a network by concatenating a convolutional neural network (CNN) and a recurrent neural network (RNN). Multi-head attention is then used to further emphasize stress-concentrated regions. For speaker-invariant stress detection, the network is trained with adversarial multi-task learning by adding a gradient reversal layer. We show the robustness of the proposed algorithm on stress classification tasks using the multimodal Korean stress database acquired in [1] and the well-known Speech Under Simulated and Actual Stress (SUSAS) database [2]. In addition, we demonstrate the effectiveness of multi-head attention and domain adversarial training through a visual analysis using t-SNE.
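The abstract names the building blocks but not their exact configuration, so the following is a minimal PyTorch sketch of a CNN-RNN spectral-temporal encoder followed by multi-head attention over frames; the log-mel input shape, layer sizes, and head count are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

class SpectroTemporalEncoder(nn.Module):
    """CNN front-end for local spectral patterns, followed by a
    bidirectional GRU for long-term temporal variations."""
    def __init__(self, n_mels=40, cnn_channels=32, rnn_hidden=64, n_heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, cnn_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(cnn_channels),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),  # pool along frequency only, keep time resolution
        )
        self.rnn = nn.GRU(
            input_size=cnn_channels * (n_mels // 2),
            hidden_size=rnn_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.attn = nn.MultiheadAttention(
            embed_dim=2 * rnn_hidden, num_heads=n_heads, batch_first=True
        )

    def forward(self, x):
        # x: (batch, 1, n_mels, time) log-mel spectrogram
        h = self.cnn(x)                                  # (batch, C, n_mels//2, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (batch, time, features)
        h, _ = self.rnn(h)                               # (batch, time, 2*rnn_hidden)
        # self-attention re-weights frames so that stress-concentrated
        # regions dominate the utterance-level representation
        attended, weights = self.attn(h, h, h)
        return attended.mean(dim=1), weights             # utterance embedding
```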
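The speaker-adversarial branch can be sketched with a custom autograd function implementing the gradient reversal layer, in the spirit of Ganin and Lempitsky's domain-adversarial training; the reversal weight `lambd` and the classifier head sizes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambd in the
    backward pass, so the encoder learns to *confuse* the speaker classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class StressDetector(nn.Module):
    def __init__(self, encoder, emb_dim=128, n_stress=2, n_speakers=10, lambd=1.0):
        super().__init__()
        self.encoder = encoder
        self.lambd = lambd
        self.stress_head = nn.Linear(emb_dim, n_stress)
        self.speaker_head = nn.Linear(emb_dim, n_speakers)  # adversarial branch

    def forward(self, x):
        emb, _ = self.encoder(x)
        stress_logits = self.stress_head(emb)
        # gradients flowing back from the speaker head are reversed,
        # pushing the encoder toward speaker-invariant features
        speaker_logits = self.speaker_head(GradientReversal.apply(emb, self.lambd))
        return stress_logits, speaker_logits

# Multi-task objective: both cross-entropy terms are minimized jointly;
# the reversal layer turns the speaker term into an adversarial objective
# for the encoder:
#   loss = ce(stress_logits, y_stress) + ce(speaker_logits, y_speaker)
```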
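For the embedding visualization mentioned at the end of the abstract, a standard t-SNE projection with scikit-learn might look like the following; the random arrays merely stand in for real encoder outputs and labels.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder data standing in for learned utterance embeddings;
# in practice these would come from the trained encoder.
embs = np.random.randn(200, 128).astype(np.float32)
labels = np.random.randint(0, 2, size=200)  # e.g., neutral vs. stressed

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embs)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=10)
plt.title("t-SNE projection of utterance embeddings")
plt.show()
```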