Authors : Kihyuk Jeong, Huu-Kim Nguyen, Hong-Goo Kang
Year : 2021
Publisher / Conference : EUSIPCO
Research area : Speech Signal Processing, Text-to-Speech
In this paper, we propose a fast and lightweight text-to-speech (TTS) model that generates high-quality speech even in CPU-only environments. Building on the front-end architecture of FastSpeech2, we adopt an effective generative adversarial network (GAN) framework for waveform synthesis, which enables training the proposed model in a fully end-to-end manner. Since the waveform generator consists of small-size convolutional networks, its inference speed is very fast and the number of network parameters is reduced by half compared to the FastSpeech2 model. However, because the waveform segments are generated using predicted durations, they are often not time-aligned with the reference segments, which reduces the reliability of the discriminator module in the GAN framework. To solve this time-misalignment problem, we propose a waveform alignment algorithm that synchronizes timing information between the reference and generated waveforms. In addition to the waveform alignment task, we include an auxiliary mel-spectrogram prediction task to further enhance perceptual quality. Since this task is required only during training, it does not increase the computational complexity of the inference stage. Objective and subjective experimental results show that the synthesized quality of the proposed model is comparable to that of conventional approaches.
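The abstract does not spell out how the alignment algorithm works, but the core idea, synchronizing generated and reference waveforms whose per-phoneme durations differ, can be sketched as follows. This is an illustrative sketch only, not the paper's exact algorithm: it assumes per-phoneme durations given in spectrogram frames, a hypothetical `hop_length` relating frames to samples, and simple crop/zero-pad per segment so the discriminator sees time-aligned pairs.

```python
import numpy as np

def align_waveform(generated, pred_durations, ref_durations, hop_length=256):
    """Illustrative sketch (not the paper's exact method): force each
    generated waveform segment to the length of the corresponding
    reference segment, using per-phoneme durations (in frames) to
    locate segment boundaries in samples."""
    # Segment boundaries of the generated waveform, in samples.
    gen_bounds = np.concatenate([[0], np.cumsum(pred_durations)]) * hop_length
    out = []
    for i, ref_dur in enumerate(ref_durations):
        seg = generated[gen_bounds[i]:gen_bounds[i + 1]]
        target = ref_dur * hop_length  # reference-segment length in samples
        if len(seg) >= target:
            seg = seg[:target]                          # crop surplus samples
        else:
            seg = np.pad(seg, (0, target - len(seg)))   # zero-pad the deficit
        out.append(seg)
    return np.concatenate(out)
```

After alignment, the output has exactly the reference timeline's length, so generated/reference pairs can be compared segment by segment.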