Papers

StylebookTTS: Zero-Shot Text-to-Speech Leveraging Unsupervised Style Representation

International Conference
Author
Hyungseob Lim
Date
2024-10-21 17:22
Views
5003
Authors : Juhwan Yoon, Hyungseob Lim, Hyeonjin Cha, Hong-Goo Kang

Year : 2024

Publisher / Conference : APSIPA ASC

Research area : Speech Signal Processing, Text-to-Speech

Presentation : Poster

Zero-shot text-to-speech (ZS-TTS) is a TTS system capable of generating speech in voices it has not been explicitly trained on. While many recent ZS-TTS models effectively capture target speech styles using a single global style feature per speaker, they still struggle to achieve high speaker similarity for previously unseen voices. In this study, we propose StylebookTTS, a novel ZS-TTS framework that extracts and utilizes multiple target style embeddings conditioned on content. We begin by extracting style information from the target speech, leveraging linguistic content obtained through a self-supervised learning (SSL) model. The extracted style information is stored in a collection of embeddings called a stylebook, which represents styles in an unsupervised manner without the need for text transcriptions or speaker labels. Simultaneously, the input text is transformed into content features by a transformer-based text-to-unit module, which maps the text to the SSL representations of an utterance spoken from that text. The final target style is formed by selecting the stylebook embeddings that most closely align with the content features generated from the text. Finally, a diffusion-based decoder synthesizes the mel-spectrogram by combining the final target style with these content features. Experimental results demonstrate that StylebookTTS achieves greater speaker similarity than baseline models while also being highly data-efficient, requiring significantly less paired text-audio data.
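The content-based style selection described above can be illustrated with a minimal sketch: each stylebook entry pairs a content-side key with a stored style embedding, and for every content frame produced from the text we pick the entry whose key is most similar (by cosine similarity) to that frame. The function name, array shapes, and nearest-neighbor selection rule here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_styles(content, keys, values):
    """Pick a content-aligned style sequence from a stylebook.

    content: (T, D) content features from the text-to-unit module
    keys:    (K, D) content-side keys of the stylebook entries
    values:  (K, S) style embeddings stored in the stylebook
    returns: (T, S) one style embedding per content frame
    """
    # Normalize so the dot product equals cosine similarity.
    c = content / np.linalg.norm(content, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sim = c @ k.T                 # (T, K) similarity of each frame to each entry
    idx = sim.argmax(axis=1)      # nearest stylebook entry per frame
    return values[idx]            # gather the corresponding style embeddings
```

Because selection is done per frame, the resulting style sequence can vary over the utterance instead of being a single global speaker vector, which is the key difference from one-embedding-per-speaker ZS-TTS systems.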
Total 381
171 International Conference Seyun Um, Doyeon Kim, Hong-Goo Kang "HANUI: Harnessing Distributional Discrepancies for Singing Voice Deepfake Detection" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2026
170 International Conference Miseul Kim, Soo jin Park, Kyungguen Byun, Hyeon-Kyeong Shin, Sunkuk Moon, Shuhua Zhang, Erik Visser "Mitigating Intra-Speaker Variability in Diarization with Style-Controllable Speech Augmentation" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2026
169 International Conference Woongjib Choi, Sangmin Lee, Hyungseob Lim, Hong-Goo Kang "UniverSR: Unified and Versatile Audio Super-Resolution via Vocoder-Free Flow Matching" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2026
168 International Conference Miseul Kim, Seyun Um, Hyeonjin Cha, Hong-Goo Kang "SpeechMLC: Speech Multi-Label Classification" in INTERSPEECH, 2025
167 International Conference Sangmin Lee, Woojin Chung, Seyun Um, and Hong-Goo Kang "UniCoM: A Universal Code-Switching Speech Generator" in EMNLP Findings, 2025
166 International Conference Woongjib Choi, Byeong Hyeon Kim, Hyungseob Lim, Inseon Jang, Hong-Goo Kang "Neural Spectral Band Generation for Audio Coding" in INTERSPEECH, 2025
165 International Conference Jihyun Kim, Doyeon Kim, Hyewon Han, Jinyoung Lee, Jonguk Yoo, Chang Woo Han, Jeongook Song, Hoon-Young Cho, Hong-Goo Kang "Quadruple Path Modeling with Latent Feature Transfer for Permutation-free Continuous Speech Separation" in INTERSPEECH, 2025
164 International Conference Byeong Hyeon Kim, Hyungseob Lim, Inseon Jang, Hong-Goo Kang "Towards an Ultra-Low-Delay Neural Audio Coding with Computational Efficiency" in INTERSPEECH, 2025
163 International Conference Stijn Kindt, Jihyun Kim, Hong-Goo Kang, Nilesh Madhu "Efficient, Cluster-Informed, Deep Speech Separation with Cross-Cluster Information in Ad-Hoc Wireless Acoustic Sensor Networks" in International Workshop on Acoustic Signal Enhancement (IWAENC), 2024
162 International Conference Yeona Hong, Hyewon Han, Woo-jin Chung, Hong-Goo Kang "StableQuant: Layer Adaptive Post-Training Quantization for Speech Foundation Models" in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025