|
Yicheng Gu (she / her)
Bachelor Student,
School of Data Science
The Chinese University of Hong Kong, Shenzhen
Email: yichenggu@link.cuhk.edu.cn
Curriculum Vitae
/
Google
Scholar
/
GitHub
|
I'm a second-year Bachelor student at the Chinese University of Hong Kong, Shenzhen,
supervised by
Professor Zhizheng Wu and working closely with Xueyao Zhang. My ongoing
work focuses on Dataset Development and Differentiable Digital Signal Processing
(DDSP). Recently, I have been participating in developing the prototype of the
open-source Amphion toolkit as one of the
core members.
My research interests include:
- Neural Vocoder
- Differentiable Digital Signal Processing
- Audio-Visual Generation
- Dataset Development
|
2024/07
|
The method from my first paper is implemented and
supported by NVIDIA BigVGAN 2.0.
|
2024/07
|
My first large-scale speech dataset, Emilia, was released.
|
2023/12
|
My first paper about neural vocoders was
accepted
by ICASSP 2024.
|
2023/12
|
My first attempt at developing a
large-scale open-source project, Amphion.
|
2023/10
|
My first paper about singing voice processing was accepted
by ML4Audio @ NeurIPS 2023.
|
2022/10
|
I joined Professor Zhizheng Wu's lab after being admitted to The
Chinese University of Hong Kong, Shenzhen as a Bachelor student.
|
ICASSP 2024
Multi-Scale Sub-Band Constant-Q Transform Discriminator for High-Fidelity Vocoder
Yicheng
Gu, Xueyao Zhang, Liumeng Xue, Zhizheng Wu
International Conference on Acoustics, Speech, and Signal Processing
2024
Paper /
Code
/
Demo /
Pretrained Model
TL;DR: We propose a Constant-Q Transform-based Discriminator for GAN-based neural
vocoders.
|
Amphion: An Open-Source Audio, Music and Speech Generation Toolkit
Xueyao Zhang*, Liumeng Xue*, Yicheng Gu*, Yuancheng
Wang*, Haorui He, Chaoren Wang, Xi Chen, Zihao Fang, Haopeng Chen, Junan Zhang, Tze Ying Tang,
Lexiao Zou, Mingxuan Wang, Jun Han, Kai
Chen, Haizhou Li, Zhizheng
Wu (*: Equal Contribution)
Technical Report /
GitHub /
HuggingFace /
OpenXLab
TL;DR: We develop a unified audio generation open-source toolkit.
|
ML4Audio @ NeurIPS 2023
Leveraging Content-based Features from Multiple Acoustic Models for Singing Voice Conversion
Xueyao Zhang, Yicheng
Gu, Haopeng Chen, Zihao Fang, Lexiao Zou, Liumeng Xue, Zhizheng Wu
Machine Learning for Audio Workshop (ML4Audio) at NeurIPS 2023
Paper /
Code /
Demo /
Pretrained Model /
HuggingFace Space /
OpenXLab App
TL;DR: We propose to utilize multiple content features for singing voice conversion.
|
submitted
An Investigation of Time-Frequency Representation Discriminators for High-Fidelity Vocoder
Yicheng
Gu, Xueyao Zhang, Liumeng Xue, Haizhou Li, Zhizheng
Wu
Preprint /
Code
/
Demo
TL;DR: We propose a Continuous Wavelet Transform-based Discriminator for GAN-based
neural vocoders.
|
submitted
FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds
Yiming Zhang, Yicheng Gu,
Yanhong Zeng, Zhening Xing,
Zhizheng Wu,
Kai Chen
Preprint /
Code /
HuggingFace /
Demo
TL;DR: We propose a Video-to-Audio generation pipeline with Audio-Visual
Synchronization and Text-Editability.
|
submitted
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech
Generation
Haorui He*, Zengqiang Shang*, Chaoren Wang*, Xuyuan Li*, Yicheng Gu,
Hua Hua, Liwei Liu, Chen Yang, Jiaqi Li, Peiyang Shi, Yuancheng Wang, Kai Chen, Pengyuan Zhang, Zhizheng Wu (*: Equal Contribution)
Preprint /
Code /
Demo /
HuggingFace
TL;DR: We propose a large-scale multilingual speech dataset for TTS.
|
submitted
Leveraging Diverse Semantic-based Audio Pretrained Models for Singing Voice Conversion
Xueyao Zhang, Zihao Fang, Yicheng Gu, Haopeng Chen, Lexiao Zou, Junan Zhang, Liumeng Xue, Zhizheng Wu
TL;DR: We investigate the pros and cons of different semantic tokens for Singing Voice
Conversion.
|
2023
|
The Academic Performance Scholarship, Class B (Top 3%, 2023)
|
2022
|
"LanHuaYing" Scholarship (Top 10 admitted students in Zhejiang Province, 2022)
|
|