Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
(2023)
Conference Proceeding
Wang, Y., Leng, Z., Li, F. W. B., Wu, S., & Liang, X. (in press). Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model.
Text-driven human motion generation in computer vision is both significant and challenging. However, current methods are limited to producing either deterministic or imprecise motion sequences, failing to effectively control the temporal and spatial...