Abstract
Animation has gained significant interest in the film and TV industry in recent years. Despite the success of advanced video generation models like Sora, Kling, and CogVideoX in generating natural videos, they fall short when handling animation videos. Evaluating animation video generation is also a great challenge due to its unique artistic styles, violations of the laws of physics, and exaggerated motions. In this paper, we present a comprehensive system, AniSora, designed for animation video generation, which includes a data processing pipeline, a controllable generation model, and an evaluation benchmark. Supported by the data processing pipeline with over 10M high-quality samples, the generation model incorporates a spatiotemporal mask module to facilitate key animation production functions such as image-to-video generation, frame interpolation, and localized image-guided animation. We also collect an evaluation benchmark of 948 diverse animation videos, with specifically developed metrics for animation video generation. Our entire project is publicly available at the links below.
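For readers wondering how one mask module can cover image-to-video, frame interpolation, and localized image-guided animation, here is a minimal, hypothetical sketch; it is not the paper's actual code, and `build_spatiotemporal_mask` and its arguments are illustrative. The idea, under that assumption, is that a single spatiotemporal mask marks which frames or spatial regions are supplied as conditions versus left for the model to generate.

```python
import torch

def build_spatiotemporal_mask(num_frames, height, width,
                              cond_frames=(), cond_region=None):
    """Hypothetical sketch: 1 = pixels given as conditions, 0 = to be generated.

    cond_frames : indices of frames supplied as guidance, e.g. (0,) for
                  image-to-video or (0, num_frames - 1) for interpolation.
    cond_region : optional (y0, y1, x0, x1) box applied to every frame for
                  localized image-guided animation.
    """
    mask = torch.zeros(num_frames, 1, height, width)
    for t in cond_frames:
        mask[t] = 1.0                      # whole conditioning frame is known
    if cond_region is not None:
        y0, y1, x0, x1 = cond_region
        mask[:, :, y0:y1, x0:x1] = 1.0     # guided region is known in all frames
    return mask

# Image-to-video: only the first frame is given.
i2v_mask = build_spatiotemporal_mask(16, 64, 64, cond_frames=(0,))
# Frame interpolation: first and last frames are given.
interp_mask = build_spatiotemporal_mask(16, 64, 64, cond_frames=(0, 15))
# Localized guidance: one region is fixed across all frames.
local_mask = build_spatiotemporal_mask(16, 64, 64, cond_region=(16, 48, 16, 48))
```

With this framing, the three production functions differ only in which entries of the mask are set, which is presumably why a single conditioning mechanism can serve all of them.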
Paper: arxiv.org/abs/2412.10255
Code: github.com/bilibili/Index-anisora/tree/main
Hugging Face: huggingface.co/IndexTeam/Index-anisora
Modelscope: www.modelscope.cn/organization/bilibili-index
Project Page: komiko.app/video/AniSora
HappyFrog@lemmy.blahaj.zone 1 week ago
I was thinking that frame interpolation could maybe be useful, but after seeing the examples at the end, I think we still have a long way to go.