Towards Language-Driven Video Inpainting via Multimodal Large Language Models

Abstract
We introduce a new task, language-driven video inpainting, which uses natural language instructions to guide the inpainting process. This approach overcomes a key limitation of traditional video inpainting methods, which depend on manually labeled binary masks, a process that is often tedious and labor-intensive. To support training and evaluation for this task, we present the Remove Objects from Videos by Instructions (ROVI) dataset, containing 5,650 videos and 9,091 inpainting results. We also propose a novel diffusion-based language-driven video inpainting framework, the first end-to-end baseline for this task, which integrates Multimodal Large Language Models to understand and execute complex language-based inpainting requests. Our comprehensive results showcase the dataset's versatility and the model's effectiveness across diverse language-instructed inpainting scenarios. We will make the datasets, code, and models publicly available.
Paper: arxiv.org/abs/2401.10226
Code: github.com/…/Language-Driven-Video-Inpainting (coming soon)
Data: (coming soon)
Project Page: jianzongwu.github.io/projects/rovi/