nitter.net/yume_asaki_/…/1747202842535068134
Below is a brief workflow explanation
First, place objects in Unity to build the background and capture the screen → pre-visualize in real time with LCM
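A minimal sketch of that preview loop, assuming SD 1.5 with the public LCM-LoRA and img2img over a saved Unity screenshot (the author's actual model and live-preview UI aren't stated):

```python
# Sketch of "screen capture -> real-time LCM preview".
# Assumptions (not from the source): SD 1.5 + the latent-consistency/lcm-lora-sdv1-5
# LoRA, run as img2img over a hypothetical screenshot file.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, LCMScheduler
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # LCM needs its own scheduler
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")      # distilled LoRA -> few-step sampling

capture = Image.open("unity_screenshot.png").convert("RGB")       # hypothetical capture file
preview = pipe(
    prompt="night street, vending machine, anime background",
    image=capture,
    num_inference_steps=4,   # LCM converges in a handful of steps, fast enough to preview
    strength=0.5,            # keep the Unity blocking, restyle the rendering
    guidance_scale=1.0,      # LCM-LoRA works best with little or no CFG
).images[0]
preview.save("preview.png")
```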
After deciding on the angle and other details, go take a photo of the vending machine → extract only the line-drawing information from the photo, simplify it, change the scale and placement, and add objects → specify the scene and turn it into an illustration
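This step maps naturally onto a ControlNet lineart workflow. The sketch below assumes the common public annotator and lineart model, which may differ from the author's exact nodes:

```python
# Hedged sketch of "take only the line-drawing information from the photo".
# Assumption: controlnet_aux's lineart annotator and the public SD 1.5 lineart
# ControlNet stand in for whatever the author actually used.
import torch
from PIL import Image
from controlnet_aux import LineartDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

photo = Image.open("vending_machine_photo.jpg").convert("RGB")  # hypothetical photo

# 1) Extract the line drawing from the reference photo.
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")
lines = lineart(photo)  # returns a PIL image containing only the extracted lines

# (Simplifying the lines, rescaling, and adding objects are manual image edits
#  in the source workflow; they would happen between these two stages.)

# 2) Use the (edited) line drawing as a ControlNet condition for the illustration.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="vending machine at night, anime illustration",
    image=lines,
    num_inference_steps=25,
).images[0]
result.save("scene.png")
```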
Next, decide on the character's movement, act it out in front of a smartphone, and capture it on video → use object-detection nodes to adjust the scale to life size (the method I tried with the snow-girl Miku I posted before) → extract only the pose information → run video-to-video with OpenPose (the per-frame prompts that guide and vary facial expressions usually run to a few thousand characters, so simple tasks like counting up and editing them are handed off to GPT)
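A sketch of the pose-extraction half, assuming frames have already been dumped from the phone video and that controlnet_aux's OpenposeDetector stands in for the author's Openpose nodes; the tiny prompt schedule at the end mimics the "counting up" bookkeeping the author delegates to GPT:

```python
# Extract skeleton-only pose maps per frame, then print a per-frame prompt
# schedule skeleton. File paths and the expression dictionary are hypothetical.
import glob
import os
from PIL import Image
from controlnet_aux import OpenposeDetector

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

os.makedirs("poses", exist_ok=True)
frames = sorted(glob.glob("frames/*.png"))  # frames previously extracted from the video
for i, path in enumerate(frames):
    pose = detector(Image.open(path).convert("RGB"))  # keeps posture, drops appearance
    pose.save(f"poses/{i:04d}.png")

# Per-frame expression prompts (frame index -> prompt fragment); in the source,
# writing and revising thousands of characters of these is offloaded to GPT.
expressions = {0: "neutral expression", 24: "slight smile", 48: "eyes closed, smiling"}
schedule = "\n".join(f'"{k}": "{v}"' for k, v in sorted(expressions.items()))
print(schedule)
```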
Composite the background and character in Shoost (light sources are created as materials in Krita) → render → retouch in Krita → upscale → retouch in Krita
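Shoost and Krita are GUI tools, so the compositing and upscale here can only be shown as the underlying idea; this sketch assumes the character render has an alpha channel and uses plain Lanczos resampling as a stand-in for whatever upscaler was actually used:

```python
# Conceptual sketch of "composite -> upscale"; all file names are hypothetical.
from PIL import Image

background = Image.open("scene.png").convert("RGBA")                 # illustrated background
character = Image.open("character_frame_0000.png").convert("RGBA")   # render with alpha
character = character.resize(background.size)  # alpha_composite needs matching sizes

composite = Image.alpha_composite(background, character)

# Simple 2x upscale placeholder; the source retouches in Krita before and after.
w, h = composite.size
final = composite.resize((w * 2, h * 2), Image.Resampling.LANCZOS)
final.convert("RGB").save("final.png")
```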