I personally use video AI solutions to get a little motion into photos, like trees and plants swinging gently in a breeze or clouds moving over a blue sky.
That works pretty well, and I can run it locally on my computer thanks to software like CogVideoX and Topaz Video AI.
But everything else is problematic: no control, no consistency, and so on.
I would guess the problem lies in how these models are trained. To my knowledge, an image sequence is packed into a single giant texture (comparable to a sprite sheet in video games) that is loaded into video RAM. I think for CogVideoX it's a 7x7 grid of images as one texture, which is why it supports 49 frames and just fits into 24 GB of VRAM. At 24 fps, that's just 2(!) seconds.
This only allows very short clips and therefore trains very little long-range consistency. GPUs with 80 GB of VRAM will probably only fit about three times as much, so roughly 150 frames. That's better, but still only about 6 seconds at 24 fps.
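As a back-of-envelope sketch of that reasoning: if the number of frames a model can fit scales roughly linearly with VRAM (an assumption for illustration, not a measured benchmark), the clip length for a given card works out like this. The function name and the 49-frames-per-24-GB baseline are taken from the post, not from any official spec:

```python
def max_clip_seconds(vram_gb: float, fps: int = 24,
                     frames_per_24gb: int = 49) -> float:
    """Estimate max clip length, assuming frame capacity
    scales linearly with available VRAM (an assumption)."""
    frames = frames_per_24gb * (vram_gb / 24.0)
    return frames / fps

# 24 GB card, 49 frames at 24 fps:
print(round(max_clip_seconds(24), 1))   # → 2.0
# Hypothetical 80 GB card (~163 frames under linear scaling):
print(round(max_clip_seconds(80), 1))   # → 6.8
```

Strict linear scaling gives about 163 frames at 80 GB, a little above the "three times as much, so about 150 frames" rounded off above, but the conclusion is the same: still only a handful of seconds per clip.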
I don't think that the current way of training video AIs will give much better results in the future.
alerender
Posted Last Year
I did a test passing animations made with iClone, CC4, and Unreal to Runway, and I must say that Runway is totally useless.
https://www.alerender.com/