
Amazing what can be done with just "images" ....

Posted By sonic7 Last Year
sonic7
Posted Last Year
Distinguished Member (13.5K reputation)
Group: Forum Members
Last Active: Last Year
Posts: 1.7K, Visits: 19.4K

Not sure if this has been mentioned on the forum, but in case it hasn't - check out what this guy is doing ...
Here's just one of his videos:

[embedded video]

--------------------------------------------------------------------------------------------------------------------------------------------------------
Please be patient with me ..... I don't always 'get it' the first time 'round - not even the 2nd time! Sad  - yikes! ... 
MSI GT72VR Laptop, i7 7700HQ 4-Core 3.8 GHz 16GB RAM; Nvidia 1070, 8GB Vram iClone-7.93  3DXChange Pipeline 7.81  CC-3 Pipeline 3.44  Live Face  HeadShot  Brekel Pro-Body  Popcorn FX  iRAY  Kinect V2  DaVinci Resolve17  Mixcraft 8.1

Kelleytoons
Posted Last Year
Distinguished Member (35.6K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 9.1K, Visits: 21.8K
Actually, RL kind of "started" with this very process - CrazyTalk was the cornerstone of it, taking still images and animating them, decades ago (RL had something of a twin path with CT and iClone, but unless I'm misremembering, CT came first).

So - old tech, nothing new here (perhaps just done with a lot more polish now, but it was kind of amazing what CT could do back before 3D animation was even a possibility).



Alienware Aurora R12, Win 10, i9-11900KF, 3.5GHz CPU, 128GB RAM, RTX 3090 (24GB), Samsung 960 Pro 4TB M-2 SSD, TB+ Disk space
Mike "ex-genius" Kelley
sonic7
Posted Last Year
Distinguished Member (13.5K reputation)
Group: Forum Members
Last Active: Last Year
Posts: 1.7K, Visits: 19.4K

Yeah - when I see 'young' guys (i.e. sharp minds) jumping in and out of various software packages like this, it both fascinates AND unsettles me.
Fascinating because of the growing options and pathways becoming available for creating visual stories, but unsettling (for me) because it makes me question my current approach. I've always 'preferred' using a single piece of software (in order to get to know it well), even though some would say that's unrealistic. I do use Blender 'a bit' to supplement iClone. It's natural to want the 'easiest' approach, given how involved filmmaking is.


--------------------------------------------------------------------------------------------------------------------------------------------------------
Please be patient with me ..... I don't always 'get it' the first time 'round - not even the 2nd time! Sad  - yikes! ... 
MSI GT72VR Laptop, i7 7700HQ 4-Core 3.8 GHz 16GB RAM; Nvidia 1070, 8GB Vram iClone-7.93  3DXChange Pipeline 7.81  CC-3 Pipeline 3.44  Live Face  HeadShot  Brekel Pro-Body  Popcorn FX  iRAY  Kinect V2  DaVinci Resolve17  Mixcraft 8.1

AutoDidact
Posted Last Year
Distinguished Member (5.2K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 1.9K, Visits: 12.1K
Companies like Daz3D, whose main focus is still-image rendering, will likely be put out of business by AI art generation within five years.
Some of their user base is already publicly announcing their migration.
It will take a bit longer for usable animation, unlimited camera angles, etc., but people will soon have an alternative to buying into an expensive software and hardware ecosystem like iClone as well.



RAG DOLL COLLISION ANIMATIONS FOR ICLONE 8 & 7
---------------------------------------------------------------------------------------------------------------------
Ghost Origins
My latest Feature length film created with Iclone.
https://forum.reallusion.com/uploads/images/adf9b210-df59-4cb6-aa1b-9de5.jpg


Sophus
Posted Last Year
Veteran Member (685 reputation)
Group: Forum Members
Last Active: 3 hours ago
Posts: 166, Visits: 1.8K
A few days ago Corridor Crew released a short film where they filmed themselves in front of a green screen and then used Stable Diffusion to change the art style of the footage into an anime.

The impressive part is that they found a way to get somewhat stable image generation, at least for this style. They used Dreambooth to train a model on their faces, stabilised the noise seed during image creation, and later applied some anti-flicker filters in After Effects.
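For anyone curious what the "stabilised noise seed" part might look like in practice, here's a rough sketch using the open-source diffusers library. The checkpoint, prompt, strength and seed are placeholders, and this is only my guess at the general idea - not Corridor's actual pipeline:

```python
# Sketch: run every extracted video frame through Stable Diffusion img2img with
# the SAME prompt and the SAME seed, so the styling stays as consistent as possible.
# (Checkpoint, prompt, strength and seed below are placeholder assumptions.)
import glob, os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # swap in a Dreambooth-trained checkpoint here
    torch_dtype=torch.float16,
).to("cuda")

prompt = "anime style portrait of sks person, flat shading"  # 'sks' = Dreambooth token (assumed)
seed = 1234                                                   # fixed seed = "stabilised noise"
os.makedirs("styled", exist_ok=True)

for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
    frame = Image.open(path).convert("RGB").resize((512, 512))
    generator = torch.Generator("cuda").manual_seed(seed)     # same noise for every frame
    styled = pipe(
        prompt=prompt,
        image=frame,
        strength=0.45,            # low strength keeps the actor's pose and likeness
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    styled.save(f"styled/{i:05d}.png")  # then deflicker the sequence in After Effects
```

Even with a fixed seed the frames still flicker a little, which is why the deflicker pass in After Effects matters.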

The backgrounds were generated in Stable Diffusion as well. To keep the church backgrounds looking consistent, they used an Unreal Engine asset as an input image.

So it's not only image-based but movie-based. Pretty impressive stuff, but I'm not sure this will work with more realistic-looking styles.

But you could also use DAZ or Reallusion characters to train the model and transfer the style instead of real human actors. 

https://www.youtube.com/watch?v=GVT3WUa-48Y
sonic7
Posted Last Year
Distinguished Member (13.5K reputation)
Group: Forum Members
Last Active: Last Year
Posts: 1.7K, Visits: 19.4K

Although things are heading in this AI direction, it still has a way to go before it could supplant the use of *real* 3D avatars, sets and props - though things do seem to be accelerating fast. For now, I guess *some* aspects of these newer approaches could be used to *supplement* the current 3D method of filmmaking. I'm thinking mainly of the depth map approach this guy demonstrated - for example, in shots with only a small camera movement (like establishing shots or 'one-off' shots). It might save on both the cost and design time of using actual sets/props, yet could still be lit as actual 3D mesh: the *design* could be done in 2D, then displaced into 3D using the depth map. I'm wondering whether, after using the "displacement modifier" (in Blender) to create the 3D look, that result can be exported out of Blender as actual 3D mesh for importing into iClone? Someone might know about this.
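For the "generate a depth map from 2D artwork" step, one freely available option - just my guess at the kind of tool being used, not necessarily the same one - is the MiDaS depth-estimation model. A rough sketch, with file names as placeholders:

```python
# Sketch: estimate a depth map from a single 2D image with MiDaS (via torch.hub),
# then save it as a greyscale image for Blender's Displace modifier.
# Requires torch, opencv-python and timm; "concept_art.png" is a placeholder.
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = cv2.cvtColor(cv2.imread("concept_art.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))                      # (1, H', W') relative depth
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# Normalise to 0-255 and write out a greyscale displacement map.
d = pred.cpu().numpy()
d = (255 * (d - d.min()) / (d.max() - d.min())).astype(np.uint8)
cv2.imwrite("concept_art_depth.png", d)
```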

Another way of supplementing iClone film production might be to use the D-ID animation software mentioned for creating background *extras* - in scenes where the lower resolution wouldn't matter, since those characters would occupy only a fraction of the frame.

Edit: Just looked up D-ID and, unfortunately, the *cost* for the small-time solo filmmaker could well be off-putting ($50/month for 'commercial' use, and that's for just 15 minutes).

https://forum.reallusion.com/uploads/images/e370b564-25a5-4581-a31b-c52c.jpg





--------------------------------------------------------------------------------------------------------------------------------------------------------
Please be patient with me ..... I don't always 'get it' the first time 'round - not even the 2nd time! Sad  - yikes! ... 
MSI GT72VR Laptop, i7 7700HQ 4-Core 3.8 GHz 16GB RAM; Nvidia 1070, 8GB Vram iClone-7.93  3DXChange Pipeline 7.81  CC-3 Pipeline 3.44  Live Face  HeadShot  Brekel Pro-Body  Popcorn FX  iRAY  Kinect V2  DaVinci Resolve17  Mixcraft 8.1

Edited Last Year by sonic7
AutoDidact
Posted Last Year
Distinguished Member (5.2K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 1.9K, Visits: 12.1K
Yeah, that D-ID service is clearly targeted toward corporate clients.
I quickly burned through my 14-day trial and made a few videos where I generated an AI character on another site and used it with D-ID for easy talking-head/commentary clips.
I have CTA4; I wish it were this easy to make talking heads from any front-facing image without all of the manual setup required.



RAG DOLL COLLISION ANIMATIONS FOR ICLONE 8 & 7
---------------------------------------------------------------------------------------------------------------------
Ghost Origins
My latest Feature length film created with Iclone.
https://forum.reallusion.com/uploads/images/adf9b210-df59-4cb6-aa1b-9de5.jpg


Kelleytoons
Posted Last Year
Distinguished Member (35.6K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 9.1K, Visits: 21.8K
I saw the Corridor Crew video where they processed their live-action footage, and I got really excited because I wanted to do the same with iClone-rendered footage. But it's WAY too much work (even though they talk about "democratizing" the animation process, it still requires quite a team and quite expensive equipment and software). The Stable Diffusion app they used, DreamStudio, is "okay" - I've had fun playing with it, but it's nowhere near ready to handle video (again, they did it with $$$ - no one-man studio could possibly do the same). Plus, the process they used was for a VERY specific animation style with hardly any background movement (still images animated only). I think we are DECADES away from being able to take our rendered stuff and process it in a way that makes it any better than the so-so filters any program offers today (I won't be alive to see it, for sure).

So - it's interesting. Even this guy's "animation" is VERY limited - sort of panning around a talking head. I can't see much practical use except as a one-off. We already have MUCH better tools and the ability to generate MUCH better looks inside iClone itself, with or without post-processing - and to do it even more easily (assuming you have the necessary mocap tools; yes, they do cost $$, but if this old man can afford them, then almost anyone can).






Alienware Aurora R12, Win 10, i9-11900KF, 3.5GHz CPU, 128GB RAM, RTX 3090 (24GB), Samsung 960 Pro 4TB M-2 SSD, TB+ Disk space
Mike "ex-genius" Kelley
sonic7
Posted Last Year
Distinguished Member (13.5K reputation)
Group: Forum Members
Last Active: Last Year
Posts: 1.7K, Visits: 19.4K

Yeah - personally, the animation using D-ID doesn't overly grab me, but I'm still wondering about the Blender 3D displacement approach using depth maps. I checked, and apparently it IS possible to convert the Blender displacement into actual 3D mesh for export. I see two advantages and one disadvantage:
+ Greater creative freedom in generating *any* imaginable background, using 2D artwork as a starting point.
+ Cost savings on 3D asset purchases.
- But it involves multiple steps: 1) create the artwork, 2) generate the depth maps, 3) lay everything out in Blender and export the mesh (assuming iClone usage) - a rough Blender sketch of step 3 follows.
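Here's that rough sketch of step 3, run from Blender's Scripting tab. Paths and values are placeholders, and I'm assuming a plain FBX export is acceptable for bringing the mesh into iClone (directly or via 3DXchange):

```python
# Sketch: subdivide a plane, displace it with the greyscale depth map, apply the
# modifiers so the result is real geometry, then export FBX for iClone.
import bpy

# 1. A densely subdivided plane to receive the displacement.
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object
subdiv = plane.modifiers.new("Subdiv", 'SUBSURF')
subdiv.subdivision_type = 'SIMPLE'
subdiv.levels = subdiv.render_levels = 7        # enough vertices to hold the detail

# 2. Displace it using the depth map as a texture.
tex = bpy.data.textures.new("DepthMap", type='IMAGE')
tex.image = bpy.data.images.load("//concept_art_depth.png")   # placeholder path
disp = plane.modifiers.new("Displace", 'DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'
disp.strength = 0.3                             # tune by eye

# 3. Apply both modifiers (the displacement becomes actual mesh), then export.
for name in ("Subdiv", "Displace"):
    bpy.ops.object.modifier_apply(modifier=name)
bpy.ops.export_scene.fbx(filepath="//backdrop_for_iclone.fbx", use_selection=True)
```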

I guess you could get 'similar' results using a couple of iClone's planes to force a "shift in perspective" during a camera move.

--------------------------------------------------------------------------------------------------------------------------------------------------------
Please be patient with me ..... I don't always 'get it' the first time 'round - not even the 2nd time! Sad  - yikes! ... 
MSI GT72VR Laptop, i7 7700HQ 4-Core 3.8 GHz 16GB RAM; Nvidia 1070, 8GB Vram iClone-7.93  3DXChange Pipeline 7.81  CC-3 Pipeline 3.44  Live Face  HeadShot  Brekel Pro-Body  Popcorn FX  iRAY  Kinect V2  DaVinci Resolve17  Mixcraft 8.1

Edited Last Year by sonic7
AutoDidact
Posted Last Year
Distinguished Member (5.2K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 1.9K, Visits: 12.1K
Quoting Kelleytoons: "I think we are DECADES away from being able to take our rendered stuff and actually process it in a way that makes it any better than the so-so filters"

Daz recently updated their EULA to put restrictions on using Daz assets in AI generators.
This is a clear indication that the Daz/Poser (and perhaps iClone) community does NOT seem to understand that this technology is creating a separate, standalone ecosystem that won't depend on users buying and owning 3D assets and expensive software and rendering them, only to be "filtered" or "converted" after the fact.
They already have the ability to create consistent characters from nothing, or from custom-trained, self-inserted styles and images.
Now they can re-pose them for consistent sequential art such as graphic novels/comics (see the sketch at the end of this post); full animation frames are not that far off.

Yes, ATM you still need a beefy GPU PC for home-spun custom AI data training, but it will only be a short time before all the tedious labor the Corridor Crew did is packaged into a back-end subscription service for laypeople, hobbyists and pros with budgets.
I predict the current business models of companies like Daz and Poser will be the first to die, and that the animation tech will threaten Reallusion sooner rather than later.
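For what it's worth, the "re-pose" part is commonly done today with ControlNet-style pose conditioning. Here's a minimal sketch with the diffusers library - the model IDs, prompt and reference image are placeholders, and this is only my own illustration of the technique, not anything specified above:

```python
# Sketch: generate a character in a pose taken from a reference image, using an
# OpenPose ControlNet on top of Stable Diffusion. All model IDs/paths are placeholders.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose_map = openpose(load_image("reference_pose.png"))      # stick-figure pose image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # or a checkpoint trained on your own character
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

panel = pipe(
    "comic panel of sks character walking through a ruined church",  # 'sks' = trained token (assumed)
    image=pose_map,
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(7),   # fixed seed for consistency
).images[0]
panel.save("panel_01.png")
```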




RAG DOLL COLLISION ANIMATIONS FOR ICLONE 8 & 7
---------------------------------------------------------------------------------------------------------------------
Ghost Origins
My latest Feature length film created with Iclone.
https://forum.reallusion.com/uploads/images/adf9b210-df59-4cb6-aa1b-9de5.jpg




