
Creating AccuLips with Python

Posted By josh_177552 3 Years Ago
josh_177552
Posted 3 Years Ago
Junior Member (249 reputation)
Group: Forum Members
Last Active: 3 Years Ago
Posts: 6, Visits: 56
I guess it will be a bit more work if your goal is to get the results back into iClone, but it should be possible with the commands available in iClone's Python API.
I'm currently just working with English dialogue, but probably will have to deal with translations later.

To generate phonemes I have just followed the instructions on their website: 
Examples — Montreal Forced Aligner 2.0.0a22 documentation (montreal-forced-aligner.readthedocs.io)
- I have my dialogue organized in folders for each character
- I have, for example, LINE1.wav and a corresponding LINE1.lab (this is just a text file with the dialogue written out; it's the same as the .txt file you can make for iClone).
- I run the command to generate the phonemes as shown in the example (but pointed at the directory where all my files are instead of the example dataset).
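Following the steps above, the alignment run can be sketched as a single command. The paths below are placeholders for my folder layout, and the exact arguments vary between MFA versions, so check the MFA documentation for your install:

```shell
# Sketch: align a corpus laid out as character folders of
# LINE1.wav / LINE1.lab pairs, using MFA's pretrained English
# dictionary and acoustic model. Paths are placeholders.
mfa align /path/to/dialogue english english /path/to/aligned_output
```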

Now in the output directory you have all these .TextGrid files, listing the phonemes and the times at which they occur.
You would have to parse these files to extract the phonemes, and use them to place the visemes on the timeline.
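As a minimal sketch of the parsing step, a short regex-based reader can pull the phoneme intervals out of MFA's long-format TextGrid output. The tier name "phones" and the silence labels ("sil", "sp", "spn") are assumptions based on typical MFA output, so verify them against your own files:

```python
import re

def parse_phones(textgrid_text):
    """Extract (start, end, phoneme) triples from the 'phones' tier of a
    long-format Praat TextGrid, as written by MFA."""
    # Grab everything from the phones tier header to the next tier (or EOF).
    m = re.search(r'name = "phones"(.*?)(?:item \[|\Z)', textgrid_text, re.S)
    if not m:
        return []
    phones = []
    for iv in re.finditer(
            r'xmin = ([\d.]+)\s+xmax = ([\d.]+)\s+text = "([^"]*)"',
            m.group(1)):
        label = iv.group(3).strip()
        if label and label not in ("sil", "sp", "spn"):  # skip silence marks
            phones.append((float(iv.group(1)), float(iv.group(2)), label))
    return phones

# Tiny synthetic example of the tier layout, for demonstration:
sample = '''    item [2]:
        class = "IntervalTier"
        name = "phones"
        xmin = 0.0
        xmax = 0.5
        intervals: size = 2
        intervals [1]:
            xmin = 0.0
            xmax = 0.2
            text = "F"
        intervals [2]:
            xmin = 0.2
            xmax = 0.5
            text = "AY1"
'''
print(parse_phones(sample))  # [(0.0, 0.2, 'F'), (0.2, 0.5, 'AY1')]
```

For real files you would read the .TextGrid with `open(...).read()` and feed it to `parse_phones`; a dedicated TextGrid library would also work, this just avoids the extra dependency.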
In the iClone install folder there is a file, "iClone 7\Resource\ICTextToSpeech\Dictionary\en.PhonemeVisemeMapping", that lists which phonemes get turned into which visemes; for example, both the "F" and "V" phonemes become the "F_V" viseme.
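As a sketch of that mapping step, the table can be held in a dict and applied to the parsed phonemes. Only the F/V pairing below comes from the post; the other rows are illustrative guesses, and the real entries should be read from en.PhonemeVisemeMapping (whose exact file format I haven't verified):

```python
# Tiny hand-written phoneme-to-viseme table. Only F/V -> F_V is from
# iClone's mapping file as described above; the other rows are guesses.
PHONEME_TO_VISEME = {
    "F": "F_V",
    "V": "F_V",
    "B": "B_M_P",   # guess
    "M": "B_M_P",   # guess
    "P": "B_M_P",   # guess
}

def phones_to_visemes(phones, mapping=PHONEME_TO_VISEME):
    """Turn (start, end, phoneme) triples into (start, end, viseme) keys.
    ARPAbet stress digits are stripped before lookup (AY1 -> AY)."""
    keys = []
    for start, end, ph in phones:
        viseme = mapping.get(ph.rstrip("0123456789"))
        if viseme is not None:
            keys.append((start, end, viseme))
    return keys

print(phones_to_visemes([(0.0, 0.2, "F"), (0.2, 0.5, "V")]))
# [(0.0, 0.2, 'F_V'), (0.2, 0.5, 'F_V')]
```

Placing the resulting keys on the iClone timeline would then go through the Python API; I haven't verified which call does that, so it's left out here.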
animagic
Posted 3 Years Ago
Distinguished Member (32.5K reputation)
Group: Forum Members
Last Active: 7 hours ago
Posts: 15.7K, Visits: 30.5K
Mysterious duplicate post...


https://forum.reallusion.com/uploads/images/436b0ffd-1242-44d6-a876-d631.jpg

Edited 3 Years Ago by animagic
animagic
Posted 3 Years Ago
Distinguished Member (32.5K reputation)
Group: Forum Members
Last Active: 7 hours ago
Posts: 15.7K, Visits: 30.5K
There is a Python script from Mike Kelley to take the viseme output from Papagayo and then use that in iClone. That could perhaps be the basis for what you want to do.

See: https://forum.reallusion.com/427265/Improved-Papagayo-Script.



mtakerkart
Posted 3 Years Ago
Distinguished Member (15.7K reputation)
Group: Forum Members
Last Active: Yesterday
Posts: 3.1K, Visits: 28.2K
I'm a Director artist, really not a Python coder... I want to make movies with fewer mouse clicks.
Reallusion is one of the few companies that makes this possible. What I find regrettable is that, for the moment, they aren't willing to put a little energy into an AccuLips for French, even though Josh_177552 has shown us that it isn't complicated for a coder and that all the development is already done, free and openly accessible.
The future of digital content is in very user-friendly applications. This example, and there will be more, shows it clearly:
http://www.cgchannel.com/2021/07/the-freemocap-project-markerless-mocap-for-under-100/
The announcement of Character Creator 4, which will let you rig any mesh with facial blendshapes in a few clicks, goes in the same direction.

Did you see that Mark Zuckerberg wants to create a metaverse?
https://www.bloomberg.com/news/articles/2021-07-29/mark-zuckerberg-explains-metaverse-vision-to-facebook-fb-investors-analysts
Imagine the content that will have to be created. You would want to be the first to offer this service.
animagic
Posted 3 Years Ago
Distinguished Member (32.5K reputation)
Group: Forum Members
Last Active: 7 hours ago
Posts: 15.7K, Visits: 30.5K
What is disappointing to me is that RL just announced that the CC 3.43 update features compatibility between CC and Omniverse Audio2Face, which supports multiple languages!

So why can't that be implemented for AccuLips?

I think we should keep hammering on this until we get more than some non-committal answer. It is as if RL wants to chase the core filmmakers away from their products...



Edited 3 Years Ago by animagic
