
Check out this ML-Trained Facial Rigging tech for Unreal Engine 5

Bartski

Gold Member




Ziva Dynamics has launched ZRT Face Trainer, its new cloud-based, machine-learning-trained facial rigging service meant for games and real-time work.

The new toolkit, currently available for selected users to test for free, can transform "any face mesh into a high-performance real-time puppet in one hour".

The Ziva team states they trained the new cloud-based automated facial rigging platform to recreate a range of expressions of human actors using a 15TB library of 4D scan data. The system is said to convert uploaded character head meshes into a "real-time puppet" that can express over 72,000 facial shapes within an hour.
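To give a feel for what a "real-time puppet" with thousands of facial shapes means in practice, here's a minimal sketch of blendshape evaluation, the standard technique real-time facial rigs are built on: the final face is the neutral mesh plus a weighted sum of per-shape vertex offsets. All names and numbers below are invented for illustration; nothing here is Ziva's actual format or API.

```python
# Minimal blendshape evaluation: final mesh = neutral + sum of
# weight * per-shape vertex offsets. Vertices are simplified to
# single floats here; a real mesh would use 3D positions.

def evaluate_face(neutral, shape_deltas, weights):
    """Blend a neutral mesh with weighted per-shape vertex offsets."""
    result = list(neutral)
    for shape, weight in weights.items():
        if weight == 0.0:
            continue  # skip inactive shapes; real rigs do the same
        for i, delta in enumerate(shape_deltas[shape]):
            result[i] += weight * delta
    return result

# Toy "mesh" with three vertices and two hypothetical shapes.
neutral = [0.0, 0.0, 0.0]
shapes = {
    "jaw_open":   [0.0, 1.0, 0.5],
    "smile_left": [0.2, 0.0, 0.0],
}
posed = evaluate_face(neutral, shapes, {"jaw_open": 0.5, "smile_left": 1.0})
print(posed)  # [0.2, 0.5, 0.25]
```

The per-frame cost is just a weighted sum, which is why a rig baked offline (however expensive the ML that produced it) can be cheap to animate in-engine.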
 
Last edited:

Kuranghi

Member
Holy jeebus, that's nice.

I think the only way I can tell it's fake is when they engage the jaw, cheeks AND eyes at once. There's a slight disconnect from reality then, but to be clear, I think it's amazing.

We've gone past "this will trick most people over 70" and we're now in the realm of "anyone over 50 can't tell the difference". Next stop: us, lmao.

Soon only carlosrox will be able to tell, and he'll be laughing at all of us.
 

CamHostage

Member
Ziva Dynamics is great (they helped Insomniac make the "ML Muscles" animation system in next-gen Spider-Man).




But just to clear up the OP... there's nothing, AFAIK, that is specific to Unreal Engine 5 about ZRT Face Trainer, other than that they did a demo video using it. The animation of a rigged 'skin' face cannot use Nanite, the lighting is independent of Lumen, there's probably no inherent use of Niagara, and the technology can be integrated into other engines besides UE5 (including UE4).

Usually on GAF when we gamers get excited about stuff, it's because the 'engine' is doing amazing realtime stuff. This is different; it's pre-processing the face with ML to build a rig so that an engine can then take it and go do its amazing stuff.

So just to be a stickler about specifics (not that the OP said otherwise, but it seems easy to see this and go "Ah ha! Thank you UE5, you will save us from our past-gen/cross-gen woes!"), this is not the magic of UE5. It's not even necessarily the magic of next-gen. This is not realtime ML. (Well, no ML is "realtime"; AI and ML get confusing to talk about, but from a gamer's perspective, ML is the benefit they get from what the machine has learned.) It's not technology that helps make next-gen hardware "true next-gen".

What it is, however, is a thing that makes great faces for powerful hardware to make the most of. It's bleeding-edge technology, but your phone and your movie theater (Ziva mostly powered CG studios before getting into games) may benefit from it as much as your PlayStation or Xbox.
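The offline-vs-runtime split being described can be sketched in a few lines. Everything here is invented for illustration (function names, data shapes, the fake "training" step); it shows the division of labor, not Ziva's actual pipeline.

```python
# Illustrative split: the ML runs once, offline, to bake a rig;
# the engine then only evaluates that rig per frame.

def train_rig_offline(head_mesh, scan_library):
    """Cloud/offline phase: fit a rig from a mesh plus training scans.
    Expensive, done once per character, long before the game ships."""
    # Stand-in for hours of ML fitting: just record what was learned from.
    return {"mesh": head_mesh, "num_shapes": len(scan_library)}

def animate_at_runtime(rig, control_values):
    """Engine phase: cheap per-frame evaluation of the baked rig.
    No learning happens here; the ML already ran offline."""
    strength = sum(control_values)
    return [v * strength for v in rig["mesh"]]

rig = train_rig_offline([1.0, 2.0], scan_library=["smile", "frown", "blink"])
frame = animate_at_runtime(rig, [0.5, 0.25])
print(frame)  # [0.75, 1.5]
```

The point is that only `animate_at_runtime` ever runs on the console; the expensive part happens once, on someone else's hardware.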

...As long as we can be on that same page as far as what this is and is not, then fuck yeah, ZRT Face Trainer and the Ziva FX technology set are amazing tech that are awesome to bring into video games!

(*Also, anybody more techie than me, feel free to correct details on ML above; I only kinda sorta get how all of this is being explored.)
 
Last edited:
We've been seeing all this fancy high-fidelity facial animation tech for the past 10 years, and yet there aren't any games I can think of where it's been implemented broadly. It may show up in a cutscene here and there with a handful of main characters, but I don't think it will really blow my socks off until every NPC in the game world has a face as realistic as those in the video above.

It's pretty jarring and immersion-breaking when our player-controlled character looks a generation ahead of all the NPCs you interact with...

y2e86qqva4l11.jpg
 

CamHostage

Member
We've been seeing all this fancy high-fidelity facial animation tech for the past 10 years, and yet there aren't any games I can think of where it's been implemented broadly. It may show up in a cutscene here and there with a handful of main characters, but I don't think it will really blow my socks off until every NPC in the game world has a face as realistic as those in the video above.

It's pretty jarring and immersion-breaking when our player-controlled character looks a generation ahead of all the NPCs you interact with...

y2e86qqva4l11.jpg

The Spider-Man boat guy is not at all typical of NPCs in the game world. The actual characters you will interact with or move past in the game are much more in keeping with open-world character standards, and are within a somewhat believable range of detail difference compared to your hero-character to blend into the world.

That particular boat guy is probably a case of them building the whole boat as a single, simple, easy-to-use model instead of using the in-game people + vehicle system, because the player would never (normally) be able to get out there and see those things up close. If they put a 'real' character on the boat, it'd waste resources and might also have problems without AI/collision routines for it. (Like, maybe the NPC would fall off the boat or sink through the floor because the model isn't designed to "rock" in the waves.) (Making games is hard!)

Character placement, population density, AI routines + collision, level of detail & animation, and other aspects of character establishment will always be a challenge in games. You always want more horsepower to run a game world, more manpower to create individual characters and motions, more technology to dynamically create detail. I don't know when, or if, we'll get to the point that every character in a game is at the level of hero-character detail in the video above. However, time and manpower are two huge factors holding that back, and with ML techniques helping to rig characters to levels of detail that were previously only possible (if at all) with countless man-hours of fine-detail adjustments by hand, things are bound to get better?
 

Deerock71

Member
We've been seeing all this fancy high-fidelity facial animation tech for the past 10 years, and yet there aren't any games I can think of where it's been implemented broadly. It may show up in a cutscene here and there with a handful of main characters, but I don't think it will really blow my socks off until every NPC in the game world has a face as realistic as those in the video above.

It's pretty jarring and immersion-breaking when our player-controlled character looks a generation ahead of all the NPCs you interact with...

y2e86qqva4l11.jpg
Why did you post this pic of Bobby Kotick here? Wrong thread?
 
The Spider-Man boat guy is not at all typical of NPCs in the game world. The actual characters you will interact with or move past in the game are much more in keeping with open-world character standards, and are within a somewhat believable range of detail difference compared to your hero-character to blend into the world.
I know the boat guy is not typical of all NPCs in the 2018 Spider-Man game. I just used it because it's one of the more egregious examples I can remember from the past couple of years, and because it's funny.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Automated Face Rigging!

There is a god!

Two of the things I hate most are probably gonna be basically automated within the next few years.

No UVs?
No Rigging?
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
It’s almost perfect. Good enough for cutscene tracking shots.
 

A.Romero

Member
We've been seeing all this fancy high-fidelity facial animation tech for the past 10 years, and yet there aren't any games I can think of where it's been implemented broadly. It may show up in a cutscene here and there with a handful of main characters, but I don't think it will really blow my socks off until every NPC in the game world has a face as realistic as those in the video above.

It's pretty jarring and immersion-breaking when our player-controlled character looks a generation ahead of all the NPCs you interact with...

y2e86qqva4l11.jpg

Right now, above-average facial animation is reserved for veteran studios with huge budgets. Using AI reduces the need for skilled animators, so more studios can put it in their games, and on more characters (not just the main ones).

Cheaper = more commonplace.
 

CuNi

Member
Just a few more AI tools and indie games will create a new gaming boom, once they can compete to some degree with triple-A games in many aspects.
 

CamHostage

Member
I know the boat guy is not typical of all NPCs in the 2018 Spider-Man game. I just used it because it's one of the more egregious examples I can remember from the past couple of years, and because it's funny.

Sure, I knew you were using an extreme example, I was just taking your complaint seriously to use it as an example of why things actually get that bad in game design even though the power should be there to avoid it. The rest of the characters in the game look fine, but they are still way simpler than Spidey, and then there's this guy for seemingly no reason... so, why? Well, reasons, kind of.

Tech like this ML-trained facial rig system is rarely designed to be "implemented broadly", as you say. It's usually designed to make the lead character and villains look great, and hopefully down-the-line needs on the project can glom onto some of the systems if it fits their processes and if it compresses down to work on less complex characters and if there's still room to add detail to non-playable characters without dragging down the framerate and if there's time to do the work to implement it on generic characters... and if it doesn't get in the way of gameplay. (Imagine if the streets of Spider-Man's NYC were as crowded as real life, Spidey would never be able to give his arms a rest and cut the web to just walk around town because the streets would be clogged with vehicles and people...)

That said, NPCs will improve thanks to ML and other new game development tech.

For one thing (as a few animators have been chiming in), automation of rigging (or reusable rigs à la MetaHumans) would take out the hellish minutiae of hand-rigging. Spend that time doing meaningful work instead.

Also, there's room for more detail to be used in-game, thanks to more powerful hardware and more efficient rigged models. (ZRT makes incredibly detailed faces, but because they're MLed, I assume they should or could also be efficient, smoothing over some of the inefficiencies that man-made math might generate? I don't know myself, but 30MB for facial data that rich seems a low weight to add onto a character model.)

Beyond that, there are procedural generation systems for creating characters, and that stuff is getting better (and more modular), and could be explored more as games get bigger. (Watch Dogs: Legion is basically the random generator becoming the gameplay system, as every character in the world is made from dice rolls & creation routines; I believe there isn't a singular "hero character" in it.)

Also, taking it back to Unreal a bit, there kind of is something in UE5 that might help: the Niagara particle system can actually be applied to improving crowds and pedestrian systems, with more complexity and intelligence applied on the fly to an object (in this case a character) when it needs to make a specific interaction with the world. So, much like the birds in the video below, which can fly around aimlessly until they come to land on a branch, pedestrians can become more complex in activity or reaction when they need to (with routines applied to some of those reactions, such as collision and IK) without being "needy" of computer resources the whole time the character is loaded into the game.
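That "pay for detail only when needed" idea can be sketched as a simple tier switch per NPC per frame. The thresholds, tier names, and the function itself are invented for illustration; Niagara's actual crowd tooling works nothing like this toy Python.

```python
# Toy animation-LOD picker: crowd agents stay cheap until proximity
# or an interaction demands the expensive path, then get upgraded
# just for those frames.

def update_pedestrian(distance_to_player, is_interacting):
    """Pick an animation tier for one NPC this frame."""
    if is_interacting:
        return "full_rig"        # facial animation, IK, collision
    if distance_to_player < 10.0:
        return "skeletal_only"   # body animation, no facial detail
    return "impostor"            # cheap batched/particle-style agent

tiers = [update_pedestrian(d, False) for d in (2.0, 15.0, 80.0)]
print(tiers)  # ['skeletal_only', 'impostor', 'impostor']
```

The budget win is that the expensive tier is paid for by a handful of NPCs at a time, not by everyone loaded into the world.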

 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I don't see UVs being gone anytime soon. I wish. The cost is still too much to go that route.
Auto-UV tech is on the up and up; even Adobe's Auto UV does a decent job.
Unreal Engine 5 doesn't give a shit about UVs because you inject your high-poly.
DCC renderers will get pretty good (great) results even without UVs, just using world space and other parameters.

I hate UVs more than I should, but ever since the Substance update and Unreal Engine 5 I've basically stopped fussing too much about UVs.
But face rigging basically had no workaround beyond using a MetaHuman or CC... now, with this tech.

 
All this facial capture tech... when are we gonna get a new cloud-based automated rigging platform to recreate asses using a 1TB library of 4D scan data? If anyone's doing this, let me know if you need a model.

Quick question: why only a 1TB library of 4D scan data? Why not 100TB? We are talking asses here.
 

bargeparty

Member
I know the boat guy is not typical of all NPCs in the 2018 Spider-Man game. I just used it because it's one of the more egregious examples I can remember from the past couple of years, and because it's funny.

By egregious I think you mean disingenuous, since if memory serves you're not really expected to get that close. It would be more a case of the player doing something they're not supposed to or expected to do, so a better model didn't draw in properly.
 

CamHostage

Member
Human faces are boring.
I want to see this tech used for monsters or creatures in general.

It's not that you can't use this type of technology to create creatures and monsters, per se. Different Ziva technology was used to do stuff like the animals in A Dog's Way Home; they do animal FX and aliens and stuff regularly in films and commercials.



(*not at all as gross as the thumbnail.)

The thing is, it's machine learning; the machine needs a ton of source material to learn from. It studies how humans move and, over many repeated attempts, applies what it has learned to a character model so that it's rigged to move in a human-like manner.

To do that ML, it needs tons of source material to analyze and run comparison against... and there's no footage of a monster or creature, only people and animals.

So you could apply human-like aspects to the monster, or you could train the ML on footage of an animal you want to incorporate into the creature. You could conceivably blend things to be totally unnatural even, combining two things that are natural but that don't fit together and the machine would have to fight to find common connection points. But if you wanted to create something that's never existed before, this probably isn't the approach for that. (That said, some of the other VFX technology they have like muscle simulation might come into play if you defined the anatomy of the creature down to how its tendons pulled its bones to move its body?)
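The "no footage, no learning" point can be made concrete with a toy version of the training step: here the "rig" is just the average per-vertex offset observed across example frames of an expression. With zero monster footage there is literally nothing to average. This is entirely invented for illustration; real 4D-scan training is vastly richer than a mean.

```python
# Toy "learning" step: average the offset from neutral across captured
# frames of one expression. Vertices are simplified to single floats.

def learn_expression(frames, neutral):
    """Average the per-vertex offset from neutral across capture frames."""
    if not frames:
        # The crux of the post: no source footage means no model.
        raise ValueError("no training data for this expression")
    n = len(neutral)
    deltas = [0.0] * n
    for frame in frames:
        for i in range(n):
            deltas[i] += (frame[i] - neutral[i]) / len(frames)
    return deltas

neutral = [0.0, 0.0]
human_smile_frames = [[0.9, 0.1], [1.1, 0.3]]  # two captured frames
print(learn_expression(human_smile_frames, neutral))  # approx [1.0, 0.2]
```

Calling `learn_expression([], neutral)` for a creature with no reference footage fails outright, which is the practical reason a "monster" rig ends up trained on human or animal data and inherits their movement.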

The ZRT Face Trainer system can be used to create inhuman creature faces, but they will ultimately be rigged to animate like weirdly-shaped humans. Still good, still useful, still something you could potentially do something totally not-boring with, but it'd be a digital "human" wearing a digital "mask", if you follow the analogy.

 
Last edited: