Unreal Engine's Human CGI is So Real it's Unreal

ChaosEnginesays...

Sorry, not quite there yet. There is no way anyone would actually look at that and think "oh, it's a video of a human".

The uncanny valley is one of those instances where the closer you get to perfection, the more obvious the flaws are.

But in terms of a video game character, this is very, very good.

I would love to know a few more details about it:
- how expensive is the rendering? We're just seeing a face on its own. If we drop it into an actual scene, will it still run?

- how well does it animate/lip sync?

raviolisays...

The reason it doesn't look quite there yet is that only one modifier is applied at a time, for the purposes of the demo. Imagine the possibilities if multiple modifiers were applied together. Also, the rendering is done in real time, which is in itself pretty amazing.

ChaosEnginesaid:

Sorry, not quite there yet. There is no way anyone would actually look at that and think "oh, it's a video of a human".

SeesThruYousays...

In motion, there are subtle clues that tell you it's not quite human. BUT, if you were to manipulate it into an expression and take a screenshot, I don't think I'd be able to tell it wasn't real. It's the muscle range, the way the wrinkles appear, the way the blood coloration changes in response to the tightness of the skin, the pores, the subsurface scattering, the facial hairs, etc. Incredibly convincing on a whole new level, even extremely close up, which is where the illusion normally breaks down. Very, very impressive.

ChaosEnginesays...

Yeah, the real-time aspect of it is insanely good, although I'd still like to know how much of the rendering budget it takes up, i.e. is this usable in a game or just a research project at the moment?

What do you mean by "only one modifier is being applied"? That's my other criticism of the video: a voiceover explaining the tech would have been more interesting than the music.

I don't believe that "multiple modifiers" would make this look better, for the simple reason that if you're demoing a technology like this, you show it with ALL the bells and whistles turned on to make it look as good as possible.

raviolisaid:

The reason it doesn't look quite there is because only one modifier is applied at a time, for the purpose of the demo.

Khufusays...

What you saw was a mesh with a skin shader rendering in real time, so that's how fast it renders. It didn't look terribly hi-res; the real advancement here is the quality of the skin shader (for real-time) and the fidelity of the facial rig. Having proper face target shapes all blending together to get complex movements, with skin compression/stretching/wrinkling at this level, has historically been out of reach for anything but pre-rendered CGI.

They can probably drop libraries of mocap data onto this, with face markers that match those manipulation points you see in the video, and animators can use them to animate or to clean up/change the motion-capture data.

And the skin textures/pore detail/face model are not a technological achievement so much as the work of a skilled artist, and the deformations are the result of someone who really knows their anatomy.

Since there is no animation in this video, no performance, it's hard to judge how realistic it feels. The real trick is always seeing it animated.
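The "face target shapes all blending together" idea mentioned above is classic linear blendshape animation: each target shape stores per-vertex offsets from the neutral mesh, and the rig adds them in, scaled by per-expression weights. A minimal sketch (the toy mesh, shape names, and weights are all made up for illustration):

```python
import numpy as np

# Toy 3-vertex "neutral" face mesh (real face meshes have tens of thousands).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

# Each target shape is stored as per-vertex offsets from neutral.
smile = np.array([[0.0, 0.1, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.0]])
frown = np.array([[0.0, -0.1, 0.0], [0.0, -0.1, 0.0], [0.0, 0.0, 0.0]])

def blend(neutral, shapes, weights):
    """Linear blendshape: posed = neutral + sum(w_i * offsets_i)."""
    out = neutral.copy()
    for offsets, w in zip(shapes, weights):
        out += w * offsets
    return out

# 75% smile mixed with 25% frown.
posed = blend(neutral, [smile, frown], [0.75, 0.25])
```

The hard part Khufu is pointing at isn't this sum; it's authoring target shapes and corrective logic good enough that combined expressions still deform like real skin.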

ChaosEnginesaid:

Sorry, not quite there yet. There is no way anyone would actually look at that and think "oh, it's a video of a human".

ChaosEnginesays...

I know that, but shaders can be simple or complex. What we're seeing here is a single face with no background.

In a game, there would be lots of other stuff happening, all competing for the compute budget.

Khufusaid:

what you saw was a mesh with a skin shader rendering in real-time so that's how fast it renders.

raviolisays...

What this company (snapperstech.com) did is put together an upgraded control "rig" to manipulate facial expressions, taking into account muscle limits and interactions, skin elasticity, etc.

A little more info from the video's YouTube description:
- Adaptive rig: allows combining any number of expressions using an optimized list of blendshapes.
- Real facial muscle constraints: the advanced rig logic simulates real facial muscle constraints.
- Advanced skin shader (for Maya and Unreal): holds up to 16 wrinkle maps and 16 dynamic diffuse maps with micro details and pore stretching.
- Easy to manipulate using facial controllers and/or GUI.
- Compatible with all game engines and animation packages.
- Smooth transition between all the expressions.
- Adjustment layer: freeform manipulation of multiple regions of the face to create unlimited variations of the same expression.

The real-time rendering part is handled by the Unreal engine itself; final rendering performance still depends mainly on the hardware used.
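The "up to 16 wrinkle maps" point is presumably the usual dynamic-wrinkle approach: the shader adds each wrinkle map's tangent-space normal perturbation, scaled by the weight of the expression driving it, then renormalizes. A hedged sketch of that blend in Python (the tiny textures, map names, and weights are invented for illustration; the actual Snappers shader internals aren't public):

```python
import numpy as np

H, W = 2, 2  # tiny stand-in "textures"

# Flat tangent-space normal map: every texel points straight up (+z).
base = np.zeros((H, W, 3)); base[..., 2] = 1.0

# Two hypothetical wrinkle maps, stored as normal perturbations.
brow = np.zeros((H, W, 3)); brow[..., 0] = 0.3  # furrowed-brow creases
crow = np.zeros((H, W, 3)); crow[..., 1] = 0.2  # crow's-feet creases

def blend_wrinkles(base, maps, weights):
    """Add each wrinkle map scaled by its expression weight, renormalize."""
    out = base.copy()
    for m, w in zip(maps, weights):
        out += w * m
    # Normals must stay unit-length for lighting to be correct.
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

# Brows fully furrowed, eyes half-squinted.
shaded = blend_wrinkles(base, [brow, crow], [1.0, 0.5])
```

In the real shader this runs per-pixel on the GPU, with the weights driven by the same rig controls that drive the blendshapes.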

ChaosEnginesaid:

Yeah, the real-time aspect of it is insanely good, although I'd still like to know how much of the rendering budget it takes up, i.e. is this usable in a game or just a research project at the moment?

Khufusays...

Oh ya, I misread your post. I think this fidelity is probably doable in games now, or maybe soon, since games like Star Citizen are able to load SO MUCH geometry into a scene. It sounds like they are optimizing quite a bit by using vertex offsets for the face shapes instead of having to load all the extra geo as target shapes.
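The memory argument for offsets over full copies is easy to see with back-of-the-envelope numbers. A sketch under assumed figures (vertex count, shape count, and the fraction of the face each shape moves are all guesses, not numbers from the video):

```python
# Assumed figures for a hero face rig.
n_vertices = 50_000      # vertices in the face mesh
n_shapes = 100           # expression target shapes
moved_fraction = 0.1     # a typical shape only moves part of the face

# Storing a full mesh copy per shape: xyz float32 per vertex.
full_copies = n_shapes * n_vertices * 3 * 4

# Storing sparse offsets: xyz float32 plus an int32 vertex index,
# but only for the vertices that actually move.
sparse_offsets = n_shapes * int(n_vertices * moved_fraction) * (3 * 4 + 4)

ratio = full_copies / sparse_offsets
```

With these numbers the sparse representation is several times smaller, and the gap grows as shapes become more localized.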

ChaosEnginesaid:

I know that, but shaders can be simple or complex. What we're seeing here is a single face with no background.

RedSkysays...

To be overly anal about it, the uncanny valley refers more to the eeriness or revulsion 'peak' of a near-lifelike replica. Think dead eyes in Polar Express or the plastic faces of Japan's android models.

This, to me at least, is beyond that trough on the uncanny valley chart; I don't sense any revulsion personally. I can of course tell it's still not real, but no particular facet triggers that doppelganger reaction.
