Originally Posted by bowman
It's not an 'image'; they scan a person in this 'LightStage' rig, which records very dense 3D data (you saw the 32-million-polygon face; you couldn't make out a single polygon) along with the complete animation of the person, like motion capture except it takes care of the 3D artwork as well. Then they texture and shade it to make it look like a person, and it's photorealistic. I mean, it's there, running in real time: a photorealistic 3D rendering of a person.
Intel has been beating the ray-tracing drum for the past few months, and Nvidia's response has been: no way, it's not going to happen, that's not the near future. That's too far away; hybrid rendering is the key. And here it is, in real time, running on sub-$200 consumer graphics cards.
Yes, I understood that it's a rendered 3D object, LOL--I did watch the video. My point is that what they've done thus far is pretty much take real, live video of a person, digitize/texturize it, and generate realistic shadows and so on (pretty much to photo-realism). We call each rendered frame an image, FYI--that's why I used the term "image." This is all fine and dandy, and I don't mean to minimize how cool it is, but they can't render something that wasn't pre-recorded. That's what I'm saying would be interesting.
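To make that distinction concrete, here's a minimal C++ sketch of the two approaches being contrasted. Everything in it is hypothetical and invented for illustration--none of these types or functions come from the demo: replaying captured scan frames can only show poses that were recorded, while a parameterized rig can deform the mesh into poses that were never captured.

// Hypothetical sketch, NOT the demo's actual pipeline: playback of
// captured frames vs. synthesis of novel poses from a parameterized rig.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// One captured frame: a dense vertex snapshot from the scan rig,
// fixed at capture time.
struct CapturedFrame {
    std::vector<Vec3> vertices;
};

// Playback: the renderer can only ever show poses that were recorded.
const CapturedFrame& playback(const std::vector<CapturedFrame>& clip, int t) {
    return clip[t % clip.size()];
}

// Synthesis: a parameter drives a deformation of a base mesh, so poses
// that were never captured become possible.
std::vector<Vec3> synthesize(const std::vector<Vec3>& baseMesh, float jawOpen) {
    std::vector<Vec3> out = baseMesh;
    for (auto& v : out)
        if (v.y < 0.0f)              // crude stand-in for a jaw-weighted region
            v.y -= jawOpen * 0.1f;   // move "jaw" vertices by the parameter
    return out;
}

int main() {
    std::vector<Vec3> base = {{0, 1, 0}, {0, -1, 0}};
    std::vector<CapturedFrame> clip = {{base}, {base}};

    // Replay is limited to what's in the recorded clip...
    const CapturedFrame& f = playback(clip, 3);
    std::printf("playback frame has %zu verts\n", f.vertices.size());

    // ...while the rig can produce a pose that was never captured.
    std::vector<Vec3> novel = synthesize(base, 0.5f);
    std::printf("synthesized jaw vertex y = %.2f\n", novel[1].y);
}

The point of the sketch: playback is bounded by the capture session, while synthesis is bounded only by the expressiveness of the rig. A real system would use blendshapes or a skeletal rig rather than this toy y-threshold, but the capability gap is the same one described above.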