|Topic Review (Newest First)|
|12-11-2018 12:39 PM|
That Verge article is a lot more informative . .
Nvidia has created the first video game demo using AI-generated graphics
Nvidia’s system generates graphics in a few steps. First, researchers have to collect training data, which in this case was taken from open-source datasets used for autonomous driving research. This footage is then segmented, meaning each frame is broken into different categories: sky, cars, trees, road, buildings, and so on. A generative adversarial network is then trained on this segmented data to generate new versions of these objects.
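To make the segmentation step concrete, here is a minimal sketch (not Nvidia's code) of how a per-pixel class map like the one described above is typically turned into the one-hot tensor that conditions an image-to-image GAN. The class list and the tiny example frame are illustrative assumptions:

```python
import numpy as np

# Hypothetical label IDs for the categories named in the article.
CLASSES = ["sky", "car", "tree", "road", "building"]

def to_onehot(seg_map: np.ndarray, n_classes: int) -> np.ndarray:
    """Convert an (H, W) integer segmentation map into an
    (H, W, n_classes) one-hot tensor, the usual conditioning
    input for a segmentation-driven GAN generator."""
    return np.eye(n_classes, dtype=np.float32)[seg_map]

# A toy 2x3 "frame": sky across the top row, road and a car below.
frame = np.array([[0, 0, 0],
                  [3, 1, 3]])
onehot = to_onehot(frame, len(CLASSES))
print(onehot.shape)  # (2, 3, 5)
```

Each pixel carries exactly one active channel, so the generator sees *what* is at every location without being told how it should look.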
Next, engineers created the basic topology of the virtual environment using a traditional game engine. In this case the system was Unreal Engine 4, a popular engine used for titles such as Fortnite, PUBG, Gears of War 4, and many others. Using this environment as a framework, deep learning algorithms then generate the graphics for each different category of item in real time, pasting them onto the game engine’s models.
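The division of labor described above, where the engine supplies structure and a network supplies appearance, can be sketched as a two-stage per-frame pipeline. This toy version (entirely hypothetical; a flat color palette stands in for the trained GAN) only shows the data flow:

```python
import numpy as np

# Stand-in palette: the real system runs a trained GAN here; we just
# paint each class a flat RGB colour to show where the network slots in.
PALETTE = {0: (135, 206, 235),  # sky
           1: (60, 60, 60),     # road
           2: (200, 0, 0)}      # car

def engine_layout() -> np.ndarray:
    """Toy 'game engine' stage: rasterize the scene into a per-pixel
    class-ID map instead of a textured image."""
    layout = np.zeros((4, 6), dtype=np.int64)
    layout[2:, :] = 1          # road fills the lower half
    layout[2, 2] = 2           # one car on the road
    return layout

def generate_frame(layout: np.ndarray) -> np.ndarray:
    """Toy 'neural renderer' stage: turn the class map into RGB pixels."""
    h, w = layout.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, colour in PALETTE.items():
        img[layout == cls] = colour
    return img

frame = generate_frame(engine_layout())
print(frame.shape)  # (4, 6, 3)
```

Swapping `generate_frame` for a learned model is the whole trick: the engine never stores textures, only the semantic layout.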
“The structure of the world is being created traditionally,” explains Bryan Catanzaro, Nvidia’s vice president of applied deep learning. “The only thing the AI generates is the graphics.” He adds that the demo itself is basic, and was put together by a single engineer. “It’s proof-of-concept rather than a game that’s fun to play.”
To create this system, Nvidia’s engineers had to work around a number of challenges, the biggest of which was object permanence. The problem is that if the deep learning algorithms are generating the graphics for the world at a rate of 25 frames per second, how do they keep objects looking the same from one frame to the next? Catanzaro says this problem meant the initial results of the system were “painful to look at” as colors and textures “changed every frame.”
The solution was to give the system a short-term memory, so that it would compare each new frame with what’s gone before. It tries to predict things like motion within these images, and creates new frames that are consistent with what’s on screen. All this computation is expensive though, and so the game only runs at 25 frames per second.
|12-11-2018 12:06 PM|
The Verge, which is mentioned in the TechSpot article above, says a single video card, a Titan V, was used to show the demo. That is not saying one card was used to create the demo city, just to "power the demo," as quoted from their article below.
|12-11-2018 10:41 AM|
[TechSpot] Nvidia showcases an AI-rendered, interactive virtual world
Using real-world video footage, Nvidia managed to train an AI model to create what appears to be a living, breathing city (or a part of a city), with remarkably realistic graphics.