Overclock.net - An Overclocking Community

Thread: [TechSpot] Nvidia showcases an AI-rendered, interactive virtual world
  Topic Review (Newest First)
12-11-2018 12:39 PM
looniam That Verge article is a lot more informative...

Nvidia has created the first video game demo using AI-generated graphics

Quote:
Nvidia’s system generates graphics using a few steps. First, researchers have to collect training data, which in this case was taken from open-source datasets used for autonomous driving research. This footage is then segmented, meaning each frame is broken into different categories: sky, cars, trees, road, buildings, and so on. A generative adversarial network is then trained on this segmented data to generate new versions of these objects.

Next, engineers created the basic topology of the virtual environment using a traditional game engine. In this case the system was Unreal Engine 4, a popular engine used for titles such as Fortnite, PUBG, Gears of War 4, and many others. Using this environment as a framework, deep learning algorithms then generate the graphics for each different category of item in real time, pasting them on to the game engine’s models.

“The structure of the world is being created traditionally,” explains Catanzaro, “the only thing the AI generates is the graphics.” He adds that the demo itself is basic, and was put together by a single engineer. “It’s proof-of-concept rather than a game that’s fun to play.”

To create this system Nvidia’s engineers had to work around a number of challenges, the biggest of which was object permanence. The problem is, if the deep learning algorithms are generating the graphics for the world at a rate of 25 frames per second, how do they keep objects looking the same? Catanzaro says this problem meant the initial results of the system were “painful to look at” as colors and textures “changed every frame.”

The solution was to give the system a short-term memory, so that it would compare each new frame with what’s gone before. It tries to predict things like motion within these images, and creates new frames that are consistent with what’s on screen. All this computation is expensive though, and so the game only runs at 25 frames per second.
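The "short-term memory" idea can be shown with a crude stand-in: blend each freshly generated frame with the previous output so textures cannot change wholesale between frames. The real system conditions the network on past frames and estimated motion; the weighted blend and `memory` weight below are illustrative assumptions.

```python
import numpy as np

def temporally_smooth(prev_frame, raw_frame, memory=0.6):
    """Blend the newly generated frame with the previous output.

    With memory > 0, colors and textures can only drift gradually
    between frames instead of flickering every frame.
    """
    if prev_frame is None:  # first frame: nothing to compare against
        return raw_frame
    return memory * prev_frame + (1.0 - memory) * raw_frame

prev = np.zeros((2, 2, 3))           # previous output: all black
raw = np.full((2, 2, 3), 100.0)      # raw new frame: abrupt change
smoothed = temporally_smooth(prev, raw)
```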
12-11-2018 12:06 PM
PharmingInStyle The Verge article, which is mentioned in the TechSpot article above, says a single video card was used to show the demo: the Titan V. Not saying one card was used to create the demo city, just to "power the demo," as quoted from their article below.

https://www.theverge.com/2018/12/3/1...g-demo-neurips
12-11-2018 10:41 AM
WannaBeOCer
[TechSpot] Nvidia showcases an AI-rendered, interactive virtual world

Source: https://www.techspot.com/news/77681-...ual-world.html

Quote:
Using real-world video footage, Nvidia managed to train an AI model to create what appears to be a living, breathing city (or a part of a city), with remarkably realistic graphics.
The graphics aren't remarkable, but a machine being able to create a virtual world from just footage is incredible.
