

New to OCN? · 26,919 Posts · Discussion Starter · #1
Quote:
DSOGaming: A lot of engines already support DX12, and CRYENGINE is one of them. Have you experimented with this new API and what are the performance benefits of it? What is your general opinion on DX12?

Rok Erjavec: Yes, the engine team at Crytek actually started doing work on DX12 code-path sometime mid-last year, and we're already seeing various benefits from this work in the recent release. The key aspect of working with lower-level APIs is explicit developer control over the rendering pipeline, which allows for better utilization of multi-processor resources, lower-overhead on traditionally expensive draw call and driver overhead for PC platforms, and it opens up possibilities for new workflows that were not viable with higher-abstraction APIs. One of the big differences from a game-development perspective is that it brings aspects of the PC graphic pipeline closer to what we have been able to leverage on consoles with their close-to-hw APIs already, effectively allowing us to change how we approach cross-platform development.
Source
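To put the "explicit developer control" and multi-core points in more concrete terms, here's a minimal D3D12-style sketch (not Crytek code; device/swapchain/PSO setup, error handling and the actual draw calls are elided, and names like kWorkerCount are purely illustrative) of several worker threads each recording their own command list, with a single submission from the main thread:
Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
// Link against d3d12.lib.

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc{};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    const int kWorkerCount = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kWorkerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < kWorkerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each worker records into its own command list: no driver-side lock and
        // no implicit state tracking -- keeping that correct is now the app's job.
        workers.emplace_back([&lists, i] {
            // ... record draws / barriers for this thread's slice of the scene ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission of everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}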
 

Registered · 3,397 Posts
I just want an FPS from them without console-itis
 

Registered · 966 Posts
At this point I feel Crytek is just being Crytek because the people at Crytek have never had a different job before. So now they're just trying to stave off early retirement by releasing tech demos and engine updates.

Smart, really.
 

Registered · 2,073 Posts
After playing Ryse: Son of Crap

I don't care about Crytek anymore.

I'll just remember Crysis as a very good game with some annoying parts.
 

Registered · 308 Posts
Quote:
Rok Erjavec: Currently we're using Asynchronous Compute in VR development, where a technique known as Async-Timewarp/Reprojection is used by VR APIs to reduce latency and offset the negative effect of occasional frame drops. We're also in the process of evaluating and experimenting with other uses for it. The possibilities are interesting, but at least for the time being, it also introduces non-trivial added complexity in PC space due to lack of standardization and the relative immaturity of the hardware that supports it, where many GPUs come with sometimes problematic restrictions on pre-emption granularity, etc.
This particular quote on Async Compute is really interesting. On the one hand, the dude admits that the "lack of standardization" *coughnvidiacough* is inhibiting the practical value of coding for it. On the other hand, he's saying how integral it is to VR development, which really underscores just how important virtual reality is to the advancement of computer graphics technology (if not technology as a whole), both in terms of hardware and software. I'm looking forward to all the big companies pushing the envelope of their products and adopting low-overhead APIs faster in a race to create a mainstream VR market that needs all of these technologies at an affordable price.
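For anyone wondering what async compute actually is at the API level, here's a rough D3D12-flavoured sketch, not any engine's real code: the compute work (say, a reprojection pass) goes on a second, COMPUTE-type queue, and a fence makes the graphics queue wait only at the point where the results are consumed. How well the two queues actually overlap, and how pre-emption behaves, is exactly the per-GPU granularity issue he's alluding to. (A real engine would create the queue and fence once, not per submission.)
Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Kick already-recorded compute work (e.g. a timewarp/reprojection pass) on a
// dedicated compute queue so it can run alongside the graphics queue.
void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandList* const* computeLists,
                        UINT listCount)
{
    D3D12_COMMAND_QUEUE_DESC desc{};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // separate from the DIRECT/graphics queue
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Submit the compute work and signal the fence when it finishes.
    computeQueue->ExecuteCommandLists(listCount, computeLists);
    computeQueue->Signal(fence.Get(), 1);

    // The graphics queue only stalls where it needs the results,
    // so both queues can overlap until then.
    graphicsQueue->Wait(fence.Get(), 1);
}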
 

Registered · 2,009 Posts
^Are you guys serious? That's a good article; it gives you an insight into present and future graphics. Whether it's the highly praised Crytek or the console-focused Ubisoft, the lead engineers know their stuff.
Quote:
Rok Erjavec: As with every new technology stack, there's a learning curve that we're currently seeing and working through. DX10-11 pipelines have had the benefit of multiple years of focused driver optimization and work-arounds specific to their workflow, and the paradigm shift that comes with DX12 effectively erases some of the benefits of this legacy.
Every time I see a past tense comment about how DX12 is a fail, did nothing, has done nothing. Just... OCN in a nutshell.
Quote:
To go a bit more in-depth, the driving principle behind PBR is that the entire pipeline (rendering and asset creation workflow) respects energy conservation. Which takes away a lot of the "cheats" that artists used in the past where values could be used in "any" range as long as the output looked "good" to the artist's eyes - but this came with the side effect that most of the time, such hacks would only work in a specific lighting environment or with specific camera views.
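As a toy illustration of what that energy conservation means in code (a generic sketch, nothing from CRYENGINE): the shading terms are normalized so a surface never reflects more light than it receives, instead of the artist dialing values until it looks right under one light setup.
Code:
#include <algorithm>
#include <cstdio>

struct RGB { float r, g, b; };

// Lambert diffuse term: albedo / pi. The 1/pi is the normalization that keeps
// the total reflected energy at or below the incoming energy when integrated
// over the hemisphere.
RGB LambertDiffuse(RGB albedo) {
    const float kInvPi = 1.0f / 3.14159265f;
    return { albedo.r * kInvPi, albedo.g * kInvPi, albedo.b * kInvPi };
}

// An energy-conserving material also keeps diffuse + specular from summing past
// one: whatever fraction of the light goes to specular is removed from diffuse,
// rather than stacking both at full strength.
RGB ConservingDiffuse(RGB baseColor, float specularFraction) {
    float k = 1.0f - std::min(std::max(specularFraction, 0.0f), 1.0f);
    return { baseColor.r * k, baseColor.g * k, baseColor.b * k };
}

int main() {
    RGB albedo = { 0.8f, 0.2f, 0.2f };
    RGB d = LambertDiffuse(ConservingDiffuse(albedo, 0.3f));
    std::printf("diffuse term = %.3f %.3f %.3f\n", d.r, d.g, d.b);
    return 0;
}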
Quote:
Rok Erjavec: As of last year, we have a new GI solution available that uses voxel tracing to light the scene, which was first released as an experimental feature sometime in summer 2015. This algorithm works in two steps, an async-job that continually voxelizes relevant scene geometry, and the actual lighting stage which traces rays against the generated voxel data to compute the light bounces. Without going into too much detail (you can read more on the CRYENGINE documentation site), default configuration provides AO and indirect light contribution - without the usual quality compromises that come with screen-space methods, and still with performance usable with midrange graphics hardware, including the current generation of game consoles.
Quote:
With all this said, progress is iterative, and using a variety of acceleration structures, more and more techniques are now at least partly in the domain of tracing rays. Our new Global Illumination solution using voxel tracing, for instance, is a great example of using it to get a part of the solution.

Good stuff, but it comes at a cost. These early GI methods are very low resolution, though they do the trick for the time being. This speaks to the loads of people complaining that modern games run sub-60 fps; a lot of it has to do with the lighting landscape changing to GI (not all of it, obviously), since a game's Ultra presets may use a higher frequency of tracing than the console version. It really makes you wonder about the viability of 4K: higher-sampling GI will definitely push 60 fps further out of reach at that resolution.
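For reference, the voxelize-then-trace split he describes above boils down to something like this in structure (all the types and names here are made up for illustration, nothing from the actual SVOGI code): an async job keeps a voxel version of the scene fresh while the lighting pass traces against whichever volume finished last.
Code:
#include <chrono>
#include <functional>
#include <future>
#include <memory>

struct Scene {};            // placeholder for "relevant scene geometry"
struct VoxelVolume {};      // placeholder for the voxelized scene
struct LightingBuffers {};  // placeholder for AO / indirect-light outputs

// Step 1: rebuild the voxel representation. Runs as an async job so the render
// thread is never stalled by it.
VoxelVolume VoxelizeScene(const Scene& scene) {
    (void)scene;
    return VoxelVolume{};
}

// Step 2: trace rays/cones against the voxel data to gather AO and indirect
// light bounces for the visible pixels.
LightingBuffers TraceVoxelGI(const Scene& scene, const VoxelVolume& voxels) {
    (void)scene; (void)voxels;
    return LightingBuffers{};
}

void RenderFrames(const Scene& scene, int frameCount) {
    auto voxels = std::make_shared<VoxelVolume>(VoxelizeScene(scene));
    std::future<VoxelVolume> pending;  // in-flight voxelization job, if any

    for (int frame = 0; frame < frameCount; ++frame) {
        // Pick up a finished voxelization result if one is ready.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            voxels = std::make_shared<VoxelVolume>(pending.get());
        }
        // Keep a voxelization job running in the background.
        if (!pending.valid()) {
            pending = std::async(std::launch::async, VoxelizeScene, std::cref(scene));
        }
        // Light the frame against the most recent completed voxel volume.
        LightingBuffers gi = TraceVoxelGI(scene, *voxels);
        (void)gi;  // would feed into the rest of the frame's shading
    }
}

int main() {
    Scene scene;
    RenderFrames(scene, 3);
    return 0;
}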
Quote:
PBR brings benefits
Yeah, it's cheap and gets the job done
 

Registered · 1,713 Posts
Quote:
Originally Posted by umeng2002

I just want an FPS from them without console-itis
Uh, Star Citizen is coming along. It has a really complex real-physics, real-recoil, real-space, real-fleeping, real FPS inside it in development. We may have a free-access weekend when 2.4 goes "live" for peeps to play around in. SC is using a 64-bit, multithreaded CryEngine, and they're tracking Crytek's development work (Sean Tracy works at CIG now) to include in later engine builds.

But the game is really, really, really early right now, probably not fun for people who are used to console "spawndead" FPS gaming.

Definitely fits the "FPS without console-itis" category tho, I guarantee it.

https://www.youtube.com/watch?v=Mijmzvy6uUQ

I seriously guarantee it.
 

Registered · 3,397 Posts
Quote:
Originally Posted by prjindigo

Uh, Star Citizen is coming along. It has a really complex real-physics, real-recoil, real-space, real-fleeping, real FPS inside it in development. We may have a free-access weekend when 2.4 goes "live" for peeps to play around in. SC is using a 64-bit, multithreaded CryEngine, and they're tracking Crytek's development work (Sean Tracy works at CIG now) to include in later engine builds.

But the game is really, really, really early right now, probably not fun for people who are used to console "spawndead" FPS gaming.

Definitely fits the "FPS without console-itis" category tho, I guarantee it.

https://www.youtube.com/watch?v=Mijmzvy6uUQ

I seriously guarantee it.
Oh, yeah. Totally forgot about Star Citizen since it's been in development forever.
 

Waiting for 7nm EUV · 11,521 Posts
Quote:
In conjunction with the fact that the early DX12 titles have effectively been ports of DX11-optimized assets and codebases, we are not yet seeing software that really targets DX12 strengths, and in some ways, teams have to specifically re-learn how to fix things that used to be the responsibility of driver development teams in DX11.
Journalists only look ahead into the future for some things, but never for others, which is quite worrisome. What I'd really like is for some journalist to ask the difficult question: if the burden of optimization has been passed on to DX12 game developers, how will a DX12 game from 2016 run on 2020 GPU hardware? Will it run at all? Game developers usually don't care to optimize games that are over a year old (some developers may not even exist after a while), so what will the landscape look like in the future?

Will GPU makers have to go back to adding a driver compatibility layer for them to work with a new GPU architecture? And will that compatibility layer translate to a loss of performance, thus offsetting the performance improvements of upgrading to a newer card, meaning you'll only see real performance improvements in a given game by skipping entire GPU generations?
 

Registered · 2,562 Posts
Quote:
Originally Posted by sixor

After playing Ryse: Son of Crap

I don't care about Crytek anymore.

I'll just remember Crysis as a very good game with some annoying parts.
Damn your avatar! I tried to peel the debris off my screen. But it was in your picture
 

Original 16-bit Genesis® · 1,757 Posts
Quote:
Originally Posted by ChevChelios

this terrifies me tbh
It definitely passes the buck to developers. So if they want to make questionable decisions, it's 100% on them when the blamehammer swings around.
 

New to OCN? · 26,919 Posts · Discussion Starter · #14
Quote:
Originally Posted by sixor

After playing Ryse: Son of Crap

I don't care about Crytek anymore.

I'll just remember Crysis as a very good game with some annoying parts.
Try Homefront: The Revolution
Quote:
Originally Posted by tpi2007

Journalists only look ahead into the future for some things, but never for others, which is quite worrisome. What I'd really like is for some journalist to ask the difficult question: if the burden of optimization has been passed on to DX12 game developers, how will a DX12 game from 2016 run on 2020 GPU hardware? Will it run at all? Game developers usually don't care to optimize games that are over a year old (some developers may not even exist after a while), so what will the landscape look like in the future?

Will GPU makers have to go back to adding a driver compatibility layer for them to work with a new GPU architecture? And will that compatibility layer translate to a loss of performance, thus offsetting the performance improvements of upgrading to a newer card, meaning you'll only see real performance improvements in a given game by skipping entire GPU generations?
A new GPU architecture should either be similar to the current one (GCN), or they could patch games for the new hardware with updates.
Quote:
Originally Posted by Omega X

It definitely passes the buck to developers. So if they want to make questionable decisions, it's 100% on them when the blamehammer swings around.
If console ports were broken at launch before, now expect worse performance from those who don't team up with the ones that have experience with low-level APIs (AMD, DICE, Oxide).
 

Registered · 150 Posts
Quote:
Originally Posted by ChevChelios

this terrifies me tbh
This is why DX12/Vulkan adoption is so slow. The devs literally have to take nearly all the responsibility for performance issues instead of passing the buck onto the driver teams (the recent release of Doom is a great example of this).
 

Registered · 3,113 Posts
Quote:
Originally Posted by Omega X

It definitely passes the buck to developers. So if they want to make questionable decisions, it's 100% on them when the blamehammer swings around.
If the recent DOOM case is any example, people will blame Nvidia no matter what.
The developers are usually safe unless it's an Arkham Knight situation.
 

OC Enthusiast · 1,054 Posts
I heard this a year ago and there are still no DX12 games using their engine...
Also, making a PC game from scratch is suicide; most developers are focused on consoles because that's where the profit is.
This is why gaming has been stagnant over the last decade: no breakthroughs at all, just the same old rinse and repeat. There is VR, actually, but it has a long road to mature.
 

Consumerism 101 · 4,098 Posts
Quote:
DSOGaming: Does CRYENGINE support Vulkan and have you experimented with it? If you had to choose one, would you go with DX12 or Vulkan?

Rok Erjavec: The current trajectory of Vulkan provides a path to reach PC users across all the widely used OS platforms, including legacy Windows versions, as well as mobile devices, so if I was shipping a title in 2017 or beyond, Vulkan looks like an appealing choice.

If we implemented Vulkan in CRYENGINE, we wouldn't have to choose one, as titles built with our tech would work seamlessly with both, and thus leave this choice with the users instead.
That is a big IF...
 

Waiting for 7nm EUV · 11,521 Posts
Quote:
Originally Posted by PontiacGTX

A new GPU architecture should either be similar to the current one (GCN), or they could patch games for the new hardware with updates.
How do you know what GPU architectures are going to look like in, say, 4 years? You're assuming that current GCN and Pascal (which is a revised Maxwell) will just be the backwards compatible basis for every single GPU from now on. That would seriously hamper the GPU makers' freedom to come up with new designs.

And who is "they"? Game developers usually don't care about games after the first year, so why would they continuously go back and patch old games? In a few years some developers may not even be around, but until now that was OK, because it was the GPU makers that ensured a big part of the compatibility with newer hardware. Or do you mean the GPU makers? If that is what you mean, it doesn't answer my question at all.

I'm calling into question the very basis of DX 12 and Vulkan. Are they too close to the metal or are they abstract enough to be able to handle the future? Why is nobody talking about this? Are we about to trade short term performance gains for a hell of a future where games require patches but the developers aren't willing (if they are still around) and the GPU makers will be throwing blame around because they don't want to have the expense of fine tuning again? And if they do, what will the performance penalty be for adding driver overhead back into the equation?

These clean-slate, close-to-the-metal APIs sound great at first, but at what cost do they come, versus, say, improving the CPU multi-threading capabilities of the current APIs? Nobody has shown that to be impossible, and there is no reason why it should be.
 

Registered · 3,397 Posts
From the standpoint of market competition, you'd want the driver teams out of it.

That's the way PCs are "supposed" to work. You have software... and the software programmers make it work correctly.

Drivers should never, ideally, be that important to performance. When they are, it just means game programmers weren't doing their jobs well, or lots of cheats and hacks were happening in the drivers.

Look what it took to finally get a multi-threaded render API: AMD making Mantle, AMD/Sony/MS putting 8 low-powered CPU cores in the consoles, MS reacting to Mantle, Khronos making Vulkan, console game developers using these techniques, and those techniques filtering into PC ports.

That's a lot of steps and learning to get something that, at a hardware level, PCs should have been doing 12 years ago.

So let there be a steep learning curve, it will be worth it in the end.
 