Wish there were an option to turn on ray tracing for the main objects and keep traditional rendering on the unnoticeable background. 120 fps is the new norm, so 4K/60 for eye-candy replays/highlights/cinematics and 3440x1440 at 120 fps for gameplay would be preferred...
With next-gen consoles offering 4K/1440p/HDR/RT along with 8 cores/16 threads (and keyboard/mouse support now), I know I won't be alone going that route next year for all those juicy games I want. By the time I'd reach the price of just an Ampere card, I could have a console and several games.
Yep, I'm seeing way less hate than usual for the new consoles. Though I don't spend much time on Reddit, where I'm sure the shills have had many epic battles over the topic. I'm trying to hold out for the flagships that come AFTER the consoles drop, not the last few cards released in the run-up to the console launches, which are the last cash grabs of a generation. Unfortunately, I had to grab a 5700 XT to hold me over due to my 1080 Tis being fubar.
Either way, when was the last time a console actually forced PCs to do anything? This time around it's forcing HDMI 2.1 adoption.
R.I.P. Zawarudo, may you OC angels' wings in heaven.
I've not found any technical documents from AMD/Radeon about mesh shaders. If someone finds any presentations/demos from AMD about mesh shaders, please link me. From what I understood, AMD is doing something similar through Next-Generation Geometry (NGG): a unification of several legacy pipeline stages into surface shader(s), which have control over tessellation and geometry LOD. But I'm going from memory (I'm sure I missed something), and it's not clear to me whether this landed in the December 2019 performance update for Navi or whether it's coming in RDNA 2.0.
However, as I said before, AMD has their own patented approach to RT. Microsoft is catering to both.
I have updated the Metaballs2 demo with a compute path, allowing it to run on GPUs without mesh shader support. The final performance result is that it's pretty much a wash between mesh shaders and compute. Compute needs intermediate memory, though.
The compute shader pulls ahead at very high grid density, whereas the mesh shader gets a clear win at lower density. The reason for this is unclear to me; intuitively I was expecting the reverse if any difference emerged at all, i.e. more geometry to dump out -> slower for compute.
That looks like the difference between how games are built for console (AMD) and how Nvidia makes use of it.
Currently the compute shader path operates very similarly to the mesh shader path: a task-shader-equivalent pass runs first, followed by a mesh-shader-equivalent pass. In both implementations, the task shader part seems to be the most expensive.
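To make that two-pass structure concrete, here's a toy CPU sketch in Python (not the actual demo code; the field function, grid bounds, and per-cell output are all my own assumptions). Pass 1 plays the role of the task shader and writes surviving cell indices to an intermediate buffer, which is exactly the extra memory the mesh shader path avoids; pass 2 plays the role of the mesh shader, expanding each surviving cell into triangles.

```python
# Toy CPU sketch of the two-pass compute path. Pass 1 ("task shader"
# equivalent) culls grid cells and writes survivors to an intermediate
# buffer; pass 2 ("mesh shader" equivalent) expands survivors into
# geometry. Field function, bounds, and outputs are illustrative only.

def field(p, balls):
    """Classic metaball field: sum of r^2 / dist^2 over all balls."""
    x, y, z = p
    return sum(r * r / ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 + 1e-9)
               for (cx, cy, cz, r) in balls)

def corner(i, j, k, cell):
    """Map integer grid coordinates into the [-1, 1]^3 volume."""
    return (-1.0 + i * cell, -1.0 + j * cell, -1.0 + k * cell)

def pass1_cull(n, balls, iso=1.0):
    """Keep cells whose 8 corners straddle the iso level. The returned
    list is the intermediate memory the compute path needs."""
    cell = 2.0 / n  # grid spans [-1, 1]^3
    active = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                vals = [field(corner(i + di, j + dj, k + dk, cell), balls)
                        for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
                if min(vals) < iso <= max(vals):
                    active.append((i, j, k))
    return active

def pass2_expand(active):
    """Stand-in for triangle generation: report a fixed triangle count
    per cell instead of running real marching cubes."""
    return [(cell_id, 2) for cell_id in active]

balls = [(0.0, 0.0, 0.0, 0.5)]
active = pass1_cull(8, balls)     # pass 1: intermediate buffer
tris = pass2_expand(active)       # pass 2: per-cell expansion
```

A mesh shader pipeline can forward the task shader's survivors to mesh shader workgroups on-chip, while the compute version has to round-trip `active` through memory between dispatches, which is the "intermediate memory" cost mentioned above.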
DXR 1.1 is looking more and more like it needs proprietary hardware in order to work.
That would require an uplift of 75-120% over a 2080 Ti just for current games (the typical 25-30% generational gain would not cut it, lol), i.e. games that already do over 60 on a 2080 Ti, say 70-80 fps. And a 2080 Ti can't do 4K 60 in a good number of games with everything cranked up, and that's without RT activated, lol.
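For what it's worth, the uplift arithmetic is easy to sanity-check. A quick sketch, where the 144 fps target and the 70-80 fps starting points are my own assumptions for illustration:

```python
# Required percentage uplift to go from one frame rate to another.
def uplift(current_fps, target_fps):
    return (target_fps / current_fps - 1) * 100

print(uplift(80, 144))  # 80.0  -> an 80% uplift
print(uplift(70, 144))  # ~105.7% uplift
```

So a 70-80 fps baseline pushed to a high-refresh target lands in roughly the 80-105% range, which is in the same ballpark as the 75-120% figure above.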
RT is still a niche feature and will remain one for the next 5+ years.
Last edited by zGunBLADEz; 11-04-2019 at 01:30 AM.