Originally Posted by ToTheSun!
RT cores are specialized at doing raytracing and nothing else. "Software accelerated" might not be the most accurate nomenclature, but the point of the distinction is that the GPU is not specifically and exclusively built for raytracing, in the same way that RT cores are.
It's also a way to differentiate them in regard to performance expectation. That's what Mand12 meant. Casually mocking nVidia's hardware solution on the basis that it's considered superfluous is missing the forest for the trees.
RT cores are ASIC-style fixed-function cores, but that doesn't mean the GPU was built exclusively for ray tracing. If that were the case, what are the Tensor cores doing there? What are the CUDA cores still doing there? If nVidia really wanted a GPU built specifically for RT, they would fill the die mostly with that ASIC and keep just enough CUDA/Tensor cores for the most basic geometry and denoising work. Turing is a transitional GPU, a first step towards ray tracing.
nVidia's hardware 'solution' tackles their own cards' weakness, which was compute power. Ray tracing is a computationally heavy rendering technique. Their white paper states that the RT cores offload ray/triangle intersection and BVH traversal from the SMs, so the CUDA cores are free to do other work while the RT cores (the ASIC) handle the ray-tracing calculations. That is all fine, and it's a good thing for them to tackle.
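To see why that offloading matters, here is a toy latency model (illustrative numbers of my own, not measurements): without dedicated RT hardware the SMs do shading and traversal back to back, while with RT cores the two overlap, so frame time approaches the max of the two costs instead of their sum.

```python
def frame_time_serial(shade_ms: float, rt_ms: float) -> float:
    """No RT cores: the SMs must do both jobs one after the other."""
    return shade_ms + rt_ms

def frame_time_concurrent(shade_ms: float, rt_ms: float) -> float:
    """Dedicated RT cores: traversal overlaps with shading on the SMs."""
    return max(shade_ms, rt_ms)

# Hypothetical per-frame costs in milliseconds, purely for illustration.
shade, rt = 10.0, 8.0
print(frame_time_serial(shade, rt))      # 18.0 ms
print(frame_time_concurrent(shade, rt))  # 10.0 ms
```

The model ignores synchronization and memory contention, but it captures the whitepaper's basic pitch: the win comes from concurrency, not from the RT cores being magically fast.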
People need to start thinking for themselves instead of looking at everything through the lens of nVidia. nVidia's solution is for their own cards, but people seem to think that because only they have that solution, everyone else is incapable. That is false. AMD is COMPLETELY different here. Have any of you ever wondered why AMD has many more FLOPS, yet less performance in games? Rather than sweeping it under the rug as 'drivers' or 'an inefficient architecture', I'll tell you why.
AMD does not have this problem of saturated stream processors that need offloading. AMD's compute power is above nVidia's in general. Why do you think miners flocked to AMD's cards during the mining boom? In fact, in terms of raw compute, the Radeon VII is roughly the equivalent of the 2080 Ti. Yes. Really. It doesn't translate into games, because games simply are not compute heavy. You could argue that it is an inefficient architecture, and it is, for rasterization. Not for compute.
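You can check that claim with the standard peak-FP32 formula (shaders × 2 FLOPs per clock for an FMA × clock). I'm using approximate rated boost clocks here, so treat the result as ballpark, not gospel:

```python
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    # 2 FLOPs per shader per clock (one fused multiply-add)
    return shaders * 2 * boost_ghz / 1000.0

# Approximate rated boost clocks; real cards vary with thermals.
print(round(fp32_tflops(3840, 1.75), 1))   # Radeon VII  -> ~13.4 TFLOPS
print(round(fp32_tflops(4352, 1.545), 1))  # RTX 2080 Ti -> ~13.4 TFLOPS
```

Near-identical peak compute, very different game performance, which is exactly the point.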
Their stream processors don't need to be offloaded to do ray tracing, because in games many of them are idling anyway. That is why Vega 56 and Vega 64 perform EXACTLY the same in games when they run at the exact same clock speeds: the additional 512 stream processors (about 14% more compute power) are literally doing NOTHING in games. And that is without counting how many idling stream processors there are within Vega 56's own 3584. Who knows how many there are in the likes of the Radeon VII.
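At equal clocks the only difference between the two cards is the shader count, so the unused fraction is simple arithmetic:

```python
# Official shader counts for Vega 56 and Vega 64.
vega56_sps, vega64_sps = 3584, 4096

extra = vega64_sps - vega56_sps
print(extra)                               # 512 extra stream processors
print(round(100 * extra / vega56_sps, 1))  # 14.3 (% more than Vega 56)
```

So at matched clocks, Vega 64 carries roughly 14% more raw compute that identical game performance suggests is sitting idle.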
That's where the ACEs come in. They were designed to put those idling stream processors to work, in parallel with all the others already in use. All that idle power can be used for ray tracing without reducing current performance, precisely because it's the idle stream processors doing it. Now, we all know that alone would not be enough to implement ray tracing, which is why I'll go one step further with you. All the power spent on traditional shading techniques comes free when you turn those off to do ray tracing instead. In other words, by reducing the amount of traditional rendering, you free up resources, and thus more stream processors, to increase ray-tracing performance. And remember that AMD's cards are considered inefficient at those types of rendering techniques anyway...
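The argument can be sketched as a toy occupancy model (the numbers are made up for illustration): if raster work leaves some stream processors idle, an async compute queue, which is what the ACEs schedule, can hand ray-tracing work to exactly those units without touching the raster budget.

```python
def assign_async(total_sps: int, raster_busy: int, rt_demand: int) -> int:
    """Return how many stream processors an async RT workload gets.

    The async queue only gets what rasterization leaves idle, so the
    raster workload's allocation is untouched.
    """
    idle = total_sps - raster_busy
    return min(idle, rt_demand)

# Hypothetical: Vega 64-sized GPU where raster only keeps 3584 SPs busy.
print(assign_async(4096, 3584, 1024))  # 512 idle SPs go to RT "for free"
# Dial raster work down and the freed SPs become available for RT too.
print(assign_async(4096, 3072, 1024))  # 1024
```

This also shows the trade-off in the paragraph above: every stream processor you free from traditional rendering raises the ceiling for the ray-tracing workload.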
Also, nVidia's point about the thousands of instructions Pascal needs for the ray-tracing calculations... That matters because nVidia didn't have a hardware scheduler. AMD has multiple hardware schedulers in their cards, which makes it a moot point too. They can handle the stream of instructions and assign it efficiently, through the ACEs, to any idling stream processor. So I repeat... stop looking at everything through the lens of nVidia.
That does not mean an ASIC built specifically for ray tracing would not help AMD's cards. ASICs for ray tracing would help everyone and everything; one could even put them on CPU cores and eliminate the need for a ray-tracing GPU entirely. But making use of those idling stream processors in AMD cards is practically a necessity before they go there. Why add extra hardware when over 20% of the existing compute power is going unused? Everything is already there to harness that power, and ray tracing is one of the techniques best suited to leverage GCN.