I'll repost here what I posted on Videocardz.
Folks. Have you considered that Linux might simply CALL the RTX 2080 a GTX 1180? In other words, the Linux developers may not care about NVidia's marketing names; they name cards the standard way: GTX = NVidia, cut the first two digits, and you have a number you can pass into the kernel. Now imagine all the errors that could happen if the numbering weren't consecutive, or worse, if someone had to write a conversion function that turns 20 into 11. Half of Linux would have that function, half wouldn't, and the moment cards go past number 19, all hell breaks loose.
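To illustrate the mess: here's a purely hypothetical C sketch of what such a one-off conversion might look like. This is NOT actual kernel code, and the function name (marketing_to_driver_series) is invented; it just shows why a hardcoded special case like this is fragile.

#include <stdio.h>

/* Hypothetical: remap the marketing series number to a driver-side
 * series number, special-casing the RTX 20xx -> 11xx jump. */
static int marketing_to_driver_series(int marketing_series)
{
    /* One-off special case for the naming discontinuity. */
    if (marketing_series == 20)
        return 11;
    /* Everything else passes through unchanged -- and breaks
     * the moment a future series lands on a number past 19. */
    return marketing_series;
}

int main(void)
{
    printf("RTX 2080 -> GTX %d80\n", marketing_to_driver_series(20)); /* 1180 */
    printf("GTX 1080 -> GTX %d80\n", marketing_to_driver_series(10)); /* 1080 */
    return 0;
}

If only some components carry this special case and others don't, the same card gets two different identities depending on which code path you hit. That's exactly the kind of inconsistency I mean.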
Imagine Linux treating an RTX card as device xxxx 4F60 and trying to return results to AI workloads that need actually usable FP64 matrices, not useless FP16 ones, and failing spectacularly. Or feeding the card commands that assume the drivers live on the graphics card itself, i.e. the X1 NVidia CPU (on the GPU) handling the drivers. Or trying to force an NVidia card into actual usability, i.e. trying to use it like a card manufactured back when NVidia had a leader who cared more about users than about his leather coat.
Until we see non-Linux results, I'd treat this result as just a side effect of a Linux benchmark. (It would be funny if NVidia saw a drop in sales because people were waiting for a GTX 1180. Perhaps they would even make one to avoid losing market position.)
I can imagine NVidia making a low-end card without RT cores, but they would kill their own effort by not pushing the buyers who have the money for high-end cards like the RTX 2080 toward RT cores.