Originally Posted by Syan48306
Why is there so much doom and gloom around SLI? I mean, back in the middle of last year, Nvidia announced that they won't be supporting 3-way SLI anymore and will focus solely on 2-way SLI.
IMO, that decision was pretty reasonable. 3-way was always hit or miss, but for the most part, 2-way has always been pretty good in AAA titles.
Does it have to do with the way DirectX 12 handles multi-GPU? Multi-GPU support has always been somewhat up to the developers, and it's video card companies like Nvidia that push for support - think of Nvidia's slogan: "The Way It's Meant to Be Played". If it weren't for Nvidia's involvement in the development process, we'd probably never get any SLI support at all.
SLI has always been a niche market, but as an SLI user, I don't understand why the attitude towards SLI has changed so drastically over the last 8 to 12 months. Nvidia even released an HB SLI bridge last year with the introduction of the 1080. It doesn't seem to me that Nvidia is leaving SLI for dead...
It's because Nvidia hasn't given SLI the priority that AMD has given Crossfire.
Over the past couple of years, we've seen AMD go from terrible, due to their poor frame pacing, to arguably a solution that works better than SLI. They added frame pacing to their drivers, and with it, it was not uncommon for a pair of 290Xs to beat a pair of 780 Tis, or a pair of Fury Xs to beat a 980 Ti, despite the Nvidia cards being faster individually, especially at lower resolutions.
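To illustrate why frame pacing matters so much, here's a minimal sketch with made-up but representative numbers (illustrative, not measurements) of how unpaced AFR produces microstutter even at a healthy average framerate:

```cpp
#include <cstdio>

// Illustrative only: two GPUs in AFR each finish a frame every 33 ms,
// but without pacing, their presents bunch together.
int main() {
    // Unpaced: GPU B presents 5 ms after GPU A, then a 28 ms gap follows.
    double unpaced[] = {0, 5, 33, 38, 66, 71, 99};
    // Paced: the driver delays presents so the gaps come out even.
    double paced[]   = {0, 16.5, 33, 49.5, 66, 82.5, 99};

    printf("unpaced gaps (ms): ");
    for (int i = 1; i < 7; ++i) printf("%.1f ", unpaced[i] - unpaced[i - 1]);
    printf("\npaced gaps (ms):   ");
    for (int i = 1; i < 7; ++i) printf("%.1f ", paced[i] - paced[i - 1]);
    printf("\n");
    // Both runs average ~60 fps, but the unpaced one alternates 5 ms / 28 ms
    // gaps, which your eyes perceive as roughly 36 fps judder (microstutter).
}
```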
They've also managed to eliminate the Crossfire bridge entirely. Unless NVLink filters down to gaming, I don't think we are going to see a similar Nvidia solution. The only thing Nvidia introduced was a new bridge for gaming, which is a small innovation at best.
On the software front, we have seen AMD push things like Split Frame Rendering more aggressively with DX12 and Vulkan. Explicit Multi-Adapter could be huge... if it takes off.
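For context on what "explicit" means here: under DX12, the application enumerates every GPU itself and creates a device per adapter, instead of the driver hiding them behind one logical GPU. A minimal sketch (error handling trimmed; assumes the Windows 10 SDK, linking against d3d12.lib and dxgi.lib):

```cpp
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> gpus;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s\n", i, desc.Description);
            gpus.push_back(device); // the engine now owns scheduling across these
        }
    }
    // The catch: splitting and synchronizing work across 'gpus' is entirely
    // on the game/engine developer, not the driver.
    return 0;
}
```

The catch is right there in the last comment: all the scheduling work the SLI/Crossfire drivers used to do lands on the game developer, which is exactly why adoption has been slow.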
Compounding the problem, quite a few people have said that SLI scaling and support at release have slipped. It really varies title by title, but overall I get the feeling the trend over the past couple of years is negative: adding a second GPU doesn't gain as much in many titles as it used to.
There is also the decision to let 3-way and 4-way SLI die. While it was never anything but a niche, the fact that they let it die indicates they don't care too much about it. SLI never had good scaling with a 3rd card from a frame time point of view, but rather than invest resources, they killed it off. I think it leaves open the possibility that they might kill off 2-GPU SLI someday too.
SLI and Crossfire both have their problems, but AMD seems to be working more aggressively to address them. Mantle, which influenced DX12 and Vulkan, is a good example. Subjectively, when I tested SLI on Maxwell, Nvidia seemed to be letting their driver quality decline. That's alarming, because AMD is a company struggling to survive while Nvidia has record-breaking profits. In other words, Nvidia is not making this the priority that AMD is - and that's by choice. Crossfire is far from perfect, but AMD seems to be trying, despite their limited resources.
It seems like fewer new titles have made supporting SLI a priority. Nvidia also has not attempted to aggressively resolve the "issues" with SLI in each game (there are always some, like flickering). AMD is far from perfect too, but there does seem to be at least more effort on their side to push something new.
What I haven't seen is any sign from Nvidia that they might aggressively support SLI in the future, push new innovations, or even invest more in their drivers to fix existing SLI issues in AAA games. I'd love to be proven wrong - NVLink for gaming, for example, could be a big step forward. If they did, the situation would be different. I think one of the reasons we haven't seen another dual-GPU card like the Titan Z may be an acknowledgment that SLI is not as good as they said. I have my doubts we will see a Titan Z, Pascal edition with 2x 3840 SP GPUs, at least for consumer gaming use (Tesla and Quadro are different).
Originally Posted by Clocknut
Multi-GPU will only be popular when we can no longer get a faster single GPU (much like the single-core CPU).
It is only a matter of time before we start hitting something like the bottleneck we have with CPUs, where the improvements are largely in the single digits each generation:
- The 4790k (typical overclocks of 4.7 to 4.9 GHz) was only slightly faster than a 4770k (about 300 MHz, assuming typical overclocks of 4.4 to 4.6 GHz), and the 4770k in general was not that much faster than a 2600k due to slower clocks.
- The 6700k (typical overclocks of 4.6 to 4.8 GHz) was likewise only single digits faster than the 4790k, with a slight regression in clockspeed; no AVX3 was a disappointment too.
- The 7700k was only slightly faster still (overclocks of 4.9 to 5.2 GHz, but the same IPC) - see the quick arithmetic below.
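Putting rough numbers on "single digits," using the midpoints of those overclock ranges and assuming the clock bump is the whole gain when IPC is equal:

```cpp
#include <cstdio>

int main() {
    // With roughly equal IPC, the clock bump is essentially the whole gain:
    double gain4790k = (4800.0 - 4500.0) / 4500.0 * 100; // 4790k vs 4770k
    double gain7700k = (5050.0 - 4700.0) / 4700.0 * 100; // 7700k vs 6700k
    printf("4790k over 4770k: ~%.1f%%\n", gain4790k); // ~6.7%
    printf("7700k over 6700k: ~%.1f%%\n", gain7700k); // ~7.4%
}
```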
I think GPUs are going to hit that point once transistor scaling runs out and architectural gains run into rapidly diminishing returns. Chiplets might buy a few more generations (see my link below), but after that, we will be there.
Originally Posted by Zero4549
Something better did come out. It is called explicit multi-GPU, and it is a feature of DX12 and Vulkan.
As for supposed hardware limitations, there actually are none. What actually exists are financial limitations. Nvidia could make a single card twice as powerful as their Titan X, but it would cost them roughly 70% of the manufacturing cost of two Titan Xs, plus additional R&D costs, factory retooling, marketing, etc. It just isn't worth their time unless they move significant volume at $5000+ a piece, and that significant volume part is the key. There simply aren't a significant number of people willing to spend $5000 on a GPU (and another $1000 on top of that for the PSU and mobo upgrades that would be required to run it).
The same scenario applies to AMD, but they would have to move even larger volume or charge even higher prices to make it a successful venture, so it is even less likely to happen on their end, especially considering their substantially lower market share, particularly among enthusiasts.
It just makes more financial sense for them to make and sell single cards that are within the price range that typical human beings could actually afford. The extremely small number of enthusiasts who are willing to spend thousands of dollars on GPU horsepower can simply buy more than one, and eat the huge efficiency penalties and other problems.
Yep. + Rep too
This is basically the problem. Die yields get a lot worse with larger dies. That's why AMD is looking into chiplets: they are trying to use their interposer technology to make many small dies act like one big die. I think that's what the "Scalability" item on the Navi roadmap is about. They've been publishing papers about it: http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf
It's also what they are attempting for power efficiency.
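A back-of-the-envelope version of the yield argument, using a simple exponential (Poisson) defect model with made-up but plausible numbers; the point is the ratio, not the absolute values:

```cpp
#include <cmath>
#include <cstdio>

// Poisson yield model: P(die has no defects) = exp(-defect_density * die_area).
double dieYield(double defectsPerMm2, double areaMm2) {
    return std::exp(-defectsPerMm2 * areaMm2);
}

int main() {
    const double d0 = 0.002; // assumed 0.2 defects per cm^2

    // One 600 mm^2 monolithic GPU vs four 150 mm^2 chiplets.
    double mono = dieYield(d0, 600.0); // ~30% of big dies come out good
    double chip = dieYield(d0, 150.0); // ~74% of chiplets come out good

    // Wafer area spent per working "big GPU":
    double monoCost = 600.0 / mono;       // each bad die wastes 600 mm^2
    double chipCost = 4.0 * 150.0 / chip; // each bad die wastes only 150 mm^2

    printf("monolithic: %.0f mm^2 of wafer per working GPU\n", monoCost); // ~1993
    printf("chiplets:   %.0f mm^2 of wafer per working GPU\n", chipCost); // ~810
    // Same total silicon, ~2.5x less wafer wasted, because you can test
    // chiplets individually and only package the known-good ones.
}
```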
The big problem, I think, is getting Explicit Multi-Adapter to gain widespread support. If it could be built natively into all DX12 games, that would be best - ideally with split frame rendering, because the frame times are way better than with AFR.
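Rough arithmetic (assumed numbers, not benchmarks) on why SFR frame times beat AFR: AFR raises throughput but not per-frame latency, while SFR improves both:

```cpp
#include <cstdio>

int main() {
    const double oneGpuMs = 20.0; // assume a single GPU takes 20 ms per frame

    // AFR: GPUs alternate whole frames. Presents can come every 10 ms,
    // but each individual frame still took 20 ms to render, and the two
    // GPUs must be paced against each other (the microstutter problem).
    double afrPresentGap = oneGpuMs / 2.0;
    double afrFrameTime  = oneGpuMs;

    // SFR: both GPUs work on the SAME frame. Scaling isn't a perfect 2x
    // (shared geometry work, sync overhead), so assume ~1.7x.
    double sfrFrameTime = oneGpuMs / 1.7;

    printf("AFR: present every %.1f ms, but %.0f ms per frame + pacing issues\n",
           afrPresentGap, afrFrameTime);
    printf("SFR: present every %.1f ms, and the frame really took %.1f ms\n",
           sfrFrameTime, sfrFrameTime);
}
```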
Originally Posted by Punjab
As individual cards become more powerful, there simply won't be a need to SLI two or more together. Phase it out.
The funny thing about that idea is that, from a business-model point of view, ending SLI makes no sense. They were getting all these gamers to buy 2 cards, sometimes even 3 or 4!
Not at all.
As technology progresses:
- Higher resolution will happen. There's talk of 8k displays in the 2020s.
- We want higher refresh. Right now 120, 144, and 240 Hz panels exist. Can any single GPU do the latest and greatest games at 4k @ 144 Hz once such a panel comes out?
- Game engines themselves will be more demanding at maximum settings, pushing graphics technology forward.
There will always be demand for more processing power.
Originally Posted by invincible20xx
What is HPET? This is the first time I've heard about this microstutter "overcoming" method.
High Precision Event Timer. Most motherboards have a BIOS option to disable it.
Disabling it also lowers DPC latency.
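For the curious, a quick way to guess which timer Windows is backing QueryPerformanceCounter with; the frequency values below are typical, not guaranteed:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER f;
    QueryPerformanceFrequency(&f); // cannot fail on XP and later

    // Typical (not guaranteed) values: ~14,318,180 Hz usually means HPET is
    // backing QPC (e.g. after "bcdedit /set useplatformclock true"), while
    // 10 MHz or a low single-digit MHz value usually means the TSC/ACPI timer.
    printf("QueryPerformanceCounter frequency: %lld Hz\n",
           (long long)f.QuadPart);
}
```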