Originally Posted by Cyber Locc
I don't know where you get less power usage from; it will still consume the same amount of power.
A cleaner board is debatable; it will increase the pin count, by a lot. CPUs will be massive. Have you not seen how many chips it takes to make 11GB on the 2080 Ti? Multiply that by 3, or 6, lol.
And this will 100% be bad for memory manufacturers. Do you think every one of them is going to get a contract? Lol, no.
So just like with GPUs, one or two will get a contract, and they will barely be able to keep up with supply, prices will rise, and the other manufacturers will suffer from no business until the next shot at a contract.
If this actually happens, which I think you guys are overthinking, I think it will be like an L4 cache, not full-on system memory. It will be bad for everyone.
What about OEMs? Now they are supposed to have what, 50 CPU SKUs? Mobo makers are supposed to make 50 boards? They can't be the same sized chips. So now we need an i3 in 4GB, 8GB, and 16GB, then an i5 in 4-32GB, an i7 in 4-64GB, plus all the different models of those? Now we have i9s to add too.
Then for us overclockers, RIP that: the heat will be extreme, and the CPU blocks massive.
This will not simplify or improve anything; it will make things worse.
Then there is right to repair. RAM sticks die, a lot. So what happens when some of the RAM, overheated by the CPU, dies one day after warranty on a $2,000 CPU? Yeah, let's hope for that case.
Components closer together require less power to communicate, as you need less voltage to accomplish the same task. See: every node shrink.
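A quick back-of-the-envelope sketch of why this holds: dynamic switching power scales as P = C·V²·f, and moving the memory onto the package shrinks both the trace capacitance C and the signaling voltage V needed to drive it. The numbers below are purely illustrative assumptions, not measured values for any real memory bus.

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Classic CMOS dynamic switching power: P = C * V^2 * f."""
    return capacitance_f * voltage_v**2 * frequency_hz

# Hypothetical long motherboard trace vs. short on-package interconnect
p_dimm = dynamic_power(5e-12, 1.2, 1.6e9)     # assumed DIMM trace: 5 pF at 1.2 V
p_stacked = dynamic_power(1e-12, 0.4, 1.6e9)  # assumed stacked die: 1 pF at 0.4 V

print(f"DIMM trace:  {p_dimm * 1e3:.2f} mW per signal line")   # 11.52 mW
print(f"Stacked die: {p_stacked * 1e3:.3f} mW per signal line") # 0.256 mW
print(f"Reduction:   {p_dimm / p_stacked:.0f}x")                # 45x
```

Because voltage enters squared, even a modest drop in signaling voltage dominates the savings; the shorter trace's lower capacitance compounds on top of that.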
Stacking RAM on the CPU would reduce board pin count and complexity. See: any HBM-based GPU.
There are only three DRAM manufacturers these days: Hynix, Samsung, and Micron. AMD has no problem working with Hynix and Samsung to make HBM; there is no reason Micron could not, except that they're working with Intel on HMC.
There are actually more than two GPU companies. ARM, Broadcom, Qualcomm, and Apple all make their own GPUs. They just do not make what you would consider to be a performance GPU.
L4 cache proved effective in Broadwell-C. I agree that it will be L4, as it would be near impossible to put enough RAM on the chip to satisfy all needs, but I disagree that L4 is bad for "everyone".
Intel already has far more than 50 SKUs. This Z370 board supports 39 alone:
This one supports 63 SKUs:
This one supports 51 SKUs:
And so on.
They can be the same sized chips. AMD wants to do 3D stacking, which means putting the RAM on top of the I/O die. The socket would not change between models, just as the "socket" did not change between the 16GB Vega FE and the 8GB Vega.
DDR4/HBM RAM does not make much heat or use much power. Almost all of our RAM could run without a heat sink if we wanted; the heat sinks just look nice. As long as RAM speeds are decoupled from core speeds, there would be zero change in how you overclock today.
Have you ever actually seen a TR4 chip? Not that it matters; a larger block just means more surface area and lower thermal density. That is a good thing for overclocking, not a bad thing.
It will improve plenty, or they would not bother to implement it.
RAM never dies in the grand scheme of things. Over the last five years, across more than 5,000 replaced/disposed assets spanning eight model generations (Core 2 Duo/DDR2 through the 8000-series on DDR4), plus over 6,000 active assets averaging 1.5 sticks per PC, both laptops and desktops, I have seen exactly three RAM stick deaths. In comparison, I have seen over a thousand HDD failures and a few hundred PSU failures as the cause of "death". I have never seen a CPU fail. MB failures were maybe a few dozen, but almost always user damage.
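Turning those tallies into rough rates makes the gap obvious. The stick count below is estimated from the asset counts and the ~1.5 sticks-per-PC figure in the post; the per-component failure counts are the ones quoted, with "over a thousand" and "a few hundred" taken as round numbers.

```python
assets = 5000 + 6000          # retired + active fleet
sticks = assets * 1.5         # estimated RAM sticks in service

failures = {
    "RAM sticks": 3,
    "HDDs": 1000,             # "over a thousand"
    "PSUs": 300,              # "a few hundred", assumed
    "CPUs": 0,
}

print(f"Estimated sticks in service: {sticks:.0f}")
for part, count in failures.items():
    base = sticks if part == "RAM sticks" else assets
    print(f"{part}: {count} failures (~{100 * count / base:.3f}% of fleet)")
```

Even with generous rounding, RAM lands around two hundredths of a percent, while drives land near ten percent; whatever kills PCs, it is not the DRAM.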
The RAM would not be overheated by the CPU. RAM is capable of running at 80°C+ just like a CPU, and the RAM temp would never be higher than the CPU temp, so it literally cannot die that way: the CPU would throttle itself before hitting dangerous limits. Once again, I will point you to Fury/Vega, which put out far more power than AMD's CPUs do and have no problems doing so.