Overclock.net › Forums › AMD › AMD CPUs › [Failed Promises or Fine Wine?] Discussion on the gains from software optimizations for AMD based Systems over time

[Failed Promises or Fine Wine?] Discussion on the gains from software optimizations for AMD based Systems over time

post #1 of 16
Thread Starter 
For those of us who have been patiently waiting for a proper upgrade from team red, the time is finally upon us. With the release of AMD's new Zen CPUs this year, we few who have been holding out for a #redteam solution finally have something new and promising to consider. Since Zen's official announcement over a year ago, many of us have been counting the days and our pennies, hoping to afford a full overhaul of a tech solution that came out in 2011 to the tune of "Faildozer". Despite popular opinion, many of my fellow red team die-hards and I have discovered the truth of the matter in AMD's unpopular design: it was ahead of its time. Make no mistake, there are absolutely some design flaws and optimizations that needed to take place...but the rumors of FX's and AMD's demise were greatly exaggerated. The proof of their model shows not only in the refined ideas of Zen and Zen+, but in the continued "fine wine" aging we have seen both in the GCN architecture of their graphics offerings and in the Cores vs. Clocks design of the FX series CPUs.

So with everyone looking to upgrade to Zen now, what is to become of all these tired old FX platforms that never quite lived up to their supposed promises of 2011? What possible use could such an old dog of a design still hold for today's staggering CPU demands in the workstation, gaming, and content creation spaces? More than one might expect, I think. But before we look into that, let's take a little trip back in time.

I recently saw a great video that helps explain some of the early woes of AMD's CPU design choices, and I would like to share it here. It is exceptional in content, and I give full credit to the maker for explaining so brilliantly an issue that has long been witnessed but, in my experience, never verbalized well. It is about 20 minutes long, but worth every second of your time if you want to know more about AMD's technology and why it seems to be aging well.

If you don't watch the whole video, at least watch the portion from about 2:50 to 9:50...these seven minutes will explain most of what you need to know.





So after watching this video, I decided to put some of the ideas expressed to the test. I have recently been developing a graphical template for displaying and comparing differences in gaming performance across potential hardware upgrades, for clients I have built budget AMD gaming rigs for. I wanted a clear way to show them what kind of performance gains they might expect in the particular games they play, when new hardware is compared to their current solution. The charts use 3D shapes and colors to display typical FPS data in a way that makes objective differences clear and allows the viewer to make a subjective decision on whether any particular upgrade is right for them. I figured if it was capable of clearly showing differences in hardware, why not use it to show differences in software as well. So I did.

What you will find below is one such example. It is clear from the video above that if you optimize software to utilize the hardware AMD created...the Cores vs. Clocks design, as I like to call it...then you can remove perceived hardware bottlenecks that, with proper optimization, are really nonexistent. To show this, I decided to use a tool that clearly exposes the discrepancy. The game Ashes of the Singularity has several built-in benchmark tools, as well as the option to run in both DX11 mode (which is largely un-optimized for AMD hardware, mitigated by some driver software solutions at a small cost) and DX12 mode (which can properly utilize any multi-threaded hardware without solutions that add driver overhead, as discussed in the video).
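To make the DX11 vs. DX12 difference concrete, here is a toy model of the bottleneck in question. All numbers are invented for illustration (not measurements from AoS): it just shows why serializing draw-call submission on one thread caps frame rate, while spreading it across threads, as DX12 command lists allow, hands the limit back to the GPU.

```python
# Toy model of per-frame draw-call submission cost. Invented numbers,
# purely illustrative; real engines and drivers vary widely.

def frame_time_ms(draw_calls, cost_per_call_us, gpu_work_ms, submit_threads=1):
    """CPU submission runs on `submit_threads` threads (DX11-style ~ 1,
    DX12-style command lists ~ many); frame time is bounded by the
    slower of CPU submission and GPU work."""
    cpu_ms = draw_calls * cost_per_call_us / 1000.0 / submit_threads
    return max(cpu_ms, gpu_work_ms)

def fps(frame_ms):
    return 1000.0 / frame_ms

# 8000 draw calls at 4 microseconds each, plus 16 ms of GPU work:
dx11_style = fps(frame_time_ms(8000, 4, 16.0, submit_threads=1))  # CPU-bound
dx12_style = fps(frame_time_ms(8000, 4, 16.0, submit_threads=8))  # GPU-bound
print(f"DX11-style: {dx11_style:.1f} FPS, DX12-style: {dx12_style:.1f} FPS")
```

In the single-thread case the frame is CPU-bound on submission; the same workload split over eight threads makes the GPU the limit again, which is the driver-overhead effect the video describes.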

My goal is usually to simulate the actual in-game experience as much as possible when I do this type of testing, which usually requires actual gameplay testing. However, I have grown fond of the GPU benchmark test in AoS for the very fact that it evenly displays three major types of game scenes (heavy, normal, light) and closely simulates the loads actually seen on the GPU/CPU when gaming. It is a robust, extremely repeatable test, which makes it great for sharing here, and it is accessible and repeatable by others should they wish to compare results with my own for verification or comparison. Because of all that, it is what I used for this evidence piece.
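For context on what numbers like those on my charts actually report, this is roughly how average FPS and "1% low" figures are derived from a frame-time log. The sample data here is invented for illustration; the AoS benchmark produces its own summary.

```python
# Minimal sketch: turning a frame-time log (ms per frame) into the
# average-FPS and 1%-low numbers typically shown on benchmark charts.
# The sample data is invented for illustration.

def average_fps(frame_times_ms):
    # Average FPS = total frames / total seconds,
    # not the mean of per-frame FPS values.
    return len(frame_times_ms) * 1000.0 / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    # FPS at the 99th-percentile (slowest 1%) frame time.
    worst = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]
    return 1000.0 / worst

frames = [16.7] * 95 + [33.3] * 5   # mostly ~60 FPS with a few ~30 FPS stutters
print(round(average_fps(frames)), round(one_percent_low_fps(frames)))
```

The 1% low matters because a handful of long frames is what you feel as stutter, even when the average looks healthy.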

The point of all this is to show, empirically and visually, whether the claims in the video above and by the red team fanboys in much of this forum are indeed supported by evidence, and to provide a place for discussing the useful life cycle of a CPU line that nearly died at birth but seems to be aging like a champ. It will hopefully, in time, answer the question: is AMD technology working better and better as the years go by, as software developers learn to code for the hardware they have always had at their fingertips but refused, or were unable, to utilize?

Let's find out. I now present one such piece of evidence, my hope being that many more will follow and give those of us who wish to make such claims a firm set of data to stand on. One piece of evidence does not a truth reveal, but many can make a firm foundation.

Without further ado, my DX11 Vs DX12 results...


Edited by gapottberg - 4/11/17 at 2:06pm
post #2 of 16
I'm firmly in the fine wine camp and will be following this thread with interest.
post #3 of 16
Thread Starter 
Quote:
I'm firmly in the fine wine camp and will be following this thread with interest.

Big things have small beginnings. Hopefully this thread grows, educates, and improves people's ideas about FX. Cheers mate cheers.gif
post #4 of 16
I watched that video and even understood some of it. His conclusions are similar to mine too. Yes, FX was ahead of its time in regards to octa-cores, and yes, AMD got hammered by the business practices of Intel and Nvidia. But it is getting past that stage now and is able to show what it can do.

All I have to contribute are my personal experiences, so......

Which game would you rather play?

Back in the early days most games were like the first image while now most games look like the second image. It is much better for FX now.

Then there are the business practices. The video concentrated mostly on Nvidia, so I will show what it is like on the Intel side.

Half Life 2. Here is how it plays, and remember that it came out before Intel's fun and games.

Half Life 2: Episode 2 came out when things were nasty, and the ICC Patcher really makes a difference.

The ICC Patcher cleans out Intel's malicious code. But that was years ago, and it doesn't help to bring up old wounds, right? Here is HWBOT from less than a year ago, before and after the ICC Patch.




This thread will probably go nowhere, as the Intel people don't want to hear it and the AMD people are doing Zen now, but perhaps it will enlighten some new AMD users about the history of FX.
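For anyone wondering what the ICC Patcher actually neutralizes: compilers can emit a runtime dispatcher that picks optimized code paths by CPU vendor string rather than by supported instruction sets. This is a hypothetical sketch of the two strategies in miniature, not Intel's actual dispatcher code:

```python
# Hypothetical sketch of two CPU-dispatch strategies. Real dispatchers
# live in compiled runtime code; this only illustrates the logic that
# patchers of this kind are said to neutralize.

def pick_path_by_vendor(vendor, has_sse2):
    # Vendor-string dispatch: non-Intel parts get the slow path
    # even when they support the needed instructions.
    if vendor == "GenuineIntel" and has_sse2:
        return "sse2"
    return "baseline"

def pick_path_by_feature(vendor, has_sse2):
    # Feature-flag dispatch: any CPU advertising SSE2 gets the fast path.
    return "sse2" if has_sse2 else "baseline"

print(pick_path_by_vendor("AuthenticAMD", True))   # slow path despite SSE2 support
print(pick_path_by_feature("AuthenticAMD", True))  # fast path
```

An AMD chip that fully supports the needed instructions still lands on the baseline path under vendor-based dispatch, which is why patching the check changes benchmark results.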
post #5 of 16
Quote:
Originally Posted by miklkit View Post

I watched that video and even understand some of it. His conclusions are similar to mine too. [...] This thread will probably go nowhere as the intel people don't want to hear it and the AMD people are doing Zen now, but perhaps it will enlighten some new AMD users about the history of FX.

I am an Intel person and I want to hear it. The Intel pocket rape is real.

Just waiting for the 12 or 16 core Ryzen before I switch ovaaah!!! biggrin.gif

It is hard waiting when your PC store is 10 minutes away and has everything in stock specool.gif
post #6 of 16
This is an old, long thread, but it details what happened pretty well. http://www.overclock.net/t/1554359/reddit-amd-sabotage-wiki

When a corporation gains a monopoly, then the free market is no longer free. Hence the intel pocket rape.
post #7 of 16
Thread Starter 
My goal with this thread is to show people that they were largely misled. An 8-core FX with a modest OC can hold its own in many of the latest and more demanding games.

You don't see a lot of people showing benchmarks for FX platforms with 2133 or better RAM and 4.2-4.6GHz overclocks, which are easy to reach even on mid-range cooling solutions.

You don't see data that is representative of the gaming experience, but rather artificial scenarios designed to create a winner as opposed to simulating real-world experiences.

This is where the bad rap comes from, and why anyone who owns and uses one of these systems is so confused by the ire.

AoS benchmarks for Ryzen prove my point. They show the CPU benchmark...which creates a scenario that is unrealistic in terms of actual use, in order to create winners and losers by measurable margins. Guess what? Play AoS in game and you will see no difference between many of the chips that supposedly hold some advantage over each other. Are there real differences between many chips? Absolutely. Do those differences show up in any meaningful way in "REAL WORLD" scenarios? Sometimes. But less often, and with much less impact, than you might expect based on the current quality of tech reviews.

What chaps me the most about AMD's woes is that Intel is way less to blame than the lazy software developers who can't be bothered to code for better hardware. The sad part is that coding for AMD helps everyone...more performance on Intel, more performance on Nvidia, more performance on AMD...but not doing so hurts AMD a lot more than the others.
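Amdahl's law puts rough numbers on that last point: the better threaded the code (the higher the parallel fraction p), the more a cores-over-clocks design pulls ahead, and poorly threaded code wastes it. A quick sketch:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work is
# parallelized. Illustrates why weakly threaded code (low p) wastes an
# 8-core design far more than a 4-core one.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"8 cores -> {amdahl_speedup(p, 8):.2f}x")
```

At p=0.5 the extra four cores buy almost nothing; near p=0.99 the 8-core part nearly doubles the 4-core result, which is exactly the shift better-threaded engines deliver.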
post #8 of 16
I am unsure why you are trying to defend the FX Bulldozer architecture at this point. AMD themselves decided it wasn't worthwhile, and Ryzen solves just about every issue Bulldozer had.

1. If the workload is not all integer, you end up with the equivalent of 4 cores, so it isn't truly "more cores", unlike Ryzen.
2. Substantially lower IPC at similar clocks to Intel CPUs, unlike Ryzen.
3. Massive power usage when overclocked, and the CPU cooling requirements that follow from it. Once again, unlike Ryzen, which is fairly efficient up to 3.8GHz or so (Ryzen is supposedly more than double the perf/watt).
4. Motherboards that largely aren't up to overclocking chips that consume 220W+ overclocked, something that isn't a large issue with Ryzen.
5. CMT / clustered multi-threading requires developers to program completely differently than for Intel, which is a huge gamble; fixed with Ryzen's SMT.
6. NVMe and PCIe 3.0 not implemented on older motherboards; Ryzen has this.
7. Lack of small-form-factor motherboards due to the architecture's power and cooling requirements; Ryzen is slated to have the overclockable X300 ITX-form-factor chipset.
8. DDR4; Ryzen has it.
9. Narrow pipelines on the Bulldozer architecture; doubled for Zen.
10. Poor branch prediction, in which repeated instructions have to be fetched and decoded each time (Ryzen uses neural-network branch prediction based on perceptrons).
11. Poor area efficiency (Ryzen has a 10% advantage vs Intel, so they can cut prices).
12. Higher idle power consumption as well.

etc.

There's no reason to be apologetic about it, because even results from February 2017 had the 220W FX-9590 behind Sandy Bridge when the 8 integer cores weren't utilized.

Zen is basically where AMD would be, roughly, if they had continued advancing from Phenom II, plus some things that worked in Bulldozer, including register renaming. To put it into perspective, you need a watercooled Bulldozer core above 5.5GHz to be on par with Ryzen at the lower clocks of 3.6-3.8GHz. A $169 Ryzen 5 1400, four cores and 8 threads, runs about 800 in Cinebench R15 nT "multithread" at 3.8GHz and about 12K in Firestrike Physics at 3.7GHz. Given that, a Bulldozer CPU cannot be worth more than $160 after the Ryzen launch.

We can write off Bulldozer as AMD's setback, similar to Intel's Netburst (which is the basis of HT on Intel CPUs). They had some takeaways from that for Zen.
Workstation stuff (407 photos) · SpecViewperf 12.0.1 (117 photos) · PGA 1331 (13 items)

CPU: AMD Zen SR7 octocore (Ryzen 7 1700)
Motherboard: Overclockable AM4 motherboard X370
Graphics: To be determined, AMD Vega?
RAM: 2x8GB DDR4 low-profile or heatsink-less
Hard Drive: Samsung 950 Pro / 960 Evo / 960 Pro 256GB or 51...
Hard Drive: Samsung 850 Evo 1TB SSD Storage
Cooling: Black or black+white Twin tower air cooler or s...
Cooling: EK Vardar F2-140 140mm, Phanteks PH-F140SP 140m...
Cooling: Fractal Design Dynamic GP14 (included with case)
OS: Win 10 Pro 64 bit
Monitor: 4K monitor with Freesync
Power: EVGA Supernova G3/P2 750W or 850W
Case: Fractal Design Define R5 Blackout edition
post #9 of 16
Thread Starter 
Quote:
I am unsure why you are trying to defend the FX bulldozer architecture at this point.


Because of this...

Because with the same hardware that was largely available in 2011, I can finally get the experience I was always promised. There is absolutely no difference in the hardware between these two charts, nor in the graphics settings or anything else. They are 100% the same test on every level, save the software optimization done through a new API in this case. The piss-poor code makes one experience unplayable, and the other pretty fantastic, all things considered. DX11 had the ability to be coded better for AMD; with a few exceptions, it was not. DX12 has the same potential cranked to eleven, if the care is taken to make it so.

The reality is that AMD's FX lineup will actually play the "games of tomorrow" (DX12, Vulkan) better than it ever played the "games of today" (DX11, OpenGL). That is largely because during the first 3 years of its life, software was dated and its advancement was lazy. Mantle helped change that, but damn was it slow. I can show a half dozen games dramatically less intensive than AoS, from yesteryear, that never in their life played as smoothly or brilliantly as I can make AoS run today in DX12. That is on 5-year-old tech, and it is insane!

So with that rant out of the way, I will tackle a few of the points on your list...

1. Your comment here is misleading. FX is indeed a true 8-core, 8-thread processor, with one FP unit shared between each pair of cores. You are correct in saying that a workload that is 100% floating point will show performance more similar to a 4-core chip...but it is still running 8 threads. You just bottleneck the 8 threads at the 4 FP units, and they have to wait for each other, behaving as if there are only 4 cores. That scenario, however, is extreme and does not compare to the more typical workloads users actually see in the real world. It is actually brilliant engineering if you think about it in terms of workloads that are primarily integer based with some floating-point work mixed in. As long as you don't saturate the FP units to the point that both cores in a module need one at the same time, it carries out the same amount of work as a chip with 8 integer units and 8 FP units, but at a much lower cost to manufacture. It was a compromise they made, and it works as intended.
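A toy throughput model of that module design, under my own simplified assumptions (8 integer lanes, 4 shared FP units, uniform op cost), shows the same behavior: an all-FP workload runs like a 4-core chip, while a mostly-integer mix loses far less.

```python
# Toy throughput model of an FX-style module design: 8 integer lanes
# but only 4 shared floating-point units. Time is in arbitrary units;
# this is a simplification, not a cycle-accurate simulation.

def run_time(int_ops, fp_ops, int_lanes=8, fp_units=4):
    # Integer work spreads across all lanes; FP work bottlenecks
    # at the shared units.
    return int_ops / int_lanes + fp_ops / fp_units

mixed   = run_time(800, 200)             # typical int-heavy mixed workload
all_fp  = run_time(0, 1000)              # pathological all-FP workload
full_fp = run_time(0, 1000, fp_units=8)  # hypothetical chip with 8 FPUs
print(mixed, all_fp, full_fp)
```

The all-FP case takes exactly twice as long as the hypothetical 8-FPU chip (behaving like 4 cores), while the 80/20 mix pays only a modest penalty for the shared units.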

2. This was mainly due to Intel's bigger R&D budget and better fabrication and manufacturing. There was absolutely no scenario in which AMD was ever going to achieve similar IPC from where they were at the time. They knew this, and did what again was brilliant engineering, going with more cores at higher clocks to stay competitive. The math suggested it would be a competitive alternative with some added advantages and costs. Five years later we find out it actually is competitive with its 5-year-old competition, when software limitations and overhead are finally removed from the equation. Ryzen made up a lot of ground by refining the lessons learned from FX and, most importantly, by moving to a smaller node and a better manufacturing process, one already established as an industry leader in Samsung's. The inability to move to smaller nodes is really what stalled FX's ability to get more competitive in IPC, more than anything besides the damn software dragging its feet.

3-12: Going through the rest of these is rather pointless. I was never arguing that FX is equal to Ryzen or Intel's offerings, or even a good budget buy in the current market; merely that they still have life in them, and people who have them should put them to use for as long as they do. If you have an old FX setup, a small investment can keep it running, and running well enough for most gamers' needs, for some time. They make a great hand-me-down PC for a child or relative, and a great backup PC for overflow workloads or gaming with a friend, old-school co-op style.

I am not advocating that anyone go buy a brand new FX platform. If you can get one cheap, used, or as a hand-me-down, it has its uses. If you have one and can't afford to upgrade a full system right now, no need. Just optimize what you have for another year or two and maybe save up for Zen+. There is no denying that there was plenty AMD got wrong with FX, but it is also fair to say they got some of it right...even if it took aging like a fine wine to show it.

Cheers cheers.gif
post #10 of 16
I had an 8350 @ 4.7GHz before switching to a Ryzen 1700X. It did game well. I had zero issues with it and was happy with its performance, especially with Mantle/Vulkan/DX12 titles. It still lives on in a friend's kid's PC, and he is super happy with it. However, one big thing I noticed with Ryzen vs FX is heat. FX generated a very large amount of heat compared to Ryzen. It could comfortably warm my room in the winter; in the summer it was unbearable. The air exhausted from my PC was hot. Now with Ryzen it is only a few degrees above ambient. So I am getting better performance with considerably less heat.
The Postoffice (16 items)

CPU: AMD Ryzen 7 1700x
Motherboard: Asus Prime X370-Pro
Graphics: EVGA 1080Ti Founders Edition
RAM: G.Skill Ripjaws V
Hard Drive: Kingston HyperX 3k
Hard Drive: WD Black 640
Hard Drive: Samsung 850 EVO
Optical Drive: Sony Optiarc
Cooling: Corsair H100i
OS: Windows 10 64
Monitor: Vizio VOJ320F1A
Keyboard: Corsair K70 Vengeance
Power: NZXT Hale82
Case: Antec TwelveHundred
Mouse: Logitech G9x
Audio: Realtek