
AMD Ryzen Threadripper Owners Club - 1950X | 1920X | 1900X - Page 113

post #1121 of 2629
Quote:
Originally Posted by vgabex View Post

Dear 1950X owners,

There's a new 3D rendering benchmark here: http://www.kraytracing.com/kray3benchmark2017/

It is based on the upcoming Kray 3 rendering engine for Lightwave3D.
I work with this software, and I'd like to know how the 1950X performs in this application before I upgrade.

I have an i7-5960X @ 4.3GHz, and a Ryzen 7 1700 @ 3.7GHz config.
Under Cinebench they score about the same, around 1600 points.
But in this test, Ryzen seems about 10-15% slower than the i7!

Would someone be so kind as to run the benchmark on a 1950X config?
It only takes 1-2 minutes to run and upload the score to the online database (please add the @ ...GHz to your description), but it would be a great help for me.

Thank you very much!

Done:
https://www.kraytracing.com/kraybench/?b&score=7339&rand=1GD3HOV&os=Windows&endian=LE&compiler=VisualC+14.0&threads=32&buswidth=64&hash=c91eff6620efc62e7a0dd2a6a99dd3eb&description=AMD+Ryzen+Threadripper+1950X+16-Core+Processor

Stock clocks, RAM at 3200 14-13-13.

11th place (my guess is 9th place is at least 4 GHz :) )

OK, 4 GHz result with the same RAM settings: 10th place, which bumped the other score down to 12th :)

https://www.kraytracing.com/kraybench/?b&score=8192&rand=5U5614&os=Windows&endian=LE&compiler=VisualC+14.0&threads=32&buswidth=64&hash=fb5619e89bf9d99440763d8fe9af1c12&description=AMD+Ryzen+Threadripper+1950X+16-Core+Processor
Edited by tarot - 9/20/17 at 6:04am
post #1122 of 2629
Quote:
Originally Posted by tarot View Post

Looks good.
Is that hooked up to a 480 radiator, with only the Threadripper in the loop?

Temps look very good at load; it would be interesting to see that thing at 4 GHz with a bit more juice :)

It's on a 560 radiator with 4 EK-Vardar EVO 140ER fans, and yes, only the CPU is in the loop :)

I may add another loop just for the graphics card in the future, as the case is built for 2 separate loops.
post #1123 of 2629
Thread Starter 
Quote:
Originally Posted by nycgtr View Post

That's dependent on the person and their own budget. I have 2 Titan Pascals. I don't even have time to game the majority of the time, and I have to do workarounds for SLI at times. However, I am currently playing Hellblade at 4K maxed out, and that wasn't possible without SLI to even hit a locked 60 fps. To each their own. It works when it does, it's nice to have, and the cost is irrelevant for me. So I wouldn't rule it out just because it's not the golden days of multi-GPU anymore.

If you have infinite money, you're an exception and that's fine. I don't generally assume that to be the case when giving advice to people though since for most of us there is some degree of significance to the benefit we're receiving in return for our expenditure.
Quote:
Originally Posted by chew* View Post

My multi-GPU setup works just fine except for 1 Batman game, which sucked so badly that Steam gave users who bought it every prior Batman game.

Win 10 may not work as well because CrossFire support there is up to the devs.

Win 7, however, is flawless, and of the 50+ games in my libraries, only one does not benefit from multi-card.

I can't vouch for more than 2-way or for NV, but my AMD experience, paired with the right settings and monitor, has been exceptional.

The key things are vsync, throttling, and tearing, which are all semi-related.

A FreeSync 144 Hz panel is a necessity to counter, and quite honestly solve, all those issues.

Without vsync you get tearing; with it you get card throttling (not utilizing card clocks 100%).

FreeSync solves this.

Water cooling solves the rest.

All I know is that I never had any real success with four different generations of CrossFire. I had multiple GPUs from the 4000 (quad 4850), 5000 (dual 5850), 6000 (quad 6970), and 7000 (dual 270X) families. Never were the gains consistent and beneficial in the games I played. In those seemingly few situations where my framerate did actually increase versus a single GPU, frame timing issues meant there was no perceivable improvement to fluidity. It was odd getting 50-60 fps in Battlefield games, for instance, where the video was still choppy (pacing, I mean; I don't care about screen tearing) and laggy (i.e. mouse movements are frustrating) enough to feel the same as when I was getting 25-30 fps on a single card. I was just throwing away money both on hardware and power for no real benefit. I saw enough other people give reports of similar experiences throughout the years to eventually give up on multi-GPU. I had held out hope for half a decade with the ever-present carrot of "it'll work perfectly on the next generation of hardware or the next major driver update, just you wait" never coming to pass.

That's my take on multi-GPU at least. Your experiences have been different from the sound of it, and I'm glad it has worked well for you in the past. I'm not sure how common that is. It's certainly not something I'd recommend to people who don't fully appreciate the potential pain they're signing up for. That is especially the case now where developer support has been on the decline for years and the future looks to be one where traditional multi-GPU on both sides will eventually cease to exist. Things may change once the major engine developers implement agnostic multi-GPU utilization on DX12 and Vulkan code paths, but I've been burned too many times by the promises of future technologies to make recommendations based on it.
post #1124 of 2629
Quote:
Originally Posted by tarot View Post
 
Quote:
Originally Posted by Beatnutz View Post

I think this one was for me and not tarot? I'm a bit confused; I never said many of these things, but anyway:
I've only compared 960 EVOs. The other ones are not relevant to me.
I never posted my M.2 drive temps, so I'm not sure what you are talking about, but I've seen 48°C+ max during load/tests. That should be well below throttle temps. I have mine mounted between the GPUs. Not a great position, but I don't have any other options with this mobo.
I've had lots of issues with my mobo, not just the M.2 speed. I'm on a custom loop, so it's not like I'm dying to disassemble it for the smallest of reasons and send the mobo off to get it replaced; quite the contrary.

No offense taken.
(I get it with the apples-and-oranges thing... you are referring to me saying how my 850 EVO performed a bit better than when it was an OS drive; what I meant was THAT drive, I was not in any way referring to the 960 EVO. So if that confused you, sorry about that, but I am not redoing it so I can test the 960 EVO as a data drive :) )
Yeah, OK, let's start again:
Who here is running an NVMe drive as an OS drive, and what scores are you getting?
Who here is running the drive as a non-OS drive, just a data drive (not a "slave", as that hasn't existed in 10 years), and what scores are you getting?
What drive temps are you getting, and in what position on the motherboard?

How's that?

As for the questions regarding temps: I read them off HWiNFO as shown in the screenshot. I do not know if they are correct, or which of the 2 listed values is the one I should be worried about, but yes, I do believe the drive runs too hot and does throttle; by how much, I am not sure.

As for where the drive is: it is directly under the video card.
Why?
Because the video card is in slot 2 due to some limitations with slot 1, and the NVMe drive is there because... that's where the slot is. Alternatively I could go back a bit and shove it directly under the blower fan for the video card, which would be worse. There are no very good positions for that drive.



Oh, and no offense taken, I have rhino skin :) and I'm always open to suggestions :)
As for your response, you may need to clarify it a little :)

Just an FYI: gaming performance is dependent on which cores the game runs on and which slot the GPU is installed in. The first x16 slot connects to the PCIe controller located on die 0 and is local to CPU0-CPU15 on a 1950X, CPU0-CPU11 on a 1920X, and CPU0-CPU7 on a 1900X (as reported by the Windows performance monitor). In UMA memory mode, assuming the GPU is in slot 1, setting the affinity for the game to those lower-numbered CPUs, or switching to Game Mode, improves game performance because the graphics traffic does not have to travel across the Infinity Fabric and incur the additional latency penalty.

 

In case you are still mystified: a Threadripper package has two silicon blocks on board, called dies. Each die on a 1950X contains 8 of the 16 physical cores, a dual-channel memory controller, and a PCIe controller that supplies the system with 32 lanes of PCIe. The two dies together give you the stated specs that AMD publishes for the chip. Any time something on one die needs to access something on, or connected to, the other die, an additional latency penalty is incurred because of the time it takes for the data to travel between the dies. The best performance comes from managing the workload to limit cross-die traffic: switching threads between dies, accessing memory on the other die's memory controller, or using a graphics card that is connected to the other die. You will get the best performance if you ensure that each discrete job runs local to the memory and PCIe devices that it is using.

 

Multithreaded applications typically parallelize workloads by having a main thread that controls everything and worker threads that do piecework. It is then the job of the main thread to combine all the pieces back together at the end to produce the full result.
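
As a toy illustration of that main-thread/worker-thread pattern (an editor's sketch in Python, not code from the original post; the tile-rendering function is just a stand-in):

Code:
# workers.py - toy sketch of the pattern described above: a main thread hands out
# pieces of work to worker threads, then combines the partial results at the end.
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id):
    # stand-in for one piece of work (e.g. one tile of a rendered frame)
    return f"tile {tile_id} done"

def main():
    with ThreadPoolExecutor(max_workers=8) as pool:           # the worker threads
        partial_results = list(pool.map(render_tile, range(16)))
    print(" | ".join(partial_results))                        # main thread combines the pieces

if __name__ == "__main__":
    main()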

 

I understand that these X399 boards have 4 slots and, if running four GPUs, will run at x16, x8, x16, x8 as the slots move away from the socket; in other words, the 2nd slot is only x8. If slot 1 is obstructed by a cooler, I suggest that you move the GPU to slot 3, which should support all x16 lanes but will connect locally to the 2nd die and the higher-numbered CPU cores. If your only GPU is installed in that 3rd slot, setting affinity (affinity means telling Windows to run an application on a particular CPU core or cores) for games to CPU16-CPU31 should provide you with the best performance.
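
Setting affinity can be done from Task Manager or scripted. Here is a minimal sketch using the third-party psutil package, assuming a 1950X with SMT enabled (32 logical CPUs, with CPU16-CPU31 local to die 1 as described above); the process name is a placeholder for whatever game you are running:

Code:
# pin_game.py - rough sketch: pin a running game to the die-1 logical CPUs (CPU16-CPU31).
# Requires the third-party psutil package (pip install psutil). Run from an elevated prompt
# if the game runs with higher privileges than your script.
import psutil

GAME_EXE = "game.exe"            # placeholder: the game's process name as shown in Task Manager
DIE1_CPUS = list(range(16, 32))  # logical CPUs local to die 1 (the die behind slot 3)

for proc in psutil.process_iter(attrs=["name"]):
    if (proc.info["name"] or "").lower() == GAME_EXE:
        proc.cpu_affinity(DIE1_CPUS)   # same effect as Task Manager > Details > Set affinity
        print(f"Pinned PID {proc.pid} to CPU16-CPU31")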

 

As an extension to the comment above: the different M.2 sockets on X399 are also connected to the PCIe controllers on both dies. They are not all on the same controller, so boot-drive performance will depend on which M.2 slot you are using. If you selected a slot that connects to the PCIe controller that is not local to the die the operating system threads are running on, your performance will be reduced any time the system or drivers access the drive, because of the extra latency incurred by travelling over the inter-die fabric before reaching the connected PCIe controller. I am not sure if the manual details which PCIe controller supports which M.2 socket; it should recommend the best slot for boot drives somewhere. If it doesn't say anything, you can do some simple trial-and-error tests to identify which controller the drive is connected to: install it in a different slot and see if it performs differently than in the one it is in now.
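
If you would rather not keep reseating the drive, a related check (an editor's sketch, not something from the original post) is to run the same disk benchmark pinned first to die-0 cores and then to die-1 cores and compare the small-transfer numbers, which is essentially what a later post in this thread does with ATTO. The benchmark path below is a placeholder, and OS caching can mask small differences, so treat the result as a rough indicator only:

Code:
# pin_disk_bench.py - rough sketch: launch a disk benchmark pinned to one die at a time,
# assuming a 1950X with SMT (CPU0-CPU15 local to die 0, CPU16-CPU31 local to die 1).
# Requires the third-party psutil package; the benchmark path is a placeholder.
import subprocess
import psutil

BENCH = r"C:\Tools\DiskBench\bench.exe"   # placeholder: ATTO, CrystalDiskMark, etc.

for label, cpus in (("die 0", list(range(0, 16))), ("die 1", list(range(16, 32)))):
    proc = subprocess.Popen([BENCH])
    psutil.Process(proc.pid).cpu_affinity(cpus)   # restrict the benchmark to one die's CPUs
    print(f"Benchmark running pinned to {label}; note the small-transfer results before closing it.")
    proc.wait()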

post #1125 of 2629
Quote:
Originally Posted by tps3443 View Post

I'm looking at returning to AMD after 10 years of running only Intel processors.

I'm pretty much dead set on getting a 1950X and an X399 motherboard, setting it to 4 GHz, and running (4) graphics cards in SLI or quad CrossFire.

Now that 64 PCIe lanes and 16 cores are available... I've always wanted to run 4 cards. It's overkill, doesn't scale properly, has issues, and sometimes does not work. JayzTwoCents tried (3)-way GTX 1080s and it didn't work at all, but everyone on YouTube is loving (4)-way SLI GTX 1080s with great scaling.

Has anyone experienced 3 or 4 cards on the new Threadripper platform, with a 1900X, 1920X, or 1950X?

I'd run (4) Vega 56s, or (2) Pro Duo 2017 models.

If you use Nvidia Inspector with Nvidia cards, you can manually set tri- or quad-SLI profiles for any game. Jay did not do that; he just plugged the cards in and ran the game without creating the SLI profile. I don't have any recent AMD GPU experience, so I don't know how you adjust CrossFire profiles.

post #1126 of 2629
Is there any software designed to test latency?

Just running ATTO with a 512 B transfer size: when targeting the secondary partition I get 36773 write / 38687 read on CPU 31, and 39936 / 41088 on CPU 0 (roughly +8.6% write and +6.2% read), so something like a 6-9% bump with tiny files, but almost no relative difference with larger transfers.

This is with the bottom-most M.2 slot on the MSI Gaming Carbon mobo.

For completeness, the boot partition on CPU 0 gets 36899 / 40320.

So it seems like the difference is negligible; any latency issues really only seem to trouble GPUs.

Quote:
Originally Posted by gtbtk View Post

Just an FYI: gaming performance is dependent on which cores the game runs on and which slot the GPU is installed in. The first x16 slot connects to the PCIe controller located on die 0 and is local to CPU0-CPU15 on a 1950X, CPU0-CPU11 on a 1920X, and CPU0-CPU7 on a 1900X (as reported by the Windows performance monitor). In UMA memory mode, assuming the GPU is in slot 1, setting the affinity for the game to those lower-numbered CPUs, or switching to Game Mode, improves game performance because the graphics traffic does not have to travel across the Infinity Fabric and incur the additional latency penalty.

In case you are still mystified: a Threadripper package has two silicon blocks on board, called dies. Each die on a 1950X contains 8 of the 16 physical cores, a dual-channel memory controller, and a PCIe controller that supplies the system with 32 lanes of PCIe. The two dies together give you the stated specs that AMD publishes for the chip. Any time something on one die needs to access something on, or connected to, the other die, an additional latency penalty is incurred because of the time it takes for the data to travel between the dies. The best performance comes from managing the workload to limit cross-die traffic: switching threads between dies, accessing memory on the other die's memory controller, or using a graphics card that is connected to the other die. You will get the best performance if you ensure that each discrete job runs local to the memory and PCIe devices that it is using.

Multithreaded applications typically parallelize workloads by having a main thread that controls everything and worker threads that do piecework. It is then the job of the main thread to combine all the pieces back together at the end to produce the full result.

I understand that these X399 boards have 4 slots and, if running four GPUs, will run at x16, x8, x16, x8 as the slots move away from the socket; in other words, the 2nd slot is only x8. If slot 1 is obstructed by a cooler, I suggest that you move the GPU to slot 3, which should support all x16 lanes but will connect locally to the 2nd die and the higher-numbered CPU cores. If your only GPU is installed in that 3rd slot, setting affinity (affinity means telling Windows to run an application on a particular CPU core or cores) for games to CPU16-CPU31 should provide you with the best performance.

As an extension to the comment above: the different M.2 sockets on X399 are also connected to the PCIe controllers on both dies. They are not all on the same controller, so boot-drive performance will depend on which M.2 slot you are using. If you selected a slot that connects to the PCIe controller that is not local to the die the operating system threads are running on, your performance will be reduced any time the system or drivers access the drive, because of the extra latency incurred by travelling over the inter-die fabric before reaching the connected PCIe controller. I am not sure if the manual details which PCIe controller supports which M.2 socket; it should recommend the best slot for boot drives somewhere. If it doesn't say anything, you can do some simple trial-and-error tests to identify which controller the drive is connected to: install it in a different slot and see if it performs differently than in the one it is in now.
post #1127 of 2629
Quote:
Originally Posted by pmc25 View Post

How tight was 3200? CL12?

3200 was C14 on DR (dual-rank) at Prime-stable settings.
post #1128 of 2629
Quote:
Originally Posted by Makara View Post

It's on a 560 radiator with 4 EK-Vardar EVO 140ER fans, and yes, only the CPU is in the loop :)

I may add another loop just for the graphics card in the future, as the case is built for 2 separate loops.

Nice. Looks like I may need to do the same thing not far down the track; I have a feeling what I have won't cope too well.


For the GPU discussion: my card is in the 2nd slot.
post #1129 of 2629
Quote:
Originally Posted by Particle View Post

If you have infinite money, you're an exception and that's fine. I don't generally assume that to be the case when giving advice to people though since for most of us there is some degree of significance to the benefit we're receiving in return for our expenditure.
All I know is that I never had any real success with four different generations of CrossFire. I had multiple GPUs from the 4000 (quad 4850), 5000 (dual 5850), 6000 (quad 6970), and 7000 (dual 270X) families. Never were the gains consistent and beneficial in the games I played. In those seemingly few situations where my framerate did actually increase versus a single GPU, frame timing issues meant there was no perceivable improvement to fluidity. It was odd getting 50-60 fps in Battlefield games, for instance, where the video was still choppy (pacing, I mean; I don't care about screen tearing) and laggy (i.e. mouse movements are frustrating) enough to feel the same as when I was getting 25-30 fps on a single card. I was just throwing away money both on hardware and power for no real benefit. I saw enough other people give reports of similar experiences throughout the years to eventually give up on multi-GPU. I had held out hope for half a decade with the ever-present carrot of "it'll work perfectly on the next generation of hardware or the next major driver update, just you wait" never coming to pass.

That's my take on multi-GPU at least. Your experiences have been different from the sound of it, and I'm glad it has worked well for you in the past. I'm not sure how common that is. It's certainly not something I'd recommend to people who don't fully appreciate the potential pain they're signing up for. That is especially the case now where developer support has been on the decline for years and the future looks to be one where traditional multi-GPU on both sides will eventually cease to exist. Things may change once the major engine developers implement agnostic multi-GPU utilization on DX12 and Vulkan code paths, but I've been burned too many times by the promises of future technologies to make recommendations based on it.

I dealt with a lot of the problems that you are referring to, or at least that I believe you are referring to.

The first step was plenty of power.

The second issue was tearing, which vsync solved, but that then created an issue with not hitting full clocks (artificial throttling), which resulted in lower frame rates.

At that point I went and grabbed a 144 Hz FreeSync panel, which allowed me to get the cards back to full clocks while also solving the tearing issue, but that created another problem:

thermal throttling, which in turn resulted in lower clocks and reduced frame rates. At that point I water-cooled both 290X cards with full-cover blocks that had great GPU/VRM temps.

At that point all issues were solved, except for the fact that I had a 4 GB memory limitation and could only ask so much from the cards texture-wise.

Fury X is no different and responds the same way, with HBM being more forgiving in the eye-candy department since 4 GB of HBM goes about as far as 6 GB of GDDR5.

As far as drivers go, I have had 0 issues, but of course I am not an early adopter; as with OSes, I mostly only use hardware that has matured.

I had 2 270Xs on air, FYI. The memory limitation vastly ruins the CrossFire experience and is a rather significant bottleneck.

On a side note, even though the overall score is all that matters to most people, I would like to point out that despite the full dual x16, the higher core count, and a rather well-tuned system, I have yet to match my game test results on this platform vs X370.

Basically put: if you ditched X370 solely to game on X399, you made a poor decision.

If you grabbed X399 to build a workstation that is "good enough" to game on and can be an all-in-one system, you made the right decision.
Edited by chew* - 9/20/17 at 1:48pm
post #1130 of 2629
Thread Starter 
Quote:
Originally Posted by chew* View Post

I dealt with a lot of the problems that you are referring to, or at least that I believe you are referring to.

The first step was plenty of power.

The second issue was tearing, which vsync solved, but that then created an issue with not hitting full clocks (artificial throttling), which resulted in lower frame rates.

At that point I went and grabbed a 144 Hz FreeSync panel, which allowed me to get the cards back to full clocks while also solving the tearing issue, but that created another problem:

thermal throttling, which in turn resulted in lower clocks and reduced frame rates. At that point I water-cooled both 290X cards with full-cover blocks that had great GPU/VRM temps.

At that point all issues were solved, except for the fact that I had a 4 GB memory limitation and could only ask so much from the cards texture-wise.

Fury X is no different and responds the same way, with HBM being more forgiving in the eye-candy department since 4 GB of HBM goes about as far as 6 GB of GDDR5.

As far as drivers go, I have had 0 issues, but of course I am not an early adopter; as with OSes, I mostly only use hardware that has matured.

I had 2 270Xs on air, FYI. The memory limitation vastly ruins the CrossFire experience and is a rather significant bottleneck.

On a side note, even though the overall score is all that matters to most people, I would like to point out that despite the full dual x16, the higher core count, and a rather well-tuned system, I have yet to match my game test results on this platform vs X370.

Basically put: if you ditched X370 solely to game on X399, you made a poor decision.

If you grabbed X399 to build a workstation that is "good enough" to game on and can be an all-in-one system, you made the right decision.

Texture memory certainly is a limiting factor. Something I've found that a lot of people don't appreciate is that memory demands are even higher when using multiple cards than when using a single card. A game that fits in and uses 95% of the onboard memory with a single card is going to be rather unhappy when not just one card but suddenly two cards have to page to system memory if it runs on a 2-way setup with identical settings. It leads to a choppy/stuttery experience.

The two main problems I had, though, I don't think you've touched on. Perhaps they aren't issues you ran into yourself. The first was CrossFire support in the games I was interested in. Maybe half of my games would even utilize multiple cards, if that. The period when I got into CrossFire was around when engines started utilizing deferred rendering techniques, which at that time (not sure if still) were wholly incompatible with the technology. The second was frame pacing. In the half of my games where both GPUs would be used, maybe half of those titles would see a framerate uplift but feel no smoother than they would with a single card; 40 fps would feel just like 20 fps. The remaining quarter I'd split into equal parts working-as-advertised and working-but-at-low-scaling-factors of like 1.2 to 1.5.

I've not tried crossfire since retiring my pair of 270X cards though. I went to an 8 GiB 390 when those launched. It's what I'm still using, though I'm waiting for Vega cards to become available at MSRP. I'm not in a hurry on that one since I'm using Debian as my OS these days and the kernel has no display support for Vega yet. AMD is in the middle of a complete rewrite of the display management code and it's looking to be a month or two away before display output is supported. It's required for Vega and beyond with the legacy display manager still being available for prior releases.