

Registered · 471 Posts · Discussion Starter #1
My friend has a Y510P laptop with a 750M in it. Sometimes he's wary of buying games because they might be too demanding for it. Is there a way for me to help him out? Emulating the CPU is easy since he has a Haswell 4700MQ: I can disable two of my cores and downclock to around 2.4GHz. But for the graphics card I have no idea. I could check what GPU score a 750M gets in Fire Strike and downclock until I reach about the same score? But there's also the VRAM to consider, since he only has 2GB. Which method would be easier?
 

Not new to Overclock.net · 77,827 Posts
Wait a minute, I'm confused. If a video game is too demanding, then how does slowing down the CPU and GPU help?
 

Registered · 471 Posts · Discussion Starter #3
Quote:
Originally Posted by TwoCables View Post

Wait a minute, I'm confused. If a video game is too demanding, then how does slowing down the CPU and GPU help?
I'm emulating his 4700MQ, which is straightforward. But GPU performance is quite hard to emulate. I know I won't get exact performance, but I just need ballpark results.
 

RPG Gamer · 4,224 Posts
Which version of the 750M does he have, DDR3 or GDDR5? That would affect how well the GPU does.

His laptop can play any game he likes; he just has to understand that he'll need to turn down settings. The 750M is on the lower end.
 

Not new to Overclock.net · 77,827 Posts
Quote:
Originally Posted by chuy409 View Post

I'm emulating his 4700MQ, which is straightforward. But GPU performance is quite hard to emulate. I know I won't get exact performance, but I just need ballpark results.
This doesn't answer my question though. I don't see how reducing a system's performance will help it play very demanding video games BETTER. The less powerful a computer is, the harder it is for it to play demanding video games. Think about it: if you downgrade to an inferior CPU and GPU, you'll get worse performance in games. So why are you guys trying to emulate a downgrade in order to get better performance in a demanding video game? It doesn't make any sense to me; it's the exact opposite of what you want to do for better performance.

Never mind; thymedtd explained it in post #7.
 

Premium Member · 11,321 Posts
The i7-4700MQ can be overclocked to around 4.2GHz or so, YMMV.

Also, is this the SLI model or the single 750M? In any case, that laptop badly needs the custom BIOS to achieve its true potential. I'm talking 1.2GHz core overclocks for the GPU.

As for the above, there are ONLY GDDR5 iterations of the 750M on this machine.
 

Registered · 123 Posts
Quote:
Originally Posted by TwoCables View Post

This doesn't answer my question though. I don't see how reducing a system's performance will help it play very demanding video games BETTER. The less powerful a computer is, the harder it is for it to play demanding video games. Think about it: if you downgrade to an inferior CPU and GPU, you'll get worse performance in games. So why are you guys trying to emulate a downgrade in order to get better performance in a demanding video game? It doesn't make any sense to me; it's the exact opposite of what you want to do for better performance.
I don't believe he's downclocking his friend's machine to try and increase performance (that would just be crazy). Instead (and this is just my guess), he wants to underclock his own computer's CPU and GPU to roughly match his friend's machine. Then he can tell his friend whether a game will run smoothly on roughly similar hardware. I think.
 

Not new to Overclock.net · 77,827 Posts
Quote:
Originally Posted by thymedtd View Post

I don't believe he's downclocking his friend's machine to try and increase performance (that would just be crazy). Instead (and this is just my guess), he wants to underclock his own computer's CPU and GPU to roughly match his friend's machine. Then he can tell his friend whether a game will run smoothly on roughly similar hardware. I think.
Oh yep, that's exactly what he's trying to do.

I have a new post to write now...
 

Not new to Overclock.net · 77,827 Posts
Quote:
Originally Posted by chuy409 View Post

I'm emulating his 4700MQ, which is straightforward. But GPU performance is quite hard to emulate. I know I won't get exact performance, but I just need ballpark results.
Are you saying that you believe you're emulating a 4700MQ using your 5820K?

There's one big problem with what you're trying to do: the architectures (and so the performance efficiency) of your CPU and GPU are very different from his. So, the only way for you to help him is to have him consult you before buying a video game, because then you can just give him an answer like "yes, you can play it" or "no, you can't". You aren't going to be able to emulate the performance of his laptop no matter how hard you try; there are way too many variables.
 

Super Moderator · 9,192 Posts
EDIT: I'll just preface this by saying that it would be much easier to do if there's an old Kepler GPU available. I'll recrunch the numbers if you want.

Ah yesh, dumb math problems inbound, my specialty.

So, according to Nvidia, one Maxwell SMM with 128 CUDA cores is about 90% as fast as one Kepler SMX with 192 CUDA cores. Your 980 has 16 SMMs, which can be approximated as 14.4 SMXes. The 750M has, in all cases, 384 CUDA cores. It's a fully enabled GK107 die with 2 SMXes.

Because GPUs scale so well with cores and frequency (rendering graphics is embarrassingly parallel), we can take the product of SMX*frequency and match them. The 750M's core clock is 967MHz with 2 SMX units, so it scores 1934 SMX units per second. Yes, those are awful units, but just go with it.

Now we need to solve for frequency for the 980. 14.4 SMX units * F MHz = 1934 SMX units per second. Solve F and you get 134MHz for a 980 to match a 750M's base speed. You'll probably need to disable turbo boost.

Memory is a bit trickier, and GPU-Z will tell you what he has, but there is a DDR3 variant that reaches 32GB/s and a GDDR5 variant that reaches 80GB/s. Easy enough to solve both - the 980's 256-bit bus can transfer 256 bits = 32 bytes per cycle.

Now we get the equations 32B * F GHz = 80GB/s or 32GB/s. Solve F and you get 2.5GHz effective to reach 80GB/s and 1GHz effective to reach 32GB/s. Since GDDR5 is quad-pumped, transferring four times per clock cycle, the actual memory speed will be 625MHz or 250MHz.

Now, this does make assumptions. Maxwell's memory transfers are a bit more efficient than Kepler's thanks to its color compression. Maxwell's texture units and shaders come in different ratios compared to Kepler, which could also throw things off. Additionally, even GPUs don't scale perfectly by adding more cores. A 980, for example, is in effect a 960 times two (double the cores, ROPs, bus, etc.) but is only 80% faster (incidentally, the scaling you should expect from two 960s in SLI). These are numbers to start with though, assuming you can even downclock that far.
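
For anyone who wants to plug in their own numbers, here's a quick back-of-the-envelope script doing the same arithmetic. It's just a sketch of the matching above, using only the figures quoted in this post (16 SMMs, the 0.9 SMM-to-SMX factor, a 967MHz / 2-SMX 750M, and the 980's 32-byte-per-cycle bus), so treat the output as ballpark numbers and nothing more.

Code:
# Back-of-the-envelope clock matching for a GTX 980 against a GT 750M.
# All inputs are the figures quoted above; results are ballpark only.

SMM_TO_SMX = 0.9                        # one 128-core Maxwell SMM ~ 90% of one 192-core Kepler SMX
gtx980_smx_equiv = 16 * SMM_TO_SMX      # 16 SMMs ~ 14.4 SMX equivalents

# Core clock: match the SMX * frequency products.
gt750m_product = 2 * 967                # 2 SMX * 967 MHz
core_target_mhz = gt750m_product / gtx980_smx_equiv
print(f"980 core clock target: ~{core_target_mhz:.0f} MHz")    # ~134 MHz

# Memory clock: match bandwidth over the 980's 256-bit (32-byte-per-cycle) bus.
BUS_BYTES_PER_CYCLE = 32
for bandwidth_gb_s in (80, 32):         # GDDR5 and DDR3 750M variants
    effective_ghz = bandwidth_gb_s / BUS_BYTES_PER_CYCLE
    actual_mhz = effective_ghz * 1000 / 4    # GDDR5 transfers four times per clock
    print(f"{bandwidth_gb_s} GB/s -> {effective_ghz:.1f} GHz effective ({actual_mhz:.0f} MHz actual)")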

Quote:
Originally Posted by TwoCables View Post

Are you saying that you believe you're emulating a 4700MQ using your 5820K?

There's one big problem with what you're trying to do: the architectures (performance efficiency) of your CPU and GPU are very different from his. So, the only way for you to help him is by having him consult you before buying a video game because clearly you can just give him an answer like, "Yes you can play it" or "no you can't". You aren't going to be able to emulate the performance of his laptop no matter how hard you try. There are way too many variables.

The CPU is fine, other than L3 cache. Both are Haswell-based. The only difference, after cores are disabled and frequencies are equalized, is that the 5820K still has 15MiB of L3 while the 4700MQ is stuck with 6MiB. That can affect results, but it's not too big.

Though I think it's necessary to match his memory configuration. A laptop is, what, dual-channel DDR3-1600? You'd need to pull two sticks and adjust the frequency for dual-channel DDR4-1600. The type shouldn't matter once the frequency is adjusted.
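
Rough bandwidth math for that, assuming the laptop really is dual-channel DDR3-1600: 2 channels x 8 bytes x 1600 MT/s = 25.6 GB/s peak. Pulling two sticks so the 5820K also runs dual-channel and setting the DDR4 to 1600 MT/s gives the same 2 x 8 x 1600 = 25.6 GB/s on paper, timings aside.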
 

Registered · 471 Posts · Discussion Starter #11
Quote:
Originally Posted by CynicalUnicorn View Post

EDIT: I'll just preface this by saying that it would be much easier to do if there's an old Kepler GPU available. I'll recrunch the numbers if you want.

Ah yesh, dumb math problems inbound, my specialty.

So, according to Nvidia, one Maxwell SMM with 128 CUDA cores is about 90% as fast as one Kepler SMX with 192 CUDA cores. Your 980 has 16 SMMs, which can be approximated as 14.4 SMXes. The 750M has, in all cases, 384 CUDA cores. It's a fully enabled GK107 die with 2 SMXes.

Because GPUs scale so well with cores and frequency (rendering graphics is embarrassingly parallel), we can take the product of SMX*frequency and match them. The 750M's core clock is 967MHz with 2 SMX units, so it scores 1934 SMX units per second. Yes, those are awful units, but just go with it.

Now we need to solve for frequency for the 980. 14.4 SMX units * F MHz = 1934 SMX units per second. Solve F and you get 134MHz for a 980 to match a 750M's base speed. You'll probably need to disable turbo boost.

Memory is a bit trickier, and GPU-Z will tell you what he has, but there is a DDR3 variant that reaches 32GB/s and a GDDR5 variant that reaches 80GB/s. Easy enough to solve both - the 980's 256-bit bus can transfer 256 bits = 32 bytes per cycle.

Now we get the equations 32B * F GHz = 80GB/s or 32GB/s. Solve F and you get 2.5GHz effective to reach 80GB/s and 1GHz effective to reach 32GB/s. Since GDDR5 is quad-pumped, transferring four times per clock cycle, the actual memory speed will be 625MHz or 250MHz.

Now, this does make assumptions. Maxwell's memory transfers are a bit more efficient than Kepler's thanks to its color compression. Maxwell's texture units and shaders come in different ratios compared to Kepler, which could also throw things off. Additionally, even GPUs don't scale perfectly by adding more cores. A 980, for example, is in effect a 960 times two (double the cores, ROPs, bus, etc.) but is only 80% faster (incidentally, the scaling you should expect from two 960s in SLI). These are numbers to start with though, assuming you can even downclock that far.

The CPU is fine, other than L3 cache. Both are Haswell-based. The only difference, after cores are disabled and frequencies are equalized, is that the 5820K still has 15MiB of L3 while the 4700MQ is stuck with 6MiB. That can affect results, but it's not too big.

Though I think it's necessary to match his memory configuration. A laptop is, what, dual-channel DDR3-1600? You'd need to pull two sticks and adjust the frequency for dual-channel DDR4-1600. The type shouldn't matter once the frequency is adjusted.
Good info. I'm going to try all these clocks and compare them to online benchies, like Fire Strike and that sort of stuff.
 

Not new to Overclock.net · 77,827 Posts
Just have him bench his laptop and then shoot for those numbers.
 

Registered · 471 Posts · Discussion Starter #13
Damn, MSI Afterburner only lets me downclock the core to 764MHz with a 979MHz boost, and the memory only down to 1649MHz. Afterburner doesn't go beyond -502 on either the core or memory clock offset. Seems to be a software limit?

EDIT: It seems that while my 980 is at idle, the core clock sits at 135MHz and the memory at 325MHz. Seems perfect. But how do I lock it there? Hmm...

2nd EDIT: GOT IT. I used NVIDIA Inspector's multi-monitor power saver feature to lock my GPU to the P8 state, where I set custom clocks of 135MHz core and 625MHz memory. Gonna do some tests.
 

Not new to Overclock.net · 77,827 Posts
Quote:
Originally Posted by TheReciever View Post

Like mentioned before, you're going to need to unlock the BIOS on the laptop.
He's not working with the laptop; he's trying to emulate his friend's laptop's performance by reducing his sig rig's performance, so that he can be of better help when his friend wants to buy a new game. By getting his sig rig to emulate the laptop's performance, he can tell his friend how whatever game is in question would perform for him.
 

Registered · 471 Posts · Discussion Starter #16
Alright, I ran Fire Strike and Heaven and compared my results with some people online.

One person with a 750M and 4700MQ got 13.9 FPS in Heaven. I matched the settings he used and got 8.9 FPS.

I used NotebookCheck to compare Fire Strike GPU scores since they have a few results to compare against. The laptops they benched got GPU scores around 1500; I got a GPU score of 1469.

http://www.notebookcheck.net/NVIDIA-GeForce-GT-650M.90245.0.html

It seems you were off by only a little. I just need to bump the core clock up a bit...
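
(A rough way to ballpark the bump, assuming the Fire Strike GPU score scales more or less linearly with core clock: 1500 / 1469 is about 1.02, so roughly 135MHz x 1.02 ≈ 138MHz.)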
 

Super Moderator · 9,192 Posts
Quote:
Originally Posted by chuy409 View Post

Alright, I ran Fire Strike and Heaven and compared my results with some people online.

One person with a 750M and 4700MQ got 13.9 FPS in Heaven. I matched the settings he used and got 8.9 FPS.

I used NotebookCheck to compare Fire Strike GPU scores since they have a few results to compare against. The laptops they benched got GPU scores around 1500; I got a GPU score of 1469.

http://www.notebookcheck.net/NVIDIA-GeForce-GT-650M.90245.0.html

It seems you were off by only a little. I just need to bump the core clock up a bit...
That's kind of close! I figured it wouldn't be exact, but it's in the same ballpark at least.
 

Premium Member · 11,321 Posts
Quote:
Originally Posted by TwoCables View Post

He's not working with the laptop; he's trying to emulate his friend's laptop's performance by reducing his sig rig's performance, so that he can be of better help when his friend wants to buy a new game. By getting his sig rig to emulate the laptop's performance, he can tell his friend how whatever game is in question would perform for him.
I understand exactly what is taking place.

However, his friend's laptop is severely under-performing out of the box due to manufacturer issues that can be remedied by consumers. I know this because I have owned this laptop and collaborated with many other owners back when it was considered the best performance-per-dollar laptop of 2013.

It's held back 10-15% by a software lock, heatsinks that don't make sound contact, a lack of fan profiles, and dust filters that restrict airflow significantly; and now the CPU can be overclocked as well.

Instead of trying to validate or dispel fears about whether the machine can play a game adequately, you can generally just search for that; there are official owners' clubs and plenty of YouTube videos showing what to expect from it with various titles.

Furthermore, we still need to know whether it's an SLI model or not, as most people who got these for gaming got the SLI model.
 