Since flash-upgrading my Gigabyte BIOS, it allows me to raise the multiplier to x21, which just seems to be a disguised way of enabling Turbo Boost (and Turbo Boost stays on if you change the multiplier back down, which has tripped me up a few times).
I'm wondering if this setting is really equivalent to setting an unlocked multiplier to x21, or if it would be in the interests of stability and efficiency to keep the multiplier at x20 and seek speeds beyond 4 GHz by raising only the BCLK?
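For reference, the effective core clock is just multiplier × BCLK, so the two routes can be compared numerically. A quick sketch (the 133 MHz stock BCLK and the specific targets are just illustrative values, not from the thread):

```python
# Comparing the two overclocking routes: a higher turbo multiplier
# vs. a higher BCLK. Values below are illustrative examples only.

def core_clock_mhz(multiplier: int, bclk_mhz: float) -> float:
    """Effective core clock is simply multiplier x BCLK."""
    return multiplier * bclk_mhz

def bclk_for_target(multiplier: int, target_mhz: float) -> float:
    """BCLK needed to hit a target clock at a given multiplier."""
    return target_mhz / multiplier

# Route 1: turbo multiplier x21 with a modest BCLK bump
print(core_clock_mhz(21, 191))    # 4011 MHz

# Route 2: stock x20 multiplier, higher BCLK needed for 4 GHz
print(bclk_for_target(20, 4000))  # 200.0 MHz
```

Either way the silicon ends up near the same frequency; the difference is only in how much BCLK headroom the rest of the platform has to tolerate.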
It's not exactly the same as an unlocked multi set to x21, but for simplicity's sake it's pretty much the same thing. The turbo in i7s is intended to kick in only under heavy load, so your motherboard has to fool the CPU (it may send dummy current, temperature, and load demand readings) to prevent throttling.
If you're going for an OC over 4 GHz, it's recommended to use turbo. If you want 4 GHz or less, x19 may be a better choice. Even multipliers are less stable than odd ones, so the x20 multi isn't recommended.
For all intents and purposes, it is exactly the same as being allowed to set an unlocked x21. As Mr Linky says, the motherboard has to fudge things a bit under the hood to make the CPU stay at x21 all the time, but it's entirely transparent to you and not something to worry about.
Old thread, but I was just wondering the same thing as OP. I have an i5-750 with a GA-P55A-UD4 (F15).
When setting the multi to either x20 or x21 in BIOS:
with Turbo ON I actually get x21
with Turbo OFF I actually get x20
I was confused because some sites, like this one, mention x24 multis and whatnot, depending on the number of threads or loaded cores.
I therefore tried with only 1 or 2 threads in LinX and it would never go above x21. Also, the voltage doesn't seem affected at all, nor does EIST. Just as if I'd set x21 myself: nice and predictable behavior. Hopefully it wasn't just coincidence and it stays that way.
So in the end the official (Intel) specs don't matter and motherboard makers just do what they like? Or does it perhaps depend on some other settings in the BIOS? (Of all the power-saving settings, I only use EIST; C-states etc. are all disabled.)
Gigabyte (1366 and 1156) boards generally allow easy access to the 1st extra turbo multiplier, and do not TDP throttle, so using it is just like having an extra non-turbo multiplier.
For the higher single- or dual-core turbo states to function, the various C-states and EIST usually have to be enabled, because Turbo Boost won't utilize multipliers over +1x unless it can put other cores in a low-power state.
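The dependence on C-states can be sketched as a lookup keyed on the number of active cores. The bin table below is the published i5-750 spec (base 20x, with +4/+4/+1/+1 bins for 1/2/3/4 active cores); other CPUs use different tables, and this is only an illustration of the mechanism, not how the power control unit is actually implemented:

```python
# Sketch of how Turbo Boost picks a multiplier from the number of
# active cores. Idle cores only count as inactive once they are
# parked in a deep C-state (C3/C6), which is why disabling those
# states caps the CPU at the all-core turbo bin.

BASE_MULTIPLIER = 20
TURBO_BINS = {1: 4, 2: 4, 3: 1, 4: 1}  # active cores -> extra bins

def turbo_multiplier(active_cores: int, deep_cstates_enabled: bool) -> int:
    """Highest multiplier Turbo Boost can request for a core count."""
    if not deep_cstates_enabled:
        # Without C3/C6, all cores always look active to the CPU.
        active_cores = max(TURBO_BINS)
    return BASE_MULTIPLIER + TURBO_BINS[active_cores]

print(turbo_multiplier(1, deep_cstates_enabled=True))   # 24
print(turbo_multiplier(1, deep_cstates_enabled=False))  # 21
```

This matches the behavior reported above: with C-states disabled, even a single-threaded load never gets past x21.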
Don't know what the policy on bumping old threads is, but I figure those who posted here might have some insight or whatever...
With my i5-750 and Gigabyte P55A-UD4 I have Turbo Boost enabled, which really just acts as a 21x multiplier. And I've been using it this way fully stable without any issues in Windows (it also shows as 21x with the right frequency on the POST screen).
However, after a lot of messing around I found out that Linux doesn't play well with this. If I use this 21x multi I'll just get a kernel panic or some other error when trying to boot up Linux (various distros, some are more sensitive than others).
At lower clocks, like 21x160, this can be mitigated with some more Vcore, but at 21x190 more Vcore doesn't help anymore (well, I only tried adding up to 0.1 V over Prime-stable).
Note that this happens when booting Linux only. As long as it manages to boot I can run Mprime without any problems.
I've read in some places that some Linux kernels don't "see" Turbo Boost and therefore would run at 20x instead of 21x. Perhaps in this case it goes the other way and Linux applies an extra multiplier, like 22x or 24x, during boot? Just guessing, but this could explain why adding voltage helps at lower clocks.
A Core i5-750 has a 24x multiplier available when 1 or 2 cores are active. The 24x multiplier is only available if one of the deeper C-states like C3 or C6 is enabled. You can disable these C-states in the BIOS. Once they're disabled in the BIOS, Windows does not enable them, but I just found out that there is a driver in Linux that can ignore the BIOS and enable these C-states.
Symptom
Recent Linux kernels may have a built-in driver ('intel_idle') which will ignore any C-State limits imposed by Basic Input/Output System (BIOS)/Unified Extensible Firmware Interface (UEFI).
This driver was added to take advantage of the power savings given by C-States on newer Intel Central Processing Units (CPUs).
On systems where latency is an issue, this driver may cause issues by enabling C-States even though they are disabled in the BIOS or UEFI. This can cause minor latency (a few microseconds) as the CPUs transition out of a C-State and into a running state.
If Linux is enabling the C3 or C6 core C-state, your CPU will immediately be able to start using the 24x multiplier when 1 or 2 cores are active. During boot-up the operating system is not well threaded, so it might be jumping up to the 24x multiplier.
You will have to do some Google searching to see if you can find an app that reports C-state residency time. I know I have seen a free Linux app that shows this but I can't think of the name of it offhand. You could then boot up at a lower BCLK, like 160, and check to see if the C-states are being enabled in Linux. RealTemp can show you what C-states are being used at idle in Windows, just to make sure the BIOS is working correctly.
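For what it's worth, recent kernels expose these counters directly in sysfs through the cpuidle interface (each state directory has a `name` and a `time` file, the latter being residency in microseconds); tools like turbostat and powertop report the same data. A minimal Python sketch, assuming that standard layout:

```python
# Read per-state C-state residency for one CPU from the Linux
# cpuidle sysfs interface. Returns an empty dict if the interface
# is not present (e.g. cpuidle driver not loaded).

import os

def cstate_residency(base="/sys/devices/system/cpu/cpu0/cpuidle"):
    """Return {state_name: residency_in_microseconds} for one CPU."""
    residency = {}
    if not os.path.isdir(base):
        return residency
    for state in sorted(os.listdir(base)):
        state_dir = os.path.join(base, state)
        try:
            with open(os.path.join(state_dir, "name")) as f:
                name = f.read().strip()
            with open(os.path.join(state_dir, "time")) as f:
                residency[name] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # skip entries that aren't state directories
    return residency

if __name__ == "__main__":
    for name, us in cstate_residency().items():
        print(f"{name}: {us} us")
```

Nonzero C3/C6 residency while the BIOS has them disabled would confirm the kernel is overriding the BIOS setting.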
Great find, thanks!
That seems to be exactly what's going on. I'll try to verify next time I boot into Linux to make sure.
I have all C-states disabled in BIOS and I don't understand the reasoning for ignoring BIOS settings by default; it seems like a silly thing to do.
(edit: RealTemp is showing all 0.0 for C-states in Windows, so I suppose it works ok there.)
The CPU keeps track of this info internally. I know somewhere there is a command line app for Linux that also reports this info. If I find it again, I will post a link.
Edit - This software from Intel also includes a plugin for Linux.
Then I rebooted with barely-Linux-bootable settings to try to crash it, and indeed it did: mprime load, 1 core, still going (notice the 24x multiplier).
And as soon as I loaded mprime with 2 cores I got a kernel panic (the cores were at 23x for the split second before it crashed).
So yeah, that's that.
It can be mitigated with intel_idle.max_cstate=0 if you pass it on the kernel command line before Linux loads, but it's still bad to ignore BIOS settings, IMO.
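Since the parameter lives on the kernel command line (typically added to the bootloader config), whether the workaround is actually in effect on a running system can be checked by parsing /proc/cmdline. A small sketch:

```python
# Check whether the intel_idle.max_cstate workaround is active by
# parsing a kernel command line string (as found in /proc/cmdline).

def max_cstate_limit(cmdline: str):
    """Return the intel_idle.max_cstate value if set, else None."""
    for token in cmdline.split():
        if token.startswith("intel_idle.max_cstate="):
            return int(token.split("=", 1)[1])
    return None

# On a live system:
#   with open("/proc/cmdline") as f:
#       print(max_cstate_limit(f.read()))
print(max_cstate_limit("ro quiet intel_idle.max_cstate=0"))  # 0
print(max_cstate_limit("ro quiet"))                          # None
```

With max_cstate=0 the intel_idle driver is effectively disabled and the kernel falls back to honoring the BIOS C-state configuration.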