Originally Posted by Mit Namso
What are the differences and pros and cons of using
higher FSB x lower Multiplier = Y MHz
lower FSB x higher Multiplier = Y MHz
It varies, and it's affected differently in systems where the memory bus goes straight to the processor (an on-die memory controller), but I will explain nonetheless.
"higher FSB x lower Multiplier = Y MHz"
What you are doing here is feeding the processor more data per bus clock than it would normally receive. The primary advantage is that FSB bandwidth increases, allowing more data to pass between the processor and the north bridge; if the memory is connected to the NB, this boosts memory bandwidth as well, while slightly lowering latency. More data per second plus lower latency to get that data = increased performance. A perfect example would be this:
I have two processors, a Pentium Dual Core E5500 and a Pentium Dual Core E6300. Both chips are exactly the same in every respect, including clock speed and core stepping, except for the FSB: the E5500 has a rated bus speed of 800 MHz, while the E6300's is 1066 MHz. The E6300 ends up being the faster of the two chips because it doesn't have to wait as long for data to reach it. It's also the reason my own processor is clocked at only 3.3 GHz: the high FSB gives me greater performance than running 3750 MHz at 12.5x300 (20 GFLOPS at 3330 MHz as opposed to 16 GFLOPS at 3750 MHz).
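To put rough numbers on that comparison, here's a small Python sketch. The quad-pumping factor and 64-bit bus width are standard for these Intel FSBs; the 333 MHz base clock is what a 1333 MHz rated bus works out to (nominally 333.33 MHz, rounded here):

```python
# Intel FSBs are "quad-pumped": rated speed = base clock x 4,
# and the 64-bit bus moves 8 bytes per transfer.

def core_clock_mhz(base_mhz, multiplier):
    # The CPU's internal clock is the FSB base clock times the multiplier.
    return base_mhz * multiplier

def rated_bus_mhz(base_mhz):
    return base_mhz * 4

def bus_bandwidth_mb_s(base_mhz):
    return base_mhz * 4 * 8  # 4 transfers/clock x 8 bytes/transfer

# High-FSB setup from the post: 10 x 333 MHz (~1333 MHz rated bus)
print(core_clock_mhz(333, 10), bus_bandwidth_mb_s(333))    # 3330 MHz, 10656 MB/s

# High-multiplier setup: 12.5 x 300 MHz (1200 MHz rated bus)
print(core_clock_mhz(300, 12.5), bus_bandwidth_mb_s(300))  # 3750 MHz, 9600 MB/s
```

Same ballpark core clock, but the high-FSB setup moves roughly 10% more data per second over the bus, which is where the GFLOPS gap above comes from.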
The disadvantages: it puts higher strain on the north bridge and the memory subsystem, as they have to work much harder to keep up with the FSB. North bridge and memory voltages will have to be increased to compensate, which creates extra heat and power draw. This can be negated somewhat by running the memory on a lower divider, but the cost is a small but measurable loss in performance. Programs that do a lot of memory or CPU/NB transfers benefit the most from this, although everything gets a boost; video transcoding and high-precision 3D simulation are among them.
"lower FSB x higher Multiplier = Y MHz"
This is the simpler of the two methods of overclocking, and it usually yields the higher core clocks. All you have to do is increase the multiplier until your overclocking target is reached (on any unlocked processor, that is; Phenom II Black Editions and Intel Extreme Editions come to mind). It tells the internal clock multiplier on the chip to take each FSB tick and multiply it by a higher value, so the internal frequency of the processor increases while the FSB remains static.
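A quick sketch of what that means in practice (the 266 MHz base clock is an assumed example figure, not from the post): stepping the multiplier at a fixed FSB raises the core clock, while the bus speed, and therefore bus bandwidth, never moves.

```python
FSB_BASE_MHZ = 266  # assumed base clock (1066 MHz rated when quad-pumped)

def core_mhz(multiplier):
    # Core clock = FSB base clock x multiplier.
    return FSB_BASE_MHZ * multiplier

def bus_bandwidth_mb_s():
    # Quad-pumped 64-bit bus: 4 transfers per base clock, 8 bytes each.
    return FSB_BASE_MHZ * 4 * 8

for mult in (8, 9, 10):
    print(f"{mult}x{FSB_BASE_MHZ}: core {core_mhz(mult)} MHz, "
          f"bus {bus_bandwidth_mb_s()} MB/s")
# The core climbs 2128 -> 2394 -> 2660 MHz; bandwidth sits at 8512 MB/s throughout.
```

That fixed bandwidth figure is exactly why the gains taper off: the chip runs faster, but its feed of data does not.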
The advantage of doing this is that the processor can chew through data much faster than before, as it takes far less time to finish the previous instruction(s), and it doesn't put anywhere near as much strain on the memory and chipset. This form of overclocking has its drawbacks as well. The first is that performance won't go up as much as you might expect; there will be a boost, but a smaller one, because the bus feeding the chip hasn't gotten any faster. Power draw increases because the CPU voltage has to go up, sometimes quite considerably, to support the higher core clock, and heat rises sharply (I've seen Core 2 Quad overclocks that pull over 200 W and make a Pentium D look like a candle). Programs that benefit most from this are ones that don't require as many system bus transfers, like pure number crunchers (Tripcode Explorer and Prime95 are two); they can load the processor cache with the data required to run and then just let it rip.
The best approach, though, is to mix the two. Lower the multiplier and get the FSB up to a nice standard figure (mine is at 1333 MHz rated), then tweak the multiplier until your overclocking target is reached, benching between multiplier changes to see which setting nails the best performance.
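That mix-and-match hunt can be sketched as a simple search: enumerate FSB/multiplier pairs that land near a target core clock, then bench each candidate. The target, tolerance, and sweep ranges below are made-up illustration values, not settings from the post:

```python
TARGET_MHZ = 3330   # desired core clock (illustration value)
TOLERANCE = 25      # accept candidates within this many MHz of target

candidates = []
for base in range(200, 451):           # FSB base clock sweep, MHz
    for half_steps in range(12, 31):   # multipliers 6.0x to 15.0x in 0.5x steps
        mult = half_steps / 2
        core = base * mult
        if abs(core - TARGET_MHZ) <= TOLERANCE:
            candidates.append((base, mult, core))

# The highest-FSB candidates give the most bus bandwidth at the same core
# clock, so those are the ones worth benching first.
for base, mult, core in sorted(candidates, reverse=True)[:5]:
    print(f"{mult}x{base} = {core:.0f} MHz (rated bus {base * 4} MHz)")
```

In reality you'd run a benchmark at each setting instead of printing it; the point is that many FSB/multiplier pairs hit the same clock, and they are not equal.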