If the OS treats an AMD dual-module chip as a traditional quad core (like an i5 or Phenom II X4), the scheduler will not assign workloads very well.
For example, if the OS were to treat an i3 (two cores, hyper-threaded) as a true quad core, it would have no affinity rules to favor balanced scheduling across the actual cores. Instead, it could take a major performance hit by scheduling tasks together on a single hyper-threaded core when they would have finished more quickly split up to run on separate physical cores.
An AMD "module" has the same problem. The easiest way to "solve" it on the software side is to treat a module much like a hyper-threaded core, so that the scheduler favors "splitting" workloads across free modules before filling up the remaining logical "threads" with work.
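To illustrate the "split to free modules first" policy described above, here's a minimal sketch of how such an assignment could be computed. This is hypothetical illustration code, not how any real OS scheduler is implemented, and it assumes the common Linux enumeration where logical CPUs (2*i, 2*i+1) share a module (or an HT core) — that pairing is not guaranteed on every system.

```python
# Hypothetical sketch: place tasks on logical CPUs so that every module
# (or hyper-threaded core) gets one task before any module gets two --
# the "spread first, then fill siblings" policy described above.
# Assumes logical CPUs (2*i, 2*i+1) share module i (common on Linux,
# but not guaranteed; real code would read the CPU topology).

def spread_first(num_tasks, num_modules):
    """Return a logical-CPU id for each task, round-robining across
    modules before doubling up on any module's sibling thread."""
    assignment = []
    for t in range(num_tasks):
        module = t % num_modules    # round-robin across modules first
        sibling = t // num_modules  # 0 = first logical CPU, 1 = sibling
        assignment.append(2 * module + sibling)
    return assignment

# Four tasks on a dual-module chip: the first two tasks land on
# separate modules (CPUs 0 and 2); only then do the siblings fill up.
print(spread_first(4, 2))  # -> [0, 2, 1, 3]
```

Treating a module like an HT core in the OS gets this behavior for free from the existing SMT-aware scheduling paths, which is presumably why that shortcut was taken.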
It's my opinion that AMD would have been much better off in the long run had they chosen to market this new architecture differently. A "module" could have been named a "Tandem Compute Core" or something along those lines. The current marketing leaves a lot of room to mislead consumers, and in my opinion that strategy can only prove beneficial in the short term. Eventually, every person in a position to influence tech-related purchasing decisions will be aware of the deception, and apt to advise against being swayed by AMD's big MHz and core-count claims. The vast majority of people who "just need a computer" and "don't care" about the details do tend to ask friends, relatives, and especially the younger generation for advice on these matters. Even the uninformed are, more often than not, led by folks who pay attention to the big-picture trends and rumors about hardware. This "deception" on AMD's part will come back to bite them in the long run.
Imagine for a moment that you're reading a review of the FX-8320. In parallel workloads the FX chip holds up nicely against a quad-core Sandy Bridge i7 or Xeon E3-1230. Considering the $160 price tag, the reviewer has to admit it's not a bad value, but is forced to point out that it takes "8" brand-new AMD cores to compete with "4" Intel cores from generations ago. That leaves a lot of room for the consumer to draw incorrect conclusions about how AMD cores stack up against the competition.
The same type of problem occurs when comparing, for instance, a "quad-core" APU with a "dual-core" i3. The reviewer's takeaway is that it takes 4 AMD cores to compete with 2 Intel cores. I don't see how that makes AMD look good (it doesn't).
In a hypothetical world where AMD had marketed the new architecture such that a module = core, the "tune" of those same reviews would sound a lot different. Then it would be a $160 quad-core AMD chip performing on par with a still-$300 quad-core Intel chip, well enough to be worth the trade-offs in efficiency and single-threaded performance. There would not be as much confusion about what the APU chips are really competing with, either. It's important to note that the dual-module APUs are currently competing with the i3, while the single-module chips are in a losing battle against Intel's ultra-budget options. Down in those performance ranges it doesn't matter how bad Intel's graphics are; being slightly faster is no longer a benefit on the AMD side, because both are still too slow to matter. The $70 G3220 absolutely creams any single-module offering from AMD at this time, which leads me to believe AMD needs to re-evaluate either the core configurations of these entry-level APUs or their current pricing.
Edited by mdocod - 11/12/13 at 11:57am