1 - 20 of 168 Posts
Most interesting thing about Bartlett Lake-S is that it's supposedly staying on LGA-1700.

Nova Lake should be considerably faster in essentially everything, but BL seems like it could be a great 'budget' option, especially for those with current gen boards and who don't want to deal with the scheduling nonsense that still comes with heterogeneous CPUs.
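For anyone fighting the scheduler on a hybrid chip today, the usual workaround is pinning the game to the P-cores by hand. A minimal sketch of the affinity math, assuming the typical Raptor Lake i9 enumeration (the CPU index layout is an assumption, check yours):

```python
# Hypothetical topology, matching a Raptor Lake i9's usual enumeration:
# logical CPUs 0-15 are the eight P-cores with HT, 16-31 are E-cores.
def pcore_affinity_mask(p_logical: int = 16) -> int:
    """Bitmask covering only the assumed P-core logical CPUs."""
    return (1 << p_logical) - 1

print(hex(pcore_affinity_mask()))  # 0xffff
```

On Windows that mask is what you would hand to SetProcessAffinityMask (or Task Manager's affinity dialog); on Linux, `taskset -c 0-15` does the same thing.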

I don't believe so. Raptor Lake (or whatever the 14th gen was called).
Raptor Cove, but yes.

 
I'm interested to see if this CPU offers any performance gain in games vs the RPL i9s. I doubt it actually will, since games tend to run 8 or fewer primary threads. Some games do run many other helper threads, but the e-cores seem to handle those well.

I'm sure there will be exceptions, of course. CP2077 might be one example, since it runs best with the prefer p-core option turned on and HT enabled. Since HT does give higher performance, it might perform better with more physical p-cores. That's assuming CDPR didn't hard-code it to 8 p-cores, which they might have done.
 
That's assuming CDPR didn't hard-code it to 8 p-cores, which they might have done.
Unless they are going to patch Cyberpunk to use fewer logical cores specifically for Bartlett Lake, which would be really weird, it should use all cores on these CPUs.

It'd be nice if it had a little more cache, but it will be a full house with 12 P-cores stuffed in there lol
I expect the same 3MiB of L3 per P-core (meaning the same 36MiB of L3 that's on the Raptor Lake i9s), and for total die area to be a bit lower than Raptor Lake, as a P-core is slightly smaller than an E-core cluster.

That said, it's not inconceivable that they could go with a larger L3 slice. Emerald Rapids also uses Raptor Cove P-cores, but with 5MiB of L3 each. It's a mesh setup though, unlike Raptor Lake, and the path of least resistance for Bartlett Lake would be to simply replace the E-core clusters with more of the same Raptor Cove 3MiB L3 P-Cores and slot them into the same ring, with as few changes as possible.
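Rough napkin math, if anyone wants to play with the numbers (the 12 P-core count is from the rumors, not confirmed):

```python
# Back-of-the-envelope L3 totals for a rumored 12 P-core
# Bartlett Lake-S, under the two slice sizes discussed above.
P_CORES = 12

for slice_mib, origin in ((3, "Raptor Lake ring slice"), (5, "Emerald Rapids slice")):
    print(f"{slice_mib} MiB/core ({origin}): {P_CORES * slice_mib} MiB total L3")
```

So the path-of-least-resistance option lands at 36MiB (same as the RPL i9s), while Emerald Rapids-sized slices would mean 60MiB.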
 
Unless they are going to patch Cyberpunk to use fewer logical cores specifically for Bartlett Lake, which would be really weird, it should use all cores on these CPUs.



I expect the same 3MiB of L3 per P-core (meaning the same 36MiB of L3 that's on the Raptor Lake i9s), and for total die area to be a bit lower than Raptor Lake, as a P-core is slightly smaller than an E-core cluster.

That said, it's not inconceivable that they could go with a larger L3 slice. Emerald Rapids also uses Raptor Cove P-cores, but with 5MiB of L3 each. It's a mesh setup though, unlike Raptor Lake, and the path of least resistance for Bartlett Lake would be to simply replace the E-core clusters with more of the same Raptor Cove 3MiB L3 P-Cores and slot them into the same ring, with as few changes as possible.
Well, right now it limits itself to a maximum of 16 threads. Hard to know if it would expand to 24 threads when using the prefer p-core option. Also unclear how the default mode would behave with 12 p-cores vs 8P16E.
 
I'm more wondering if the IMC will get an upgrade. Like 9000+ G2.
 
I'm interested to see if this CPU offers any performance gain in games vs the RPL i9s. I doubt it actually will, since games tend to run 8 or fewer primary threads. Some games do run many other helper threads, but the e-cores seem to handle those well.

I'm sure there will be exceptions, of course. CP2077 might be one example, since it runs best with the prefer p-core option turned on and HT enabled. Since HT does give higher performance, it might perform better with more physical p-cores. That's assuming CDPR didn't hard-code it to 8 p-cores, which they might have done.
I think games will run better because there aren't E-cores to be misallocated by the game engine or Windows.
 
Well, right now it limits itself to a maximum of 16 threads. Hard to know if it would expand to 24 threads when using the prefer p-core option. Also unclear how the default mode would behave with 12 p-cores vs 8P16E.
I wasn't aware that it was limited to 16 threads by default, probably because I've been using unofficial SMT patches with the game forever (with which it clearly scales past that), but this would make sense given the number of worker threads doesn't really scale past a dozen. Sixteen should be just enough for optimal scaling of the renderer plus all the other stuff that needs to be done (except maybe AI/crowds).

Anyway, I would expect either the default or prefer p-core behavior to scale to at least sixteen physical cores, just as it does on AMD CPUs without the SMT patches.
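The cap behavior being described could look something like this in pseudocode form (a purely illustrative sketch, the function name and the exact 16-thread cap are assumptions from this thread, not CDPR's actual code):

```python
def worker_thread_count(prefer_pcore: bool, logical_cores: int, cap: int = 16) -> int:
    """Pick how many worker threads to spawn.

    Mirrors the behavior described above: with the 'prefer p-core'
    option the pool is reportedly capped at 16 threads; otherwise
    threads are launched on all logical cores. Illustrative only.
    """
    if prefer_pcore:
        return min(logical_cores, cap)
    return logical_cores

# A 13900K-style part: 8P (16 threads) + 16E = 32 logical cores
print(worker_thread_count(True, 32))   # capped at 16
print(worker_thread_count(False, 32))  # all 32
```

Under that model a 12 P-core Bartlett Lake with HT (24 logical) would still hit the 16-thread ceiling in prefer p-core mode, which is exactly the open question here.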
 
I would experiment with this CPU if it gets released.
 
I wasn't aware that it was limited to 16 threads by default, probably because I've been using unofficial SMT patches with the game forever (with which it clearly scales past that), but this would make sense given the number of worker threads doesn't really scale past a dozen. Sixteen should be just enough for optimal scaling of the renderer plus all the other stuff that needs to be done (except maybe AI/crowds).

Anyway, I would expect either the default or prefer p-core behavior to scale to at least sixteen physical cores, just as it does on AMD CPUs without the SMT patches.
To be clear, it's 16 threads when using the prefer p-core option; it launches threads on all cores when that option is not turned on. On RPL, though, the prefer p-core option performs significantly better, and this is not a simple matter of scheduling: simply turning off the e-cores does not result in a performance increase. The prefer p-core option must be enabled, so CDPR did some additional optimization of some kind.

Obviously I do think CP may perform better on the 12 p-core CPU which is why I mentioned it as an outlier in the first place. I'm just not confident it will be a big improvement (big meaning maybe greater than 5%).
 
Obviously I do think CP may perform better on the 12 p-core CPU which is why I mentioned it as an outlier in the first place. I'm just not confident it will be a big improvement (big meaning maybe greater than 5%).
I don't think the performance uplift will be huge either, but there are areas where allowing the game to use more threads has helped. The reason I started using SMT mods with Cyberpunk was because CDPR, in their infinite wisdom, decided that eight-core AMD CPUs wouldn't benefit meaningfully from SMT. When it came to areas of high NPC density, they were extremely wrong. All of my eight-core AMD CPUs saw obvious performance uplifts, when not completely GPU limited, in the middle of Night City in CP2077 from allowing the game to use all logical cores. Likewise, I think having 50% more physical P-cores on Bartlett Lake will help in exactly the same areas. SMT is nice, but having each and every worker thread, up to the limit of scaling, on its own core with its own L2 is surely better.

Now, I think the number of games where Bartlett Lake is the best-performing CPU, beyond that tiny gap of time between it becoming available and Nova Lake showing up, will be near zero. Unless it's much better binned than Raptor Lake, it's not going to beat Raptor Lake in 95% of games. I don't think it's going to clock higher than current Raptor Lake parts. Intel will bake in all of their mitigations and put some pretty aggressive limiters in place to keep the cores or ring from dying prematurely. I suspect any improvements to manufacturing will be spent making it competitive with Raptor Lake at lower voltages.
 
When there are no E-Cores, what about AVX512, Intel?
 
That is the million-dollar question. We will see what they come up with; either way I am excited to ditch the e-core nonsense.
Finally a gaming chip from Intel, albeit a little late.
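Worth noting that whether AVX-512 is actually exposed is easy to check once hardware is in hand; a minimal sketch that looks for the foundation flag in /proc/cpuinfo-style output (the sample string below is hypothetical):

```python
def has_avx512(cpuinfo_text: str) -> bool:
    """Check for the AVX-512 foundation flag (avx512f) in
    /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "avx512f" in line.split(":", 1)[1].split()
    return False

# Hypothetical sample: a part with AVX-512 fused off reports no avx512f
sample = "flags\t\t: fpu sse sse2 avx avx2 fma"
print(has_avx512(sample))  # False

# On real Linux hardware: has_avx512(open("/proc/cpuinfo").read())
```

Early Alder Lake boards allowed enabling AVX-512 with the E-cores disabled before Intel fused it off, so an all-P-core die is exactly where the question becomes interesting again.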
Late or not, AMD is gonna have a 12-core next gen on a single chiplet, a 15-25% perf increase in gaming over the 9800X3D, and the consequent multicore increase.

Even if Intel launches something that can challenge the 9800X3D in a 12-core format, it's good enough if the price is right. Competition is good.
 
Late or not, AMD is gonna have a 12-core next gen on a single chiplet, a 15-25% perf increase in gaming over the 9800X3D, and the consequent multicore increase.

Even if Intel launches something that can challenge the 9800X3D in a 12-core format, it's good enough if the price is right. Competition is good.
Depends how you look at it. IMO Intel already launched something that challenged the 9800X3D; the 9800X3D isn't much better than what the 7800X3D provided. Both my 13700KF and 14900KS, tuned, beat the pants off the 9800X3D. Then there is AMDip: while many ignore it, it does exist and it can be a serious problem. I bought into the AMD hype as well, but now that I'm back on Intel I'm never going back until they prove they can make a proper performance CPU.

The CPU chiplet design and 3D cache are terrible for gaming and top-tier performance; it should have never happened. AMD needs to go back to a monolithic chip design like they did with their 9070 XT GPU.
 
The CPU chiplet design and 3D cache are terrible for gaming and top-tier performance; it should have never happened. AMD needs to go back to a monolithic chip design like they did with their 9070 XT GPU.
The extra cache goes a long way toward making up for the downsides of having the memory controller on a different chip.

AMD is also unlikely to ever move back to monolithic parts, outside of low power chips and APUs. Economics strongly favor modular chiplets/tiles. Intel also knows this, which is why they have been trying to move away from large monolithic parts. Panther Lake might put the IMC back on the compute tile, but Nova Lake and future architectures are likely to separate them again.

Reducing the physical distance between the IOD and CCDs, both to improve performance and efficiency by omitting SerDes in favor of point-to-point Fabric interconnects, is something that's already being done with Strix Halo.

RDNA3 vs. RDNA4 isn't really an apt comparison. Higher-end RDNA4 parts had the memory controllers/PHYs and caches spun off into individual chiplets that each required a very wide and area-hungry interconnect that was difficult and costly to implement with the packaging techniques available. Reducing the size of the largest die flavor helped, but much of that advantage was eaten into by those packaging issues and all the extra transistors needed. As packaging and interconnects improve we're going to see more modular GPUs as well. NVIDIA already does this in the enterprise space with Blackwell. High-end UDNA is unlikely to be monolithic.

From a pure performance perspective, monolithic is better, but that performance advantage is declining and the cost differential increasing. There are very few markets where large monolithic parts will remain viable.
 