
Intel's stock takes a big, runny dump

A 6700K can do just under 1100 points in a single-thread test. Its cores are significantly more powerful than these E-cores. A closer comparison would be the 3770K.

The only advantage of those E-cores is power saving, as they can save a few watts. In a laptop that's quite useful, because of the battery.
But on a desktop, it's not that important. Intel should just add more P-cores, not E-cores.
If anything, the 12900K should have 10 P-cores and just 4 E-cores.
I meant in multi-thread. There is no point in comparing single-thread for the E cores, because if the thread scheduling is working correctly they won't be used.

When there are 32 E cores, on an architecture two generations newer, these things are going to fly. And they're already very useful.

Edit: also I don't think it works like that. I don't know if reducing the E cores to 4 would give you the die space for 2 extra P cores, but that would also create significantly more heat than the extra E cores.

I don't think you're getting why Intel is not going beyond 8 P-cores; gaming barely even needs 8 cores as it is.
 

winjer

Gold Member
I meant in multi-thread. There is no point in comparing single-thread for the E cores, because if the thread scheduling is working correctly they won't be used.

When there are 32 E cores, on an architecture two generations newer, these things are going to fly. And they're already very useful.

Edit: also I don't think it works like that. I don't know if reducing the E cores to 4 would give you the die space for 2 extra P cores, but that would also create significantly more heat than the extra E cores.

I don't think you're getting why Intel is not going beyond 8 P-cores; gaming barely even needs 8 cores as it is.

I'm talking about multi-thread, using Cinebench as an example.
In a bench like R23, a P-core is worth 2600 points and an E-core is worth 880 points.
It takes 3 E-cores to match just one P-core. It's a huge difference in performance.
The 12900K costs $600 US. For that price, it should bring more P-cores, not more of those pathetic E-cores.
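For what it's worth, the roughly 3-to-1 ratio is just the quotient of the two per-core scores quoted in this thread (a quick sanity check; the scores are the thread's approximate numbers, not official figures):

```python
# Per-core Cinebench R23 scores as quoted in this thread
# (approximate community numbers, not official Intel figures).
p_score = 2600  # one P-core
e_score = 880   # one E-core

# About 2.95, i.e. roughly 3 E-cores to match one P-core.
print(round(p_score / e_score, 2))
```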
 
I'm talking about multi-thread, using Cinebench as an example.
In a bench like R23, a P-core is worth 2600 points and an E-core is worth 880 points.
It takes 3 E-cores to match just one P-core. It's a huge difference in performance.
The 12900K costs $600 US. For that price, it should bring more P-cores, not more of those pathetic E-cores.
But you compared the 6700K's single-thread performance to the E cores. You have to compare the 6700K's multi-thread performance to the E cores.

5950x was launched at $750, and it loses in that cinebench test, so I don't see anything pathetic about it.
 

winjer

Gold Member
Why? Why make that comparison? When does single thread matter when using E cores?

I don't know what your angle is here.

What don't you understand?
Just multiply the single-core score by the number of cores, and you get roughly the multicore performance. Just a bit less, because of power and heat limits.
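That back-of-the-envelope method can be sketched like this (the per-core scores are the thread's numbers; the 0.9 derating factor is a made-up stand-in for all-core power/heat limits, not a measured value):

```python
# Naive multicore estimate: sum per-core single-thread scores, then
# derate a bit for all-core power and heat limits. Per-core scores are
# the thread's approximate numbers; the 0.9 factor is a hypothetical fudge.

def estimate_multicore(p_cores, e_cores, p_score=2600, e_score=880,
                       derate=0.9):
    raw = p_cores * p_score + e_cores * e_score
    return round(raw * derate)  # rounded estimate in R23-style points

# 12900K-style layout: 8 P-cores + 8 E-cores
print(estimate_multicore(8, 8))  # -> 25056 under these assumptions
```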
 
What don't you understand?
Just multiply the single-core score by the number of cores, and you get roughly the multicore performance. Just a bit less, because of power and heat limits.
3770k is less performant than 8 E cores in multi thread.

E cores only matter in multi thread. It's 8 E cores vs 4 cores with hyperthreading.
 
winjer

[Image: Intel Alder Lake hybrid design, P-core vs E-core performance chart]

The claim is that with their design they can pack in enough E cores to realize higher performance than just adding more P cores.

The Alder Lake P cores are on a monolithic design, not chiplets, so this is obviously true.

I am having trouble finding the actual physical size of each core type.
 
What? Of course E-cores can run single-threaded workloads.
HT can improve performance, depending on the application.
But this is just another of the advantages of the P-cores, since the E-cores can't do SMT.
If an E core is 880, times 8 that's 7040, which is much more than 4382.

They can run single-thread applications. They're just not meant to, except for background tasks.

They're not going to be used in games unless the scheduling is wrong, like in a few games on Windows 10.

No kidding hyperthreading helps! I am so confused as to what you are trying to get at here.
 
What kind of calculations are you doing?
You have to compare 4 vs 4 cores. Not 4 vs 8.
It's 8 threads vs. 8 threads.

I think you got confused. I understand that one core on the 3770K is better. There just isn't any point in saying so, as the 8 E cores will be much faster for productivity vs the former's 8 threads.
 

winjer

Gold Member
It's 8 threads vs. 8 threads.

I think you got confused. I understand that one core on the 3770K is better. There just isn't any point in saying so, as the 8 E cores will be much faster for productivity vs the former's 8 threads.

My point is that those E-cores can only match Ivy Bridge IPC, from 10 years ago. Not Skylake.
Remove the HT from the 3770K, and it's a match. Or give the E-cores HT.

These are really weak cores. And the worst part is that they can cause a small performance degradation in games.
They have caused compatibility problems with DRM and some applications.

I see you give a lot of value to those E-cores. But I give them no value at all.
 
My point is that those E-cores can only match Ivy Bridge IPC, from 10 years ago. Not Skylake.
Remove the HT from the 3770K, and it's a match. Or give the E-cores HT.

These are really weak cores. And the worst part is that they can cause a small performance degradation in games.
They have caused compatibility problems with DRM and some applications.

I see you give a lot of value to those E-cores. But I give them no value at all.
I never said they were as good as a skylake core, I said all core performance was roughly the same.

Did you see the picture I posted? That's Intel, not me saying they get more perf from adding e cores.

There have been some thread scheduling issues, but they will get sorted. Games barely, barely use more than 6 cores, there's no need for more than 8.

Will you still trash E cores if 13900k beats 7950x in multicore?
 

winjer

Gold Member
I never said they were as good as a skylake core, I said all core performance was roughly the same.

Did you see the picture I posted? That's Intel, not me saying they get more perf from adding e cores.

There have been some thread scheduling issues, but they will get sorted. Games barely, barely use more than 6 cores, there's no need for more than 8.

Will you still trash E cores if 13900k beats 7950x in multicore?

I'm not trashing E-cores because they are not AMD. I'm trashing them because I think they are a bad idea for the desktop.
In laptops there is a case to be made, for power savings. But that's it.

I've seen on other forums, people posting some nice improvements in several games, after disabling the E-Cores on the 12900K.
Had Intel put more real cores and/or cache, it would be a better CPU all around. Maybe it could lose a few points in cinebench, but many other applications would do better.

But the thing I don't understand is why you are so focused on Cinebench. Are you rendering a lot in these types of applications?
And if so, why not just go for an HEDT system, be it from Intel or AMD? They have much greater throughput than a 12900K. So although they are more expensive, the work they do pays for it.

Regarding the 13900k VS 7950X, the leaks and info we got so far, seem to indicate they will be very similar in performance.
The big difference will be power consumption. Both are increasing, but considering that the 12900K already consumes almost double that of the 5950X, it doesn't bode well for Intel.
The 13900K is still on Intel 7 process node. Which is just an improvement on the Intel 10 node. But Zen4 will be on N5, a much more efficient node than N7.
And N7 was already more efficient than Intel 7.
 
I'm not trashing E-cores because they are not AMD. I'm trashing them because I think they are a bad idea for the desktop.
In laptops there is a case to be made, for power savings. But that's it.
But, worst case, E cores are going to let the 13900k match the 7950x. Again, as in the picture, Intel gains more multicore performance, with their monolithic design by adding e cores than adding more P cores.
I've seen on other forums, people posting some nice improvements in several games, after disabling the E-Cores on the 12900K.
Had Intel put more real cores and/or cache, it would be a better CPU all around.
As was the case for Ryzen SMT and older Intel chips with HT; scheduling issues are nothing new, nor exclusive to E cores. It will be sorted.

And that's not true, as I've explained above and showed you in that graphic.

But the thing I don't understand is why you are so focused on Cinebench. Are you rendering a lot in these types of applications?
And if so, why not just go for an HEDT system, be it from Intel or AMD? They have much greater throughput than a 12900K. So although they are more expensive, the work they do pays for it.
I'm not focused on it. It's why the i9 and Ryzen 9 skus exist! You don't get a 7950x just for gaming just like you wouldn't get an i9 for just that. I have an 8 core 16 thread chip, like you.

You're not understanding the point of E cores and are falsely claiming Intel would be better without them as it stands with their current design.

Regarding power consumption, it is an issue, but performance is king. And I don't know that it's true that N7 is better than Intel 7, as Intel is pushing clocks higher vs. Zen 3.

Obviously Intel's node woes are their biggest obstacle, and we need them to get this sorted out.

Also, if you think the wattage is a problem, ditching the E cores for more P cores is going to make that situation much worse.
 
winjer Also you're focusing too much on the i9. The 13600K will be a 20-thread chip up against the 12-thread 7600X. 8 more threads on the 13th-gen i7 vs the 7800X as well.

For budget producers, it adds a lot of value. And even if you don't use it for that, why scoff at extra headroom for similar money?
 

winjer

Gold Member
But, worst case, E cores are going to let the 13900k match the 7950x. Again, as in the picture, Intel gains more multicore performance, with their monolithic design by adding e cores than adding more P cores.

But that is just because Intel is limited by not having chiplet tech as advanced as AMD's.
If Intel had chiplet tech that good, do you think they would bother with adding more and more E-cores?

As was the case for Ryzen SMT and older Intel chips with HT; scheduling issues are nothing new, nor exclusive to E cores. It will be sorted.

There are still issues with HT/SMT. It will never be fully sorted, because a CPU's front end can't predict every scenario.
And the same happens with a hybrid architecture such as this. The thread dispatcher has to make sure that the E-cores don't get important threads, or they'll quickly become a bottleneck.
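A toy sketch of that constraint (purely illustrative: Intel's actual Thread Director works in hardware with OS hints, and the thread names and priority field here are invented for the example):

```python
# Toy hybrid-core dispatcher: high-priority threads claim P-cores first,
# everything else spills onto E-cores. Purely illustrative; this is NOT
# how Intel's Thread Director actually works.

def dispatch(threads, p_cores=8, e_cores=8):
    assignment = {}
    # Serve high-priority threads first so they get the P-cores.
    ordered = sorted(threads, key=lambda t: t["priority"] != "high")
    for t in ordered:
        if t["priority"] == "high" and p_cores > 0:
            assignment[t["name"]] = "P-core"
            p_cores -= 1
        elif e_cores > 0:
            assignment[t["name"]] = "E-core"
            e_cores -= 1
        else:
            assignment[t["name"]] = "queued"
    return assignment

# Hypothetical workload: a game thread plus a background task.
threads = [{"name": "game-render", "priority": "high"},
           {"name": "search-indexer", "priority": "low"}]
print(dispatch(threads))
```

If an important thread ever lands on an E-core (a scheduling bug, as happened in a few games on Windows 10), it becomes exactly the bottleneck described above.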

And that's not true, as I've explained above and showed you in that graphic.

Only applications that use all cores would benefit. Renderers are but one type of application that would benefit from it.
But many others benefit from real cores and/or cache. Games are a prime example for cache.

I'm not focused on it. It's why the i9 and Ryzen 9 skus exist! You don't get a 7950x just for gaming just like you wouldn't get an i9 for just that. I have an 8 core 16 thread chip, like you.

You're not understanding the point of E cores and are falsely claiming Intel would be better without them.

But do you do that much rendering? And if so, why not use an HEDT system?

You give too much trust to Intel's PowerPoint slides. What matters are independent benchmarks.
Companies like Intel, AMD and nVidia have a long history of bending reality, trying to make their products look better.

Regarding power consumption, it is an issue, but performance is king. And I don't know that it's true that N7 is better than Intel 7, as Intel is pushing clocks higher vs. Zen 3.

I have to insist on the same question: how much rendering do you do?
Granted, Alder Lake is better at rendering than Zen3. But at almost double the energy cost. If you are a professional, constantly doing renders, it's going to cost significantly more every year. Especially at a time when energy prices are increasing drastically.

In the case of 13th Gen vs Zen4, do you really think power consumption is not a deciding factor for rendering?
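The energy math is easy to sanity-check. A minimal sketch, assuming the rough all-core wattages cited in this thread (~294W vs ~179W) and hypothetical values for render hours per day and electricity price:

```python
# Back-of-the-envelope yearly energy cost of an all-day render box.
# Wattages are the rough all-core figures quoted in this thread;
# hours/day and the price per kWh are hypothetical assumptions.

def yearly_energy_cost(watts, hours_per_day=8, price_per_kwh=0.30):
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

extra = yearly_energy_cost(294) - yearly_energy_cost(179)
print(f"~{extra:.2f} per year extra, under these assumptions")
```

Whether that difference matters next to the value of finishing renders faster is exactly the disagreement in this thread.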
 
winjer Not the point, the extra E cores aren't for me. They are for people who create, and are on a budget. You think everyone who wants to render can afford a Threadripper?

More P cores would not help games at all, not sure why you're being disingenuous about that.

Cache yes, but only a couple of amd chips will have v cache. You think Intel can't make an 8 core chip with a similar amount of extra cache instead of E cores? Would be great if they did.

As for which approach is superior or not, this is also not a fair comparison as amd has had multiple generations to work out the chiplet kinks. Which were many, if you recall!

Compare Intel's current implementation of big.little to AMD's Ryzen first gen and zen+.
 

winjer

Gold Member
winjer Not the point, the extra E cores aren't for me. They are for people who create, and are on a budget. You think everyone who wants to render can afford a Threadripper?

More P cores would not help games at all, not sure why you're being disingenuous about that.

Cache yes, but only a couple of amd chips will have v cache. You think Intel can't make an 8 core chip with a similar amount of extra cache instead of E cores?

There are more applications besides games and renderers. Many would benefit from big cores instead of E-cores.
And in the case of games, E-cores harm performance. If they were replaced by cache, it would improve performance a bit, instead of reducing it.

Why are you focusing so much on renderers, when many other applications get minimal performance from E-cores, or even get a regression?
And if budget is so important, why are you recommending a CPU that consumes almost double the power? For someone that has a PC that spends all day rendering, this makes a huge difference in cost.

No, I don't think Intel is capable of putting as much L3 cache on a CPU as AMD can. Not at the moment. Maybe they can catch up in a few years.
 
There are more applications besides games and renderers. Many would benefit from big cores instead of E-cores.
And in the case of games, E-cores harm performance. If they were replaced by cache, it would improve performance a bit, instead of reducing it.

Why are you focusing so much on renderers, when many other applications get minimal performance from E-cores, or even get a regression?
And if budget is so important, why are you recommending a CPU that consumes almost double the power? For someone that has a PC that spends all day rendering, this makes a huge difference in cost.

No, I don't think Intel is capable of putting as much L3 cache on a CPU as AMD can. Not at the moment. Maybe they can catch up in a few years.
Umm no, if you spend all day rendering, time is money. Electricity is secondary, very very secondary. That is completely ridiculous. Performance being equal, sure.

But if you were a gamer, and i9 is better for games, you also render, yeah I would recommend the i9, some extra watts or not. It just depends.

I'm focusing on renderers because that is the main point of E cores as well as to handle background tasks.

Basically you're saying extra headroom isn't good, I guess? You would spend the same money on 12 thread no v cache ryzen 5 vs the 13600k? If so, there's only one way to interpret that...

I wouldn't necessarily recommend an i9, for the nth time, I say get the i7. BUT I would recommend the upcoming i9 over the 7950x, if the former had better performance, and you needed it for productivity.

If someone only had $300 and they could choose a 7600X or 13600K, yeah, duh, I'd recommend the latter. Again, if the gaming performance is there.


See where we are at in a few years I guess 🙂
 

winjer

Gold Member
Umm no, if you spend all day rendering, time is money. Electricity is secondary, very very secondary. That is completely ridiculous. Performance being equal, sure.

But if you were a gamer, and i9 is better for games, you also render, yeah I would recommend the i9, some extra watts or not. It just depends.

If time is money, then HEDT is the solution, not a consumer CPU.
And energy for a CPU that is constantly rendering is very important. Energy is also money.
It's not just some extra watts. It's 294W vs 179W, Alder Lake vs Zen3. And with Zen4 vs Raptor Lake, it's likely to increase.

I'm focusing on renderers because that is the main point of E cores as well as to handle background tasks.

No it's not. E-cores suck at rendering. They can only do 880 points in Cinebench; a P-core can do 2600 points.
If Intel could use chiplets and put 16 P-cores on a chip as easily as AMD can, it would trounce the 12900K.
And any core can handle background tasks.
The real advantage of E-cores is that they are more frugal with energy consumption.

Basically you're saying extra headroom isn't good, I guess? You would spend the same money on 12 thread no v cache ryzen 5 vs the 13600k? If so, there's only one way to interpret that...

At this point we know nothing about the 7600X vs the 13600K. We don't know prices, power consumption, performance, etc.
BTW, have you seen this?


A Userbench benchmark of a hex-core Zen 4 part has surfaced, showcasing impressive gains and easing concerns over the single-threaded capabilities of the next-gen Ryzen processors. The SKU in question is likely the Ryzen 5 7600X with a base and boost clock of 4.4GHz and 4.95GHz, respectively. This matches with what AMD has said about the 5GHz+ load frequencies of its next-gen CPUs.
Userbenchmark is known to heavily favor Intel with rather idiotic claims about AMD’s “incompetence”. Ironically, the Ryzen 5 7600X rips through its Alder Lake rivals with a score of 243 points, leading the Core i9-12900K and i5-12600K by 20% and 25%, respectively.

I wouldn't necessarily recommend an i9, for the nth time, I say get the i7. BUT I would recommend the upcoming i9 over the 7950x, if the former had better performance, and you needed it for productivity.

I don't get why you would recommend the 13900K for productivity, when the few benchmarks that have been released show them matched very similarly in performance.
The big differences are that Zen4 is going to be much more efficient in power usage, and it's going to use a platform with support for several generations of CPUs, like AM4.

If someone only had $300 and they could choose a 7600X or 13600K, yeah, duh, I'd recommend the latter. Again, if the gaming performance is there.

See where we are at in a few years I guess 🙂

You have no information or benchmarks to make that recommendation.
Seems that the only reason for you to recommend that is that you love Intel.

Have you ever bought a CPU from AMD? Or have you always had Intel?
I can tell you I've had several Intel CPUs before the current Zen2 one I have now.
 

FireFly

Member
winjer Also you're focusing too much on the i9. The 13600K will be a 20-thread chip up against the 12-thread 7600X. 8 more threads on the 13th-gen i7 vs the 7800X as well.
We don't know if the 13600K will go against the 7600X. Currently the 5800X is priced to compete with the 12600K, so the 7700X may be competing with the 13600K.

Edit: Or the 7800X of course.
 
Weird timing. I mean the 12th gen chips are actually really good. Not perfect, but better than their 14nm++++++++ series by miles.
 
At this point we know nothing about the 7600X vs the 13600K. We don't know prices, power consumption, performance, etc.
BTW, have you seen this?


Have you ever bought a CPU from AMD? Or have you always had Intel?
I can tell you I've had several Intel CPUs before the current Zen2 one I have now.
Have I seen that a next-generation CPU beats a last-gen one? Really? We know that the 7600X is a 12-thread part, and the 13600K has 20 threads. We can comfortably say the i5 will stomp it in multicore.

I said what CPU I currently have already in this thread: the X3D. Primarily because I already had an AM4 system. Before that, it was a 1600 AF. Without already having the board, I'd have bought Intel.

Before that, it was Intel. It really doesn't matter though, you should stick to the facts.

Were you on forums talking up your 3700x even though 9900k was significantly better? In this very thread I have said the x3D is the best value gaming chip ATM, if you want the best perf. But it's still pretty expensive, and the Intel chips priced below it are better value.

You would probably say E cores suckz if the i9 was noticeably better than the r9. Teh watts! I'm moving on here.
 
We don't know if the 13600K will go against the 7600X. Currently the 5800X is priced to compete with the 12600K, so the 7700X may be competing with the 13600K.

Edit: Or the 7800X of course.
Because amd had to lower prices to compete with alder lake. Intel is not going to price the i5 much above the 7600x. The 7700x is an r7 part.
 

FireFly

Member
Right. Hence why the r5 and i5 won't be priced much above one another.

They both can't lower prices?
Companies generally lower prices to equalise performance at a given price tier, as price wars can hurt margin for both sides. (Or they "offset" pricing by offering more/less performance at a higher/lower price)

So when AMD dropped the price of the 5800X, Intel didn't follow suit with the 12600K. I can see the same happening for the 7700X/7800X. Or maybe these CPUs will be slightly faster and slightly higher priced.

The point is we don't know what the competitive landscape will look like.
 

winjer

Gold Member
Because amd had to lower prices to compete with alder lake. Intel is not going to price the i5 much above the 7600x. The 7700x is an r7 part.

You haven't seen the earnings call from Intel?
Remember the rumor that Intel was going to increase their prices by up to 20%?
Well, Intel just confirmed they are increasing prices in Q4 2022.
So the 13600K might end up competing against a 7700X or 7800X.
 