
With Hot Chips Around The Corner, Let's Fire Up Some Series X Architecture Speculation!

What area(s) of Series X (and Series S) do you think MS have made most of their customizations in?

  • GPU

    Votes: 53 66.3%
  • CPU

    Votes: 20 25.0%
  • Audio

    Votes: 9 11.3%
  • Memory

    Votes: 22 27.5%

  • Total voters
    80
  • Poll closed.

JonnyMP3

Member
Mesh Shaders still require hardware support to use their full capabilities. Not all GPUs support Mesh Shaders; in fact, the only GPUs that currently support mesh shaders are the RTX 20xx series of cards. Please provide evidence to the contrary if you're going to continue making unfounded claims. Here's an article on Mesh Shaders
What is the XSX going to use?
Mesh Shaders.
Does it have an Nvidia card? No, it's AMD.
Not all GPUs can use Mesh Shaders, but somehow RDNA2 allows Microsoft to do that on the XSX. 🤔
 

Bryank75

Banned
Nothing looked a generation ahead with PS5. Most of the games Sony showed looked like PS4 titles. It's not even close. I haven't seen any games that look remotely next gen revealed from MS or Sony, at least not yet. Games in 4K don't impress me, as I've already been playing some games in 4K on my One X.
Ratchet looked about two generations ahead of Halo, and so did Horizon and a few other games... if you want to pretend they didn't, that's fine. I really don't care.

Enjoy the Craig memes for the rest of next gen.
 
Nothing looked a generation ahead with PS5. Most of the games Sony showed looked like PS4 titles. It's not even close. I haven't seen any games that look remotely next gen revealed from MS or Sony, at least not yet. Games in 4K don't impress me, as I've already been playing some games in 4K on my One X.

I agree that 4K alone isn't a really "impressive" next-gen metric, especially considering, as you say, systems like the One X provide a lot of games at 4K already (PS4 Pro as well, though to a lesser extent). The other issue is that if upscaling techniques really take off next gen, we can essentially "fake" 4K through upscaling that produces results as good as, if not better than, native 4K (Series X in particular seems pretty primed for this, since we already know they have customized the CUs in the GPU to support some extended ML features).

BUT...I actually was impressed by several of the PS5 games shown. Then again, I don't need the epitome of graphics to impress me. That said, I quite liked the visual stature of Horizon Forbidden West, Ratchet & Clank, GT7 and even smaller games like Kena. Given that the vast majority of current-gen owners are on base XBO and base PS4, I think for them the jump to PS5 and Series X is going to be at least somewhat mindblowing ;)
 
There is not enough die space for 32+MB of L3 cache.
An 8-core Zen 2 CCD is 80mm2.
Given we know the XSX has a die size of 360mm2, that would leave about 280mm2 for the GPU cores, the I/O, the memory controllers and the memory buses (GDDR6 memory buses are huge, by the way), plus whatever other custom hardware they may or may not have.
A 256-bit bus is about 80mm2, so a 320-bit bus would be about 100mm2. 40 CUs take up about 120mm2, and I/O about 40mm2.
Extrapolate 40 CUs to 56 CUs and you get around 170mm2. Add maybe another 10% for the RT hardware and you're at roughly 180mm2 for the compute cores.

So the GPU portion could be expected to take up a total of 320mm2, which leaves only 40mm2 for the CPU. Even if you increase the transistor density, you're still looking at about 290mm2 just for the GPU, leaving 70mm2 for the CPU - again, not enough. Bear in mind that these pieces likely can't be fitted together perfectly, so there will be some "empty" die space to account for interconnects, data fabrics etc.

At a stretch it's possible to have the full 32MB of cache, but it's highly unlikely imo.
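As a quick sanity check, here's the same area budget as arithmetic (a back-of-envelope sketch; every mm2 figure is the estimate from the post above, not a measured value):

```python
# Back-of-envelope XSX die budget using the estimates above
# (all mm^2 figures are this post's rough guesses, not measured values).

DIE_TOTAL  = 360              # known XSX die size, mm^2
CPU_CCD    = 80               # 8-core Zen 2 CCD incl. the full 32MB L3

BUS_320BIT = 100              # scaled up from ~80 mm^2 for a 256-bit GDDR6 bus
IO         = 40
CU_40      = 120              # estimated area of 40 CUs

cu_56      = CU_40 * 56 / 40  # ~168 mm^2
compute    = cu_56 * 1.10     # +10% for RT hardware -> ~185 mm^2
gpu_total  = compute + BUS_320BIT + IO
cpu_budget = DIE_TOTAL - gpu_total

print(f"GPU portion ~{gpu_total:.0f} mm^2, leaving ~{cpu_budget:.0f} mm^2 for the CPU")
# -> ~325 mm^2 of GPU and only ~35 mm^2 left over: nowhere near the
#    80 mm^2 a full CCD needs, which is why 32MB of L3 looks so unlikely.
```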
 

JonnyMP3

Member
....Maybe because AMD and Microsoft collaborated to bring Mesh Shader support to RDNA 2 GPUs.

RDNA 2 is not out yet. Currently the only GPUs that support Mesh Shaders are the Turing cards.

It's really not a hard concept to grasp...
And guess which other machine has a custom AMD RDNA2-based GPU?
So given that the XSX and PS5 have the same base GPU, they can both use each other's shader algorithms.
Is a game multi-platform developed on DX12U using mesh shaders going to suddenly not use mesh shaders on the PS5 even though it's also RDNA2?

Edit: Looking back, I can understand the confusion. When I said all GPUs, I specifically meant the ones in the PS5 and XSX, not non-Turing or RDNA2 cards. My bad!
 
Last edited:

CrysisFreak

Banned
Nothing looked a generation ahead with PS5. Most of the games Sony showed looked like PS4 titles. It's not even close. I haven't seen any games that look remotely next gen revealed from MS or Sony, at least not yet. Games in 4K don't impress me, as I've already been playing some games in 4K on my One X.
Horizon Forbidden West was the most impressive and certainly a generation ahead. There is no way it would run on PS4, ever. If that doesn't impress you then you're not in for a good time jfl.
 

pasterpl

Member
Not an expert in any way, shape, or form, but given the comments from MS about ray tracing, is it possible that the shaders are kind of dual-threaded? Like, can they do normal shader stuff plus either RT or ML without a performance penalty?

Isn't this exactly what they meant by the infamous 25 TF?
 

TigerKnee

Member
If XSX is so superior then why did everything at the PS5 reveal look a generation or more ahead of the Xbox shows?

Also why was everything at the Xbox shows running on PC rigs as if they had something to hide?

If the Xbox is capable of so much, when will we get to see these advantages?
One - Microsoft is lying about the Series X specifications.
OR
Two - Sony’s developers are more talented than Microsoft’s.
I’m going with 2
 
Last edited:

GODbody

Member
And guess which other machine has a custom AMD RDNA2-based GPU?
So given that the XSX and PS5 have the same base GPU, they can both use each other's shader algorithms.
Is a game multi-platform developed on DX12U using mesh shaders going to suddenly not use mesh shaders on the PS5 even though it's also RDNA2?
Not necessarily, as both the Series X and the PS5 use heavily customized chips based on the RDNA 2 architecture. Given Sony's explicit statement that they're going with a geometry engine and primitive shaders, I'm guessing they're probably not going to have hardware support for Mesh Shaders. If they did, they probably would have mentioned it.

When Mesh Shaders are run on GPUs that don't support them, they still output as vertex shaders - which you would know if you had read the article, or the quote from the article I responded to you with a few posts back.

Let me know when you can produce some evidence that the PS5 supports Mesh Shaders. Until then I'm going to stick with that support being unlikely.
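For what it's worth, the practical shift with mesh shaders is that geometry reaches the GPU as small, self-contained "meshlets" processed by compute-like threadgroups, rather than through the fixed vertex pipeline. Here's a rough, purely illustrative Python sketch of the offline meshlet-building step (the 64-vertex/126-triangle limits follow common vendor guidance, not any console's actual tooling):

```python
# Illustrative meshlet builder: greedily packs triangles into meshlets
# under vertex/primitive limits - the unit of work a mesh shader
# threadgroup consumes. Limits follow common vendor guidance (NVIDIA
# recommends up to 64 vertices / 126 triangles per meshlet).

MAX_VERTS = 64
MAX_TRIS = 126

def build_meshlets(indices):
    """indices: flat triangle index list, 3 entries per triangle."""
    meshlets, verts, tris = [], {}, []   # verts maps global -> local index
    for i in range(0, len(indices), 3):
        tri = indices[i:i + 3]
        new_verts = [v for v in tri if v not in verts]
        # Flush the current meshlet if this triangle would overflow it.
        if len(verts) + len(new_verts) > MAX_VERTS or len(tris) == MAX_TRIS:
            meshlets.append({"vertices": list(verts), "triangles": tris})
            verts, tris = {}, []
            new_verts = tri
        for v in new_verts:
            verts.setdefault(v, len(verts))
        tris.append(tuple(verts[v] for v in tri))
    if tris:
        meshlets.append({"vertices": list(verts), "triangles": tris})
    return meshlets

# e.g. build_meshlets([0,1,2, 1,2,3, 2,3,4]) -> one meshlet with 5 unique
# vertices and 3 locally indexed triangles, ready for per-meshlet culling.
```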
 

JonnyMP3

Member
Not necessarily, as both the Series X and the PS5 use heavily customized chips based on the RDNA 2 architecture. Given Sony's explicit statement that they're going with a geometry engine and primitive shaders, I'm guessing they're probably not going to have hardware support for Mesh Shaders. If they did, they probably would have mentioned it.

When Mesh Shaders are run on GPUs that don't support them, they still output as vertex shaders - which you would know if you had read the article, or the quote from the article I responded to you with a few posts back.

Let me know when you can produce some evidence that the PS5 supports Mesh Shaders. Until then I'm going to stick with that support being unlikely.
We can agree to disagree; that's perfectly fine. But considering both are RDNA2, I'm going with common functionality, as it'll have to be included in the new PC GPUs as well when they're released.
 

Panajev2001a

GAF's Pleasant Genius
Isn't this exactly what they meant by the infamous 25 TF?

Nothing infamous... they can just process 4x and 8x denser vectors at 4x and 8x the speed of regular 4-element-wide vectors (32 bits per element vs 8 or 4 bits per element). Same thing as rapid packed math on Vega GPUs and the PS4 Pro (just supporting even smaller elements in each vector)... or like the GameCube's Gekko CPU by IBM, etc...

Multi-threading would not help in this scenario, as the GPU is already processing many threads at once, swapping between them to hide latency in operations (like memory fetches), but the same operation is running on all the elements of the vector. It would not change the performance impact on the shading workload.

The classic multi-threading you may be thinking of, like SMT, is designed to make use of potentially idle execution units in the CPU (low instruction-level parallelism in each thread of execution), allowing the CPU to fetch, dispatch, and execute instructions from more than one thread at the same time; it acts as more than one virtual core, with the virtual cores essentially sharing the CPU's resources.
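Putting numbers on that packed-math reading (12.15 TF is the published Series X FP32 peak; the rest is just the packing arithmetic, so treat the output as illustrative):

```python
# Packed math: the same vector ALUs process more elements per clock when
# each element is narrower, so peak throughput scales with the packing factor.

FP32_TFLOPS = 12.15   # published Series X FP32 peak

for bits, name in [(16, "FP16"), (8, "INT8"), (4, "INT4")]:
    factor = 32 // bits   # elements packed into each 32-bit lane
    print(f"{name}: {FP32_TFLOPS * factor:.1f} T(FL)OPS ({factor}x packing)")

# FP16: 24.3, INT8: 48.6, INT4: 97.2 -- in line with the ~49/~97 TOPS
# figures Microsoft has quoted; the gain comes from packing smaller
# elements, not from extra execution hardware.
```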
 
Last edited:

Fafalada

Fafracer forever
Like, can they do normal shader stuff plus either RT or ML without a performance penalty?
The 'normal shader stuff' runs on the same execution units as ML and a portion of RT (the acceleration structure traversal). It's not a question of a performance 'penalty' - it's simply a question of what you can execute.
 

DavidGzz

Member
One - Microsoft is lying about the Series X specifications.
OR
Two - Sony’s developers are more talented than Microsoft’s.
I’m going with 2

Flight Simulator, CrossfireX, Everwild, Forza Motorsport, The Medium with its dual reality, Avowed (possible gameplay at the end), and Scorn all looked as good as the PS5 games. And if you're going to bring up a game that is a ways out, I can bring up Hellblade 2, which looked better than Horizon. I think overall both have shown a pretty equal graphics level, but you guys can keep pretending that it was lopsided. Hell, Flight Simulator is the best example, considering it's out this month and will be a launch game for XSX.
 
Last edited:

Elog

Member
There is not enough die space for 32+MB of L3 cache.
An 8-core Zen 2 CCD is 80mm2.
Given we know the XSX has a die size of 360mm2, that would leave about 280mm2 for the GPU cores, the I/O, the memory controllers and the memory buses (GDDR6 memory buses are huge, by the way), plus whatever other custom hardware they may or may not have.
A 256-bit bus is about 80mm2, so a 320-bit bus would be about 100mm2. 40 CUs take up about 120mm2, and I/O about 40mm2.
Extrapolate 40 CUs to 56 CUs and you get around 170mm2. Add maybe another 10% for the RT hardware and you're at roughly 180mm2 for the compute cores.

So the GPU portion could be expected to take up a total of 320mm2, which leaves only 40mm2 for the CPU. Even if you increase the transistor density, you're still looking at about 290mm2 just for the GPU, leaving 70mm2 for the CPU - again, not enough. Bear in mind that these pieces likely can't be fitted together perfectly, so there will be some "empty" die space to account for interconnects, data fabrics etc.

At a stretch it's possible to have the full 32MB of cache, but it's highly unlikely imo.

This is roughly where I end up as well. Assuming that both machines have more or less the same silicon budget in mm2, I end up a little confused by the PS5, to be honest. Even if I assume the new I/O complex plus the Tempest block take up close to the same area as a full CPU (which is most likely overkill in mm2), I end up with roughly 50mm2 unaccounted for. It will be interesting to see where those mm2 are going (I would be surprised if the die were considerably smaller).
 

T-Cake

Member
It's going to be interesting when it comes to the CPU. I have a reasonable Intel 8700 (6 cores/12 threads) and it never seems to go above 35% when I'm playing games. So what the devs will do with an even better CPU (and pushing it way more) should be very interesting. But I also love the SSD. I have an NVMe SSD in my PC, but it's used purely for Windows; my games are on a 500GB SATA SSD. So having games designed around an SSD which is 5x faster than I'm used to will be very interesting as well.

I get paid whenever I use "interesting" in a sentence.
 
Last edited:
This is roughly where I end up as well. Assuming that both machines have more or less the same silicon budget in mm2, I end up a little confused by the PS5, to be honest. Even if I assume the new I/O complex plus the Tempest block take up close to the same area as a full CPU (which is most likely overkill in mm2), I end up with roughly 50mm2 unaccounted for. It will be interesting to see where those mm2 are going (I would be surprised if the die were considerably smaller).

I thought it was assumed they don't have the same silicon budget, though? If they do, then we already know where that budget went: the I/O block.

Seriously, look at the shots in the Road to PS5 vid. That thing is BIG - seemingly the size of the CPU and GPU die areas combined. One reason for that would be the SRAM cache on the I/O block. Any semi-decent amount of SRAM cache is pretty sizable, and they are probably rolling with 32-64 MB of it on the I/O block.
 

oldergamer

Member
Ratchet looked about two generations ahead of Halo, and so did Horizon and a few other games... if you want to pretend they didn't, that's fine. I really don't care.

Enjoy the Craig memes for the rest of next gen.
It looked good, but not two generations ahead of anything. Neither did Horizon; it looked like the same game, just with a greater variety of objects on screen. I'm not pretending, I'm being honest. Did it look good? Damn right. Did it look that much different than what the current game looks like? Not exactly.

Craig meme? Lame. Don't cry fanboy tears when multiplats perform better on Xbox.
 

Elog

Member
I thought it was assumed they don't have the same silicon budget, though? If they do, then we already know where that budget went: the I/O block.

Seriously, look at the shots in the Road to PS5 vid. That thing is BIG - seemingly the size of the CPU and GPU die areas combined. One reason for that would be the SRAM cache on the I/O block. Any semi-decent amount of SRAM cache is pretty sizable, and they are probably rolling with 32-64 MB of it on the I/O block.

You might absolutely be right. One should also remember that Sony customised the actual CU clusters on the PS4 Pro (one 20-CU block has larger CU clusters than the other 20, hinting at changes in cache/rasterizers/primitives etc.).

We'll see! Hopefully we get this level of detail on Monday with the XSX.

Edit: If all my speculation before has any validity, I expect part of those 50mm2 to be allocated to the GE at least, and not just the I/O.
 
Last edited:

oldergamer

Member
Horizon Forbidden West was the most impressive and certainly a generation ahead. There is no way it would run on PS4, ever. If that doesn't impress you then you're not in for a good time jfl.
Nah, you act like you haven't seen the game running on PS4 Pro with dynamic resolution before. If it's the first time you've seen the franchise, yeah, 100% impressive.
 
This is roughly where I end up as well. Assuming that both machines have more or less the same silicon budget in mm2, I end up a little confused by the PS5, to be honest. Even if I assume the new I/O complex plus the Tempest block take up close to the same area as a full CPU (which is most likely overkill in mm2), I end up with roughly 50mm2 unaccounted for. It will be interesting to see where those mm2 are going (I would be surprised if the die were considerably smaller).
I think you're overthinking it.
If Sony went for a narrower GPU, it's very likely the APU is just smaller.
There is a tremendous benefit in terms of process economics in going for a smaller die: more dies per wafer and higher yields, which together mean more usable chips per wafer and lower costs overall.
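The economics point is easy to see with the standard dies-per-wafer approximation plus a Poisson yield model (a sketch with made-up numbers; the defect density here is illustrative, not a real TSMC figure):

```python
import math

# Dies-per-wafer approximation plus a Poisson yield model: a smaller die
# gives more candidates per wafer AND a higher defect-free fraction.
# D0 is an illustrative defect density, not a real TSMC N7 figure.

WAFER_DIAM_MM = 300
D0_PER_CM2 = 0.09

def dies_per_wafer(area_mm2):
    d = WAFER_DIAM_MM
    # Common approximation that accounts for dies lost at the wafer edge.
    return math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2)

def good_dies(area_mm2):
    yield_frac = math.exp(-D0_PER_CM2 * area_mm2 / 100)   # Poisson yield
    return dies_per_wafer(area_mm2) * yield_frac

for area in (360, 300):   # XSX-sized vs a hypothetically smaller APU
    print(f"{area} mm^2: ~{good_dies(area):.0f} good dies per wafer")
# -> roughly 117 vs 150: ~17% less area yields ~29% more sellable chips.
```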

I thought it was assumed they don't have the same silicon budget, though? If they do, then we already know where that budget went: the I/O block.

Seriously, look at the shots in the Road to PS5 vid. That thing is BIG - seemingly the size of the CPU and GPU die areas combined. One reason for that would be the SRAM cache on the I/O block. Any semi-decent amount of SRAM cache is pretty sizable, and they are probably rolling with 32-64 MB of it on the I/O block.

I don't think those diagrams are to scale. They're about as big as they need to be to display the key features.
 
Last edited:

Great Hair

Banned
PRT(+) existed on 8th-gen consoles, so SFS is nothing new. It even claims to be an update to PRT.

Terminology
Use of sampler feedback with streaming is sometimes abbreviated as SFS. It is also sometimes called sparse feedback textures, or SFT, or PRT+, which stands for “partially resident textures”.


What if this is true? I'm starting to think Microsoft has figured out how bad having two memory pools is, especially when one is nearly half the speed of the other.

What if, of those 16 GB, only about 7.5GB can actually be read at 560GB/s, and this "secret sauce" like SFS and whatnot only exists to alleviate the memory issue at hand, i.e. that just a fraction (7.5GB) can be read at 560GB/s?

Thus the need for "Software Technology" capable of reducing the I/O per second, be it through culled objects, less texture streaming, etc.



An interesting 10-part read about the next generation of consoles.
 
PRT(+) existed on 8th-gen consoles, so SFS is nothing new. It even claims to be an update to PRT.



What if this is true? I'm starting to think Microsoft has figured out how bad having two memory pools is, especially when one is nearly half the speed of the other.

What if, of those 16 GB, only about 7.5GB can actually be read at 560GB/s, and this "secret sauce" like SFS and whatnot only exists to alleviate the memory issue at hand, i.e. that just a fraction (7.5GB) can be read at 560GB/s?

Thus the need for "Software Technology" capable of reducing the I/O per second, be it through culled objects, less texture streaming, etc.



An interesting 10-part read about the next generation of consoles.

Multiple people on this forum (and other forums, and hell, MS's own Xbox engineers on Twitter) have debunked your premise of SFS being "just" another take on PRT; MS's own engineers say PRT is "just a start" for what they are doing with SFS.

As for that graphic you've linked: I have read that blog, and its speculation is not only outdated now (it was written back in March), some of it was wrong even at the time. I combed through this graphic in particular in another thread weeks ago, pointing out how ridiculous the blogger's idea is that a next-gen console would only deliver effective bandwidth at or even below that of a system released in 2013...

...because that is idiotic speculation. Okay, it may be possible, but it is so improbable that it should effectively be dismissed; a mega-conglomerate at the top of its field in hardware engineering would not lack the foresight to avoid that in its next-gen console.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Not expecting anything new myself; I feel like MS revealed all the hardware and has just been repeating the same info on their SDK/API stuff.
 
I'm hoping Microsoft have gone for a hidden stacked GPU like with the Xbox One, which can be activated at a later date :messenger_grinning_sweat:

To be honest though, I'm just hoping there is still some juicy stuff to reveal. Microsoft have done so much talking lately regarding the Series X that it leaves me doubtful we'll get much new. Then again, it's been pretty much PR heads doing the talking, so this should at least be more detailed on the technical side.
 
Not expecting anything new myself; I feel like MS revealed all the hardware and has just been repeating the same info on their SDK/API stuff.

There are specific hardware customizations to the APU they have yet to detail. There have also been some things found in patents that they might have implemented into the design but have not actually mentioned publicly. Even quite a few aspects of XvA have not been fully disclosed as of this time (stuff like BCPack has been getting worked on for a good while now).

Quite a bit for them to make mention of still, tbh.
 

GODbody

Member
PRT(+) existed on 8th-gen consoles, so SFS is nothing new. It even claims to be an update to PRT.



What if this is true? I'm starting to think Microsoft has figured out how bad having two memory pools is, especially when one is nearly half the speed of the other.

What if, of those 16 GB, only about 7.5GB can actually be read at 560GB/s, and this "secret sauce" like SFS and whatnot only exists to alleviate the memory issue at hand, i.e. that just a fraction (7.5GB) can be read at 560GB/s?

Thus the need for "Software Technology" capable of reducing the I/O per second, be it through culled objects, less texture streaming, etc.



An interesting 10-part read about the next generation of consoles.
Yes, SFS is an evolution of PRT, designed to be more efficient via a hardware-implemented residency map.
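To make the feedback loop concrete, here's a toy model (purely conceptual; it assumes nothing about Microsoft's actual implementation): sampling clamps to whatever mip is resident while recording the mip it wanted, and the streamer pages tiles in based on that record.

```python
# Toy model of sampler-feedback streaming: sampling clamps to the best
# resident mip while recording the mip it actually wanted; the streamer
# later pages in whatever the feedback map requested. Conceptual only.

COARSEST = 10                # mip used when nothing better is resident

resident_mip = {}            # tile -> finest mip currently in memory
feedback     = {}            # tile -> finest mip any sample asked for

def sample(tile, desired_mip):
    feedback[tile] = min(desired_mip, feedback.get(tile, COARSEST))
    # Fall back to the finest resident mip (higher number = coarser).
    return max(desired_mip, resident_mip.get(tile, COARSEST))

def stream_step():
    for tile, wanted in feedback.items():
        if wanted < resident_mip.get(tile, COARSEST):
            resident_mip[tile] = wanted     # pretend the tile pages in
    feedback.clear()

print(sample((3, 5), desired_mip=2))   # -> 10: renders coarse, logs the want
stream_step()                          # streamer loads mip 2 for that tile
print(sample((3, 5), desired_mip=2))   # -> 2: now sampled at full detail
```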



As for the RAM configuration: as far as the system is concerned, it's one large pool, with fast address space in the GPU-optimal portion, slower address space in the standard portion, and the slowest address space being storage. Here's a good illustration of the RAM configuration from reddit.


[diagram: how data stripes across the XSX's ten GDDR6 chips]


Explanation of the diagram
The columns represent the RAM chips. There are 10 of them: 6 are 2GB and 4 are 1GB. That adds up to the 16GB in the XSX.
If you want to read at the fastest speed, you need to store the data so that it spans all 10 chips, because every chip can then be read at the same time. This spanning of data is represented by the stripes; the red stripes are the full speed, 560GB/s.
You can't span the last 6GB across 10 chips, because only 6 chips have the extra 1GB of capacity (those are the yellow stripes). Since you can then only read from those 6 chips, rather than from all 10, you are limited to 336GB/s.
Hope this explanation helps!
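The arithmetic behind that explanation, worked through (ten 32-bit GDDR6 chips at 14Gbps, i.e. 56GB/s per chip, are the known XSX figures):

```python
# XSX memory striping: bandwidth = (chips spanned) x (per-chip bandwidth).
# Ten 32-bit GDDR6 chips at 14Gbps -> 14 * 32 / 8 = 56 GB/s per chip.

PER_CHIP_GBS = 14 * 32 / 8                 # 56 GB/s
chips_gb = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]  # capacity per chip, 16 GB total

fast_gb = len(chips_gb) * min(chips_gb)    # first 1 GB of all ten chips
slow_gb = sum(chips_gb) - fast_gb          # upper 6 GB, only on the 2GB chips
big_chips = sum(1 for c in chips_gb if c > min(chips_gb))   # 6 chips

print(f"{fast_gb} GB @ {len(chips_gb) * PER_CHIP_GBS:.0f} GB/s")  # 10 GB @ 560
print(f"{slow_gb} GB @ {big_chips * PER_CHIP_GBS:.0f} GB/s")      # 6 GB @ 336
```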

As per digital foundry

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."
In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. From Microsoft's perspective, it is still a unified memory system, even if performance can vary. "In conversations with developers, it's typically easy for games to more than fill up their standard memory quota with CPU, audio data, stack data, and executable data, script data, and developers like such a trade-off when it gives them more potential bandwidth," says Goossen
 
Last edited:

Great Hair

Banned
Yes, SFS is an evolution of PRT, designed to be more efficient via a hardware-implemented residency map.



As for the RAM configuration: as far as the system is concerned, it's one large pool, with fast address space in the GPU-optimal portion, slower address space in the standard portion, and the slowest address space being storage. Here's a good illustration of the RAM configuration from reddit.


[diagram: how data stripes across the XSX's ten GDDR6 chips]


Explanation of the diagram


As per digital foundry


So 10GB @ 560GB/s, not 7.5GB. I'll have to reread how he ended up with 7.5GB.

Btw the quotes are illegible (dark theme).
 

Dr Bass

Member
Flight Simulator, CrossfireX, Everwild, Forza Motorsport, The Medium with its dual reality, Avowed (possible gameplay at the end), and Scorn all looked as good as the PS5 games. And if you're going to bring up a game that is a ways out, I can bring up Hellblade 2, which looked better than Horizon. I think overall both have shown a pretty equal graphics level, but you guys can keep pretending that it was lopsided. Hell, Flight Simulator is the best example, considering it's out this month and will be a launch game for XSX.

Geez, I could not disagree more. The Medium looked like a budget current-gen game. Scorn has been in development for like 4 years and also looked quite current-gen; go watch the gameplay videos from a few years back. It looks completely drab and boring, not to mention it's basically an H.R. Giger wannabe game (come up with your own style, devs!). Flight Simulator looks cool enough (I'd really love to play it) but has only been shown on PC. Everwild could also be current-gen, and Rare doesn't even know what the game is yet. Hellblade? They showed... video footage of cities. We have yet to see any gameplay from that game. A real-time facial animation is supposed to make me care at all? Especially when it was so laughably corny at trying to be "edge lord" type silliness? I mean, it's been a little while since I was 12. 🤷‍♂️

Go watch Ratchet, Kena and Horizon in their actual 4K videos. Ratchet and Kena especially look like a real-time animated movie. GT7 showed the actual game running at 60fps with ray tracing. Forza was running at 30fps and was "in engine."

Your post is straight up disinformation.

Nothing shown for Xbox Series X has even gotten close to what has been shown for PS5, unfortunately. I have a feeling that as the disparity in game quality continues to grow, this board is going to continue going off the rails. I'd love to see some great stuff coming from Xbox, and I still kinda want a Series X because I think it's gonna be a nice piece of hardware and I'm stupid like that, but there is a reason they are getting a fraction of the interest of Sony and Nintendo, and they continue to flounder with their communication and marketing.
 
Ratchet looked about two generations ahead of Halo, and so did Horizon and a few other games... if you want to pretend they didn't, that's fine. I really don't care.

Enjoy the Craig memes for the rest of next gen.

Enjoy the XSX beating the PS5 at every single third party game for the rest of next gen.

Halo has to run on an Xbox One S, btw.
 

Bryank75

Banned
All of them? I don't get your question.
Deathloop and Ghostwire: Tokyo are already exclusive to PS5.

Apparently there is a huge number of 3rd-party exclusives that PlayStation has tied down: some full exclusives, some timed, and others just getting a lot of extra content and stuff.

There's even a rumor about GTA6 and FFXVI.
 
Last edited:
Deathloop and Ghostwire: Tokyo are already exclusive to PS5.

Apparently there is a huge number of 3rd-party exclusives that PlayStation has tied down: some full exclusives, some timed, and others just getting a lot of extra content and stuff.

There's even a rumor about GTA6 and FFXVI.

Both Deathloop and Ghostwire: Tokyo are timed exclusives, and I assume most if not all of these exclusives are gonna be timed. So my point stands.
 
Just a small bump: Hot Chips for Series X is today!

5:00 – 6:30 PM: GPUs and Gaming Architectures
  • NVIDIA’s A100 GPU: Performance and Innovation for GPU Computing
    • Jack Choquette and Wishwesh Gandhi, NVIDIA
  • The Xe GPU Architecture
    • David Blythe, Intel
  • Xbox Series X System Architecture
    • Jeff Andrews and Mark Grossman, Microsoft
Should be taking place at this time; gonna catch all three if I can. Does anyone know if they have a stream, or if these companies are going to do streams on their own sites? Is it even being streamed at all? I need to know these things.

EDIT: Dunno if these are new or old, but I found these on Tom's Hardware.

[slide images from the Tom's Hardware article]


F it; they have a bunch of slides at the bottom of the article. I'm gonna skim through them right now, so here's the page link.
 
Last edited:

T-Cake

Member
Should be taking place at this time; gonna catch all three if I can. Does anyone know if they have a stream, or if these companies are going to do streams on their own sites? Is it even being streamed at all? I need to know these things.

Are you a paid-up member of that site? It's not free viewing.
 
Are you a paid-up member of that site? It's not free viewing.

Nah, I didn't know it was that kind of website xD.

Tom's Hardware has an article on the presentation with a bunch of slides at the bottom you can check out. Maybe the presentation will cover those slides, and maybe a few more not present here? I linked it in the other post if you're interested.
 

T-Cake

Member
Nah, I didn't know it was that kind of website xD.

Tom's Hardware has an article on the presentation with a bunch of slides at the bottom you can check out. Maybe the presentation will cover those slides, and maybe a few more not present here? I linked it in the other post if you're interested.

Yeah, I'm just viewing those now. Very interesting.

So, was the 4MB of L3 cache per 4 cores in line with what people were predicting, I wonder?
 
Last edited:
IO hub = x8 PCIe Gen 4.

Hmm, what's the max speed for PCIe Gen 4 x8? I wonder if those expansion cards can be upgraded in the future. :goog_unsure:

16 GB/s. Each PCIe Gen 4 lane is 2 GB/s per direction in theory (in practice ~1.969 GB/s after 128b/130b encoding), so x8 comes to about 15.75 GB/s. Gen 3 ran each lane at half that rate; it was Gen 1/2 that used the older 8b/10b encoding, which has more overhead.
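The encoding math behind those figures, as a quick sketch (per-lane signaling rates and encodings are from the PCIe specs):

```python
# Effective PCIe bandwidth = lanes x signaling rate x encoding efficiency.

def pcie_gbs(lanes, gt_per_s, payload_bits, total_bits):
    """Per-direction bandwidth in GB/s."""
    return lanes * gt_per_s * payload_bits / total_bits / 8

print(pcie_gbs(8, 16, 128, 130))   # Gen 4 x8:  ~15.75 GB/s (the "16 GB/s")
print(pcie_gbs(1, 16, 128, 130))   # Gen 4 x1:  ~1.969 GB/s
print(pcie_gbs(1, 8, 128, 130))    # Gen 3 x1:  ~0.985 GB/s (also 128b/130b)
print(pcie_gbs(1, 5, 8, 10))       # Gen 2 x1:  0.5 GB/s (8b/10b overhead)
```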
 
[slide: Series X SoC block diagram]


SOC Fabric Coherency G6 MCs = ......?

Yeah, I'm just viewing those now. Very interesting.

So, was the 4MB of L3 cache per 4 cores in line with what people were predicting, I wonder?

Maybe for some, maybe not for others. There have been rumors that maybe one of the systems has a unified L3 cache reflecting Zen 3 architecture changes. Maybe that's PS5, or maybe it was just a BS rumor the whole time.

I've also heard rumors of a "quite large" L3 cache for the Series X CPU, but IIRC doesn't Zen 2 support up to 32 MB of L3 cache? If so, 8 MB is actually in line with what was to be expected; nothing particularly surprising in that department.
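For reference, the cache arithmetic (desktop figures are standard Zen 2; the 4MB-per-CCX figure is from the Hot Chips slide):

```python
# Zen 2 groups L3 per 4-core CCX; the console part keeps two CCXs
# but shrinks each L3 slice.

CCX_PER_CHIP = 2
DESKTOP_L3_PER_CCX = 16   # MB, standard desktop Zen 2 (e.g. Ryzen 3700X)
XSX_L3_PER_CCX     = 4    # MB, per the Hot Chips slide

print(CCX_PER_CHIP * DESKTOP_L3_PER_CCX)  # 32 MB on a desktop chiplet
print(CCX_PER_CHIP * XSX_L3_PER_CCX)      # 8 MB on Series X
```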
 
Last edited: