Yeah, but those are the advertised clocks. RDNA 1 and 2 cards go well beyond their advertised game clocks in actual games.
This is the highest clock I could find on the 6800 XT and 6900 XT, but the other games in this comparison video show both cards consistently in the 2.4-2.5 GHz range. That's consistently higher than the PS5's clocks, and way higher than the XSX's.
I would love to see DF cap the clocks to 1.8 GHz and see just how proportionately the performance drops. Is it 1:1 with TFLOPS? Or is the GPU raising its clock speed because that's what it needs to hit the higher framerates?
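A quick back-of-envelope sketch of what strict 1:1 TFLOPS scaling would predict (my own illustrative numbers, not DF's: a 72 CU part boosting around 2.25 GHz, assuming RDNA 2's 64 shaders per CU and 2 FLOPs per shader per clock):

```python
# Hypothetical 72 CU RDNA 2 GPU: what does a 1.8 GHz cap do to raw TFLOPS?
def tflops(cus, ghz, alus_per_cu=64, ops_per_alu=2):
    # TFLOPS = CUs * shaders/CU * FLOPs per shader per clock * clock (GHz) / 1000
    return cus * alus_per_cu * ops_per_alu * ghz / 1000

full = tflops(72, 2.25)    # ~20.74 TFLOPS at boost
capped = tflops(72, 1.8)   # ~16.59 TFLOPS capped
print(round(full, 2), round(capped, 2), round(capped / full, 3))  # ratio = 0.8
```

If the measured framerate dropped by less than that 20%, it would suggest the GPU boosts that high because it can, not because the workload strictly needs it.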
Then there is the Infinity Cache. IIRC, the 6800 XT literally has ~6 billion transistors of Infinity Cache taking up precious space on a ~25 billion transistor die. That's a transistor-count increase of roughly 30% on each die. That tells me there is no way they'd be getting this performance without the Infinity Cache, or they would've skimped on such an expense.
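Rough math on that (using my approximate figures above, which are from memory):

```python
cache_t = 6e9   # assumed Infinity Cache transistors
die_t = 25e9    # assumed total die transistors
rest = die_t - cache_t
# Cache on top of the non-cache logic: ~32%, in line with the ~30% I quoted
print(round(cache_t / rest * 100))
```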
And yes, fewer CUs and higher clocks would've been more economical. That's probably why Cerny went with that design, since it seems Sony was still targeting a $399 price point. However, you still have to cool that thing. Look at the wattage on the 6700 XT: 170W on its own at 2.539 GHz, so 12.99 TFLOPS.

MS went with more CUs and lower clocks because they had to budget for the CPU wattage, the SSD, and the motherboard. That's a lot of heat being produced. MS's vapor chamber cooling solution is already very expensive, way more than Sony's traditional but bulky one. If they had gone for a 40 CU, 2.4 GHz GPU, they would've saved some space/cost on the die, but would've needed a far more elaborate and expensive cooler. Adding more CUs was probably cheaper in their scenario.
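For reference, that 12.99 TFLOPS figure for the 6700 XT checks out from its 40 CUs (again assuming 64 shaders per CU and 2 FLOPs per shader per clock, which is how RDNA 2 TFLOPS are usually quoted):

```python
cus, ghz = 40, 2.539
# 40 CUs * 64 shaders * 2 FLOPs/clock * 2.539 GHz = ~13.0 TFLOPS
tflops = cus * 64 * 2 * ghz / 1000
print(round(tflops, 2))  # 13.0
```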