
Steam has a new GPU in the number one spot

Excellent news.

The sooner we move on from Series S tier hardware the better.

All of us who said Series S would be holding the generation back have been vindicated. The low-end PC excuse is no more.

I don't know what you guys are seeing, but if you add up all the hardware on that list that is below Series S specs, it totals 22.4%. Almost a quarter of the people who responded to the survey have GPUs less powerful than a $300 console.
 
I don't know what you guys are seeing, but if you add up all the hardware on that list that is below Series S specs, it totals 22.4%. Almost a quarter of the people who responded to the survey have GPUs less powerful than a $300 console.
PC gamers are too busy playing games to care about that shit
 
Lower clocks = less heat = cheaper cooling.
Yeah, but I can't imagine the clocks will be that much lower. The PC version of the PS5's GPU boosts to 2400 MHz while the PS5 boosts to around 2230 MHz. That's a very small difference, which saves the PS5 heat and power consumption and means more dies are acceptable for PS5 use, but the difference in TFs between the two is about 10%. That's nothing compared to what we're looking at here.
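For reference, a quick napkin-math sketch of that gap, assuming the standard RDNA2 throughput formula (CUs × 64 shaders × 2 ops per clock); the clock figures are the ones cited above:

```python
# Rough RDNA2 FP32 throughput: CUs * 64 shaders * 2 ops/clock * clock speed.
def rdna2_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5 = rdna2_tflops(36, 2.23)      # PS5: 36 CUs at up to ~2.23 GHz
desktop = rdna2_tflops(36, 2.40)  # same CU count at a ~2.4 GHz desktop boost
print(f"PS5 ~{ps5:.1f} TF, desktop ~{desktop:.1f} TF, "
      f"gap ~{100 * (desktop / ps5 - 1):.0f}%")  # ~10.3 TF vs ~11.1 TF, ~8%
```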
 

dave_d

Member
[reaction gif]
 
I don't know what you guys are seeing, but if you add up all the hardware on that list that is below Series S specs, it totals 22.4%. Almost a quarter of the people who responded to the survey have GPUs less powerful than a $300 console.
That doesn't mean they're "holding back the generation". That entire concept is ludicrous. Unlike consoles, PCs and laptops aren't all made for gaming, and a ton of people don't make gaming performance their main criterion when buying one. Many of the most popular games on Steam (DOTA, CSGO and TF2 in particular) will run on just about anything, and millions of Steam users play those almost exclusively. They don't care about the latest AAA console ports, just like the studios making those ports don't care about them.
 

Dream-Knife

Banned
Yeah, but I can't imagine the clocks will be that much lower. The PC version of the PS5's GPU boosts to 2400 MHz while the PS5 boosts to around 2230 MHz. That's a very small difference, which saves the PS5 heat and power consumption and means more dies are acceptable for PS5 use, but the difference in TFs between the two is about 10%. That's nothing compared to what we're looking at here.
Maybe the rumors of it being 60 CUs are false, then.

All of these rumors are made up.
 

GHG

Member
I don't know what you guys are seeing, but if you add up all the hardware on that list that is below Series S specs, it totals 22.4%. Almost a quarter of the people who responded to the survey have GPUs less powerful than a $300 console.

Yes, and my statement was:

"The sooner we move on from Series S tier hardware the better."

So I'm not sure what you're seeing.
 
TLOU using almost 9 GB of VRAM at 1440p with DLSS and ultra settings was pretty weird.

Glad I got that 3060 lol, but it must have sucked for 8 GB card users.
 

PaintTinJr

Member

[Steam Hardware Survey chart]



This means that the most used GPU among Steam users has around the graphical power of a PS5, but with DLSS and better ray tracing.
Steam had 132 million active users in the last 30 days. So if we extrapolate that figure to the 6.27% of users with a 3060, that gives around 8 million users.
Based on numerous benchmarks of cross-gen games that barely stream data, we can say it equals a PS5. But between the PS5's nearly double pixel rate, AMD's far more efficient async compute, and the FP16 rapid-packed maths putting it closer to double the 3060's teraflops (or needing half the 3060's memory bandwidth for the equivalent of 12 TF of Nvidia FP32), saying they are equal isn't going to age well IMO. And I say that as one of those 3060 12GB Steam users who doesn't like fake frame generation or DLSS from sub-HD sources.
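For what it's worth, the napkin maths behind both the quote and the reply; the TFLOPS peaks are the commonly cited spec-sheet figures, so treat this as an assumption-laden sketch rather than a benchmark:

```python
# User-count extrapolation from the quote above.
steam_mau = 132e6     # Steam monthly active users
share_3060 = 0.0627   # the 3060's share in the hardware survey
print(f"~{steam_mau * share_3060 / 1e6:.1f} million 3060 users")  # ~8.3 million

# Rapid-packed FP16: RDNA2 runs packed FP16 at twice its FP32 rate, while
# Ampere's FP16 rate on the shader cores is roughly 1:1 with FP32 (the Tensor
# cores are a separate, much faster FP16 path).
ps5_fp32, rtx3060_fp32 = 10.3, 12.7  # TFLOPS, spec-sheet peaks
print(f"PS5 packed FP16 ~{2 * ps5_fp32:.1f} TF vs "
      f"3060 shader FP16 ~{rtx3060_fp32:.1f} TF")
```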
 

winjer

Gold Member
Based on numerous benchmarks of cross-gen games that barely stream data, we can say it equals a PS5. But between the PS5's nearly double pixel rate, AMD's far more efficient async compute, and the FP16 rapid-packed maths putting it closer to double the 3060's teraflops (or needing half the 3060's memory bandwidth for the equivalent of 12 TF of Nvidia FP32), saying they are equal isn't going to age well IMO. And I say that as one of those 3060 12GB Steam users who doesn't like fake frame generation or DLSS from sub-HD sources.

Yes, the RDNA2 arch has advantages. But Ampere also has some, like better Delta Color Compression, better tessellation and geometry engines, and dedicated units for RT, BVH traversal, and tensor maths.
So it's a case of win some, lose some. But in the end, it kind of evens out.
 

PaintTinJr

Member
Yes, the RDNA2 arch has advantages. But Ampere also has some, like better Delta Color Compression, better tessellation and geometry engines, and dedicated units for RT, BVH traversal, and tensor maths.
So it's a case of win some, lose some. But in the end, it kind of evens out.
Not really. RT beyond low-frequency signed distance fields for full-scene RT lighting is hardly noticeable compared to complex rasterised indirect lighting in indoor scenes at short distances, so my 3060, without fake frames and DLSS, running cross-gen, isn't going to be on par with PS5 exclusives by the end of the gen IMO. I doubt the PC port of Spider-Man 2 will, on balance, compare favourably without fake frames and without DLSS, even at this early stage of leaving cross-gen.

Much like the claims of a GTX x60 being a match for the PS4 during cross-gen, the 3060 isn't going to hold up much better in reality without people kidding themselves with fake frame rates and inferred images IMO.
 

winjer

Gold Member
Not really. RT beyond low-frequency signed distance fields for full-scene RT lighting is hardly noticeable compared to complex rasterised indirect lighting in indoor scenes at short distances, so my 3060, without fake frames and DLSS, running cross-gen, isn't going to be on par with PS5 exclusives by the end of the gen IMO. I doubt the PC port of Spider-Man 2 will, on balance, compare favourably without fake frames and without DLSS, even at this early stage of leaving cross-gen.

Much like the claims of a GTX x60 being a match for the PS4 during cross-gen, the 3060 isn't going to hold up much better in reality without people kidding themselves with fake frame rates and inferred images IMO.

On the 3060, we will be able to push higher quality levels of RT without suffering the performance impact RDNA2 does. And with DLSS 3.5, RT quality will be even better than anything a PS5 can produce.
And there are the other things that the 3060 does better, like memory bandwidth, tessellation and geometry, BVH traversal, and ML.
 

PaintTinJr

Member
On the 3060, we will be able to push higher quality levels of RT without suffering the performance impact RDNA2 does. And with DLSS 3.5, RT quality will be even better than anything a PS5 can produce.
And there are the other things that the 3060 does better, like memory bandwidth, tessellation and geometry, BVH traversal, and ML.
At a native base level, without eye-candy near-plane RT, the 3060 isn't a match for the PS5 in non-cross-gen gaming. Nvidia can try to use fake frames and inferenced pixels, but in a fair scientific comparison at native resolution, with micropolygons, shader FX, and SDF indirect lighting, where packed FP16 competes with FP32 units doing FP16 calculations, the 3060 isn't an equal even now. And techniques from studios that produced great results by today's standards on the lowly 1.84 TF PS4 are only going to widen that gap IMO.

Giving potential 3060 buyers a false sense of how well that hardware will last against their expectations isn't good advice IMO.
 
Yes, the RDNA2 arch has advantages. But Ampere also has some, like better Delta Color Compression, better tessellation and geometry engines, and dedicated units for RT, BVH traversal, and tensor maths.
So it's a case of win some, lose some. But in the end, it kind of evens out.

It's been the case for a few years now that Nvidia's cards are better at geometry throughput than AMD's. I'm curious as to why that is?
 

winjer

Gold Member
At a native base level, without eye-candy near-plane RT, the 3060 isn't a match for the PS5 in non-cross-gen gaming. Nvidia can try to use fake frames and inferenced pixels, but in a fair scientific comparison at native resolution, with micropolygons, shader FX, and SDF indirect lighting, where packed FP16 competes with FP32 units doing FP16 calculations, the 3060 isn't an equal even now. And techniques from studios that produced great results by today's standards on the lowly 1.84 TF PS4 are only going to widen that gap IMO.

Giving potential 3060 buyers a false sense of how well that hardware will last against their expectations isn't good advice IMO.

You have to keep several graphical features on low settings on the PS5, so as not to overwhelm it.
RT is a prime example. If we tried to run the RT quality levels that the 3060 can manage on a PS5, the console would not be able to cope.
And the same goes for geometry and tessellation.
And try using AF16X on a PS5 and see how performance drops. Meanwhile, on a 3060, it's almost free, because it has much more memory bandwidth.
Using the Tensor units on a 3060 will not take processing power away from other tasks. But on RDNA2, ML runs as DP4A on the shader cores.

Yes, we can force games to run with low quality settings similar to the PS5's, for AF, geometry, tessellation, upscaling, ML and RT. But that's like having Mike Tyson fight me with both his hands tied behind his back, and then saying I'm better than him.
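On the AF16X point, a rough sketch of why anisotropic filtering is mostly a bandwidth question; these are worst-case tap counts, not measurements, and the texture cache hides most of them in practice:

```python
# Worst-case texel reads per screen pixel: each anisotropic probe is a
# trilinear sample (two bilinear taps across adjacent mips = 8 texel reads),
# and Nx AF takes up to N probes along the pixel footprint's major axis.
def worst_case_reads(af_level: int) -> int:
    return af_level * 8

for af in (1, 2, 8, 16):
    print(f"AF {af}x: up to {worst_case_reads(af)} texel reads per pixel")
# The cache absorbs most of these, which is why AF is near-free on
# bandwidth-rich cards and only hurts when memory bandwidth is already tight.
```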
 

winjer

Gold Member
It's been the case for a few years now that Nvidia's cards are better at geometry throughput than AMD's. I'm curious as to why that is?

Nvidia has dedicated geometry processors, vertex shaders, and primitive assembly units. AMD GPUs don't have as many of these dedicated units.
 

Zathalus

Member
Based on numerous benchmarks of cross-gen games that barely stream data, we can say it equals a PS5. But between the PS5's nearly double pixel rate, AMD's far more efficient async compute, and the FP16 rapid-packed maths putting it closer to double the 3060's teraflops (or needing half the 3060's memory bandwidth for the equivalent of 12 TF of Nvidia FP32), saying they are equal isn't going to age well IMO. And I say that as one of those 3060 12GB Steam users who doesn't like fake frame generation or DLSS from sub-HD sources.
Accelerated FP16 has been a thing for Nvidia since Turing, thanks to tensor cores. All Nvidia GPUs have had far better FP16 performance since then.

As for Async, Nvidia has equalled AMD since Ampere.
 

winjer

Gold Member
Accelerated FP16 has been a thing for Nvidia since Turing, thanks to tensor cores. All Nvidia GPUs have had far better FP16 performance since then.

As for Async, Nvidia has equalled AMD since Ampere.

And it can go more granular, using INT8 and INT4, doubling throughput with each step, all on the Tensor cores.
And this can all be done relatively easily, using CUDA.
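The scaling being described, as ratios; absolute peak figures vary per card and with sparsity, so this is a sketch of the ratios only:

```python
# Relative peak Tensor-core throughput as precision drops: halving the
# element width roughly doubles ops per second on the same silicon.
for fmt, bits in (("FP16", 16), ("INT8", 8), ("INT4", 4)):
    print(f"{fmt}: ~{16 // bits}x the FP16 tensor rate")
# FP16: ~1x, INT8: ~2x, INT4: ~4x
```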
 

JCK75

Member
The price/performance is just right IMO. Got one as a cheap upgrade for my work PC (just in time for BG3), and also got a cheap prebuilt for my son that is really sweet but needed some more GPU power, so I got another 3060 to bring it up to date, though he won't get it until Christmas.
 

PaintTinJr

Member
You have to keep several graphical features on low settings on the PS5, so as not to overwhelm it.
RT is a prime example. If we tried to run the RT quality levels that the 3060 can manage on a PS5, the console would not be able to cope.
And the same goes for geometry and tessellation.
And try using AF16X on a PS5 and see how performance drops. Meanwhile, on a 3060, it's almost free, because it has much more memory bandwidth.
Using the Tensor units on a 3060 will not take processing power away from other tasks. But on RDNA2, ML runs as DP4A on the shader cores.

Yes, we can force games to run with low quality settings similar to the PS5's, for AF, geometry, tessellation, upscaling, ML and RT. But that's like having Mike Tyson fight me with both his hands tied behind his back, and then saying I'm better than him.
All the features you mention except for AF and RT ultimately have to generate many fragments per pixel, and those fragments eat up pixel fill rate, which is much lower on the 3060. So geometry isn't superior on the 3060, and the number of micropolygons on display in the PS5's first Unreal Engine 5 demo, which used 8K textures too, certainly indicates that the spec comparison favours the PS5 by a considerable margin with complementary software.

You mention anisotropic filtering on the PS5 as an issue, but as a 3060 will only be using 4K textures at best, on balance 8K source textures with 2x AF are still going to produce superior texels every day of the week in PS5 exclusives.

The only feature in a fair comparison where the RTX 3060 currently wins easily is RT quality at short distances, which, by virtue of the complexity of the calculation that leads to a final pixel, is sparse in its fill-rate use, and because it is offloaded to accelerators it is free of FP16 or FP32 cost. But that is graphics polish, and not more important than polygons or texturing when combined with good indirect rasterised lighting or SDF software RT.

I suspect the cross-gen benchmarks are mostly showing that devs aren't optimising for the new console "mobile" CPUs like they did the Jaguar cores, which in turn throttles GPU performance when comparing to a 3060 paired with a full-blown £200 desktop CPU.
 

PaintTinJr

Member
Accelerated FP16 has been a thing for Nvidia since Turing, thanks to tensor cores. All Nvidia GPUs have had far better FP16 performance since then.

As for Async, Nvidia has equalled AMD since Ampere.
Why is it referred to as "async lite" on Nvidia cards then, and why didn't the Unreal Engine 5 demos immediately benefit from it to all hit 60 fps?
 

Esppiral

Member
True, but with the Pro coming out these will look like shit in comparison.

That being said, fuck the 1660, 1650, pretty much anything below 2060 performance. We really shouldn't have people using cards like that in 2023 unless their situation is desperate.
I am still rocking an R9 380X, come at me.
 

RickMasters

Member
Most PC gamers are not buying top-tier GPUs. Why am I NOT surprised by this? I remember a few years ago when Steam's most popular GPU was the 560 Ti. And that was on the cheaper end of things.



This is why you gotta take it with a pinch of salt when some of the more snobbish PC gamers brag about 4K 120 FPS. They ain't even getting that out of their own builds due to cost cutting. Straight cap 🧢
 

winjer

Gold Member
All the features you mention except for AF and RT ultimately have to generate many fragments per pixel, and those fragments eat up pixel fill rate, which is much lower on the 3060. So geometry isn't superior on the 3060, and the number of micropolygons on display in the PS5's first Unreal Engine 5 demo, which used 8K textures too, certainly indicates that the spec comparison favours the PS5 by a considerable margin with complementary software.

Engines like UE5 adjust the number of tris per pixel in real time. Between the hardware and the software, a lot of geometry is culled, so there is not that much overdraw.
And at resolutions like 1440p, a GPU like the 3060 won't become bottlenecked by pixel fill rate. In fact, pixel fill rate and texture fill rate haven't been a bottleneck in modern GPUs for quite a while; things like memory bandwidth and shader throughput are much more likely to be.
But the thing is, a 3060 will rarely be bottlenecked by geometry throughput, while the PS5 is more likely to be.

There is also the issue of how memory bandwidth can affect the efficiency of ROPs, and of how the hardware is designed.
For example, a 5700 XT has a pixel fill rate of 121.9 GPixel/s, while a 2070 Super has 113.3 GPixel/s. One would expect the 5700 XT to win this.
But in the real world, the 2070S wins by over 22%.
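Those paper figures fall straight out of ROP count times boost clock; both cards have 64 ROPs, and the clocks below are the listed boosts:

```python
# Theoretical pixel fill rate = ROPs * boost clock.
def gpixel_s(rops: int, boost_mhz: int) -> float:
    return rops * boost_mhz / 1000.0

print(f"5700 XT: {gpixel_s(64, 1905):.1f} GPixel/s")  # ~121.9
print(f"2070S:   {gpixel_s(64, 1770):.1f} GPixel/s")  # ~113.3
# The paper number ignores memory bandwidth and compression, which gate how
# fast the ROPs can actually write; hence the 2070S winning real workloads.
```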

[benchmark chart: 2070 Super vs 5700 XT]


The thing is, Nvidia has much better support for Delta Color Compression.
[Delta Color Compression chart]


You mention anisotropic filtering on the PS5 as an issue, but as a 3060 will only be using 4K textures at best, on balance 8K source textures with 2x AF are still going to produce superior texels every day of the week in PS5 exclusives.

AF2X will always look much worse than AF16X, especially at oblique angles.
Using high-resolution textures and then AF2X is a waste of texture detail, as they will show up muddy and lacking in detail.
Better to use lower-res textures and be able to sample them more times to produce a cleaner image.

Again, we have to lower detail on the PS5 to be able to keep up with the 3060. Because on a 3060, AF16X is basically free, but on the PS5 it takes a serious performance hit.

The only feature in a fair comparison where the RTX 3060 currently wins easily is RT quality at short distances, which, by virtue of the complexity of the calculation that leads to a final pixel, is sparse in its fill-rate use, and because it is offloaded to accelerators it is free of FP16 or FP32 cost. But that is graphics polish, and not more important than polygons or texturing when combined with good indirect rasterised lighting or SDF software RT.

I suspect the cross-gen benchmarks are mostly showing that devs aren't optimising for the new console "mobile" CPUs like they did the Jaguar cores, which in turn throttles GPU performance when comparing to a 3060 paired with a full-blown £200 desktop CPU.

Come on. It's not just RT. It's RT, and memory bandwidth, and geometry and tessellation, and ML.

Yes, there is the possibility of using SDFs on the PS5, as a form of cost saving.
But it's less precise and will never look as good as RT on the 3060. Especially when using DLSS 3.5 to upscale and denoise RT effects.
 

Zathalus

Member
Why is it referred to as "async lite" on Nvidia cards then, and why didn't the Unreal Engine 5 demos immediately benefit from it to all hit 60 fps?
I'm not sure what you mean by async lite? Async on Nvidia's Maxwell actually cost performance, and on Pascal it did very little, but since Turing it has offered real improvements, with Ampere being really good. I'm not sure if Ada offers any further improvements.
 

PaintTinJr

Member
Engines like UE5 adjust the number of tris per pixel in real time. Between the hardware and the software, a lot of geometry is culled, so there is not that much overdraw.
And at resolutions like 1440p, a GPU like the 3060 won't become bottlenecked by pixel fill rate. In fact, pixel fill rate and texture fill rate haven't been a bottleneck in modern GPUs for quite a while; things like memory bandwidth and shader throughput are much more likely to be.
But the thing is, a 3060 will rarely be bottlenecked by geometry throughput, while the PS5 is more likely to be.

There is also the issue of how memory bandwidth can affect the efficiency of ROPs, and of how the hardware is designed.
For example, a 5700 XT has a pixel fill rate of 121.9 GPixel/s, while a 2070 Super has 113.3 GPixel/s. One would expect the 5700 XT to win this.
But in the real world, the 2070S wins by over 22%.

[benchmark chart: 2070 Super vs 5700 XT]

The thing is, Nvidia has much better support for Delta Color Compression.
[Delta Color Compression chart]

AF2X will always look much worse than AF16X, especially at oblique angles.
Using high-resolution textures and then AF2X is a waste of texture detail, as they will show up muddy and lacking in detail.
Better to use lower-res textures and be able to sample them more times to produce a cleaner image.

Again, we have to lower detail on the PS5 to be able to keep up with the 3060. Because on a 3060, AF16X is basically free, but on the PS5 it takes a serious performance hit.

Come on. It's not just RT. It's RT, and memory bandwidth, and geometry and tessellation, and ML.

Yes, there is the possibility of using SDFs on the PS5, as a form of cost saving.
But it's less precise and will never look as good as RT on the 3060. Especially when using DLSS 3.5 to upscale and denoise RT effects.
I'm not talking about overdraw at all, but about the candidate fragments that are generated when geometry gets decomposed ready for rasterisation.

The real-world font rasterisation test shows that the PS5's fill rate is as claimed and doesn't follow the AMD PC GPU synthetic results you listed.

And an 8K texture, with 4 times the samples of a 4K one, provides a 4:1 improvement in texel density per polygon area, which is effectively another anisotropic filtering level, except that the texture now holds even more real source data, meaning the results may contain detail that simply cannot be recovered by higher filtering like 16xAF. Although that is a moot point, because large oblique polygons requiring anything like 16xAF are an artistic pipeline fail we wouldn't expect from PlayStation's first-party devs, and even then it could be dealt with in a multitude of ways, possibly even with a procedural texture from a shader to render a large flat marble floor, giving the texture sub-texel accuracy, because a new procedural texel colour is uniquely derived for each of the candidate fragments generated for that surface geometry in the view frustum at the framebuffer resolution.
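The 4:1 figure is just the texel-count ratio between the two texture sizes:

```python
# An 8K texture doubles resolution on each axis vs 4K, so 4x the texels.
print(f"{(8192 ** 2) // (4096 ** 2)}x the texels over the same polygon area")  # 4x
```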

Nanite/Megascans floor surfaces also combat the need for high-level AF, because micropolygons help mitigate oblique-angle texture undersampling, as you would logically expect.

As for the 3060 needing to use DLSS, that is proof in itself that the card lacks the ability to brute-force the workload at the PS5's level, and it is being used as a band-aid.
 