
8 GB of VRAM is not enough even for 1080p gaming.

It's VRAM per process, not the dumb shit everyone uses, allocated VRAM. Either way, it's fully path traced, maxed out, a technology that won't see proper daylight for another six years, and it uses 9GB. You can play games fine on 8GB with a few settings lowered.
VRAM per process can be deceiving as well. Sometimes it suggests I still have plenty of VRAM left, yet I see stutters, low fps and texture streaming issues.
 

rodrigolfp

Haptic Gamepads 4 Life
Why the hell are people still talking about last gen games? Of course 8 gigabytes is enough for CP2077; its minimum required hardware is a toaster.

And I have no dog in this race; no real interest in triple-A games at all.
Because no last-gen version of games like Cyberpunk uses the same assets as the PC version.

yet I see stutters, low fps and texture streaming issues.
Maybe because the problem is not lack of VRAM in those cases.
 
Last edited:

ToTTenTranz

Banned
This is the perfect time for AMD to unveil their 16-18GB 7700XT/7800XT, and poke some fun at Nvidia.

It should be 16GB on a fully enabled Navi 32 (7800XT?) and 12GB on a partially disabled Navi 32 (7700XT?).
I do wonder what's taking so long with releasing Navi 32 and Navi 33, though. Too much stock of Navi 2x?

Or perhaps Navi 32 is clocking higher than Navi 31, putting the 7800XT uncomfortably close to the 7900XT?


Exception, not the rule. Hardware Unboxed covered almost a dozen games, and almost all the new ones had the same problem. Cyberpunk is technically a game from 2020.

It does look like adding path tracing almost doubled the VRAM usage, from 5GB to 9GB.
Cyberpunk is clearly a game with 8th-gen assets (textures, geometry, animations) plus a boosted lighting system, which very clearly doesn't make it a 9th-gen game.
 

SlimySnake

Flashless at the Golden Globes
It should be 16GB on a fully enabled Navi 32 (7800XT?) and 12GB on a partially disabled Navi 32 (7700XT?).
I do wonder what's taking so long with releasing Navi 32 and Navi 33, though. Too much stock of Navi 2x?

Or perhaps Navi 32 is clocking higher than Navi 31, putting the 7800XT uncomfortably close to the 7900XT?

It's the latter. Look at the shady shit they pulled with the 7800X3D. It is $200 cheaper and performs equal to or better than the 7900X3D. They knew their top end had poor performance, so they held back the mid range for two months while all the tech enthusiasts grabbed the more expensive but equally powerful CPUs.

TBH, the 7900XTX is not behaving like a 53 TFLOPS card. The 6950XT is 24 TFLOPS and is only 25% slower than the 7900XTX? I get that doubling TFLOPS doesn't always get you 100% more performance, but a 36% performance increase is suspect. They might be hitting the same CU limitations they ran into with the Vega cards, and the Infinity Cache that helped RDNA 2 GPUs perform well with 72-80 CUs is no longer helping scale the performance.

I wouldn't be surprised if the 7800XT is 40 TFLOPS and is close to or just as powerful as the 53 TFLOPS 7900XTX.

P.S. If the 40 TFLOPS 7800XT cuts power usage down to 250 watts, then we might just see a 25 TFLOPS GPU in the PS5 Pro, roughly equivalent to the 6950XT, as long as Cerny doesn't fuck up and cheap out on VRAM bandwidth again.
 

Marlenus

Member
It's the latter. Look at the shady shit they pulled with the 7800X3D. It is $200 cheaper and performs equal to or better than the 7900X3D. They knew their top end had poor performance, so they held back the mid range for two months while all the tech enthusiasts grabbed the more expensive but equally powerful CPUs.

TBH, the 7900XTX is not behaving like a 53 TFLOPS card. The 6950XT is 24 TFLOPS and is only 25% slower than the 7900XTX? I get that doubling TFLOPS doesn't always get you 100% more performance, but a 36% performance increase is suspect. They might be hitting the same CU limitations they ran into with the Vega cards, and the Infinity Cache that helped RDNA 2 GPUs perform well with 72-80 CUs is no longer helping scale the performance.

I wouldn't be surprised if the 7800XT is 40 TFLOPS and is close to or just as powerful as the 53 TFLOPS 7900XTX.

P.S. If the 40 TFLOPS 7800XT cuts power usage down to 250 watts, then we might just see a 25 TFLOPS GPU in the PS5 Pro, roughly equivalent to the 6950XT, as long as Cerny doesn't fuck up and cheap out on VRAM bandwidth again.

The 7800X3D was one month later, and the release date and price were made known when they announced the lineup, prior to the 7950X3D and 7900X3D going on sale. If people did not have the patience to wait an extra month, that is on them, especially when there were plenty of simulated 7800X3D results around when the 7950X3D launched.

The 7900XTX will not behave like a 53 TFLOPS RDNA2 card. It does not have more than double the shaders; it has the ability to dual-issue instructions. This is fine for compute, hence the 53 TFLOPS, but for gaming you get much lower performance. Nvidia did the exact same thing going from Turing to Ampere, but they advertised it as double the CUDA core count (which is kinda true per their definition of a CUDA core, but for gaming it had the same effect as what AMD did with RDNA3).
 

winjer

Gold Member
TBH, the 7900XTX is not behaving like a 53 TFLOPS card. The 6950XT is 24 TFLOPS and is only 25% slower than the 7900XTX? I get that doubling TFLOPS doesn't always get you 100% more performance, but a 36% performance increase is suspect. They might be hitting the same CU limitations they ran into with the Vega cards, and the Infinity Cache that helped RDNA 2 GPUs perform well with 72-80 CUs is no longer helping scale the performance.

I wouldn't be surprised if the 7800XT is 40 TFLOPS and is close to or just as powerful as the 53 TFLOPS 7900XTX.

AMD is having problems with their drivers using the dual-issue shader units.
This means that in games, we have a 51 TFLOPS GPU running as if it's a 25.5 TFLOPS one.
This is why it barely outpaces the 6950XT, a GPU with 23 TFLOPS.

If AMD manages to solve their driver issues with the dual issue instructions, it should boost performance significantly. But this is a big "if".
 
Maybe because the problem is not lack of VRAM in those cases.
For the first few minutes of gameplay my performance was good (60-90fps), but after a couple of minutes I had like 30fps in the same place, with massive stutters and texture swapping. Lowering the texture settings resolved these issues, so I'm sure it was all related to VRAM.

For whatever reason, per-process VRAM is very misleading, because it can suggest there's plenty of free VRAM left, yet you can still see texture swapping and performance problems.

Hardware Unboxed showed this behaviour in their video as well. Hogwarts Legacy was constantly swapping textures, yet per-process VRAM wasn't maxed out.

Now I prefer to look at VRAM allocation, because only then can I be 100% sure my GPU will not be VRAM limited.
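For anyone who wants to see the two counters side by side, here is a minimal sketch using NVIDIA's NVML bindings, assuming an NVIDIA card and the pynvml package; other vendors and overlay tools expose similar but differently named counters.

```python
# Minimal sketch: board-wide allocated VRAM vs. per-process usage via NVML.
# Assumes an NVIDIA GPU and the pynvml package (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Board-wide view: everything currently allocated on the card
# (what people in this thread call "VRAM allocation").
mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)
print(f"total {mem.total / 2**30:.1f} GiB, allocated {mem.used / 2**30:.1f} GiB")

# Per-process view: what each graphics process has claimed for itself
# (roughly the "VRAM per process" counter overlays show).
for p in pynvml.nvmlDeviceGetGraphicsRunningProcesses(gpu):
    used = p.usedGpuMemory
    label = f"{used / 2**30:.1f} GiB" if used is not None else "n/a"
    print(f"pid {p.pid}: {label}")

pynvml.nvmlShutdown()
```

Neither number tells you whether the game's hot working set actually fits, which is why you can still see streaming hitches while either counter looks comfortable.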
 
Last edited:

Celcius

°Temp. member
https://wccftech.com/amd-says-more-vram-matters-in-modern-games-ahead-of-nvidias-rtx-4070-launch/

AMD Says More VRAM Matters In Modern Games Ahead of NVIDIA’s RTX 4070 Launch

MicrosoftTeams-image-5.png.webp
 

Mercador

Member
I wonder if 12GB will be enough for the next 4-5 years before purchasing a 4070... Perhaps I should go for a 6800XT instead.
 
I was close to buying an ex-display RTX 3070 ti until I saw it only had 8GB VRAM and then saw that the 6800XT was better value for money... I didn't end up buying one though.
 

Buggy Loop

Member
https://wccftech.com/amd-says-more-vram-matters-in-modern-games-ahead-of-nvidias-rtx-4070-launch/

AMD Says More VRAM Matters In Modern Games Ahead of NVIDIA’s RTX 4070 Launch

MicrosoftTeams-image-5.png.webp


p3lgvym9s8ta1.jpg


Let's not go into the RE games' High (0.5, 1, 2, 3, 4, 6, 8) GB texture settings (what a bunch of ridiculous options), with barely a concrete explanation of the penalty beyond that you're not overloading the GPU VRAM needlessly. Maybe a bit more streaming?

Using TLOU as a mark of pride? Holy shit :messenger_tears_of_joy: AMD sponsored (tm). How can this help their perception, only god knows.

Meanwhile, something that actually matters, like path-traced Cyberpunk 2077, arguably the best graphical technical showcase from a "preview" since at least Crysis, a historic moment where the first full-fledged AAA game (and open world, even more difficult) uses full path tracing, runs at 9fps on a 7900XTX at 3840x1600 with FSR2 balanced.
 

SlimySnake

Flashless at the Golden Globes
Not even well-optimized non-ports, as with RE4R it's the console versions that are the ports.
p3lgvym9s8ta1.jpg


Let's not go into the RE games' High (0.5, 1, 2, 3, 4, 6, 8) GB texture settings (what a bunch of ridiculous options), with barely a concrete explanation of the penalty beyond that you're not overloading the GPU VRAM needlessly. Maybe a bit more streaming?

Using TLOU as a mark of pride? Holy shit :messenger_tears_of_joy: AMD sponsored (tm). How can this help their perception, only god knows.

Meanwhile, something that actually matters, like path-traced Cyberpunk 2077, arguably the best graphical technical showcase from a "preview" since at least Crysis, a historic moment where the first full-fledged AAA game (and open world, even more difficult) uses full path tracing, runs at 9fps on a 7900XTX at 3840x1600 with FSR2 balanced.
RE4 at 4GB streams in higher-res textures right in front of you. At 8GB all textures are always loaded.

What's odd is that I was able to use 8GB textures on my 10GB card until the latest patch, and it all of a sudden started crashing a few days ago, so I had to switch to 4GB to get under the VRAM limit, and I can see textures loading, especially in transition areas.
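To illustrate why a tight texture budget makes textures resolve right in front of you, here is a toy sketch of a streaming budget. It is purely hypothetical and has nothing to do with RE4's actual engine: the closest surfaces get full-resolution mips until the budget runs out, and everything else makes do with lower mips until you walk up to it.

```python
# Toy texture-streaming budget (all numbers hypothetical). With a small
# budget, distant textures are kept at lower mips and only get full-res
# data streamed in once you get close, which reads as visible pop-in.
def mip_bytes(base_res: int, mip: int) -> int:
    """Rough size of a 1 byte/texel square texture at a given mip level."""
    res = base_res >> mip
    return res * res

def assign_mips(textures, budget_bytes: int):
    """Greedily give the closest textures the highest mips the budget allows."""
    plan, used = {}, 0
    for name, base_res, distance in sorted(textures, key=lambda t: t[2]):
        mip = 0
        while mip < 4 and used + mip_bytes(base_res, mip) > budget_bytes:
            mip += 1                      # can't afford this mip, drop a level
        plan[name] = mip
        used += mip_bytes(base_res, mip)
    return plan

scene = [("wall", 4096, 1.0), ("floor", 4096, 3.0), ("statue", 4096, 9.0)]
print(assign_mips(scene, budget_bytes=24 * 2**20))  # tight: only the wall stays full-res
```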
 

Guilty_AI

Member
Metro Exodus is not path tracing, not even close.
I'm confused by the usage of these terms; from what I understand, they're just different methods to achieve the same result. And path tracing should, supposedly, be less computationally expensive than full ray tracing.
 
Last edited:

Neo_game

Member
AMD is having problems with their drivers using the dual-issue shader units.
This means that in games, we have a 51 TFLOPS GPU running as if it's a 25.5 TFLOPS one.
This is why it barely outpaces the 6950XT, a GPU with 23 TFLOPS.

If AMD manages to solve their driver issues with the dual issue instructions, it should boost performance significantly. But this is a big "if".

Where did you get this info, and why can't they solve this issue?
 

Loxus

Member
16GB total with 2GB reserved for the OS. IIRC console games allocate about a third for system memory and two thirds for VRAM, so about 9GB VRAM.
I don't think that's accurate.
PS4 had 3GB reserved for the OS and 5GB for games.
PlayStation 4 gives up to 5GB of RAM to game developers

This is a break down of Infamous on PS4.
PwQdO7q.jpg


If the PS5 has 2GB reserved for the OS, that leaves 14GB for games.
On PC, system memory is used for storing temporary game data and for decompression.
This temporary data also ends up in VRAM, as VRAM is still much faster than RAM.

Cerny explained how this temporary data can affect VRAM usage.
didjcsy.jpg
p7ypThY.jpg


If you meant 9GB for textures and meshes, then you have a point.
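For the sake of the back-of-envelope math being discussed, here is the split from the quote worked through. The OS reserve and the one-third/two-thirds ratio are the thread's assumptions, not published platform figures.

```python
# Back-of-envelope for the quoted split; all figures are the thread's
# assumptions, not official platform numbers.
total_gb = 16
os_reserved_gb = 2                           # quoted assumption (PS4 reserved ~3 GB)
game_budget_gb = total_gb - os_reserved_gb   # 14 GB left for the game

vram_like_gb = game_budget_gb * 2 / 3        # ~9.3 GB acting like VRAM
sysram_like_gb = game_budget_gb / 3          # ~4.7 GB acting like system RAM

print(f"game budget: {game_budget_gb} GB")
print(f"~VRAM share: {vram_like_gb:.1f} GB, ~system share: {sysram_like_gb:.1f} GB")
```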
 

Buggy Loop

Member
I'm confused by the usage of these terms; from what I understand, they're just different methods to achieve the same result. And path tracing should, supposedly, be less computationally expensive than full ray tracing.

No. Anything not path traced is an approximation, typically a hybrid of rasterization + ray tracing.

DDGI / RTXGI

They use probes in a defined grid pattern. In Metro Exodus EE they use 256 of them. An example:

x3-villa-probes.jpg


They're also trimmed down on the fly in clusters to partition the scene (in a screen-space-reflection manner) because it would be too heavy otherwise. Thus they bleed from RT reflections to SSR when you move the camera. Reflections also do not include alpha-masked geometry (leaves on trees) nor transparent surfaces that are not water. They still rely on screen space to trick the visual effect. This solution also does one bounce and traces immediately back to world-space probes, not "physically" based like a ray would truly behave.

DDGI is not pixel-accurate ray tracing either, because those probe grids have a finite volume. The RTXGI probes generate a 6-pixel texture map (low resolution). DDGI volumes are fiddly to use and require a lot of tweaking to get better results. You can have lighting artifacts otherwise, so it's not a "put a camera in, send rays out and tada, it works!" solution. It primarily functions for very "static" games.
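To make the probe-grid idea concrete, here is a toy sketch of how a shaded point samples a DDGI-style volume: it blends the eight surrounding probes of its grid cell. The probe values, grid layout and spacing are invented for illustration, and real DDGI/RTXGI also stores per-probe depth/visibility to cut down light leaking, which this skips.

```python
# Toy sketch of a probe-grid GI lookup: irradiance at a point is a trilinear
# blend of the 8 probes of the grid cell it falls in. Probe data, layout and
# spacing are invented; real DDGI also weights by probe visibility/depth.
import numpy as np

GRID = (8, 8, 4)                    # 256 probes total, layout invented
SPACING = 2.0                       # metres between probes (illustrative)
probes = np.random.rand(*GRID, 3)   # stand-in RGB irradiance per probe

def sample_gi(pos):
    """Trilinearly interpolate probe irradiance at world position `pos`."""
    p = np.clip(np.asarray(pos) / SPACING, 0, np.array(GRID) - 1.001)
    i0 = p.astype(int)              # lower-corner probe index of the cell
    f = p - i0                      # fractional position inside the cell
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * probes[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

print(sample_gi((3.2, 1.7, 5.4)))   # blended irradiance, not a per-pixel ray result
```

The important part is what this is not doing: no ray is traced per pixel, so contact shadows and small-scale occlusion have to come from elsewhere (SSAO and friends), which is exactly the contrast the path-traced comparisons below show.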

x3-noise-free.jpg


Looks nice enough, right? It is, actually! The thing is, the missing contrast in shadows (look underneath the dragon, its mouth) will probably again have to be tricked in with standard rasterization solutions like screen-space ambient occlusion (and anything screen space can easily break down).

Path tracing removes all the hybrid "aids" and works at per-pixel accuracy. While the DDGI/RTXGI used in Metro Exodus could maybe have "tens" of light sources, RTXDI can have millions. While the DDGI effects sometimes break down à la screen-space solutions, path tracing does not.

The difference in accuracy is basically Cyberpunk 2077 RT Psycho vs Overdrive.

Psycho vs Overdrive

Psycho vs Overdrive 2

Hybrid ray tracing is always a "hack" to save performance while approximating path tracing, because full path tracing used to be just a pipe dream a few years ago.

Metro Exodus EE is the first AAA game built around ray tracing
Cyberpunk 2077 overdrive is the first AAA game built around path tracing

This is a historic moment: as of now, there's no known better lighting solution. The only improvements coming up are about saving render time with AI, such as neural radiance caching, to reduce noise and bring performance back up at high native resolutions.
 

CrustyBritches

Gold Member
I don't think that's accurate.
PS4 had 3GB reserved for the OS and 5GB for games.
PlayStation 4 gives up to 5GB of RAM to game developers

This is a break down of Infamous on PS4.
PwQdO7q.jpg


If the PS5 has 2GB reserved for the OS, that leaves 14GB for games.
On PC, system memory is used for storing temporary game data and for decompression.
This temporary data also ends up in VRAM, as VRAM is still much faster than RAM.

Cerny explained how this temporary data can affect VRAM usage.
didjcsy.jpg
p7ypThY.jpg


If you meant 9GB for textures and meshes, then you have a point.
wx7XRap.png

KZ Shadowfall
 
Last edited:

GHG

Member
wx7XRap.png

KZ Shadowfall

That's a PS4 launch title. The way that the RAM is utilised in the PS5 and Series consoles is now completely different compared to the previous gen consoles due to the respective I/O setups.

Moore's Law Is Dead went over this with an Unreal Engine developer and also offered some anecdotes from discussions he's had with other developers:




It's worth watching/listening to the whole 20-minute segment about VRAM to get a better idea of what's changed and what implications it has for PC hardware selection going forwards.
 

hlm666

Member
Metro Exodus EE is the first AAA game built around ray tracing
Cyberpunk 2077 overdrive is the first AAA game built around path tracing
Nice explanation.

Metro EE also has a temporal aspect as well; I haven't noticed Cyberpunk taking time for the lighting to accumulate, or the low ray count causing fizzle/boiling artifacts.

Moore's Law Is Dead went over this with an Unreal Engine developer and also offered some anecdotes from discussions he's had with other developers:
I don't think that guy is actually an Unreal Engine developer like, say, Brian Karis; he seems more like a developer who uses UE and also doesn't want to say where he works? He says he started with UE4 then about a year ago moved to UE5, and refers to himself as a generalist (whatever that is), not a programmer, artist, designer etc. He brought up shadow maps etc. but didn't talk about UE5 virtual shadow maps. It would be nice to hear what someone with more credibility has to say, like the guy from id who was saying the XSS didn't have enough RAM, but he's definitely gagged on the subject now.
 
Last edited:

N1tr0sOx1d3

Given another chance
Forgive me if I'm wrong here, but I thought the whole idea of ultra-fast NVMe drives was to negate the need for VRAM? The drives are so fast that they could essentially be used like VRAM? Is this not the case?
 

Crayon

Member
Forgive me if I'm wrong here, but I thought the whole idea of ultra-fast NVMe drives was to negate the need for VRAM? The drives are so fast that they could essentially be used like VRAM? Is this not the case?

The drive's speed is one thing, but it doesn't get to dump straight into VRAM. It has to be decompressed, and in the case of PC there is a little extra shuffling as well. Upcoming DirectStorage on PC aims to clean that up. Sony marketed "the SSD", but that was sort of dumbed down for the masses. It's the system for getting the data from the SSD into VRAM that is impressive.
 
Last edited:

N1tr0sOx1d3

Given another chance
The drive's speed is one thing, but it doesn't get to dump straight into VRAM. It has to be decompressed, and in the case of PC there is a little extra shuffling as well. Upcoming DirectStorage on PC aims to clean that up. Sony marketed "the SSD", but that was sort of dumbed down for the masses. It's the system for getting the data from the SSD into VRAM that is impressive.
Thank you for the clarification 👍
 

winjer

Gold Member
Forgive me if I'm wrong here, but I thought the whole idea of ultra-fast NVMe drives was to negate the need for VRAM? The drives are so fast that they could essentially be used like VRAM? Is this not the case?

No. An SSD is very slow compared to VRAM, in both bandwidth and latency.
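Some rough orders of magnitude on why; these are ballpark figures, not measurements of any specific drive or card.

```python
# Ballpark bandwidth/latency tiers, rounded; not measurements of specific parts.
tiers = [
    ("GDDR6 VRAM (high-end card)", 960.0, 0.3),   # ~GB/s, ~access latency in µs
    ("DDR5 system RAM",             60.0, 0.1),
    ("PCIe 4.0 NVMe SSD",            7.0, 80.0),
]

vram_bw = tiers[0][1]
for name, bw, lat_us in tiers:
    print(f"{name:27s} ~{bw:6.0f} GB/s, ~{lat_us:5g} µs, "
          f"bandwidth gap vs VRAM: {vram_bw / bw:.0f}x")
```

Even with DirectStorage-style GPU decompression, the SSD's job is to refill VRAM just ahead of use, not to be read from directly while rendering.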
 
I wonder if 12GB will be enough for the next 4-5 years before purchasing a 4070... Perhaps I should go for a 6800XT instead.
Depends: 1440p or 4K?

I'm looking forward to full exclusives made for PS5. Then we will have answers on just how much VRAM you're going to need.
 

RobRSG

Member
RE4 at 4GB streams in higher-res textures right in front of you. At 8GB all textures are always loaded.

What's odd is that I was able to use 8GB textures on my 10GB card until the latest patch, and it all of a sudden started crashing a few days ago, so I had to switch to 4GB to get under the VRAM limit, and I can see textures loading, especially in transition areas.
AMD loves you, me and our RTX 3080s.

They will keep spreading the love on their sponsored titles.
 
Kind of insane, in times when all the hardware power sunk into games is less and less visibly better looking, to abandon proper low settings, even when having one texture pack for FHD and one for 4K would be easier than ever. It appears rather financially dumb, aiming for Crysis-like adoption rates and requiring overpriced top-of-the-line GPUs while practically every current half-decent CPU can handle everything.
 

RoboFu

One of the green rats
Kind of insane, in times when all the hardware power sunk into games is less and less visibly better looking, to abandon proper low settings, even when having one texture pack for FHD and one for 4K would be easier than ever. It appears rather financially dumb, aiming for Crysis-like adoption rates and requiring overpriced top-of-the-line GPUs while practically every current half-decent CPU can handle everything.

Gamers keep crying about low textures and "real next gen" though.
 
Gamers keep crying about low textures and "real next gen" though.
Those who can buy the newest shit? Texture pack upgrades were offered (unofficially) en masse. The reverse is much easier and could reduce the load on VRAM, and it would be an easy move for wider possible success, is all I'm saying. 4, 6 or 8GB doesn't need to be abandoned given how little games at their core have progressed since the PS360 era. All it does is support Nvidia's and AMD's profits, while many gamers don't cry about low textures and "not real next gen".
RDR2 ran on a goddamn PS4 (somewhat decently), and now/soon we need as much VRAM for anything, even things that aren't breathtaking to a similar degree, as that game needed RAM in total?
Wanting more, faster and better, sure, but the deal seems increasingly harder to justify and actually see. The added value is diminishing while the price has jumped. Better, more advanced tech requires better hardware, but often one has to wonder if they actually just brute-force stuff instead of working on efficient solutions like they had to in former times.
 

kuncol02

Banned
Cyberpunk 2077 is a last gen game now.

I have fun with this forum more and more.
Technically it's "running" on XOne and PS4, so it is a last-gen game. It didn't even have a next-gen version for a year or so. If not for last-second delays, it would also have launched before the current consoles did.
 
Technically it's "running" on XOne and PS4, so it is a last-gen game. It didn't even have a next-gen version for a year or so. If not for last-second delays, it would also have launched before the current consoles did.
Cyberpunk was built for high-end PCs. Yes, the game was ported to Xbox One, but it looked and ran like crap.
 
Last edited:
I did some research and realised I can easily upgrade my 8GB RAM laptop to 20GB. I just need to find a day off work to get it done. It's not even expensive compared to the SSD I have in that thing.
I just need to stop ordering takeout for a week to afford it.

I guess the downside of having income is that I no longer have the time to enjoy gaming as much as I used to.
 

nkarafo

Member
Forgive me if I’m wrong here but I thought the the whole idea for ultra fast NVME drives was to negate the need for VRAM? The drives are so fast that they could essentially be used like VRAM? This not the case?

This gives me earlier gen consoles/arcades ROM cart nostalgia. I don't think we will ever have such optimal storage performance ever again.
 

ToTTenTranz

Banned
TBH, the 7900XTX is not behaving like a 53 TFLOPS card. The 6950XT is 24 TFLOPS and is only 25% slower than the 7900XTX? I get that doubling TFLOPS doesn't always get you 100% more performance, but a 36% performance increase is suspect. They might be hitting the same CU limitations they ran into with the Vega cards, and the Infinity Cache that helped RDNA 2 GPUs perform well with 72-80 CUs is no longer helping scale the performance.

The 7900XTX isn't a 53 TFLOPS card, though.
The ALUs on RDNA3 WGPs are double-pumped, meaning they can do twice the FMA operations, but AMD did so without doubling the caches and schedulers (which would increase the die area a lot more).
This means Navi3 cards will only achieve their peak throughput if the operations are specifically written to take advantage of VOPD (vector operation dual) or the compiler manages to find and group compatible operations (which it doesn't, yet), and if these operations don't exceed the cache limits designed for single-issue throughput.
This means that, at the moment, there's virtually no performance gain from the architectural differences between RDNA2 and RDNA3. The 7900XTX is behaving like a 20% wider and ~10% higher clocked 6900XT with more memory bandwidth, which is why it's only getting a ~35% performance increase until the 6900XT gets bottlenecked by memory bandwidth at high resolutions.


This should get better with a more mature compiler for RDNA3, but don't expect the 7900XTX to ever behave like a 6900XT would if it had 192 CUs instead of 80.
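As a back-of-envelope illustration of the dual-issue point: the shader count below matches a fully enabled Navi 31, while the clock is just a round number picked to land near the ~51 TFLOPS figure quoted earlier in the thread.

```python
# Back-of-envelope for the VOPD/dual-issue argument. Shader count matches a
# fully enabled Navi 31; the clock is illustrative, chosen to land near the
# ~51 TFLOPS figure quoted earlier in the thread.
shaders       = 6144   # stream processors
clock_ghz     = 2.1    # rough sustained game clock (illustrative)
flops_per_fma = 2      # one fused multiply-add counts as two FLOPs

# Marketing peak assumes every lane dual-issues a second FMA via VOPD.
peak_tflops = shaders * flops_per_fma * 2 * clock_ghz / 1000
# If the compiler can't pair a second independent op, you only get half.
single_issue_tflops = peak_tflops / 2

print(f"dual-issue peak : ~{peak_tflops:.0f} TFLOPS")          # ~52
print(f"single-issue    : ~{single_issue_tflops:.0f} TFLOPS")  # ~26, vs ~23 for a 6950XT
```

Which is why, until the compiler pairs more operations, the card behaves much closer to a wider, slightly faster RDNA2 part than to its headline number.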
 

64bitmodels

Reverse groomer.
a historic moment where the first full-fledged AAA game (and open world, even more difficult) uses full path tracing, runs at 9fps on a 7900XTX at 3840x1600 with FSR2 balanced
Counterpoint: on a 4090, Cyberpunk runs at 16fps without DLSS 3 frame gen.

It's impressive, sure, and I'm glad we've come to the point where modern AAA games can use path tracing... but holy fuck dude, LOOK AT THAT SHIT. THAT IS WACKY. We're gonna have to wait MUCH LONGER until path tracing becomes standard like that.
 
Last edited:

kingyala

Banned


I have the RTX 3070 and RTX 3060 Ti, both used for 1080p gaming, and as you can see in this video, 8GB of VRAM is not enough at all, even at such a low resolution. Don't even get me started on higher resolutions.

00:00 - Welcome back to Hardware Unboxed
01:25 - Backstory
04:33 - Test System Specs
04:48 - The Last of Us Part 1
08:01 - Hogwarts Legacy
12:55 - Resident Evil 4
14:15 - Forspoken
16:25 - A Plague Tale: Requiem
18:49 - The Callisto Protocol
20:21 - Warhammer 40,000: Darktide
21:07 - Call of Duty Modern Warfare II
21:34 - Dying Light 2
22:03 - Dead Space
22:29 - Fortnite
22:53 - Halo Infinite
23:22 - Returnal
23:58 - Marvel’s Spider-Man: Miles Morales
24:30 - Final Thoughts

dr71238bz5ta1.png

There is no amount of memory that is tied to a resolution. 8GB was marketed as enough for 1080p to easily push 8GB card sales, and since it was a cross-gen era full of PS4 ports, no studio used anything above 8GB at the time. The fact that 8GB GPUs still performed better than or similar to a PS5 or Series console during this cross-gen period convinced people it was enough, not knowing that even though the performance is similar, the memory will be a bottleneck once next-gen games come out needing more than 8GB.
 

Spyxos

Gold Member
There is no amount of memory that is tied to a resolution. 8GB was marketed as enough for 1080p to easily push 8GB card sales, and since it was a cross-gen era full of PS4 ports, no studio used anything above 8GB at the time. The fact that 8GB GPUs still performed better than or similar to a PS5 or Series console during this cross-gen period convinced people it was enough, not knowing that even though the performance is similar, the memory will be a bottleneck once next-gen games come out needing more than 8GB.
I may be wrong, but I think that the RTX 3070 with its 8GB has always been advertised as a 1440p card.
 
Last edited: