MLID "knows" the Navi 21 clocks will be between 2.15 GHz to 2.3 GHz.
He also "knows" that Navi 22 will clock between 2.35 GHz - 2.5 GHz.
He drops some other info as well...
Unironically posting a link to a MLID video.
Jesus Christ.
MLID "knows" the Navi 21 clocks will be between 2.15 GHz to 2.3 GHz.
He also "knows" that Navi 22 will clock between 2.35 GHz - 2.5 GHz.
He drops some other info as well...
RE2 is an example of a game that will allocate more memory than it actually uses. I've done tests with a 1060 6GB and settings that allegedly would require over 12GB and it didn't hitch, stutter, or drop any frames. Same with a 2060S and over 11GB requirement on 8GB VRAM. Still runs just fine.
Off the top of my head, a more recent example of VRAM limitations being hit was the 2060 6GB in Wolfenstein Youngblood with RT. I saw a couple of benchmarks where the card doesn't scale in line with the 8GB+ cards (2060S and up) and performance completely falls off a cliff.
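For what it's worth, the allocation-vs-usage gap described above is visible even from code. A minimal sketch, assuming Windows 10 and a DXGI 1.4 capable adapter, of querying the budget and commitment figures that overlay tools surface:

```cpp
// Minimal sketch: asking Windows for the VRAM budget vs. what the process
// has actually committed, via IDXGIAdapter3::QueryVideoMemoryInfo.
// Error handling trimmed; link against dxgi.lib.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter1;
    factory->EnumAdapters1(0, &adapter1);       // first (primary) adapter

    ComPtr<IDXGIAdapter3> adapter3;
    adapter1.As(&adapter3);

    DXGI_QUERY_VIDEO_MEMORY_INFO info{};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    // Note: even these figures track allocations, not what a frame actually
    // touches; engines routinely reserve far more than they use, which is
    // why a game can "need over 12GB" on paper and still run fine on 6GB.
    printf("budget: %.2f GB, current usage: %.2f GB\n",
           info.Budget / 1e9, info.CurrentUsage / 1e9);
    return 0;
}
```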
MS has kinda laid out the guidelines for next-gen Xbox/PC games, and that's ~4GB VRAM for 1440p/High and 10GB VRAM for 4K/Ultra.
Ideally that would be the situation. 16GB+ VRAM would be great for future-proofing. For my own use case I probably won't hang onto this upgrade long enough to see the implications of having 8GB or 10GB of VRAM over time; I just want something faster for Cyberpunk.
10GB of VRAM for 4K Ultra means that PC gamers who want to game at those settings should get cards with more than 10GB of VRAM to have some overhead.
Navi 21 likely to perform between 3070 and 3080
If it struggles with CU scaling like all the 60+ CU GCN parts did, then yeah, maybe a bit behind the 3080.
Why would Nvidia put out a high-end card without enough VRAM? This would be remembered by all and harm their image later on. They chose 10GB of GDDR6X over a larger amount of cheaper RAM for a reason. Hardware design is a long, systematic process where options are considered, evaluated, and the optimal solution for the market is chosen. Claiming 10GB will not be enough in a year or two when you have no experience with hardware design is foolish and insulting to the people who work their asses off to design the hardware you enjoy. They are experts and they have consistently brought good products to market that have lasted several years.
Those expecting a 20GB version need to expect to pay at least 200 bucks more for it, but likely much more. It will also probably only be on higher-end cards. And with the 3090's performance being what it is with more cores, who is going to pay hundreds more for the RAM with no performance increase? It would make more sense to go cheaper now and upgrade to the next-gen GPU than to go after more VRAM now in hopes of avoiding an earlier upgrade.
And even that can be avoided by simply changing image streaming to High; it runs smooth as butter on a 2060. You don't even compromise texture quality, it's the same as on Über and Ultra. It just changes the fixed memory budget idTech reserves for itself.
So yes, VRAM is not a concern for next gen; 10 GB on a DX12 Ultimate GPU will be enough for 4K at console settings for the whole generation, as that is what Microsoft allocates as GPU-optimized memory.
However, you might want higher graphical fidelity than the consoles on PC in a couple of years, and then it's certainly good to have more VRAM. But PC has a key advantage: DRAM. I think next-gen games will get very CPU intensive, and that means an increase in CPU memory usage for advanced physics, AI, game logic and so on. It is entirely plausible that a CPU-intensive next-gen game will allocate, for example, 6 GB for the CPU. On the console, the 3.5 GB of slower memory available to games is not enough, so it has to take the remaining 2.5 GB from the 10 GB of GPU-optimized memory, meaning you'd only have 7.5 GB left as video memory. On PC this doesn't happen, as the CPU data can just be fed into DRAM. That is the key advantage of the PC's dedicated RAM and VRAM.
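To make the arithmetic in that scenario explicit, a minimal sketch using the split described above (10 GB GPU-optimized + 3.5 GB slower memory for games); the 6 GB CPU working set is the post's hypothetical, not a measured figure:

```cpp
// Worked version of the arithmetic above, using the Series X split the
// post assumes: 10 GB "GPU optimized" + 3.5 GB slower memory for games.
// The 6 GB CPU figure is the post's hypothetical, not a measured number.
#include <algorithm>
#include <cstdio>

int main()
{
    const double fastPoolGB = 10.0;  // GPU-optimized pool
    const double slowPoolGB = 3.5;   // slower pool available to games
    const double cpuNeedGB  = 6.0;   // hypothetical CPU-side working set

    // Whatever the CPU can't fit in the slow pool spills into the fast pool.
    double spillGB    = std::max(0.0, cpuNeedGB - slowPoolGB);  // 2.5 GB
    double vramLeftGB = fastPoolGB - spillGB;                   // 7.5 GB

    printf("spill: %.1f GB, VRAM left for the GPU: %.1f GB\n",
           spillGB, vramLeftGB);
    return 0;
}
```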
I feel your pain. Anyone can just refer to his 3080 "leak" video to see he was basically wrong on everything. Dude can tell me the sky is blue and I'll still go outside to confirm.
The memory requirements won't change much because it's exactly as you said, those are old engines. These old games are programmed with slow hard drives and outdated I/O techniques in mind, where you NEED to have everything you could possibly need in RAM/VRAM because of seek times and very slow bandwidth on an HDD. Listen to what Mark Cerny said at his Road to PS5 presentation.
The next generational jump in texture fidelity comes from techniques like DirectStorage and Sampler Feedback, not from an increase in VRAM. That is why we have a very modest jump in RAM this console generation compared to previous ones.
Also, I'm not sure what you want to say about Flight Simulator. Flight Simulator allocates all of the available GPU memory, but it doesn't actually need it, as TechPowerUp demonstrated by comparing a 16 GB and an 11 GB GPU. The 11 GB card was completely allocated but it had the same frame pacing as the 16 GB GPU. You can even check real memory usage in Flight Simulator's development console, and it's significantly less than what MSI Afterburner reports. So there's that.
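To make the Sampler Feedback idea concrete, here's a deliberately simplified, API-agnostic residency loop. Every name in it (TileId, TileCache, readTile) is a hypothetical stand-in, not the real D3D12 or DirectStorage interface:

```cpp
// Deliberately simplified residency loop in the spirit of Sampler Feedback
// plus fast NVMe streaming. Every type here is a hypothetical stand-in,
// not the actual D3D12/DirectStorage API.
#include <cstdint>
#include <cstdio>
#include <set>
#include <tuple>
#include <vector>

struct TileId {
    uint32_t texture, mip, x, y;
    bool operator<(const TileId& o) const {
        return std::tie(texture, mip, x, y) < std::tie(o.texture, o.mip, o.x, o.y);
    }
};

// Stand-in for a fixed VRAM budget holding resident texture tiles.
struct TileCache {
    std::set<TileId> resident;
    size_t capacityTiles = 4096;
    bool has(TileId t) const { return resident.count(t) != 0; }
    void insert(TileId t) {
        if (resident.size() >= capacityTiles)
            resident.erase(resident.begin());   // crude eviction stand-in
        resident.insert(t);
    }
};

// Stand-in for a DirectStorage-style read of one 64 KB tile from SSD.
std::vector<uint8_t> readTile(TileId) { return std::vector<uint8_t>(65536); }

// HDD-era engines keep whole textures resident "just in case"; with sampler
// feedback, only the tiles shaders actually touched last frame get streamed.
void streamStep(const std::vector<TileId>& sampledLastFrame, TileCache& cache)
{
    for (TileId t : sampledLastFrame)
        if (!cache.has(t)) {
            auto bytes = readTile(t);           // fast NVMe read, no seek cost
            (void)bytes;                        // VRAM upload would go here
            cache.insert(t);
        }
}

int main()
{
    TileCache cache;
    streamStep({{0, 0, 3, 7}, {0, 1, 1, 2}}, cache);
    printf("resident tiles: %zu\n", cache.resident.size());
    return 0;
}
```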
Yes, they will.
And if NVIDIA was so confident in 10GB, then why are they holding back model variants with more memory?
Usage will increase as the engines change. We don't have a solution on PC right now for what the PS5 is doing with its use of the SSD as a virtual memory pool.
Games that come to PC targeting that config are going to need more memory to compensate for the lack of a big virtual memory pool, as PC is behind in I/O integration with GPUs.
BOOM! Infinity Cache is real (sorta)
I wonder if developers have to actively develop games around this extra cache, or do all games automatically take advantage of it? If it is something that developers have to implement themselves, then the feature will have a low adoption rate until Nvidia has such features in their cards. Plus, there's no telling how the extra cache is going to make up for a card only having a 256-bit bus.
Thanks. This is what I was getting at.
NVIDIA does not have Infinity Fabric, and this is something that leads to chiplets. Infinity Fabric will also be used for treating an NVMe drive as virtual memory, and once they start putting NVMe storage and I/O controllers on the GPU board along with better L2 cache solutions, you're going to see better use of memory.
Something not seen so far in the RTX 3000 line.
Right. That's why they went instead with helping develop GDDR6X. And this is not something that happens overnight; it's something crucial that takes years of planning and R&D to implement, not at all trivial to realise and in fact not guaranteed to succeed (taken straight from the horse's mouth, their CTO's recent interview about data/compute). Luckily for AMD, they had already done plenty of such work on the CPU side. No doubt Nvidia is working on their own things because they are an R&D behemoth as well, but who knows how that will end up and when. I mean, just look at Intel: they're bigger than both AMD and NV combined and they still have issues on this side.
No, this would be equivalent to having a higher clock, it's all done at a base level and it's just "sped-up" automagically. Not an accurate description, but it's what it is in spirit. Devs don't have to do anything.
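As a back-of-envelope illustration of why a big transparent cache can make a 256-bit bus act like a wider one: the 512 GB/s figure below is standard 16 Gbps GDDR6 math, while the hit rate and cache bandwidth are assumed placeholders, since no official numbers existed at this point.

```cpp
// Back-of-envelope: effective bandwidth of a 256-bit GDDR6 bus fronted by a
// large on-die cache. Hit rate and cache bandwidth are assumed placeholder
// numbers, purely to illustrate why this is transparent to developers.
#include <cstdio>

int main()
{
    const double busWidthBits = 256.0;
    const double gddr6Gbps    = 16.0;                           // per-pin rate
    const double dramGBs      = busWidthBits * gddr6Gbps / 8.0; // 512 GB/s

    const double cacheGBs = 2000.0;  // assumed on-die cache bandwidth
    const double hitRate  = 0.6;     // assumed fraction of traffic hitting cache

    // One simple model: requests served from cache run at cache speed,
    // misses at DRAM speed. The hardware does this on its own; no game
    // code changes are involved.
    double effectiveGBs = hitRate * cacheGBs + (1.0 - hitRate) * dramGBs;

    printf("DRAM: %.0f GB/s, effective with cache: %.0f GB/s\n",
           dramGBs, effectiveGBs);
    return 0;
}
```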
Interesting on the 'Infinity Cache'; that was a big claim that RedGamingTech made a few weeks ago which I was waiting to see was BS or not.
And to see it confirmed as real makes his credibility on AMD GPUs at least go right up.
Infinity Cache has been talked about earlier than that.
You found a guy's video where he talks about tech issues, because he says that stuttering also happens in RE7. Nice job proving nothing.
It's even playable with everything maxed, and you can achieve a consistent 60 by just turning a few sliders down from Ultra to High.
I don't think you ever had a 970, or if you did, I don't know what you did with it that it performed so badly for you.
And that shit had the gimped 3.5 + 0.5 configuration like I said. VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10 FPS or so lol.
Dude, he has VRAM issues, holy shit, did you even watch the video?
I have a 970 and tested RE2 on it, and the VRAM bottlenecks the game all day long.
Instead of acting like a smartass, know what you're talking about. Even 4GB cards have issues in the game because it goes over that at max settings, which is why I stated 6GB cards are what you need in the current climate to run Ultra at 1080p, let alone higher resolutions.
That's double the amount of VRAM consoles currently allocate in their current-gen games.
Alright, good to know you didn't even watch the video or check out the benchmark results in the image I posted from Techspot.
You can yell "BUT BUT VRAM!!!" all you like, it's not going to change the fact that this card is a 1080p monster, and there is plenty of evidence that proves your assumption wrong.
That card runs that game just fine on 3.5+0.5GB of VRAM; I even showed you two proofs of my claim.
Your video, which supposedly shows "he has VRAM issues omg!", just proves that you didn't even remotely check what the video you yourself posted is about. It is not about VRAM issues per se, it is about a general hardware/software issue.
If you had checked your own video, you'd see that he has only 2.2GB of VRAM allocated, which is still way below the 3.5GB of good memory and even further below the full 4GB the card has, so this literally cannot be caused by "not enough VRAM". It is an issue he has and is asking for help to solve, where his GPU usage goes from 100% to 5% and then ramps back up to 100%. This is not the normal behavior of this card.
Here, just to prove how wrong you are, I booted up the game, threw everything at max, selected DX12 and, oh would you look at that, an average of around 60FPS while VRAM is constantly at 4GB capacity.
If you disagree with the results, go yell at your GPU or something. I and many, many other 970 users enjoy our games running above supposedly "needed" VRAM capacities while still maintaining 60FPS.
I was there too. RGT seems to be the real deal now.
It means the odds of AMD delivering just skyrocketed.
The video I posted was to demonstrate what happens when you run out of VRAM; that was the whole point, and nothing more than that.
You linked a video yourself that showcases the exact VRAM bottleneck I was talking about in a real environment, so good job on that, even though you didn't realize it and tried as hard as you could to disprove my statement.
Then you've got a lot of "interesting" statements:
1) Such as "VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10FPS or so lol."
That's exactly what it does, but even worse, as that 10 fps will be 0. It will hang until it catches up again. There is nothing worse than hitting a VRAM wall.
2) Then you've got stuff like this: "Here, just to prove how wrong you are, I booted up the game, threw everything at max, selected DX12 and, oh would you look at that, an average of around 60FPS while VRAM is constantly at 4GB capacity."
- An average means nothing; you look at the low spikes, because that's where bottlenecks happen, and VRAM bottlenecks are hard for tools like RTSS to register, which will affect benchmark outcomes all day long (see the video at point 3, at 2:20, where the FPS counter stays at 14 even while the game hangs). The frame-time sketch at the end of this post shows how the lows capture what the average hides.
It also really depends on where in the game the benchmark takes place and how much time is spent in demanding vs. non-demanding areas, which heavily skews averages.
People use benchmarks to see what one GPU does vs. another GPU and to get an idea of the performance.
- Then about VRAM: the actual VRAM usage isn't currently measurable through software in any useful way, which is why you use hardware, and that's also why I stated 6GB is needed for max settings.
3) About the Techspot benchmark: it proves my point (2) exactly.
Here's a demonstration of a 1060 at max settings with 3GB of VRAM.
VRAM bottlenecks with hiccups everywhere, to the point where it freezes completely. And that area isn't even demanding; check the RTSS FPS counter when the freezes happen.
If you want to know more about VRAM consumption and why 10GB is laughable on a flagship card, you can scroll through my many posts about it and why a 10GB card isn't particularly next-gen ready but more of a current-gen card. I can't be bothered to repeat myself in every topic, it gets annoying and tiring, but in short: RTX, I/O, the Xbox Series X's 10GB of VRAM, higher settings, higher base settings, your typical stuff.
Anyway, this is about Navi, and frankly I've spent enough time lecturing people for today, so I won't be going further into reactions; I don't think it's particularly useful at this point anymore on this subject.
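On that averages-vs-lows point (referenced in point 2 above), this is roughly how "1% low" and "0.1% low" figures are derived from a frame-time log. Conventions vary between reviewers, so treat it as a minimal sketch of one common variant, with made-up frame times:

```cpp
// How "1% low" style numbers are typically derived from a frame-time log:
// sort frame times, take the slowest 1% (or 0.1%), and average them.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

double lowFps(std::vector<double> frameTimesMs, double percent)
{
    // Sort slowest frames first, average the worst "percent" of them,
    // and convert that average frame time back to an FPS figure.
    std::sort(frameTimesMs.begin(), frameTimesMs.end(), std::greater<double>());
    size_t n = std::max<size_t>(
        1, static_cast<size_t>(frameTimesMs.size() * percent / 100.0));
    double avgMs = std::accumulate(frameTimesMs.begin(),
                                   frameTimesMs.begin() + n, 0.0) / n;
    return 1000.0 / avgMs;
}

int main()
{
    // A ~60 FPS run with a single half-second, VRAM-style stall: the
    // average barely moves, while the 1% low collapses.
    std::vector<double> ft(300, 16.7);
    ft[150] = 500.0;
    double avgFps = 1000.0 * ft.size()
                  / std::accumulate(ft.begin(), ft.end(), 0.0);
    printf("avg FPS: %.1f, 1%% low FPS: %.1f\n", avgFps, lowFps(ft, 1.0));
    return 0;
}
```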
I literally sent you a video where the gameplay is smooth and 99% free of microstutter, and when it occurs it's either an area change, which is acceptable, or literally due to the framerate jumping between 60 and 45-ish in some bad areas.
That's still with everything on Ultra, and on DX12 it runs even smoother. If you turn down a few settings so VRAM "only" needs 9 to 10GB, it runs without any hiccups or stutter all day long, so literally get lost with your "VRAM bottleneck" bullshit. I posted you videos that prove you wrong, benchmarks with the 1% low still above 50FPS and the 0.1% low still above 30FPS, yet you still talk shit about "educating" people because you can't deal with the fact that 10GB is hell of enough for gaming and most in-game VRAM estimates are completely oversized to be on the safe side of things.
Paying for more VRAM on a lower-performing card is just as much stupidity as only focusing on performance and ignoring VRAM. You need to strike the golden middle. You could have all the VRAM in the world, but if the card can't render images fast enough you'll still be stuck at low FPS. The Radeon VII with its 16GB of HBM got outperformed by a 2070 Super with only 8GB of VRAM. Investing money into VRAM that you never use is quite the way to go.
You're using a game that's on an optimized build of an engine meant to run on consoles with GPUs from 2013 as a barometer to disprove the VRAM bottleneck? At 1080p, at that? We are all talking about 4K.
If 10GB was enough, why do both consoles have more than that to play with, even after you set aside 3-4GB for the OS?
Why did Sony see this as the big limiter, and the opportunity to use an SSD with its own custom I/O solution to mitigate re-filling the memory?
There are better sources out there than some random YouTube video running an old-ass card on a game designed to run on said old-ass video card.
This is interesting. I can see the resolution, but what can you tell me about the settings?
Also, if you took this screenshot, where are you? Are you flying over a large city or are you sitting in the hangar?
I did not take these screenshots, and I don't have the game currently, sadly.
Nvidia has been selling GPUs with less VRAM than AMD for a very long time now... I always thought it would hit their clients at some point, but man, these GPUs always remained good enough for however long they were needed (the low-end 8GB Radeon 570 will never run a game at high enough settings to take advantage of its memory pool; the only use for it is cryptocurrency mining).
By the time you would need that 10GB on the 3080, its GPU will not be fast enough to handle games at settings that require that amount of memory anyway.
The reason the needle hasn't moved that far in terms of needed memory bandwidth is asset quality. Most developers are not using movie-quality assets and scaling down. A lot of them are making assets with much lower pixel density, with textures taking the bulk of the high-fidelity work.
The Unreal 5 demo was using movie-quality assets and you could tell. Next gen is going to start showing that kind of quality once newer engines are finished. Some developers are using current engines, but are going to increase the quality of the assets used in their games, from models to textures to dynamic lighting systems.
Mainly, the quality of the assets they are using is going to increase tenfold, and we will see that starting next year.
Then you will see VRAM usage skyrocket. Which is why NVIDIA is trying to sell DLSS so hard.
Movie-like assets, huh ))
Vram cant "skyrocket" since there's only 13 usable gigs for games in the new consoles, shared by the entire system. It can only "skyrocket" to that
There's a reason asset quality was highlighted in the demo, with references to The Mandalorian show, which used Unreal 4 for most of its effects. Imagine that quality from a movie being used in a game?
Right, just like with dynamic resolution and all sorts of other nonsense: they will check the 'technically true' (in lawyer-speak) box, but in reality will deliver something subpar compared to what they're advertising.
Compare the sharpness & detail of UE5 Demo's assets when used in UE4 at their actual quality vs what they show in UE5 where they "dynamically scale" these assets:
imo it's not hard to see the difference
Yes, that's my point. You have clearly better picture quality with the old way of doing things (though it's much more expensive performance-wise). What they're bragging about with UE5 is really no different from when consoles bragged about 4K but it was "dynamic", "reconstructed", bla bla.
But...movie production rendering farms use many of those same Nvidia GPUs so... :S
The XSX has 52 CUs, from a 56 CU chip. So there is currently nothing in the Navi line-up that reflects the XSX GPU. The PS5 would be the 40 CU equivalent, with 4 CUs disabled.
The Xbox Series X is based on Navi 21 Lite; this PC GPU has 56 CUs and is clocked at 2050 MHz.
The PS5 is based on Navi 22; this PC GPU has 40 CUs and is clocked at 2500 MHz.
All of those are old engines at this point. Everything going forward will require either more memory or a better way to increase bandwidth and bus width. Soon you will have an I/O controller on the GPU board at some point.
That's when we'll see big changes. Right now 10GB looks like enough because you drank the Kool-Aid that NVIDIA and others are selling you. It's not.
You'll see how that outlook won't age well with the newest engines come next year and beyond. Everything people are using as a comparison is super outdated. Case in point is Flight Simulator. That is the new benchmark.
And the new Battlefield will be too. You're going to see high VRAM usage and SSD requirements for sure on that title.