BlueXImpulse
Win in which regard?
slowest IO award
Win in which regard?
Today's NV presentation never mentioned any 10% increase, AFAIR.
But they did mention 2x per-core shader performance (whatever that means).
Okay, so ask yourself this: why would they go down on IPC gains from an older arch to a newer arch, when "even" AMD has made big gains from old to new?
No company designs a new product to deliver worse performance than its previous product, especially when the new one is meant to replace the older one altogether.
No one said that Sony doesn't have the baddest solution out there. The question I posed before was whether the industry would follow with all that custom hardware, or follow the Microsoft hybrid approach. Would the budget have been better served beefing up the APU, especially now that Oodle ups the compression, making it way more than the APU can process?
Does Nvidia magically make any hard drive a fast SSD? As long as the storage is enough to feed the beast, that is all that matters. Nvidia is feeding a 36 TF beast, so it needs more data, duh. Sony is the one out of whack: way more storage I/O than processing power.
here's what I picked out from Newegg
AMD Ryzen7 3700X 8 core 3.6GHz
$290
ASRock B550M PRO4 MicroATX motherboard
$120
Corsair DDR4 3600 32GB (2 X 16 GB) ram
$115
DIYPC tower case
$90
EVGA 1000W power supply
$200
Intel 1TB QLC SSD
$115
Noctua CPU 140mm fan
$75
Win10 pro 64
$150
Thermal Paste
$8
these come to around $1163 or so, and like I said, I just kinda picked them off the store. I can probably use my old power supply and my SSDs to cut the cost down to $850 or so. Adding the $700 for the 3080, that comes to $1550 or so. Any better deals out there? I'm not in a hurry, hence the mention of Black Friday.
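For what it's worth, the arithmetic in the post checks out; a quick Python tally (prices as listed above; treating the PSU and SSD as the reusable parts is my own reading of the "$850 or so" figure):

```python
# Quick tally of the parts list above. Prices are the ones quoted in the post;
# treating the PSU and SSD as the parts reused from the old build is my own
# reading of the poster's "$850 or so" figure.
parts = {
    "Ryzen 7 3700X": 290,
    "ASRock B550M PRO4": 120,
    "Corsair DDR4-3600 32GB": 115,
    "DIYPC tower case": 90,
    "EVGA 1000W PSU": 200,
    "Intel 1TB QLC SSD": 115,
    "Noctua 140mm CPU fan": 75,
    "Win10 Pro 64": 150,
    "Thermal paste": 8,
}
subtotal = sum(parts.values())                                 # 1163
reused = parts["EVGA 1000W PSU"] + parts["Intel 1TB QLC SSD"]  # 315
reduced = subtotal - reused                                    # 848, i.e. "~$850"
total_with_3080 = reduced + 700                                # 1548, i.e. "~$1550"
```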
They don't.
Instead of saying "our arch is 1.5-2x more efficient because of IPC gains," they just inflated all their numbers 2x for no reason.
But it is better performance.
The 2080 is 10 TF and the 3080 is 15 TF, so there's a +50% increase from TF alone, and then the IPC gains add another +25-50%, which gets us to +75-100% over the 2080.
PC: 7 GB/s -> 14 GB/s = 24 Intel cores
Sony: 5.5 GB/s -> 22 GB/s = 9 CPU cores
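Taking those two lines at face value, the implied compression ratios and per-core throughput work out like this in a quick sketch:

```python
# The two bandwidth lines above, taken at face value. Note they mix NV's
# lossless figure with Sony's lossy (Kraken) figure, so this is apples to
# oranges; the sketch just shows what the quoted numbers imply.
pc_raw, pc_out, pc_cores = 7.0, 14.0, 24     # NV slide (lossless)
ps5_raw, ps5_out, ps5_cores = 5.5, 22.0, 9   # Sony figures quoted in the thread

pc_ratio = pc_out / pc_raw      # 2.0 : 1
ps5_ratio = ps5_out / ps5_raw   # 4.0 : 1

# Decompressed output per CPU core -- the basis of the "algo efficiency" argument:
pc_per_core = pc_out / pc_cores     # ~0.58 GB/s per core
ps5_per_core = ps5_out / ps5_cores  # ~2.44 GB/s per core
```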
Hell I’ll give you a windows 7 pro key that can be updated to 10 pro for free.
That literally makes no sense, unless you actually meant to say something more along the lines of 30 TF Turing = 15 TF Ampere, which would be more accurate (possibly a bit overboard, but much more on-point).
You're taking Sony's lossy compression ratio figure (3.99:1 or something like that) and comparing it to Nvidia's lossless compression figure (2:1). That's how they get 7 GB/s raw, 14 GB/s compressed. They didn't actually give a lossy compression figure for us to compare to Sony's.
...but seeing as how they are doing the equivalent of 24 Intel CPU cores (what type, we don't know. Could be i3 for all we know), I figure the ratio would be quite high. But that depends on what type of compression algorithm they are running.
They don't.
Instead of saying "our arch is 1.5-2x more efficient because of IPC gains," they just inflated all their numbers 2x for no reason.
Craig avatar. Not surprised.
Not the same thing for consoles as for PC. Even with an SSD, a PC would need a lot of steps... "But it's just faster load times, bro!"
I hope you're aware that DirectStorage was announced for the XSX like a few centuries back.
I hope you're aware that DirectStorage was announced for the XSX like a few centuries back.
I bet it was Microsoft pushing it in NV's direction (allowing the same API to run on both console and Windows is in their interest), and if so, it would also happen with RDNA2 chips. I assume this mimics the "secret sauce" the next-gen consoles will deliver with their (de)compression magic.
I guess hating on the XSX is still a thing on here... Anyway... They are saying even with DirectStorage it will be slower than PCs with RTX and the PS5.
But it's the slowest common denominator.
They are saying even with DirectStorage it will be slower than PCs with RTX and the PS5.
I've noticed that "troll" posts get much more attention here. Think about it.
Not really. I'm comparing the algo implementations for NV and Sony, where Sony uses 9 cores and NV uses 24 cores.
The algorithm itself doesn't care about what compression rate was there; it just decompresses blocks of data.
But to decompress at higher speed you need a better algo, otherwise you will die a small death on each additional ps of latency.
And I couldn't resist hinting that NV's software engineers are worse than Sony's. Which is kinda true, because all of NV's software so far has been pretty heavy (except DLSS, which is kinda good, but DL is a new software tech where a programmer's skill doesn't matter; you can read Andrej Karpathy on that).
So I was rewatching Nvidia's presentation, and there are lots of features there very similar to what MS did with the Xbsex:
Nvidia Reflex - Xbox Dynamic Latency Input
Nvidia DLSS/AI Cores - Xbox DirectML
RTX IO - it even works with DirectStorage
NVCache - using RAM and SSD to support VRAM (MS mentioned that XVA will offer memory multiplier)
You mentioned Sony's 22 GB/s, which is a lossy compression ratio, and compared it with NV's 14 GB/s, which is a lossless compression ratio.
It doesn't matter. At all.
I'm talking about CPU usage here.
The CPU doesn't factor into this at all because NV's solution is offloading work from the CPU. They kind of specify exactly this in the slides.
So it makes no sense to be referring to CPU when the CPU isn't doing any of the decompression work for the cards. All of it is handled through the GPU (btw NV have had ARM & FPGA cores in their GPUs for a while now, at least on some models, to handle access of the data and such, along with using HBCCs).
It actually isn't, but I can see how yesterday's news was hard for some of you guys. It's okay, this is how you cope. I'm here for you.
Yesterday's news was hard for some of us? LOL. Actually, quite the opposite. Even if the Nvidia 30xx series is vastly more powerful than the consoles, their compression/decompression method is still only on par with the PS5's solution. Maybe in the 40xx series.
It makes a lot of sense.
Since NV could not optimize their CPU algo well enough to use fewer cores.
Unless their "24 cores" is just pure PR bullshit pulled out of their ass. Which one is it?
It's just time for you guys to accept this. Things are as they are. Neither MS nor Sony have any massive advantages over PC tech now (aside perhaps from audio, potentially), except in terms of value proposition. That has always been consoles' greatest strength anyway, so the performance you're getting from the Series X and PS5 at their prices is absolutely amazing, a steal.
They aren't outperforming Nvidia (or higher-end AMD) cards at all, that much is clear. But they are giving more "bang for the buck" relative to the market they serve and that's where they are strongest.
Wrong. They listed a 7 GB/s drive because that's the limit for the fastest drive currently on the market. They actually have much more hardware for decompression/compression than Sony's solution, and they had to do this in order to future-proof it.
Basically, 14 GB/s compressed data on a 7 GB/s raw drive is the lossless standard. With various lossy compression ratios that amount can scale much higher. And PC SSDs will only keep getting faster as NVMe expands the standard to support faster drives in the future.
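To illustrate the scaling claim: effective output bandwidth is just raw drive speed times compression ratio. A minimal sketch, where the 2:1 ratio is the lossless figure from the NV slide and the higher ratios are hypothetical lossy ones:

```python
# Illustrative only: effective output bandwidth = raw drive speed x compression
# ratio. The 2:1 ratio is the lossless figure from the NV slide; 3:1 and 4:1
# are hypothetical lossy ratios, just to show the scaling.
def effective_bandwidth(raw_gbps, ratio):
    return raw_gbps * ratio

drive = 7.0  # GB/s, the fastest PCIe 4.0 drive cited in the post
for ratio in (2, 3, 4):
    print(f"{ratio}:1 -> {effective_bandwidth(drive, ratio):.0f} GB/s")
```

As drives faster than 7 GB/s ship, the same multiplier applies on top of whatever the raw speed is.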
There is no doubt the PC will ultimately match and exceed the PS5 on raw bandwidth, due to sheer brute force and spending a LOT more money on the problem. Latency will be a different matter. Sony really, really thought out how to minimize the latency end-to-end and how to minimize cache disruption to the GPU. And added the 6 priority levels to ensure high priority requests from the game engine get serviced in time. RTXIO won't match the latency and priority levels any time soon. You will have to wait for second gen RTXIO in a 4XXX series.
jinxPhoenix said:
You do realize this is an implementation of NVIDIA GPUDirect, right, and that they've thought really, really hard about reducing the latency of that tech to make really, really big supercomputers really, really fast, right?

Yes, I do. I've worked with Nvidia on plenty of projects, so I understand their capabilities. They do great work, but I stand by my comments.
Take your fanboy tantrum elsewhere. XVA uses DirectStorage. Dev adoption benefits the Series X.

Collaborate... Is that like where you work with someone while yelling at them that they suck?
This has absolutely nothing to do with the Series X, and never will.
Take your fanboy tantrum elsewhere. XVA uses DirectStorage. Dev adoption benefits the Series X.
I don't understand the bolded. Who "REALLY" thinks a $500 device will outclass a $1500 device? Like how many people on this forum "REALLY" think that?
Their I/O decompression/compression solution is equivalent to 24 Intel cores
the best image reconstruction techniques in the industry at the moment, with DLSS 2.0 and such
Just time for you guys to accept this.
I don't understand the bolded. Who "REALLY" thinks a $500 device will outclass a $1500 device? Like how many people on this forum "REALLY" think that?
what benefit does it get passing through the NIC?
Nope. That's not what the slide says.
It says: to decompress a 14GB/sec stream (7GB/sec raw) you will need 24 CPU cores.
Already addressed that: DL is not regular software engineering. You need data scientists and a shitload of data, but the programming skills required are rather low. That's why it's such a hot topic.
I don't care.
Again I point out poor messaging and huge marketing bullshit under a "tech" disguise.
It doesn't devalue the tech itself. It devalues the bullshit dealers.
PS5 doesn't need to use GPU cycles for compression crap.
Nvidia uses a software solution, just like the XSX; the PS5 uses a completely hardware solution, so the PS5 is actually ahead of both.
Also, the PS5's SSD goes up to a max of 22 GB/s; with Oodle Kraken it's 17 GB/s on average.
Nvidia's problem is latency too, as well as sustaining SSD speeds at higher temperatures on PC. PC SSDs have 2 priority levels; the PS5 has 6 of them.
Just saw Gameseeker's post on REEE :
NVIDIA announces RTX I/O - PCIe 4.0, on-GPU storage decompression, collaberating with Microsoft on DirectStorage
Nope. Oodle Kraken RDO 40 tests showed results of 3.99:1 compression ratio.
It is more likely that the GPU will get the data using the same mechanisms it would without RTX IO (through a RAM-resident driver client/server configuration) as normal, but with the new change being that the RTX IO NIC will offload bulk-transfer driver work that normally goes through the CPU, because the data was either resident in RAM and prepared for transfer to VRAM, or storage-mapped in RAM like a clipmap/megatexture structure and constantly streaming into VRAM.

I think you're over-thinking this. Any PCIe device can communicate with any other on the PCIe bus. The GPU just needs to know how to properly request data from the SSD, since there's a file system involved.
From this official nvidia article:
So since it will work on 2000-series cards, it can't be a physical slot on the video card.
So what would that 3.99:1 ratio mean for the PS5 as far as compression goes? It takes it from 5.5 GB/s to what? 12 GB/s?
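A quick back-of-the-envelope on that question, using only the figures quoted in this thread (5.5 GB/s raw, 3.99:1 peak Kraken RDO ratio, 17 GB/s average):

```python
# Back-of-the-envelope for the question above, using figures quoted in this
# thread (5.5 GB/s raw drive, 3.99:1 peak Kraken RDO ratio, 17 GB/s average).
raw = 5.5
peak_ratio = 3.99

peak_bandwidth = raw * peak_ratio  # ~21.9 GB/s, i.e. the "up to 22 GB/s" figure
avg_ratio = 17.0 / raw             # ~3.1:1 implied by the 17 GB/s average
```

So at that ratio it's closer to 22 GB/s than 12 GB/s, with ~17 GB/s being the average case.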
thicc_girls_are_teh_best I wouldn't even waste my time. I see the same handful of the biggest Sony fanboys, trying to shit up every thread, as per usual. Not worth trying to explain facts and reasoning to those who only have one agenda.
I'm guessing you felt compelled to respond, as you're a confirmed fanboy. I wouldn't have to see your custom tag to realize, but it's helpful that it's so obvious. It must be a hard pill to swallow, since you and your bandwagon buddies kept making threads, shitting on anything positive about pc, etc. Now that you guys have been shut down, and put in your place, you guys still try to downplay the superior hardware. What gives? Should I just admit the ps5 is the strongest piece of hardware? Not even the ps6 will be as strong? When will you guys learn to give up?
No, it's thank you Microsoft.

Thank you Mark Cerny.
Still didn't learn how to read the graphs correctly I see.
Doesn't change the fact NV have the best solution for this on the market and for the foreseeable future.
Feelings before facts?
You seem to be oblivious to messaging that downplays well-understood factors of various system designs when the messaging comes from a preferred source, however. It's been an M.O. of yours for a while now.
Since you never actually balance out your discussion points either, it comes off as you devaluing the tech itself. I.e., you are always quick to downplay particular tech (especially if it seems to somehow challenge anything the PS5 has going for it), and then just... leave it at that. That is perceivable by many as devaluing the tech, whether you see it that way or not.
More goalpost shifting. Beautiful. Love it!
They are all mix of hardware and software, you think the APIs run on sheets of paper?
Only if you don't understand how the tech works.
Nope. Oodle Kraken RDO 40 tests showed results of 3.99:1 compression ratio.
Assumptions, assumptions, assumptions. They make an ass out of you, not me. That's how the saying goes. Also it seems like you've done zero looking into what DirectStorage (and related patented technologies) are meant to fully address on PC side. Priority levels are expanded, for starters. GPUDirectStorage will implement this so that makes your point moot (like all of your other points tbh).
So now we're trusting randoms on ResetEra again when they say something that is absolutely definitely not placating to a given side simply to score brownie points, because they said some words that confirm preexisting biases?
Welp, wrap it up folks, we only need to listen to "Gameseeker" and guys like Matt on Era from now on... even when they are wrong and/or try downplaying anything they perceive as negative for their brand of choice.
So what would that 3.99:1 ratio mean for the PS5 as far as compression goes? It takes it from 5.5 GB/s to what? 12 GB/s?
So the year cross-generation ends: perfect timing. And Unreal 5 games are out: again, perfect timing. It'd be nice sooner, but it won't be needed until then, when the next generation is really going.

Note that games that use RTX IO aren't coming anytime soon. See article: https://wccftech.com/rtx-io-and-directstorage-are-coming-but-itll-be-a-while-yet/
The conclusion of the article says:
"It sounds like both Microsoft and NVIDIA are working to address what Tim Sweeney called out when he said the PlayStation 5's storage architecture was way ahead of the standard one in place for PC games. That's great news, but there is a catch: Microsoft is 'targeting' a release for the DirectStorage API at some point next year in the hands of developers, and that's only as a preview. In all likelihood, we'll have to wait until 2022 before we see some games actually taking advantage of both RTX IO and DirectStorage."
thicc_girls_are_teh_best I wouldn't even waste my time. I see the same handful of the biggest Sony fanboys, trying to shit up every thread, as per usual. Not worth trying to explain facts and reasoning to those who only have one agenda.
What's the problem? Since you said fanboys, I will say it too: looks like PC fanboys trying to spread crap about how Nvidia's compression/decompression method is better than the PS5's, which is 100% wrong. Looks like you PC fanboys (or Nvidia fanboys) have an agenda too. It's easy to blame others, isn't it?
What I find interesting here is that these "RTX IO" numbers kinda match what MS said about XSX.
If you take the green bar at 14 GB/s and divide it by 2.4 GB/s for the XSX SSD, you get 5.83. Now divide the 0.5 cores for "RTX IO" by that 5.83 and you get about 0.086 cores.
That's (un)suspiciously close to the CPU overhead figures MS were throwing around for DirectStorage when they were revealing the XSX specs, even accounting for different CPU cores and workloads. The savings are roughly as staggeringly big.
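Writing that scaling out (figures as quoted from the NV slide and the XSX spec; note that 0.5 divided by 5.83 comes to roughly 0.086 of a core):

```python
# The scaling argument from the post, written out. Figures are the ones quoted
# from the NV slide and the XSX spec; 0.5 / 5.83 comes to ~0.086 of a core.
rtx_io_bandwidth = 14.0  # GB/s, the green bar on the NV slide
rtx_io_cores = 0.5       # CPU cores used with RTX IO, per the slide
xsx_ssd = 2.4            # GB/s, XSX raw SSD speed

scale = rtx_io_bandwidth / xsx_ssd         # ~5.83
cores_at_xsx_speed = rtx_io_cores / scale  # ~0.086 cores
```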
[Edit: I may have boobed a little. I assumed "Read Bandwidth GB/s" meant the drive's read bandwidth - literally what is being read from the actual drive (e.g. 14 GB/s from two RAIDed PCIe 4 SSDs). But if the bars are showing output after reading from the drive and processing/decompression using something like BCPack (which would make no sense to me)... well... that only makes the XSX look *even better* in terms of relative overhead. Anyway, it doesn't change my conclusions below one bit!]
So anyway, we already know that the XSX GPU can read directly from the SSD without it having to go into memory first, that the GPU can process that data and then write it out to GPU memory, and that doing so using Direct Storage has a similarly low (almost negligible) overhead on the CPU.*
Basically, the XSX can already do what Nvidia have cleverly branded "RTX IO". At very low cost the XSX can pull data directly into the GPU, process it, and write it out to memory for later use. The only differences I can see at this point are that the XSX can (optionally) put it through its hardware decompression block first, and that Nvidia aren't tied to a 2.4 GB/s drive.
Then again, it's not like MS can't release an optional faster drive at some point ... in theory. Whether that would make sense is another matter, but I'm pretty sure they could, and they could pump the data straight to the GPU to do whatever decompression they wanted to just like Nvidia are showing in the slide above. It's not like the XSX couldn't afford the CPU overhead.