The use of the SSD in this demo comes down to one thing.
Virtual Memory used to store Textures and Meshes.
You do not try to pull in an entire texture or an entire mesh, but only a) what you can see and b) at the appropriate detail level.
Meshes and textures are usually stored in a data structure which must be traversed, and these data structures can be massive, far more massive than your RAM would allow. You really aren't trying to pull in gigs of assets, though, but issuing constant small reads for whatever may have entered the view frustum. So you traverse the data structure with the mesh index / texture index of the data you need, and either hit a specific LOD level or have a structure that exposes finer detail the further you drill down. To the caller, it's transparent whether the data was memory resident or was pulled from disk.
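The "transparent to the caller" part can be sketched as a simple page cache. This is a minimal illustration in Python, not any real engine's code: the class name, the LRU eviction policy, and the injected read_from_disk callback are all hypothetical.

```python
from collections import OrderedDict

class VirtualAssetCache:
    """Transparent page cache: callers ask for (asset_id, lod) and never
    know whether the data was memory resident or pulled from disk."""

    def __init__(self, capacity_pages, read_from_disk):
        self.pages = OrderedDict()            # (asset_id, lod) -> bytes
        self.capacity = capacity_pages
        self.read_from_disk = read_from_disk  # injected I/O, e.g. an SSD read
        self.disk_reads = 0                   # counts cache misses

    def fetch(self, asset_id, lod):
        key = (asset_id, lod)
        if key in self.pages:                 # memory resident: cheap hit
            self.pages.move_to_end(key)       # LRU bookkeeping
            return self.pages[key]
        self.disk_reads += 1                  # miss: one small read from disk
        data = self.read_from_disk(asset_id, lod)
        self.pages[key] = data
        if len(self.pages) > self.capacity:   # evict least recently used page
            self.pages.popitem(last=False)
        return data
```

Whether fetch hit RAM or went to disk, the caller just gets bytes back; only the latency differs, which is why latency matters so much here.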
The most important thing above is latency; raw speed of course also helps when you have big scene changes.
We have no way of knowing how pegged the SSD was in the demo (RAGE did a similar thing with MegaTexture... from a DVD), but it's not hard to imagine that you can add more detail to the assets, so with more speed you can drill further down and resolve more detail. If you don't have the speed, you can just return a slightly lower-quality asset instead of drilling down. So yes, it is possible that a highly optimized SSD will allow more detail, but that doesn't mean the extra detail will always be there to stream in the first place.
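That graceful fallback can be sketched as a per-frame decision: drill down to the desired LOD only if the streaming budget allows, otherwise return the best level already resident. Everything here is illustrative (the function name, the byte costs, and the convention that LOD 0 is the coarsest level and assumed always resident are my assumptions, not anything from the demo).

```python
def pick_lod(desired_lod, resident_lods, io_budget_bytes, lod_size_bytes):
    """Return (lod_to_draw, bytes_to_stream) for this frame.

    desired_lod     -- finest level the camera distance calls for
    resident_lods   -- set of LOD levels already in memory
    io_budget_bytes -- streaming budget left this frame
    lod_size_bytes  -- callable giving the read cost of a given LOD
    """
    if desired_lod in resident_lods:
        return desired_lod, 0                # already resident, free
    cost = lod_size_bytes(desired_lod)
    if cost <= io_budget_bytes:
        return desired_lod, cost             # fast SSD: stream the finer level
    # Budget blown: fall back to the best level we already have.
    best_resident = max((l for l in resident_lods if l <= desired_lod),
                        default=0)           # LOD 0 assumed always resident
    return best_resident, 0
```

A faster SSD just means the budget check passes more often, so the player sees the finer level more of the time; a slower one silently serves the coarser fallback.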
I also believe they have a fair few more optimizations to make here: 20 million polygons when you have ~9 million pixels is overdraw (unless they mean that's just what's in RAM).
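As a quick back-of-envelope, assuming a 4K target (3840×2160, roughly the ~9 million pixels mentioned):

```python
# Triangles per pixel at 4K, using the 20M figure quoted for the demo.
pixels_4k = 3840 * 2160              # 8,294,400 pixels, ~8.3 million
triangles = 20_000_000
ratio = triangles / pixels_4k
print(f"{ratio:.2f} triangles per pixel")  # ~2.41
```

More than one triangle per pixel means many triangles can never cover a full pixel on screen, which is the overdraw argument above.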
Not everyone is going to be using this tech, so while it's a killer app, it's not going to be the be-all and end-all.