
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

Bernkastel

Ask me about my fanboy energy!
We are not the ones that made the baseless affirmation that the medium is "much more demanding" than R&C.
You are the one creating things for others to argue about, making statements with Zero proof or corroboration.
To be frank it was just an extra determiner to my point and I was just replying to another statement with "Zero proof or corroboration."
And in The Medium (which is a much more demanding game), they instantly switch worlds.

And your comments about 3+ seconds in R&C don't have any basis.
 

Deto

Banned
Okay, now we already have comparisons between Ratchet and Clank and an INDIE made by a guy nobody even knows.

Stupidity or intellectual dishonesty?

Soon we will have to argue about Crackdown 4 vs Horizon Forbidden West
 

Ar¢tos

Member
To be frank it was just an extra determiner to my point and I was just replying to another statement with "Zero proof or corroboration."
The Medium doesn't instantly switch worlds; all you see changing is a wall in front of the player, and you don't see anything changing on the sides or behind. Seasons After Fall already did that this gen (with lower detail/res, obviously). In R&C the character falls from above into several of the dimensional rifts, and everything in a full 360° radius is loaded in 2 seconds.
 

Bernkastel

Ask me about my fanboy energy!
The Medium doesn't instantly switch worlds; all you see changing is a wall in front of the player, and you don't see anything changing on the sides or behind. Seasons After Fall already did that this gen (with lower detail/res, obviously). In R&C the character falls from above into several of the dimensional rifts, and everything in a full 360° radius is loaded in 2 seconds.
You are literally transferred to another world as you control the player.
The team isn't creating one world, but two: a version of our own, and a reflection of it in the spirit realm. You'll be able to shift seamlessly between the two in The Medium with – Bloober promises – no discernible load times or impact to game performance and graphics, thanks to the power of the Xbox Series X.
 

Bernkastel

Ask me about my fanboy energy!
Interesting use of DirectML for loading textures
Journalist: How hard is game development going to get for the next generation? For PlayStation 5 and Xbox Series X? The big problem in the past was when you had to switch to a new chip, like the Cell. It was a disaster. PlayStation 3 development was painful and slow. It took years and drove up costs. But since you’re on x86, it shouldn’t happen, right? A lot of those painful things go away because it’s just another faster PC. But what’s going to be hard? What’s the next bar that everybody is going to shoot for that’s going to give them a lot of pain, because they’re trying to shoot too high?
Gwertzman: You were talking about machine learning and content generation. I think that’s going to be interesting. One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it.
Journalist: Can you do that on the hardware without install time?
Gwertzman: Not even install time. Run time.
Journalist: To clarify, you’re talking about real time, moving around the 3D space, level of detail style?
Gwertzman: Like literally not having to ship massive 2K by 2K textures. You can ship tiny textures.
Journalist: Are you saying they’re generated on the fly as you move around the scene, or they’re generated ahead of time?
Gwertzman: The textures are being uprezzed in real time.
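As a toy illustration of shipping low-res textures and upscaling at run time, here is a sketch in Python. Plain bilinear interpolation stands in for the ML model here; a real DirectML pipeline would instead run a trained super-resolution network on the GPU, and all the names below are mine, not any shipping API.

```python
# Sketch: runtime texture "uprez". Bilinear interpolation is a stand-in
# for the ML model; a DirectML pipeline would run a trained network instead.

def upscale_2x(tex):
    """Bilinearly upscale a 2D grayscale texture (list of rows) by 2x."""
    h, w = len(tex), len(tex[0])
    out = [[0.0] * (w * 2) for _ in range(h * 2)]
    for y in range(h * 2):
        for x in range(w * 2):
            # Map the output texel back into source space.
            sy, sx = y / 2.0, x / 2.0
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
            bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = upscale_2x(low_res)   # ship 2x2 on disk, reconstruct 4x4 at run time
```

The point Gwertzman is making is the second-to-last line: the asset on disk is a fraction of the size, and the reconstruction cost moves to run time.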
 
Last edited:

Lethal01

Member
They already said we're going back to New York, based in Harlem, but in snow. It's going to be using the same map and most of the same assets, just with higher quality. It's not a full game. It's an expandalone, Lost Legacy-type title.

What if the next game takes place in the same city though?
 

Lethal01

Member
You are literally transferred to another world as you control the player.
We really will need to see exactly how it's implemented before we start claiming it's on the same level as Ratchet and Clank.

"no discernible load times or impact to game performance and graphics," could still include the new world taking 10 seconds to fade in. What they have demonstrated so far just isn't as impressive.
 
Last edited:
Remember, dev choice is limited as well, by the audience's demands. Sure, in theory, almost any game can be 60 fps, but how is that gonna sell? So you can't simply choose 60 fps regardless of how that impacts the presentation. But when you have more GPU power, that choice becomes easier, because you have to sacrifice less visually in order to get there.


My point is that the PS5 is in a difficult situation where the developer must, due to power, pick which is more important: 60fps or a higher level of graphics. This is the flaw of the PS5 architecture.
Taking power from one to help the other. Or lowering both. At the Sony show, the game that was 4K60 didn't look as good as the 30fps game, but you did get 60fps. As more games come out, we will see.
 
Last edited:

cormack12

Gold Member
Interesting use of DirectML for loading textures

That's quite interesting. Could do with seeing the original input and the output to see an example.
 

Ascend

Member
Still slow. It's a bad idea to compromise performance with the idea of loading directly to the GPU from the SSD; there is no gain that justifies the stalls you can quickly get just to save that small amount of memory, and there is more than enough RAM. You need a cache in the middle, in RAM or a special embedded memory, not direct access. Latency is one thing and speed is another, but both are important: you can make a request very fast (latency; the seek time is very fast on an SSD), but then there is the actual transfer speed of what you need.




Your example of 4K textures is not bad; textures don't require super-fast data access because they're static data, but you still require certainty of access in time. Any other file you access on the SSD will make things worse and worse, and while you are reading a texture you can't afford a delay, so you would need to devote the SSD to streaming textures during frame time. The data you can get during a frame is measured in megabytes, so it's not a good idea when it's simply better to ensure the texture is available at a certain speed in a cache prepared before the frame requires it. The advantage of SSDs is not to use them as RAM; it's to be able to stream data really fast. That allows a lot of things: it's fast enough to keep up with the speed at which the user plays and traverses the scenes, but not the speed required mid-frame.
MS seems to think it's a good idea... Just to re-iterate...

Phil Spencer:
Thanks to their speed, developers can now use the SSD practically as virtual RAM. The SSD access times come close to the memory access times of the current console generation. Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.


Of course, the SSD will still be slower than the GDDR6 RAM that sits directly on top of the die. But the ability to directly supply data to the CPU and GPU via the SSD will enable game worlds to be created that will not only be richer, but also more seamless. Not only in terms of pure loading times, but also in terrain mapping. A graphic designer no longer has to worry about when GDDR6 ends and when the SSD starts. I like that Mark Cerny and his team at Sony are also investing in an SSD for the PlayStation 5 ...


A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.


After thinking about it, I am wondering why they are using 8MB as the size for a 4K texture;

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.



A 4K texture is 4096x4096 pixels. That means it's a total of 16.7 million pixels. If the texture is 32-bit colors, that means you require 536.9 million bits of data for the texture, which is 67.1 million bytes, which is 67.1 MB (or 64MB in binary). Assuming zero compression, of course... Are we going to have 80% compression...? o_O
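Running the numbers, Microsoft's 8 MB figure is consistent with a block-compressed texture rather than raw 32-bit color. A quick back-of-the-envelope in Python (the BC1-class rate of 0.5 bytes per texel is my assumption; MS doesn't say which format they mean):

```python
# Back-of-envelope: memory for a 4096x4096 ("4K") texture at various encodings.
texels = 4096 * 4096                 # ~16.78 million texels

raw_32bpp = texels * 4               # 4 bytes/texel, uncompressed 32-bit color
bc7 = texels * 1                     # BC7: 16 bytes per 4x4 block = 1 byte/texel
bc1 = texels // 2                    # BC1: 8 bytes per 4x4 block = 0.5 byte/texel

mb = 1024 * 1024
print(raw_32bpp / mb)   # 64.0 -> the "64MB in binary" figure above
print(bc7 / mb)         # 16.0
print(bc1 / mb)         # 8.0  -> matches "every 4K texture consumes 8MB"
```

So no exotic 80% compression is needed; standard GPU block compression already gets raw 64 MiB down to the 8-16 MiB range.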
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
That's quite interesting. Could do with seeing the original input and the output to see an example.
DirectML comparisons are in this thread
 

jimbojim

Banned
My point is that the ps5 is in a difficult situation where the developer must, do to power pick which is more important 60fps or higher level of Graphics. This is the flaw of the Ps5 architecture.
Taking power from one to help the other. Or lower both. At the sony show the game that was 4k60 didn't look as good as the 30fps game. but you did get 60fps. As more games come out we will see.

PS5 in a difficult situation? Mark Cerny, what the hell did you just do?
 
^ Not related to SSD, but reading that thread it seems to clarify that the ML is still done on the shader cores (so wanting ML performance would mean not using the shader cores for graphics), but the RT is being done with dedicated silicon in tandem with the shader cores. This seems like a customization on MS's end, but I'm guessing it will also come to AMD PC GPU cards as well, at least maybe the high-end ones.

So I was wrong about the ML (although it's still potent, and the write-up in the thread Bernkastel linked says the INT8 and INT4 support on XSX's side was a solution developed between AMD and MS; whether that's just leveraging what AMD already had, or a solution MS's design team specifically came up with while working with the AMD engineers on their team (remember, MS and Sony each had their own group of AMD engineers to work with, exclusive to each other), I don't know), but right about the RT.

Basically you can't measure the RT performance on XSX by standard CU shader compute alone, because MS have dedicated silicon in their GPU for that task; that way the shaders aren't being forced to choose one or the other. Maybe Sony has something similar? That's up for debate.
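For a rough sense of what the INT8/INT4 packing buys, here is the rate-doubling arithmetic using XSX's published 12.15 TFLOPS FP32 figure (the doubling-per-width-halving assumption is RPM-style packing, not a measured number):

```python
# Rough throughput scaling from packing narrower operands into FP32 SIMD lanes.
# Assumption: each halving of operand width doubles per-lane rate (RPM-style).
fp32_tflops = 12.15          # XSX's published FP32 figure

fp16 = fp32_tflops * 2       # 24.3 "TFLOPS" with 2x FP16 packing
int8 = fp32_tflops * 4       # 48.6 TOPS with 4x INT8 packing
int4 = fp32_tflops * 8       # 97.2 TOPS with 8x INT4 packing
```

The catch, as noted above, is that those are the same lanes the shaders use, so every ML op is rendering compute you're not doing.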
 

Allandor

Member
A 4K texture is 4096x4096 pixels. That means it's a total of 16.7 million pixels. If the texture is 32-bit colors, that means you require 536.9 million bits of data for the texture, which is 67.1 million bytes, which is 67.1 MB (or 64MB in binary). Assuming zero compression, of course... Are we going to have 80% compression...? o_O
Textures aren't uncompressed bitmaps. What you describe is a bitmap with transparency. But bitmaps have a huge disadvantage: they are just too big to really work with.
Therefore you always have some kind of compression there (at least lossless, which already saves a huge chunk of memory).
 

SatansReverence

Hipster Princess
This is The Medium trailer:
There is almost zero gameplay, and in the little there is, the closest thing to "loading a world" is when a very limited wall basically changes color in front of the character at 2min.
Are you seriously comparing that to R&C?
Are you here just to troll?


What sort of delusional bullshit is this?

Outside of there being a wall-like structure (one is actually a building) and the character, everything else is different.

The geometry is different, the textures are different, the lighting is different, the effects and particles are different.

And then trying to talk up R&C when it has several-second-long scripted events where transitions occur and they only need to load what is in front of the player for a short time. Its "gameplay" transitions are literally nothing more than a longer-ranged dodge/teleport mechanic.

2.4 >>>>> 5.5

Fuck off back to the spec thread where you belong.
 

Panajev2001a

GAF's Pleasant Genius
^ Not related to SSD, but reading that thread it seems to clarify that the ML is still done on the shader cores (so wanting ML performance would mean not using the shader cores for graphics), but the RT is being done with dedicated silicon in tandem with the shader cores. This seems like a customization on MS's end, but I'm guessing it will also come to AMD PC GPU cards as well, at least maybe the high-end ones.

So I was wrong about the ML (although it's still potent, and the write-up in the thread Bernkastel linked says the INT8 and INT4 support on XSX's side was a solution developed between AMD and MS; whether that's just leveraging what AMD already had, or a solution MS's design team specifically came up with while working with the AMD engineers on their team (remember, MS and Sony each had their own group of AMD engineers to work with, exclusive to each other), I don't know), but right about the RT.

Basically you can't measure the RT performance on XSX by standard CU shader compute alone, because MS have dedicated silicon in their GPU for that task; that way the shaders aren't being forced to choose one or the other. Maybe Sony has something similar? That's up for debate.

Given how nVIDIA implemented it, given what AMD publicly said about RT support and their RT patents, we know roughly what RDNA2 aims at and thus what MS and Sony had to take and possibly tweak: based on the patents and RT being memory bandwidth intensive, the RT HW lives in the TMU block (the memory load and store units of the GPU in a sense) and they take care of managing the BVH and intersection testing of rays against the primitives inside. Thus makes it DCU dependent (CU’s are paired together in a DCU in groups of two).

Using the results of that for shading and lighting is all again down to the shader cores and thus takes away from rendering/compute power just like anything else.

The INT8/4 is an extension of the RPM FP16 work AMD started with Vega where each of the SIMD lanes can be further subdivided and treated as a separate operation running in parallel (a separate “thread”). I would be interested to know if it is one of the RDNA2 features that both consoles have or something that will ship in big Navi that the other console has.

Like Cerny said in the Road to PS5 video (by the way, while the less technical of the two (not an insult, it is just the way it is), Spencer used the popular term "SSD used practically as virtual RAM" while Cerny went over the bits that make that possible... they can both be said to describe the same thing really, whatever it exactly is ;))... there are features in PS5's GPU, which Sony co-developed, that you will see coming in desktop RDNA2 cards later in the year and hence will not be in XSX; but that likely means XSX may very well have similar unique features PS5 does not have that we will see in desktop RDNA2 cards soon.
 
My point is that the PS5 is in a difficult situation where the developer must, due to power, pick which is more important: 60fps or a higher level of graphics. This is the flaw of the PS5 architecture.
Taking power from one to help the other. Or lowering both. At the Sony show, the game that was 4K60 didn't look as good as the 30fps game, but you did get 60fps. As more games come out, we will see.

Surely this stands for both consoles and had always been this way with Dev tradeoffs, either 60 fps graphics turned down slightly or 30 fps with more bells and whistles .
 
MS seems to think it's a good idea... Just to re-iterate...

Phil Spencer:
Thanks to their speed, developers can now use the SSD practically as virtual RAM. The SSD access times come close to the memory access times of the current console generation. Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.


Of course, the SSD will still be slower than the GDDR6 RAM that sits directly on top of the die. But the ability to directly supply data to the CPU and GPU via the SSD will enable game worlds to be created that will not only be richer, but also more seamless. Not only in terms of pure loading times, but also in terrain mapping. A graphic designer no longer has to worry about when GDDR6 ends and when the SSD starts. I like that Mark Cerny and his team at Sony are also investing in an SSD for the PlayStation 5 ...


A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.


After thinking about it, I am wondering why they are using 8MB as the size for a 4K texture;

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.



A 4K texture is 4096x4096 pixels. That means it's a total of 16.7 million pixels. If the texture is 32-bit colors, that means you require 536.9 million bits of data for the texture, which is 67.1 million bytes, which is 67.1 MB (or 64MB in binary). Assuming zero compression, of course... Are we going to have 80% compression...? o_O


You are jumping to conclusions despite the technical problems. SFS (or SF) loads parts of the texture for subsequent frames, not for the current frame; no texture is loaded mid-frame from the SSD.

This was commented on in the link to Beyond3D that thicc_girls_are_the_best provided. I suggested you read it; you said you did...

Ronaldo8

SF is not SFS, as you so eloquently explained. SF without the texture filters is not very useful, since there will always be a delay between determining that a mip is not resident and it being made available. If you are targeting mip changes on a nearly per-frame basis (as I suspect MS is trying to do), then specialised hardware needs to be available to smooth transitions, or else it will be pop-in galore, which defeats the purpose of texture streaming.


Shifty Geezer

I don't think so. You can just prefetch a little earlier instead of later. SF improves the implementation of tiled resources to make it easier and more efficient. SFS helps hide situations where your prefetch has failed, but if your prefetch is good enough, that shouldn't happen that often (if ever!). Potentially, the faster your storage, the less that is a problem. In RAGE for example, SFS would have softened the texture transitions so they were less jarring, as I understand it. But if running from a modern M.2 SSD, that pop-in wouldn't happen in the first place for SFS to help at all.

Ronaldo8

MS invested in silicon to address situations that will almost never happen... Good to know.

Shifty Geezer

I didn't say that, but it wouldn't be unheard of either. How much use did the tessellation hardware in Xenos see? How much action has the ID buffer of the PS4 Pro had?

This is a technical discussion. We should discuss the technology without leaning on faith that every choice is awesome or ground-breaking. Talk about what SF is, where SFS fits in, and how they'll be used.

Ronaldo8

Leaning on faith? I am not the one drawing conclusions about a core next-gen feature based on speculative techniques (that failed) from generations past. James Stanard himself made it clear in a tweet convo that those filters were necessary to correctly pull off SFS. Straight from the horse's mouth itself.


Shifty Geezer

Yes. Your discussion isn't technical. In that reply, you aren't talking about how SF and SFS will be used, and what advantages SFS could bring; you've just said, "MS have included it so it must be useful." That's a non-technical argument of faith.

From your tweet, you may notice pop-in at tile boundaries. So now let's talk about tile boundaries, possible solutions to tile pop-in, where SFS features, and other technical details.


Ronaldo8

You doth protest too much. James Stanard, one of the system architects of the XSX, said it was useful. I don't assume to know more than him, unlike others.
Tech talk? How about this:

There is an excellent GDC talk by Sean Barrett about the case where a page is not yet resident. What to do? Use bilinear filtering (in hardware; in Barrett's own words, a software implementation is an unnecessary hassle) on the residency map, after "padding" it astutely, to solve the issue of sampling texels from adjacent pages that are inherently decorrelated. Sampling from adjacent pages will introduce artifacts, as mentioned by Stanard:



Those patented MS "texture filters" are in fact a modified form of bilinear filtering as explained in the patent (https://patentimages.storage.googleapis.com/ae/20/a0/313511519c3caa/US20180232940A1.pdf).

More importantly, texture filtering and blending is done any time a transition to the next LOD is occurring, irrespective of whether the next LOD is resident or not. MS explicitly provide an example where a PRT is created and the residency map is constantly updated with successive mip levels corrected by a fractional blending factor.
Also, you can only prefetch what you can foresee, not what you figure out after sampling, by which time you already need it.
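To make the residency-map mechanics concrete, here is a minimal sketch (all names are mine, not from the patent): each tile records the finest mip level actually resident, the sampler clamps its desired mip to that, and adjacent tiles with different residency are exactly where the boundary pop-in being debated here comes from.

```python
# Sketch of residency-clamped mip selection for a tiled/streamed texture.
# The residency map stores, per tile, the finest mip level actually in
# memory (0 = full res; larger = coarser). The sampler clamps to it.

def clamp_mip(desired_mip, residency_map, tile_x, tile_y):
    """Return the mip to sample: the desired one, unless only a coarser
    mip is resident for this tile."""
    finest_resident = residency_map[tile_y][tile_x]
    return max(desired_mip, finest_resident)

# Two adjacent tiles with different residency: samples near the shared
# border jump between mip 0 and mip 3 -- the "decorrelated pages" artifact
# that filtering/padding the residency map is meant to smooth over.
residency = [[0, 3]]
left = clamp_mip(0, residency, 0, 0)    # full-res tile is resident
right = clamp_mip(0, residency, 1, 0)   # forced down to the coarse mip
```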


Shifty Geezer


That's not happening. You can't load a texture from storage the moment the GPU realises it's needed; that's just too slow. If SSDs could work that fast, there'd be no market for Optane. SSD access times are in microseconds, versus nanoseconds for DRAM (which is itself horrifically slow compared to the working storage of processor caches).
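Putting rough numbers on that gap (ballpark latencies, not measured figures for either console):

```python
# Ballpark access latencies, in nanoseconds (order of magnitude only).
dram = 100
nvme_ssd = 100_000                    # ~100 microseconds per access

frame_60fps_ns = 1_000_000_000 / 60   # ~16.7 ms per frame

# One SSD round trip is a small slice of a whole frame...
print(nvme_ssd / frame_60fps_ns)      # ~0.006, i.e. ~0.6% of a frame
# ...but ~1000x slower than DRAM, so a texture-cache miss that has to go
# all the way to the SSD stalls the sampler for an eternity in GPU terms.
print(nvme_ssd / dram)                # 1000.0
```

Which is the whole argument: SSD speed works at the granularity of frames and scene traversal, not of individual texture fetches.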



Ronaldo8

SFS as a VRAM capacity (distinct from bandwidth) saver/multiplier implies exactly that.



Shifty Geezer

How do you address the microseconds of latency? Is the GPU going to sit waiting for the data to arrive, or stop what it's drawing, draw something else (or compute something else), then come back to drawing with the texture when it finally arrives and the object with the correct LOD in the right place without messing up the drawing it's already done?

Also, if you can fetch data on demand from disk, why would you need a mechanism for soft transitioning between LODs - SFS? Users would never see a transition because the correct LOD would always be present on demand, no?


iroboto

just thinking about frustum culling etc.
I think from further away, say MIP 10, because it's so far away, you pull that on demand, and it's a blend of uggo at that draw distance.

If you’re strafing left and right, you’re loading in tiles that are out of view before you can actually see it I suspect. I don’t know how much of this translates to how tight you can cut it with SSD. But some testing would be required.


Shifty Geezer

Why is a high MIP something you'd pull on demand? As in, why is that more latency tolerant? The issue isn't BW but the time it takes between sampler feedback stating, during texture sampling (as the object is being drawn), "I need a higher LOD on this texture", and that texture sampler getting new texture data from the SSD.

Texturing on GPUs is only fast and effective because the textures are pre-loaded into the GPU caches for the texture samplers to read. The regular 2D data structure and data access makes caching very effective. The moment texture data isn't in the texture cache, you have a cache miss and a stall until the missing texture data, many nanoseconds away, is loaded. At that point, fetching data from the SSD is clearly an impossible ask.

The described systems included mip mapping and feedback to load and blend better data in subsequent frames. You want to render a surface. The required LOD isn't in RAM so you use the existing lower LOD to draw that surface, and start the fetching process. When the higher quality LOD is loaded a frame or two later, you either have pop-in or you can blend between LOD levels, aided by SFS if that is present.

When it comes to mid-frame loads as described in that theoretical suggestion in the earlier interview (things to look into for the future), we'd be talking about replacing data that's no longer needed this frame. There's no way mid-render data from storage is ever going to happen on anything that's not approaching DRAM speeds. The latencies are just too high.
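The fallback loop described above can be sketched like this (a toy model; the function and names are mine, not any console's API):

```python
# Sketch: draw with the best resident mip now, request the better one,
# and pick it up on a later frame once streaming delivers it.
# Convention: mip 0 is the finest level; resident mips are all coarser
# than or equal to what's been streamed in so far.

def draw_surface(resident_mips, wanted_mip, pending_requests):
    """Return the mip used this frame, queueing a stream-in if needed."""
    if wanted_mip in resident_mips:
        return wanted_mip
    if wanted_mip not in pending_requests:
        pending_requests.append(wanted_mip)   # fetch for a later frame
    return min(resident_mips)                 # finest mip we actually have

resident = {3}            # only a coarse mip is in memory
pending = []

frame1 = draw_surface(resident, 0, pending)   # draws mip 3, requests mip 0
resident.add(pending.pop())                   # streaming completes between frames
frame2 = draw_surface(resident, 0, pending)   # mip 0 arrived: pop-in, or an
                                              # SFS-style blend, happens here
```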
 
Last edited:

Lethal01

Member
What sort of delusional bullshit is this?

Outside of there being a wall-like structure (one is actually a building) and the character, everything else is different.

The geometry is different, the textures are different, the lighting is different, the effects and particles are different.

And then trying to talk up R&C when it has several-second-long scripted events where transitions occur and they only need to load what is in front of the player for a short time. Its "gameplay" transitions are literally nothing more than a longer-ranged dodge/teleport mechanic.



Fuck off back to the spec thread where you belong.

Lighting changes can be done without needing to load huge amounts of data, and the same goes for effects and particles.
The big data eaters are geometry and textures, and the changes we see are extremely minimal compared to what we've seen from Ratchet and Clank.
When we see more, this could change, but right now we only see it going from looking at one building and the ground to looking at another building. It's a very closed-off and comparatively easy transition to do.

You say Ratchet and Clank is scripted, but the transition we see in the trailer for The Medium is also scripted, and we have no idea how long the cutscene lasts. They said the transition is seamless, but this doesn't mean it's fast. We know R&C can do the transition in under 2 seconds; we have no idea how long the full transition takes in The Medium.
 
Last edited:
We really will need to see exactly how it's implemented before we start claiming it's on the same level as Ratchet and Clank.

"no discernible load times or impact to game performance and graphics," could still include the new world taking 10 seconds to fade in. What they have demonstrated so far just isn't as impressive.
What Ratchet and Clank does isn't some ultra-advanced magic that's only possible through PS5... expect other games on XSX and PC to do the same. Stop acting like any other game doing the same is a threat to Sony, because it isn't.
 
How did you come to the conclusion that The Medium is a more demanding game? Do you have the technical details of both games?

Easy.
Textures represent more than 75% of game data.
R&C is filled with objects with no texture that only use solid colors. For example, the suitcase, or the floor in one scene. Or the chairs. That means you can stick a 1x1 or 2x2 texture on them to represent the solid color (less than 1 KB).
That's different from a game using 4K textures that are between 8MB and 64MB.

So yeah, The Medium BY DEFAULT is more demanding in terms of streaming
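To put rough numbers on the size gap being claimed (whether R&C actually ships tiny solid-color textures is the claim above, not something verified here):

```python
# Bytes for a tiny solid-color texture vs. a full 4K texture.
solid_2x2 = 2 * 2 * 4                 # 16 bytes of 32-bit color: effectively free
four_k_bc7 = 4096 * 4096 * 1          # BC7 at 1 byte/texel = 16 MiB
print(four_k_bc7 // solid_2x2)        # ~1,048,576x more data to stream per texture
```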
 
Last edited:

Lethal01

Member
What Ratchet and Clank does isn't some ultra-advanced magic that's only possible through PS5... expect other games on XSX and PC to do the same. Stop acting like any other game doing the same is a threat to Sony, because it isn't.

Stop acting like we have seen another game doing the same. We can agree there isn't anything magical about it, but we have not seen anything on Xbox that shows it can do it at anywhere near the same level.

This isn't about threats; I'd be happy to find out that Microsoft's storage solution is somehow 20x faster than Sony's.
But acting like they have shown that it's anywhere near as fast in gameplay is just silly and hurts discussion.
 
Last edited:
Stop acting like we have seen another game doing the same. We can agree there isn't anything magical about it, but we have not seen anything on Xbox that shows it can do it at anywhere near the same level.

This isn't about threats; I'd be happy to find out that Microsoft's storage solution is somehow 20x faster than Sony's.
But acting like they have shown that it's anywhere near as fast in gameplay is just silly.
You don't even know how this works; otherwise you would recognize that what they are doing with The Medium is similar. There's no need to see the exact same thing, the exact same transitions, etc. You're acting like every game uses some proprietary tools, but no, they are mostly the same. How devs use these tools is key. It doesn't mean they are only possible on select hardware.

There's no need to go 20x better than what Sony has. Sony's storage solution will always be faster; we shouldn't dispute that. But you have to understand that being 2x faster doesn't prevent other solutions from achieving similar results using the same techniques.
 
Easy.
Textures represent more than 75% of game data.
R&C is filled with objects with no texture that only use solid colors. For example, the suitcase, or the floor in one scene. Or the chairs. That means you can stick a 1x1 or 2x2 texture on them to represent the solid color (less than 1 KB).
That's different from a game using 4K textures that are between 8MB and 64MB.

So yeah, The Medium BY DEFAULT is more demanding in terms of streaming

That is simply false.

R&C objects are fully textured and have layers; there are detailed textures to change the reflections, roughness, normal mapping, etc.

 

geordiemp

Member
MS seems to think it's a good idea... Just to re-iterate...

Phil Spencer:
Thanks to their speed, developers can now use the SSD practically as virtual RAM. The SSD access times come close to the memory access times of the current console generation. Of course, the OS must allow developers access that goes beyond that of a pure storage medium. Then we will see how the address space will increase immensely - comparable to the change from Win16 to Win32 or in some cases Win64.


Of course, the SSD will still be slower than the GDDR6 RAM that sits directly on top of the die. But the ability to directly supply data to the CPU and GPU via the SSD will enable game worlds to be created that will not only be richer, but also more seamless. Not only in terms of pure loading times, but also in terrain mapping. A graphic designer no longer has to worry about when GDDR6 ends and when the SSD starts. I like that Mark Cerny and his team at Sony are also investing in an SSD for the PlayStation 5 ...


A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.


After thinking about it, I am wondering why they are using 8MB as the size for a 4K texture:

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.



A 4K texture is 4096x4096 pixels. That means it's a total of 16.7 million pixels. If the texture is 32-bit color, that means you require 536.9 million bits of data for the texture, which is 67.1 million bytes, which is 67.1 MB (or 64 MiB in binary units). Assuming zero compression, of course... Are we going to have 80% compression...? o_O
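The arithmetic above checks out; a quick sketch (uncompressed RGBA8 assumed, with the decimal-vs-binary units made explicit):

```python
# Raw (uncompressed) size of a 4K texture at 32 bits per pixel.
width = height = 4096
bits_per_pixel = 32            # 8 bits each for R, G, B, A

pixels = width * height
size_bytes = pixels * bits_per_pixel // 8
print(pixels)                  # 16777216 (~16.7 million pixels)
print(size_bytes / 1e6)        # 67.108864 -> ~67.1 MB in decimal units
print(size_bytes / 2**20)      # 64.0 -> 64 MiB in binary units
```

Microsoft's ~8 MB figure would then be roughly an eighth of this raw size, which is consistent with GPU block compression rather than with some 80% general-purpose compression.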

No, not really.

Textures are saved in a GPU format; there are 7 different block-compressed formats (BCn) that GPUs read. They use 4x4 pixel blocks, already compressed into the GPU format that a GPU cache reads and then works on. BC6H (HDR) and BC7 are 16 bytes per block.

Sony and MS can use RDO (a bit like MPEG-4) to prepare the image before it is chopped up into the BCn native GPU format, so in BCn format it's smaller on disk and in RAM anyway.

However, the expansion into RAM happens after the hardware decompression; the RAM size of textures will be the same for both in third-party games (likely, IMO).

The compression (onto disk) and the console's hardware APU decompression is either Kraken or Zlib (or equivalent). The big size savings are made when the image is prepared (think JPEG, but much more complex) before making BCn.

Note PCs can also do the RDO / Oodle / BCPack (whatever the name of the day) and reduce texture sizes; they just won't have that final compress / hardware-decompress step.

SFS is about using a lower-quality mip and blending in the better-quality one if it arrives late.
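The BCn sizes described above can be sanity-checked with a small sketch (BC1 uses 8 bytes per 4x4 block; BC6H and BC7 use 16 bytes per block):

```python
# Size of a block-compressed (BCn) texture: the image is stored as 4x4
# pixel blocks, each encoded into a fixed number of bytes.
def bcn_size_bytes(width, height, bytes_per_block):
    blocks = (width // 4) * (height // 4)
    return blocks * bytes_per_block

print(bcn_size_bytes(4096, 4096, 8) / 2**20)   # BC1:  8.0 MiB per 4K texture
print(bcn_size_bytes(4096, 4096, 16) / 2**20)  # BC7: 16.0 MiB per 4K texture
```

Note how the BC1 number lines up with the "every 4K texture consumes 8MB" figure quoted earlier in the thread.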

Here is a good read if anyone is interested

 

Lethal01

Member
You don't even know how this works, otherwise you would recognize that what they are doing with The Medium is similar. No need to see the exact same thing, the exact same transitions, etc. You're acting like every game uses some proprietary tools, but no, they are mostly the same. How devs use these tools is key. It doesn't mean they are only possible on select hardware.

No need to go 20x better than what Sony has. Sony's storage solution will always be faster; we shouldn't dispute that. But you've got to understand that being 2x faster doesn't prevent other solutions from achieving similar results using the same techniques.

You don't even know how this works, otherwise you would recognize that they could be using a very different method of switching worlds in The Medium to achieve a similar effect.

The claim was made that Xbox has already demonstrated that they can instantly transition between worlds just like R&C can, but we have not seen how long the transitions in The Medium take or how big the changes are. It's almost like the people who were using Titanfall as an example of the transition.

Of course it's possible we'll see people find smart ways to do the same thing, but definitively saying that we have already seen it is crazy
 

dxdt

Member
2.4 >>>>> 5.5
I do believe that the PS5 SSD implementation is superior in both raw speed and ease of implementation. Sometimes, though, the raw numbers may not tell the whole story. The random-access numbers, IOPS, and latency are still not known.

It's kind of like comparing a 3 GHz Pentium D to today's 2 GHz Intel 10th gen. The 2 GHz part is just able to do more by having better IPC and more modern instructions. The question is how much more the XSX SSD and XVA can do with less, to reduce the gap.

And even today, I don't think anyone can tell us the bandwidth and latency needed for that level of PQ in the PS5 UE5 demo and R&C.
 
You don't even know how this works, otherwise you would recognize that what they are doing with The Medium is similar. No need to see the exact same thing, the exact same transitions, etc. You're acting like every game uses some proprietary tools, but no, they are mostly the same. How devs use these tools is key. It doesn't mean they are only possible on select hardware.

No need to go 20x better than what Sony has. Sony's storage solution will always be faster; we shouldn't dispute that. But you've got to understand that being 2x faster doesn't prevent other solutions from achieving similar results using the same techniques.
PS5 is "5x faster on many occasions" according to this 3rd-party dev👇

Check this post, vetted by @Mod of War, and see how the XSX bottleneck could operate:
It is easy. It is useless to have 12 boxes if they do not all fit through the door at once.

You have 12 boxes to move, but you can't pass all of them through at once. You must decide which boxes will pass and which will not. That is handled by a coordinator, and the coordinator tells the delivery man which boxes to take.

Mrs. XSX wants to finish the move as soon as possible, but it turns out that only 8 boxes fit through the door at a time. The coordinator is fast, and also uses a box compressor so that 10 boxes can go through instead of 8, but there are several drawbacks. The compressor can only compress the red boxes, and the coordinator also has to coordinate many other things: street traffic, people passing through the door, the space in the room where the boxes are stored, the noise of neighbors distracting the delivery man, searching for and selecting what the boxes are filled with, etc. Also, the delivery man is not that fast and gets very distracted filling and transporting boxes. So he passes the 10 boxes (not 12) at a certain speed, "1x". The lady demands her boxes, but they do not arrive as quickly as she would like; although she has many boxes, the system is not capable of managing all of them properly.

On the other hand we have Mrs. PS5. She only has 10 boxes to move, but her door is twice as big: enough for all her boxes to enter at once, with room left for people to also enter and exit through the door. Furthermore, the coordinator has the ability to automatically discard unnecessary boxes, so he doesn't waste time checking boxes that are not going to be used. In addition, anyone in the environment can do the job of the coordinator or the delivery man (even at the same time). The compressor is not as new, but it can compress all the boxes, whether they are red or blue. All of them. And the delivery man is more than twice as fast, managing to pass boxes at a speed of "2.5x" in the worst case, and "5x" on many occasions. On top of that, anyone left free or without work can help the delivery man distribute boxes or help the coordinator coordinate. All this makes this removal company the most efficient ever seen, and makes the number of boxes available irrelevant. For that moving system, 12 boxes are not needed; with 10 you can do the same job (and more, or better, in some cases). Having more boxes would only make the move more expensive without any of it being needed.

Of course, having more boxes available always helps you advertise yourself as a top removal company compared to the competition, even if your removal company is normal and ordinary. But it is only that: a smokescreen.

That does not mean that the XSX is bad, far from it; it is an extraordinary machine. But the PS5 has an efficiency NEVER seen before.

It is true that on PC there are more powerful cards and more powerful systems, but you know those cards are never used properly; they have raw power, but it is never exploited. It is the scourge of PC: an ecosystem that is too varied, on top of exorbitant prices.

And I've always been a PC lover, but things as they are, what I've seen on PS5 I only remember feeling something similar when 3dfx and its Glide came out. Its astonishing speed leaves you speechless.
Via: @Insane Metal

PS. Funny times ahead :messenger_tears_of_joy:
 
You don't even know how this works, otherwise you would recognize that they could be using a very different method of switching worlds in The Medium to achieve a similar effect.

The claim was made that Xbox has already demonstrated that they can instantly transition between worlds just like R&C can, but we have not seen how long the transitions in The Medium take or how big the changes are. It's almost like the people who were using Titanfall as an example of the transition.

Of course it's possible we'll see people find smart ways to do the same thing, but definitively saying that we have already seen it is crazy
What R&C does has nothing to do with what Titanfall 2 did; they each use a different approach. Titanfall uses clever level design with teleports. You can actually do something similar pretty easily: just download Unreal Engine 4, create two bunkers with different designs, and place a teleport in both of them. You can also add a nice post-process effect when you teleport, and voilà. Of course you'd have to disable and enable a few things when you teleport to mimic the effect. This can be done easily within the same level.

What R&C is doing seems to be loading an entirely different level, as if you were already there, with everything loaded instantly. The transitions, IMO, are simply a transition level that is there by design more than some sort of loading screen. I think it could be loaded faster, but you also need to let the player adjust to what's going to happen next. This isn't some tech demo; it has to be playable and fun.

The thing is, you can test a few things yourself. Unreal is free, and so are Unity and Blender. Using game engines helps you understand a few things. It doesn't mean I'm right, but in my experience, most of these things follow the same basics.
 
PS5 is "5x faster on many occasions" according to this 3rd-party dev👇

Check this post, vetted by @Mod of War, and see how the XSX bottleneck could operate:

Via: @Insane Metal
This is one way of looking at it, but the reality is that the "doors" are all different, and some are even malleable. If your application is made to make better use of 10 boxes then yes, but some applications require 12, others 6, and soon some will require 30 or so.
 

Lethal01

Member
What R&C does has nothing to do with what Titanfall 2 did; they each use a different approach. Titanfall uses clever level design with teleports. You can actually do something similar pretty easily: just download Unreal Engine 4, create two bunkers with different designs, and place a teleport in both of them. You can also add a nice post-process effect when you teleport, and voilà. Of course you'd have to disable and enable a few things when you teleport to mimic the effect. This can be done easily within the same level.

It seems like you just aren't reading what I said at all. The fact that Titanfall 2 is doing something totally different is exactly what I was pointing out. The Medium could be doing something totally different too, and using it as an example is as silly as using Titanfall as an example. It could be using much more similar assets between transitions, since it's two versions of the same general world. But even if it isn't, once again, we don't know how long the transitions are during gameplay or how they usually look.

The thing is, you can test a few things yourself. Unreal is free, and so are Unity and Blender. Using game engines helps you understand a few things. It doesn't mean I'm right, but in my experience, most of these things follow the same basics.

I work as a character modeler/animator and have a solid decade working on games. I'm no longer wasting time making little game engines but I think I'm good.
 

Ascend

Member
you are jumping to conclusions despite the technical problems. SFS (or SF) loads parts of the texture for subsequent frames, not for the current frame; no texture is loaded mid-frame from the SSD

this was commented on in the link to Beyond3D that thicc_girls_are_the_best provided. I suggested you read it; you said you did...
I did read it. And again, I see little to suggest that what I'm saying is irrelevant. What is SFS doing? One of the main things that was mentioned, and that I just quoted, was:
SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes.

Shifty Geezer brings up valid concerns, but in my view these have already been addressed by that statement, and, funnily enough, by his own. His statements were:

Texturing on GPUs is only fast and effective because the textures are pre-loaded into the GPU caches for the texture samplers to read. The regular 2D data structure and data access makes caching very effective. The moment texture data isn't in the texture cache, you have a cache miss and stall until the missing texture data, many nanoseconds away, is loaded. At that point, fetching data from SSD is clearly an impossible ask.

The described systems included mip mapping and feedback to load and blend better data in subsequent frames. You want to render a surface. The required LOD isn't in RAM so you use the existing lower LOD to draw that surface, and start the fetching process. When the higher quality LOD is loaded a frame or two later, you either have pop-in or you can blend between LOD levels, aided by SFS if that is present.


I love how he basically shoots the thing down and immediately answers himself afterwards. The GPU does not need to stall to await the loading of the high quality texture, because the low one is already available, and guaranteed to be available. What is the reason a texture is not available? What does that mean? It means it's not in cache and not in RAM and needs to be read from the SSD. And yet again we arrive where we were.

What is faster?
a) Writing from SSD to RAM, then let the GPU fetch from RAM to fill its cache?
or
b) let the GPU fetch directly from SSD to fill its cache?
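For what it's worth, the fallback behaviour both sides are describing can be sketched in a few lines (illustrative names only, not the actual SFS API):

```python
# Hedged sketch of the mip-fallback idea behind Sampler Feedback Streaming:
# if the wanted mip level is not resident in RAM, draw with the best coarser
# mip that IS resident and queue the missing one to stream in later.
def sample_mip(resident_mips, wanted_mip, stream_queue):
    """resident_mips: set of resident mip levels (larger number = coarser)."""
    if wanted_mip in resident_mips:
        return wanted_mip                      # hit: no stall, nothing to fetch
    fallback = min(m for m in resident_mips if m > wanted_mip)
    stream_queue.append(wanted_mip)            # fetch for a future frame
    return fallback                            # draw now with the coarser mip

queue = []
print(sample_mip({0, 3}, 0, queue))  # 0: wanted mip is resident, queue untouched
print(sample_mip({3}, 0, queue))     # 3: coarser fallback used, mip 0 queued
```

Under this scheme the draw never stalls on the SSD; the coarser mip is used immediately and the missing data arrives over later frames.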
 
I did read it. And again, I see little to suggest that what I'm saying is irrelevant. What is SFS doing? One of the main things that was mentioned, and that I just quoted, was:
SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes.

you read, but you don't understand what is being discussed

SFS tells you to load mip maps from the SSD, but it does that for use in subsequent frames; you don't load a texture from the SSD in the same frame it is required. That is why it guarantees a lower mip map is always in memory, to be used in case the correct mip map is not available in memory. When that happens, your object is rendered with a low-resolution texture, and some frames later it has the correct texture for its distance, instead of stalling for lack of texture data



I love how he basically shoots the thing down and immediately answers himself afterwards. The GPU does not need to stall to await the loading of the high quality texture, because the low one is already available, and guaranteed to be available.

yes, and then a few frames later the correct one appears; it's usually called pop-in

What is the reason a texture is not available?

because of a bad implementation of how you load memory, one that did not load the texture into RAM before it was required

What does that mean? It means it's not in cache and not in RAM and needs to be read from the SSD. And yet again we arrive where we were

no, it means your GPU will stall waiting to fetch the required texture; at that point the frame freezes. If used, SFS may ensure a low-resolution mip map is already in memory (previously loaded) that can be used in its place so it doesn't stall; instead you get pop-in a few frames later, when the mip map arrives from the SSD. It can also blend mip maps so the drastic transition doesn't look as bad

What is faster?
a) Writing from SSD to RAM, then let the GPU fetch from RAM to fill its cache?
or
b) let the GPU fetch directly from SSD to fill its cache?


how big, and where, is this cache?
 

sendit

Member
Easy.
Textures represent more than 75% of game data.
R&C is filled with objects with no texture that only use solid colors: for example the suitcase, or the floor in one scene, or the chairs. That means you can stick on a 1x1 or 2x2 texture to represent the solid color (less than 1 KB).
That's different from a game using 4K textures that are between 8 MB and 64 MB each.

So yeah, The Medium BY DEFAULT is more demanding in terms of streaming

Again, how do you know what the game is using in terms of textures? Additionally, textures aren’t the only thing that takes up GPU bandwidth. An increase in texture resolution has more to do with VRAM.
 

Ascend

Member
yes, and then the correct one appears; it's usually called pop-in



because of a bad implementation of how you load memory, one that did not load the texture into RAM before it was required



no, it means your GPU will stall waiting to fetch the required texture; at that point the frame freezes. SFS may ensure a low-resolution mip map is already in memory (previously loaded) that can be used in its place so it doesn't stall; instead you get pop-in




how big, and where, is this cache?
At this point I'm wondering if you are actually understanding what is being said, because we keep reverting to things that are already a given, or you're simply repeating what I am saying in different words.

We don't know how big the cache is. But every GPU needs an L1 and L2 cache, so for conceptual and speculative discussion we can assume the L2 cache. We can use RDNA 1 as a reference, at least for now:

arch13.jpg


But since we're at it... Some more interesting slides;

arch14.jpg


arch15.jpg
 
that is simply false

R&C objects are fully textured and layered; there are detailed textures to change the reflections, roughness, normal mapping, etc.



Uh, no, I don't think you understand. You can't compare a stylized/toy environment to photorealism.
Just because it has a few sprinkled plants that are instanced (meaning there is only one asset/texture in memory) doesn't mean it even sniffs the shader complexity and memory requirements of photorealism.

Because it's a stylized environment, you get away with ridiculously low-poly objects like these rocks. You simply need a tiled texture or a low-res texture. I have seen textures anywhere from 16x16 to 512x512, tileable, do the job. The same is the case for smooth objects.

ksK5hMe.png



Everything in red is a solid color, which uses a 1x1 texture. Most devs also add a generic cloud/dirt tiling texture and plug it into the roughness input. They can then use this texture throughout the entire game. That generic tiling texture could be 512x512 or less. They also use a noise texture that could be as small as 16x16. It doesn't matter, because it's tiled. This allows them to break up the solid colors in addition to the dirt tiling. You see that a lot in this gameplay.

racket6w4jlq.png


MORE

Now compare that to this demo, with an environment similar to the Ratchet & Clank gameplay.
Take a completely empty TLOU2 bedroom with only 6 walls (forward, backward, left, right, ceiling, floor). Scratch that: just use one wall, just one wall of a TLOU2 bedroom. If we converted that single wall to next-gen specs, you end up with:

4k diffuse
4k roughness
4k spec
4k normal

That single wall would be 100,000+ times more complex to stream than the texture data in this entire SEED demo.
You ask why? Because the diffuse in this entire demo can be represented by several 1x1 files, one for each color used.

This is what you end up with:
1x1 diffuse
1x1 roughness
1x1 spec

You then only need a generic cloud/dirt tiling alpha/mask texture and one scratch texture. That's it.



I'm sorry, but just one Quixel rock is more complex to stream than all the objects in the R&C screens above.
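A hedged back-of-envelope version of this comparison (uncompressed RGBA8, 4 bytes per pixel, ignoring mip chains and block compression for simplicity):

```python
# Rough streaming-cost comparison: one fully 4K-mapped wall versus a
# solid-color material built from 1x1 maps, as argued above.
def tex_bytes(side, channels=4):
    return side * side * channels

# One "next-gen" wall: 4K diffuse + roughness + spec + normal maps.
wall = 4 * tex_bytes(4096)
# A solid-color material: 1x1 diffuse + roughness + spec.
solid = 3 * tex_bytes(1)

print(wall / 2**20)   # 256.0 MiB for the single wall
print(solid)          # 12 bytes for the whole solid-color material
print(wall // solid)  # 22369621 -> tens of millions of times more data
```

Even if block compression cut both sides by the same factor, the ratio between the two would stay enormous.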
 
At this point I'm wondering if you are actually understanding what is being said, because we keep reverting to things that are already a given, or you're simply repeating what I am saying in different words.

We don't know how big the cache is. But every GPU needs an L1 and L2 cache, so for conceptual and speculative discussion we can assume the L2 cache. We can use RDNA 1 as a reference, at least for now:

arch13.jpg


But since we're at it... Some more interesting slides;

arch14.jpg


arch15.jpg

I am not the one reverting to this; it's you. Your textures are inside RAM. Whatever you take from the SSD, you are not going to search for and deliver in the same frame, and SFS is not searching for textures for the current frame; the textures retrieved from the SSD are for use in the next frames, not the current one, so you store them in RAM. You may have something in an embedded cache, but it's not big enough to store textures, so you keep them in RAM
 
Uh, no, I don't think you understand. You can't compare a stylized/toy environment to photorealism.
Just because it has a few sprinkled plants that are instanced (meaning there is only one asset/texture in memory) doesn't mean it even sniffs the shader complexity and memory requirements of photorealism.

Because it's a stylized environment, you get away with ridiculously low-poly objects like these rocks. You simply need a tiled texture or a low-res texture. I have seen textures anywhere from 16x16 to 512x512, tileable, do the job. The same is the case for smooth objects.

ksK5hMe.png



Everything in red is a solid color, which uses a 1x1 texture. Most devs also add a generic cloud/dirt tiling texture and plug it into the roughness input. They can then use this texture throughout the entire game. That generic tiling texture could be 512x512 or less. They also use a noise texture that could be as small as 16x16. It doesn't matter, because it's tiled. This allows them to break up the solid colors in addition to the dirt tiling. You see that a lot in this gameplay.

racket6w4jlq.png


MORE

Now compare that to this demo, with an environment similar to the Ratchet & Clank gameplay.
Take a completely empty TLOU2 bedroom with only 6 walls (forward, backward, left, right, ceiling, floor). Scratch that: just use one wall, just one wall of a TLOU2 bedroom. If we converted that single wall to next-gen specs, you end up with:

4k diffuse
4k roughness
4k spec
4k normal

That single wall would be 100,000+ times more complex to stream than the texture data in this entire SEED demo.
You ask why? Because the diffuse in this entire demo can be represented by several 1x1 files, one for each color used.

This is what you end up with:
1x1 diffuse
1x1 roughness
1x1 spec

You then only need a generic cloud/dirt tiling alpha/mask texture and one scratch texture. That's it.



I'm sorry, but just one Quixel rock is more complex to stream than all the objects in the R&C screens above.


wrong, there are plenty of high-resolution textures; they can be seen in the trailer. Sorry, but your argument is ridiculous and easily disproved just by looking at the game images. It's possible to use small albedo textures for some things (like Smash Bros.) but not for normals, diffuse, specular, roughness, etc., and extra layers of detail (in a similar fashion to Smash Bros.)

also, this is not the first R&C; there are games from the series on past consoles and in the current gen, and they have a lot of textures too, despite having a cartoony style
 
wrong, there are plenty of high-resolution textures; they can be seen in the trailer. Sorry, but your argument is ridiculous and easily disproved just by looking at the game images. It's possible to use small albedo textures for some things (like Smash Bros.) but not for normals, diffuse, specular, roughness, etc., and extra layers of detail (in a similar fashion to Smash Bros.)

also, this is not the first R&C; there are games from the series on past consoles and in the current gen, and they have a lot of textures too, despite having a cartoony style

OMG I Have done the IMPOSSIBLE!!! I'm using 1x1 diffuse, spec and roughness textures. OMG WHAT HAVE I DONE?!!!

uPAni7l.png


RDtCcCU.png



SMH. You do realize that albedo = diffuse, right?
And you do realize that spec and roughness are literally just scalars between 0 and 1?
 
SMH. You do realize that albedo = diffuse, right?

diffuse has additional information like highlights and shadows; it's used to give color, like albedo, but they are not the same

Diffuse-and-Albedo-Map.jpg



R&C is using big textures despite its cartoony look. It's OK to compare it with city props that use the same color over a big surface (the same applies to other, more realistic-looking games), but the game also uses plenty of natural environments with more complex textures. I don't have access to the PS5 version to extract textures or anything like that, and I can't right now with the PS4 version, but you can compare with the PS4 version yourself, since it's available, if you want. The game also uses plenty of textures, enemies have highly detailed textures, and textures are not the only thing you may need to transfer from the SSD
 
diffuse has additional information like highlights and shadows; it's used to give color, like albedo, but they are not the same

Diffuse-and-Albedo-Map.jpg

"Albedo is the base color input, commonly known as a diffuse map. " - marmoset

The term is used interchangeably. Different engines and rendering software use different terms; Unreal Engine and CryEngine call it diffuse, and most engines today call it diffuse.
Just admit it. You were wrong on all counts.
 
It seems like you just aren't reading what I said at all. The fact that Titanfall 2 is doing something totally different is exactly what I was pointing out. The Medium could be doing something totally different too, and using it as an example is as silly as using Titanfall as an example. It could be using much more similar assets between transitions, since it's two versions of the same general world. But even if it isn't, once again, we don't know how long the transitions are during gameplay or how they usually look.



I work as a character modeler/animator and have a solid decade working on games. I'm no longer wasting time making little game engines but I think I'm good.
At least you know something. I find it weird, though; sometimes your comments make it seem like you believe in magic. My bad. The thing about The Medium is that the people responsible for the game have been interviewed, and they said themselves why the game couldn't be done on current gen: it's because of how fast they needed to transition to another world, and only with SSD technology could they achieve that. I agree they didn't showcase it as much as R&C did, but reading what they plan to do and seeing a glimpse during the MS May event tells you everything, or at least gives you a solid hint.
 
"Albedo is the base color input, commonly known as a diffuse map. " - marmoset

The term is used interchangeably. Different engines and rendering software use different terms; Unreal Engine and CryEngine call it diffuse, and most engines today call it diffuse.
Just admit it. You were wrong on all counts.
Albedo is deprived of all shadows and highlights, though. See it as a better diffuse.
 
"Albedo is the base color input, commonly known as a diffuse map. " - marmoset

The term is used interchangeably. Different engines and rendering software use different terms; Unreal Engine and CryEngine call it diffuse, and most engines today call it diffuse.
Just admit it. You were wrong on all counts.


:pie_thinking:
you are quoting marmoset, but apparently you missed a very important part of the definition

Albedo

Albedo is the base color input, commonly known as a diffuse map.

An albedo map defines the color of diffused light. One of the biggest differences between an albedo map in a PBR system and a traditional diffuse map is the lack of directional light or ambient occlusion. Directional light will look incorrect in certain lighting conditions, and ambient occlusion should be added in the separate AO slot.


The albedo map will sometimes define more than the diffuse color as well, for instance, when using a metalness map, the albedo map defines the diffuse color for insulators (non-metals) and reflectivity for metallic surfaces.

marmoset.co



this is the definition from where I took the image

Diffuse and Albedo
Both of these maps have almost the same purpose. They give a material color. But as you can see in the image above, the diffuse map has additional information about shadows and highlights. While the albedo map has more average values of color brightness across the entire image. So It looks less contrasting.
Sometimes too many shadows and highlights on a diffuse map can affect a model’s photorealism for the worse. Because a direction of the shadows and highlights on the map may not coincide with a direction of light source of the model. On the other hand, a model with an albedo map may have poor clarity of surface details. Occasionally it is more a matter of taste which texture to choose.



you can also see that the albedo map in deferred rendering techniques doesn't include highlights or shadows

Render-targets-in-deferred-rendering-albedo-depth-diffuse-radiance-normals-specular.png



the term is used interchangeably because albedo is a diffuse/color map, but unlike traditional diffuse maps it lacks shadows and highlights, and given that we are talking about PBR materials it is correct to separate out the light and shadow information
 

Ascend

Member
I am not the one reverting to this; it's you. Your textures are inside RAM. Whatever you take from the SSD, you are not going to search for and deliver in the same frame, and SFS is not searching for textures for the current frame; the textures retrieved from the SSD are for use in the next frames, not the current one, so you store them in RAM. You may have something in an embedded cache, but it's not big enough to store textures, so you keep them in RAM
Where did I say that? Oh. Right. I didn't. Obviously everything that is loaded into cache still needs to be processed by the GPU, which also takes time. I'm not an idiot. You jumped to conclusions about what you think I meant.
 
:pie_thinking:
you are quoting marmoset, but apparently you missed a very important part of the definition



marmoset.co



this is the definition from where I took the image





you can also see that the albedo map in deferred rendering techniques doesn't include highlights or shadows

Render-targets-in-deferred-rendering-albedo-depth-diffuse-radiance-normals-specular.png



the term is used interchangeably because albedo is a diffuse/color map, but unlike traditional diffuse maps it lacks shadows and highlights, and given that we are talking about PBR materials it is correct to separate out the light and shadow information

It's pretty simple: engines and rendering software kept calling it diffuse even after moving to PBR. No one talks about traditional diffuse maps; everyone utilizes PBR in their engine.
You are trying to split hairs on something you were clearly wrong about when you said "it's possible to use small albedo textures in some things (like Smash Bros.) but not for normals, diffuse, specular, roughness, etc., and extra layers of detail"
 
It's pretty simple: engines and rendering software kept calling it diffuse even after moving to PBR. No one talks about traditional diffuse maps; everyone utilizes PBR in their engine.
You are trying to split hairs on something you were clearly wrong about when you said "it's possible to use small albedo textures in some things (like Smash Bros.) but not for normals, diffuse, specular, roughness, etc., and extra layers of detail"

it's as simple as searching on Google (I use Startpage). I quoted definitions even from the same site you quoted, and they mention the distinction, and many people mention the same thing in artist forums, game development, engines, etc. You don't have to believe me, but if you don't want to make the distinction that's OK, as long as you understand what the texture includes and what it doesn't, so you don't pick the wrong diffuse map for your work. You don't need my approval; I am not your boss
 