
AMD FidelityFX Super Resolution 2.0 - FSR 2.0 vs Native vs DLSS - The DF Tech Review




It's no secret that we weren't hugely impressed with FSR 1.0 - but AMD has stepped up its game dramatically with its image reconstruction-based successor. Up there with the best software upscalers, comparable to native resolution rendering and DLSS, this is everything FSR 1.0 should have been. Alex Battaglia has this in-depth look at the first supported game: Arkane's wonderful Deathloop.

00:00:00 Introduction
00:00:50 What is FSR 2.0?
00:02:56 Measuring FSR 2.0's frame-time cost on Nvidia and AMD
00:07:50 FSR 2.0 performance conclusions and use cases
00:09:24 Image quality: methodology
00:11:22 Image quality: static images - DLSS vs. FSR vs. Native
00:14:32 Image quality: camera movement - DLSS vs. FSR vs. Native
00:15:31 Image quality: animation - DLSS vs. FSR vs. Native
00:16:44 Image quality: particles - DLSS vs. FSR vs. Native
00:17:44 Image quality: effects and vegetation - DLSS vs. FSR vs. Native
00:18:45 Concluding thoughts
 
Results are a bit different because Alex used DLSS 2.3 instead of DLSS 2.0.
From what I remember, the 2.3 update specifically addressed the defects still present in FSR 2.0.


And from what I understood of the performance discussion, those "bloated" floating-point numbers on Ampere are actually being put to use here.
 

kikkis

Member
I think that is a pretty good analysis by Alex. One point: I would take the performance difference of 0.6 ms and 1.1 ms on Nvidia with a bit of a grain of salt given the test method. The animation test looked somewhat bad zoomed in, but it's hard to say what it would look like on a monitor without zoom when actually playing the game.
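For a sense of scale, here is a quick sketch (hypothetical frame times, not DF's measurements) of what a fixed per-frame pass cost in that range does to frame rate:

```python
# Rough sketch: how a fixed per-frame upscale cost eats into frame rate.
# Base frame times below are illustrative, not measured values.

def fps_with_pass(base_frame_time_ms: float, pass_cost_ms: float) -> float:
    """FPS after adding a fixed-cost post-process pass to each frame."""
    return 1000.0 / (base_frame_time_ms + pass_cost_ms)

for base_ms in (8.33, 16.67):          # ~120 fps and ~60 fps before the pass
    for cost_ms in (0.6, 1.1):         # the two costs discussed above
        print(f"{base_ms:5.2f} ms + {cost_ms} ms pass -> {fps_with_pass(base_ms, cost_ms):6.1f} fps")
```

The same half-millisecond difference matters much more at high frame rates than at 60 fps, which is one reason small measurement errors in the pass cost are easy to over-read.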
 

SlimySnake

Flashless at the Golden Globes
I watched the Hardware Unboxed video on FSR 2.0, and unless DF finds something different, I think the following holds true.

- 5500xt (Overclocked XSS GPU at 5.2 Tflops) - 24% gain in FSR Quality mode. 50% in FSR Performance.
- 5700xt (9.7tflops) - 37% gain in FSR Quality mode. 61% gain in Performance.
- 3080 - 38% Quality. 74% Performance mode.
- 3080 DLSS is 4-6% better than FSR 2.0.

According to AMD, the more powerful the GPU, the faster the FSR processing time, which leads to better FSR performance. So something like an XSS stands to gain less than the XSX, which is far more powerful.
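A rough model of that point, with made-up numbers rather than anything measured: only the resolution-bound part of the frame shrinks at the lower internal resolution, and the FSR pass itself is assumed to cost more milliseconds on the weaker GPU.

```python
# Rough model of why a weaker GPU gains less from FSR 2.0, per the AMD note
# above. All numbers below are made up for illustration, not measurements.

def fsr_gain(native_ms: float, fixed_ms: float, scale: float, pass_ms: float) -> float:
    """Relative FPS gain: only the resolution-bound part of the frame shrinks,
    the fixed part stays, and the FSR pass adds its own cost on top."""
    fsr_ms = fixed_ms + (native_ms - fixed_ms) * scale + pass_ms
    return native_ms / fsr_ms - 1.0

# Quality mode renders ~44% of the native pixel count (1.5x per-axis scale).
# The FSR pass itself is assumed to take longer on the weaker GPU.
for name, native_ms, fixed_ms, pass_ms in (
    ("weaker GPU", 33.3, 14.0, 3.0),
    ("faster GPU", 11.1, 4.5, 0.7),
):
    gain = fsr_gain(native_ms, fixed_ms, 0.44, pass_ms)
    print(f"{name}: ~{gain * 100:.0f}% gain in Quality mode")
```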
 

kyliethicc

Member
My takeaways:

FSR 2.0 looks good and boosts performance.

Can be used on a lot of GPUs.
It offers better IQ than FSR 1.0 but at lower performance.

The 4K quality mode is higher performance than native 4K with about the same IQ.
The vid’s example shows 72 FPS (FSR) vs 46 FPS (native).

About as good IQ as DLSS 2.3 in 4K quality modes.
Not as good IQ as DLSS 2.3 in 4K performance modes.

AMD now have a direct competitor to DLSS with similar benefits.
 

01011001

Banned
I think that is a pretty good analysis by Alex. One point: I would take the performance difference of 0.6 ms and 1.1 ms on Nvidia with a bit of a grain of salt given the test method. The animation test looked somewhat bad zoomed in, but it's hard to say what it would look like on a monitor without zoom when actually playing the game.

Having tested it myself yesterday, it has noticeable flicker at 1440p in Quality mode, but I didn't change the sharpening.
 

SlimySnake

Flashless at the Golden Globes
My takeaways:

FSR 2.0 looks good and boosts performance.

Can be used on a lot of GPUs.
It offers better IQ than FSR 1.0 but at lower performance.

The 4K quality mode is higher performance than native 4K with about the same IQ.
The vid’s example shows 72 FPS (FSR) vs 46 FPS (native).

About as good IQ as DLSS 2.3 in 4K quality modes.
Not as good IQ as DLSS 2.3 in 4K performance modes.

AMD now have a direct competitor to DLSS with similar benefits.
AMD killed it, and the biggest thing no one seems to be talking about is that they don't need any tensor cores. No extra hardware required, which means smaller and cheaper chips, lower temps, lower cooling requirements and higher cost savings.

Lisa Su is a fucking genius. Now I just hope she pays devs to implement this as a feature like Nvidia did with RTX and DLSS back in the early days. You can't expect devs to support this on their own when they are struggling with delays; you have to invest in it and write it off as marketing. She did have Ubisoft work with AMD on their ray tracing, which led to fantastic results almost on par with Nvidia's RTX implementation, so hopefully she continues to do that going forward.

I chose to buy a 3080 over the 6900xt because of DLSS, but now after seeing AMD cards performing better than 30 series cards in the Matrix demo, FSR being as good as DLSS and even ray tracing performance improving on AMD cards, I am leaning towards the 6900xt. Way lower power draw, 6GB more VRAM, and even cheaper than most 3080 models? Why not.
 

DukeNukem00

Banned
I think that is a pretty good analysis by Alex. One point: I would take the performance difference of 0.6 ms and 1.1 ms on Nvidia with a bit of a grain of salt given the test method. The animation test looked somewhat bad zoomed in, but it's hard to say what it would look like on a monitor without zoom when actually playing the game.

The reason these comparisons are zoomed is so we, the audience, can see what he's trying to say. You would of course notice everything when playing normally, why wouldn't you? A character fills a large part of the screen, it's not small.

Some people, if they don't like the results, have latched onto the zooming part, as if you won't notice the defects in an image without it. The zooming is done so he doesn't have to say "hey people, just look at that corner of the painting, in the upper left corner of the screen, next to the opened fridge door". It's a quality-of-life feature in the videos, but people who don't like certain results interpret this as "oh, you need to zoom 400% to see that defect, it doesn't matter when playing normally". No, it matters and you will see the defect playing normally.
 

DukeNukem00

Banned
AMD killed it, and the biggest thing no one seems to be talking about is that they don't need any tensor cores. No extra hardware required, which means smaller and cheaper chips, lower temps, lower cooling requirements and higher cost savings.

Lisa Su is a fucking genius. Now I just hope she pays devs to implement this as a feature like Nvidia did with RTX and DLSS back in the early days. You can't expect devs to support this on their own when they are struggling with delays; you have to invest in it and write it off as marketing. She did have Ubisoft work with AMD on their ray tracing, which led to fantastic results almost on par with Nvidia's RTX implementation, so hopefully she continues to do that going forward.

I chose to buy a 3080 over the 6900xt because of DLSS, but now after seeing AMD cards performing better than 30 series cards in the Matrix demo, FSR being as good as DLSS and even ray tracing performance improving on AMD cards, I am leaning towards the 6900xt. Way lower power draw, 6GB more VRAM, and even cheaper than most 3080 models? Why not.


FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video with less fanboyism. The only reason AMD did this is because of Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.
 

Riky

$MSFT
Will be interesting to see the Deathloop results once it comes to Xbox, that RT mode should run a lot better with this, can't wait to see it.
 

SlimySnake

Flashless at the Golden Globes
FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video with less fanboyism. The only reason AMD did this is because of Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.
LMAO, get some help. I have read two different articles on this, by TechPowerUp and Tom's Hardware, and just posted results from the Hardware Unboxed video. Some praise for AMD seems to have triggered you. This ain't healthy, chief.
 

DukeNukem00

Banned
LMAO, get some help. I have read two different articles on this, by TechPowerUp and Tom's Hardware, and just posted results from the Hardware Unboxed video. Some praise for AMD seems to have triggered you. This ain't healthy, chief.

You must think that going off the rails instead of staying on topic makes you sound smart. It doesn't; it just emphasises the clownery. Like I said, and the aspects I presented: instead of reading more amateurish, surface-level articles, especially the PR piece from TechPowerUp, watch Alex's video with visual proof. As in, not someone's opinion, but hard facts presented in front of your eyes.

You know what would be unhealthy? Buying an AMD card that runs FSR 2 at half the speed of Ampere. Look at the video. Alex was so baffled by this that he used two different testing methods to make sure.
 

SlimySnake

Flashless at the Golden Globes
You must think that going off the rails instead of staying on topic makes you sound smart. It doesn't; it just emphasises the clownery. Like I said, and the aspects I presented: instead of reading more amateurish, surface-level articles, especially the PR piece from TechPowerUp, watch Alex's video with visual proof. As in, not someone's opinion, but hard facts presented in front of your eyes.

You know what would be unhealthy? Buying an AMD card that runs FSR 2 at half the speed of Ampere. Look at the video. Alex was so baffled by this that he used two different testing methods to make sure.
Nah, your post was so over the top and hysterical that I didn't feel it warranted a proper rebuttal. Your rant implied I'm an AMD fanboy even though I specifically stated I own a 3080. You went on a tangent about AMD CPUs like some deranged hobo, even though it has nothing to do with the topic, once again implying that I am an AMD fanboy when I actually own an i7-11700K lmao.

Your insistence on labeling TechPowerUp, Hardware Unboxed, and Tom's Hardware as amateurish and surface-level is fucking hilarious. I don't even know what to say to that. No wonder you think I am an AMD fanboy, seeing as how you write off reputable PC websites and YouTube channels just for giving this thing a favorable review.

P.S. You're chastising me for straying off topic when your post literally went off the rails after your first sentence. Tell me which of these statements of yours is on the topic of FSR 2.0:

  1. Maybe watch the video with less fanboyism.
  2. The only reason AMD did this is because of Nvidia.
  3. They can't release proprietary tech because their market share is non-existent.
  4. They're not good guys doing you a favor; Nvidia forced their hand.
  5. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards.
  6. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.
If this is staying on topic then you're right, I was way off topic. :messenger_tears_of_joy:
 

DukeNukem00

Banned
Nah, your post was so over the top and hysterical that I didn't feel it warranted a proper rebuttal. Your rant implied I'm an AMD fanboy even though I specifically stated I own a 3080. You went on a tangent about AMD CPUs like some deranged hobo, even though it has nothing to do with the topic, once again implying that I am an AMD fanboy when I actually own an i7-11700K lmao.

Your insistence on labeling TechPowerUp, Hardware Unboxed, and Tom's Hardware as amateurish and surface-level is fucking hilarious. I don't even know what to say to that. No wonder you think I am an AMD fanboy, seeing as how you write off reputable PC websites and YouTube channels just for giving this thing a favorable review.

P.S. You're chastising me for straying off topic when your post literally went off the rails after your first sentence. Tell me which of these statements of yours is on the topic of FSR 2.0:

  1. Maybe watch the video with less fanboyism.
  2. The only reason AMD did this is because of Nvidia.
  3. They can't release proprietary tech because their market share is non-existent.
  4. They're not good guys doing you a favor; Nvidia forced their hand.
  5. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards.
  6. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.
If this is staying on topic then you're right, I was way off topic. :messenger_tears_of_joy:

I find it a bit troublesome that deranged people like you walk among us. You could probably pass as sane until you open your mouth.

I see you're still commenting sideways without having watched the video, coming to TechPowerUp's defense with no awareness of the video that rebuts their claims. Keep going, you're amusing the people reading here.

Maybe at some point tomorrow you will actually have a clue about the subject of this thread.
 

Zathalus

Member
I think it's pretty obvious that DLSS is the superior technology, especially when considering fine detail stability. DLSS runs faster as well.

That being said, FSR is almost as good and a solid attempt by AMD, but there's no real reason to use it over DLSS if you have an RTX card.
 

assurdum

Banned
FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video with less fanboyism. The only reason AMD did this is because of Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.
The hell is that post? It almost seems like FSR 2.0 being so good hurts your feelings. Considering it doesn't need dedicated hardware, I find it outstanding that it's comparable to DLSS 2.0. A few years ago, no one would have believed such a result was possible without tensor cores.
 

elliot5

Member
As Alex says, it's not a killer, but it is a competent TAAU solution with more flexibility, like dynamic resolution support and so on. Nvidia RTX users should stick to DLSS, but FSR 2.0 is a great step up, and hopefully AMD makes improvements to how it handles alpha effects. Maybe it being open source will lead to quicker fixes and tweaks to the algorithm. They did fine work with thin structures like railings.
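On the dynamic-resolution point, here's a minimal sketch of the usual feedback loop, my own simplification rather than AMD's actual controller: the game nudges the internal render scale each frame toward a GPU budget, and the upscaler always reconstructs to the same output resolution.

```python
# Minimal dynamic-resolution controller sketch. Real engines use smoother
# feedback (and expose min/max scale limits), but the idea is the same.

TARGET_MS = 16.67                  # GPU budget for 60 fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0    # per-axis internal resolution scale

def next_scale(current_scale: float, last_gpu_ms: float) -> float:
    """Nudge the internal resolution scale toward the frame-time target."""
    # Frame time scales roughly with pixel count, i.e. with scale squared.
    error = TARGET_MS / last_gpu_ms
    proposed = current_scale * error ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, proposed))

scale = 1.0
for gpu_ms in (20.0, 18.0, 16.0, 15.0):   # hypothetical measured GPU times
    scale = next_scale(scale, gpu_ms)
    print(f"measured {gpu_ms:.1f} ms -> render at {scale:.2f}x per axis, upscale to native")
```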
 

DaGwaphics

Member
Great video. Good to see AMD moving in the right direction. The issue with the animations is a bit disappointing. If they can get that sorted I don't see why anyone would choose native res anymore.
 

DukeNukem00

Banned
The hell is that post? It almost seems like FSR 2.0 being so good hurts your feelings. Considering it doesn't need dedicated hardware, I find it outstanding that it's comparable to DLSS 2.0. A few years ago, no one would have believed such a result was possible without tensor cores.

Some of you really seem ill-equipped to follow something as simple as a forum conversation. Me pointing out the results from Alex's video is me being hurt? Who exactly didn't believe these results were possible? It's exactly how solutions like checkerboarding or Insomniac's reconstruction work on console, or Epic's TSR in Unreal 5. It does a good job, but it also comes with a number of flaws, neither small nor few in number; it's up to each individual what compromises they want to live with to get extra performance.

Let's just hope DLSS won't be blocked the way AMD liked to do in its sponsored games, because DLSS is still better on every point if you have a GPU for it.
 

Mister Wolf

Member
Results are a bit different because Alex used DLSS 2.3 instead of DLSS 2.0.
From what I remember, the 2.3 update specifically addressed the defects still present in FSR 2.0.


And from what I understood of the performance discussion, those "bloated" floating-point numbers on Ampere are actually being put to use here.

Seems absurd that other sites didn't use the latest version of DLSS.
 

assurdum

Banned
Some of you really seem ill-equipped to follow something as simple as a forum conversation. Me pointing out the results from Alex's video is me being hurt? Who exactly didn't believe these results were possible? It's exactly how solutions like checkerboarding or Insomniac's reconstruction work on console, or Epic's TSR in Unreal 5. It does a good job, but it also comes with a number of flaws, neither small nor few in number; it's up to each individual what compromises they want to live with to get extra performance.

Let's just hope DLSS won't be blocked the way AMD liked to do in its sponsored games, because DLSS is still better on every point if you have a GPU for it.
Good luck getting such results with CBR or Insomniac's solution from 1/4 of native resolution, the way FSR 2.0 does. Not even Alex claims such an absurdity. Oh, and DLSS 2 has flaws too, but sure, let's pretend a purely software-based solution should have fewer.
 
I watched the Hardware Unboxed video on FSR 2.0, and unless DF finds something different, I think the following holds true.

- 5500xt (Overclocked XSS GPU at 5.2 Tflops) - 24% gain in FSR Quality mode. 50% in FSR Performance.
- 5700xt (9.7tflops) - 37% gain in FSR Quality mode. 61% gain in Performance.
- 3080 - 38% Quality. 74% Performance mode.
- 3080 DLSS is 4-6% better than FSR 2.0.

According to AMD, the more powerful the GPU, the faster the FSR processing time, which leads to better FSR performance. So something like an XSS stands to gain less than the XSX, which is far more powerful.

Reminder that the Virgin and SeX GPUs are RDNA 2 (TWO), with a bit of an improvement in the rate of packed math over RDNA 1 (One). So the potential gains should be a bit higher than in your calculations.

AMD killed it, and the biggest thing no one seems to be talking about is that they don't need any tensor cores. No extra hardware required, which means smaller and cheaper chips, lower temps, lower cooling requirements and higher cost savings.

DLSS doesn't need Tensor cores. Their presence just means the code is executed faster, with the additional advantage of also being executed in parallel with other tasks.
BTW, RDNA 3 is coming soon and is expected to bring big improvements in this area. There's a possibility that it'll have contributions from Xilinx for better matrix math.
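To put the packed-math point in perspective, a back-of-the-envelope sketch (all figures hypothetical; it ignores memory bandwidth and occupancy, and the fact that dedicated units can also overlap with shading work):

```python
# Back-of-the-envelope: time for a fixed amount of math at different rates.
# TFLOPS figures and the operation count are hypothetical, for illustration only.

def pass_time_ms(ops: float, tflops: float) -> float:
    """Milliseconds to execute `ops` operations at `tflops` * 1e12 ops per second."""
    return ops / (tflops * 1e12) * 1000.0

OPS = 2e9                    # pretend the upscale pass needs ~2 billion ops per frame
fp32 = 10.0                  # baseline FP32 throughput in TFLOPS
fp16_packed = fp32 * 2       # rapid packed math: two FP16 ops per FP32 lane

print(f"FP32 only:   {pass_time_ms(OPS, fp32):.2f} ms")
print(f"Packed FP16: {pass_time_ms(OPS, fp16_packed):.2f} ms")
```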

FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video with less fanboyism. The only reason AMD did this is because of Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.

You miss the point.
When Zen launched, the Intel CPUs were still better, weren't they? But Zen was competitive and offered better prices... just like AMD GPUs now. If, in the next generation, Nvidia continues with more expensive GPUs while AMD offers the same features for lower prices, it's inevitable that AMD will gain market share. AMD doesn't need to beat Nvidia in one day; that's not the goal.

I think it's pretty obvious that DLSS is the superior technology, especially when considering fine detail stability. DLSS runs faster as well.

That being said, FSR is almost as good and a solid attempt by AMD, but there's no real reason to use it over DLSS if you have an RTX card.

Instead of "superior", it's more correct to say "more mature".
Remember, Alex tested version 2.3 here while the other sites tested version 2.0.



Seems absurd that other sites didn't use the latest version of DLSS.

The opposite. The official version this game uses right now is 2.0; it was Alex who went into the game files and side-loaded version 2.3 with new DLLs.

EDIT: DAMMIT! I double-checked and Deathloop really does use DLSS 2.3, but sites keep saying "version 2.0" without identifying exactly which 2.x build it is. Is this actually better for FSR 2.0, though? Because I thought it was being compared against an older version of the competitor, I assumed the difference in quality was actually greater.
 

Xyphie

Member
How do I install DLSS 2.3? I'm up to date on GeForce drivers. I thought I already had the latest version of DLSS.

You don't really install it via the driver. A game ships with a given version of DLSS as a DLL file (nvngx_dlss.dll); you can update it by dropping a newer version into the game directory. Deathloop ships with 2.3.0; the latest version is 2.4.0.
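A minimal sketch of that swap (the paths here are hypothetical examples; nvngx_dlss.dll is the file named above), keeping a backup of the DLL the game shipped with so it can be restored:

```python
# Sketch of the manual DLSS DLL swap described above. Paths are examples only;
# back up the original so you can restore it if the game rejects the new DLL.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\Deathloop")          # hypothetical install location
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")  # the newer DLSS DLL you obtained

target = game_dir / "nvngx_dlss.dll"
backup = game_dir / "nvngx_dlss.dll.bak"

if not backup.exists():
    shutil.copy2(target, backup)   # keep the version the game shipped with
shutil.copy2(new_dll, target)      # drop in the newer version
print(f"Replaced {target} (backup at {backup})")
```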
 

THE DUCK

voted poster of the decade by bots
AMD killed it, and the biggest thing no one seems to be talking about is that they don't need any tensor cores. No extra hardware required, which means smaller and cheaper chips, lower temps, lower cooling requirements and higher cost savings.

Lisa Su is a fucking genius. Now I just hope she pays devs to implement this as a feature like Nvidia did with RTX and DLSS back in the early days. You can't expect devs to support this on their own when they are struggling with delays; you have to invest in it and write it off as marketing. She did have Ubisoft work with AMD on their ray tracing, which led to fantastic results almost on par with Nvidia's RTX implementation, so hopefully she continues to do that going forward.

I chose to buy a 3080 over the 6900xt because of DLSS, but now after seeing AMD cards performing better than 30 series cards in the Matrix demo, FSR being as good as DLSS and even ray tracing performance improving on AMD cards, I am leaning towards the 6900xt. Way lower power draw, 6GB more VRAM, and even cheaper than most 3080 models? Why not.

This is huge news; those are massive improvements, and they can be applied on PC and console without special hardware.

Have they improved anything on the AI processing front or is that still a big edge for Nvidia?
 
I think it's pretty obvious that DLSS is the superior technology, especially when considering fine detail stability. DLSS runs faster as well.

That being said, FSR is almost as good and a solid attempt by AMD, but there's no real reason to use it over DLSS if you have an RTX card.
That's all AMD needed to do: have something close enough that it doesn't really matter. Since their approach works on everything, it will become the standard.
 

01011001

Banned
That's all AMD needed to do: have something close enough that it doesn't really matter. Since their approach works on everything, it will become the standard.

Well, it does matter. DLSS still has better performance and quality.

Way more interesting would be comparing it with the reconstruction methods other engines already use, like Unreal's TSR or the reconstruction the last couple of Call of Duty games have used.
Also comparing it against checkerboard rendering, which I still think is extremely underused.
 

Alexios

Cores, shaders and BIOS oh my!
For all that footage, he really couldn't try testing it with sharpening down to 5 or 0 to see if that has any effect on some of the more grainy-looking in-motion stuff? I'd think it's plausible it affects them (unless others have shown that's not the case). He also seemed to miss something in the first movement test, where he says the native TAA has obvious ghosting, but the FSR shot there also showed a very weird grainy effect trailing the antenna over the grass background that I don't think was commented on. Anyway, it's cool that it still actually beats native on occasion. I thought it was already a great option in Ghostwire: Tokyo and Death Stranding DC with the 1.0 implementation on a GTX 1080; it certainly didn't look anywhere near as bad as playing 1080p (or whatever the Ultra setting reduced the resolution to) stretched to 1440p without such upscaling techniques, which just gets you an aliased mess.
 

Mister Wolf

Member
Well, it does matter. DLSS still has better performance and quality.

Way more interesting would be comparing it with the reconstruction methods other engines already use, like Unreal's TSR or the reconstruction the last couple of Call of Duty games have used.
Also comparing it against checkerboard rendering, which I still think is extremely underused.

It's funny that FSR 2.0 actually runs better on Nvidia GPUs. They're really digging their own grave.
 
DLSS 2.3
[GIF]

VS

FSR 2.0
[GIF]



FSR 2 has a looooong way to go still. Too much flickering and too much artifacting.
 

Buggy Loop

Member
FSR 2 is worse than DLSS universally: at detail rendering and stability in motion, at hair, at transparency, at animation, at particles, and the cost on AMD cards is almost double that on Ampere. It's a good solution for lower-end cards, but DLSS is better in every way and runs better. Maybe watch the video with less fanboyism. The only reason AMD did this is because of Nvidia. They can't release proprietary tech because their market share is non-existent. They're not good guys doing you a favor; Nvidia forced their hand. Just look at their CPUs if you feel like stanning for AMD. They were behind Intel for nearly 20 years, and not a second after they reached parity in gaming (they were never ahead) they stopped releasing budget CPUs, raised prices on every single model and almost refused to allow the CPUs on older motherboards. Only after Intel wiped the floor with them again with Alder Lake did they release budget CPUs. Choose another company to be a fan of, because AMD isn't it.

Also, yeah, there's hardware on RDNA 2 and RDNA 1 to help these calculations; they're basically juggling the pipeline to do INT4/8/16/32 math and then going back to FP32 for shading. That has a performance impact, of course, and with RT sitting in the same pipeline it could get even more choked.

Wow, they have no hardware, cool!
Well, no: they have everything in the same pipeline, math, shading and RT blocks. It has a cost. Tensor cores are there to speed it up, and DLSS is only one of many things planned for tensor cores.

Dedicated ASICs like the RT cores also give Nvidia an advantage.

So the question is: with Nvidia dedicating so much silicon to tensor and RT cores, how the fuck do they still match or beat AMD even in pure rasterization? That's the question.
 

01011001

Banned
Ah shit, I was hoping it would work on them since FSR worked....

I think people really have to understand that FSR 1.0 is quite literally just a slightly better sharpening filter... I think all the comparisons against DLSS and the articles calling it the answer to DLSS made it sound like it was comparable.

But FSR 1.0 isn't even really related to FSR 2.0 other than by name. It's kinda weird that they use the same name for both, tbh, considering one is simply a filter and the other a reconstruction algorithm.
 

01011001

Banned
Yeah, people here are way too harsh on it. Of course it's not as good as DLSS at lower internal resolutions, but at 4K quality setting or even 1440p quality setting it's super close. This is gonna be huge for consoles.

For consoles it's still not necessarily always the answer.

Look at Ghostwire: Tokyo. That game has TSR support, which is almost identical to FSR 2.0, but on console they only use FSR 1.0 for upscaling, because TSR was most likely too expensive and would have had a bad impact on performance.

Of course they could have run it at an even lower internal resolution and then applied TSR to it, but maybe they would have needed to go too low for their tastes.

So it's still hard to tell how much use FSR 2.0 will actually get on console.

Depending on performance impact, Intel's XeSS might actually be the better answer for consoles, but that depends on how fast consoles can run it without dedicated hardware.
 

azertydu91

Hard to Kill
I think people really have to understand that FSR 1.0 is quite literally just a slightly better sharpening filter... I think all the comparisons against DLSS and the articles calling it the answer to DLSS made it sound like it was comparable.

But FSR 1.0 isn't even really related to FSR 2.0 other than by name. It's kinda weird that they use the same name for both, tbh, considering one is simply a filter and the other a reconstruction algorithm.
Yep, I have to admit I'm generally pretty well informed tech-wise, but I just assumed it was a simple iteration, and I believe the vast majority of people are like me on this. I was really excited, especially since I recently downloaded RPCS3 and the PS360 gen was an AA nightmare (and those blurry textures...), so I was hoping FSR 2.0 would be implemented straight away, but I don't think that's feasible anymore... Maybe someday down the line it will be (the people creating emulators are talented AF); it wouldn't surprise me.
 

nkarafo

Member
Can this be used in emulators like Cemu or RPCS3?
Why though?

When you emulate, all the heavy lifting is done by the CPU. Other than that, any current low/mid-range GPU should be able to handle any system that can be emulated (the most advanced being the PS3) at higher resolutions, without needing a performance-boosting image reconstruction technique.
 

amigastar

Member
Why though?

When you emulate, all the heavy lifting is done by the CPU. Other than that, any current low/mid-range GPU should be able to handle any system that can be emulated (the most advanced being the PS3) at higher resolutions, without needing a performance-boosting image reconstruction technique.
Yes, you're right. Emulation is done by the CPU, so these image reconstruction techniques wouldn't help much.
 