
AMD's GAMING SUPER RESOLUTION Patent

Jose92

Member
Abstract:
A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.



DETAILED DESCRIPTION

[0008] Conventional super-resolution techniques include a variety of conventional neural network architectures which perform super-resolution by upscaling images using linear functions. These linear functions do not, however, utilize the advantages of other types of information (e.g., non-linear information), which typically results in blurry and/or corrupted images. In addition, conventional neural network architectures are generalizable and trained to operate without significant knowledge of an immediate problem. Other conventional super-resolution techniques use deep learning approaches. The deep learning techniques do not, however, incorporate important aspects of the original image, resulting in lost color and lost detail information.

[0009] The present application provides devices and methods for efficiently super-resolving an image, which preserves the original information of the image while upscaling the image and improving fidelity. The devices and methods utilize linear and non-linear up-sampling in a wholly learned environment.

[0010] The devices and methods include a gaming super resolution (GSR) network architecture which efficiently super resolves images in a convolutional and generalizable manner. The GSR architecture employs image condensation and a combination of linear and nonlinear operations to accelerate the process to gaming viable levels. GSR renders images at a low quality scale to create high quality image approximations and achieve high framerates. High quality reference images are approximated by applying a specific configuration of convolutional layers and activation functions to a low quality reference image. The GSR network approximates more generalized problems more accurately and efficiently than conventional super resolution techniques by training the weights of the convolutional layers with a corpus of images.
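The patent text doesn't spell out layer counts or kernel sizes, but the structure it describes (a linear branch, a non-linear branch with activations, and a final rearrangement of the combined low-resolution feature maps into a higher-resolution image) maps naturally onto a small convolutional model. Below is a minimal PyTorch sketch built on those assumptions; the class name GSRSketch and every layer size are illustrative guesses, not the patented implementation.

import torch
import torch.nn as nn

class GSRSketch(nn.Module):
    """Illustrative two-branch super-resolution model loosely following the
    patent's description; layer counts and sizes are assumptions."""

    def __init__(self, channels=3, scale=2, features=32):
        super().__init__()
        # Non-linear branch: convolutions with activations (the "non-linear upscaling network")
        self.nonlinear = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels * scale * scale, kernel_size=3, padding=1),
        )
        # Linear branch: a single convolution with no activation (the "linear upscaling network")
        self.linear = nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1)
        # Rearranges the stack of low-resolution maps into one high-resolution image
        self.to_hires = nn.PixelShuffle(scale)

    def forward(self, x):
        # Combine the linear and non-linear low-resolution feature maps...
        combined = self.nonlinear(x) + self.linear(x)
        # ...then convert them into pixels of the higher-resolution output
        return self.to_hires(combined)

low_res = torch.rand(1, 3, 540, 960)      # e.g. a 960x540 input frame
high_res = GSRSketch(scale=2)(low_res)    # -> shape (1, 3, 1080, 1920)
print(high_res.shape)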

[0011] A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate linear down-sampled versions of the input image by down-sampling the input image via a linear upscaling network and generate non-linear down-sampled versions of the input image by down-sampling the input image via a non-linear upscaling network. The processor is also configured to convert the down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution and provide the output image for display.

[0012] A processing device is provided which includes memory and a processor configured to receive an input image having a first resolution. The processor is also configured to generate a plurality of non-linear down-sampled versions of the input image via a non-linear upscaling network and generate one or more linear down-sampled versions of the input image via a linear upscaling network. The processor is also configured to combine the non-linear down-sampled versions and the one or more linear down-sampled versions to provide a plurality of combined down-sampled versions. The processor is also configured to convert the combined down-sampled versions of the input image into pixels of an output image having a second resolution higher than the first resolution by assigning, to each of a plurality of pixel blocks of the output image, a co-located pixel in each of the combined down-sampled versions and provide the output image for display.
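The conversion step in paragraph [0012], where each pixel block of the output image is assigned one co-located pixel from each of the combined down-sampled versions, reads like the depth-to-space (pixel shuffle) rearrangement commonly used in super-resolution networks. A small sketch of that mapping, assuming a 2x scale factor and four down-sampled versions:

import torch
import torch.nn.functional as F

def pixel_blocks_from_versions(versions):
    """Fill each 2x2 block of the output with the co-located pixel from each of
    four low-resolution versions (a depth-to-space rearrangement). Assumes
    `versions` has shape (4, H, W) and a scale factor of 2."""
    v, h, w = versions.shape
    assert v == 4, "one version per pixel in a 2x2 output block"
    out = torch.empty(h * 2, w * 2, dtype=versions.dtype)
    out[0::2, 0::2] = versions[0]   # top-left pixel of every 2x2 block
    out[0::2, 1::2] = versions[1]   # top-right
    out[1::2, 0::2] = versions[2]   # bottom-left
    out[1::2, 1::2] = versions[3]   # bottom-right
    return out

versions = torch.arange(4 * 2 * 3, dtype=torch.float32).reshape(4, 2, 3)
manual = pixel_blocks_from_versions(versions)
# The same rearrangement via PyTorch's built-in depth-to-space operator:
builtin = F.pixel_shuffle(versions.unsqueeze(0), 2)[0, 0]
print(torch.equal(manual, builtin))   # True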

 

ZywyPL

Banned
It's reconstruction and anti-aliasing in one. Checkerboard is much more advanced than fanboys give it credit for.

They are two techniques to tackle the same problem but with completely different results. Don't be fooled by Sony fanboys.

The results vary on a game-by-game basis: sometimes DLSS works wonders, other times it misses some details or creates artifacts. Same thing for CBR: sometimes it gets the job done really well, other times the image is still blurry. DLSS is only applicable on Nvidia RTX cards, and even then it is not very popular among the most demanding AAA titles that would really need it, to say the least. CBR, on the other hand, is MUCH more popular, but the results often fall short.

And that's why IMO it's so crucial to have one generic solution that can be applied to most/any game just like that, on any hardware, with consistently good results. So fingers crossed AMD delivers.
 

supernova8

Banned
The results vary on a game-by-game basis: sometimes DLSS works wonders, other times it misses some details or creates artifacts. Same thing for CBR: sometimes it gets the job done really well, other times the image is still blurry. DLSS is only applicable on Nvidia RTX cards, and even then it is not very popular among the most demanding AAA titles that would really need it, to say the least. CBR, on the other hand, is MUCH more popular, but the results often fall short.

And that's why IMO it's so crucial to have one generic solution that can be applied to most/any game just like that, on any hardware, with consistently good results. So fingers crossed AMD delivers.

I'd be interested to hear from actual developers working with both technologies. It may well be that DLSS provides the better upscaling result (less noise) but if it takes 2x more effort/time etc to do it then maybe the AMD solution is a better all-round option. Or it could be the other way around.
 

M1chl

Currently Gif and Meme Champion
It's reconstruction and anti-aliasing in one. Checkerboard is much more advanced than fanboys give it credit for.
Checkerboarding breaks the lines and most of the time it looks "not stable", with lots of lines breaking, and it's sort of like being out of focus, or rather like if you have fucked up only one eye and the second one is 20/20. I can't find the picture of Metro Exodus Enhanced Edition where it's pretty clear that DLSS 2.0 is absolutely unbeatable. CB is probably the worst technique for upscaling a picture.

Everything might not be apparent on a 1080p TV or if you have a projector, but on a 4K TV, not to mention on a PC monitor, it's not a contest.

I will edit this post when I find it.
 

jhjfss

Member
Honestly checkerboard accomplishes basically the same thing. Don't be fooled by Nvidia fanboys.
Mr Burns Drugs GIF
 

GuinGuin

Banned


Checkerboarding breaks the lines and most of the time it looks "not stable", with lots of lines breaking, and it's sort of like being out of focus, or rather like if you have fucked up only one eye and the second one is 20/20. I can't find the picture of Metro Exodus Enhanced Edition where it's pretty clear that DLSS 2.0 is absolutely unbeatable. CB is probably the worst technique for upscaling a picture.

Everything might not be apparent on a 1080p TV or if you have a projector, but on a 4K TV, not to mention on a PC monitor, it's not a contest.

I will edit this post when I find it.

Depends on how well you implement it. Looks great in God of War.
 

Jose92

Member
Instead of copying resetera threads maybe look at the fact that it was filed in 2019.
Well, do you know that patents are not granted in a day? Companies file plenty of patents yearly, and this patent may or may not actually be how it will work.

I thought it was an interesting patent for some discussion on the repercussions of such a technique being implemented on next/current generation consoles, but anyway, if you don't like it just press ignore on the thread; you don't have to be a dickhead.
 
Well, do you know that patents are not granted in a day? Companies file plenty of patents yearly, and this patent may or may not actually be how it will work.

I thought it was an interesting patent for some discussion on the repercussions of such a technique being implemented on next/current generation consoles, but anyway, if you don't like it just press ignore on the thread; you don't have to be a dickhead.


I'm not a dickhead, but you shouldn't just repost what reeeeera posts there. And this is probably not applicable to what AMD is doing today. It's too old.
 

Andodalf

Banned
Depends on how well you implement it. Looks great in God of War.

CB 1800p looked like a blurry 1440p with meh anti-aliasing. 4K DLSS with an internal 1440p has similar perf, gives an image as good as or better than native 4K, and acts as very good AA.



There’s a reason Sony studios like ND didn’t use CB and just went for native 1440p and GOAT tier TAA. Insomniac too favored temporal tech over CB.
 

Sentenza

Member
Honestly checkerboard accomplishes basically the same thing. Don't be fooled by Nvidia fanboys.
I mean, no, it does not.
One of the things that made DLSS so broadly discussed to begin with is PRECISELY that, coming from other "image reconstruction" techniques like checkerboard rendering that smeared the whole image like some vaseline shit, it was impressive how much detail was maintained with the Nvidia solution.
 

assurdum

Banned
CB 1800p looked like a blurry 1440p with meh anti-aliasing. 4K DLSS with an internal 1440p has similar perf, gives an image as good as or better than native 4K, and acts as very good AA.



There’s a reason Sony studios like ND didn’t use CB and just went for native 1440p and GOAT tier TAA. Insomniac too favored temporal tech over CB.
1800p CBR is definitely sharper than 1440p if the CBR tech is decent. ND uses a very aggressive TAA and that's probably the reason why CBR is out of the question. In RDR2 the aggressive TAA screws up the CBR reconstruction.
 

FireFly

Member
Instead of copying resetera threads maybe look at the fact that it was filed in 2019.
In the past we've seen patents pop up just before a feature was due to be implemented/announced. For example, the one that seemed to relate to InfinityCache was filed in 2019 but appeared just before the Big Navi announcement event.


Given how long it takes to develop these features, it makes sense that AMD were already working on FSR in 2019. And the fact that the patent explicitly calls it "super-resolution" hardly leaves much room for doubt.
 

Caio

Member
Honestly checkerboard accomplishes basically the same thing. Don't be fooled by Nvidia fanboys.

I'm happy AMD is working on something very similar to Nvidia DLSS, and I'm very curious to see how it will compare to it.
Death Stranding on PC with DLSS 2.0 clearly shows much superior IQ compared to PS4 Pro checkerboarding, especially at 400% or 800% zoom; the difference is night and day. I'm a BIG PlayStation fan, but when something is clearly superior, there's nothing to do apart from acknowledge it.
I hope AMD will work hard on their FidelityFX Super Resolution, so it can be implemented in PS5 and XSX games, and implemented in FULL GLORY in 2027(?) on PS6 and the next Xbox.
 
While the description of the methodology is cool and all, the most important thing is performance.

Until we see it employed on real hardware it's hard to know how efficient (or not) it will be.
 

llien

Member
And it's still nowhere near what DLSS can accomplish
Bovine faces.
Take off those glasses of yours that make you NOT see that the glorified TAA derivative by NV (known as DLSS 2.0) exhibits ALL the cons of TAA derivatives: wiping of fine detail, added blur, terrible with small things, terrible with quickly moving items.

It does improve lines. A lot. With a significant performance impact.

FireFly
Let us compare double 7870 to a PC, shall we...

 

Haggard

Banned
Bovine faces.
Take off those glasses of yours that make you NOT see that the glorified TAA derivative by NV (known as DLSS 2.0) exhibits ALL the cons of TAA derivatives: wiping of fine detail, added blur, terrible with small things, terrible with quickly moving items.
I was briefly thinking about going over the whole "waaah waaah Nvidia's RTX is just fakery and DLSS is only glorified TAA" bullshit bait from our usual troll suspect here again... but seriously, it's a waste of time.

On a sunny day this guy'd claim the sky is red as long as it fits his agenda, cherry picking to the extreme, ignoring everything that doesn't fit...
Ignored to spare me and everyone else the time.
 

GuinGuin

Banned
Bovine faces.
Take off those glasses of yours that make you NOT see that the glorified TAA derivative by NV (known as DLSS 2.0) exhibits ALL the cons of TAA derivatives: wiping of fine detail, added blur, terrible with small things, terrible with quickly moving items.

It does improve lines. A lot. With a significant performance impact.

FireFly
Let us compare double 7870 to a PC, shall we...



Boom goes the dynamite!
 

KyoZz

Tag, you're it.
Well, do you know that patents are not granted in a day? Companies file plenty of patents yearly, and this patent may or may not actually be how it will work.

I thought it was an interesting patent for some discussion on the repercussions of such a technique being implemented on next/current generation consoles, but anyway, if you don't like it just press ignore on the thread; you don't have to be a dickhead.
I mean...



So yeah, if you want to discuss, why not just bump one of those threads? Especially if it's just copy/pasted content and not actual discussion.
 

FireFly

Member
FireFly
Let us compare double 7870 to a PC, shall we...


I was comparing with checkerboard rendering, not CAS. Though it is interesting that CAS is able to recover details that DLSS does not. However, comparing the native image to the CAS versions in the DSOG article, the CAS ones look noticeably sharper to me. So it looks like the native image is already pretty soft, and would benefit from having a sharpening filter applied, independently of upscaling. In principle, there's nothing stopping you from using DLSS and then applying a sharpening filter on top. And if FSR is also a temporal solution, developers may want to combine it with CAS as well.
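For what it's worth, a post-upscale sharpening pass is cheap to sketch. The snippet below is just a generic 3x3 sharpening kernel applied per channel in PyTorch (a hypothetical simple_sharpen helper), not AMD's actual CAS shader, which additionally adapts its strength to local contrast:

import torch
import torch.nn.functional as F

def simple_sharpen(image, amount=0.5):
    """Generic 3x3 sharpening blended with identity; `image` is an NCHW tensor
    with values in [0, 1]. Not CAS, just a plain sharpening convolution."""
    sharpen = torch.tensor([[ 0.0, -1.0,  0.0],
                            [-1.0,  5.0, -1.0],
                            [ 0.0, -1.0,  0.0]])
    identity = torch.tensor([[0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0],
                             [0.0, 0.0, 0.0]])
    kernel = amount * sharpen + (1.0 - amount) * identity
    c = image.shape[1]
    weight = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)   # one kernel per channel
    return F.conv2d(image, weight, padding=1, groups=c).clamp(0.0, 1.0)

upscaled = torch.rand(1, 3, 1080, 1920)         # e.g. an upscaler's output frame
sharpened = simple_sharpen(upscaled, amount=0.5)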
 

yamaci17

Member
Nvidia mostly omits sharpening from DLSS and lets users add sharpening through NVCP or GeForce Experience. Sharpening is not an ideal solution. In static pictures it will make it look like textures gained detail, but in actual gameplay the effect can be jarring and usually generates the feeling of an "oversharpened" picture. If you want the same effect, you can practically use Nvidia's own sharpening filter, so... it is a pretty much pointless argument. But as I stated previously, DLSS, FidelityFX and sharpening each only look "good" in static pictures.

And finally, I really think that all of these are unnecessary for TVs. Almost everyone I see on the forums says 1200-1440p games look nearly native 4K on 4K TVs. I've never seen one myself, but if that's true, then there's no point for consoles to push for good upscalers. People are already content with what they're getting.

It's the monitors that have a huge problem. Even a slight variation from native resolution can be highly noticeable up close. That's why Nvidia/AMD are trying to do good upscalers, especially Nvidia, because the majority of their player base uses 1440p/4K monitors. From my point of view, dedicated upscaling tech is only worth it for monitors. And there aren't many gamers who use monitors for consoles.

Funnily enough, RE Village's checkerboarded 4K on consoles looks better than PC's native 4K in most instances. Pretty wild stuff.
 

Sentenza

Member
Bovine faces.
Take off those glasses of yours that make you NOT see that the glorified TAA derivative by NV (known as DLSS 2.0) exhibits ALL the cons of TAA derivatives: wiping of fine detail, added blur, terrible with small things, terrible with quickly moving items.

It does improve lines. A lot. With a significant performance impact.
I mean, we had direct comparisons between DLSS and Fidelity FX even on this forum months ago and the difference was night and day in favor of the first. With images clearly showing the obvious gap in quality.
There's only so much bullshit people should take at face value from reddit in the face of overwhelming practical evidence.
 
I mean, we had direct comparisons between DLSS and Fidelity FX even on this forum months ago and the difference was night and day in favor of the first. With images clearly showing the obvious gap in quality.
There's only so much bullshit people should take at face value from reddit in the face of overwhelming practical evidence.
Everyone agreed that DLSS looks superior and performs better. It's just a little ironic to still be shitting on NVIDIA, in a thread about AMD having their own version. Why is he only shitting on NVIDIA and not AMD for this? Weird...
 

Irobot82

Member
Lots of flaws are hidden by DLSS worshipers, like the weird lines of those small bugs on Death Stranding and other stuff.

I would rather have lower native resolution or have consistent checkerboarding/temporal injection.
I wouldn't know yet... Still on a 1080... still waiting for MSRP prices...

One day I'll find out about DLSS or Super Resolution.....one day.........maybe.
 

OverHeat

« generous god »
Lots of flaws are hidden by DLSS worshipers, like the weird lines of those small bugs on Death Stranding and other stuff.

I would rather have lower native resolution or have consistent checkerboarding/temporal injection.
It's easy to see in stills, but in motion it looks excellent in most games.
 

ZywyPL

Banned
I just had a thought that it would be so damn amazing to use such tech for BC titles that have their res fixed to PS4/XB1 specs. That would complement the FPS boost the games receive. Something that's built in, working on a system level, without the need for any additional work from the devs.
 

GuinGuin

Banned
I just had a thought that it would be so damn amazing to use such tech for BC titles that have their res fixed to PS4/XB1 specs. That would complement the FPS boost the games receive. Something that's built in, working on a system level, without the need for any additional work from the devs.

That's one thing that PC gaming has going for it. Fans are making AI-improved textures using the original assets and releasing them as mods.
 

Buggy Loop

Member
Everyone agreed that DLSS looks superior and performs better. It's just a little ironic to still be shitting on NVIDIA, in a thread about AMD having their own version. Why is he only shitting on NVIDIA and not AMD for this? Weird...


Because you can find AMD fans out there who believe this Fidelity CAS is superior to DLSS. Some of them are in complete mental dissonance with the rest of the industry, and probably won't find it cool until AMD brings their copy of DLSS, likely worse, likely less performant, but somehow superior for them.

r/amd is a fucking cult, there's no other description for people defending the era of the 5700XT not working correctly for nearly a year after launch for thousands of people. Sprinkling console fanboys into the mix is just the worst.
 
Because you can find AMD fans out there who believe this Fidelity CAS is superior to DLSS. Some of them are in complete mental dissonance with the rest of the industry, and probably won't find it cool until AMD brings their copy of DLSS, likely worse, likely less performant, but somehow superior for them.

r/amd is a fucking cult, there's no other description for people defending the era of the 5700XT not working correctly for nearly a year after launch for thousands of people.
It's almost as if their CPU team and GPU team are playing tug of war, and the CPU team is clearly winning.

Those same fanboys fucking hate DLSS, yet will praise AMD's version like it was the first of its kind, the pioneer into the future, the sweet taste of Lisa Su's socks after a 15 mile run.
 