
[INTEL] Introducing XeSS answer to Nvidia’s DLSS

ErRor88

Member
It's about to get very interesting...

Intel Arc is the company’s first big foray into dedicated gaming GPUs, coming in Q1 2022, but we got another preview of some additional details for the upcoming graphics cards at the company’s Architecture Day 2021 event — including a first look at Intel’s AI-accelerated super sampling, now known as XeSS.

XeSS looks set to take on Nvidia’s own Deep Learning Super Sampling (DLSS) tech, and will make its debut alongside the first Arc GPU architecture, known as Alchemist, in early 2022. Much like DLSS, it will upscale games from a lower resolution to provide smoother frame rates without a noticeable compromise in image quality.

Intel is also using dedicated Xe-cores in its upcoming GPUs to power its XeSS technology, with dedicated Xe Matrix eXtensions (XMX) matrix engines inside to offer hardware-accelerated AI processing....


https://www.theverge.com/2021/8/19/...ss-super-sampling-ai-architecture-day-preview
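The basic pitch is simple enough that a back-of-the-envelope C sketch shows why it helps; the frame times below are made-up illustrative numbers, not Intel's figures:

```c
/* Illustrative only: made-up frame times, not Intel's numbers. Shading cost
   scales roughly with pixel count, which is why upscaling buys frame rate. */
#include <stdio.h>

int main(void) {
    const double native_4k_ms = 25.0;                                  /* hypothetical native 4K frame time  */
    const double pixel_ratio  = (1920.0 * 1080.0) / (3840.0 * 2160.0); /* 1080p is 1/4 the pixels of 4K      */
    const double upscale_ms   = 2.0;                                   /* hypothetical cost of the XeSS pass */
    const double xess_ms      = native_4k_ms * pixel_ratio + upscale_ms;

    printf("Native 4K:       %.1f ms (~%.0f fps)\n", native_4k_ms, 1000.0 / native_4k_ms);
    printf("1080p + upscale: %.1f ms (~%.0f fps)\n", xess_ms, 1000.0 / xess_ms);
    return 0;
}
```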




EDIT: Added DF interview.

 
Last edited:

Kuranghi

Member
Obviously you can see the difference, but it's a bum comparison; the livestream has messed with the results too much. Wait for a proper direct-feed comparison.
 

Bo_Hazem

Banned
It seems that XeSS runs on... EVERYTHING.

AMD could just start using it, just like FSR (it works on EVERYTHING); then on RDNA3 they could add a small hardware block that accelerates it and DONE.
No more techniques that only work on a single platform (Nvidia/DLSS).

You sure? Is it open-source like FSR?
 

Zathalus

Member
You sure? Is it open-source like FSR?
It doesn't appear to be open-source (it might be? unsure at this point), but it runs on everything. It uses DP4a to run on all non-Xe GPUs and XMX when running on a Xe GPU, so there's a slight performance advantage inherent to Xe GPUs.

The better the ML capabilities of the GPU, the better it should run, so Ampere GPUs with dedicated Tensor cores should run it better than RDNA2. But RDNA2 is capable of accelerating ML tasks quite well, so it should work well on both the XSX and PS5 (I am assuming the PS5 didn't strip out any of the ML capabilities of RDNA2).

Basically, this is going to kill FSR, and if Nvidia does not open DLSS to everything, that will die out as well.

Edit: Based on this, I have a feeling Nvidia might make DLSS run on everything and just use Tensor cores to have it run best on Nvidia hardware.
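Here's a minimal C sketch of the kind of capability-based dispatch described above; the types and names are my own guess at the structure, not Intel's actual XeSS API:

```c
/* Hypothetical dispatch: prefer XMX matrix engines on Xe GPUs, fall back to
   generic DP4a INT8 dot products on other vendors' hardware. */
typedef enum { BACKEND_NONE, BACKEND_DP4A, BACKEND_XMX } xess_backend;

typedef struct {
    int has_xmx;   /* Xe GPU with XMX matrix engines          */
    int has_dp4a;  /* packed INT8 dot-product (DP4a) support  */
} gpu_caps;

xess_backend pick_backend(gpu_caps caps) {
    if (caps.has_xmx)  return BACKEND_XMX;   /* fastest path, Arc-only hardware */
    if (caps.has_dp4a) return BACKEND_DP4A;  /* slower, but vendor-agnostic     */
    return BACKEND_NONE;                     /* no ML path: skip the upscaler   */
}
```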
 
Last edited:

Bo_Hazem

Banned
It doesn't appear to be open-source (it might be? unsure at this point), but it runs on everything. It uses DP4a to run on all non-Xe GPUs and XMX when running on a Xe GPU, so there's a slight performance advantage inherent to Xe GPUs.

The better the ML capabilities of the GPU, the better it should run, so Ampere GPUs with dedicated Tensor cores should run it better than RDNA2. But RDNA2 is capable of accelerating ML tasks quite well, so it should work well on both the XSX and PS5 (I am assuming the PS5 didn't strip out any of the ML capabilities of RDNA2).

Basically, this is going to kill FSR, and if Nvidia does not open DLSS to everything, that will die out as well.

Edit: Based on this, I have a feeling Nvidia might make DLSS run on everything and just use Tensor cores to have it run best on Nvidia hardware.

Thanks for the details! This sounds great.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
It doesn't appear to be open-source (it might be? unsure at this point), but it runs on everything. It uses DP4a to run on all non-Xe GPUs and XMX when running on a Xe GPU, so there's a slight performance advantage inherent to Xe GPUs.

The better the ML capabilities of the GPU, the better it should run, so Ampere GPUs with dedicated Tensor cores should run it better than RDNA2. But RDNA2 is capable of accelerating ML tasks quite well, so it should work well on both the XSX and PS5 (I am assuming the PS5 didn't strip out any of the ML capabilities of RDNA2).

Basically, this is going to kill FSR, and if Nvidia does not open DLSS to everything, that will die out as well.

Edit: Based on this, I have a feeling Nvidia might make DLSS run on everything and just use Tensor cores to have it run best on Nvidia hardware.

AMD doesn't have any GPUs that support DP4a instructions.
EDIT: Seems some chips do have it, but I can't find whether any consumer chips support the instruction.
So "runs on anything" might be a bit of a stretch.
Funny thing if the Series X truly has "every RDNA2 feature": DP4a is actually an optional feature of RDNA and RDNA2.

It does, however, mean that games that opt for XeSS will be compatible with Nvidia GPUs going back two generations.
Though how worthwhile that will be is yet to be seen.

Intel were NOT fucking around aye!

4K YouTube compressed.
It looks mighty impressive for a first go:
 
Last edited:

Schmick

Member
It will be so good for the consumer if Intel can come up with a competitive range of GPUs and features.

We really need it.
 
First samples of their AI upscaling:

[Image: Intel-XeSS-demo2.jpg]



[Image: Intel-XeSS-demo1.jpg]



Looks really, really good if you ask me. Now AMD has two players to catch up to.
Kind of hard to notice in the first comparison, but the 2nd one is clear as day. The color gamut also looks better with the upscale in the 2nd image, though that might be partly due to the increase in sharpness helping the colors pop a bit more.

C'mon Intel, bring the heat. A viable 3rd competitor in the GPU space could make things extremely interesting and I want that excitement!

DP4a is a packed dot-product instruction (a four-element INT8 dot product with accumulate), and it's supported with rapid packed math on INT8 and INT4 on all RDNA1 and RDNA2 GPUs except for Navi 10.
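For reference, here's a plain-C sketch of what a single DP4a operation computes (the signed INT8 variant). This is just reference semantics, not how a GPU actually issues the instruction:

```c
/* Reference semantics of DP4a: a dot product of four packed 8-bit values,
   accumulated into a 32-bit integer, done by the GPU in a single operation. */
#include <stdint.h>

int32_t dp4a_reference(uint32_t a_packed, uint32_t b_packed, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t a = (int8_t)((a_packed >> (8 * i)) & 0xFF);
        int8_t b = (int8_t)((b_packed >> (8 * i)) & 0xFF);
        acc += (int32_t)a * (int32_t)b;  /* four INT8 multiply-accumulates */
    }
    return acc;
}
```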

I think the bigger point is that with AMD you have to sacrifice compute performance in other areas, like graphics rendering, to make room for such a thing, since RDNA 2 GPUs lack the dedicated hardware acceleration units that Nvidia's, and now Intel's, GPUs have to handle this themselves.*

*I would have said "in parallel", but it's not necessarily concurrent, since the frame needs to be rendered before it can be reconstructed for upscaling.

Still, the dedicated hardware acceleration in Nvidia's and, seemingly, Intel's GPUs can do this more efficiently and at better precision than AMD's GPUs, which have to allocate general-purpose compute resources to it. At least, that's my understanding.
 
Last edited:
We will move towards something like this:

Nvidia > Intel > AMD
or Nvidia/Intel > AMD

This is an ambitious assertion given that Xe Gen 1 (in Tiger Lake) has worse per-shader performance than mobile Vega.

Also, if you guys think AMD drivers are bad...
 
Last edited:

ToTTenTranz

Banned
I think the bigger point is that with AMD you have to sacrifice compute performance in other areas, like graphics rendering, to make room for such a thing, since RDNA 2 GPUs lack the dedicated hardware acceleration units that Nvidia's, and now Intel's, GPUs have to handle this themselves.*
The argument was about using DP4a, not Intel's dedicated XMX matrix multiply cores.

Though Intel did let us know what to expect in terms of performance when using DP4a for XeSS compared to XMX, and the difference seems pretty small in the grand scheme of things:

[Image: Intel slide comparing XeSS performance on XMX vs. DP4a]




We will move towards something like this:

Nvidia > Intel > AMD
or Nvidia/Intel > AMD
Not according to the performance leaks we have so far.
The highest-end "Arc Alchemist" is a competitor to the RX 6700 XT at best.




The fully enabled 512 EU part is at best some 14% faster, so it's still in the ballpark of a 6700 XT.
Not to mention that by Q1 2022 the Arc GPUs will be one quarter away from having to compete with RDNA3.
 

elliot5

Member
This is an ambitious assertion given that Xe Gen 1 (in Tiger Lake) has worse per-shader performance than mobile Vega.

Also, if you guys think AMD drivers are bad...
Are you talking about iGPU performance...? This is about dedicated graphics cards from Intel.
 
Are you talking about iGPU performance...? This is about dedicated graphics cards from Intel.

Either way, it's an apples-to-apples comparison. Xe in Tiger Lake has the exact same constraints as Vega (power limits, lack of memory bandwidth, etc.).
The discrete cards come from the same company, using a newer version of that same Xe architecture.

Now, I grant you, it is newer, so it could in fact be better than Vega. Unfortunately, AMD have moved on to RDNA2, and by the time this actually sees the light of day, RDNA3.

See: Apisak Tweet above.

The biggest Xe GPU, which is 512 EUs (4096 ALUs), will have ballpark performance of a 6700 XT (2560 ALUs) / 3070 (5888 ALUs, though Ampere counts its ALUs in a unique way).

This is very impressive, I grant you, for a first attempt. But it's a clear margin short of the performance of the 6900 XT and 3090, which are AMD's and Nvidia's current flagship GPUs.
And again, by the time it launches we'll be looking at RDNA3 and Lovelace from AMD and Nvidia.

It's pretty fucking good and I'm glad Intel are making strides here, but they are far, faaaaar behind the competition in this space.
 
I had forgotten: one reason the implementation is so similar is that Intel, in their infinite wisdom (money), hired one of the key people responsible for this work at Nvidia. That's how they got it made so fast.

Anyway, it's good that Intel, like AMD, is making this work on anything that can do lower-precision math.
I hope that AMD will partner with Intel to co-develop and sell this solution to the industry and the public. There's no reason to keep working on their own implementation only to compete late.
Also, Insomniac wins again? That update to Miles Morales that makes Spider-Man's body deform realistically and keep its correct structure is just an implementation of an Intel product. I wonder if they'll put this temporal SS solution in one of their future games.
 
I never understand the surprise at "it runs on everything".
Yes, people, because it just requires doing the math. Everything runs on everything, but for some mysterious reason people see Nvidia's paywalls and believe there's more to them than the suppositories they take in their asses.
 

onesvenus

Member
I never understand the surprise at "it runs on everything".
Yes, people, because it just requires doing the math. Everything runs on everything, but for some mysterious reason people see Nvidia's paywalls and believe there's more to them than the suppositories they take in their asses.
It's never a question of whether it runs, it's a question of how quickly it runs. If the Intel GPUs have dedicated hardware like Nvidia's Tensor cores, it won't run as well on AMD, for example.
 

Arioco

Member
Looks really cool. 👍 We'll probably see many new reconstruction methods this generation, similar to what happened with antialiasing last gen. Can't wait to see what devs come up with!
 

Buggy Loop

Member
Impressive entry by a 3rd player, I must say. They're already showcasing more knowledge of ML than a deeply rooted industry player like AMD.

Even if this card is 3070 territory, if it's at the right price it'll be a very impressive first generation, and it's also in the right market for the most potential sales, as the high end is what, ~2% market share? If I were Intel I would not even consider competing against a 3090.

Can’t wait to see more information on this GPU.
 

GHG

Gold Member
AMD enjoying that DP action from Nvidia and Intel.

DPSS.

Looks good, though; this is the kind of competition we need. I know everyone likes to parrot "competition is good", but that's not always true: only good competition is good. You don't want someone stinking up the place and dragging standards down. Thankfully, Intel look like they've got their shit together here and everything looks promising so far.

What will make or break this card is the drivers.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Impressive entry by a 3rd player, I must say. They're already showcasing more knowledge of ML than a deeply rooted industry player like AMD.

Even if this card is 3070 territory, if it's at the right price it'll be a very impressive first generation, and it's also in the right market for the most potential sales, as the high end is what, ~2% market share? If I were Intel I would not even consider competing against a 3090.

Can’t wait to see more information on this GPU.

This is exactly right.
Don't bother hunting the 3080 Ti and 3090.
Just make ~3070-level cards, because that's the majority of sales and it will get you mind share as more people are actually able to buy your cards.
Once you're settled you can start hunting the top end.

Heck, is the RX 6900 XT actually the strongest card out there?
But simply having the strongest card on the market hasn't done enough for AMD to really take a bite out of Nvidia.

Intel shouldn't bother going all the way to the top.
Get the majority at the right price, then start fucking the consumer.
Because we all know they really want to fuck us.
 