
Intel’s Flagship ARC Alchemist Gaming Graphics Card With 32 Xe Cores Spotted: On Par With NVIDIA RTX 3070 Ti, Up To 2.1 GHz Clocks

tusharngf

Member
[Image: Intel ARC Alchemist GPU, Q1 2022 launch]

A brand new entry for Intel's flagship ARC Alchemist graphics card, based on the Xe-HPG architecture, has been leaked within the SiSoftware Sandra database. The new entry is for the 32 Xe core SKU (DG2-512), which features 4096 ALUs and a clock speed of up to 2.10 GHz. There's also 4 MB of L2 cache, and the GPU most likely features 16 GB of GDDR6 memory configured along a 256-bit wide bus interface. It's easy to tell that this is still an engineering sample considering that it has no official branding attached to it, whereas the ARC A380 has already shown up in Sandra with its naming scheme.

[Images: Intel ARC Alchemist flagship gaming graphics card benchmark leak screenshots]



The top Alchemist 512 EU (32 Xe Cores) variant has just one configuration listed so far, which utilizes the full die with 4096 cores, a 256-bit bus interface, and up to 16 GB of GDDR6 memory running at 16 Gbps, though 18 Gbps cannot be ruled out as per the rumor.
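For reference, those rumored memory figures translate into peak bandwidth as follows (a back-of-the-envelope sketch in Python; the 256-bit bus and the 16/18 Gbps data rates are the leaked figures above, not confirmed specs):

    # Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8
    # Bus width and data rates are the rumored DG2-512 figures, not confirmed specs.
    bus_width_bits = 256
    for data_rate_gbps in (16, 18):
        bandwidth_gb_per_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
        print(f"{data_rate_gbps} Gbps GDDR6 on a {bus_width_bits}-bit bus = {bandwidth_gb_per_s:.0f} GB/s")
    # 16 Gbps -> 512 GB/s, 18 Gbps -> 576 GB/s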

The Alchemist 512 EU chip is expected to measure around 396mm², which makes it bigger than the competing AMD RDNA 2 and NVIDIA Ampere offerings. The Alchemist-512 GPU will come in the BGA-2660 package, which measures 37.5mm x 43mm. NVIDIA's Ampere GA104 measures 392mm², which means the flagship Alchemist chip is comparable in size, while the Navi 22 GPU measures 336mm², or around 60mm² less. This isn't the final die size of the chip, but it should be very close.

The Xe-HPG Alchemist 512 EU chip is suggested to feature clocks of around 2.2-2.5 GHz, though we don't know whether these are average clocks or maximum boost clocks. Assuming it's the max clock speed, the card would deliver up to 18.5 TFLOPs of FP32 compute, which is 40% more than the RX 6700 XT but 9% lower than the NVIDIA RTX 3070.
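The TFLOPs figure follows from the usual shader math, shown here as a quick sanity check (Python sketch; 4096 ALUs is the rumored configuration, two FP32 ops per ALU per clock for a fused multiply-add is assumed, and the RX 6700 XT / RTX 3070 numbers are the vendors' published peak-FP32 figures):

    # Theoretical FP32 throughput = ALUs * 2 ops per clock (FMA) * clock (GHz) / 1000
    def fp32_tflops(alus, clock_ghz):
        return alus * 2 * clock_ghz / 1000

    for clock in (2.2, 2.25, 2.5):
        print(f"{clock:.2f} GHz -> {fp32_tflops(4096, clock):.1f} TFLOPs")
    # 2.20 GHz -> 18.0 | 2.25 GHz -> 18.4 (roughly the quoted 18.5) | 2.50 GHz -> 20.5

    # Comparison against published peak-FP32 figures for the competition
    print(f"vs RX 6700 XT (13.2 TFLOPs): {18.5 / 13.2 - 1:+.0%}")  # about +40%
    print(f"vs RTX 3070 (20.3 TFLOPs): {18.5 / 20.3 - 1:+.0%}")    # about -9%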


Intel ARC Alchemist vs NVIDIA GA104 & AMD Navi 22 GPUs

Graphics Card Name | Intel ARC A780?   | NVIDIA GeForce RTX 3070 Ti | AMD Radeon RX 6700 XT
GPU Name           | Alchemist DG-512  | NVIDIA GA104               | AMD Navi 22
Architecture       | Xe-HPG            | Ampere                     | RDNA 2
Process Node       | TSMC 6nm          | Samsung 8nm                | TSMC 7nm
Die Size           | ~396mm²           | 392mm²                     | 335mm²
FP32 Cores         | 32 Xe Cores       | 48 SM Units                | 40 Compute Units
FP32 Units         | 4096              | 6144                       | 2560
Memory Bus         | 256-bit           | 256-bit                    | 192-bit
Memory Capacity    | 16 GB GDDR6       | 8 GB GDDR6X                | 12 GB GDDR6
Launch             | Q1 2022           | Q2 2021                    | Q1 2021
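One rough way to read the table is compute density: dividing the listed FP32 unit counts by the listed die sizes gives a ballpark of how much shader hardware each vendor packs per mm² (a sketch only; FP32 "units" are not directly comparable across architectures, and the Alchemist die size is still an estimate):

    # FP32 units per mm² from the table above (indicative only, figures as listed)
    gpus = {
        "Intel ARC A780? (DG-512)": (4096, 396),  # die size is an estimate
        "NVIDIA GA104":             (6144, 392),
        "AMD Navi 22":              (2560, 335),
    }
    for name, (fp32_units, die_mm2) in gpus.items():
        print(f"{name}: {fp32_units / die_mm2:.1f} FP32 units per mm²")
    # roughly 10.3, 15.7, and 7.6 respectively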



Source: https://wccftech.com/intels-flagshi...with-nvidia-rtx-3070-ti-up-to-2-1-ghz-clocks/
 

Bo_Hazem

Banned
Does anybody believe their driver team will be as robust as nVidia's or AMD's on day 1?

For decoding and encoding at least they'll be superior. I might actually go their route for video editing, as their CPUs are also superior in that area. Content creators and professionals would be the main aim here.
 
Last edited:

Boss Mog

Member
For decoding and encoding at least they'll be superior. I might actually go their route for video editing, as their CPUs are also superior in that area. Content creators and professionals would be the main aim here.
I meant more for gaming. Will they have optimized drivers for every major game release on day 1 like their competitors?
 

Haggard

Banned
If those can be used for mining, no normal consumer will ever see one at MSRP, or Intel will just raise the MSRP to scalper levels out of the gate to join the margin rally the other 2 are having.
 

nkarafo

Member
OK but what else? NVIDIA also has cores for better Ray Tracing performance, DLSS, etc.

These are the areas where Intel must compete the most.
 
An old myth that isn't true anymore
You can tell that to my old laptop, which has an AMD GPU that is supported by new drivers and an AMD iGPU that is not supported at all, which causes the discrete GPU to shit the bed and not function.
I wouldn't call dropping driver support for your 5-year-old hardware, leaving clients with a dysfunctional piece of silicon, "robust".
 

Kenpachii

Member
If they have DLSS as they advertise, it's probably better than AMD straight out of the gate. AMD really needs to drop their next GPUs then.
 
Last edited:

JohnnyFootball

GerAlt-Right. Ciriously.
I don't want to hear about any of this "on par with...." bullshit. Show me the fucking benchmarks. I'll believe it when I see it and even then I will have questions about driver support.

Raja Koduri has been one whimper after another. I hope it changes due to badly needed competition, but I ain't got my hopes up.
 
Last edited:
Wouldn't it be funny if Intel could produce the most and sell the most because the others are always sold out. That would be huge for Intel. Or their cards are useless for mining, which again would make them the only buyable ones.
 

mitchman

Gold Member
You can tell that to my old laptop, which has an AMD GPU that is supported by new drivers and an AMD iGPU that is not supported at all, which causes the discrete GPU to shit the bed and not function.
I wouldn't call dropping driver support for your 5-year-old hardware, leaving clients with a dysfunctional piece of silicon, "robust".
Which iGPU?
 
Ampere is almost 18 months old at this point, and Intel's GPU, which supposedly sits at the top of their lineup, is comparable to a 3070? I mean, not that it's a bad thing, especially if it's competitively priced, but NVIDIA's Ada architecture should be out by late summer and I doubt NVIDIA will go easy on Intel with their latest & greatest.
 
Last edited:
Why isn’t intel fabbing it?
Not enough manufacturing capacity on their smallest node. The way they architected their 10nm EUV is particularly bad for GPUs, judging from the first chips they put to market on that process all coming with their GPUs disabled.

Their smallest node (10nm) is only roughly equal to TSMC's 7nm even by their own figures, so 6nm is most likely superior.
 
Last edited:

Xyphie

Member
Why isn’t intel fabbing it?

Doing the product at an external fab is lower risk: if they did it at internal fabs they'd have to either cut into production of higher-margin products like CPUs, or extend capacity and risk having idle fabs if the product turned out uncompetitive. In retrospect, extending capacity and doing it internally would've been better, because the GPU doesn't even need to be good to sell out given the current market, but there was obviously no way of knowing what the GPU situation would be today when they started designing this product like half a decade ago.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Competition is good..

But I wish the fab situation was worked out.

Because theoretically this is also competition for fabrication space lol
 

mitchman

Gold Member
8650g, it's a well-known issue, with well-known anal work-arounds that don't work 80% of the time. I'm the 80%.
Ah, that one was in the AMD A10-5750M, an APU that is now 9 years old. Not having updated drivers probably isn't totally unexpected. Perhaps the default Windows drivers will do a better job?
 
Not having updated drivers probably isn't totally unexpected.
Not for Nvidia products, as much as you can hate them for cringy marketing.
Their newest drivers still fully support the GTX 600 series, which are 10 years old.
Perhaps the default Windows drivers will do a better job?
It crashloops as soon as it installs.
The only driver that works was made for Windows 8, and it doesn't exactly cooperate with up-to-date Win10. It advertises Dead Island Pandemic during the installation, to give you an idea of just how fucking old and obsolete it is.
So yeah, install Windows 8 if you wanna use our old hardware you bought.
Robust driver support.
 

IbizaPocholo

NeoGAFs Kent Brockman

Intel hasn't launched its Arc Alchemist desktop graphics cards yet. However, hardware sleuth Benchleaks has already uncovered a benchmark for the Arc A770, Intel's presumed flagship SKU that will compete with the best graphics cards.

It doesn't take a scientist to realize that the Arc A770 is the desktop counterpart to the mobile Arc A770M. According to the Geekbench 5 submission, the Arc A770 has 512 execution units, or Xe Vector Engines (XVE), as Intel calls them. That would mean that the graphics card sports the full ACM-G10 silicon with 32 Xe-cores. Therefore, the Arc A770 has the same configuration as the A770M but has faster clock speeds.

The Arc A770 from the Geekbench 5 submission could be an engineering sample, so don't take the reported clock speeds to heart. The program didn't record the Arc A770's base clock; however, the software did pick up the boost clock, which came in at 2,400 MHz. For comparison, the Arc A770M has a 1,650 MHz base clock. The benchmark software also detected 12.7 GB of memory, but that's likely a misreport; the Arc A770 should have 16GB of GDDR6 memory like the A770M.
 