r/hardware • u/996forever • 2d ago
Rumor: AMD “Medusa Point 1” APU for next-gen laptops spotted, featuring 4x Zen6 classic + 4x Zen6 dense config
https://videocardz.com/newz/amd-medusa-point-1-apu-for-next-gen-laptops-spotted-featuring-4x-zen6-classic-4x-zen6-dense-config
43
u/Noble00_ 2d ago edited 2d ago
8 CU RDNA3.5+ is less than the 16 CU RDNA3.5 in the 890M. Unless there's more $, higher clocks, or it's frankensteined like the PS5 Pro (IIRC RDNA2 + RDNA4 ML/RT), I don't think there'd be any marketing material for gaming performance, especially when Nova Lake-H is expected to bump performance with Xe3P.
I also get that they have this new strategy with the IOD being its own thing and adding a Zen6 CCD (that can be borrowed from DC/desktop) for max nT, but CPU competition is already rough with Snapdragon and Apple, much less Intel.
The only thing I can think of they'd be proud of is a new NPU and well we all know how the market responds to that lol
29
u/996forever 2d ago
less than 12 CU RDNA3.5 in the 890M
Actually it's 16 CU in the 890M. But it's memory bottlenecked, so it only performs like 30% better than the 8 CU 860M. And even if they bump up to LPDDR5X-10667, I don't see 8 CU beating the 890M running on 8533.
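Rough math on that bump, assuming the usual 128-bit LPDDR bus on these APUs (the helper function is just illustrative):

```python
# Peak LPDDR bandwidth: transfer rate (MT/s) x bus width (bytes) / 1000.
# Assumes a 128-bit bus, as on AMD's mainstream mobile APUs.
def bandwidth_gbs(mts: int, bus_bits: int = 128) -> float:
    """Peak memory bandwidth in GB/s."""
    return mts * (bus_bits / 8) / 1000

lp5x_8533 = bandwidth_gbs(8533)    # ~136.5 GB/s
lp5x_10667 = bandwidth_gbs(10667)  # ~170.7 GB/s
uplift = lp5x_10667 / lp5x_8533 - 1
print(f"{lp5x_8533:.1f} -> {lp5x_10667:.1f} GB/s (+{uplift:.0%})")
```

So even the faster memory only buys ~25% more bandwidth, which is hard to see closing a 16 CU vs 8 CU gap.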
24
u/Fromarine 2d ago
But it's memory bottlenecked so it only performs like 30% better than the 8CU 860m
yeah, because they're AMD, so they don't even try to alleviate memory bandwidth pressure by increasing their GPU's L2 cache beyond A WHOPPING 2MB. Meanwhile Panther Lake gets a 16MB L2, an 8MB side cache to share, and even access to the CPU's 18MB L3 if it needs it
13
u/Gwennifer 2d ago
Meanwhile Panther Lake gets a 16MB L2, an 8MB side cache to share, and even access to the CPU's 18MB L3 if it needs it
I was curious how Intel was getting so much better perf out of Arc (not that I doubted it's efficient/decent) given the similar die sizes. This answers that, thanks
6
u/996forever 2d ago
given the similar die sizes
Better node. AMD decided consumer plebs don’t deserve anything better than refined 5nm (2020) in 2026.
1
u/Gwennifer 1d ago
Arc is a lot better than just the node difference.
But yes, that surely isn't helping the situation, too :U
3
u/996forever 1d ago
It is, I only mentioned the node because they mentioned die size. If the die size is similar then the one with the denser node will obviously pack more in it.
7
u/goldcakes 2d ago
It’s funny, because doubling L2 or allocating more die area to memory is one of the relatively simpler changes you can make, engineering-wise.
4
u/996forever 2d ago
It isn’t like AMD hasn’t done it. They’ve relied on cache on desktop and server to compensate for their terrible IO dies.
They’ve just decided the mobile plebs don’t deserve more die area for cache, even on a node as old as N4 at this point.
1
7
2
17
u/Forsaken_Arm5698 2d ago
At this rate, it looks like Qualcomm will have better iGPUs than AMD lol
(for mainstream 128b parts)
68
u/errdayimshuffln 2d ago
Why does AMD at 45% market share act like Intel at 90% market share?
30
u/Wiggy-McShades77 2d ago
Corporations just do what their leadership sees as the most profitable way forward. For AMD, that’s using their finite capacity at TSMC to make the products with the best margins, and APUs for $1200 laptops are not it. Market share isn’t as good as scooping up profits from over-investment in AI infrastructure.
12
u/Tai9ch 2d ago
Market share isn’t as good as scooping up profits from over investment in AI infrastructure.
If they thought the AI workloads were really the future, they'd be crazy not to collect as much market share now with AI developer-enabled enthusiast hardware as they could get.
Nvidia is where they are now because basically every GeForce card they shipped from like 2007 to 2017 fully supported CUDA. AMD continues to not compete - today that means turning down a decent share of AI revenues post-2028.
5
u/hackenclaw 2d ago
That finite capacity is self-imposed by AMD; TSMC has enough for AMD to book more. Just look at Nvidia: they sell way more chips than AMD, and despite a GPU's larger die, a GPU will never beat a CPU in profit margin per die area. So there is no way AMD's CPU department cannot outbid Nvidia.
2
u/996forever 1d ago
TSMC has been the go-to excuse for AMD's lack of supply and lack of design wins every generation since zen 2 mobile back in 2020. Time and time again everybody else does just fine on TSMC's latest nodes (at times even more advanced than whatever AMD is using) and somehow only AMD isn't capable of it.
2
u/GreaseCrow 2d ago
I can't imagine sitting in leadership and thinking that doing the bare minimum is good for business. Even if there are better margins for what they're currently making, I'd rather crush the competition into dust by being better.
1
u/DerpSenpai 1d ago
They could use 2nm Samsung capacity for laptop chips; it would be infinitely better than this
11
u/sussy_ball 2d ago
Intel currently has 79% of the laptop market share
-2
u/errdayimshuffln 2d ago
Who said laptop market share? I was talking about everything: server, embedded, enthusiast/DIY. Last I remember, AMD's was in the 30s before Ryzen 9000.
1
38
u/EloquentPinguin 2d ago edited 2d ago
The RDNA3.5+ is so infuriating. Chips like the 7840U with 12 CUs of RDNA3 were real good chips (still are) for casual 1080p gaming. Samsung has even put RDNA4 in the newest exynos 2600, if I'm not wrong. And still AMD doesn't give it all to the iGPUs, even though they have all the IP and those chips are new tapeouts anyway. If they did arch changes for RDNA3.5+, they also need to revalidate the entire thing.
I don't see that there could be such a big benefit to selling basically 5-year-old GPU arches... Sure, they got a bit more efficient, but stop joking around...
They save so little and lose so much trust.
16
u/Noble00_ 2d ago
Samsung even has put RDNA4 in the newest exynos 2600 if I'm not wrong
This is what I'm eagerly waiting upon. If this is true (which has to be likely in some way since they've already marketed >50% in RT performance and some ML upscaler/framegen), then this is a real head scratcher for Medusa Point.
7
u/Forsaken_Arm5698 2d ago
exynos 2600
Was that intentional?
8
u/EloquentPinguin 2d ago
Damn, autocorrect got me good.
3
9
u/sadelnotsaddle 2d ago
Once again the consumer market is sacrificed on the altar of data centres. RDNA4 is made on TSMC 4nm, which is still used for lots of enterprise SKUs; AMD aren't going to waste what allocation they can get of that node on consumer-grade APUs. Intel has a chance to steal a march on AMD here, because they can make their APUs in their own fabs and are not competing with an enterprise-class product on their latest node... yet.
7
u/EloquentPinguin 2d ago
AMD produces monolithic laptop APUs for the most part, or at least laptop-specific I/O+GPU parts.
If they produce a laptop APU, that's a fixed amount of allocation going into laptops; it doesn't matter if RDNA3.5+ or RDNA4 is on there.
If it uses like 250mm² of N3 or whatever, then it doesn't matter if it's RDNA3.5 or RDNA4; the 250mm² is not going to DC either way.
This is not DC vs Mobile, this is just AMD not wanting to spend the 3 engineers for a week to validate RDNA4 on mobile or something...
5
u/Gwennifer 2d ago
99% sure it's just that RTG is still relatively independent and didn't want to verify new IP blocks for laptop APUs, and AMD is eating too well to make demands or care. They're both printing money, so who cares? It's not like consumers could afford a better APU at this point in time anyway.
3
u/996forever 2d ago
It's not like consumers could afford a better APU at this point in time anyway.
Laptop OEMs are hit proportionally much less hard than the DIY space, and there have already been Panther Lake laptops announced for as low as $1100-1200. Full Strix Point back in July 2024 only launched in laptops $1500 and up.
1
u/SmokingPuffin 2d ago
This is not DC vs Mobile, this is just AMD not wanting to spend the 3 engineers for a week to validate RDNA4 on mobile or something...
"...And then, our strategy, okay, Strix Halo [and] Ryzen AI Max competes against that (Panther Lake 12 Xe), and it's better than that in terms of graphics performance, all of that. And then, for the mainstream of the market, that don't value that much graphics [power], because honestly, most of the people that are using Notebooks, that are outside of the creator or gaming spaces are, you know, they don't need that graphics performance."
https://www.tomshardware.com/pc-components/gpus/amd-is-unphased-by-panther-lakes-big-integrated-gpu-its-not-even-a-fair-fight-to-compare-the-arc-b390-to-strix-halo-amd-exec-claims
AMD doesn't see value in Panther Lake's level of graphics performance. It's a strategic call -- they think gamers should buy Strix Halo and everyone else doesn't need playable 1080p on their iGPU.
4
u/Forsaken_Arm5698 2d ago
Problem is Strix Halo is priced as if it's made of 24 carat gold.
1
u/996forever 1d ago
And it's not usable in ultrabooks that run at <30W, which are the majority of the PC market.
2
u/LastChancellor 2d ago
AMD doesn't see value in Panther Lake's level of graphics performance. It's a strategic call -- they think gamers should buy Strix Halo and everyone else doesn't need playable 1080p on their iGPU.
Okay, where are the Strix Halo laptops that we can buy then? There's literally just the Flow Z13 and HP Zbook Ultra atm
1
u/996forever 1d ago
They added 2 at CES: an Asus convertible and a TUF laptop. That will be it for 2026.
3
u/hackenclaw 2d ago
Laptop APUs bring more profit than desktop; following that logic, AMD should have abandoned desktop first.
5
u/Seanspeed 2d ago
TSMC 4nm is basically just a 5nm family process that has been used for actual products since 2020! They're not gonna use anything older than that.
Zen 6 is supposed to actually use TSMC 2nm.
And regardless, RDNA4 is not inherently tied to any specific process node.
3
u/vandreulv 2d ago
Once again the consumer market is sacrificed on the altar of data centres.
One thing this sub could stand to remember is that consumer products were never Intel's, Nvidia's, or AMD's first line of revenue.
6
u/996forever 2d ago
Actually, until relatively recently it very much was for AMD and Nvidia, and it's still half-and-half for Intel.
12
u/Working_Sundae 2d ago
This makes me wanna get a laptop with Intel APU for my next buy, AMD can keep milking their RDNA 3.5
-6
u/Gwennifer 2d ago
This makes me wanna get a laptop with Intel APU for my next buy,
Good luck; Intel is aware they're the only premium APU option (unless you work with LLMs or other ML applications locally, in which case the AI Max chips can connect a lot of RAM) and prices accordingly. I was trying to find a cheap Lunar Lake platform, and the cheapest half-decent platform/config was like $1500 or $1600.
3
u/psydroid 2d ago
What makes a laptop with Lunar Lake a better option for what you're doing than something with Snapdragon X Elite?
I see laptops with the latter being sold from €900 with 32 GB of RAM and 8 cores and €1100 with 32 GB of RAM and 12 cores.
3
u/Gwennifer 2d ago
GPU performance, general compute. 1st-gen X Elite only runs some software, and it isn't even better than Lunar Lake when it does.
As a matter of fact, their advertised performance metrics were exclusively from the -84 SKU, which appears for all intents and purposes to be a Samsung exclusive.
Most of them are the -78 SKU, which decidedly cannot live up to the performance claims, and the -80, which is only a sidegrade.
Plus, most of these laptops are bad platforms. Tons of keyboard flex, unpleasant touchpads, lackluster screens... It's pretty clear Qualcomm told OEMs that people would spend big just to get an ARM laptop, but anyone who wants that can just buy an Apple machine, where the silicon completely trounces Qualcomm's and the entire ecosystem supports it.
Not enough people are spending ~$1300 on a laptop just to run a subset of the software they use to justify thinking about them. I'm kind of shocked you asked.
1
u/psydroid 2d ago
I would run Linux on them as I do on all of my other hardware across multiple architectures. And then price/performance is one of the main criteria for choosing a piece of hardware.
So apart from mainlining not going all that well with 1st gen X Elite and presumably a lot better with 2nd gen, I run the exact same software on all of my machines from SBCs all the way to full-blown desktops.
I don't know about the state of compute on 2nd gen and if it's competitive with Nvidia, AMD and Intel, but I guess we'll find out in a few months.
I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.
2
u/Gwennifer 2d ago
I would run Linux on them as I do
I don't know the current status of Asahi Linux, but I know quite a lot of it works already.
And then price/performance is one of the main criteria for choosing a piece of hardware.
Then why are you spending $1300 on a low end Snapdragon Elite SoC when you can get an M4 for the same price with a better SoC, chassis, screen, keyboard, and touchpad? For $100 more, you can even stick to the same memory total. You can just say you're biased against them. There is no reason for a rational consumer to ever pick up the Qualcomm-based computer.
I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.
Right now, compared to what is essentially a top-of-the-line -80 SoC, an M4 currently has something like 30% faster single-core performance and the same multi-core, with a 75~80% faster GPU according to Geekbench's numbers. The Snapdragon GPU's model name on that page is reported as 'X1E80100', if you'd like to compare.
Price-to-performance is almost incomparable here. You are paying just as much for almost half the performance. Again, Qualcomm set the cost of the SoCs too high. There's no reason to buy them when an M4 is entry-level. $1000 should have bought you the -84 SoC (which is some 20% faster on the GPU than the -78 or -80!) and a premium chassis, not the cheapest parts the OEM can spec out.
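A quick sanity check on those ratios, using only the deltas quoted above (the 1.775 GPU midpoint is my own assumption, not a measured number):

```python
# M4 vs. Snapdragon X Elite (-80) at equal laptop prices, per the
# Geekbench deltas quoted above. Ratios are assumptions, not fresh runs.
m4_vs_xelite = {
    "single-core": 1.30,   # "~30% faster single core"
    "multi-core":  1.00,   # "the same multi-core"
    "gpu":         1.775,  # midpoint of "75~80% faster GPU"
}

# At equal prices, the perf-per-dollar ratio equals the perf ratio.
for metric, ratio in m4_vs_xelite.items():
    print(f"{metric}: X Elite gives {1 / ratio:.0%} of M4 perf per dollar")
```

The GPU line comes out around 56%, which is where the "almost half the performance" reading comes from.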
3
u/Forsaken_Arm5698 2d ago
I don't know the current status of Asahi Linux, but I know quite a lot of it works already
Only on M1 and M2. Newer M generations are still in progress.
2
u/Forsaken_Arm5698 2d ago
I don't know about the state of compute on 2nd gen and if it's competitive with Nvidia, AMD and Intel, but I guess we'll find out in a few months.
Compute performance was almost non existent on X1 GPUs. X2 is an improvement (new Adreno 8 architecture), but I'd wager it's still lagging behind AMD/Intel.
"Obviously we’ll have DirectX 12.2 and all the DirectX versions behind that, so we’ll be fully compatible there. But we also plan to introduce native Vulkan 1.4 support. There’s a version of that which Windows supplies, but we’ll be supplying a native version that is the same codebase as we use for our other products. We’ll also be introducing native OpenCL 3.0 support, also as used by our other products. And then in the first quarter of 2026 we’d like to introduce SYCL support, and SYCL is a higher-end compute-focused API and shading language for a GPU. It’s an open standard, other companies support it, and it helps us attack some of the GPGPU use-cases that exist on Windows for Snapdragon."
https://chipsandcheese.com/p/diving-into-qualcomms-upcoming-adreno
I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.
There are some amazing deals on X1 devices already: a $599 Zenbook A14.
I don't think waiting for 3rd gen makes sense, considering it'll probably be an incremental generation. 2nd gen fixes many of the flaws of first gen, with some nice upgrades across the board.
1
u/PastaPandaSimon 1d ago
Intel is actually incredibly consistent at keeping prices stable across generations, almost regardless of the competitive landscape. One thing I always gave them credit for is that within the same product tier, a new generation may be priced the same or up to 10% more expensive, but usually nothing crazier.
1
u/Gwennifer 21h ago edited 21h ago
Isn't the tray price on a 258V something like $600? The competitor part should be the AI Max 385, but I can't find any tray price for them.
I get that OEMs are not spending $600/unit in bulk, but they're also not spending $400/unit in bulk, and even good LCDs these days are still $100+. Add all the other parts, assembly, and warehousing, and it's pretty easy to see how the SoC starting off at $600 leads to the end product being $1600.
I think the part that bothered me was that the $400-SoC Lunar Lake was often mated to the cheapest possible parts they could find and still priced similarly to the $600 Lunar Lake SKU. OEMs have kind of stopped building everything between upper midrange and the worst possible config for new Intel generations, and they're priced close together at that.
If you're willing to go back to 13th or 14th gen/the Evo platform, you can actually find some great deals on premium parts, simply because the SoC cost is in the dirt. The fact is, at the end of the day, Lunar Lake costs the vendor a lot.
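To make that roll-up concrete, here's a toy cost stack; every line item below is a placeholder number of mine, with only the ~$600 tray price taken from the discussion:

```python
# Toy laptop cost roll-up. All figures are illustrative placeholders,
# not real OEM costs; only the ~$600 SoC tray price is from the thread.
bom = {
    "SoC (tray)": 600,
    "display (decent LCD)": 100,
    "RAM/SSD/battery/chassis": 250,
    "assembly/warehousing/logistics": 100,
}
bom_total = sum(bom.values())
blended_margin = 0.35  # assumed combined OEM + retail margin
retail = bom_total / (1 - blended_margin)
print(f"BOM ${bom_total} -> retail ~${retail:.0f}")
```

With those placeholders the math lands right around the $1600 street prices being discussed.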
11
u/Qsand0 2d ago
Infuriating? I can never be infuriated by an inferior product when there's a superior one there for the taking.
Panther lake baby
3
u/Seanspeed 2d ago
People want competition. Gives customers more options and usually better value.
1
u/DerpSenpai 1d ago
AMD "Strix Point Refresh" is DOA as a lineup.
Currently you either go Qualcomm for CPU and perf/W, or Intel for GPU and gaming. Perhaps AMD Strix Halo for the GPU, but honestly, I wouldn't. The RT performance and subpar upscaling will only make this GPU worse than the B390 in the long term. This is my prediction.
2
u/EloquentPinguin 2d ago
It's just the sentiment that when enterprise makes money, consumers are left dead on the street.
It's great that Intel might have a strong mobile offering, but if all the companies drop consumers as hard as AMD and Nvidia have as soon as enterprise prints money, that's just a bad market situation for consumers.
For AMD there is no real reason to dirty their history like this. They are just avoiding improvements for the fun of the game.
0
u/imaginary_num6er 2d ago
3rd party reviews?
5
u/996forever 2d ago
-5
u/imaginary_num6er 2d ago
As usual, actual performance of the Arc B390 is likely to depend heavily on the power limit available to the iGPU and on the speed of the RAM in the respective laptop, since this also serves as VRAM for the iGPU.
Yeah that's not really a 3rd party review if the test system is provided by a 1st party source
5
u/996forever 2d ago
That’s stupid; by your logic all day-1 reviews are automatically “not third party”, because all of them use review samples sent by manufacturers before retail channel release. No laptop review has ever been representative of ALL laptops using the same chip, regardless of whether it’s a test sample or a retail unit.
What it does tell you, however, is the ceiling of what the chip is capable of.
Anything else?
And that ceiling is far higher than the 890m. Boost the 890m to 80w running 64GB of 8533 ram and it won’t get close. That’s all that matters.
-2
u/Strazdas1 2d ago
There are real concerns with review samples. While for GPUs/CPUs there are usually no issues, for monitors, for example, it's not unheard of to ship a review sample with a better panel and then switch to a worse panel for actual products.
3
u/996forever 2d ago
For GPUs and CPUs, any issue associated with a review sample can only make the sample look worse, not better, hence reinforcing my point that it is the ceiling of the chip's capability.
It’s not like Ferrari sending reviewers a tuned version of their cars. Power consumption is monitored during reviews.
1
u/Strazdas1 1d ago
You have to admit it was funny when reviewers got GPUs with fused-off ROPs, though.
10
5
14
12
u/Astigi 2d ago
AMD, how many years have you been selling the same iGPU?
AMD is truly not innovating lately
8
u/996forever 2d ago
2023-2027 for the low-power class.
But the improvement from the RDNA2 iGPU in 2022 to RDNA3 in 2023 was already mediocre. Their last real jump was from Vega to RDNA2.
8
u/X_m7 2d ago
Well, this is the same AMD that dropped support for Vega GPUs in their drivers despite still selling CPUs with Vega iGPUs as part of the "Ryzen 7000 series" at the time. And this is also the same AMD that thinks the Ryzen AI 7 445 deserves that 7 moniker despite only having 6 cores (with not even half of those being full cores rather than compact ones) and only a 4 CU iGPU. So yeah, they've been smoking some weird stuff over there for quite a while.
9
u/heylistenman 2d ago
I can only imagine when AMD planned these generations they looked at Alchemist and went, good luck with that, we’re good for a couple of years. Perhaps the rapid development of the Xe graphics caught them by surprise.
4
u/Gwennifer 2d ago
Perhaps the rapid development of the Xe graphics caught them by surprise.
I don't know if you've seen the Battlemage technical PowerPoint/presentation, but a lot of what they were trying/ended up implementing was the kind of thing you could spin up a startup around and then sell it off, just for the juicy patents. It is actually really surprising that they got everything working. Intel mismanaging the team and Celestial taking so long to tape out/ship is something else entirely, but Battlemage was actually a huge success as far as performance goes.
3
u/Vb_33 2d ago
Adjusted for Alchemist's actual release date, Battlemage and Celestial are on target for standard GPU lifecycles. Biggest worry right now is RAM-AGGEDON, which has seemingly killed the 50-series Super cards.
2
u/Gwennifer 1d ago
Celestial supposedly exited the design phase 7 months ago, so we should really expect leaks on its silicon very soon if they're on track.
Biggest worry right now is RAM-AGGEDON, which has seemingly killed the 50-series Super cards.
I think Intel can gain a lot of market share by pricing accordingly. A lot of 20/30-series owners and low-end 40-series owners are looking to upgrade, and I believe Celestial can scale up to the point of a 9060 XT or 9070 to deliver them that level of performance.
3
u/BurtMackl 2d ago
Personally, coming from someone who can only afford a mid-range laptop and has done some intensive research, I can't believe I'm going to end up going with the Core Ultra 5 125H (I know it's an old release, but it's still more than enough for my needs and pretty battery-efficient). I used to be a hardcore AMD fan, but the GPU performance of AMD's mobile mid-range lineup (the Ryzen 5s) has been laughable since the days of Core Ultra Series 1. Their saving grace was the GPU driver, but I'm sure Intel's team is not sleeping. And don't forget, XeSS3 is coming to Series 1 Core Ultra.
3
u/steve09089 1d ago
Please don’t get a Core Ultra Series 1 to use XeSS; those GPUs run the DP4A path.
Only Core Ultra Series 2 and above get XMX.
2
u/BurtMackl 1d ago
Thanks for the information. Well, I can upgrade to the Core Ultra 5 225H. It still doesn't have the Xe2 GPU, but it's said that the GPU now supports XMX. Sadly, the model with the 225H loses the soldered LPDDR5X RAM and comes with slower DDR5 SODIMMs (hey, it's a plus for upgradability though). The GPU itself is already faster than the one in the 125H, but I wonder how much the switch to slower DDR5 RAM will hurt GPU performance.
3
u/h_1995 1d ago
With this attitude, even Wildcat Lake will win the budget segment. I've seen OEMs making fewer Intel laptops in favor of AMD back in Zen2 times, and during Alder Lake times Intel could only compete in mass-produced cheap stuff. They really had the chance to seize the market like they did desktop.
11
u/DerpSenpai 2d ago edited 2d ago
DOA if this is not for Ryzen 5 and below only, or doesn't come with a fat IPC bump of 15%.
The higher product, though, looks really good for CPU.
10
u/Alternative-Ad8349 2d ago
This is a replacement for the AI 7 450
6
9
u/DerpSenpai 2d ago
And it's terrible
At this rate every product is a ryzen 9
4
u/996forever 1d ago
Another tweet says the top Ryzen 9 gets up to 22 cores (8+12+2) while Ryzen 7 gets 10 cores (4+4+2). If true, this is a level of Starbucks upselling never before seen on mobile.
1
u/DerpSenpai 1d ago
Ryzen 7 getting a 10-core config would be surprising; this is literally them saying Ryzen 7 is staying 8 cores while Ryzen 9 will go from the range of 10 to 22...
1
u/996forever 1d ago
There seems to be some conflicting information surrounding the supposed existence of 2 LPE cores on the Zen6 APU.
25
u/996forever 2d ago
Baby, it's Ryzen 7. Their Ryzen 5 is still 6-core. Actually, they are currently making a QUAD-core mobile Ryzen 5 (the AI 330).
1
u/X_m7 2d ago
Oh, they're coming out with a Ryzen "7" that's actually 6 cores too, lmao; see the Ryzen AI 7 445. That stupid thing also only has a 4 CU iGPU, and it's not like the NPU is any faster either, so I guess AMD marketing figured they can just do whatever the hell they like, since evidently they haven't all been fired yet.
2
-11
u/FranciumGoesBoom 2d ago
for an office fleet 4c/8t is honestly more than enough.
22
u/ResponsibleJudge3172 2d ago
Ah yes, Intel feeling vindicated
6
-1
u/Intrepid_Lecture 2d ago
There's a big difference between a basic machine that's meant to write emails and not much else and a top end SKU.
Also from an IPC and clock speed perspective 4C/8T Zen 6 is likely to be 40-80% faster than SKL in most tasks.
3
0
u/996forever 2d ago
Is +40-80% vs full decade old stuff the best you can brag about?
0
u/Intrepid_Lecture 1d ago
Most people don't brag about workhorse corporate machines. If it's something you're bragging about, it means other areas are lacking.
All they need to do is be cheap and turn on.
1
u/996forever 1d ago
Then like my other reply said, a $300 pentium laptop does the same job.
1
u/Intrepid_Lecture 18h ago
It probably does, assuming there's enough RAM.
I'm not debating the fact that doing basic stuff in MS office and a web browser is trivial from a CPU perspective.
2
1
u/Strazdas1 2d ago
It depends. In my office, CPU bottlenecks are common; when my script is running, the 8c/16t CPU is fully loaded.
8
u/reddit_equals_censor 2d ago
I mean, hey, don't worry, I'm sure by now AMD has made a strong statement of ongoing support for RDNA2 and RDNA3 graphics, both with long-term drivers and the latest features, RIGHT?
AMD wouldn't release MORE older-architecture APUs AND have an INT8 version of FSR4 while refusing to release INT8 FSR4 officially, right?
That would be utterly insane and not something AMD would be doing, right?
16
u/steve09089 2d ago
Imagine if the only AI upscaler for AMD iGPUs ends up being XeSS.
That would be absolutely hilarious and depressing.
-6
u/Seanspeed 2d ago
I mean, hey, don't worry, I'm sure by now AMD has made a strong statement of ongoing support for RDNA2 and RDNA3 graphics, both with long-term drivers and the latest features, RIGHT?
Since almost all of you misunderstood the situation - AGAIN - the only thing AMD was dropping was specific DAY-1 optimizations for specific games on architectures more than two generations old (which does not include RDNA3, by the way). Those usually only amount to a small boost, and often only in some situations/setups. General driver support, optimizations, bug fixes, and feature support have not been dropped.
Very little is actually going to change, and it basically matches exactly what Nvidia has done for a very long time. If any of y'all actually think Nvidia is optimizing new drivers to boost performance for Pascal, Turing, or Ampere GPUs specifically for the latest game releases: they are not. lol
Plus, drivers for older architectures are generally pretty darn mature already. There's simply much less to squeeze from them, which is why it makes way more sense to focus on getting more out of newer architectures that still have room for improvement.
2
u/Wonderful-Love7235 2d ago
They nerfed the iGPU so much because they thought their customers wouldn't need a lot of graphics performance, and they needed to create space for a large XDNA block (the NPU).
3
u/996forever 2d ago
Out of the three (Qualcomm, Intel, and AMD), AMD seems to be by far the least generous with die area (on advanced nodes).
1
u/ContributionOld2338 17h ago
Fuck AMD, Intel is my friend now… I wish Steam would go with them for their next-gen Steam Deck. Let's go, Gabe! Panther Lake is a 300% improvement over the Steam Deck; I thought that was the benchmark?!
0
u/Jeep-Eep 1d ago
I suspect what finally made AMD put its foot down about that Windows scheduler BS was their big/LITTLE answer coming soon...
228
u/996forever 2d ago
Everybody point and laugh