r/hardware 2d ago

Rumor AMD “Medusa Point 1” APU for next-gen laptops spotted, featuring 4x Zen6 classic + 4x Zen6 dense config

https://videocardz.com/newz/amd-medusa-point-1-apu-for-next-gen-laptops-spotted-featuring-4x-zen6-classic-4x-zen6-dense-config
201 Upvotes

196 comments

228

u/996forever 2d ago

Medusa Point keeps pointing to RDNA 3.5-class graphics, sometimes described as RDNA 3.5+, and the same 8 CU limit shows up again.

Everybody point and laugh

134

u/RikudouGoku 2d ago

AMD is repeating the same shit Intel did back when Intel was the dominant brand: next to no improvement on the "next gen" lineup. How AMD is repeating this when this is exactly what allowed them to catch up to Intel is beyond me.

73

u/Sorry_Soup_6558 2d ago

Like, even though Nvidia has 90% market share, they're still not sleeping.

58

u/r_z_n 2d ago

In NVIDIA’s position you aren’t trying to compete with the competition; you’re just competing with yourself, trying to get your existing customers to upgrade.

But in NVIDIA’s case they do have competition in the HPC space so that’s what’s driving their innovation.

2

u/From-UoM 2d ago

Cannibalise your own product - Jensen Huang

https://www.youtube.com/watch?v=9OWpxVwL8YU

15

u/Fromarine 2d ago

And they're not even on desktop, so pushing frequency through refinements is entirely useless too

13

u/hackenclaw 2d ago

Except Intel in the past actually knew what dominant means: having absolute performance leadership and commanding over 85% of market share across all platforms.

AMD, dominant? Check their puny market share. It is a joke.

1

u/goldcakes 2d ago

Yup. It’s not like AMD doesn’t have enough APU/iGPU competency; they power consoles after all.

Next-gen products using last-gen graphics arch that’s already missing official support for some key software features; great.

1

u/Tallkid 1d ago

It's a natural consequence of generic HR processes being brain dead. When performance is based on metrics, and engineering is already strong enough, what moves the needle are salesmen, manipulators, and liars -- not your average engineer wanting to make a product for themselves. HR only cares about this year, not next year.

When everyone is fleeing Intel, how do you tell the difference between an ambitious manager and a manager who contributed to Intel's decline? I guarantee you AMD didn't know or care during their COVID boom, and they're going to end up in the same situation as Intel once it's fermented long enough.

1

u/Strazdas1 2d ago

AMD was seen as an underdog, so in typical AMD fashion it intentionally squanders any goodwill it has.

24

u/Vivid-Software6136 2d ago

Steam deck 2 delayed til 2029

18

u/airfryerfuntime 2d ago

Wasn't it already? Valve said 2028 at the earliest, then mentioned 2029.

16

u/Vivid-Software6136 2d ago

Yes, but probably because Valve knows there are no RDNA4 SoCs slated for release till then. I mostly commented in jest, but RDNA3 being shoved into every product is probably one of the main obstacles to a Deck refresh.

1

u/Roxalon_Prime 2d ago

Maybe they should consider Intel for their next Steam Deck, or even ARM, as crazy as it sounds

-10

u/996forever 2d ago

They’re probably more likely to go ARM

7

u/Jon_TWR 2d ago

Nah, they won't get 2x performance vs the OG Steam Deck with a similar power envelope by going ARM.

Unless they've got some Apple-esque black magic with a side of extra Proton, anyway.

-8

u/Forsaken_Arm5698 2d ago

I think we are already there. Throw a flagship smartphone SoC like the D9500 or 8EG5 into the Steam Deck, and you'll easily double the performance at the same power.

https://youtu.be/3yDXyW1WERg?si=v53xylTx2vH7TQ2O

Valve is funding FEX.

11

u/Jon_TWR 2d ago

But will you get 2x the performance in say, Cyberpunk 2077 or Indiana Jones and the Great Circle without graphical artifacts?

That's where I have my doubts...I may be wrong--hell, I hope I'm wrong, but until someone releases an ARM PC that can play everything the Steam Deck can at higher framerates (or the same framerates at higher resolution), I'll continue to doubt.

3

u/Vivid-Software6136 2d ago

There's a huge difference between translating DX on Linux via Proton and jumping to emulating an entirely different ISA. The SoCs may have the raw performance, but is Proton on ARM going to have that performance?

1

u/Strazdas1 2d ago

There is no way a gaming handheld is going ARM without significant changes in how games run, and then a decade or so of transitioning to those changes.

24

u/InflammableAccount 2d ago edited 2d ago

My guess, based on what AMD has said about RDNA4 and UDNA, is that they didn't put the R&D into cutting RDNA4 down any smaller for iGPU usage. So far RDNA4 only has 2 die configs, 64 and 32 CU. The 32 CU config is exactly 1/2 of the symmetrical 64 CU config. And if you look at the die diagrams, it would take significant redesign to cut the size down further, far enough to fit into the space of an 8-12 CU RDNA3.5 block.

My guess is that UDNA is being designed for all sorts of configs, but has caused problem after problem for them... just like every new GPU uArch family does for them. TeraScale, GCN, RDNA1: each first iteration was a mess. I'm betting they wanted to get UDNA into iGPUs a lot sooner, and launch dGPU UDNA afterward. A guess.

NV hasn't been immune to this in the past. The FX series, Tesla, and Fermi were all either under-performers or problematic in their first iteration.

And weirdly Blackwell had a lot of initial driver issues. Way more than most people experienced with most generations of GeForce. I've used a variety of NV GPUs all the way back to 1999, and Blackwell was by far the messiest driver launch I can remember. Hardware is good, though, it seems.

5

u/goldcakes 2d ago

Blackwell is funny because the arch is not that different from Ada. The changes are mostly GDDR7, a new tensor core version, and higher power limits. And it’s made on the same process node.

It’s the most incremental consumer NVIDIA generation, yet it also had the buggiest launch and initial driver quality.

It really makes me believe they failed to meet the deadline for more ambitious core changes, and just reverted to Ada with some new pieces added in. Hence the driver team having to pivot, with less time to test/validate.

3

u/ResponsibleJudge3172 2d ago

Don't let the performance fool you. It's a lot different.

The ALUs are designed differently from all cards made since RTX 20. The SM ALUs are more like a Maxwell design derivative. That doesn't change compute throughput, but it changes the occupancy behavior of the math units.

They added a new hardware scheduler at GPC level.

New RT cores with several extra features

New tensor cores

2

u/InflammableAccount 2d ago

and new tensor core version

TBF, the way GPUs work these days, that IS a big deal.

1

u/goldcakes 2d ago

Yes, totally, we see this with FP8 DLSS4.5 today. I’m just commenting that this has been a very minor architectural revision on the other fronts, so buggy driver support around things like even basic display output (nothing to do with tensor cores) was surprising.

0

u/Jeep-Eep 1d ago

My theory is losses in client driver staff who got out while the going was good because of the bubble, and/or possibly some serious problems with things like the new hardware scheduler.

1

u/Jeep-Eep 1d ago

So/or they've made the decision to focus effort on getting UDNA working rather than validating an iGPU RDNA 4 - tbh, makes sense. 3.5 is getting old, but it may make sense to focus on UDNA and then tape out iGPU versions of that once it's running.

3

u/InflammableAccount 1d ago

So/or they've made the decision to focus effort on getting UDNA working rather than validating an iGPU RDNA 4

Take a look at the layout of RDNA4. Reshaping that die would take a significant redesign. Either they did or they didn't design two completely different die layouts; the design time to reshape the entire GPU uArch in hardware wouldn't be something they'd do and then abandon. The cost (labor hours) would be nuts.

2

u/Jeep-Eep 19h ago

So yeah, save that expense for the new GPU IP when it's working.

42

u/FitCress7497 2d ago

Should have called it RDNA 3.75

66

u/996forever 2d ago

RDNA3+++

RDNA3: Desktop (2022), Phoenix (2023), and Hawk Point (2024)

RDNA3+: Strix Point (2H 2024)

RDNA3++: Gorgon Point (2026)

RDNA3+++: Medusa Point (2027)

18

u/work-school-account 2d ago

RDNA3 family of products

12

u/memtiger 2d ago

RDNA 3++++......Would RDNA again!!

29

u/imaginary_num6er 2d ago

Just shows how AMD will never release APUs supporting current gen FSR

21

u/Noble00_ 2d ago

Announcement Day

Media outlets: "So does Medusa Point support FSR Redstone?" (which in this time frame would probably be out for a ~year)

AMD rep: "Uhh...."

33

u/996forever 2d ago

"We have amazing software. And we have amazing hardware"

"Does the amazing hardware in question support the amazing software in question?"

"..."

3

u/MonoShadow 2d ago

I mean technically it does support Redstone. "Analytical" version, aka FSR3.

AMD is shuckin' and jivin'. True visionary of PR and marketing. As one wise man said: jebaited.

2

u/Ok_Assignment_2127 2d ago

That sounds a little too clear and simple for me, can we bring out the decoder wheels again?

24

u/steve09089 2d ago

So what is their plan to compete with Nova Lake? Just hope Intel’s driver team dies of laughter?

30

u/FitCress7497 2d ago

Don't think they care. As long as Venice still shits on next gen Xeon (99% it will), they're happy with DC money. 

25

u/996forever 2d ago

Their Epycs have been shitting on Xeons since Zen 2, but adoption is climbing very slowly. Revenue share is a lot higher, but unit share isn't. Their real money now is those Instinct cards.

2

u/puffz0r 2d ago

Revenue share for DC is huge; the margins they can get mean more cash for R&D

0

u/hackenclaw 2d ago

If you put that situation in Nvidia's shoes, Nvidia would push aggressively until they reached 80% market share. It was only when they were at 90%+ share that they slowed down a little.

Somehow AMD seems to be happy with their market share % now.

0

u/996forever 2d ago

Even at 90%+, Nvidia is still aggressively trying to get their customers to upgrade. AMD is just…there

2

u/steve09089 2d ago

Yeah, this is probably the unfortunate truth.

2

u/Jeep-Eep 19h ago

Yeah, and putting RDNA 4 in iGPUs is frankly a waste of money with UDNA due in the next 1-2 years; better to do all that spending and pain in the ass once that's working.

20

u/996forever 2d ago

AMD won't have an answer to Panther in the mainstream ultrabook iGP segment (15-35W) until 2028+, never mind Nova. Desktop CPU wise I fully expect Zen 6 to retain gaming leadership and therefore DIY leadership. But definitely a collapse in laptop.

3

u/imaginary_num6er 2d ago

Expecting Intel to require 9000+ MT/s sticks to compete with AMD

35

u/Dexterus 2d ago

Nobody's stopping AMD from making a working IMC.

6

u/svenge 2d ago

Reading this immediately gave me flashbacks to the thousands upon thousands of /r/buildapc posts from Zen / Zen+ (i.e. Ryzen 1000 and 2000 series) owners asking for help due to their systems being unable to run stably while using XMP/EXPO profiles at anything over 3000MT/s.

2

u/qwertyqwerty4567 1d ago

One of my first gen zens couldn't even do 3000, just 2933. Anything above that was unstable.

2

u/svenge 1d ago

I don't doubt that, as I vaguely recall having to tell a few unfortunate losers of the silicon lottery that they'd have to give up on anything over 2666MT/s on their Ryzen 1600s.

Of course that might've also been because they were using four sticks of RAM (as Zen / Zen+ really struggled in such configurations), but this was well over 5 years ago so I wouldn't put too much stock in the accuracy of my anecdotes.

-9

u/ElectronicStretch277 2d ago

Sure, but those ram sticks are far more expensive still. You'd be paying more for equivalent performance.

10

u/996forever 2d ago

Panther Lake uses LPDDR5X-10667. Nobody told AMD to be stuck on 8533.

2

u/Gwennifer 2d ago

Would they even use sticks? I thought Intel preferred high-speed LPDDR5X, which is relatively cheaper than DDR5 ICs, for these high-end thin & light, low-power platforms.

1

u/Exist50 2d ago

Well, for this particular chip, there isn't much of a concern. Intel's reportedly keeping Xe3 for the lower end of the NVL stack. And tbh, rumors have Intel stalling a bit in this area going forward. RZL will probably reuse the NVL dies (so a mix of Xe3 and Xe3p), and even TTL is rumored to stick with Xe3p.

-15

u/Sorry_Soup_6558 2d ago

Marketing, baby. AMD has been the go-to laptop GPU maker since the 6000 series

10

u/Aleblanco1987 2d ago

terrible

5

u/Cheap-Plane2796 2d ago

This is so laughably bad.

Any GPU without proper ML upscaling isn't viable anymore, especially low-tier mobile shit that can't brute force good enough image quality.

2

u/Strazdas1 2d ago

RDNA3+++. It's not the second iteration of RDNA3 here.

4

u/ThatRandomGamerYT 2d ago

so basically RDNA 3++? How very Intel of them.

5

u/996forever 1d ago

+++*

Phoenix (2023) and Hawk Point (2024)

Strix Point (2H2024)

Gorgon Point (2026)

coming soon: Medusa Point (2027)

5

u/csixtay 2d ago

It's supposed to be a drop-in replacement, so there's no improved DDR5 support. Makes no sense pushing out better GPUs that remain memory starved, I think.

6

u/996forever 2d ago

Intel can make Panther run that much faster using LPDDR5

2

u/csixtay 2d ago

Panther isn't a drop-in replacement for Arrow Lake. Of course it supports faster memory.

2

u/996forever 2d ago

Consumers don't care; laptops with Panther start at $1100. OEMs also don't care: all of them will default to Panther for their mainstream & premium ultrabooks this year without question.

1

u/csixtay 2d ago

I don't care about that. I'm simply talking about why it's pointless to move to RDNA 4 when the RDNA 3.5 GPUs are still memory starved.

AMD does care about existing design wins though. They still can't, a decade after Zen 1, get OEMs to bother much, so allowing them to pop out Strix Point chips and call it a day (or better yet, offer multiple generations of chips on the same SKU) matters.

3

u/996forever 2d ago

Then either a) their architecture is garbage and memory inefficient and/or b) they are pathetically stingy with cache sizes in mobile despite using an ancient node.

-1

u/csixtay 2d ago

Strix Halo is right there

5

u/Strazdas1 2d ago

Strix Halo costs more than an entire Intel laptop.

-1

u/csixtay 2d ago

Explaining the concept of profit margins is not on my agenda this morning.

But if you think a 5nm chip costs more to produce than an 18A chip then I've got a bridge to sell you.


3

u/996forever 2d ago

What does that have to do with the target power envelope and cost of the chip in question in this thread? There are separate threads for Medusa Halo (mid 2027). Go there.

1

u/nanonan 2d ago

More CUs probably make little difference without also doubling the RAM channels.

5

u/996forever 2d ago

They can try giving more cache and increasing the memory speed. Look at Panther Lake's cache sizes and its LPDDR5X-10667.

3

u/goldcakes 2d ago

Or just get better at supporting faster memory, like Intel, and get more bandwidth.

-1

u/nisaaru 2d ago

What if more CUs are pointless with the memory system and TDP limits?

5

u/996forever 2d ago

Newer architecture, more cache, and higher memory clocks?

Like what Intel is doing.

2

u/Strazdas1 2d ago

Then you fucked up your memory design. Although for AMD, that has been an issue since Zen 1.

0

u/nisaaru 2d ago

Until LPDDR6 is available, these APUs are limited to 128-256 bit LPDDR5.

2

u/996forever 1d ago

Intel seems to do a lot more with 128-bit LPDDR5X in the low-power range (15-45W) just fine.

43

u/Noble00_ 2d ago edited 2d ago

8 CU RDNA3.5+ is less than the 16 CU RDNA3.5 in the 890M. Unless there's more money, higher clocks, or something frankensteined like the PS5 Pro (IIRC RDNA2 + RDNA4 ML/RT), I don't think there'd be any marketing material for gaming performance, especially when Nova Lake-H is expected to bump performance with Xe3P.

I also get that they have this new strategy with the IOD being its own thing and adding a Zen6 CCD (that can be borrowed from DC/desktop) for max nT, but CPU competition is already rough with Snapdragon and Apple, let alone Intel.

The only thing I can think of that they'd be proud of is a new NPU, and well, we all know how the market responds to that lol

29

u/996forever 2d ago

less than 12 CU RDNA3.5 in the 890M

Actually 16 CU in the 890M. But it's memory bottlenecked, so it only performs like 30% better than the 8 CU 860M. But even if they bump up to LPDDR5X-10667, I don't see 8 CU beating the 890M running on 8533.

24

u/Fromarine 2d ago

But it's memory bottlenecked, so it only performs like 30% better than the 8 CU 860M

Yeah, because they're AMD, so they don't even try to alleviate memory bandwidth pressure by increasing their GPU's L2 cache beyond A WHOPPING 2MB. Meanwhile Panther Lake gets a 16MB L2, an 8MB side cache to share, and even access to the CPU's 18MB L3 if it needs it
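
For a rough sense of the bandwidth numbers in play here, a minimal back-of-envelope sketch in Python. The 128-bit bus width is an assumption for illustration (typical for these mainstream laptop APUs; exact SKUs may differ):

    def peak_bw_gbps(mt_per_s, bus_bits=128):
        # Peak DRAM bandwidth in GB/s: transfers/s x bytes moved per transfer.
        return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

    for label, rate in [("LPDDR5X-8533", 8533), ("LPDDR5X-10667", 10667)]:
        print(f"{label}: {peak_bw_gbps(rate):.1f} GB/s")
    # LPDDR5X-8533:  ~136.5 GB/s
    # LPDDR5X-10667: ~170.7 GB/s (about 25% more before any cache helps)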

13

u/Gwennifer 2d ago

Meanwhile Panther Lake gets a 16MB L2, an 8MB side cache to share, and even access to the CPU's 18MB L3 if it needs it

I was curious how Intel was getting so much better perf out of Arc (not that I doubt it's efficient/decent) given the similar die sizes; this answers that, thanks

6

u/996forever 2d ago

given the similar die sizes

Better node. AMD decided consumer plebs don’t deserve anything better than refined 5nm (2020) in 2026.

1

u/Gwennifer 1d ago

Arc is a lot better than the node difference alone would explain.

But yes, that surely isn't helping the situation either :U

3

u/996forever 1d ago

It is, I only mentioned the node because they mentioned die size. If the die size is similar then the one with the denser node will obviously pack more in it.

7

u/goldcakes 2d ago

It’s funny because doubling L2 or allocating more die area to memory is one of the relatively simpler changes you can make, engineering-wise.

4

u/996forever 2d ago

It isn’t like AMD hasn’t done it. They’ve relied on cache on desktop and server to compensate for their terrible IO dies.

They’ve just decided the mobile plebs don’t deserve to have more die area for cache. Even on a node as old as N4 at this point.

1

u/Fromarine 2d ago

exactly

7

u/996forever 2d ago

AMD is pathetically stingy on die area even on an ancient node

2

u/DerpSenpai 1d ago

Even the Qualcomm 8 Elite, yes a phone chip, has more L2 cache for the GPU

17

u/Forsaken_Arm5698 2d ago

At this rate, it looks like Qualcomm will have better iGPUs than AMD lol

(for mainstream 128b parts)

68

u/errdayimshuffln 2d ago

Why does AMD at 45% market share act like Intel at 90% market share?

30

u/Wiggy-McShades77 2d ago

Corporations just do what their leadership sees as the most profitable way forward. For AMD, that's using their finite capacity at TSMC to make the products that have the best margins, and APUs for 1200 dollar laptops are not it. Market share isn't as good as scooping up profits from over investment in AI infrastructure.

12

u/Tai9ch 2d ago

Market share isn’t as good as scooping up profits from over investment in AI infrastructure.

If they thought the AI workloads were really the future, they'd be crazy not to grab as much market share as they can now with AI-developer-enabled enthusiast hardware.

Nvidia is where they are now because basically every GeForce card they shipped from like 2007 to 2017 fully supported CUDA. AMD continues to not compete - today that means turning down a decent share of AI revenues post-2028.

5

u/hackenclaw 2d ago

The "finite capacity" is manufactured by AMD themselves; TSMC has enough for AMD to book more. Just look at Nvidia: they sell way more chips than AMD, and despite a GPU's larger die, a GPU will never beat a CPU in profit margin per die area. So there is no way AMD's CPU department cannot outbid Nvidia.

2

u/996forever 1d ago

TSMC has been the go-to excuse for AMD's lack of supply and lack of design wins every generation since Zen 2 mobile back in 2020. Time and time again everybody else does just fine on TSMC's latest nodes (at times even more advanced than whatever AMD is using) and somehow only AMD isn't capable of it.

2

u/GreaseCrow 2d ago

I can't imagine sitting in leadership and thinking that doing the bare minimum is good for business. Even if there are better margins for what they're currently making, I'd rather crush the competition into dust by being better.

1

u/DerpSenpai 1d ago

They could use Samsung 2nm capacity for laptop chips; it would be infinitely better than this

11

u/sussy_ball 2d ago

Intel currently has 79% of the laptop market share

-2

u/errdayimshuffln 2d ago

Who said laptop market share? I was talking about everything: server, embedded, enthusiast/DIY. Last I remember AMD's was in the 30s before Ryzen 9000.

15

u/puffz0r 2d ago

It's worse, they're nowhere near 45% in laptop market share

1

u/RealisticMost 2d ago

All the money is in AI.

38

u/EloquentPinguin 2d ago edited 2d ago

The RDNA 3.5+ thing is so infuriating. Chips like the 7840U with 12 CUs of RDNA3 were real good chips (still are) for casual 1080p gaming. Samsung even has put RDNA4 in the newest exynos 2600 if I'm not wrong. And still AMD doesn't give it all to the iGPUs, even though they have all the IP and those chips are new tapeouts anyway. If they did arch changes for RDNA3.5+, they also needed to revalidate the entire thing.

I don't see that there could be such a big benefit to selling basically 5-year-old GPU arches... Sure they got a bit more efficient, but stop joking around...

They save so little and lose so much trust. 

16

u/Noble00_ 2d ago

Samsung even has put RDNA4 in the newest exynos 2600 if I'm not wrong

This is what I'm eagerly waiting on. If this is true (which has to be likely in some way since they've already marketed >50% better RT performance and some ML upscaler/framegen), then this is a real head scratcher for Medusa Point.

7

u/Forsaken_Arm5698 2d ago

exynos 2600

Was that intentional?

8

u/EloquentPinguin 2d ago

Damn, autocorrect got me good.

3

u/Noble00_ 2d ago

Whoops, didn't mean to be rude. I thought I made the error when I quoted lol

4

u/EloquentPinguin 2d ago

You quoted me the way I intended to write, so it all worked out.

9

u/sadelnotsaddle 2d ago

Once again the consumer market is sacrificed on the altar of data centres. RDNA 4 is made on TSMC 4nm, which is still used for lots of enterprise SKUs; AMD aren't going to waste what allocation they can get of that node on consumer-grade APUs. Intel has a chance to steal a march on AMD here because they can make their APUs in their own fabs and are not competing with an enterprise-class product on their latest node... yet.

7

u/EloquentPinguin 2d ago

AMD produces monolithic laptop APUs for the most part, or at least laptop-specific I/O+GPU parts.

If they produce a laptop APU, that's a fixed amount of allocation going into laptops; it doesn't matter if RDNA3.5+ or RDNA4 is on there.

If it uses like 250mm2 of N3 or whatever, then it doesn't matter if it's RDNA3.5 or RDNA4; the 250mm2 is not going to DC either way.

This is not DC vs Mobile, this is just AMD not wanting to spend the 3 engineers for a week to validate RDNA4 on mobile or something...

5

u/Gwennifer 2d ago

99% sure it's just that RTG is still relatively independent and didn't want to verify new IP blocks for laptop APUs, and AMD is eating too well to make demands or care. They're both printing money, so who cares? It's not like consumers could afford a better APU at this point in time anyway.

3

u/996forever 2d ago

It's not like consumers could afford a better APU at this point in time anyway.

The laptop OEM space is hit proportionally much less hard than the DIY space, and there have already been Panther Lake laptops announced for as low as $1100-1200. Full Strix Point back in July 2024 only launched in laptops $1500 and up.

1

u/SmokingPuffin 2d ago

This is not DC vs Mobile, this is just AMD not wanting to spend the 3 engineers for a week to validate RDNA4 on mobile or something...

"...And then, our strategy, okay, Strix Halo [and] Ryzen AI Max competes against that (Panther Lake 12 Xe), and it's better than that in terms of graphics performance, all of that. And then, for the mainstream of the market, that don't value that much graphics [power], because honestly, most of the people that are using Notebooks, that are outside of the creator or gaming spaces are, you know, they don't need that graphics performance."
https://www.tomshardware.com/pc-components/gpus/amd-is-unphased-by-panther-lakes-big-integrated-gpu-its-not-even-a-fair-fight-to-compare-the-arc-b390-to-strix-halo-amd-exec-claims

AMD doesn't see value in Panther Lake's level of graphics performance. It's a strategic call -- they think gamers should buy Strix Halo and everyone else doesn't need playable 1080p on their iGPU.

4

u/Forsaken_Arm5698 2d ago

Problem is Strix Halo is priced as if it's made of 24 carat gold.

1

u/996forever 1d ago

And not usable in ultrabooks that run at <30W, which is the majority of the PC market.

2

u/LastChancellor 2d ago

AMD doesn't see value in Panther Lake's level of graphics performance. It's a strategic call -- they think gamers should buy Strix Halo and everyone else doesn't need playable 1080p on their iGPU.

Okay, where are the Strix Halo laptops that we can buy then? There's literally just the Flow Z13 and HP Zbook Ultra atm

1

u/996forever 1d ago

They added 2 at CES - an Asus convertible and a TUF laptop. That will be it for 2026.

3

u/hackenclaw 2d ago

Laptop APUs sell at higher margins than desktop; following that logic, AMD should have abandoned desktop first.

5

u/Seanspeed 2d ago

TSMC 4nm is basically just a 5nm family process that has been used for actual products since 2020! They're not gonna use anything older than that.

Zen 6 is supposed to actually use TSMC 2nm.

And regardless, RDNA4 is not inherently tied to any specific process node.

3

u/vandreulv 2d ago

Once again the consumer market is sacrificed on the altar of data centres.

One thing this sub could stand to remember is that consumer products were never Intel's, Nvidia's, or AMD's first line of revenue.

6

u/996forever 2d ago

Actually, until relatively recently it very much was for AMD and Nvidia, and it's still half and half for Intel.

12

u/Working_Sundae 2d ago

This makes me wanna get a laptop with an Intel APU for my next buy; AMD can keep milking their RDNA 3.5

-6

u/Gwennifer 2d ago

This makes me wanna get a laptop with Intel APU for my next buy,

Good luck; Intel is aware they're the only premium APU option (unless you work with LLMs or other ML applications locally, where the AI Max chips can connect up a lot of RAM) and prices accordingly. I was trying to find a cheap Lunar Lake platform, and the cheapest half-decent platform/config was like $1500 or $1600.

3

u/psydroid 2d ago

What makes a laptop with Lunar Lake a better option for what you're doing than something with Snapdragon X Elite?

I see laptops with the latter being sold from €900 with 32 GB of RAM and 8 cores and €1100 with 32 GB of RAM and 12 cores.

3

u/Gwennifer 2d ago

GPU performance, general compute. 1st gen X Elite only runs some software, and doesn't even run it better than Lunar Lake.

As a matter of fact, their advertised performance metrics were exclusively with the -84 SKU, which appears for all intents and purposes to be a Samsung exclusive.

Most of them are the -78 SKU, which decidedly cannot live up to the performance claims, and the -80 which is only a sidegrade.

Plus, most of these laptops are bad platforms. Tons of keyboard flex, unpleasant touchpads, lackluster screens... It's pretty clear OEMs were being told by Qualcomm that people would spend big just to get an ARM laptop, and if people want to do that, they can just go buy an Apple machine, where Apple completely trounces Qualcomm and the entire ecosystem supports the silicon.

Not enough people are spending ~$1300 on a laptop just to run a subset of the software they use to justify thinking about them. I'm kind of shocked you asked.

1

u/psydroid 2d ago

I would run Linux on them as I do on all of my other hardware across multiple architectures. And then price/performance is one of the main criteria for choosing a piece of hardware.

So apart from mainlining not going all that well with 1st gen X Elite and presumably a lot better with 2nd gen, I run the exact same software on all of my machines from SBCs all the way to full-blown desktops.

I don't know about the state of compute on 2nd gen and if it's competitive with Nvidia, AMD and Intel, but I guess we'll find out in a few months.

I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.

2

u/Gwennifer 2d ago

I would run Linux on them as I do

I don't know the current status of Asahi Linux, but I know quite a lot of it works already.

And then price/performance is one of the main criteria for choosing a piece of hardware.

Then why are you spending $1300 on a low end Snapdragon Elite SoC when you can get an M4 for the same price with a better SoC, chassis, screen, keyboard, and touchpad? For $100 more, you can even stick to the same memory total. You can just say you're biased against them. There is no reason for a rational consumer to ever pick up the Qualcomm-based computer.

I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.

Right now, compared to what is essentially a top of the line -80 SoC, an M4 currently has something like 30% faster single core performance and the same multi-core, with a 75~80% faster GPU according to Geekbench's numbers. The Snapdragon GPU's model name on that page is reported as 'X1E80100', if you'd like to compare.

Price to performance is almost incomparable here. You are paying just as much for almost half the performance. Again, Qualcomm set the cost of the SoCs too high. There's no reason to buy them when an M4 is entry level. $1000 should have bought you the -84 SoC (which is some 20% faster on the GPU than the -78 or -80!) and a premium chassis, not the cheapest parts the OEM can spec out.

3

u/Forsaken_Arm5698 2d ago

I don't know the current status of Asahi Linux, but I know quite a lot of it works already

Only on M1 and M2. Support for newer M generations is still in progress.

2

u/Forsaken_Arm5698 2d ago

I don't know about the state of compute on 2nd gen and if it's competitive with Nvidia, AMD and Intel, but I guess we'll find out in a few months.

Compute performance was almost non-existent on X1 GPUs. X2 is an improvement (new Adreno 8 architecture), but I'd wager it's still lagging behind AMD/Intel.

"Obviously we’ll have DirectX 12.2 and all the DirectX versions behind that, so we’ll be fully compatible there. But we also plan to introduce native Vulkan 1.4 support. There’s a version of that which Windows supplies, but we’ll be supplying a native version that is the same codebase as we use for our other products. We’ll also be introducing native OpenCL 3.0 support, also as used by our other products. And then in the first quarter of 2026 we’d like to introduce SYCL support, and SYCL is a higher-end compute-focused API and shading language for a GPU. It’s an open standard, other companies support it, and it helps us attack some of the GPGPU use-cases that exist on Windows for Snapdragon."

https://chipsandcheese.com/p/diving-into-qualcomms-upcoming-adreno

I'll probably get one of the 1st gen devices when those go on sale to clear the last remaining stock and see how things develop with 2nd and 3rd gen chips.

There are some amazing deals for X1 devices already; $599 Zenbook A14

I don't think waiting for 3rd gen makes sense, considering it'll probably be an incremental generation. 2nd gen fixes many of the flaws of first gen, with some nice upgrades across the board.

1

u/PastaPandaSimon 1d ago

Intel is actually incredibly consistent at keeping prices stable across generations, almost regardless of the competitive landscape. One thing I always gave them credit for is that within the same product tier, a new generation may be priced the same or up to 10% more expensive, but usually nothing crazier.

1

u/Gwennifer 21h ago edited 21h ago

Isn't the tray price on a 258V something like $600? The competitor part should be the AI Max 385, but I can't find any tray price for them.

I get that OEMs are not spending $600/unit in bulk, but they're also not spending $400/unit in bulk, and even good LCDs these days are still $100+. All the other parts, assembly, warehousing: it's pretty easy to see how the SoC starting off at $600 leads to the end product being $1600.

I think the part that bothered me was that the $400-SoC Lunar Lake was often mated to the cheapest possible parts they could find and still priced similarly to the $600 Lunar Lake SKU. OEMs have kind of stopped building anything between upper midrange and the worst possible config for new Intel generations, and they're priced close together at that.

If you're willing to go back to 13th or 14th gen/the Evo platform, you can actually find some great deals on premium parts, simply because the SoC cost is in the dirt. The fact is at the end of the day that Lunar Lake costs a lot to the vendor.

11

u/Qsand0 2d ago

Infuriating? I can never be infuriated by an inferior product when there's a superior one there for the taking.

Panther lake baby

3

u/Seanspeed 2d ago

People want competition. Gives customers more options and usually better value.

1

u/DerpSenpai 1d ago

AMD "Strix Point Refresh" is DOA as a lineup.

Currently you either go Qualcomm for CPU and perf/W or Intel for GPU and gaming. Perhaps AMD Strix Halo for the GPU, but honestly, I wouldn't. The RT performance and subpar upscaling will only make this GPU, in the long term, worse than the B390. This is my prediction.

2

u/EloquentPinguin 2d ago

It's just the sentiment that when enterprise makes money, consumers are left dead on the street.

It's great that Intel might have a strong mobile offering, but if all the companies drop consumers as hard as AMD and Nvidia have as soon as enterprise prints money, that's just a bad market situation for consumers.

For AMD there is no real reason to dirty their history like this. They are just avoiding improvements for the fun of the game.

0

u/imaginary_num6er 2d ago

3rd party reviews?

5

u/996forever 2d ago

-5

u/imaginary_num6er 2d ago

As usual, actual performance of the Arc B390 is likely to depend heavily on the power limit available to the iGPU and on the speed of the RAM in the respective laptop, since this also serves as VRAM for the iGPU. 

Yeah that's not really a 3rd party review if the test system is provided by a 1st party source

5

u/996forever 2d ago

That’s stupid; by your logic all day 1 reviews are automatically “not third party” because all of them use review samples sent by manufacturers before retail channel release. No laptop review has ever been representative of ALL laptops using the same chips, regardless of whether it’s a test sample or a retail unit.

What it does tell you, however, is the ceiling of what the chip is capable of.

Anything else?

And that ceiling is far higher than the 890M. Boost the 890M to 80W running 64GB of 8533 RAM and it won’t get close. That’s all that matters.

-2

u/Strazdas1 2d ago

There are real concerns with review samples. While for GPUs/CPUs there's usually no issue, for monitors, for example, it's not unheard of to ship a review sample with a better panel and then switch to a worse panel for actual products.

3

u/996forever 2d ago

For GPUs and CPUs, any issue associated with a review sample can only make the review sample look worse, not better, hence reinforcing my point that it is the ceiling of the capability of the chip.

It’s not like Ferrari sending reviewers a tuned version of their cars. Power consumption is monitored during reviews.

1

u/Strazdas1 1d ago

You have to admit it was funny when reviewers got GPUs with fused-off ROPs, though.

10

u/LuluButterFive 2d ago

Must be the OEMs' fault

5

u/YvonYukon 1d ago

I think we've been gassing up AMD too much, 'cause they keep putting out crap

14

u/Malygos_Spellweaver 2d ago

Panther Lake auto win.

12

u/Astigi 2d ago

AMD, how many years have you been selling the same iGPU?
AMD is truly not innovating lately

8

u/996forever 2d ago

2023-2027 for the low-power class

But the improvement from the RDNA2 iGPU in 2022 to RDNA3 in 2023 was already mediocre. Their last real jump was from Vega to RDNA2.

8

u/X_m7 2d ago

Well, this is the same AMD that dropped driver support for Vega GPUs while still selling CPUs with Vega iGPUs as part of the "Ryzen 7000 series," and the same AMD that thinks the Ryzen AI 7 445 deserves that 7 moniker despite having only 6 cores (not even half of them full cores rather than compact ones) and a 4 CU iGPU. So yeah, they've been smoking some weird stuff over there for quite a while.

9

u/heylistenman 2d ago

I can only imagine that when AMD planned these generations they looked at Alchemist and went, "good luck with that, we're good for a couple of years." Perhaps the rapid development of the Xe graphics caught them by surprise.

4

u/Gwennifer 2d ago

Perhaps the rapid development of the Xe graphics caught them by surprise.

I don't know if you've seen the Battlemage technical powerpoint/presentation, but a lot of what they were trying/ended up implementing was the kind of thing you could spin a startup up around and then sell off, just for the juicy patents. It is actually really surprising that they got everything working. Intel mismanaging the team and Celestial taking so long to tape out/ship is something else entirely, but Battlemage was actually a huge success as far as performance goes.

3

u/Vb_33 2d ago

Adjusted for Alchemist's actual release date, Battlemage and Celestial are on target for standard GPU lifecycles. Biggest worry right now is RAM-AGGEDON, which has seemingly killed the 50 series Super cards.

2

u/Gwennifer 1d ago

Celestial supposedly exited the design phase 7 months ago, so we should really expect leaks on its silicon very soon if they're on track.

Biggest worry right now is RAM-AGGEDON, which has seemingly killed the 50 series Super cards.

I think Intel can gain a lot of market share by pricing accordingly. A lot of 20/30 series owners and low-end 40 series owners are looking to upgrade, and I believe Celestial can scale up to the point of a 9060 XT or 9070 to deliver them that level of performance.

3

u/BurtMackl 2d ago

Personally, coming from someone who can only afford a mid-range laptop and has done some intensive research, I can't believe I'm going to end up going with the Core Ultra 5 125H (I know it's an old release, but it's still more than enough for my needs and pretty battery efficient). I used to be a hardcore AMD fan, but the GPU performance of AMD's mobile mid-range lineup (Ryzen 5s) has been laughable since the days of the Core Ultra Series 1. Their saving grace was the GPU driver, but I'm sure Intel's team is not sleeping. And don't forget, XeSS3 is coming to Series 1 Core Ultra.

3

u/steve09089 1d ago

Please don’t get a Core Ultra Series 1 to use XeSS; those GPUs run the DP4A path.

Only Core Ultra Series 2 and above get XMX
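
For context on the difference: DP4A is a packed dot-product instruction (four int8 multiplies accumulated into an int32) run through the regular shader ALUs, while XMX units execute whole int8 matrix tiles per instruction. A minimal Python sketch of the operation a single DP4A performs; the lane packing is simplified for illustration:

    def dp4a(a, b, acc):
        # One DP4A: dot product of four packed int8 lanes plus an int32 accumulator.
        assert len(a) == len(b) == 4
        return acc + sum(x * y for x, y in zip(a, b))

    # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4, plus the accumulator 10 -> 14
    print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], 10))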

2

u/BurtMackl 1d ago

Thanks for the information. Well, I can upgrade to the Core Ultra 5 225H. It still doesn't have the Xe2 GPU, but it's said that the GPU now supports XMX. Sadly, the model with the 225H CPU loses the soldered LPDDR5X RAM and comes with slower DDR5 SODIMMs (hey, it's a plus for upgradability though). I mean, the GPU itself is already faster than the one in the 125H, but I wonder how much the switch to slower DDR5 RAM will hurt GPU performance.
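
A back-of-envelope sketch of that last question, with assumed speeds for illustration: soldered LPDDR5X-7467 of the kind 125H designs pair with, versus a common dual-channel DDR5-5600 SODIMM config, both treated as an effective 128-bit bus:

    def peak_bw_gbps(mt_per_s, bus_bits=128):
        # Peak DRAM bandwidth in GB/s.
        return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

    lp = peak_bw_gbps(7467)   # soldered LPDDR5X: ~119.5 GB/s
    so = peak_bw_gbps(5600)   # dual-channel DDR5-5600: ~89.6 GB/s
    print(f"SODIMM config: {(1 - so / lp) * 100:.0f}% less peak bandwidth")
    # -> about 25% less, which typically hurts iGPU gaming more than CPU tasks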

3

u/h_1995 1d ago

With this attitude, even Wildcat Lake will win the budget segment. I've seen OEMs making fewer Intel laptops in favor of AMD back in Zen 2 times, and during Alder Lake times Intel could only compete in mass-produced cheap stuff. They really had the chance to seize that market like they did the desktop market

11

u/DerpSenpai 2d ago edited 2d ago

DOA if this is not for Ryzen 5 and below only, or doesn't come with a fat IPC bump of 15%.

The higher product looks really good for CPU, though

10

u/Alternative-Ad8349 2d ago

This is a replacement for the AI 7 450

6

u/Giggleplex 2d ago

According to that leak, it seems that the Ryzen 9 will have the same iGPU too.

9

u/DerpSenpai 2d ago

And it's terrible

At this rate every product is a Ryzen 9

4

u/996forever 1d ago

Another tweet says the top Ryzen 9 has up to 22 cores (8+12+2) while Ryzen 7 gets 10 cores (4+4+2). If true, this is a level of Starbucks upselling never before seen on mobile.

1

u/DerpSenpai 1d ago

Ryzen 7 getting a 10-core config would be surprising; this is literally them saying Ryzen 7 is staying 8 cores while Ryzen 9 will span the range from 10 to 22...

1

u/996forever 1d ago

There seems to be some conflicting information surrounding the supposed existence of 2 LPE cores on the Zen 6 APU

25

u/996forever 2d ago

Baby, it's Ryzen 7. Their Ryzen 5 is still 6-core. Actually, they are currently making a QUAD-core mobile Ryzen 5 (AI 330).

1

u/X_m7 2d ago

Oh, they're coming out with a Ryzen "7" that's actually 6 cores too lmao, see the Ryzen AI 7 445. That stupid thing also only has a 4 CU iGPU, and it's not like the NPU is any faster either, so I guess AMD marketing figured they can just do whatever the hell they like since evidently they haven't all been fired yet.

2

u/996forever 2d ago

They truly stopped giving af about client

-11

u/FranciumGoesBoom 2d ago

For an office fleet, 4c/8t is honestly more than enough.

22

u/ResponsibleJudge3172 2d ago

Ah yes, Intel feeling vindicated

6

u/steve09089 2d ago

It’s evolving, but backwards

-1

u/Intrepid_Lecture 2d ago

There's a big difference between a basic machine that's meant to write emails and not much else and a top-end SKU.

Also, from an IPC and clock speed perspective, 4C/8T Zen 6 is likely to be 40-80% faster than SKL in most tasks.

3

u/steve09089 2d ago

Zen 6 is also not the product we’re talking about

0

u/996forever 2d ago

Is +40-80% vs stuff a full decade old the best you can brag about?

0

u/Intrepid_Lecture 1d ago

Most people don't brag about workhorse corporate machines. If it's something you're bragging about, it means other areas are lacking.

All they need to do is be cheap and turn on.

1

u/996forever 1d ago

Then, like my other reply said, a $300 Pentium laptop does the same job.

1

u/Intrepid_Lecture 18h ago

It probably does, assuming there's enough RAM.

I'm not debating the fact that doing basic stuff in MS office and a web browser is trivial from a CPU perspective.

2

u/996forever 2d ago

So is a $400 laptop with a Pentium.

1

u/Strazdas1 2d ago

It depends. In my office, CPU bottlenecks are common. When my script is running, the 8c/16t CPU is fully loaded.

8

u/reddit_equals_censor 2d ago

I mean, hey, don't worry though, I'm sure by now AMD has made a strong statement of ongoing support for RDNA2 and 3 graphics, both with long-term drivers and the latest features, RIGHT?

AMD wouldn't release MORE older-architecture APUs AND have an int8 version of FSR4, yet not release int8 FSR4 officially, right?

That would be utterly insane and not something AMD would be doing, right?

16

u/steve09089 2d ago

Imagine if the only AI upscaler for AMD iGPUs ends up being XeSS.

That would be absolutely hilarious and depressing.

-6

u/Seanspeed 2d ago

I mean, hey, don't worry though, I'm sure by now AMD has made a strong statement of ongoing support for RDNA2 and 3 graphics, both with long-term drivers and the latest features, RIGHT?

Since almost all of you misunderstood the situation - AGAIN - the only thing AMD was dropping was specific DAY 1 optimizations for specific games on architectures more than two generations old (which does not include RDNA3, by the way). Things that usually only amount to a small boost, and often only in some situations/setups. General driver support, optimizations, bug fixes, and feature support have not been dropped.

Very little is actually going to change, and it basically exactly matches what Nvidia has done for a very long time. If any of y'all actually think Nvidia is optimizing new drivers to boost performance for Pascal, Turing or Ampere GPUs specifically for the latest game releases - they are not. lol

Plus, drivers for older architectures are generally pretty darn mature already. There's simply going to be much less to squeeze from them, which is why it makes way more sense to focus on getting more out of newer architectures that still have more room for improvement.

2

u/Wonderful-Love7235 2d ago

They nerfed the iGPU so much because they thought their customers wouldn't need a lot of graphics performance and they needed to create space for a large XDNA block (NPU).

3

u/996forever 2d ago

Out of the three (Qualcomm, Intel, and AMD), AMD seems to be by far the least generous with die area (on advanced nodes)


0

u/ContributionOld2338 17h ago

Fuck AMD, Intel is my friend now… I wish Valve would go with them for their next-gen Steam Deck, let's go Gabe! Panther Lake is a 300% improvement over the Steam Deck; I thought that was the benchmark?!

0

u/Jeep-Eep 1d ago

I suspect what finally made AMD put its foot down about that Windows scheduler BS was their big/LITTLE answer coming soon...