I just can't get behind Intel as a viable option. The risk of cancellation is too great; maybe if they weren't in such dire financial straits. Discrete GPUs are not core to their business, and building market presence takes several years. That is, it's my opinion that it won't be a profitable business unit for several years as they try to compete with and catch up to Nvidia and AMD.

Secondly, their track record - this isn't their first foray into discrete GPUs. They tried in 1998 with the i740 and then the i752 in 1999 - nothing came of that. Then there was Larrabee (which never saw the light of day) and the Iris Xe Max.

Overall, I think their financial situation is what will be Arc's undoing. We'll maybe see something in 2026, and then updates, in both hardware and software, will slowly grind to a halt. I hope I'm wrong, but I don't see them continuing.

Nvidia is dropping support for 10xx GPUs with driver release 580 (we're on 576.80), though they haven't released the schedule. So I'm living on borrowed time anyway. I think INTC stock provides enough commentary on where the market thinks they're going. I'm actually more bearish than the market.

Which brings me back to the 5050.

There's one card on Amazon for $300 (MSRP is $250). There's a 5060 for $300. Maybe we get the thing where a new card is MSRP+ for a month or two and then gets back down to MSRP. It's good to see at least one for sale. I'll check Microcenter as well.

I'm going to put the 1660 Ti back in the desktop for now and buy something newer shortly. Either a 5050, 5060 or 4060.

 
Intel is probably a worthy option. If you aren't worried about ray tracing, machine learning performance, or the latest and greatest game optimizations, the Arc GPUs compete fairly well — if you can get them near (a decent) MSRP, of course.


I don’t have anything less than “70” tier cards, but maybe the following will provide some useful comparisons, nonetheless.

First up, a 4070 Ti (non-Super) Asus ProArt OC, ~2.5-slot card.

While testing with Geekbench 6.4, the majority of scores landed between 202,000 and 212,000, with only a few outliers: the initial run after install was 176,167, plus a 198,215 and a 196,###. In the 80-100% power (limit) range, the average score was ~207,000. At a 50% power limit, the scores fell between 196,000 and 206,000.

Next is an RTX 5070 Founders Edition, a 2-slot card shorter than a triple-fan card but longer than a typical third-party dual-fan card.

Geekbench scores ranged from 225,000 to 231,000, with an average of ~228,400. Somewhat surprisingly, the GPU performed slightly better overall at a 70% power limit.* In the same number of runs, the scores landed between 225,000 and 232,000 with an average of ~228,700. There were also no outliers to mention.

Regarding power and thermals, here are some HWiNFO64 screenshots taken after each test, spanning equivalent power-limit runs, to compare max GPU power, temperatures, and fan speed(s).

View attachment 2527552

50-percent power limit:
View attachment 2527554


View attachment 2527555

70-percent power limit:
View attachment 2527558

Geekbench is a shorter test, so I decided to add Superposition data to further demonstrate power usage, as well as the benefits of adjusting/tuning power limits. For example, the 4070 Ti had only ~3% less performance with 25% less maximum power available.

RTX 4070 Ti - 100% (stock) power cap
1080p Extreme score: 11718
View attachment 2527561

RTX 4070 Ti - 75% power limit
1080p Extreme score: 11385
View attachment 2527562

RTX 4070 Ti - 50% power limit:
1080p Extreme score: 10248
View attachment 2527563


RTX 5070 - 100% (stock) power cap
1080p Extreme score: 14794
View attachment 2527564

RTX 5070 - 75% power limit
1080p Extreme score: 14784
View attachment 2527565

RTX 5070 - 70% power limit*
1080p Extreme score: 14779
View attachment 2527566

RTX 5070 - 70% power limit*
4K Optimized score: 19235
View attachment 2527567
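For anyone skimming, here is the same comparison reduced to percentages, as a quick sketch using only the scores listed above; note that the percentages are power caps (limits), not measured draw, so treat it as a rough comparison rather than a true efficiency measurement.

```python
# Quick percentage view of the Superposition 1080p Extreme scores listed above.
# The keys are power *caps*, not measured draw; the HWiNFO screenshots show
# the actual wattage for each run.
scores_4070ti = {100: 11718, 75: 11385, 50: 10248}
scores_5070 = {100: 14794, 75: 14784, 70: 14779}

for name, scores in (("RTX 4070 Ti", scores_4070ti), ("RTX 5070", scores_5070)):
    stock = scores[100]
    for cap, score in sorted(scores.items(), reverse=True):
        loss = (1 - score / stock) * 100
        print(f"{name} @ {cap:>3}% cap: {score:>5}  ({loss:.1f}% below stock)")
```

Run as-is, that works out to roughly a 2.8% loss at 75% and 12.5% at 50% on the 4070 Ti, and about 0.1% at 70% on the 5070.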

* The RTX 50 series GPUs don't have as low a power-limit floor as previous generations. This has been noted by others; unfortunately, I don't recall the exact video titles to locate and reference.

Other testing information: ambient temp range of ~27 to ~28°C. Case fans were at 'idle' (i.e., <=1000 RPM). Systems featured the AM5 platform, Windows 11, and the latest drivers and BIOS/firmware. The tool used to adjust the GPU power limit was ASUS GPU Tweak III.
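For what it's worth, if you'd rather not install GPU Tweak III, something like the sketch below should do the same power-limit adjustment through nvidia-smi (run from an elevated/admin prompt). This is only a sketch; the GPU index 0 and the 70% target are placeholder choices, not anything from the runs above.

```python
# Rough sketch: setting a percentage power cap with nvidia-smi instead of
# ASUS GPU Tweak III. Assumes nvidia-smi is on PATH and the prompt is elevated.
import subprocess

def query_gpu(fields: str, gpu: int = 0) -> list[float]:
    """Return the requested nvidia-smi power fields for one GPU as floats (watts)."""
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu), f"--query-gpu={fields}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return [float(v) for v in out.split(",")]

default_w, min_w, max_w = query_gpu("power.default_limit,power.min_limit,power.max_limit")

target_pct = 70                                      # placeholder, e.g. the 70% runs above
target_w = max(min_w, default_w * target_pct / 100)  # clamp to the card's power floor

subprocess.run(["nvidia-smi", "-i", "0", "-pl", str(int(target_w))], check=True)
print(f"Requested limit: ~{int(target_w)} W ({target_pct}% of the {int(default_w)} W default)")
```

The clamp to power.min_limit is there because of the footnote above: on the 50 series the floor sits higher, so asking for 50% may simply be refused.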
Yeah, only the RTX Pro 6000 Blackwell has the ability to go down to 50% this generation. No clue why they limit all the other cards.
 
I just can't get behind Intel as a viable option. The risk of cancellation is too great; maybe if they weren't in such dire financial straits. Discrete GPUs are not core to their business, and building market presence takes several years. That is, it's my opinion that it won't be a profitable business unit for several years as they try to compete with and catch up to Nvidia and AMD.

Secondly, their track record - this isn't their first foray into discrete GPUs. They tried in 1998 with the i740 and then the i752 in 1999 - nothing came of that. Then there was Larrabee (which never saw the light of day) and the Iris Xe Max.

Overall, I think their financial situation is what will be Arc's undoing. We'll maybe see something in 2026, and then updates, in both hardware and software, will slowly grind to a halt. I hope I'm wrong, but I don't see them continuing.
Considering the Arc architecture is also used in some of their integrated GPUs, I'm not too worried. Maybe they won't release too many successors, but I don't think they will instantly drop driver support.
 
The 1660 Ti is in and it's using about 11-12 watts with two of my programs running. GPU core usage is in the low single digits. So I can live with this for now, but I'd love to get a 50xx to see if I can get it lower, and so I can get driver updates.
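If you want to log that kind of idle draw over time instead of watching a monitoring tool, a rough sketch along these lines works with nvidia-smi's query mode; the interval, sample count, and file name here are arbitrary choices for illustration, not anything from the post above.

```python
# Minimal idle-power logger sketch: polls nvidia-smi every few seconds and
# appends power draw / GPU utilization / temperature rows to a CSV file.
import csv
import subprocess
import time

FIELDS = "timestamp,power.draw,utilization.gpu,temperature.gpu"

with open("gpu_idle_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(60):  # ~5 minutes at one sample every 5 seconds
        row = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        writer.writerow([col.strip() for col in row.split(",")])
        time.sleep(5)
```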
 
Considering the Arc architecture is also used in some of their integrated GPUs, I'm not too worried. Maybe they won't release too many successors, but I don't think they will instantly drop driver support.
True, at worst you would just use older drivers.
 
Considering the Arc architecture is also used in some of their integrated GPUs, I'm not too worried. Maybe they won't release too many successors, but I don't think they will instantly drop driver support.
Except that the same thing happened with the Intel Xe Max/Xe LP architecture: they discontinued the discrete line while still using the architecture for iGPUs.

"Intel draws a line in the sand to boost gross margins — new products must deliver 50% gross profit to get the green light." The article talks about new products, but I think you can infer that he's willing to take an axe to underperforming business units.

My point is that just because it's being used in iGPUs doesn't protect the dGPU line. They may not cancel it, and I could be totally wrong - wouldn't be the first time, nor the last. But, and it's a big but, are you willing to spend several hundred dollars on a GPU that has driver/compatibility shortcomings, with the risk of them walking away?
 
Except that the same thing happened with the Intel Xe Max/Xe LP architecture: they discontinued the discrete line while still using the architecture for iGPUs.

"Intel draws a line in the sand to boost gross margins — new products must deliver 50% gross profit to get the green light." The article talks about new products, but I think you can infer that he's willing to take an axe to underperforming business units.

My point is that just because it's being used in iGPUs doesn't protect the dGPU line. They may not cancel it, and I could be totally wrong - wouldn't be the first time, nor the last. But, and it's a big but, are you willing to spend several hundred dollars on a GPU that has driver/compatibility shortcomings, with the risk of them walking away?

It looks like my 1050 was launched in 2016, so it's had nine years of support and might get up to ten.

That's about the limit of what Apple provides for Macs, and I definitely think that's reasonable. The problem with Intel is failure risk down the road. I think they would get acquired or split up, but the company acquiring them might take a hatchet to product support.
 
Except that the same thing happened with the Intel Xe Max/Xe LP architecture: they discontinued the discrete line while still using the architecture for iGPUs.

"Intel draws a line in the sand to boost gross margins — new products must deliver 50% gross profit to get the green light." The article talks about new products, but I think you can infer that he's willing to take an axe to underperforming business units.

My point is that just because it's being used in iGPUs doesn't protect the dGPU line. They may not cancel it, and I could be totally wrong - wouldn't be the first time, nor the last. But, and it's a big but, are you willing to spend several hundred dollars on a GPU that has driver/compatibility shortcomings, with the risk of them walking away?
There is a massive difference, though, because they never released those discrete GPUs. Arc cards are on the market today and available to buy. There are consumers like me who already have Arc cards that they have to support.

Personally, I already bought an Arc card, so clearly I am willing to take that risk. I am interested in the Pro cards too.
 
Nowadays, due to time constraints, I mostly go for pick-up-and-play, turn-based, and puzzle games. However, if it's not that, in most scenarios I want spectacular visuals. Path tracing looks to be the next evolution, but… dang!… with an RTX 5090 struggling...


Pivoting a little back to the Mac side, without strong optimization, it’s showing promise.

 
Seems extreme. I've not watched the entire 48-minute video, but there are clear differences. The question is, does that difference justify the upcharge for a 5080 or 5090?
 
Seems extreme. I've not watched the entire 48-minute video, but there are clear differences. The question is, does that difference justify the upcharge for a 5080 or 5090?
In fast-paced games it doesn't matter as much. In slower games it can make them feel more immersive. Of course, if you are not playing at 4K you probably don't need a 5090.
 
In fast-paced games it doesn't matter as much. In slower games it can make them feel more immersive. Of course, if you are not playing at 4K you probably don't need a 5090.
You're preaching to the choir; I'm one of those odd people who disables ray tracing in games, as I find it unnecessary. That's largely the reason I don't opt for a high-end GPU when looking to upgrade.
 
You're preaching to the choir; I'm one of those odd people who disables ray tracing in games, as I find it unnecessary. That's largely the reason I don't opt for a high-end GPU when looking to upgrade.
Yeah, it also helps to skip multiple generations when possible. You get a bigger performance improvement that way.

I am curious to see this Super refresh folks are talking about. I have no real need for more VRAM, so I guess I am going to sit on the sidelines for it.
 
I corrected the Mac Studio M3 Ultra link.

I also added another source — the embedded YT video. I've been playing a variety of games on various levels of hardware since the '90s. Quite frankly, a solid 30+ FPS is certainly sufficient; marketing is unlikely to convince me otherwise. Although a solid 60 FPS is a good target. Anyway, the ~24 FPS 1% lows witnessed are cutting things way too close, in my opinion, when spending $3,000+ USD on the hardware.

I am curious to see this Super refresh folks are talking about. I have no real need for more VRAM, so I guess I am going to sit on the sidelines for it.
Same. The VRAM drama is, as usual, quite exaggerated. Are there valid scenarios? Yes. However, even the "reviewers" have included the typical telling disclaimers, something like "Nvidia doesn't market this as a 4K card and nobody would probably really play at that resolution. But we'll prove 4K exceeds the VRAM capacity."
🤦‍♂️
In fact, now that I look back at the video, his Ultra-settings-with-path-tracing demo reports ~17 GB of VRAM used. Yes, that exceeds the 5080's capacity (16 GB). However, not by much — and that's not the real bottleneck anyway. With "DLSS Quality" it's less than 16 GB. So, yeah.
 
Looks like the 12VHPWR connector still has issues; this is one of those things that gives me pause with the higher-end cards.

Older video, but it still has good points.

 
Looks like the 12VHPWR connector still has issues; this is one of those things that gives me pause with the higher-end cards.

Older video, but it still has good points.

If I understand correctly, the problem is the lack of load balancing across the 6 power pins. The 3090 Ti did balance them (I think it was 2 pins per rail), and for whatever reason Nvidia stopped doing that for the 40/50 series high-end cards. The Asus ROG Astral doesn't balance, but it will alert you to an issue; I don't know how many of those cards have burned power connectors versus others, so I can't say how useful that is.
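To put rough numbers on why the balancing matters, here's the back-of-the-envelope math. It assumes the commonly cited ~9.5 A per-pin rating for the 12VHPWR terminals; that rating is my assumption, not a figure from the video or this thread.

```python
# Back-of-the-envelope 12VHPWR math: 600 W delivered through six 12 V pins.
# The 9.5 A per-pin rating is an assumption (the commonly cited terminal
# rating), not an official spec quote.
BOARD_POWER_W = 600
RAIL_VOLTAGE_V = 12
PINS = 6
PIN_RATING_A = 9.5

total_current_a = BOARD_POWER_W / RAIL_VOLTAGE_V      # 50 A total
balanced_per_pin_a = total_current_a / PINS           # ~8.3 A per pin
headroom = PIN_RATING_A / balanced_per_pin_a          # ~1.14x, if perfectly balanced

print(f"Perfectly balanced: {balanced_per_pin_a:.1f} A/pin "
      f"({headroom:.2f}x of the {PIN_RATING_A} A rating)")

# Without per-pin load balancing nothing forces an even split; if most of the
# current ends up on two pins, each one sees ~25 A -- well past the rating.
worst_case_two_pins_a = total_current_a / 2
print(f"Two-pin worst case: {worst_case_two_pins_a:.1f} A/pin")
```

So even in the best case the margin is only around 14%, which is why an uneven split across the pins heats things up so quickly.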
 
If I understand correctly, the problem is the lack of load balancing across the 6 power pins. The 3090 Ti did balance them (I think it was 2 pins per rail), and for whatever reason Nvidia stopped doing that for the 40/50 series high-end cards. The Asus ROG Astral doesn't balance, but it will alert you to an issue; I don't know how many of those cards have burned power connectors versus others, so I can't say how useful that is.
It seems to be an ongoing problem, and I think both Nvidia and consumers were hoping that with the 50-series cards the problem was behind them, but that's simply not the case.
 
Yeah, I saw that, and there he is acting shocked about his connector burning up after pumping 1,200 watts through a connector that's rated for 600 watts. It's a click-baity YT video.
True. Although, 95%+ of YouTubers are, even admittedly, "satisfying the algorithm" so they can be (unnecessarily) paid.

On the other hand, I think — and agree with some commenters — it's a decent demonstration of how the issue itself has been extremely overdramatized.
Is the 12VHPWR connection, not simply the connector, designed too close to safety margins? Apparently and mathematically, yes.
Is it a widespread, guaranteed problem? Probably (and seemingly) not.
 
Is it a widespread, guaranteed problem? Probably (and seemingly) not.
It's widespread enough that it causes major concern among the video-card-buying public. Granted, YouTubers like JayzTwoCents are partly to blame. They're being helpful by bringing this issue to the front, where Nvidia was more likely trying to bury it, but they're also continuing the discussion just for clicks. Tbh, I'm rather surprised at this point that he's still on the 12VHPWR topic.

it's a decent demonstration of how the issue itself has been extremely overdramatized.
Overdramatized? He's guilty of that himself, specifically by overpowering the card.

I subscribe to Jay and enjoy his videos, but this one, I think, is a bit much.

And even then the card didn't die per se.
From what little I know, how over-engineered a video card is all depends on the board maker; for instance, not all board makers use fuses. Some are more prone to electrical spikes than others, and some use cheaper components. So while the Asus card survived, it may not be indicative of how robust every 50-series card is from every manufacturer.
 
It's widespread enough that it causes major concern among the video-card-buying public. Granted, YouTubers like JayzTwoCents are partly to blame. They're being helpful by bringing this issue to the front, where Nvidia was more likely trying to bury it, but they're also continuing the discussion just for clicks. Tbh, I'm rather surprised at this point that he's still on the 12VHPWR topic.

Overdramatized? He's guilty of that himself, specifically by overpowering the card.

I subscribe to Jay and enjoy his videos, but this one, I think, is a bit much.

From what little I know, how over-engineered a video card is all depends on the board maker; for instance, not all board makers use fuses. Some are more prone to electrical spikes than others, and some use cheaper components. So while the Asus card survived, it may not be indicative of how robust every 50-series card is from every manufacturer.
Yup, and that can all be attributed to how much a card costs versus MSRP.
 