It’s not even close but it does put it firmly in 1050 territory if not slightly ahead. That’s totally fine for a 15” ultrabook. Expecting something with the power of a 1080ti in this chassis right now would be asking for too much.
That's certainly correct.

However, it's only accurate to a limited extent, considering the Pascal series is about to be replaced by the next iteration of Nvidia cards, Turing.
Yes, it will take a couple of months until the mobile chips show up on the market - but still: AMD seems to be consistently a whole generation behind (which, at the current state of affairs, is about two years).

It will be interesting to see a comparison between a Vega 20 MacBook and a 2060 Ti Max-Q Dell XPS. My bet is on the Dell having a 30%+ advantage in GPU power.
 
Bad analogy is bad.

It's more like if in January '19 they decide to introduce a 1 TB option for the iPhone XS, when higher capacity NAND chips become available. That one will obviously be more expensive than the 512 GB model.

Really that's a much better analogy to the MBP Vega situation right now. It's still the same model as before, they just added higher end GPU options (at higher prices) since higher end chips became available.

Nothing to complain or get pissed about. Except if you're that childish.

No, because external SSD storage can always be purchased, and it is quite fast now over either USB-C 3.1 Gen 2 or Thunderbolt 3, and many people ordered machines with 1-2 TB, which is plenty. There is no substitute for a good internal GPU. An eGPU can be added to any of the 2016 and newer models regardless of whether they even have a dGPU, but carrying that around is not practical! External SSDs are small and can be thrown in a backpack or notebook bag.

While I have now gotten over it, the fact is that Apple has never changed the GPU in between two releases. What they released when the original models dropped is what they had through the entire cycle. This is unprecedented and I hope it’s not the new normal.

The fact is that two-three years from now when people want to upgrade and put their machines on eBay those with the 560X will no longer fetch top dollar. People will only pay the extra cash for Vega 16 and 20. In fact a 1TB machine with Vega 20 will probably sell for the same as a 2 TB machine with the 560X. The GPU is that important.

It’s similar to the desirability of machines with 32 GB RAM vs. 16. Two years from now 16 GB will not be very attractive.
That's certainly correct.

However, it's only accurate to a limited extent, considering the Pascal series is about to be replaced by the next iteration of Nvidia cards, Turing.
Yes, it will take a couple of months until the mobile chips show up on the market - but still: AMD seems to be consistently a whole generation behind (which, at the current state of affairs, is about two years).

Definitely. I'm not disputing that, but this is almost as drastic a difference as the hexa-core i7 vs. the Kaby Lake quad-core i7. The 560X was really just a slightly (and I mean slightly) faster 560, which was basically a rebadged 460. These Vega GPUs with HBM2 increase the memory bandwidth from 81 to way over 200 GB/s. More like 240 GB/s, unless I'm mistaken.

What does this mean? Now the 15” MBP surpasses the graphical power of an Xbox One S (Xbox One refresh) and definitely the original 2013 PS4.
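For anyone curious where those bandwidth figures come from: theoretical memory bandwidth is just bus width times per-pin data rate. A quick sketch (the exact per-pin rates below are my assumption for illustration, not official spec sheet figures):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Theoretical memory bandwidth in GB/s: pins * bits-per-pin-per-second / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Radeon Pro 560X: 128-bit GDDR5 at an assumed ~5 Gbps effective per pin
print(mem_bandwidth_gbs(128, 5.0))   # ≈ 80 GB/s, matching the ~81 GB/s figure
# Mobile Vega with HBM2: 1024-bit stack at an assumed ~1.6 Gbps per pin
print(mem_bandwidth_gbs(1024, 1.6))  # ≈ 204.8 GB/s
```

So the jump comes almost entirely from HBM2's very wide 1024-bit interface; even at a much lower per-pin rate than GDDR5, it lands well over 200 GB/s.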
That's certainly correct.

However, it's only accurate to a limited extent, considering the Pascal series is about to be replaced by the next iteration of Nvidia cards, Turing.
Yes, it will take a couple of months until the mobile chips show up on the market - but still: AMD seems to be consistently a whole generation behind (which, at the current state of affairs, is about two years).

It will be interesting to see a comparison between a Vega 20 MacBook and a 2060 Ti Max-Q Dell XPS. My bet is on the Dell having a 30%+ advantage in GPU power.

Certainly. Even the mobile 1070 is much stronger. The RTX generation is like the difference between Sandy & Ivy Bridge and what AMD offered at the time. It's that big of a deal.
 
This is pretty common in the industry, and there is nothing wrong with it. In most of the industry they do not waste time troubleshooting the exact circuit that is bad; it is faster to just start replacing components until the system is fixed. This is also done for large-screen TVs and even in the auto industry. Repair costs can be higher on parts but much lower on labor, so instead of a repair taking 4 to 5 hours, or days, it can take 20 minutes. Logistically this also makes sense at scale. Time is money!
I know that.

I also know that Apple doesn't do component-level troubleshooting/repair outside of their real repair depots. So, Apple saying it was a logic-board replacement (or a display replacement; can't recall exactly) was actually in line with their (and most large OEM) repair procedures.

What was fraudulent was Rossman "magically" zeroing-in on the REALLY RARE (for that basically permanently-connected connector, at least) failure of a "bent-back pin", insinuating that the Apple repair tech would have found it instantly, too IF ONLY THEY WEREN'T INTERESTED IN "HIKING UP" REPAIR CHARGES. Of course that was pure b.s., because, if a typical tech had seen the same computer with the same problem, WITHOUT BEING TIPPED-OFF by the "Reporter" before the cameras rolled, they could EASILY have spent HOURS finally tracing-down that failure.
 
What was fraudulent was Rossman "magically" zeroing-in on the REALLY RARE (for that basically permanently-connected connector, at least) failure of a "bent-back pin", insinuating that the Apple repair tech would have found it instantly, too IF ONLY THEY WEREN'T INTERESTED IN "HIKING UP" REPAIR CHARGES. Of course that was pure b.s., because, if a typical tech had seen the same computer with the same problem, WITHOUT BEING TIPPED-OFF by the "Reporter" before the cameras rolled, they could EASILY have spent HOURS finally tracing-down that failure.

BINGO! These sorts of undercover reports have merit when they’re talking about independent car shops. On cars, it’s very obvious to any competent mechanic what the issue really is. They do this every single day and can detect what the real problem is on any but the most exotic cars.

I like Louis and he does do great work but he definitely has a chip on his shoulder regarding Apple. He makes a living off of Apple repair but not without trashing them left and right.
 
Yes.
There was a Canadian(?) TV expose that was trying to prove that Apple was unnecessarily duping customers into high-cost repairs, when a simple fix was all that was required.

Rossman was supposed to have been an independent service tech that the reporter just-happened to bring a pre-broken MacBook pro to, after Apple had said it needed a logic board replacement.

Rossman takes the laptop and IMMEDIATELY zeros in on a ridiculously rare (in real life) issue with a bent-back connection "finger" on the display connector. The take-home message was that Apple couldn't even be bothered to look for this (really uncommon!) failure, when a "random independent tech" found it in 2 seconds.

The deceit came in because it was OBVIOUS that Rossman was TOLD by the Reporter BEFORE the cameras rolled exactly what the problem was. I have been an electronic tech before, and you simply DON'T find the out-of-the-ordinary failures right away, absent blind luck. How do I know that would be an out-of-the-ordinary failure? Because it was a failure that simply would never happen on a laptop, unless the owner was in the habit of disassembling and reassembling it on a regular basis. That connector gets plugged in at the factory, and pretty much NEVER gets unplugged. So, unless that failure occurred the FIRST time the connector was mated, and then simply didn't "fail" until later (hard to believe if you see the folded-back connector finger), then it just simply wouldn't occur.

THAT's why I say he is a liar and a cheat.
Or he simply has so much experience with problems on those computers, and knows the systems so well, that he can diagnose the problem right away, without trial and error.

It's funny how you claim he is a liar and a cheat when he was not responsible for the TV material.

I thought your post was some sort of trolling, but you are dead serious. And you exactly prove the points he constantly makes about the Apple cult and Apple users.
 
I agree. I'm not happy with the move. I have a 2013 Mac Pro and I'm really pissed about the proprietary connector on the internal SSD and the inability to connect an eGPU. But I ordered the case and will try to go through the TB3-to-TB2 dongle. If it doesn't work, I will use it for the MBP. But that is a cheap move on Apple's side, as you basically can't upgrade the D700.

FYI, I have a TB2 MBP (2014) and just set up my eGPU (Razer Core X with a TB2/TB3 dongle, AMD Vega 64) - it was incredibly easy, and I mean incredibly easy. I ran one script and restarted my computer, and the eGPU is now hot-pluggable; I just eject the GPU the way I would a flash drive when disconnecting. I went with an AMD GPU since drivers are natively available, and the Vega 64 is way more than enough for what I do. If you have questions, shoot me a PM - I'm happy to discuss.
 
I also know that Apple doesn't do component-level troubleshooting/repair outside of their real repair depots. So, Apple saying it was a logic-board replacement (or a display replacement; can't recall exactly) was actually in line with their (and most large OEM) repair procedures.

What was fraudulent was Rossman "magically" zeroing-in on the REALLY RARE (for that basically permanently-connected connector, at least) failure of a "bent-back pin", insinuating that the Apple repair tech would have found it instantly, too IF ONLY THEY WEREN'T INTERESTED IN "HIKING UP" REPAIR CHARGES. Of course that was pure b.s., because, if a typical tech had seen the same computer with the same problem, WITHOUT BEING TIPPED-OFF by the "Reporter" before the cameras rolled, they could EASILY have spent HOURS finally tracing-down that failure.

Willful ignorance isn't an excuse to overcharge customers for repairs. If it was easy for a third party like Rossman to pinpoint and fix the issue, then it should be dead simple for Apple, since they fully own the product life cycle. Even retail stores that don't do component repairs should still have access to a knowledge base of common failures; that is standard procedure across industries. If I take my car to the dealership for a service light and they want me to pay to replace the engine, or the car, without checking the knowledge base or troubleshooting the fault, then they've failed and deserve the bad publicity, and a lawsuit if the practice is rampant.
 
I'm just now hearing about this, but am sorta glad the upgrade isn't TOO spectacular.

We aren't talking about Nvidia, and we aren't talking about 6 or 8 GB of RAM.

Also, I can always attach an eGPU if I REALLY needed the speed.
 
Agreed. On the other hand that would somehow screw up the concept of a mobile computer, wouldn't it?
 
Btw, it actually does not matter if you compare CUDA with Windows or the CPU market situation.
Point being that Apple certainly does not care at all about the GPU market. They did not care about the CPU market either, so why should they? Apple is trying to make money, looooots of money. Maybe they even care about their customers, trying to create better products (as they tend to claim). But no, they certainly have no desire to act as saviors of the GPU market; even the thought seems preposterous (no insult intended). Why would they want to sell inferior products, putting themselves in a bad, bad market position? That's the last thing they'd do.

Whatever the reason is, it seems unreasonable. They intentionally sell inferior products to customers at extremely high prices without good cause. Very, very, very irrational.

It's simple; they don't.

AMD tends to cater to Apple's needs. Nvidia does not; Nvidia likes to be the one in control in any business relationship. AMD is happy to have a big and prestigious customer like Apple.

AMD probably supplies Apple with specifically selected chips that have the best efficiency, probably still at a competitive price. That is how they could fit the fully enabled Polaris 11/21 chip inside the MBP's 35W thermal GPU envelope, while those chips are normally 60+ watts (even as mobile parts). Sure, they're also clocked a little lower, but that alone won't reduce power draw by 40-50 percent...
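The binning argument checks out on the back of a napkin: dynamic power scales roughly with frequency times voltage squared, so a modest clock cut combined with a lower-voltage binned chip compounds quickly. A sketch (the ratios below are illustrative assumptions, not AMD's actual numbers):

```python
def dynamic_power_ratio(freq_ratio, voltage_ratio):
    """Approximate dynamic power scaling: P ~ C * V^2 * f."""
    return freq_ratio * voltage_ratio ** 2

# A 10% clock reduction alone only saves ~10% power:
print(dynamic_power_ratio(0.90, 1.00))  # ≈ 0.90
# The same clock cut on a binned chip that runs ~20% lower voltage:
print(dynamic_power_ratio(0.90, 0.80))  # ≈ 0.58, i.e. roughly 40-45% less power
```

Which is why down-clocking by itself can't explain the 60W-to-35W drop, but cherry-picked low-voltage silicon plausibly can.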

Nvidia doesn't have any chips that offer similar performance in that TDP range, and certainly none that will come close to that new Vega 20 option.

As for the MBP, they'd have to sell an inferior product if they were to go with Nvidia.
 
Nvidia doesn't have any chips that offer similar performance in that TDP range. And certainly none that will come close to that new Vega 20 option..
Nah. There are 1050s and 1060s (and even faster ones). There are no benchmarks yet, so there is no dependable data so far. However, it is to be expected that the Vegas are at best on par, with Nvidia more likely still ahead.
Anyway, we are comparing newly introduced AMD parts to outgoing Nvidia parts here.
A fair comparison will be Vega 20 vs. 2060 Ti Max-Q. Given the current situation, it is not too difficult to predict that this comparison will not turn out too favourably for AMD. And yes, both in terms of compute AND power consumption (the latter not exactly an AMD stronghold).
Apart from that, CUDA is still THE de facto standard in AI. Not offering a CUDA option whatsoever means Apple does not offer any product at all to researchers/practitioners in the field - a field that is expected to define the future.

As for the MBP, they'd have to sell an inferior product if they were to go with Nvidia.

Hence, this statement is simply wrong.
 
Nah. There are 1050s and 1060s. There are no benchmarks yet, so there is no dependable data so far. However, it is to be expected that those are at best on par, with Nvidia ahead.
Anyway, we are comparing newly introduced AMD parts to outgoing Nvidia parts here.
A fair comparison will be Vega 20 vs. 2060 Ti Max-Q. Given the current situation, it is not too difficult to predict that this comparison will not turn out too favourably for AMD.
Apart from that, CUDA is still THE de facto standard in AI. Not offering a CUDA option whatsoever means Apple does not offer any product at all to researchers/practitioners in the field - a field that is expected to define the future.



Hence, this statement is simply wrong.
No. Neither the GTX 1050, the GTX 1050 Ti, nor the GTX 1060 fits a 35W TDP in its mobile version.

Vega Pro 20 will be 10-15% slower than the GTX 1060 Max-Q while fitting in a 35W TDP. Vega 16 will be on the same level as, or between, the GTX 1050 and 1050 Ti in performance, while consuming half as much power (the GTX 1060 Max-Q is 80W TDP, the GTX 1050 is 60W, and the GTX 1050 Ti is 75W).
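Taking those figures at face value, the efficiency gap is easy to quantify: even at 10-15% lower absolute performance, the much lower TDP puts the perf-per-watt roughly 2x ahead. A quick sketch (numbers are the estimates from this post, not measured benchmarks):

```python
def perf_per_watt_ratio(rel_perf_a, tdp_a, rel_perf_b, tdp_b):
    """How many times better chip A's performance-per-watt is than chip B's."""
    return (rel_perf_a / tdp_a) / (rel_perf_b / tdp_b)

# Vega Pro 20 at ~0.875x the performance of a GTX 1060 Max-Q, 35 W vs 80 W:
print(perf_per_watt_ratio(0.875, 35, 1.0, 80))  # ≈ 2.0x the perf/W
```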


The GTX 2060 Ti will be too big and too power hungry to fit in the MBP if it is based on the TU106 chip. Turing GPUs consume more power than Pascal GPUs in the same class, so I have no idea where you get the belief that they will be more efficient than Vega Pro 20.


No, CUDA is not the standard in AI. ROCm is emerging as a viable, open-source, open-platform alternative for AI that is a genuine competitor.

Maybe you should consider being more flexible? Only you are responsible for locking yourself to CUDA solutions.
 
That depends on what you consider worthless. Look at this: a 15" 2017 with 2.8/560/1TB and 49 cycles sold for $1200. It cost $3300, and as much as $3600 with tax. That is pretty worthless to me after a year or less of ownership. It will be even worse for 2016s and 2017s after these new Vega machines hit eBay. Much of this has to do with Coffee Lake, and it's not Apple's fault that it was the most significant upgrade since Sandy and Ivy Bridge over the original Core.
https://www.ebay.com/itm/2017-MacBo...sid=p2349624.m43663.l10137#vi__app-cvip-panel

In fact, a 13" 2017 base MBP or Touch Bar fetches just a couple hundred less than a machine that was twice as expensive. This guy had debts to pay, but buying this 15" MBP was a terrible financial decision for him.

If someone's selling it for that price, it's really his own fault. Even twice as much would be a bargain for that config.
 
Well, the 1050 Max-Q is rated at 34-40 watts (https://www.notebookcheck.net/NVIDIA-GeForce-GTX-1050-Max-Q-GPU.277746.0.html).
And no. ROCm is
1 - not released
2 - pretty much AMD-only. Some Intel CPUs are on the supported list. However, there is neither a stable version, nor can any of this compete with CUDA performance-wise.

Don't get me wrong, I'm basically not on nVidias side. I'd love to have a different option. However currently there is none.

3 - Yes, CUDA IS a de facto standard. TensorFlow, e.g., is CUDA-only; OpenCL versions are not available, and won't be in the foreseeable future. There are not even AMD options in cloud computing - the big three (GCP, AWS and Azure) as of today do not offer an AMD option.
 
Yes, CUDA IS a de facto standard. TensorFlow, e.g., is CUDA-only; OpenCL versions are not available, and won't be in the foreseeable future.
I suggest you educate yourself on the latest information on ROCm, especially after 6th November 2018 ;).
 
Ok. Maybe my info regarding releases is a bit dated. :)
However, the rest of what I said is true.

Not to my liking, but that's just how it is. I struggled a lot with this lately, because I looked into options apart from CUDA.

Now, CUDA is great; brilliant, in fact. However, mostly because I develop on a Mac (and deploy on Linux), I was looking for an option that supports this setup - develop on a (recent) Mac, deploy on Linux (perhaps Windows). And I pretty much failed.
Apple supports OpenCL only up to 1.2 (no dynamic parallelism). Nvidia similarly supports OpenCL only up to version 1.2. No AMD GPUs are available on GCP, AWS or Azure (Alibaba actually offers FireGLs).

CUDA and CUDA-supporting libraries (e.g. Thrust, CUB) are also more numerous, more mature, and better supported.

There is just no match for CUDA elsewhere.
 
Ok. Maybe my info regarding releases is a bit dated. :)
However, the rest of what I said is true. Not to my liking, but that's just how it is.
Nope. ROCm very much supports TensorFlow - ROCm 2.0, that is. Maybe you should check what's new? ;)
 
Or he simply has so much experience with problems on those computers, and knows the systems so well, that he can diagnose the problem right away, without trial and error.

It's funny how you claim he is a liar and a cheat when he was not responsible for the TV material.

I thought your post was some sort of trolling, but you are dead serious. And you exactly prove the points he constantly makes about the Apple cult and Apple users.
Nice try; but no.

Unless there is an engineering or manufacturing defect that affects a LOT of units, NO tech, no matter HOW experienced, is going to catch a random, RARE defect as fast as he did. No way.

I've been an electronic bench tech before, and I smelled a rat immediately. Either they edited-out the REAL troubleshooting time, or he was tipped-off. Either way, it was fraudulent.

Oh, and funny, AppleInsider seems to AGREE with me for some reason. I must be in collusion with them. And Apple. Right?

BTW, I JUST saw this article, looking for a link to the original "Expose" one I saw...

https://appleinsider.com/articles/1...olicies-are-abusive-but-proof-falls-far-short
 
Granted, I didn't have that one on the radar.
Still, performance per watt isn't quite up there with the Pro 560X, and it certainly doesn't come near the new Vega options.

When the current MBP design was introduced back in 2016, Apple really had no choice other than going with Polaris. Back then, performance per watt of the chips they got from AMD was far above anything Nvidia had to offer.

And with those mobile Vega chips now, it seems that choice holds up pretty well.
 
Willful ignorance isn't an excuse to overcharge customers for repairs. If it was easy for a third party like Rossman to pinpoint and fix the issue, then it should be dead simple for Apple, since they fully own the product life cycle. Even retail stores that don't do component repairs should still have access to a knowledge base of common failures; that is standard procedure across industries. If I take my car to the dealership for a service light and they want me to pay to replace the engine, or the car, without checking the knowledge base or troubleshooting the fault, then they've failed and deserve the bad publicity, and a lawsuit if the practice is rampant.
Bzzt! Sorry!

It seems AppleInsider ALSO smelled a rat.

I JUST ran across this article while looking for the original "Expose" report I saw.

https://appleinsider.com/articles/1...olicies-are-abusive-but-proof-falls-far-short

But I guess that AppleInsider and I are just paid Apple shills, right?
 
For now: agreed.
Things may change, however, once Turing is available
It won't change. Turing has not pushed the performance-per-watt boundary at all. Nvidia once again burned quite a lot of transistors on physical design to push boost clocks over 2 GHz. We are talking about a 1024-CUDA-core GPU with AT BEST the same clock speed as Vega (1.3 GHz), but in a 60W TDP due to GDDR5/6 memory. Nothing will shift here.
 
Nope. ROCm supports TensorFLOW very much. ROCm 2.0, that is. Maybe you should check, whats new? ;)
Hm. Curious. The official TensorFlow GitHub states that TensorFlow is available for CUDA-enabled GPUs, with no mention of AMD or ROCm. I know they have had unofficial builds for quite some time, but none of these were considered stable.

The question is: is the standard tensorflow-gpu install ROCm-enabled?
 