Apple finally has software that diminishes the advantage Nvidia has always had with CUDA. Now the situation is as it always should have been: raw compute horsepower should win.

At that point, following on from my prior post, do you believe Apple is slowly moving towards their own GPU? With enough of a framework in place and enough integration, they will likely be able to produce something. There will be a transition, and I am sure gaming won't be discounted; things that don't use Metal will inherit it by virtue of backwards compatibility.
 
Yeah, Metal sounds great, but raw GPU power is still going to be the deciding factor for any speed increase: the best, cleanest Metal code will still be slow as molasses on a slow GPU. The CUDA farms I worked on for Maya, Maxwell, and 3ds Max rendering are so blazingly fast that even sloppy code couldn't slow them down.

CUDA rules supreme when it comes to 3D GPU rendering. Whether you use Redshift, V-Ray GPU RT, or Octane, you need an Nvidia card, no questions asked. Several of the 3D GPU render engines have talked about adding OpenCL support, but I doubt that ever happens to any meaningful degree. Yeah, Blender uses OpenCL in Cycles, but whatever.

This is a major reason why a lot of people in the 3D industry have a big problem with Apple going AMD. At least all those that need GPU rendering.
 
YES, don't get me started.. :eek::mad::(
 
At that point, following on from my prior post, do you believe Apple is slowly moving towards their own GPU? With enough of a framework in place and enough integration, they will likely be able to produce something. There will be a transition, and I am sure gaming won't be discounted; things that don't use Metal will inherit it by virtue of backwards compatibility.
They already did. Actually, the Radeon Pro 450/455/460 are the child of an Apple and AMD collaboration. Even Lisa Su has said that AMD and Apple co-engineered the GPUs together.

http://semiaccurate.com/2013/01/02/apples-silicon-design-capabilities-increase/
http://semiaccurate.com/2014/03/25/details-apples-gpu-emerge/
http://semiaccurate.com/2013/12/17/apple-samsung-intel-foundry-plans/
http://semiaccurate.com/2013/07/12/apple-has-their-own-fab/
http://semiaccurate.com/2013/08/26/a-third-player-emerges-apples-foundry-plans/
And the last one: http://semiaccurate.com/2013/03/25/what-is-apple-doing-at-14nm/

All of this is behind a paywall, however. If you ask about the future, well, the future is long.
 
At that point, following on from my prior post, do you believe Apple is slowly moving towards their own GPU? With enough of a framework in place and enough integration, they will likely be able to produce something. There will be a transition, and I am sure gaming won't be discounted; things that don't use Metal will inherit it by virtue of backwards compatibility.

I think Apple will abandon OS X on standard PC hardware altogether, so yes, it's a possibility. I think these new laptops with low specs and lame-duck GPUs are to test the waters for when to release a mobile device that replaces the laptop, using their own ARM CPU and GPU.

People have already seen ARM-friendly code in macOS:
http://www.idownloadblog.com/2016/0...-replace-intel-in-macs-with-custom-arm-chips/

Honestly, the next step with such low specs on a laptop is to replace them with the GUTS of the iPad Pro, force everyone to recompile their software for ARM, and move on... Then the Apple MicroVerse will be complete.
Oh, I forgot Maxwell 4 and Iray as CUDA-only GPU renderers too.

So I take it you weren't too happy about those AMDs in the Mac Pro either?! :eek:
AMD hasn't been my friend in a long, long time. I feel like the only reason Apple used them was price; AMD is known to give chips to high-end vendors (Apple, Sony, etc.) at or below cost to keep their brand in the market.

Honestly, Nvidia wants them to limp along so they don't face any antitrust problems from AMD going under.
 
Your whole post is a complete and utter fallacy.

First of all, the ARM chip is driving the Touch Bar; that's why it needs support from the OS. Secondly, AMD is not selling the hardware at a loss. They have other problems, like the Wafer Supply Agreement, which is making them lose money. Finally, they got rid of that to some degree with Polaris and with the latest renegotiation of the WSA.
 
They already did. Actually, the Radeon Pro 450/455/460 are the child of an Apple and AMD collaboration. Even Lisa Su has said that AMD and Apple co-engineered the GPUs together.

Apple has OPTED not to buy AMD. They could have; there is a reason. Apple would have to keep the products that AMD makes (CPUs, GPUs, etc.) on the market, or AMD's stock would plummet. They would have to make and market them; if they decided to just gut the company of what's valuable to Apple (just the GPUs), the value of the company's stock would crash and be worth nothing... and Apple would have paid a premium for nothing.
http://www.fool.com/investing/gener...inc-will-not-acquire-advanced-micro-devi.aspx

All Apple has to do is create their own fabs and GPU pipeline, or wait for AMD to implode. Either way it's a win-win for Apple.

Whatever the reason, Nvidia's stock trades at 10 times AMD's. Intel and Nvidia need AMD to stay alive for antitrust reasons. AMD can try to make a low-watt GPU here and there and convince us they are relevant, but they would be hard pressed to succeed without making jumps in computing power.

http://www.fool.com/investing/general/2016/01/26/can-advanced-micro-devices-inc-survive-in-2016.aspx
 
There has never been a MacBook with 32GB of RAM. Look it up on EveryMac.

There weren't laptops with 32GB of RAM in the past, period. Now there are. The NEW MacBook Pro doesn't have it. Times have changed and the new release hasn't kept up with the industry!

Professional once meant 16GB; now it takes 32GB. No 32GB? Not professional anymore! Got it?

You're just trolling, period, end of story.

You're the one suggesting people are trolling when you can't understand simple facts and are making hyperbolic misstatements of what I've explained to you; I'd say you're the true troll here. I suspect you'll be crying soon enough when your new (OLD) machine is obsolete and Apple releases a 32GB variant very shortly down the line.

Enjoy your new (OLD, non-pro) purchase while you can. You'll regret it not having 32GB, and then all I'll say to you in a year or two is: told you so!
 
Whatever the reason, Nvidia's stock trades at 10 times AMD's. Intel and Nvidia need AMD to stay alive for antitrust reasons. AMD can try to make a low-watt GPU here and there and convince us they are relevant, but they would be hard pressed to succeed without making jumps in computing power.
You are repeating the same logical fallacies over and over again.

For example: on the 28 nm process node, the best Nvidia offered was 6 TFLOPs on the GTX Titan X. The best AMD offered was 8.6 TFLOPs of compute power.

So how come AMD GPUs were slower in compute? Because of the CUDA SOFTWARE! There is nothing magical in Nvidia hardware that makes it faster than AMD's; Nvidia simply has better software. That is exactly the point I was making before, which you appear not to understand.

The best AMD now offers is the 5.8 TFLOPs RX 480. What is the counterpart on the Nvidia side? The 4.4 TFLOPs GTX 1060. How will they behave in Metal applications? Which of the two will be faster here, considering the technology behind Metal?
What is funnier, someone on this very forum compared the RX 480 with the GTX 1070 in compute applications. The result? In the same environment, the GTX 1070 was only about 10% faster. Why? Because it has 6.5 TFLOPs vs. 5.8 TFLOPs on the RX 480. If AMD GPUs were able to run CUDA, you would get exactly the same result, because raw throughput is what sorts Nvidia GPUs into their compute performance brackets.
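For reference, here is a back-of-the-envelope sketch of where those TFLOPs figures come from (peak FP32 = 2 ops per FMA × shader count × clock). The shader counts and boost clocks below are taken from public spec sheets, not from this thread, so treat them as approximate:

```swift
import Foundation

// Peak FP32 throughput: each shader retires 2 ops per clock (one FMA).
func teraflops(shaders: Int, clockMHz: Double) -> Double {
    return 2.0 * Double(shaders) * clockMHz * 1e6 / 1e12
}

let rx480   = teraflops(shaders: 2304, clockMHz: 1266) // ≈ 5.8 TFLOPs
let gtx1060 = teraflops(shaders: 1280, clockMHz: 1708) // ≈ 4.4 TFLOPs
let gtx1070 = teraflops(shaders: 1920, clockMHz: 1683) // ≈ 6.5 TFLOPs

print(String(format: "RX 480 %.1f | GTX 1060 %.1f | GTX 1070 %.1f TFLOPs",
             rx480, gtx1060, gtx1070))
// The on-paper 1070-vs-480 gap is ~11%, close to the ~10% observed above.
print(String(format: "GTX 1070 vs RX 480: +%.0f%% on paper",
             (gtx1070 / rx480 - 1) * 100))
```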

Current information is that AMD will provide Vega 10 with 64 compute units and over 12 TFLOPs of compute power in a 225W thermal envelope. If that rumor comes true, it will be better than anything Nvidia offers.

Ask developers to optimize your software for ALL of the vendors out there. Then cry on forums about how one brand is worse than another.
 
According to the Geekbench OpenCL numbers, there is around a 15.9% performance penalty going from a desktop RX 460 to the Radeon Pro 460.
 
You are repeating the same logical fallacies over and over again.

That is all great, but I'm a front-end user. I can't talk politics and semantics about current hardware when I'm asked to finish a job quickly and efficiently, and I can't cite anything you are saying. You might be right, but that's not my concern; it's an applied-science thing right now, not an absolute.

I wish I could put a sign on our company's door that says: "WE ARE CLOSED UNTIL VENDORS OPTIMIZE THEIR DRIVERS, BECAUSE AMD CHIPS ARE MORE EFFICIENT. THANK YOU."

Right now the MacBook Pro puts a horrible GPU in an expensive laptop, emphasizing slow and only moderately supported OpenCL. It's not anything that can compete with Nvidia and CUDA; it's not even close. As to WHY? I don't really care. You can talk thermal envelopes all you want, but that can't change the fact that it's a horrible solution, and sometimes not a solution at all, for film, TV, and media production.
 
What laptops would you suggest that get the overall balance better than the new MacBook Pros? You're an engineer; give us some specifics.

I'm not looking for balance so much as pure raw power. Balance = compromise. And for raw power, the Razer Blade Pro, for example, ticks all the right boxes.

The new rMBPs are taking a step down the scale towards ultraportability when the highest-end offering should be aimed at desktop replacement (i.e. power > weight). It doesn't have to be at the extreme end of power, but the balance has been totally skewed towards portability instead.

The first-generation rMBP actually is a good example of balance... useful ports, thin and light, but not taken to the extreme of the 2nd-gen release. Many people were perfectly happy with the old form factor; people just wanted better internals and some USB-C ports (but not the removal of everything else).

The Razer Blade is saddled with the 16GB RAM limit, but overall it still has better balance, and it's only a bit thicker/heavier than the new rMBP, with a proper keyboard, more ports, and an Nvidia 1060. So it's not like Apple had to do what they did; they just did it because of a total form-over-function design ethos.
 
That is all great, but I'm a front-end user. I can't talk politics and semantics about current hardware when I'm asked to finish a job quickly and efficiently, and I can't cite anything you are saying. You might be right, but that's not my concern; it's an applied-science thing right now, not an absolute.
So how come every reviewer claims that the MBP is faster than anything on the market?

As a side note, I would love to see direct benchmarks and comparisons between, let's say, the previous-generation GTX 965M in compute applications using CUDA, and the MBP's Radeon Pro 460 using Final Cut Pro X.

Why those two GPUs? Because they have similar compute power.

Back to topic, everyone ;).
According to the Geekbench OpenCL numbers, there is around a 15.9% performance penalty going from a desktop RX 460 to the Radeon Pro 460.
LuxMark 3.0 numbers for the Radeon RX 460 are around 6338 points; for the Radeon Pro 460, around 6014, if I remember correctly.

That's about a 5% difference.
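For what it's worth, that delta is easy to check (a trivial sketch; the scores are the ones quoted just above):

```swift
import Foundation

// How far one benchmark score trails another, as a percentage.
func percentBehind(_ slower: Double, _ faster: Double) -> Double {
    return (faster - slower) / faster * 100
}

// LuxMark 3.0 scores quoted above: desktop RX 460 vs. Radeon Pro 460.
let gap = percentBehind(6014, 6338)
print(String(format: "Radeon Pro 460 trails the desktop RX 460 by %.1f%%", gap))
// Prints ≈ 5.1%, versus the ~15.9% gap in the Geekbench OpenCL numbers.
```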
 
I'm not looking for balance so much as pure raw power. Balance = compromise. And for raw power, the Razer Blade Pro, for example, ticks all the right boxes.

I agree. I'm upset because my whole universe has been Apple, and now I feel abandoned...
Also, another note on RAM limits: Apple put 4GB of RAM in a laptop in 2008, then 16GB in 2010, and then took a six-year break until now.

I don't know about anyone else, but for me the technology jump from 2010 to 2016 is far, far beyond any of the technology jumps from 2006 to 2010.

Look at what I am working on now: larger files, longer rendering times, web browsers and media players that eat RAM like crazy. The jump to 32GB of RAM should have been just as organic and standard as the jumps before. WTF happened?
 
I'm not looking for balance so much as pure raw power. Balance = compromise. And for raw power, the Razer Blade Pro, for example, ticks all the right boxes.

Balance is what sells laptops, because only a tiny minority of users would accept an 8lb+ laptop with perhaps 4 hours (optimistically) of battery life like the RBP. Not to mention what the real-world performance of that GPU is once stressed. That laptop is every bit a compromise as well: a compromise on footprint, a compromise on battery life, a compromise on weight.

The current-gen MBP is actually much, much closer in overall power to a typical desktop of today, with a much more flexible selection of ports than the first gen.
 
What's the point if you shave the weight but then have to carry around a bunch of dongles that add weight back just to get any useful connection out of the current gen? It's got a lot of power, but they still could have skewed the balance towards a better mix of more power (like a 1060 GPU) and a better keyboard, and still appealed to pretty much everyone else.

I know an 8lb laptop won't fly, but take the 4lb compromise and add back the power and mix of ports that make it 4.5lbs; most people won't blink at the difference.

Extremes are bad; they didn't have to make it exactly 4lbs... it's just a dumb design ethos, like I said. And they could have offered a choice: you want battery life? We'll give you this dinky little GPU. You want more? Go ahead, customize and get a 1060, and 32GB of less power-efficient memory instead of 16GB of more power-efficient memory.

It's NOT that hard... unfortunately, Apple just didn't care to even try to strike a better balance, or to allow people to shift/customize the balance.
 
So how come every reviewer claims that the MBP is faster than anything on the market?

It is faster, but it's not "faster" enough. It made a small leap where the technology around us took a bigger leap; Apple didn't take a big enough leap. It leapt for watching Netflix, blogging, and light video editing. It leapt sideways, IMHO.

As a side note, I would love to see direct benchmarks and comparisons between, let's say, the previous-generation GTX 965M in compute applications using CUDA, and the MBP's Radeon Pro 460 using Final Cut Pro X.

In our testing, the 455 was about the same speed as the M370X from the 2015 MacBook Pro, sometimes slower, using OpenCL in Resolve. The 460 was a little faster in Resolve using OpenCL, but not by much, just a little...
What do you want me to test in CUDA? I have 3x 980 Ti in my GPU expander; that's not going to be a good test, since it's really fast.

You want me to test the GTX 965M? From what computer platform? I would imagine the 460 with 4GB is a little faster than, if not the same speed as, the GTX 965M with 2GB just based on specs, but then there is the OpenCL versus CUDA/Nvidia factor to slow it down... It also depends on which laptop it's in, and exactly which 965M it is. If there is a 965M with 4GB of RAM, I would say that is faster than a 460 with 4GB of RAM, which in turn would be faster than a 965M with 2GB of RAM.

If you tell me which laptop you're referring to, I could see if anyone I know out doing remote grading can test it.
 
What's the point if you shave the weight but then have to carry around a bunch of dongles that add weight back just to get any useful connection out of the current gen? It's got a lot of power, but they still could have skewed the balance towards a better mix of more power (like a 1060 GPU) and a better keyboard, and still appealed to pretty much everyone else.

The problem is that a seemingly little change in graphics cards completely shifts the balance. You are talking about a jump from a 35W TDP to ~90W. The Razer Blade, which seems to be what you want, has a real-world battery life of between 3 and 5 hours. That's completely outside the design goals that Apple is clearly pursuing. And the throttling issues with the 1060 in the Razer Blade are well known. https://www.reddit.com/r/razer/comments/5co2f6/razer_blade_1060_performance_issue_and_fix/

As for the 'weight' of those dongles: the vast majority of users will get away with one or two of them, with a total weight of 1-2 ounces. It's not even worth mentioning.
 
There weren't laptops with 32GB of RAM in the past, period. Now there are. The NEW MacBook Pro doesn't have it. Times have changed and the new release hasn't kept up with the industry!

Professional once meant 16GB; now it takes 32GB. No 32GB? Not professional anymore! Got it?
I can buy a laptop with 64GB in it now. So by your reasoning, 32GB is no longer professional?
 
And the throttling issues with the 1060 in the Razer Blade are well known.

As for the 'weight' of those dongles: the vast majority of users will get away with one or two of them, with a total weight of 1-2 ounces. It's not even worth mentioning.

Same thing with MBPs as far as throttling goes; I wonder how much the Radeons are going to throttle, and whether the CPU will do the same. And battery life will be fine; there's still the onboard GPU to fall back on... when you need that extra GPU power, I fully expect people to be plugged in.

Why carry dongles and risk losing them? Most of those users wouldn't even need a dongle if the ports were built in. Add back 1-2 ounces and more ports; it's simply a more elegant and convenient solution. It's just pointless pushing USB-C only this soon; its time is not here yet. The beauty of USB-C is that all you really need is one or at most two USB-C ports and you can get everything hooked up to those; there's no need for four of them (unless you were to stick in a bunch of dongles, which means they should just be other ports).
 
So how come every reviewer claims that the MBP is faster than anything on the market?

Because these reviewers are talking about Final Cut, which is probably the best-optimised app of its kind. OS X (especially with Metal) makes it fairly easy to offload work to the GPU.
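To make that concrete, here is a minimal sketch of a Metal compute dispatch in Swift. The kernel name "double_values" is hypothetical (it assumes a .metal file compiled into the app's default library), and the error handling a real app would need is skipped:

```swift
import Metal

// Grab the default GPU and build a pipeline around a (hypothetical)
// kernel called "double_values" from the app's default Metal library.
let device   = MTLCreateSystemDefaultDevice()!
let queue    = device.makeCommandQueue()!
let library  = device.makeDefaultLibrary()!
let kernel   = library.makeFunction(name: "double_values")!
let pipeline = try! device.makeComputePipelineState(function: kernel)

// A buffer of 1024 floats, shared with the GPU.
var input = [Float](repeating: 1.0, count: 1024)
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: [])!

// Encode one compute pass and hand the work to the GPU.
let commands = queue.makeCommandBuffer()!
let encoder  = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: input.count / 32, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 32, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()
```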
Same thing with MBPs as far as throttling goes; I wonder how much the Radeons are going to throttle, and whether the CPU will do the same.

Are there any tests or reports regarding throttling in the 2016 models?

Add back 1-2 ounces and more ports; it's simply a more elegant and convenient solution.

More convenient, yes, but certainly not more elegant. And you can't achieve it with 1-2 ounces. You need to make the laptop considerably thicker, which obviously means adding more weight.
 
More convenient, yes, but certainly not more elegant.

It's way more elegant than having appendages sticking out (i.e. dongles).

And you can't achieve it with 1-2 ounces. You need to make the laptop considerably thicker, which obviously means adding more weight.

The dongles really would add up to more than 1-2 ounces anyway. So what if it's 0.5lbs heavier... can you even tell the difference? 4 to 4.5lbs is hardly anything.

There's actually such a thing as too thin, IMO; the 12.9" iPad Pro is a prime example. It's already so wide/tall that its thinness actually makes it feel "wrong" in the hands. The 13" being thinner is fine, but the 15" is where you get into this too-thin issue. It's like looking at an anorexic model!
 
Whatever the reason, Nvidia's stock trades at 10 times AMD's. Intel and Nvidia need AMD to stay alive for antitrust reasons. AMD can try to make a low-watt GPU here and there and convince us they are relevant, but they would be hard pressed to succeed without making jumps in computing power.

Lots of reasons to buy Nvidia, and why their stock has shot up 300% this year: great video cards and, more importantly, diversification beyond computers, with strong support for machine learning and AI using CUDA in low-cost supercomputers. Also, new products for self-driving cars and future automation. Glad I bought at the beginning of the year.
 
The dongles really would add up to more than 1-2 ounces anyway. So what if it's 0.5lbs heavier... can you even tell the difference? 4 to 4.5lbs is hardly anything.

The Apple Digital AV adapter, which includes USB-A, HDMI, and pass-through charging, is 1.2 ounces.
 