Intel is slowing down in delivering performance. IBM did it once, and Apple switched to Intel.
Now it's Intel's turn to be dropped, I guess. Sooner or later Apple will come out with some X chips without all the legacy instruction sets that x86 chips carry, or with ARM multiprocessors capable of overcoming the present limits of x86 chips.

What I don't understand is why they are so stubbornly tied to AMD for GPUs, to the point of frustrating users (Adobe shows off how fast Photoshop is on an iPad while it crawls on a Mac), and why they don't at least try Ryzen or EPYC to offer a step up in performance.

What am I missing?
 
If Apple is serious about the next Mac Pro, even if it is slated for next year, they should at least show a prototype before year's end. There is no reason to hide it, since no other workstation manufacturer will bother copying it, no matter how radical it is.
That way, at least, we don't have to waste time waiting if it is another lackluster machine like the trashcan MPtc.
 
Intel is slowing down in delivering performance. IBM did it once, and Apple switched to Intel.
Now it's Intel's turn to be dropped, I guess. Sooner or later Apple will come out with some X chips without all the legacy instruction sets that x86 chips carry, or with ARM multiprocessors capable of overcoming the present limits of x86 chips.

What I don't understand is why they are so stubbornly tied to AMD for GPUs, to the point of frustrating users (Adobe shows off how fast Photoshop is on an iPad while it crawls on a Mac), and why they don't at least try Ryzen or EPYC to offer a step up in performance.

What am I missing?

I don't know for sure, but different leadership and different times play a role.
 
Intel is slowing down in delivering performance. IBM did it once, and Apple switched to Intel.
Now it's Intel's turn to be dropped, I guess. Sooner or later Apple will come out with some X chips without all the legacy instruction sets that x86 chips carry, or with ARM multiprocessors capable of overcoming the present limits of x86 chips.

What I don't understand is why they are so stubbornly tied to AMD for GPUs, to the point of frustrating users (Adobe shows off how fast Photoshop is on an iPad while it crawls on a Mac), and why they don't at least try Ryzen or EPYC to offer a step up in performance.

What am I missing?
It's because AMD GPUs are very well documented, and their drivers are very easy to work with compared to Nvidia's.
AMD is willing to work with Apple, not against Apple. What AMD does not do is provide enough libraries for developers to get already-optimized software: they have to optimize it themselves, for their own software (kinda like Apple, actually: Metal, anyone?), which is the exact opposite of what Nvidia does, and why developers bless Nvidia: complete libraries with already-optimized code.

Also: ask Adobe why their software is not optimized for AMD GPUs.

At least we can hope that the money stream from EPYC, Zen 1, Threadripper, EPYC 2, Zen 2, and Navi (which also appears to be not a bad product at all) will result in better software initiatives from AMD. They have come a long way, but they still have a very long road in front of them.
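To make the library gap concrete, here is a minimal sketch of what "optimize it yourself" means in practice. It assumes a hypothetical "saxpy" kernel (y = a*x + y) compiled into the app's default Metal library; with no vendor-supplied equivalent of Nvidia's prebuilt libraries, the developer writes the kernel, manages the buffers, and tunes the dispatch by hand:

[CODE=swift]
import Metal

// Hand-rolled compute dispatch -- the work a prebuilt vendor library
// would otherwise do for you. Assumes a "saxpy" kernel (y = a*x + y)
// exists in the app's default .metal library.
func saxpy(a: Float, x: [Float], y: inout [Float]) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let library = device.makeDefaultLibrary(),
          let function = library.makeFunction(name: "saxpy"),
          let pipeline = try? device.makeComputePipelineState(function: function),
          let xBuf = device.makeBuffer(bytes: x, length: x.count * MemoryLayout<Float>.stride, options: []),
          let yBuf = device.makeBuffer(bytes: y, length: y.count * MemoryLayout<Float>.stride, options: [])
    else { return }

    var scale = a
    let n = y.count
    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBytes(&scale, length: MemoryLayout<Float>.size, index: 0)
    enc.setBuffer(xBuf, offset: 0, index: 1)
    enc.setBuffer(yBuf, offset: 0, index: 2)

    // Thread-group sizing, memory layout, and tuning are all left to
    // the developer -- this is the per-app optimization work described above.
    let width = min(pipeline.maxTotalThreadsPerThreadgroup, n)
    enc.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()

    // Read the result back from the shared buffer.
    let out = yBuf.contents().bindMemory(to: Float.self, capacity: n)
    y = Array(UnsafeBufferPointer(start: out, count: n))
}
[/CODE]

With CUDA, the equivalent of all of the above (and a tuned kernel) typically ships as a one-line library call; on Metal it is the app developer's job.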
 
Intel is slowing down in delivering performance. IBM did it once, and Apple switched to Intel.
Now it's Intel's turn to be dropped, I guess. Sooner or later Apple will come out with some X chips without all the legacy instruction sets that x86 chips carry, or with ARM multiprocessors capable of overcoming the present limits of x86 chips.

What I don't understand is why they are so stubbornly tied to AMD for GPUs, to the point of frustrating users (Adobe shows off how fast Photoshop is on an iPad while it crawls on a Mac), and why they don't at least try Ryzen or EPYC to offer a step up in performance.

What am I missing?
AMD has not been a trustworthy CPU supplier - and needs to rebuild customer confidence that it can consistently innovate and deliver. Intel has been a trustworthy supplier - although there have been some hiccups on timetables.

AMD's current and upcoming core-count advantages are mostly meaningless - since in the workstation space most applications struggle to utilize more than a handful of cores. (Obviously, some apps do use many cores effectively - but only people who depend on those particular apps will care about high core counts. For the rest, Intel's better per-thread performance is an advantage.)
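As a rough Amdahl's-law illustration of that point (the 90% parallel fraction is an assumed, illustrative number, not a measurement):

\[
S(N) = \frac{1}{(1-p) + p/N}, \qquad p = 0.9:\quad S(8) \approx 4.7,\quad S(32) \approx 7.8,\quad S(\infty) = 10.
\]

So for a 90%-parallel workload, going from 8 to 32 cores buys less than a 2x speedup, and a modest per-thread clock advantage can matter more than doubling the core count.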

In the server space, 32 to 64 to 128 cores are very important - but Apple has explicitly dropped out of the server space.

Switching to ARM desktops and high-end laptops would be a disaster. What if pro app builders (like Adobe) simply say "we're not going to do another architecture port" and drop Apple as a platform?
AMD is willing to work with Apple, not against Apple.
Evidence that Nvidia is working against Apple?
 
Nvidia has been kicked out of the Apple ecosystem for good.

Need more?
Nvidia continues to provide web drivers for Apple systems...

Apple's petty crusade against Nvidia doesn't mean that Nvidia is working against Apple.

And "for good"? Boy will you look silly if the MP7,1 fully supports recent (Pascal/Volta) GEForce and Tesla GPUs.

(And Nvidia support could be like Microsoft's: the base OS and UEFI boot code provide basic support, and you load the web driver for accelerated support. Oh wait, Apple doesn't support UEFI - so that could be a problem.)

PS: Nvidia regularly submits WHQL-certified drivers to Microsoft, so Windows Update will install accelerated drivers. Pulling the latest WHQL driver from Nvidia.com will often give you a newer driver.
 
Nvidia continues to provide web drivers for Apple systems...

Apple's petty crusade against Nvidia doesn't mean that Nvidia is working against Apple.

And "for good"? Boy will you look silly if the MP7,1 fully supports recent (Pascal/Volta) GEForce and Tesla GPUs.
It's good that you used IF there, not WHEN ;).

Forget about Nvidia hardware on Apple computers. They are not coming back in any way, shape, or form for the foreseeable future.

This is what Apple said to Nvidia; ask Jensen!
:D
 
It's good that you used IF there, not WHEN ;).

Forget about Nvidia hardware on Apple computers. They are not coming back in any way, shape, or form for the foreseeable future.

This is what Apple said to Nvidia; ask Jensen!
:D

Apple just needs to provide traditional PCIe slots on the Mac Pro, and it is only a matter of time until they are supported.
 
It's good that you used IF there, not WHEN ;).

Forget about Nvidia hardware on Apple computers. They are not coming back in any way, shape, or form for the foreseeable future.

This is what Apple said to Nvidia; ask Jensen!
:D
Clever clip - but you failed to realize that it's six years old, and that when it was produced Apple was using Nvidia GPUs.

Oops.
 
Clever clip - but you failed to realize that it's six years old, and that when it was produced Apple was using Nvidia GPUs.

Oops.
It does not matter.

Tell me, Aiden: what makes you believe Nvidia will come back to Apple computers, when nothing points to that scenario?
Apple just needs to provide traditional PCIe slots on the Mac Pro, and it is only a matter of time until they are supported.
Are you sure this is what you would get from the Mac Pro? ;)
 
It does not matter.

Tell me, Aiden: what makes you believe Nvidia will come back to Apple computers, when nothing points to that scenario?

Are you sure this is what you would get from the Mac Pro? ;)

Is having the choice of getting Nvidia GPUs in the Mac Pro a bad thing? You know, they do have the superior set of GPU choices, and probably will for the next several years ;)
Do you even care about the Mac Pro, or are you just interested in being an AMD shill?
 
Intel is slowing down in delivering performance. IBM did it once and Apple switched to Intel.
Now it's Intel's turn to be dropped, I guess. Soon or later Apple will come out with some X chips without all the old instructions set that x86 chips come with or ARM multiprocessors capable of overcoming their present limit with x86 chips.

What I don't understand is why are they so stubbornly linked to AMD for the GPUs till the point of frustrating users with Adobe showing how PS is fast on an iPad while it sits down on a Mac, and they don't try at least to use Ryzen or EPYC to offer a step in performance.

What do I miss?


You are missing money. It's how the world works, especially for a trillion-dollar company. If they show a prototype, they will crater sales of the iMac Pro and probably the i9 MBP until 2019 or whenever the MP is released.
 
Is having the choice of getting Nvidia GPUs in the Mac Pro a bad thing? You know, they do have the superior set of GPU choices, and probably will for the next several years ;)
Do you even care about the Mac Pro, or are you just interested in being an AMD shill?
Am I an AMD shill, as diagnosed by an Nvidia shill/fanboy? ;)

You haven't answered the questions I posted.

What makes you believe that Nvidia will come back to the Apple platform in any way, shape, or form, and why do you believe you would get open PCIe slots in the Mac Pro, and not a proprietary connector that carries PCIe, for add-on Apple-designed custom GPUs made by AMD?

That is the exact pattern of recent hardware releases from Apple (everything proprietary; even the eGPU from Blackmagic was designed on Apple's principles, with Apple's help, and uses an AMD GPU).

So, again: what makes you believe that open PCIe slots are what you will get with the next MP, and why do you believe that Nvidia is coming back to Apple computers, when Apple is deliberately doing everything to make sure this does not happen?

And yes, I do care about the Mac Pro. That's why I would want a Zen 2-based AMD CPU in the Mac Pro if Intel does not bring anything new to the table. If they were to release Ice Lake next year, the discussion would be open, but that won't happen.
 
AMD has not been a trustworthy CPU supplier - and needs to rebuild customer confidence that it can consistently innovate and deliver. Intel has been a trustworthy supplier - although there have been some hiccups on timetables.

Over the last 2-3 years Intel has hiccuped about as badly as AMD did 4-6 years ago. And AMD has hiccuped on GPUs (Polaris was slow in coming, and Vega was in the same boat on timeliness). AMD gets those produced in quantity once shipped, but they haven't been the "we are delivering early" vendor for Mac graphics.

So it is more than just hiccups. Frankly, Apple's delays in refactoring the Mac Pro make both of their hiccups look tame.



AMD's current and upcoming core-count advantages are mostly meaningless - since in the workstation space most applications struggle to utilize more than a handful of cores. (Obviously, some apps do use many cores effectively - but only people who depend on those particular apps will care about high core counts. For the rest, Intel's better per-thread performance is an advantage.)


The focus on mega-die combo packages and ultra-sized socket solutions probably misses the point for the Mac Pro (and the rest of the Mac lineup). If AMD can stay a process-node update ahead of Intel, then Intel's single-threaded advantage is going to slide backwards a bit. If it is simply math throughput with mundane branching, then cranking the clocks a bit higher with incrementally improved branch predictors will have a substantive effect. AMD will simply throw more transistors and clock at it.

So if they make their 6-18 core options come within 1-3% of Intel, and they are $100-200 cheaper, and they do custom work for Apple, and it doesn't look like AMD is going to shoot themselves in the foot again, then Apple may bite. That's a number of "ands". It is 'safer' (a known quantity) for Apple to ride with Intel for a while longer.


Switching to ARM desktops and high-end laptops would be a disaster. What if pro app builders (like Adobe) simply say "we're not going to do another architecture port" and drop Apple as a platform?

The bulk of the Intel CPU+GPU packages that Apple buys, though, are in the lower-to-mid laptop space (even the entry-level Mac mini and the 'edu' 21.5" iMac).

If Apple dumped Intel (and AMD) for all of those, that would probably change the dynamic of Apple using AMD for the "rest" of the Mac lineup. I think it would be a very bad idea for Apple to split the Mac lineup across architectures; half-in, half-out would likely cause more problems than it solves. Apple might be able to goose some extra Scrooge McDuck money-pit money out of it by just trimming off the lowest fringe of Macs, but that's largely stock option/grant gyrations.

However, going Rip Van Winkle for 5 years is kind of goofy too. If the Intel bulk discount isn't there, then AMD's better pricing would be a bigger factor. Plus, a lower volume of chips would be needed for the remaining x86 lineup, so AMD's smaller production capacity is also less of a factor.



Evidence that Nvidia is working against Apple?

1. Possible burnt bridge.

https://techcrunch.com/2014/09/04/n...-samsung-galaxy-devices-blocked-from-the-u-s/

Apple had been planning to jump into the GPU market with both feet. At this point they have, and they are shipping in large volume. If Nvidia sent lawyers sniffing around Apple about infringement and Apple's need to hand them a stream of the iPhone revenue, then they probably burned the bridge with dynamite.

This happened in 2014, and 2013 is the last time we saw Nvidia GPUs in a Mac. None since. Coincidence... could be. Maybe not. How many Qualcomm radios are going into this year's iPhones now that Apple and Qualcomm are trading legal blows? Is Intel's radio insanely-great better? No.

Could Apple and Nvidia have worked out some patent détente by 2019? Sure, but it would take two to tango in that process. The Nvidia fanboys who blame the whole absence on Apple are missing the bigger picture.


2. Nvidia kneecapped OpenCL.

AMD, Intel, and Imagination Tech all had OpenCL 1.2 drivers long before Nvidia. Nvidia was still "prepping" OpenCL 2.0 support in 2017 (OpenCL 2.0 was released in 2013). Four years later... that's Apple-Mac-Pro-like development cycle time. If ranting that Apple's slowness is harmful to customers, that kind of delay is in practically the same boat.

If Samsung were 4 years late supplying new OLEDs, they'd be dropped as an Apple supplier. Being grossly late as a software supplier to Apple is not materially different. Being timely on CUDA isn't a value-add as an Apple subcomponent supplier.

Nvidia could have done CUDA and OpenCL (one didn't have to 'lose' for the other to 'win'). About the same time Nvidia got 'sue folks' happy, they also made moves to build a bigger moat around CUDA: slow-rolling OpenCL, chopping the "open compute" parts out of the Portland Group stack after the acquisition. Yes, that is their right, in pursuit of better profits for their own company. However, it doesn't make them a trustworthy Apple subcontractor.

Apple released Metal in 2014 (another coincidence of timing?). It was probably in development before then. There is an unclear chicken-and-egg question here: did Apple start telling the GPU vendors they were bolting from OpenCL (so Nvidia felt justified in back-burnering it as an Apple contractor), or was Apple nudged into Metal in part because the 'committee' shepherding OpenCL was going off track from their perspective? I don't put 100% of the blame on one party, but I highly doubt Nvidia is pristine clean here.

[Apple has a similar sore point. They shouldn't drop all open compute frameworks (or should at least provide hooks for them to plug in cleanly). Similarly, not having an empty secondary standard x16 slot so folks could optionally plug in something for CUDA is shortsighted. Metal should have to compete harder on a mostly level playing field.

P.S. External PCIe enclosures (eGPUs) pragmatically mean that just about all future Macs (those with TBv3, which will form the large base at that point) can have an empty secondary x16 (physical) slot added to them. It isn't like a second GPU is going to be extremely rare. Not mainstream, but they will be around. Keeping an open secondary slot out of a Mac Pro isn't going to 'help' much if it has reasonable size requirements.]
 
2. Nvidia kneecapped OpenCL.

AMD, Intel, and Imagination Tech all had OpenCL 1.2 drivers long before Nvidia. Nvidia was still "prepping" OpenCL 2.0 support in 2017 (OpenCL 2.0 was released in 2013). Four years later... that's Apple-Mac-Pro-like development cycle time. If ranting that Apple's slowness is harmful to customers, that kind of delay is in practically the same boat.

If Samsung were 4 years late supplying new OLEDs, they'd be dropped as an Apple supplier. Being grossly late as a software supplier to Apple is not materially different. Being timely on CUDA isn't a value-add as an Apple subcomponent supplier.

Nvidia could have done CUDA and OpenCL (one didn't have to 'lose' for the other to 'win'). About the same time Nvidia got 'sue folks' happy, they also made moves to build a bigger moat around CUDA: slow-rolling OpenCL, chopping the "open compute" parts out of the Portland Group stack after the acquisition. Yes, that is their right, in pursuit of better profits for their own company. However, it doesn't make them a trustworthy Apple subcontractor.

Apple released Metal in 2014 (another coincidence of timing?). It was probably in development before then. There is an unclear chicken-and-egg question here: did Apple start telling the GPU vendors they were bolting from OpenCL (so Nvidia felt justified in back-burnering it as an Apple contractor), or was Apple nudged into Metal in part because the 'committee' shepherding OpenCL was going off track from their perspective? I don't put 100% of the blame on one party, but I highly doubt Nvidia is pristine clean here.

[Apple has a similar sore point. They shouldn't drop all open compute frameworks (or should at least provide hooks for them to plug in cleanly). Similarly, not having an empty secondary standard x16 slot so folks could optionally plug in something for CUDA is shortsighted. Metal should have to compete harder on a mostly level playing field.]
Metal is based on AMD Mantle. It has exactly the same feature set, and the idea parity is 1:1, but it is much less low-level than Vulkan.

But exactly like Vulkan, Metal combines graphics and compute into a single queue. It also combines the roles of OpenGL and OpenCL in one single, simple API. That is the reason Apple transitioned from OpenCL to Metal: Metal already has OpenCL's functionality in itself.

It is like FreeSync: AMD took an open standard and made it... proprietary, with the twist that AMD controls some of its features and specs. The same goes for Metal: it is based on an open initiative, but with the proprietary twist that Apple controls its development and feature set.
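A minimal sketch of that unified-queue point (the pipeline states and render pass descriptor are assumed to be built elsewhere; this is illustrative, not a complete renderer): in Metal, one command queue, and even one command buffer, can carry a compute pass and a render pass back to back, where OpenGL + OpenCL would have required two APIs and explicit interop.

[CODE=swift]
import Metal

// One MTLCommandQueue carries both compute and graphics work --
// the unified-queue design described above. The pipeline states
// and render pass descriptor are assumed to be set up elsewhere.
func encodeFrame(queue: MTLCommandQueue,
                 computePipeline: MTLComputePipelineState,
                 renderPipeline: MTLRenderPipelineState,
                 renderPass: MTLRenderPassDescriptor,
                 data: MTLBuffer) {
    let cmd = queue.makeCommandBuffer()!

    // Compute pass (the old OpenCL role)...
    let compute = cmd.makeComputeCommandEncoder()!
    compute.setComputePipelineState(computePipeline)
    compute.setBuffer(data, offset: 0, index: 0)
    compute.dispatchThreadgroups(MTLSize(width: 64, height: 1, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
    compute.endEncoding()

    // ...followed by a render pass (the old OpenGL role),
    // in the same command buffer, ordered on the same queue.
    let render = cmd.makeRenderCommandEncoder(descriptor: renderPass)!
    render.setRenderPipelineState(renderPipeline)
    render.setVertexBuffer(data, offset: 0, index: 0)
    render.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    render.endEncoding()

    cmd.commit()
}
[/CODE]

Because both passes are ordered on the same queue, the render pass can consume the compute results without the cross-API synchronization that OpenGL/OpenCL interop required.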
 
Am I an AMD shill, as diagnosed by an Nvidia shill/fanboy? ;)

You haven't answered the questions I posted.

What makes you believe that Nvidia will come back to the Apple platform in any way, shape, or form, and why do you believe you would get open PCIe slots in the Mac Pro, and not a proprietary connector that carries PCIe, for add-on Apple-designed custom GPUs made by AMD?

That is the exact pattern of recent hardware releases from Apple (everything proprietary; even the eGPU from Blackmagic was designed on Apple's principles, with Apple's help, and uses an AMD GPU).

So, again: what makes you believe that open PCIe slots are what you will get with the next MP, and why do you believe that Nvidia is coming back to Apple computers, when Apple is deliberately doing everything to make sure this does not happen?

And yes, I do care about the Mac Pro. That's why I would want a Zen 2-based AMD CPU in the Mac Pro if Intel does not bring anything new to the table. If they were to release Ice Lake next year, the discussion would be open, but that won't happen.

Unlike you, I don't remember posting Nvidia-preaching posts the way you litter the forum with AMD PR slides or SemiAccurate garbage. ;)

Why shouldn't I hope for open PCIe slots, or at least the ability to replace the GPU without much restriction? They specifically said the next Mac Pro will be "modular". That does not guarantee Apple will come out with a traditional design like the 5,1, but at least it gives some reason for hope.

There is no such thing as a permanent marriage or divorce in the tech world. Depending on how Apple would like to position the next Mac Pro, they can always bring back Nvidia and pretend nothing ever happened between them.

If you even care about the Mac Pro, why would you be against having an Nvidia GPU as an option? It would be for much the same reason you want to see Zen 2 in the Mac Pro. Are you that much of a hypocrite?
 
What makes you believe that Nvidia will come back to the Apple platform in any way, shape, or form, and why do you believe you would get open PCIe slots in the Mac Pro, and not a proprietary connector that carries PCIe, for add-on Apple-designed custom GPUs made by AMD?

There is a substantive, reasonable need for a proprietary slot only for the primary display GPU. For a "compute" GPU there is no good, rational reason not to have at least one open slot. If the only connection to the relatively large compute card is PCIe, then a standard PCIe slot would work. There isn't a lot of substantive value in skipping it, given the wide variety of compute cards available.

Unless they are trying to exactly match the Mac Pro 2013 design constraints (which admittedly got them into a 'corner'), they are basically passing up a key differentiator between any new Mac Pro and the iMac Pro.

Cutting off an open x16 slot basically gets in the way of Apple's stated objectives for the Mac Pro. Does that mean the maximum number of standard slots? No. But zero basically contradicts their own statement (unless cracked out on Cupertino Kool-Aid).



That is the exact pattern of recent hardware releases from Apple (everything proprietary; even the eGPU from Blackmagic was designed on Apple's principles, with Apple's help, and uses an AMD GPU).

But that stuff is not the Mac Pro. Again, this largely hinges on whether Apple is making the Mac Pro a literal desktop or will go deskside again. The Blackmagic eGPU has a quiet premium and a minimized desktop footprint because it sits on a desk of limited size (working folks probably have 'stuff' it needs to share the desktop with).

If Apple moves the system farther away and off the desk, there is little reason for them to impose that constraint on the new system. Apple has a literal desktop pro compute solution: the iMac Pro. The pressing question is why they would need two machines that try to fill almost exactly the same role, right down to sitting on the desktop with a minimal footprint.
 
Unlike you, I don't remember posting Nvidia-preaching posts the way you litter the forum with AMD PR slides or SemiAccurate garbage. ;)

Why shouldn't I hope for open PCIe slots, or at least the ability to replace the GPU without much restriction? They specifically said the next Mac Pro will be "modular". That does not guarantee Apple will come out with a traditional design like the 5,1, but at least it gives some reason for hope.

There is no such thing as a permanent marriage or divorce in the tech world. Depending on how Apple would like to position the next Mac Pro, they can always bring back Nvidia and pretend nothing ever happened between them.

If you even care about the Mac Pro, why would you be against having an Nvidia GPU as an option? It would be for much the same reason you want to see Zen 2 in the Mac Pro. Are you that much of a hypocrite?
Unlike you, I do not talk about people, and I do not air my opinions about them. I talk about tech, on every forum and on Twitter.

Why shouldn't you hope for PCIe slots? For a very simple reason: it's Apple you are talking about. Yes, you can expect modularity from the next Mac Pro, but done the "Apple way". Isn't that what has defined Apple computers lately?

Why would I not want an Nvidia GPU in the Mac Pro? Price-to-performance ratio and cost of ownership are the first things that come to mind. Vega in compute is no worse than GP102, but costs less to buy and to implement in Apple computers (it's an SoC with a very small footprint). And if you clock it properly, it's much more efficient than even GP104, because at the same power it delivers higher performance. For example, here is proof: https://www.bitsandchips.it/english...ficient-as-a-gtx-1080-it-s-possible?showall=1 (125W of power consumed while delivering 32k points in LuxMark 3.0 Simple Scene, GPU only; that is 98% of the score of Nvidia's GP100 in the same test, but at 1/20th the cost of the GPU; you can check the scores in the LuxMark render benchmark records table).
 
If Apple moves the system farther away and off the desk, there is little reason for them to impose that constraint on the new system. Apple has a literal desktop pro compute solution: the iMac Pro. The pressing question is why they would need two machines that try to fill almost exactly the same role, right down to sitting on the desktop with a minimal footprint.

Because the iMac "Pro" has the exact same limitations as the trashcan (CPU & GPU are thermally limited), with a side order of screen roulette?

The use cases are different. I'd still have to drop about an additional $1,500 to replace missing functionality that is in my current Mac Pro.
 
.. If they show a prototype, they will crater sales of the iMac Pro and probably the i9 MBP until 2019 or whenever the MP is released.


This appears to carry the implicit presumption that most of the folks buying iMac Pros don't actually want iMac Pros. There isn't much to objectively support that. Apple noted there has been an ongoing trend of folks going from Mac Pro/Power Mac-like systems to iMacs. Not everyone, but a substantive number (as the iMac got more desktop-class CPU and GPU options, the numbers went up). In previous years some folks were buying Mac Pros who didn't exactly want those features; they just needed more than the then-limited, laptop-parts iMac. The integrated screen wasn't a deal breaker.

Those folks aren't necessarily going to pile out of an iMac Pro into a Mac Pro if the iMac Pro is good enough.

Similarly, a substantive number of folks who bought the MP 2013 will move to the iMac Pro even in the context of a new Mac Pro.

The system that will be hurt more is the technically-current MP 2013. Apple is limping along with some jury-rigged inventory scheme (by normal Apple procedures). If the demo is 9 months early, do they kill it and leave a gap?
Those sales are already cratered from 2014 levels, but this would likely be another step-function drop.

The real marketing problem for Apple is not folks not buying the iMac Pro and MBP i9. It is folks circling the airport on 2010-era (about 10-year-old) workstations. If all Apple has is a kind-of-working prototype by June, and Dell/HP/Lenovo spend all of April-May promoting their brand-new, speed-bumped workstations, then Apple is going to have problems with folks who just plain leave the ecosystem. That would be a bigger drop than "lost" (fratricide) iMac Pro sales.

The iMac Pro has the same problem if Apple snoozes through the Intel (and AMD GPU) upgrades while the rest of the workstation market is shipping (dragging the iMac Pro upgrade out into late 2019 as well).

When Apple announced that 10.14 wasn't a dead end for the Mac Pro 2010-2012, there was an uptick in forum activity from folks asking about GPUs and other parts to extend the "circle the airport" time another year. If Apple loses the vast majority of those folks, then the new Mac Pro probably isn't viable.


The other major problem with showing a prototype too early is that it may set the expectation that Apple is almost done: a prototype, and then essentially saying "another 9-11 months". That is also going to cause major problems with a substantive number of those waiting. If Apple shows it about a quarter or so in advance, then the "Osborne" effect on the iMac Pro would be highly limited (if the iMac Pro is on the front end of a refresh cycle, a bumped iMac Pro in Q1 '19, and the Mac Pro maybe slides into 2020, then even less so).
Because the iMac "Pro" has the exact same limitations as the trashcan (CPU & GPU are thermally limited), with a side order of screen roulette?

Apple is going to repeat exactly what they did before, for the set of folks they said they missed? Probably not going to happen.


They have largely already done that, yes. That is perfectly fine, since the MP 2013 wasn't completely without fans, nor has the iMac been a value-proposition failure. Some folks who bought the MP 2013 were relatively OK with it. For most of them the problem was that the machine went stale, not the sub-500W limitations of the power curve. It doesn't work for everyone, but it probably works for enough people for the iMac Pro to be a long-term viable product.

There is another group of folks it didn't work for, which was my point. If Apple saddles the next Mac Pro with almost the same constraints as the iMac Pro (and, mostly, the MP 2013), then they have two products sitting on that first group, which isn't likely large enough to support two Macs. They should try to address a large enough fraction of the other group.

Does that mean every last drop of stuff in a 'filled to the brim' 5,1 container can be thrown into the new Mac Pro? Probably not. What would probably work is some compromise where Apple gets control over some aspects of the new Mac Pro and users get some. A second GPU was a common enough occurrence in current systems that disabling that option would probably mean not covering a large enough fraction of the other group.


The use cases are different. I'd still have to drop about an additional $1,500 to replace missing functionality that is in my current Mac Pro.

Equipment is different a decade later, though. What now amounts to a couple of $50-150 3.5" HDDs, I wouldn't bet on making Apple's top-3, essential-to-cover feature list. Even less so completely empty 5.25" bays with detached SATA cables loose inside.

Some folks aren't going to be able to fit all of their stuff in.
 
Metal is based on AMD Mantle. It has exactly the same feature set, and the idea parity is 1:1, but it is much less low-level than Vulkan.

It's based on Mantle as much as it's based on DirectX 12, which is really to say it's not based on either, but it shares the same concepts everyone in the industry was thinking about at the time. Mantle didn't even exist publicly when Apple was working on Metal (unless they wrote Metal in a month), so I'm not sure how Metal could be based on Mantle.

DirectX 12, Mantle, and Metal all arrived at about the same time, and all of them look kind of similar from a distance, but none of them is based on any of the others. Apple had been releasing HSA architectures for years; they had been working on Metal for a while.
Nvidia has been kicked out of the Apple ecosystem for good.

Need more?

Nvidia isn't shipping any hardware, but they're not kicked out of the software ecosystem.

Any failures Nvidia is encountering with their retail drivers are on them; Apple has nothing to do with that. I'm even pretty sure they'd get eGPU support if their drivers worked well alongside the AMD drivers. And since Apple sells a large number of laptops with AMD cards, that's a problem.

Nvidia has always had software and drivers that aren't great. Someone here already mentioned they dropped the ball on OpenCL 2.0. They always try to force lock-in with proprietary technologies so that they don't have to put in the effort to actually ship stuff that works well.

They're basically the Microsoft of the graphics space: using their position to create proprietary APIs that shut down competition, and generally shipping half-baked stuff.

I get that a lot of people around here have CUDA workflows, but I don't understand why everyone here is so supportive of vendor lock-in.
 
It's based on Mantle as much as it's based on DirectX 12, which is really to say it's not based on either, but it shares the same concepts everyone in the industry was thinking about at the time. Mantle didn't even exist publicly when Apple was working on Metal (unless they wrote Metal in a month), so I'm not sure how Metal could be based on Mantle.

DirectX 12, Mantle, and Metal all arrived at about the same time, and all of them look kind of similar from a distance, but none of them is based on any of the others. Apple had been releasing HSA architectures for years; they had been working on Metal for a while.
And how long did it take Microsoft to implement Mantle on top of DX11.2 to make DX12? ;)
Mantle is just the low-level part of each of the APIs we have in the industry: Metal, Vulkan, DX12.

Here is a direct quote from AMD's blog post about Mantle:
https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle
  1. The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle "1.0" functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.
It means that Mantle is in DX12 and GLnext, which is Vulkan ;). At least, the low-level part. Take Mantle out of DX12 and you end up with DX11.2, which MS was developing at the time; when AMD came along with the Mantle SDK, they implemented it in the API, which created DX12.
Nvidia isn't shipping any hardware, but they're not kicked out of the software ecosystem.

Any failures Nvidia is encountering with their retail drivers are on them; Apple has nothing to do with that. I'm even pretty sure they'd get eGPU support if their drivers worked well alongside the AMD drivers. And since Apple sells a large number of laptops with AMD cards, that's a problem.

Nvidia has always had software and drivers that aren't great. Someone here already mentioned they dropped the ball on OpenCL 2.0. They always try to force lock-in with proprietary technologies so that they don't have to put in the effort to actually ship stuff that works well.

They're basically the Microsoft of the graphics space: using their position to create proprietary APIs that shut down competition, and generally shipping half-baked stuff.

I get that a lot of people around here have CUDA workflows, but I don't understand why everyone here is so supportive of vendor lock-in.
I do agree with you on this, especially the last part. Imagine a situation where AMD comes out with much better hardware, but your software is locked to Nvidia's CUDA. What do you do then? Wait two years for Nvidia to catch up with AMD?
 
Unlike you, I do not talk about people, and I do not air my opinions about them. I talk about tech, on every forum and on Twitter.

Why shouldn't you hope for PCIe slots? For a very simple reason: it's Apple you are talking about. Yes, you can expect modularity from the next Mac Pro, but done the "Apple way". Isn't that what has defined Apple computers lately?

Why would I not want an Nvidia GPU in the Mac Pro? Price-to-performance ratio and cost of ownership are the first things that come to mind. Vega in compute is no worse than GP102, but costs less to buy and to implement in Apple computers (it's an SoC with a very small footprint). And if you clock it properly, it's much more efficient than even GP104, because at the same power it delivers higher performance. For example, here is proof: https://www.bitsandchips.it/english...ficient-as-a-gtx-1080-it-s-possible?showall=1 (125W of power consumed while delivering 32k points in LuxMark 3.0 Simple Scene, GPU only; that is 98% of the score of Nvidia's GP100 in the same test, but at 1/20th the cost of the GPU; you can check the scores in the LuxMark render benchmark records table).

Different workflows demand different configurations. Why are you acting like having an Nvidia GPU in the Mac Pro would be a bad thing? I don't remember saying AMD GPUs should be replaced with Nvidia GPUs. I have always said that on the Mac Pro we should have the option to choose between AMD and Nvidia GPUs, depending on your software of choice.
 
Different workflows demand different configurations. Why are you acting like having an Nvidia GPU in the Mac Pro would be a bad thing? I don't remember saying AMD GPUs should be replaced with Nvidia GPUs. I have always said that on the Mac Pro we should have the option to choose between AMD and Nvidia GPUs, depending on your software of choice.
I always love it when people read far more into my posts than I have actually written. It's not my fault that, in a post about the tech behind this, and about having an open platform that is not locked to one GPU vendor through software, you read me as saying that it is a bad thing to have Nvidia.

It will be great if their hardware proves better than AMD's in a software environment not bloated by proprietary, GPU-vendor-made code, where you are not forced by CUDA to use and buy Nvidia GPUs.

That is the point of my question: what happens when AMD comes out with much better hardware than Nvidia, but your software can ONLY work on CUDA?

Crap yourself, while all the people who were not locked to any GPU vendor by software advance their technology and development?
 