Errr. No. For the primary GPU, yes, it is too slow. But for a supplementary GPU it works. There are already plenty of demos with the iMac Pro that show that it does. Apple did some demos at WWDC also.
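
Rough numbers on why that split makes sense (approximate figures, and assuming the usual ~PCIe 3.0 x4 cap on TB3's PCIe tunnel; in practice the usable tunnel throughput is closer to ~22 Gb/s):

$$
\text{PCIe 3.0 x16 slot} \approx 16 \times 0.985\ \text{GB/s} \approx 15.8\ \text{GB/s}
\qquad\text{vs.}\qquad
\text{TB3 PCIe tunnel} \approx 4 \times 0.985\ \text{GB/s} \approx 3.9\ \text{GB/s}
$$

A primary display GPU pays that roughly 4x link deficit on every frame; a supplementary compute GPU that uploads a dataset once and grinds on it mostly doesn't.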

I'm curious about this, and perhaps this is not the place; but there seems to be some confusion as to how an eGPU will actually work.

I was cruising some reviews of the new Blackmagic box, and it seems as though that unit in particular acts like a 'co-processor', handling compute functions like physics or optimized encoding/decoding or whatever, but will not render any sort of desktop image to a framebuffer and mux it over the TB port back to the original screen.

I really don't trust Apple demos to show us where it will not work the way one might expect. And it seems a lot of reviewers are unsure under which usage cases an eGPU will and will not work.
 
Will we even get PCIe 4.0, or will Intel/AMD just wait for 5.0?

Probably PCI-e v4.0. If you look at the leaked Intel diagram in my post just above yours, there is a color-coded legend at the bottom:
green -- shipping
blue -- in development
purple -- in planning
grey -- directional (meaning not deeply working on it right now; specs are still soft and more research-ish)

Cascade Lake and Ice Lake are "in development". That means the final-specs train left the station a while back; they are trying to implement what they wanted to implement. PCI-e v4.0 probably got looped in for Ice Lake because in late 2016 it was close to final (and went final in 2017), so it was reasonable to have certified test equipment by 2018.

Cooper Lake seems to be a bit of a small 'hack' that will take some of Ice Lake's targeted features and "back port" them into the baseline Skylake/Cascade Lake design. Same Cascade Lake cores (with perhaps some additional fixes), just some 'uncore' adjustments in the memory channel design and perhaps in the PCI-e controller (bump it to PCI-e v4). Plus whatever is needed for the EMIB "glue" between the dies.

If Intel has to compete with AMD Epyc Rome (Zen 2) using Cooper Lake, I can see them pitching "we have faster PCI-e" as an intermediate-term stopgap until they can get Ice Lake out the door. (I suspect that AMD does not have PCI-e v4 since they are sticking with the same socket and also needed to limit hiccups, so they wouldn't have waited around for PCI-e v4 to go final before locking up the Zen 2 design.) [Intel's Ice Lake has taken so many rollout delays that they probably opened it back up to having PCI-e v4 as soon as they knew they were going to slide by a very long time.]

That's why Cooper Lake can be "in planning" so late before launch... it is a relatively small iteration with just a few features.

Additionally, I don't think PCI-e v5 is going to be affordable for the normal workstation mainstream for a while. I think it is largely going to be thrown at cards that are in the "if you have to ask the price, you can't afford it" zone for the first several years. For the most part, confined to server rooms.

(e.g., along the same lines of how 5GbE had to come along for 10GbE to get into the affordable range; 4.0 might play the role of being the "cheaper" option to the high-priced 5.0 cards.)

The only thing with PCIe v4 right now is the latest bleeding-edge IBM POWER.
 
AMD may be able to do PCIe v4/v5 in the same socket.

4.0/5.0 may open the door to the same lane counts but more switches: say, take your CPU's x16 out and use it to drive 2 video cards at x8/x8, or at x16/x16 3.0, off of a 4.0 x16 link.
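
The arithmetic behind that (per direction, approximate, ignoring protocol overhead): a PCIe 4.0 lane runs 16 GT/s vs 8 GT/s for 3.0, so one 4.0 x16 uplink into a switch matches the aggregate bandwidth of two 3.0 x16 downlinks:

$$
\underbrace{16 \times 1.97\ \text{GB/s}}_{\text{PCIe 4.0 x16}\;\approx\;31.5\ \text{GB/s}}
\;\approx\;
2 \times \underbrace{16 \times 0.985\ \text{GB/s}}_{\text{PCIe 3.0 x16}\;\approx\;15.8\ \text{GB/s}}
$$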

PCIe 4.0 will not last long, and do Intel/AMD really want to get locked into it, stock-wise, with a very late rollout?
 
I'm curious about this, and perhaps this is not the place; but there seems to be some confusion as to how an eGPU will actually work.

My comment was mainly directed at the hyper-modularity folks who pitch that the core unit could contain either no GPU or some minimal GPU; that a reasonably sized GPU could be almost completely pushed out to the eGPU (or snap-on) enclosure.

Apps could do a "frame buffer copy". It is just not one of Apple's recommended configurations.

https://developer.apple.com/documen..._components/macos_devices/about_gpu_bandwidth

Not recommended is different from can't do/won't do/not implemented. The recommended configuration is to hang the external monitors directly off the GPU you want to accelerate 3D content on.
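
For the curious, here is a minimal sketch of what a "frame buffer copy" could look like with Metal: render offscreen on the eGPU, blit into a CPU-shared staging buffer, and let the internal GPU re-upload and present it. The device-selection heuristic and the 1080p sizes are illustrative assumptions, not Apple's recommended path (the point of the doc above is that you pay for this round trip over TB on every frame):

```swift
import Metal

// External (Thunderbolt) GPUs report isRemovable == true on macOS.
let devices = MTLCopyAllDevices()
guard let eGPU = devices.first(where: { $0.isRemovable }) else {
    fatalError("no eGPU attached")
}

// Offscreen render target on the eGPU (1080p BGRA, for illustration).
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 1920, height: 1080,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let offscreen = eGPU.makeTexture(descriptor: desc)!

// ... encode the actual render pass into `offscreen` here ...

// Blit the finished frame into a CPU-visible buffer so another GPU can re-upload it.
let bytesPerRow = 1920 * 4
let staging = eGPU.makeBuffer(length: bytesPerRow * 1080, options: .storageModeShared)!
let queue = eGPU.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: offscreen, sourceSlice: 0, sourceLevel: 0,
          sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
          sourceSize: MTLSize(width: 1920, height: 1080, depth: 1),
          to: staging, destinationOffset: 0,
          destinationBytesPerRow: bytesPerRow,
          destinationBytesPerImage: bytesPerRow * 1080)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// staging.contents() now holds the frame; the internal GPU uploads it and draws.
// Doing this per frame is exactly the TB bandwidth hit Apple steers you away from.
```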

I was cruising some reviews of the new Blackmagic box, and it seems as though that unit in particular acts like a 'co-processor', handling compute functions like physics or optimized encoding/decoding or whatever, but will not render any sort of desktop image to a framebuffer and mux it over the TB port back to the original screen.

I think the Blackmagic units are skewed to being more useful as DaVinci accelerator modules than as general purpose eGPUs. They only have one video out (presuming that HDMI is actually hooked to the external GPU's output path and not some DP->HDMI conversion off TB). So they aren't particularly geared to being a GPU with "drawable" resources (frame buffer). It would work in a one-4K-monitor setup, which is reasonable for a MBP, the box (to offload the internal GPU so the CPU clock can be kept highest), and a reference 4K monitor (a viewer representative of what most folks would watch on).

There probably will be folks who hang no monitors off of these if they have more than one connected to a single Mac system.

I really don't trust Apple demos to show us where it will not work the way one might expect. And it seems a lot of reviewers are unsure under which usage cases an eGPU will and will not work.

I don't think the vast majority of software has caught up to what can/can't be done with these. Just getting eGPU connections highly stable across a wide variety of contexts is step zero. Apple needs to finish that off. (There is still a list of "can'ts" with Apple's current support.)
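
That "step zero" plumbing is at least visible from the API side. A hedged sketch of the hot-plug handling an app is expected to do (these are real Metal calls; the print statements stand in for real teardown logic):

```swift
import Metal

// Get the current device list and register for add/remove notifications.
let (devices, observer) = MTLCopyAllDevicesWithObserver { device, notification in
    switch notification {
    case .wasAdded:
        print("eGPU attached: \(device.name)")
    case .removalRequested:
        // User clicked Disconnect in the menu bar: drain queues, release resources.
        print("release \(device.name) now")
    case .wasRemoved:
        // Surprise unplug: anything still living on this device is gone.
        print("eGPU yanked: \(device.name)")
    default:
        break
    }
}
print("GPUs at launch: \(devices.map { $0.name })")
// On shutdown: MTLRemoveDeviceObserver(observer)
```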
 
My comment was mainly directed at the hyper-modularity folks who pitch that the core unit could contain either no GPU or some minimal GPU; that a reasonably sized GPU could be almost completely pushed out to the eGPU (or snap-on) enclosure.

Aye, my query was taking things off on a bit of a tangent.

Good link, appreciated, thanks.

Still seems like a solution that very few people want, to a problem that Apple designed in the first place. And in the end it doesn't even resolve the problem .. yet / maybe / pending Apple 'finishing it off' :p
 
I think the Blackmagic units are skewed to being more useful as DaVinci accelerator modules than as general purpose eGPUs. They only have one video out (presuming that HDMI is actually hooked to the external GPU's output path and not some DP->HDMI conversion off TB).

They have an HDMI port, and a downstream TB port that also supports USB-C monitors. AFAIK compute eGPUs don't use their second TB port to loop back to the CPU for data return, so theoretically it should support 2 displays.
 
They have an HDMI port, and a downstream TB port that also supports USB-C monitors,

The question was more about which GPU is running the output for each of those ports. For the "downstream" TB port that really isn't a question: it is the GPU at the 'head' of the TB stream. That DP output is being offloaded from the Thunderbolt bus. TBv3 controllers have two DP inputs and one DP source. The inputs aren't used on peripherals (only on computer systems).

The question is whether the HDMI is being driven by a DP stream offload or a direct feed from the local GPU. It would be more than sensible, if you have a custom, local GPU in the peripheral, to simply use the local GPU. If it is an "off the shelf" discrete GPU card, then maybe not, because it would need some awkward loop-back cable.

[When 3rd party TB controllers arrive, we may see some TB controllers that are for daisy-chain peripherals only and don't have any DP inputs. For the most part, Intel just makes one kind that is used in both hosts and peripherals. There is another that is for small dongles and single ports.]


AFAIK compute eGPUs don't use their second TB port to loop back to the CPU for data return, so theoretically it should support 2 displays.

TBv3 docks with an HDMI/DP and a second TB port can support two displays. Two isn't at issue. Which GPU is driving them is.

CPU or GPU? The second TB port on a TB peripheral is most certainly connected to the CPU via the TB bus.

As noted above, the DP input ports in a peripheral aren't used. So no, a remote GPU would not be hooked to a TB port directly. If the BMagic GPU is hooked to the HDMI port, then if one monitor is hooked to the TB port and another to the HDMI port, they'll be driven by two different GPUs (the computer system GPU and the remote GPU respectively).
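
You don't need board traces to check this on a live system, either. A small sketch (standard AppKit/Metal calls) that asks macOS which GPU is scanning out each attached display:

```swift
import AppKit
import Metal

// Map each connected display to the Metal device that drives it.
for screen in NSScreen.screens {
    guard let num = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber
    else { continue }
    let displayID = CGDirectDisplayID(truncating: num)
    if let gpu = CGDirectDisplayCopyCurrentMetalDevice(displayID) {
        print("display \(displayID): \(gpu.name), eGPU/removable: \(gpu.isRemovable)")
    }
}
```

Run that with a monitor on the Blackmagic's HDMI and another on its TB port and you'd expect to see two different device names, per the above.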

Found a teardown of the Blackmagic eGPU, and the board traces indicate that the HDMI port is hooked to the GPU on the custom board they built. (TB controller and GPU are all on the same board.)

[ image attached. ]
https://egpu.io/forums/thunderbolt-...-egpu-radeon-pro-580-thunderbolt-3-enclosure/

In the lower right of the board image is the HDMI port, with the 4 pairs of traces coming from the GPU (framed by the metal square on the board) running between the two.

This Blackmagic eGPU isn't purely a compute GPU, but it is heavily skewed that way. (One and only one display can be driven. Even a single-port MacBook can drive two.)
 

The question was more about which GPU is running the output for each of those ports.

I was under the impression that when you select the eGPU as your graphics option, all GPU functions are shifted to it and the internal one is effectively powered down, but maybe that changes with the app-by-app options for tying apps to the eGPU.
 
I was under the impression that when you select the eGPU as your graphics option, all GPU functions are shifted to it and the internal one is effectively powered down, but maybe that changes with the app-by-app options for tying apps to the eGPU.

So I think this is where some of the confusion comes from:

In the MacBook world, where the laptop had an Intel CPU with an embedded GPU plus an embedded third-party mobile GPU, both GPUs' outputs ran to a mux chip and then to the screen, so either GPU could send its output to the same screen. It seems like some reviewers expected similar behaviour from their eGPU boxes when connected to their laptops/iMacs: they would be able to select the external unit and expect accelerated 'drawing' on their primary screen. Which is not the case. Under some circumstances it seems that accelerated 'drawing' was only available with a secondary screen hooked up to the eGPU.

This is moot for a headless Mac Pro with no primary screen.

But it does suggest that the eGPU is meant more as an app-specific co-processor (or potentially an iDockPro ™) rather than providing any update utility down the road. It feels like a lot of folks have this misconception?
 
AMD may be able to do PCIe v4/v5 in the same socket.

4.0/5.0 may open the door to the same lane counts but more switches: say, take your CPU's x16 out and use it to drive 2 video cards at x8/x8, or at x16/x16 3.0, off of a 4.0 x16 link.

PCIe 4.0 will not last long, and do Intel/AMD really want to get locked into it, stock-wise, with a very late rollout?
AMD Epyc Rome is to be socket compatible, and they are planning to have PCIe 4.0, maybe PCIe 5.0.
 
So I think this is where some of the confusion comes from:

It seems like some reviewers expected similar behaviour from their eGPU boxes when connected to their laptops/iMacs: they would be able to select the external unit and expect accelerated 'drawing' on their primary screen.

It certainly seems that in the Windows world, piping graphics processing out to the eGPU and then the resulting display back into the laptop's monitor is a thing, but I'm thinking in terms of a separate display plugged into the eGPU, being driven by the eGPU. Apple's strategy seems to be to allow eGPU, but still ensure the primary obsolescer - the built-in display GPU - is still in the loop.
 
So I think this is where some of the confusion comes from:
.....

Confusion, or folks just stuck in dogma?

This is moot for a headless Mac Pro with no primary screen.

But it does suggest that the eGPU is meant more as an app-specific co-processor (or potentially an iDockPro ™) rather than providing any update utility down the road. It feels like a lot of folks have this misconception?

Update the 'power' being sent to your discrete monitor? Plug it into the new, faster GPU. It requires a small amount of work to move the cable from the 'old' plug to the 'new' plug, but if the monitor is discrete you generally have a choice of where you plug it in.

"Upgrading" the video that is on the Thunderbolt bus is a somewhat different issue. Similar to upgrading the content flow to an (embedded) monitor that the new GPU's outputs aren't physically hooked to at all.
The GPU that has a physical trace coupling to the TB controllers can be an upgradable card. (The MP 2013 had cards that could be replaced on defect. Not casual-end-user friendly, but replaceable.)

If someone thinks Apple isn't going to provide upgrade options for those physical GPU+TB connections, then a Thunderbolt Display docking station wouldn't be the best discrete display to select. For the overwhelming majority of the 3rd party monitor options to select from, attaching to a new GPU later to "upgrade" is not a problem. In the Mac Pro solution space, this isn't a deep-traction issue.

If Apple wants to more highly mingle Mac Pro (with Thunderbolt) users with their TB Display docking stations, then they should be looking to make discrete card upgrades easier. (Not off-the-shelf-from-Fry's easier, but much easier than the last iteration.)

The almost linear performance improvements some apps can get with added eGPUs are app specific. But the general class of solutions that Apple has enabled with the baseline eGPU support is hardly limited to just that. Not having, or not being able to afford, a reasonable discrete monitor is a different issue than the eGPU's general utility scope.
.... but I'm thinking in terms of a separate display plugged into the eGPU, being driven by the eGPU. Apple's strategy seems to be to allow eGPU, but still ensure the primary obsolescer - the built-in display GPU - is still in the loop.

Apple puts no restrictions at all on the eGPU as to whether it provides physical monitor output ports or not (i.e., how many). What DP output data is being distributed on the TB network/bus is a completely different issue.

The built-in embedded GPU is the only GPU physically connected to the built-in/embedded display. How could it not be "in the loop"? It is the only physical connection. That is not an Apple strategy thing. Every vendor who has one connection between those two has the same issue.

What Apple has is not so much a strategy as a graphics stack design that is more rigid than Windows'. Apple writes the "top half" of the OpenGL stack and GPU vendors plug into that at the lower levels. Windows (Microsoft basically punts on OpenGL) has a graphics stack where vendors can plug in their own OpenGL implementations. Windows spun up "plug and play" TB eGPU driver support more easily because there are hooks for virtual graphics drivers. Apple's graphics stack doesn't try to cover everything under the Sun. (E.g., it doesn't support SLI/Crossfire either.)

The strategy is to have a somewhat simpler graphics stack so that they can deliver a more stable one (along with a reasonably scoped development staff). Whether they hit that balance much better than Windows is debatable, but it isn't primarily a hardware lock-in thing. The Windows approach requires more developers, more testing, and far more vectors of potential instability. (Windows grosses more money, so more people and resources are easier to cover. Instability is traded off for holding onto monopoly-sized market share; more people 'covered' just for the sake of more people.)
 
I may be one of the people who is said to be pitching "hyper-modularity", and I don't think we'll see no GPU or a minimal GPU in the base - my best guess is a Vega 56, with an optional 64 (or the next generation of comparable AMD GPUs, or possibly FirePro versions of something similar). Basically iMac Pro level GPUs, probably higher clocked, quite possibly with an option for two of them (single or dual GPU is a perfectly reasonable configuration choice), and with a (slight) possibility of an option for >2...

Where I think Apple is going to disappoint some people is that you won't be able to stick a standard PC GPU in there. MAYBE some closely related Radeons will work, especially models that show up as upgrades from Apple or in refreshes to the Mac Pro (there will probably at least be a way of flashing these to work, if Apple's GPU power delivery is standard). Nvidia cards will only work if hacked, or maybe only under Boot Camp, or maybe not at all.

Apple has two very good reasons for doing this... One is actually beneficial to users - by limiting potential configurations, they minimize instability, and gaming GPUs are a major source of instability on Windows. The second is that Apple likes profits (trillion-dollar companies tend to :) ), and has no interest in losing high-margin sales to commodity hardware.
 
...

Apple has two very good reasons for doing this... One is actually beneficial to users - by limiting potential configurations, they minimize instability, and gaming GPUs are a major source of instability on Windows. The second is that Apple likes profits (trillion-dollar companies tend to :) ), and has no interest in losing high-margin sales to commodity hardware.

I don't believe that a "trillion-dollar company" can't handle a few more configurations.
 
Errr. No. For the primary GPU, yes, it is too slow. But for a supplementary GPU it works. There are already plenty of demos with the iMac Pro that show that it does. Apple did some demos at WWDC also.

Oops, did not know that. I would appreciate some links if you could :)
I was, anyway, referring to the main GPU. That is, the mMP's modularity won't be translated into a box connected to all its main parts with TB3, which is too slow to compete with the iMP.

Err, no. Intel probably has one or two interim bumps for the Xeon W derivatives off of the -SP baseline infrastructure. Scalable has two.

Cool. Thank you. I thought the announcement about the further delay meant that there would have been no more Xeons for more than a year.

I personally think that the mMP could have the same components as the iMP, but it would make sense only if they were released at the same time. Should Apple update the iMP (if ever) only in 2020, while releasing a mMP in 2019 and updating it not before 2021, I would expect interleaving speed bumps with the iMP following the innovation pushed by the mMP.

Another scenario is that in 2019 Apple releases a new iMP and a mMP with the same chip and base GPU, and keeps the two machines in sync. It would cost less and make more sense given Apple's appalling past history on Pro machines... and it would let Apple update the two only when needed.

It would be strange - from a marketing standpoint at least - to present a machine with the same performance as a one-year-old iMP.

my best guess is a Vega 56, with an optional 64

Same video cards? The same CPU even? I would feel ripped off regardless of the higher cooling capacity and some Tx chip with additional features. After all this wait, I hope at least they would offer Vega 2s and updated Xeons.
 
... quite possibly with an option for two of them (single or dual GPU is a perfectly reasonable configuration choice), and with a (slight) possibility of an option for >2...
....
Apple has two very good reasons for doing this... One is actually beneficial to users - by limiting potential configurations, they minimize instability, and gaming GPUs are a major source of instability on Windows.

This is an exceedingly poor reason, not a good one. First, there really isn't a sensible reason to exclude two GPUs as an optional configuration (unless trying to duplicate the iMac Pro as much as possible; it is a beyond-dubious move). Second, the 2nd GPU doesn't necessarily have the functional requirements of the first. Presuming Apple is trying to cleanly integrate the primary display GPU into the Thunderbolt subsystem, that is not a requirement they need to extend to the 2nd "slot" (which could optionally be extended to a 2nd GPU).

If Apple highly integrates the primary "boot" display GPU, then they will have placed a floor under the stability of the overall system. It will support the various boot options. The standard support matrix that Apple chooses to support could be tested with a reasonable matrix of known hardware.

Apple doesn't have to extend out the same level of stability to random stuff that folks plug in with random drivers.

The third major missing point here is that GPU cards are hardly the only major cards that are missing. x4, x8, and x16 cards should all slot into an x16 slot. There is highest-end video/audio capture, top-end storage I/O, higher-end SAN/network (especially if the internal storage capacity upper limit is restricted), and legacy (someone with FireWire demands tied to some expensive external infrastructure they can't let go of). The Mac Pro should leverage the high-end I/O capabilities that the iMac Pro cuts off at the knees (it leaves about x20 lanes of I/O on the 'floor' unused).

Otherwise, Apple would be making a highly redundant product. (It is a good way to kill off both products due to extremely poor differentiation. They won't get around that with application of a reality distortion field.)


To think clearly about this, take the AMD vs Nvidia fanboy drama off the table and look at what the real functional problems are. The functional area the Mac Pro covered expanded past simply CPUs and GPUs. Apple doesn't have to cover all of that ground, but some portion is probably necessary for the product to be viable.



The second is that Apple likes profits (trillion-dollar companies tend to :) ), and has no interest in losing high-margin sales to commodity hardware.

Dead, unsuccessful products don't generate large profits. Two, three, four, or five open PCI-e standard slots won't guarantee that the Mac Pro would be linearly more profitable (more slots, more profits), but zero open PCI-e slots probably will not succeed long term. The vast bulk of the folks "circling the airport" right now won't buy it. If people don't buy your products it is pretty darn hard to make a profit.

You can start hand-waving and say that Apple will just charge the folks who are left more (even fatter margins... boost the Mac Pro tax up to 50-60%). That too will fail. Even the folks who wouldn't mind zero slots will largely bolt at that over the long term. Apple's markup on workstations is already high. Higher still is highly doubtful. At some point the volume will drop so low that Apple will stop making it because "nobody" is buying it.
 
The third major missing point here is that GPU cards are hardly the only major cards that are missing. x4, x8, and x16 cards should all slot into an x16 slot. There is highest-end video/audio capture, top-end storage I/O, higher-end SAN/network (especially if the internal storage capacity upper limit is restricted), and legacy (someone with FireWire demands tied to some expensive external infrastructure they can't let go of). The Mac Pro should leverage the high-end I/O capabilities that the iMac Pro cuts off at the knees (it leaves about x20 lanes of I/O on the 'floor' unused).

Otherwise, Apple would be making a highly redundant product. (It is a good way to kill off both products due to extremely poor differentiation. They won't get around that with application of a reality distortion field.)

...

Dead, unsuccessful products don't generate large profits. Two, three, four, or five open PCI-e standard slots won't guarantee that the Mac Pro would be linearly more profitable (more slots, more profits), but zero open PCI-e slots probably will not succeed long term. The vast bulk of the folks "circling the airport" right now won't buy it. If people don't buy your products it is pretty darn hard to make a profit.

...


Good stuff as usual.

I would argue though that every additional PCIe slot - up to a point - will help maintain and even regain Mac market share, while the opposite will lose more customers and attract no new ones.
Hence I believe more PCIe slots - at least 4 - would significantly increase profit short, mid, and certainly long term.

While many, maybe most, users would not benefit from them (right away), there are whole segments of Mac users who moved to other brands after the tcMP fiasco, and others who would never consider getting into a Mac which has restricted expandability/upgradability.

Same goes for RAM and storage .
 
I wouldn’t be mad if the base MP7,1 is a headless iMac Pro if that means it will start at $3999 or lower.

Why not? The old Mac Pros started at $2,399.00-$2,499.00.

Besides, you can already get the iMac Pro for $4,299.00 from Micro Center.
 
Why not? The old Mac Pros started at $2,399.00-$2,499.00.

Besides, you can already get the iMac Pro for $4,299.00 from Micro Center.


The LG 5K display sells on Apple's website for $1,299. Even shaving $400 off of that, so $4,299 - $899:

$3,400

Drop from 8-core to 6-core (about a $200 drop) and you're in the $3,200 range. Drop the SSD down a bit and it can creep pretty close to $2,999. I suspect that is about as far as Apple is going to limbo down. (They could drop down to a mid-range AMD Polaris used in the iMacs to shave off a bit more. But I have low expectations they are even going to do 3 variants on GPU for this Mac Pro.)
Oops, did not know that. I would appreciate some links if you could :)
I was, anyway, referring to the main GPU. That is, the mMP's modularity won't be translated into a box connected to all its main parts with TB3, which is too slow to compete with the iMP.


Go to about 1:18 into the demo (you can search the transcript for "external" and when you get to the eGPU reference, just click. It will shift the video to that point... or just read and skip the video. :) )

https://developer.apple.com/videos/play/wwdc2018/102/?time=4701

They used multiple eGPUs in the demo. (And yes, the workload is embarrassingly parallel, so it's not too hard.)


Some multiple eGPUs used here with the iMac Pro. Going to one eGPU is a big jump. Adding another, it isn't quite clear where the bottleneck is (i.e., not enough data to split up (and/or software), bottlenecked on RAM, etc.).

https://9to5mac.com/2018/04/19/macos-egpu-performance-test-davinci-resolve-video/

Does this work for all workloads in all contexts? No. But for folks with embarrassingly parallel work that can be sliced up, the eGPUs work. [There is a whole series of eGPU reviews on 9to5mac.com.]
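
For that embarrassingly parallel case, the shape of the code is simple: each GPU gets its own queue, its own buffers, and its own slice of the work, with nothing shared across devices. A sketch under those assumptions (the frame count and the kernel to encode are made up for illustration):

```swift
import Metal

// Split a batch of independent frames evenly across every visible GPU.
let gpus = MTLCopyAllDevices()
let totalFrames = 1200
let framesPerGPU = totalFrames / gpus.count

for (i, gpu) in gpus.enumerated() {
    let queue = gpu.makeCommandQueue()!
    let slice = (i * framesPerGPU)..<((i + 1) * framesPerGPU)
    let cmd = queue.makeCommandBuffer()!
    // ... encode the (hypothetical) per-frame compute kernel over `slice` here;
    // each device compiles its own pipeline and owns its own buffers, which is
    // why scaling stays near-linear until the TB links or RAM become the bottleneck.
    cmd.addCompletedHandler { _ in print("\(gpu.name) finished frames \(slice)") }
    cmd.commit()
}
```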


Cool. Thank you. I thought the announcement about the further delay meant that there would have been no more Xeons for more than a year.

Not going to get core count increases, but there will be new stuff coming that isn't x86-core focused (well, bug fixes in the x86, which could actually get back some of the performance the firmware patches took away).


I personally think that the mMP could have the same components as the iMP, but it would make sense only if they were released at the same time. Should Apple update the iMP (if ever) only in 2020, while releasing a mMP in 2019 and updating it not before 2021, I would expect interleaving speed bumps with the iMP following the innovation pushed by the mMP.

To start off, it makes sense to do them at the same time. After that though, the GPUs (and add-in options) could diverge.
Even if they did the major upgrades jointly, the Mac Pro could get optional GPU add-in options added at intervals different than the iMac Pro's.

If Apple has a limited team to work on both, then leapfrogging would be better than the long "Rip Van Winkle" naps: both in early 2019, then 2020 iMac Pro, 2021 Mac Pro, 2022 iMac Pro....
Consistent effort would help build trust back up.


Another scenario is that in 2019 Apple releases a new iMP and a mMP with the same chip and base GPU, and keeps the two machines in sync. It would cost less and make more sense given Apple's appalling past history on Pro machines... and it would let Apple update the two only when needed.

Too often over the last decade, Apple has turned "when needed" into "when we have copious spare time from what we primarily work on". Being tasked to do something every year would get that excuse out of the 'dog ate my homework' hole it has been in.


It would be strange - from a marketing standpoint at least - to present a machine with the same performance as a one-year-old iMP.

If the power range of the iMac Pro is ~450 W and the new Mac Pro is about ~900 W, then it really shouldn't be about the same performance. (For the completely stripped-down entry models perhaps, but up in the middle to upper ranges the type of workloads covered should diverge substantively.)


Same video cards? The same CPU even? I would feel ripped off regardless of the higher cooling capacity and some Tx chip with additional features. After all this wait, I hope at least they would offer Vega 2s and updated Xeons.


I doubt Vega 2 is going to be appropriate for the Mac Pro, at least as the primary GPU display video card. Most things so far indicate that the card is pointed at something else as a primary workload (as an optional add-in... perhaps over time). The HBM2 capacity seems to be high (i.e., expensive). Throw Apple's 25-32% markup on top of expensive and you end up with too expensive for more than a few folks.


Could this be a bragging top, top, top end card? Perhaps. But it would be extremely dubious to hold up the whole product launch just for that (for an option that most aren't going to buy). Similarly, could Apple get their own semicustom Vega 2 that isn't sky-high priced? Again, it seems a dubious boat anchor to put on the launch. If cards can reasonably be added in later, that could be a config upgrade later in the year.
 
Thanks for all your answers and the links, deconstruct60. :)

I must say I do agree with you on the Vega 2 being sold later... after all these years I realised I had lost track of what "interchangeable GPU" meant :D

If the power range of the iMac Pro is ~450 W and the new Mac Pro is about ~900 W, then it really shouldn't be about the same performance. (For the completely stripped-down entry models perhaps, but up in the middle to upper ranges the type of workloads covered should diverge substantively.)

Sure. You are right. I thought that too... but marketing the computer as faster JUST because of a Tx chip and improved cooling would mean admitting that the other machines are drastically thermally throttled. I don't think that would be Apple's strategy.
Sell it with faster RAM, higher real-world turbo speeds (better cooling), faster buses, better wifi, more expansion options, more ports, and future upgradeability at the same price or lower (excluding the screen, obviously), and I may consider the same hardware one year later. :)
 