*shakes head*
That's what I did while reading the last few pages of this thread. I am by all means just a user, and I cannot comprehend how coupling DP into TB is in any way a debate on a pro workstation if it creates such an engineering challenge. This thing is supposed to be sitting on a desk, and the cabling seldom changes after setup. I don't know how anyone justifies prioritizing that over ensuring enough open PCIe bandwidth is available for adapting to various (3rd-party) use cases.
 
You don't understand how modern GPUs work, period.

I am asking about what I posted in the link: is that wrong? Why are you deflecting attention, twice now, by insisting I don't know how modern GPUs work (and earlier dragging in a CrossFire/SLI context when it was never mentioned)?

And my position is that an NVLink-type interconnect bridge would not only enable faster communication between two GPUs but instantly double their capacity to load geometry/textures into GPU RAM, so that the scene can render instead of failing because the data cannot fit in GPU RAM, or instead of streaming it from system RAM or HDD (which slows the process)... is that wrong too?
 
That's exactly wrong.

PCIe 1.0 and higher is full duplex. The actual speed of PCIe per lane is DOUBLE the rated speed, except half is reserved for down and half is reserved for up.

To pull from Deconstruct's example, x4 PCIe 3.0 is rated at 32 Gb/s, but the actual speed is 64 Gb/s total. Since half is reserved for each direction, the top speed is 32 Gb/s in a single direction.

PCIe reserves signal lines for upstream and downstream and splits them evenly.

Just pop "PCIe bidirectional" into Google and do some reading.

(You should also be able to work out that this is the way PCIe works without doing the reading, because this is the way Thunderbolt works, which is basically PCIe over a cable. Thunderbolt 3's top speed in both directions combined is 80 Gb/s, but the max speed in a single direction is 40 Gb/s.)
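
For anyone who wants to sanity-check the per-direction vs. combined numbers, here is a quick back-of-the-envelope sketch (Python). The per-lane rates are the commonly quoted effective figures, and the helper name is just for illustration, not anything official:

Code:
# Back-of-the-envelope sketch of full-duplex link bandwidth.
# Rates are the commonly quoted effective per-lane figures (per direction).
PCIE_GBPS_PER_LANE = {1: 2.0, 2: 4.0, 3: 7.877}

def link_bandwidth(gen, lanes):
    """Return (one-direction, both-directions-combined) in Gb/s."""
    one_way = PCIE_GBPS_PER_LANE[gen] * lanes
    return one_way, one_way * 2  # full duplex: each direction gets the full rate

print(link_bandwidth(3, 4))   # x4 PCIe 3.0  -> (~31.5, ~63) Gb/s
print(link_bandwidth(3, 16))  # x16 PCIe 3.0 -> (~126, ~252) Gb/s

# Thunderbolt 3 is the same idea: 40 Gb/s each way, 80 Gb/s combined.
tb3_one_way = 40
print(tb3_one_way, tb3_one_way * 2)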

Wikipedia also spells this out pretty darn clearly:
No, you are wrong.

First, you can't take a full-duplex interface and convert it to half duplex unless that is foreseen in its protocol (as with some COM ports); doing so simply breaks the protocol, because every time you send data in one direction you need feedback acknowledging that the data was delivered without errors.

Second, PCIe is FULL DUPLEX over a single copper link: it uses separate signalling to keep both channels active in both directions at the same time, but you can't reverse that signal.


Read page 13 (if you can't understand it, sorry, it is aimed at people with a degree in digital electronics or communications, not DIYers or PC gamers):
https://pcisig.com/sites/default/files/files/PCI_Express_Electrical_Basics.pdf

The point is that you propose something 'possible' for Almighty Apple, when your proposal, besides being technically flawed, is impractical and more expensive than other paths Apple already developed in the tcMP.

I'm not upset about my predictions; I don't know what Apple will sell. I dream I could at least put in an nVidia compute GPGPU (a GPU without frame buffers, useful only for compute), but one thing is sure: Apple can't build an mMP with TB3 and standard PCIe GPUs. The solutions range from commissioning a standard GPU with a new internal DP connector, to recycling the tcMP solution of linking the GPU's PCIe/DP/Fabric to the motherboard using a non-standard solution, proprietary or not. To me one thing is sure: the mMP won't arrive with a standard GPU solution unless Apple ditches TB3, and even TB3 Titan Ridge requires USB-C alt mode to deliver DP 1.4 signals.
I am asking about what I posted in the link: is that wrong? Why are you deflecting attention, twice now, by insisting I don't know how modern GPUs work (and earlier dragging in a CrossFire/SLI context when it was never mentioned)?

And my position is that an NVLink-type interconnect bridge would not only enable faster communication between two GPUs but instantly double their capacity to load geometry/textures into GPU RAM, so that the scene can render instead of failing because the data cannot fit in GPU RAM, or instead of streaming it from system RAM or HDD (which slows the process)... is that wrong too?
Yes, it's wrong.

Let me explain:

Earlier GPUs used dual-port RAM, aka VRAM, where images were rendered: one port was connected to the GPU processors and the other to the DAC, which delivered its output to a display, because the DAC couldn't sync with or interrupt the GPU processors.

Modern GPUs apply the name VRAM to single-port GDDR5 or HBM RAM. The approach now is different: given that modern RAM is fast enough to share its bandwidth among multiple cores without interruptions, the DAC part now works as just another core concurrently sharing the GPU RAM, but it targets only a small block of RAM named the FRAME BUFFER. There are even some GPUs without such a DAC that can't be used to connect displays but can still be used to render graphics into another GPU's frame buffer.

What NVLink and Infinity Fabric do is interconnect the GPUs' system buses at least twice as fast as PCIe normally does (the same function is intrinsic in PCIe-only GPUs).

Meanwhile, CrossFire/SLI were aimed, at the beginning, only at linking VRAM (the old VRAM) or just syncing the frame buffers.

Given PCIe 3 performance, NVLink-like solutions only pay dividends on deployments with more than two GPUs, for those algorithms that require full access to the whole shared VRAM/SRAM. But that is only for compute; NVLink won't allow you to render higher resolutions. That is solely a function of the GPU's DAC and its DP bus: given that an 8K display requires about 25 MB of actual RAM space, the only restriction on rendering bigger images is the DAC-DP part, not the VRAM interconnect.
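
For readers who want to plug in their own numbers, here is a rough sketch of how a single frame buffer is sized (a sketch only, assuming simple uncompressed pixel formats; real VRAM use also depends on double/triple buffering and driver overhead, and the function name is just illustrative). The general point stands either way: even the biggest figure is tiny next to a modern card's total VRAM:

Code:
# Rough single frame-buffer size estimate (sketch only).
def framebuffer_mib(width, height, bytes_per_pixel=4):
    """Size of one frame buffer in MiB for the given resolution."""
    return width * height * bytes_per_pixel / (1024 * 1024)

print(framebuffer_mib(3840, 2160))      # 4K UHD, 32-bit color -> ~31.6 MiB
print(framebuffer_mib(7680, 4320))      # 8K UHD, 32-bit color -> ~126.6 MiB
print(framebuffer_mib(8192, 4320, 2))   # 8K x 4K, 16-bit      -> ~67.5 MiB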
 
Given PCIe 3 performance, NVLink-like solutions only pay dividends on deployments with more than two GPUs, for those algorithms that require full access to the whole shared VRAM/SRAM. But that is only for compute; NVLink won't allow you to render higher resolutions. That is solely a function of the GPU's DAC and its DP bus: given that an 8K display requires about 25 MB of actual RAM space, the only restriction on rendering bigger images is the DAC-DP part, not the VRAM interconnect.

What are you talking about? Why are you conflating display-related GPU frame buffers with non-display-related rendering? Why are you suggesting that the developers over at Chaos Group have no idea about NVLink, and that sharing memory across two GPUs via NVLink to fit the geometry and textures to render (which the renderer otherwise cannot do, unless it has some feature whereby it streams the data from system RAM or HDDs, which in turn slows down the rendering process, which in turn defeats the purpose of using GPUs to render) is incorrect?
 
What are you talking about? Why are you conflating display-related GPU frame buffers with non-display-related rendering? Why are you suggesting that the developers over at Chaos Group have no idea about NVLink, and that sharing memory across two GPUs via NVLink to fit the geometry and textures to render (which the renderer otherwise cannot do, unless it has some feature whereby it streams the data from system RAM or HDDs, which in turn slows down the rendering process, which in turn defeats the purpose of using GPUs to render) is incorrect?
Rendering is not the same as displaying; to render you may need to cache data using all the VRAM, but you only draw the frame buffer.

The Pascal GP100 GPU has a memory bandwidth close to, or barely over, what x8 PCIe 3 allows, so two GP100s can operate without bottlenecks and without NVLink, sharing the whole 32 GB of VRAM across all their cores and frame buffers. If you want 4 GPUs, then go for NVLink or Infinity Fabric. Anyway, you don't even need a second GPU to render an 8K×4K frame buffer unless you need to cache textures/data. Meanwhile, AMD is set to deliver PRO GPUs with integrated M.2 NVMe slots to offload texture data flow (or any kind of data, given GPUs are now also used for compute applications) from PCIe 3/Infinity Fabric.
 
Rendering is not the same as displaying; to render you may need to cache data using all the VRAM, but you only draw the frame buffer.

The Pascal GP100 GPU has a memory bandwidth close to, or barely over, what x8 PCIe 3 allows, so two GP100s can operate without bottlenecks and without NVLink, sharing the whole 32 GB of VRAM across all their cores and frame buffers. If you want 4 GPUs, then go for NVLink or Infinity Fabric. Anyway, you don't even need a second GPU to render an 8K×4K frame buffer unless you need to cache textures/data. Meanwhile, AMD is set to deliver PRO GPUs with integrated M.2 NVMe slots to offload texture data flow (or any kind of data, given GPUs are now also used for compute applications) from PCIe 3/Infinity Fabric.

Which is what I was talking about all along. Why did you even throw SLI/CrossFire into the mix, or suggest the Chaos Group developers were wrong, or that I have no idea how modern GPUs work? A whole lot of argument over unrelated points that were never suggested.
 
I don't know what Apple will sell. I dream I could at least put in an nVidia compute GPGPU (a GPU without frame buffers, useful only for compute), but one thing is sure: Apple can't build an mMP with TB3 and standard PCIe GPUs. The solutions range from commissioning a standard GPU with a new internal DP connector, to recycling the tcMP solution of linking the GPU's PCIe/DP/Fabric to the motherboard using a non-standard solution, proprietary or not. To me one thing is sure: the mMP won't arrive with a standard GPU solution unless Apple ditches TB3, and even TB3 Titan Ridge requires USB-C alt mode to deliver DP 1.4 signals.
So your belief that the mMP won't get a standard PCIe slot is based on the assumption that Apple won't ditch Thunderbolt (3), and that the DP coupling issue cannot be easily solved by a non-proprietary method?
--------
I genuinely want to be educated;
aside from looking butt ugly, is there any technical difficulty or disadvantage in the approach below:
[Image: Thunderbolt-3-AIC-Install-step3-1.jpg]
 
No more Thunderbolt-port-equipped monitors: problem solved.
DisplayPort, HDMI (latest versions) and/or USB Type-C should suffice, no?
Just abandon any further effort toward shunting video to a display device via Thunderbolt, sort of like how FireWire was abandoned. Keep it for eGPU, external storage devices, etc.
And: allow nVidia back into the "Apple sheep fold" again, as if the "AMD video only" business decision never happened. Let the customer decide which video solution they prefer, using any of the three available brands of industry-standard GPU hardware: nVidia, AMD or Intel.
Agree to let any lawsuit threats between nVidia and Apple fade into the sunset.
The only sticking point being: the geographic location of the "sit down summit" between CEO's of nVidia & Apple.
Maybe: Antarctica?
You're welcome.
 
the DP coupling issue cannot be easily solved by a non-proprietary method?

Yes, it can be solved by non-proprietary methods: creating a new open-source PCIe+DP+Fabric GPU interface, or just releasing to the public domain the one in the tcMP. The good: the old tcMP might get GPU updates. The bad: it would be a GPU-only solution, and likely nobody would release a peripheral for this interface.
 
That's what I did while reading the last few pages of this thread. I am by all means just a user, and I cannot comprehend how coupling DP into TB is in any way a debate on a pro workstation if it creates such an engineering challenge.

It isn't a technical challenge. Most of this is about being as inexpensive as possible, not technology. [Some of the relatively woefully inaccurate characterizations of PCIe etc., with off-the-wall claims, are honestly more about misdirection than technology anyway.]

Super duper "cheap" (for Apple) option: standard PCIe GPUs, no Thunderbolt (or an add-in card), minimal effort on graphics drivers (if they work from other custom Mac GPU work, great; if not... oh well). If the development of the Mac Pro costs more than $1-2M then it is too painful for Apple, so they'll just pull stuff off the shelf to lower costs.

Non-minimal but limited Apple budget: standard PCIe GPUs, perhaps offset 2" in from the edge to run 1-2 very short (3") loop-back cables inside the Mac Pro, and one (maybe two) TB socket pairs. A bit more than minimal effort on graphics drivers and OS support. Apple picks 1-2 GPU implementations, outside the set of GPUs embedded in the rest of the Macs, to do quality QA and support on. The rest... if it doesn't work, oh well.

Modest Mac Pro development budget: Apple does a relatively modest derivation of a couple of reference cards to re-route the DisplayPort output without "loop back" cables being necessary. It might clean up the power cabling so that you can just insert a card and it works (no cable hook-ups at all). Apple does the work so that Mac Pros 3-4 years back from the latest card are also covered by the support matrix (Apple still retires the older add-in cards as newer ones in the same class come out).


Note the differences here are really about the R&D and support-validation (and QA) money spent: super cheap, cheap, and enough money to demonstrate putting in reasonable, sustained effort every year.
 
If the development of the Mac Pro costs more than $1-2M then it is too painful for Apple, so they'll just pull stuff off the shelf to lower costs.

They'll burn through way more than $1-2M bringing the next Mac Pro to market (or any redesigned computer). Multiply by 10. Development costs include everyone's salary working on the project. Tooling, manufacturing. It's enormously expensive unless you're just swapping boards in a recycled square box.
 
They'll burn through way more than $1-2M bringing the next Mac Pro to market (or any redesigned computer). Multiply by 10. Development costs include everyone's salary working on the project. Tooling, manufacturing. It's enormously expensive unless you're just swapping boards in a recycled square box.

I only used $1-2M because of the "chicken little" posts here about how the sky is falling on Mac Pro R&D if a couple of QA folks have an adjusted overhead cost of $200K apiece.

Even multiplying by 10 still isn't out of "chicken little" status. I ran some quick spreadsheet numbers.

Code:
avg sell price   number/year   25% markup of total   50% of net markup
3,299            20,000        16,495,000.00          8,247,500.00
3,299            30,000        24,742,500.00         12,371,250.00
3,299            40,000        32,990,000.00         16,495,000.00
3,299            50,000        41,237,500.00         20,618,750.00

4,999            20,000        24,995,000.00         12,497,500.00
4,999            30,000        37,492,500.00         18,746,250.00
4,999            40,000        49,990,000.00         24,995,000.00
4,999            50,000        62,487,500.00         31,243,750.00
4,999            60,000        74,985,000.00         37,492,500.00


Let's say Apple sold Mac Pros at an average selling price of $3,299 and iMac Pros at an average selling price of $4,999. The latter is the entry-level price, so the average is quite likely higher (making this a conservative estimate). Hopefully $3,299 wouldn't be the base entry price (perhaps that stays at the $2,999 of the previous iteration, blended with some much higher configurations), so the MP at $3,299 is also still a conservative estimate.

So instead of sending that 50% of the net markup off to the Scrooge McDuck money pit so that Apple can wallow in even more money it isn't going to spend (Apple's markup is more in the 30% range, but I'm leaving some slop on the table here for returns and other overhead), the plan would be to invest "desktop" Pro profits back into the business for 4 years, in part to demonstrate Apple actually is truly committed to the business.

Say the initial run rate for the iMac Pro was 40K in year one and it settled back to 20K in year two, and the Mac Pro used mostly the same stuff and had an initial run rate of 50K in year one, settling back to 30K in year two. Let's say each is updated every other year, so the initial bump years keep alternating.

So
2018 (de facto just iMac Pro) ~ $24M
2019 (new Mac Pro, older iMac Pro) ~ $32M (20 + 12)
2020 (new iMac Pro, older Mac Pro) ~ $36M (24 + 12)
2021 ~ $32M
2022 ~ $36M

Even if it cost $20M per year to stamp out a new top-line "Pro" each year, this is a doable run rate. It is just a matter of greed.
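
For the curious, here is a quick sketch that reproduces the rough math above (25% markup on revenue, half of it notionally reinvested) using the same run-rate guesses; the function name and layout are illustrative only:

Code:
# Sketch of the rough math above: 25% markup, half of it reinvested.
def reinvest_millions(avg_price, units_per_year):
    return avg_price * units_per_year * 0.25 * 0.50 / 1e6

MAC_PRO, IMAC_PRO = 3299, 4999

# (iMac Pro units, Mac Pro units) per year, alternating "bump" years.
years = {
    2018: (40_000,      0),   # de facto just iMac Pro
    2019: (20_000, 50_000),   # new Mac Pro, older iMac Pro
    2020: (40_000, 30_000),   # new iMac Pro, older Mac Pro
    2021: (20_000, 50_000),
    2022: (40_000, 30_000),
}

for year, (imp_units, mp_units) in years.items():
    total = reinvest_millions(IMAC_PRO, imp_units) + reinvest_millions(MAC_PRO, mp_units)
    print(year, f"~${total:.0f}M")
# -> roughly $25M / $33M / $37M alternating, in the same ballpark as the
#    rounded-down figures above.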

Apple could do a $0.04 charge-back on all Thunderbolt Macs to cover eGPU driver work. At a run rate of about 10M TB-capable Macs per year, that would be $400K, which is enough to have folks validate whatever card Apple might make that could also be sold into an eGPU.

Is there enough for Apple to do 3-4 cards from two different GPU architectures? No. But the iMac Pro cards could be moved up to the Mac Pro and vice versa (Mac Pro cards moved down to the iMac Pro), basically doing substantive cost sharing across the two models.

Even if both the iMac Pro and Mac Pro drop down to 20K each per year and flatline... the combined total is still over $20M per year. So if the CPU and GPU vendors have an extra-long crap upgrade cycle, Apple could extend out to a temporary 3-year cycle and still not have a funding collapse.


If they can get up into the range where the iMac Pro and Mac Pro combine for a 100K-per-year run rate, then this $20M (and even $30-40M/year) is even easier.
 
I know it's de rigueur to blame anything people do not like about Apple on Tim Cook, but folks really need to find a new hobby. He worked at Steve's side from almost the day he came back to run Apple to the day he passed so if anyone knew how Steve would have run Apple after 2011, it's him. Hell, we probably have Tim to thank for still having a Mac at all since Steve seemed pretty convinced the iPad was where computing would go and the "truck computer" was an anachronism.

The idea that Tim Cook saying "hey, gay people are human beings" is somehow diverting all his time from managing a company well is... well, dumb as ****, and Mago should think about why he's acting like an idiot for suggesting such.
 
...and all of this for a goal which, at the end of the day, amounts to being able to use one single cable for display and peripheral bus, rather than one cable for each.
...

Thunderbolt doesn't necessarily do that. For a standard 2-port TB controller with the standard provisioning of inputs, a user could plug one cable into port 1 that is an active (or shorter passive) TB cable. The cable plugged into port 2 could be a Type-C-to-DisplayPort cable that runs "pure" DisplayPort to the monitor. Those two cables have Type-C in common just on one end, so they are different cables. The standard TB implementation on a host computer system only "blends" the data onto one cable when the user is trying to push both video and PCIe data down a single stream (one and only one cable).

USB Type-C is one of the standard "display out" ports at this point. Thunderbolt doesn't have to be present. Thunderbolt is typically an indication that both alt-mode Thunderbolt and alt-mode DisplayPort are present on the Type-C port the user is looking at. Most of the new upscale laptops sold in the Windows market are going to line up with the MBPs and offer Type-C display out as a standard port.


As GPUs creep up into the 280-350W range, it is about just as "odd" that 30-50% of the venting edge of the card is clogged up with video ports, reducing airflow. If you have a large cooling problem on the card then what you want to do is maximize airflow, not minimize it. "But you have to put the ports there." Actually, no. That's one reason why DisplayPort went to carrying multiple display streams on a single cable. So if the GPU can support 6 screens, maybe only put 4 ports on the outer edge; 2 of those could each carry another stream if bandwidth allows.
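
To put a rough number on "if bandwidth allows", here is a small sketch comparing a DisplayPort link's effective rate against an uncompressed 4K60 stream (approximate, commonly quoted figures; blanking overhead is hand-waved, and the helper name is just for illustration):

Code:
# Rough check of how many uncompressed display streams fit on one
# DisplayPort link via MST (sketch; blanking approximated at ~7%).
def stream_gbps(width, height, hz, bits_per_pixel=24):
    # Raw pixel rate only; reduced-blanking timings add a few percent.
    return width * height * hz * bits_per_pixel / 1e9

DP_EFFECTIVE_GBPS = {"HBR2": 17.28, "HBR3": 25.92}  # 4 lanes, after 8b/10b

s4k60 = stream_gbps(3840, 2160, 60) * 1.07  # ~12.8 Gb/s with reduced blanking
for rate, gbps in DP_EFFECTIVE_GBPS.items():
    print(rate, int(gbps // s4k60), "x 4K60 streams (approx)")
# -> HBR2 carries ~1 and HBR3 ~2 such streams, which is the sense in which
#    one port "could carry another stream if bandwidth allows".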

A lot of top-end hot cards actually can't be managed with just the back vent and pragmatically have to resort to liquid cooling solutions to push the heat to another outer edge that has a better-oriented fan.

Apple could follow the same trend by pushing 4 DisplayPort streams off the interior edge of the card and perhaps leaving 1-2 HDMI ports hanging off the back edge. That would allow a substantive increase in airflow (cooler with less noise). The mainstream market won't investigate this at all because its "race to the bottom" pricing largely leads to "monkey see, monkey do" adoption of legacy solutions.
 
Thunderbolt doesn't necessarily do that.

No, but in practical terms it is the only thing about Thunderbolt which is novel, aside from putting PCI cards in external, bandwidth-starved chassis, which existed prior anyway (e.g. Magma's solutions).

Apple could follow the same trend by pushing 4 DisplayPort streams off the interior edge of the card and perhaps leaving 1-2 HDMI ports hanging off the back edge. That would allow a substantive increase in airflow (cooler with less noise). The mainstream market won't investigate this at all because its "race to the bottom" pricing largely leads to "monkey see, monkey do" adoption of legacy solutions.

Yes, it's vitally important to change the design and implementation so that they're quieter than the off-the-shelf retail cards currently are... which is already quieter than the ambient noise of an empty, "switched off" building.
 
What if Apple installs standard PCIe GPUs all inside (absolutely no external access to the GPUs)? That would allow plugging the DP output into the TB3 headers and solve the TB3 issue using COTS hardware in an iMac Pro-like sealed Mac; to avoid voiding the warranty you'd have to send it to an Apple Authorized center to upgrade its GPUs with Apple-blessed parts.

This is a dirty possibility I just realized.
 
PC workstation vendors like HP on the Z8 offer Thunderbolt 3 via PCIe expansion cards. To my knowledge they do not require special cabling to support video from their (multiple, optional) video cards. Why can't the Mac Pro do that, rather than have the TB3 ports on the system board, and avoid this hassle? Apple can make the TB3 card standard, or they could offer it as a BTO option, making more money from those who need TB3 and saving money for those who do not.
 
What if Apple installs standard PCIe GPUs all inside (absolutely no external access to the GPUs)? That would allow plugging the DP output into the TB3 headers and solve the TB3 issue using COTS hardware in an iMac Pro-like sealed Mac; to avoid voiding the warranty you'd have to send it to an Apple Authorized center to upgrade its GPUs with Apple-blessed parts.

Waste of time and design effort.

1. If you lock up the whole case so that it is also painful to get to the RAM, storage, and a secondary PCIe slot, then folks aren't going to buy it. You'd be making the system as close to the iMac Pro as possible in that value-proposition dimension. If you render them equal propositions then there is no differentiation.

With no way to do any straightforward upgrades, the folks left over after the iMac Pro aren't going to buy the system.

So getting into the internals stays like the current Mac Pro (RAM and other things are straightforward to get to): the cover comes off, but now there is some locked-down card bar/cover that hides the card (and perhaps all of the significantly increased cabling that fills up the headers).

2. The problem is that this new Mac Pro would need to be assembled. That means it can be disassembled. Tamper tape on somewhat rare Torx screws probably isn't going to be worth it anyway. If it really were a card that Apple could have bought at Fry's/Micro Center/Newegg, then that tape is going to be ridiculed endlessly: Apple is adding no value to the card, but oh, you have to send it to a service center to replace it. Either the card adds substantive value or it doesn't.

If it doesn't then there is little good reason to block folks from getting to it.


[The iMac Pro is a different context, in that the display should be attached to the frame. Without the display, there is nothing here that merits gluing things down.]


3. This is more of a hunch, but the more complicated the contraption you ask the Apple industrial design team to come up with, the longer it will take to get a new Mac Pro. So how much do the case costs (to produce and assemble the finished product) increase, and how far does the extra, probably custom, wiring go toward offsetting the COTS card savings?
 
PC workstation vendors like HP on the Z8 offer Thunderbolt 3 via PCIe expansion cards. To my knowledge they do not require special cabling to support video from their (multiple, optional) video cards. Why can't the Mac Pro do that, rather than have the TB3 ports on the system board, and avoid this hassle? Apple can make the TB3 card standard, or they could offer it as a BTO option, making more money from those who need TB3 and saving money for those who do not.
And they don't do stuff like tie TB bandwidth to HDMI out like the 2013 Mac Pro did.
 
PC workstation vendors like HP on the Z8 offer Thunderbolt 3 via PCIe expansion cards. To my knowledge they do not require special cabling to support video from their (multiple, optional) video cards. Why can't the Mac Pro do that, rather than have the TB3 ports on the system board, and avoid this hassle?

1. It isn't TBv3 ports, plural. The solutions typically serve a single TB port; these cards don't add more than one port.


That is in part because this solution doesn't scale well. The standard TBv3 controller for a host system has two DP inputs. If you have two TBv3 ports out and two mini-DP ports in, the mix of cables coming in/out of the card gets bigger.


2. Because they don't know whether the video card output is in slot 1, 2, 3, 4, etc., the external loop-back cable has to be long enough to reach all of the slots. That is relatively easy for a cable that could also be used to reach a nearby monitor, so the shortest off-the-shelf cable is used.


3. Most of these workstation vendors don't care what it looks like. The hypocrisy here is a bit humorous: there are hundreds of posts in this Mac Pro forum since 2013 wailing that an increase in the number of cables coming out of the Mac Pro is bad, evil, etc. What does this external loop-back system do? ... It increases the number of cables coming out the back of the Mac Pro.

A lot of folks are going to say, "Well, I just won't plug in the loop-back DisplayPort cable, so it won't."

Apple said they were coming back with displays. The Mac Pro is probably at best 1% of sales for Apple. The vast majority of the product line can actually use a TB display docking station. So what is Apple likely to bring back? The 2004-5-era monitors that ignore laptop docking functionality, or a revision of something like the LG models from about a year ago with Type-C DP/TB?

It is probably the latter (new TB monitors). So with the BTO loop-back-cable TB card you'd have to get a TB display and connect up two cables to make it work... plus add the card to the BTO. This is versus ordering the Mac Pro and a TB display and simply hooking them up with one cord. Which one of those two is Apple likely to do?

Apple monitor sales probably won't completely dominate the other monitors attached to Mac Pros, but Apple isn't going to put itself further behind the curve by making the setup substantively worse for those who choose Apple's solution.

The workstation vendors are different. They do sell DP/HDMI-only "pure" monitors, so they are biased the other way, away from TB. They aren't trying to sell into the Mac ecosystem.


Apple can make the TB3 card standard or they could offer it as a BTO option, making more money from those who need TB3 and saving money for those who do not.

Making TB3 a BTO option probably doesn't save money. The run rate of the Mac Pro is small enough that all the abnormal engineering and prep they'd have to do will probably mostly get bundled into every Mac Pro for cost recovery. The extra TB header on the board, and the internal cabling between it and the card, is hardly going to be any different from internal headers for DP and custom cables for that.

It is more cost-effective to let Apple solder the TB controllers to the board via the same solution they have worked out for the rest of the Mac market. The costs can be spread out over the whole product line if it's done the same way. When the Mac Pro does it solely by itself, that charge-back is going to land only on the Mac Pro.

If Apple sells a TB display docking station it is going to make far more money for them than a BTO TB card.
And they don't do stuff like tie TB bandwidth to HDMI out like the 2013 Mac Pro did.

The MP 2013 does no such thing. The video coming out the HDMI port doesn't do anything to the TB bandwidth.
The video stream that the HDMI port uses is actually shared with an input to one of the TB controllers. When HDMI is activated, that stream is switched away to the HDMI port. If anything, using HDMI would actually increase available TB bandwidth, at the cost of disabling one remote display.
 
I've been busy, so I haven't been able to get to replies yet, but I thought I would note that today's ARM rumor also put the next Mac Pro into 2019.

So it's to be believed we might not see the Mac Pro at WWDC this year.

I still think it points to Apple doing something bigger than just taking off-the-shelf components and fudging them into custom slots, which would take a lot of time.

It also might be time to take the AMD-Mac-in-2017-now-2018 rumor and give it a Viking funeral.

I dunno if we even want to get into the ARM Mac thing and the Mac Pro, but it's odd that Apple would do an Intel Mac Pro in 2019 and then switch to ARM in 2020.
 
(snip) I dunno if we even want to get into the ARM Mac thing and the Mac Pro, but it's odd that Apple would do an Intel Mac Pro in 2019 and then switch to ARM in 2020.

If, in fact, Apple wants to expand their in-house (or at least directly-controlled partner) chip usage all the way to the workstation level, why even do an Intel mMP? Let the iMac Pro hold people over until a new-school mMP is ready for prime time. With better eGPU support finally available, many power users would be able to add some muscle via external boxes to get by.

Cheers
 