Thank God for that! Finally common sense reigns! There is TB1 on this 2011 iMac and it is a complete white elephant. Hardly anything worthwhile is available to plug into it, and what does exist is cost prohibitive because it is a proprietary tech that Intel insists on charging extra for. Eight years later, out comes USB4; better late than never. Should we wish that the greed-bags involved in TB pricing die protracted painful deaths, or not? There'd be a lotta folk we could add to that list LOL.
 
... it's also what Apple have done with the 2017 iMac, iMac Pro and the 2018 Mini ...
Since when do any of those include card readers? What iMac supports HDMI?

The iMacs (normal and Pro) have had SDXC card readers on the back for a while and still do. The complaint there often falls into the category of "can't easily reach it, so don't use it". The 2018 Mini tosses the SDXC card reader to reshuffle space so that it can add more air exhaust, and it shrinks the number of Type-A ports (while going to 4 TBv3 instead of 2 TBv2). However, the Mini has a very similar issue: how do you get to a card reader slot on the rear in many setups?

HDMI on a system that already has a large display built in kind of answers the HDMI question. Specific Mac product context matters. Putting the exact same set of ports on every Mac system is just as flawed as arbitrarily taking them off.

There isn't a port mix that will make everyone happy. Different groups adapt to changes at different rates.

Folks who mainly use Wi-Fi and/or a dock for a network connection won't miss an Ethernet port much. Folks who only want to plug in via Ethernet (and almost never use Wi-Fi) probably will. It really isn't about which group is "more Pro". Wi-Fi or an Ethernet jack doesn't make someone professional or not in and of itself. It far more boils down to which group is substantively bigger in the target audience Apple is selling the product into.

There is a chunk of folks who will complain about losing the SDXC card reader on a computer: people who shoot photo/video in a workflow where they unload card A onto the Mac while, in parallel, filling up card B with new material in the camera. Fill card B, swap in card C, and back up card B. If you are already carrying multiple cards, is also carrying a card reader that much more of a burden? Very often, not really. But it is a "change", and many folks don't like change.
 
My guess is that Intel made TB3 free because TB4 is just around the corner.
And Apple needs TB4 to finally go above DP 1.2 with the new MP.
Cheaper TB3 for me and pro-level displays for the new MP.
All good.
:( I suppose I'll have to replace all my USB wall sockets soon. :mad:
If there’s more than 10Gbps moving inside your walls, yes, you should.
 
This doesn't mean that MacBooks should still include VGA ports, so why does it follow that they should still include USB A ports?

Let's see - maybe because VGA was superseded sometime in the early 00s, needs a honking great connector that would be too thick even for the 2015 MBP design and - although it is still needed sometimes - we've put up with needing a VGA dongle for the last 18 years and we got all complained out about it sometime back in 2008.

DisplayPort still turns up in new devices; so do Ethernet, optical audio and card readers; some devices even still have ****ing optical and floppy drives.

Right, got it. The only two choices are either strictly USB-C only, or something the size of a suitcase with every single connector known to humanity that still exists on any bit of hardware, anywhere. No possibility whatsoever of some sort of sensible compromise with the one or two still most commonly used interfaces...

This much I agree with. I can make 4 work because they can do multiple things (eg run an eGPU and provide power back on one port) but 6 would be a better setup.

...but part of the reason for the connector rationing is that a fully-functional TB3/USB-C port is more complex and expensive to implement (requiring a controller and a couple of PCIe lanes per pair of TB3 ports, a feed from the GPU and a link to the power supply/charging circuits) than a couple of extra single-function ports (the chipset can drive a bucketload of USB 2/3 ports). The alternative would be a mixture of full-fat TB3 and restricted USB-C ports.

Since when do any of those include card readers? What iMac supports HDMI?

The Mac Mini has TB3, USB-A, Ethernet and HDMI. The iMac has TB3, USB-A, Ethernet and an SD card reader (and, yeah, an HDMI port or extra DisplayPort would be good, but it isn't there). Sorry, I didn't think I needed to dot every i and cross every t.
 
There have been no physically complete controllers until now. Part of the issue is: where is the huge demand pulling this to come faster? 3.2 still isn't as fast as Thunderbolt v3. It is faster than older USB, but what needs faster? USB keyboards and mice? No. Single HDDs? No. Thumb flash drives? Largely no. SATA SSDs? No.

That was my point - for a peripheral that does need more than 5 Gbps, I'm not sure why device manufacturers would put much effort into adopting USB 3.2x2 instead of TB3, which is faster and should be compatible with USB4. Sure, TB3/USB4 won't be turning up on ARM (or AMD?) hosts for a couple of years, but 3.2 is probably still a year off turning up in any host in quantity, whereas TB3 is already out there in significant numbers.
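A rough back-of-the-envelope on the "what needs faster?" question above; a sketch only, using assumed, typical device throughputs rather than measured figures:

[CODE]
# Rough comparison (assumed, ballpark device throughputs) of why few common
# peripherals actually saturate the existing 10 Gb/s of USB 3.1 Gen 2.
USB31_GEN2_GBPS = 10.0            # existing USB 3.1 Gen 2 signaling rate

typical_device_gbps = {           # assumed ballpark figures, not measurements
    "USB keyboard/mouse": 0.001,
    "single HDD (~200 MB/s)": 1.6,
    "thumb flash drive (~300 MB/s)": 2.4,
    "SATA III SSD (bus-limited)": 6.0,
}

for device, gbps in typical_device_gbps.items():
    print(f"{device:30s} ~{gbps:5.2f} Gb/s "
          f"-> needs more than 10 Gb/s? {gbps > USB31_GEN2_GBPS}")
[/CODE]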
 
I gave up trying to understand the standards. Overly complicated methods of naming and renaming.
 
Watch Apple will get confused and come out with another cable standard again.
The Thunderbolt standard was developed by Intel, not Apple.
And Thunderbolt 3 has twice the bandwidth of the best USB-C.
Competition is better.
$ - is the only reason why it has not happened.
Competition is better for customers; if Intel hadn't released Thunderbolt, USB would still be using the USB 2.0 standard for transferring data.
 
sensible compromise

Sensible to whom, though? That's my whole point - literally the only other commonly used port is USB-A, and adding it wouldn't stop the complaints from people who want Ethernet or HDMI or VGA or FireWire or whatever else they refuse to use a $15 adapter cable for.
 
So, if Apple transitions to ARM processors, does that mean that Apple can use the thunderbolt port?

I'm not sure that was ever impossible (ask an actual systems designer whether you could build an Intel TB3 controller into an ARM system - I don't know - ARM systems-on-a-chip found in phones & tablets tend not to support PCIe, which would be end-of-argument, but the newer server-oriented chips have PCIe). Even if it were possible, Intel could potentially just refuse to license it for non-Intel chips.

However the USB4 announcement means that ARM and ARM SoC builders can start designing/building their own TB3-compatible "USB4" controllers that can be incorporated into new ARM-based chips. That's going to make the prospect far more likely in the long term - esp. the ~3-year timescale that it would probably take to safely migrate the Mac line to ARM.

Short-term, the prime target for the first foray into ARM-based Macs is the 12" MB which doesn't have (or really need) Thunderbolt and is less likely to be used for x86-specific applications.
 
That was my point - for a peripheral that does need more than 5 Gbps, I'm not sure why device manufacturers would put much effort into adopting USB 3.2x2 instead of TB3, which is faster and should be compatible with USB4.

There has been a faction that wanted new updates to USB to be a "Thunderbolt killer". Thunderbolt wasn't a committee-based standard, so it was seen as a "good thing" to make moves to 'kill' it by covering many of the features that differentiated Thunderbolt. Another faction was just competitive ("have to beat it because USB is the 'better' design and philosophy"). Some folks hate the encoding for transporting a combination of protocols: data only, no video or anything else that isn't USB, because that makes implementation of the controller and complete systems easier.

Pragmatically, in order for Intel (and Apple) to open up Thunderbolt as a community standard, they were going to need to pass it off to some standards group. One option was to form a new group that would take ownership. Another option was to find a home in an existing body that would take it in. The USB Type-C alt mode was a path to doing the latter.

This could backfire on the Thunderbolt advocates if the "kill Thunderbolt" crowd inside the USB-IF gets the upper hand over time. If it was more "hate Intel/Apple control" than "kill Thunderbolt", then it will probably do OK.


Sure, TB3/USB4 won't be turning up on ARM (or AMD?) hosts for a couple of years, but 3.2 is probably still a year off turning up in any host in quantity, whereas TB3 is already out there in significant numbers.

USB 3.2 isn't a year off at all.
https://www.anandtech.com/show/14027/usb-32-at-20-gbs-coming-to-highend-desktops-this-year

USB4 probably isn't going to turn up on any system for 1-2 years. (If USB4 brings USB into the scope of using active cables, then even Intel doesn't have something right now that "just works". They are probably more ready than everyone else, but there is probably some "extra stuff" that USB4 is going to throw on top to pull the mix deeper into the USB sphere of influence.)


But the notion that USB 3.2 is somehow blocked isn't really true at all. It is years away from being embedded into the host's core I/O support chipset, but that isn't true for systems. Core-chipset rollout is exactly not the pattern that USB 3.0, USB 3.1, etc. rolled out on. The first step is adding discrete USB controllers to motherboards, typically on desktops first, because there is usually space on the motherboard (and it can augment the current chipset, given the bandwidth sharing across ports in a USB controller: the ports on a single USB controller aren't additive in bandwidth).

It would far more be a matter of USB 3.2 drivers being written for AMD or ARM systems that talk to the discrete controllers.

One of the ongoing dust-ups with USB rollouts after USB 2.0 is that the discrete USB controller makers get a running start before the core I/O chipsets from the CPU vendors suck up most of the port provisioning. That's an economic thing, not a specific CPU implementation thing.

Since USB 3.2 is 20 Gb/s, the USB controller is going to need something more than just an x2 PCIe v3 budget. Pragmatically there isn't really an x3, so they'll need an x4. For more than a few boards that will be the "hang-up". But it will probably be cheaper to implement, test, and deploy an x4 assignment to a USB 3.2 controller than an x4 to TBv3 in many desktops, especially in the context of soldering it to the motherboard. The cheaper path is the one most motherboard vendors will take. Another "cheap" path is to take the x2 PCIe v3 assigned to a 3.1 Gen 2 controller and just incrementally under-provision the 3.2 controller with only 16 Gb/s. 11-12 Gb/s would still be faster than the 8-9 Gb/s you get now (after USB overhead).
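A quick sketch of that lane math, assuming PCIe 3.0's 8 GT/s per lane with 128b/130b encoding and USB 3.2 Gen 2x2's 20 Gb/s signaling rate (packet/protocol overheads ignored):

[CODE]
# Back-of-the-envelope PCIe lane budget vs. a 20 Gb/s USB 3.2 Gen 2x2 controller.
PCIE3_GT_PER_LANE = 8.0        # GT/s per PCIe 3.0 lane
PCIE3_ENCODING = 128 / 130     # 128b/130b line-encoding efficiency

def pcie3_gbps(lanes: int) -> float:
    """Usable line rate of a PCIe 3.0 link, ignoring packet/protocol overhead."""
    return lanes * PCIE3_GT_PER_LANE * PCIE3_ENCODING

USB32_GBPS = 20.0              # USB 3.2 Gen 2x2 signaling rate

for lanes in (2, 4):
    bw = pcie3_gbps(lanes)
    verdict = "enough" if bw >= USB32_GBPS else "NOT enough"
    print(f"PCIe 3.0 x{lanes}: ~{bw:.1f} Gb/s -> {verdict} to feed a 20 Gb/s controller")
[/CODE]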

Yes, Thunderbolt 3 is already out there. But the USB 3.1 gens, 3.0 and 2.0 are already out there too. More of those devices really doesn't hurt USB4, because it covers all of them. 3.2 doesn't have to be a killer; it just needs to cover incrementally more stuff. For the deeply cost-averse it probably will: it is a bit faster than older USB and cheaper than Thunderbolt.
 
USB 3.2 isn't a year off at all.

From your link "According to the organization that sets the standards for the USB interface, discrete USB 3.2 controllers capable of supporting the standard's new 20 Gb/s Type-C mode will be available this year." - those chips have to turn up in motherboards, those motherboards have to turn up in systems, then those systems actually have to ship. And that's just higher-end desktops that use discrete controllers rather than cheaper systems and laptops that will have to wait for USB 3.2 in Intel/AMD chipsets. So, I'd say "a year off" is fairly reasonable, certainly before there is any significant demand for 3.2 peripherals...

USB4 isn't going to turn up on any system probably for 1-2 years.

But the point is that USB4 == Thunderbolt 3 for most practical purposes - the USB-IF have said that it's going to be compatible. There are already systems of all types with Thunderbolt 3 in circulation whose owners can buy your TB3 devices... and you know what they won't be able to use? USB 3.2x2.

It looks like part of Intel's strategy has been to make their TB3 chipset the "go to" discrete USB-C controller, with TB3 capability as a bonus, which has helped build a (maybe) critical mass of TB3 devices. I expect they're partly betting on having a 'first-mover advantage' when competing USB4 controllers actually appear.

But it will probably be cheaper to implement, test, and deploy an x4 assignment to a USB 3.2 controller than an x4 to TBv3 in many desktops.

Not sure why - if a full-stack discrete USB 3.2 controller is going to need 4x PCIe, a DisplayPort stream and a hook-up to the power supply, then the implementation and wiring is going to be much the same as for a Thunderbolt 3 controller, so it's just down to the cost of the chip. At the moment 3.2 controllers don't exist, the first few will probably be expensive, and after that one assumes that Intel will adjust their prices to compete.

Meanwhile, Intel have also said that future CPUs will include TB3 on-chip... which will make it much easier and cheaper to implement TB3. The interesting question is - will Intel add USB 3.2x2 support at the same time? Not doing that could pretty much kill 3.2x2.
 
There is no real increase in speed here. This is more about leaving the Type-A port behind than about data bandwidth performance.

Extra VRAM in an external GPU would be RAM, but as for trying to separate the main CPU from its RAM, neither Thunderbolt nor USB has been trying, or is going to try, to solve that issue. There are some looney rumors to that effect, but they are just looney; that isn't what the USB-IF is doing or even wants to do.

I know it's hard to do, but that feature would be necessary for a modular Mac, so I'm hoping for a technical solution to make it possible.
 
I know it's hard to do, but that feature would be necessary for a modular Mac, so I'm hoping for a technical solution to make it possible.
Why would it be necessary (or even advantageous) to separate the CPU from the RAM to make a modular Mac? It should be possible with an optical interconnect (not sure what the maximum distance would be) but why would you want to physically separate RAM from the CPU?
 
Why would it be necessary (or even advantageous) to separate the CPU from the RAM to make a modular Mac? It should be possible with an optical interconnect (not sure what the maximum distance would be) but why would you want to physically separate RAM from the CPU?

User-replaceable RAM modules are literally a thing that exists. RAM doesn't need to be external to be modular/upgradable.

My fantasy is that, to upgrade, one could add a second or third CPU into the mix, a bigger RAM module, and one or more GPUs. All in neatly packaged boxes that stack. :)

If everything except the hard disk and GPU is in one module then it's hardly more modular than a Mac mini with an external GPU.
 

My fantasy is that, to upgrade, one could add a second or third CPU into the mix, a bigger RAM module, and one or more GPUs. All in neatly packaged boxes that stack. :)
All you need are four or eight RAM sockets next to the CPU and you'll have all the memory upgrade potential you could hope for. There's no need to physically separate the RAM from the CPU into separate enclosures for the machine to be extremely expandable/upgradable.
 
Highly unlikely. There will likely be TB3 features that are optional in USB4, like 100W Power Delivery.

Ok, yeah I understand now. It's the damn "optional" part that's the kicker. I hate standards that have options. There's a communication protocol called DNP 3 that I personally think is garbage, largely due to how optional so much of it is. It ends up meaning "lowest common denominator" for device comms using the protocol.
 
My fantasy is that, to upgrade, one could add a second or third CPU into the mix, a bigger RAM module, and one or more GPUs. All in neatly packaged boxes that stack. :)


For RAM modules, every plausible CPU comes with design constraints that the RAM modules can't be more than X inches/centimeters away from the CPU package. That has to do with the extremely high-speed data bus between the CPU package and the memory. Latency and signal quality are extremely important. For modern memory speeds, "X" is often in the ballpark of 4 inches (10 centimeters). Putting RAM in a module 2-3 inches away isn't practical when you will already have even more inches on the logic board between the connectors and the CPU/RAM slots.

The real issue isn't what the connecting network protocol is... running out of physical distance is the primary issue. That's why it falls into the Looney Tunes category: let's change physics and then it will work..... easy to do in cartoon world.
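For a rough sense of why a few inches matter, here is a sketch using assumed round numbers (DDR4-3200 transfer timing and roughly 6 inches/ns signal propagation on typical PCB material):

[CODE]
# How much trace length "costs" one bit period at DDR4-3200 speeds (assumed figures).
DATA_RATE_MTS = 3200           # DDR4-3200: 3200 mega-transfers per second
PROP_SPEED_IN_PER_NS = 6.0     # assumed ~6 inches/ns on typical FR4 board material

bit_period_ns = 1_000 / DATA_RATE_MTS              # ~0.31 ns per transfer
inches_per_bit = bit_period_ns * PROP_SPEED_IN_PER_NS

print(f"Bit period at DDR4-3200: {bit_period_ns:.3f} ns")
print(f"Trace length equal to one bit period: ~{inches_per_bit:.1f} inches")
# Roughly 2 inches of trace is already a whole bit-time of flight delay, which is
# why DIMM slots have to sit within a few inches of the CPU package.
[/CODE]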

"Snap on CPU". Again there are design limits to how far apart the CPUs sockets can be and just the multiple package links provided in some CPU models ( e.g., Xeon SP , AMD EYPC ). Those are typically in the same range as the RAM limits for primarily the same latency and signal quality issues.

Folks have built systems with very large CPU socket counts that covered multiple cabinets. Those are all significantly higher-NUMA (non-uniform memory access) systems with custom OS kernels to deal with the enhanced NUMA impacts. Is Apple going to fork off a custom version of macOS for a high-NUMA model? Probably not. The custom CPU "glue" chips that they'd need to create to compose a high-NUMA system tend to be quite expensive also. Is Apple going to try to crank the implementation costs dramatically higher? Again... probably not.

Adding "yet another" GPU can be just be done with normal Thunderbolt v3 ( or USB4 in the future). If put the nominal 1-2 inside the Mac system and just leave the 3+ to Thunderbolt there is nothing extremely special that Apple needs to create. Apple already has deployed a system that does that.

If everything except the hard disk and GPU is in one module then it's hardly more modular than a Mac mini with an external GPU.

Hard disk? It is unlikely that any future Apple system will come with a hard disk in the default configuration at all.
If there is something likely to be chucked out of the system box into a "snap-on" module, it would be the hard disk(s): for example, a "snap-on" 2-5 bay module for SATA devices. That could basically just be Thunderbolt 3 tweaked into a "snap-on" connector.

Trying to maximize it toward the characteristics of the Mini makes it more like a Mini. Apple has a Mini in their lineup; they don't need another one. The Mini lacks a discrete GPU in part to keep it out of the iMac space, let alone the Mac Pro space. Gimping the Mac Pro by leaving out a GPU is beyond loopy for a graphical-UI (GUI) focused operating system. If the focus is graphics, then a graphics processor is a key, essential component of the system, not some optional widget.

The Mini has a GPU. Some folks may not like its limitations, but it does have one (and it is "good enough" for a wide variety of uses).

Additionally, the notion of literally desktop-stackable and high-performance is highly questionable too. Controlling noise, providing independent high power, and occupying a small desktop footprint all get more problematic as you pull the system closer to the user and the desktop working area.

The Mini largely avoids that by capping the power used.
 
Highly unlikely. There will likely be TB3 features that are optional in USB4, like 100W Power Delivery.
Ok, yeah I understand now. It's the damn "optional" part that's the kicker.

As for 100W power delivery, (a) I'm pretty sure it's already been absorbed into the USB Power Delivery spec and (b) it's already optional in Thunderbolt 3 (iMac TB3 ports are max. 15W out, for example). You'll only find it in a few TB3 docks and the LG 5K Thunderbolt display.
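For context, those wattage figures fall straight out of voltage × current under the USB Power Delivery rules. A minimal sketch (the 5 V / 3 A pairing for the iMac's 15 W ports is an assumption about which profile is used):

[CODE]
# USB Power Delivery: delivered power is just the negotiated voltage x current.
profiles = {
    "100 W dock / LG 5K display (20 V @ 5 A)": (20.0, 5.0),
    "iMac TB3 port, ~15 W (assumed 5 V @ 3 A)": (5.0, 3.0),
}

for name, (volts, amps) in profiles.items():
    print(f"{name}: {volts * amps:.0f} W")
[/CODE]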

If you try for a 'one plug for everything' system, then 'optional' features are inevitable - it's not feasible for every device to support every feature.
 
For RAM modules, every plausible CPU comes with design constraints that the RAM modules can't be more than X inches/centimeters away from the CPU package. [...] The Mini largely avoids that by capping the power used.

Thank you for the great explanation. If I understand correctly, a real modular Mac isn't possible, but it could be more of a Mac mini with external disks and GPU, maybe with a system to daisy-chain a mini render farm, but even that's going to be a stretch.
 