Is ARM/Arm supposed to be AMD/Arm, or is there some significance to the same word with different capitalization?

AMD has leveraged ASMedia to do their basic I/O connectivity work for them. It would make some sense for Apple to just jump on the same gravy train.

https://www.guru3d.com/news-story/asmedia-600-chipset-family-for-zen-3-available-in-late-2020.html



As to whether Apple ever puts USB4 on an Intel Mac ... Tiger Lake might have a shot, if only because Ice Lake got one, there may be substantive driver-implementation overlap, and it may simply be cheaper to do.

ARM Holdings is the company; Arm is the technology. I should pick one and stick with it. I don’t mean AMD, because an AMD CPU is never going to be used in a Mac product. I suspect ASMedia might end up supplying USB4 controller chips to Apple, but I’m dubious right now.

Apple wants to make as clean a break with Intel as possible, so again I don’t think Apple will ever outfit an Intel-based Mac with USB4, instead reserving it for their own A-Series based Macs. Obviously, the Mac Pro may get a PCIe expansion card option at some point from Sonnet or someone else, but that’s it. It seems there is a lot of wishful thinking in this forum thread that Apple is going to put every last cherry on top of their few remaining Intel Mac updates, but I assure you that is not in Apple’s best interest and they won’t be doing it.
 
If one has to try to explain to someone that "USB-C is just the port you see" -- the battle is lost right away.
To be honest, USB-C is one of the best inventions since sliced bread, but the USB-IF's ****** way of marketing and how manufacturers have implemented USB have mangled it. Hopefully USB4 will force USB-C to really be what it should have been from the beginning. But I am afraid that we will still have cables, connectors, hubs, and hosts without the necessary connections, wires, or whatever else they are able to skip to save their two cents.
 
I have so many questions:

1 - So are TB and USB4 one and the same now?
2 - How does Intel benefit from giving away TB royalty-free?
3 - Why do we need HDMI, DisplayPort, and USB4? Why can't we just use one port to transfer everything?
4 - At 77 Gbps, isn't this faster than Ethernet? Can this be used to send internet to households instead of fiber?
5 - What are some real-world uses for 77 Gbps?
Electrical engineer here. Just to bring some insight to your part about "Can this be used to send internet to households instead of fibre"...

Optic fibre is, and will likely remain the fastest practical medium for internet speeds. The reasons for this are:

1) visible light has the highest frequencies that don't give out ionizing radiation. Ionizing radiation gives us cancer, and is radiated by things such as ultraviolet light (skin cancer from too much sunlight), x-rays (you will get cancer if you have excessive exposure to x-rays), and gamma rays (nuclear bombs etc). Anything of lower frequencies only gives heat radiation, and can cook us if it's too high energy, but otherwise is harmless.

2) The raw maximum data transfer rate without applying any multiplexing (e.g. https://en.wikipedia.org/wiki/Frequency-division_multiplexing) is simply half the frequency of the signal. Thus for visible light, this is around 750 THz for violet light, and thus a raw data transfer rate of up to 375 Tbps. For the fastest version of 5G (which isn't being implemented yet; all the 5G we are starting to get now is much lower frequency), this is around 40 GHz, and thus 20 Gbps. Note that all kinds of multiplexing techniques can be, and are, applied, and thus the theoretical bandwidth achieved can be much higher. However, various versions of these techniques are used by all of optic fibre, 5G, USB, TB3, ethernet, everything, so it's useful to start these comparisons from the raw maximum limit. And thus, a single strand of optic fibre has around 10,000 times the speed capability of 5G. And that's just a single strand. Use two strands and you've just doubled your bandwidth without any multiplexing. You can't double the air, so 5G has only one "strand". Use an 864-strand optic fibre cable (which is commonly manufactured), and you've got roughly 8,640,000 times the raw bandwidth of 5G, and so on. The one advantage that 5G (and 4G, 3G etc) has is simply mobility. Also note that part of the reason why the faster and higher frequency versions of 5G aren't being implemented is that the higher you go with these frequencies, the more the signal is blocked by objects, and the more you need direct line of sight. Thus, anything faster, such as 6G, will have very limited, direct line-of-sight usage. We will, however, keep on refining the multiplexing techniques as the hardware gains more sophisticated abilities to split the signals into finer bins.
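The half-the-frequency rule of thumb above can be sketched in a few lines (the carrier frequencies are the same order-of-magnitude figures used in the post):

```python
# Raw, no-multiplexing data rate: roughly half the carrier frequency
# (one bit per half-cycle). This is only a starting-point comparison,
# as the post notes; real links use heavy multiplexing on top.
def raw_nyquist_bps(carrier_hz: float) -> float:
    """Half the carrier frequency, in bits per second."""
    return carrier_hz / 2

violet_light_hz = 750e12   # ~750 THz, upper end of visible light
mmwave_5g_hz = 40e9        # ~40 GHz, fastest proposed 5G band

fibre_bps = raw_nyquist_bps(violet_light_hz)
five_g_bps = raw_nyquist_bps(mmwave_5g_hz)

print(f"fibre raw limit: {fibre_bps / 1e12:.0f} Tbps")  # 375 Tbps
print(f"5G raw limit   : {five_g_bps / 1e9:.0f} Gbps")  # 20 Gbps
```

The exact ratio per strand comes out closer to ~19,000x; "around 10,000 times" in the post is the right order of magnitude.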

3) When comparing other cabled methods (e.g. ethernet, USB, TB3), all those methods transfer electricity through wires. These have the same formula of half the frequency for maximum raw bandwidth. However, as you increase the frequency of an electric signal in a wire, you get all kinds of problems with the electric energy being converted to magnetic energy and leaking out of the cable. This is helped with things like twisted pairs and shielding, but it gets expensive, and it doesn't solve the problem, it only helps it. The energy attenuation is also mitigated by reducing the cable length, which is why you see short maximum cable lengths for some higher-speed data transfer protocols, which limits the practical use cases. Whereas with optic fibre, light reflection and refraction result in very low energy leakage, and what leakage does occur can be solved with repeaters, and thus optic fibre is used for data cables long enough to cross the entire planet. The two advantages that electrical cabled methods have are: for optic fibre, the signal has to be converted from electricity to light and back to electricity, so USB/ethernet/TB3 are much simpler and cheaper for short-distance connectivity; and they can transfer electric power, whereas optic fibre can't (in fact, the hardware that translates the signal from electricity to light and back again needs a power source).

And thus for data:
Optic fibre is king for speed and distance.
4G/5G is king for mobility
Ethernet/USB/TB are king for short distance connectivity and power supply.

I know this didn't exactly answer your question of "Can this be used to send internet to households instead of fibre", but it does explain why we use optic fibre so much for fast internet.
 
Not sure if I am right here, but speed over copper seems to degrade quickly as the cable gets longer; this problem is far less pronounced with fiber optics.
Tidbit: the theoretical limit of fiber optics is just north of 1 petabit.
Partially correct. It's more correct to say that the signal attenuates the longer the cable gets, which in essence means that the signal becomes harder for the receiving side to understand. The quality of the cable can impact this, which is why there are ethernet cable categories.

This happens with fiber optics as well, it just takes a lot more to attenuate the signal to the point the receiving side can't understand the signal (and it isn't impacted by external interference like copper wires can be), although there's a lot more to it obviously. You can get a couple kilometers out of a single fiber optic cable in some cases (depending on the type of fiber optics being used), at very high speeds. The only real issue is cost. Those cables are extremely expensive.
 
I'm confused by what these different standards represent and how they combine. The way Intel explains the convergence of USB-C, TB3, USB 3.1, and DP 1.2 is as follows:

"Thunderbolt 3 is a superset solution which includes USB 3.1 (10Gbps), and adds 40Gbps Thunderbolt and DisplayPort 1.2 from a single USB-C port." (https://thunderbolttechnology.net/tech/faq)

I (very roughly) think this means they've combined the USB-C physical connector, the TB3 "physical interface layer" (which I think means how the wires are configured internally within the cable, what power it carries, whether it has a separate audio channel, whether it's dual-directional, etc.), the DP 1.2 video transmission standard, and the USB data transmission standard. Though I've not found a single article that describes all of these together clearly.

Anyways, can anyone give me a clear, technically accurate description of the convergence of USB-C, TB3, USB4, and DP 2.0 to which this article is referring? Is TB3 still the superset, or is USB4 now the superset under which TB3 will be subsumed?
 
Agreed. The current iMac looks dated. I can see the Mac Pro design language trickling down into consumer products.
It would be cool if Apple could use the XDR stand, so you could change the display from landscape to portrait. Not likely to happen, since it would push the cost of the iMac into the stratosphere.
 
Not sure how a bunch of laptops that sell less than MacBooks are proof that "volume is extremely high". EliteBook? ThinkPad? Seriously? Those aren't mass-market. The XPS kind of is, but most of the existing ones aren't Thunderbolt.

And none of that negates my third point: you can just use a USB-C dock regardless. Which almost everyone ends up doing.



For Thunderbolt demand? Where?



The cost is zero.
Thinkpad is not mass-market? Seriously?
I'm confused by what these different standards represent and how they combine. The way Intel explains the convergence of USB-C, TB3, USB 3.1, and DP 1.2 is as follows:

"Thunderbolt 3 is a superset solution which includes USB 3.1 (10Gbps), and adds 40Gbps Thunderbolt and DisplayPort 1.2 from a single USB-C port." (https://thunderbolttechnology.net/tech/faq)

I (very roughly) think this means they've combined the USB-C physical connector, the TB3 "physical interface layer" (which I think means how the wires are configured internally within the cable, what power it carries, whether it has a separate audio channel, whether it's dual-directional, etc.), the DP 1.2 video transmission standard, and the USB data transmission standard. Though I've not found a single article that describes all of these together clearly.

Anyways, can anyone give me a clear, technically accurate description of the convergence of USB-C, TB3, USB4, and DP 2.0 to which this article is referring? Is TB3 still the superset, or is USB4 now the superset under which TB3 will be subsumed?
TB 3 has two generations: Gen1 is Alpine Ridge, and Gen2 is Titan Ridge. Alpine Ridge supports two streams of DP 1.2, whereas Titan Ridge supports two streams of DP 1.4 across each TB3 port.

Alpine Ridge appeared with Intel generation 6 and 7 CPUs, whereas Titan Ridge appeared with Intel generation 8, 9, and 10 CPUs. However, Intel iGPUs didn't support DP1.4 until Intel Iris Plus G7 (which first showed up with the Intel gen 10 CPUs). The first Apple laptops with Intel Iris Plus G7 iGPU are the 2020 MacBook Airs.

So, starting with 2018 MacBook Pros, all 15" MacBook Pros (which all have dGPUs) support two DP 1.4 streams on each TB3 Port. The 13" MacBook Pros (2018, 2019) and 13" MacBook Airs (2018, 2019) support only two DP 1.2 streams on each Titan Ridge TB3 port because the Intel iGPU in these laptops doesn't support DP1.4, but the 2020 MacBook Air supports two DP 1.4 streams on each TB3 port thanks to the Intel Iris Plus G7 iGPU.

Each DP 1.4 stream can do up to 5K @60Hz or up to 8K @30Hz uncompressed, whereas each DP 1.2 stream can do up to 4K @60Hz uncompressed. With MST, each TB3 Alpine Ridge port can do up to one 5K @60Hz uncompressed by combining two DP 1.2 streams out of each TB3 Alpine Ridge port.

Therefore, starting with 2016 MacBook Pros and 2018 MacBook Airs, all Apple laptops support at least one 5K @60Hz monitor uncompressed out of each TB3 port (be it Alpine Ridge with DP1.2, Titan Ridge with iGPUs supporting only DP1.2, or Titan Ridge with dGPU supporting DP1.4 or iGPU supporting DP1.4).

With 15" MacBook Pros (2018, 2019), 16" MacBook Pro (2019), and the 2020 MacBook Air, it's possible to drive one 6K monitor @60Hz uncompressed by combining two DP1.4 streams out of each TB3 Titan Ridge port, using MST.

This article seems to suggest that USB4 will support two DP2.0 streams on each USB4 port. Each DP2.0 stream supports up to 10K @60Hz uncompressed, whereas each DP 1.4 stream supports up to 5K @60Hz uncompressed or 8K @30Hz uncompressed.
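The resolution claims above can be sanity-checked with a raw pixel-bandwidth calculation (8 bits per channel RGB, i.e. 24 bpp; blanking overhead is ignored, so real link requirements are somewhat higher):

```python
# Uncompressed video payload in Gbps: width x height x fps x bits-per-pixel.
def stream_gbps(width: int, height: int, fps: int, bpp: int = 24) -> float:
    return width * height * fps * bpp / 1e9

resolutions = {
    "4K @60Hz": (3840, 2160, 60),
    "5K @60Hz": (5120, 2880, 60),
    "8K @30Hz": (7680, 4320, 30),
    "8K @60Hz": (7680, 4320, 60),
}
for name, (w, h, fps) in resolutions.items():
    print(f"{name}: {stream_gbps(w, h, fps):6.2f} Gbps")
```

4K @60Hz lands around 11.9 Gbps, within a DP 1.2 stream; 5K @60Hz (~21.2 Gbps) and 8K @30Hz (~23.9 Gbps) need DP 1.4; 8K @60Hz (~47.8 Gbps) is why DP 2.0's higher link rates matter. This matches the per-generation limits described in the post.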

Below is the link to the DP2.0 section on Wikipedia


 
Supports Up to Two 8K Displays or One 16K Display

Or Half of One 32K Display!


16K, ffs... TV and monitor manufacturers keep pushing for more and more pixels, and nobody has content that supports it. When even your 4K video looks washed out on a 16K monitor, manufacturers should think about priorities. Ridiculous.
 
TB 3 has two generations: Gen1 is Alpine Ridge, and Gen2 is Titan Ridge. Alpine Ridge supports two streams of DP 1.2, whereas Titan Ridge supports two streams of DP 1.4 across each TB3 port.
Two filled streams of HBR3 (DisplayPort 1.4) would be too much for Thunderbolt 3. Titan Ridge can do two streams only if not all of the HBR3 bandwidth is required for the pixels. Apple does this for the Apple Pro Display XDR. Apple has their own drivers/firmware to allow that. I don't think it's allowed with the Windows drivers.

Alpine Ridge appeared with Intel generation 6 and 7 CPUs, whereas Titan Ridge appeared with Intel generation 8, 9, and 10 CPUs. However, Intel iGPUs didn't support DP1.4 until Intel Iris Plus G7 (which first showed up with the Intel gen 10 CPUs). The first Apple laptops with Intel Iris Plus G7 iGPU are the 2020 MacBook Airs.
To be clear, the CPUs show the timeline of Thunderbolt appearances but it doesn't mean those CPUs are required for Thunderbolt. A Thunderbolt controller is a PCIe device that can be added to any computer that supports PCIe (my Mac Pro 2008, or an AMD computer) - only problem is that drivers are not fully developed for that use case. It may be possible to add Thunderbolt to my Power Mac G5...

The 10th gen CPUs with gen 11 graphics are interesting. The Thunderbolt controller is built into the CPU.

Each DP 1.4 stream can do up to 5K @60Hz or up to 8K @30Hz uncompressed, whereas each DP 1.2 stream can do up to 4K @60Hz uncompressed.
Like I said above, two full streams of four-lane HBR3 would exceed the Thunderbolt 3 limit (so two 5K 60Hz or 8K 30Hz 8 bpc RGB uncompressed signals are not possible). 6K uses dual-link SST to transmit two tiles of 3008x3384 at 10 bpc RGB without DSC. Each tile requires less bandwidth than 5K 60Hz or 8K 30Hz 8 bpc RGB. Stuffing symbols are used by DisplayPort to fill the bandwidth; more stuffing symbols are used when there are fewer pixels to transmit. Thunderbolt doesn't transmit stuffing symbols, which means it does not send the entire 51.84 Gbps of dual HBR3 (only 36.6 Gbps is required for 6K). The Thunderbolt controller in the display recreates the stuffing symbols when generating the output DisplayPort signals.
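The 36.6 Gbps and 51.84 Gbps figures above can be reproduced directly from the tile geometry and the HBR3 link rate:

```python
# 6K as two tiles of 3008x3384 at 10 bpc RGB (30 bits/pixel), 60 Hz,
# versus the data rate of two full four-lane HBR3 links
# (8.1 Gbps/lane, minus the 20% 8b/10b encoding overhead).
tile_w, tile_h, fps, bpp = 3008, 3384, 60, 30
tile_gbps = tile_w * tile_h * fps * bpp / 1e9
six_k_gbps = 2 * tile_gbps                  # pixel payload for 6K

hbr3_link_gbps = 4 * 8.1 * 0.8              # 25.92 Gbps per HBR3 link
dual_hbr3_gbps = 2 * hbr3_link_gbps         # 51.84 Gbps for two links

print(f"6K pixel payload: {six_k_gbps:.1f} Gbps")      # ~36.6
print(f"dual HBR3 data  : {dual_hbr3_gbps:.2f} Gbps")  # 51.84
print(f"fits in TB3 40G?: {six_k_gbps < 40}")          # True
```

So the 6K payload squeezes under Thunderbolt 3's ~40 Gbps only because the stuffing symbols that would pad it out to the full 51.84 Gbps are dropped in transit, exactly as described above.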

With MST, each TB3 Alpine Ridge port can do up to one 5K @60Hz uncompressed by combining two DP 1.2 streams out of each TB3 Alpine Ridge port.

With 15" MacBook Pros (2018, 2019), 16" MacBook Pro (2019), and the 2020 MacBook Air, it's possible to drive one 6K monitor @60Hz uncompressed by combining two DP1.4 streams out of each TB3 Titan Ridge port, using MST.
MST (Multi Stream Transport) sends multiple streams (one for each display) down a single DisplayPort connection. macOS does not support MST except for old 4K displays that used separate streams for the left and right side of the display. What you are talking about is what Apple calls Dual Link SST (Single Stream Transport) which uses two full DisplayPort connections using SST (LG UltraFine 5K, Dell UP2715K, HP z27q). Apple does not support all dual link SST displays (or maybe they don't support the ones that use HBR3: Dell UP3218K, Acer XV273K).

Thunderbolt can transmit up to two different DisplayPort signals (behaving as two separate DisplayPort connections). Some PCs might only allow one DisplayPort connection. Maybe someone will make a USB4 controller that will allow more than two DisplayPort connections.

This article seems to suggest that USB4 will support two DP2.0 streams on each USB4 port. Each DP2.0 stream supports up to 10K @60Hz uncompressed, whereas each DP 1.4 stream supports up to 5K @60Hz uncompressed or 8K @30Hz uncompressed.
This article says nothing about tunnelling DisplayPort over USB4 (only that it is one method of transmitting DisplayPort). The article is specifically about USB-C DisplayPort 2.0 alt mode which is not tunnelling DisplayPort. I believe USB-C DisplayPort 2.0 alt mode will have similar options as the USB-C DisplayPort 1.2/1.4 alt modes:
1) USB 2.0 with four lanes of DisplayPort
2) USB 3.x with 2 lanes of DisplayPort (the article did not mention 20 Gbps USB4 - only SuperSpeed USB which I assume is the gen 2 speed of 10 Gbps).
 
Thinkpad is not mass-market? Seriously?

If you compare the pricing to the Ideapad, not really. And once you factor in that we're talking about ThinkPads with Thunderbolt? The low-cost E series doesn't have that.
 
With USB 4 / TB 4, OEMs no longer have to pay a fee to license TB; however they do have to pay a fee to certify that their implementation of TB is compliant with Intel’s standards. Even though TB 4 will be available to all OEMs license-free, we may never see AMD with TB 4 if Intel refuses to certify an OEM design using an AMD chip(set).

I have so many questions:

1 - So are TB and USB4 one and the same now?
2 - How does Intel benefit from giving away TB royalty-free?
3 - Why do we need HDMI, DisplayPort, and USB4? Why can't we just use one port to transfer everything?
4 - At 77 Gbps, isn't this faster than Ethernet? Can this be used to send internet to households instead of fiber?
5 - What are some real-world uses for 77 Gbps?
Lightning, yes (not sure how that's relevant to this thread). Thunderbolt, no. Thunderbolt used to be specific to Intel chips, but it's now royalty-free.



Elitebooks start at $1,100. That's not the kind of laptop most corporations buy.



In the $1,100-and-above segment, the market shares start to look a lot different.



OK, good.
 
i need TB4 so my egpu won't be bandwidth choked.
Thunderbolt 4 is Thunderbolt 3.
USB4 has a 40 Gbps mode AND support for Thunderbolt 3, two separate modes. Unfortunately, it's not a given that devices will support 40 Gbps or TB3 with USB4, though it is likely laptops will support both (as long as Intel's controllers do). Phones and ARM tablets are not too likely to have either 40 Gbps or Thunderbolt 3, though, as they have historically not needed nor supported Thunderbolt 3.


Thunderbolt 4 is an unknown at this point. I had heard it could be bumped to 80 Gbps or stick to 40 Gbps. Maybe there won't be a TB4! (well I heard enough rumors that it's being worked on so I think it will happen).

Note that it's using 1 Superspeed lane for 40 Gbps, theoretically they could have made an 80 Gbps mode using both lanes in USB4 (just like USB 3.2 Gen 2x2 which uses both Superspeed lanes). I guess 40 Gbps was decided to be enough for any application using data that's not a video signal. It's odd that USB is okay with using both lanes for video though. But hey, it does mean that DP doesn't really need its own cable any more, it could be USB-C entirely now.
If Thunderbolt 4 is just the same as Thunderbolt 3, it doesn't really make sense to have it when you can just have USB4, which is also a rebranding. I hope they're not holding it at 40 Gbps just to protect sales of Thunderbolt 3 devices, only to say next year, "OK, you know what? It's 80." Otherwise it doesn't really make sense, especially this close to the first new Arm products incoming.
 
ARM Holdings is the company; Arm is the technology. I should pick one and stick with it. I don’t mean AMD, because an AMD CPU is never going to be used in a Mac product. I suspect ASMedia might end up supplying USB4 controller chips to Apple, but I’m dubious right now.

Apple wants to make as clean a break with Intel as possible, so again I don’t think Apple will ever outfit an Intel-based Mac with USB4, instead reserving it for their own A-Series based Macs. Obviously, the Mac Pro may get a PCIe expansion card option at some point from Sonnet or someone else, but that’s it. It seems there is a lot of wishful thinking in this forum thread that Apple is going to put every last cherry on top of their few remaining Intel Mac updates, but I assure you that is not in Apple’s best interest and they won’t be doing it.
They can’t do a very clean and quick break, losing all the Intel Mac users so fast; these things take time. But if Thunderbolt 4 really is just Thunderbolt 3 plus USB4 rebranded, then with some kind of Rosetta, time, and a universal USB standard, that will make the shift easier. Not that Thunderbolt is so important it would stop Apple from doing the shift; as for Intel, Jobs said they had been developing OS X for both PPC and Intel, and I suppose it's the same now.
What I really don’t understand is: what is this Intel “strategy” for? To lose Apple? Very well played, then.
 
4. Ethernet is now at 100Gb/s bidirectional, but that's if you just focus on transfer speed and not on quality of transfer, reliability, cable lengths, and so on; it may seem like they could be easily interchanged, but there is more to it than raw transfer speed at one meter of length.

I thought Ethernet's fastest connection is Cat6e at 10Gbps? 100Gbps bidirectional, you mean 100 up and 100 down simultaneously?
Electrical engineer here. Just to bring some insight to your part about "Can this be used to send internet to households instead of fibre"...

Optic fibre is, and will likely remain the fastest practical medium for internet speeds. The reasons for this are:

1) visible light has the highest frequencies that don't give out ionizing radiation. Ionizing radiation gives us cancer, and is radiated by things such as ultraviolet light (skin cancer from too much sunlight), x-rays (you will get cancer if you have excessive exposure to x-rays), and gamma rays (nuclear bombs etc). Anything of lower frequencies only gives heat radiation, and can cook us if it's too high energy, but otherwise is harmless.

2) The raw maximum data transfer without applying any multiplexing (eg https://en.wikipedia.org/wiki/Frequency-division_multiplexing) is simply half of the frequency of the signal. Thus for visible light, this is around 750 THz for violet light, and thus raw data transfer of up to 375 Tbps. For the fastest version of 5G (which isn't being implemented yet, all the 5G we are starting to get now is much lower frequency), this is around 40 GHz, and thus 20 Gbps. Note that all kinds of multiplexing techniques can be, and are, applied, and thus the theoretical bandwidth achieved can be much higher. However, various versions of these techniques are used by all of optic fibre, 5G, USB, TB3, ethernet, everything, so it's useful as the starting point of these comparisons to start with the raw maximum limit. And thus, a single strand of optic fibre has around 10,000 times the speed capability of 5G. And that's just a single strand. Use two strands and you've just doubled your bandwidth without any multiplexing. You can't double the air, so 5G has only one "strand". Use a 864 strand optic fibre cable (which is commonly manufactured), and you've got roughly 8,640,000 times the raw bandwidth of 5G, and so on. The one advantage that 5G (And 4G, 3G etc) has, is simply mobility. Also note that part of the reason why the faster and higher frequency versions of 5G aren't being implemented, is that the higher you go with these frequencies, the more the signal is blocked by objects, and the more you need direct line of sight. Thus, anything faster, such as a 6G will have very limited, direct line of sight usage. We will, however, keep on refining the multiplexing techniques, as the hardware gains more sophisticated abilities to split the signals into finer bins.

3) When comparing other cabled methods (e.g. ethernet, USB, TB3) , all those methods transfer electricity through wires. These also have the same formula of half the frequency for maximum raw bandwidth. However, as you increase the frequency of an electric signal in a wire, you get all kinds of problems with the electric energy being converted to magnetic energy, and leaking out of the cable. This is helped with things like twisted pairs, and shielding, but it get expensive, and doesn't solve the problem, it only helps it. The energy attenuation is also mitigated by reducing the cable length, thus why you see short maximum cable lengths for some higher speed data transfer protocols, which limits the practical use cases. Whereas with optic fibre, light reflection and refraction results in very low energy leaking, and what leaking that does occur can be solved with repeaters, and thus optic fibre is used for data cables long enough to cross the entire planet. The two advantages that electrical cabled methods have are: for optic fibres, the signal has to be transferred from electricity to light and back to electricity, and thus USB/ethernet/TB3 are much simpler and cheaper for short distance connectivity; and they can transfer electric power, whereas optic fibre can't, and in fact the hardware that translates the signal from electricity to light and back again need a power source.

And thus for data:
Optic fibre is king for speed and distance.
4G/5G is king for mobility
Ethernet/USB/TB are king for short distance connectivity and power supply.

I know this didn't exactly answer your question of "Can this be used to send internet to households instead of fibre", but it does explain why we use optic fibre so much for fast internet.

Actually, it did. Thanks, that was an interesting read.
If fiber optic speed is 375 Tbps, why is it limited to 1Gbps by ISPs?
 
i need TB4 so my egpu won't be bandwidth choked.

This will be one of the many great uses from it. Anything that can give us proper PCI-Express speeds externally is very welcome.
If fiber optic speed is 375 Tbps, why is it limited to 1Gbps by ISPs?

Because they don't have anything like the bandwidth or the uplink to support more simultaneously.
 
The Super Bowl broadcasters said 1080p60 was better than 4k60 for motion - so hopefully by the time we’re at 16k it will be a lot more than 60fps


“It turns out that 1080p at 60 frames per second delivers really smooth motion, while 4K at 60fps does not. “With 4K at 60fps, you can definitely see some motion artifacting,” says Drazin. And as you can imagine, blurriness, smearing, and pixilation in fast-moving sports is a no-go.”

This is talking about compression/bandwidth... not about the number of pixels. There's nothing inherent to 4k that would cause it to be worse for motion than 1080p.

The issue is that satellites and radio waves have a maximum amount of bandwidth. To get 4k to fit they have to compress it quite a bit - giving lots of compression artifacts for moving images (like football). With the same amount of bandwidth 1080p can be _less_ compressed... and therefore have less compression artifacts.

This is, in fact, the same reason why ESPN and ABC chose to stay with 720p while all other channels were going to 1080p. For sports, a lower resolution with less compression will, most-likely, be better.
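The fixed-bandwidth tradeoff described above can be put in numbers: at a fixed broadcast bitrate, each pixel of a higher-resolution stream gets proportionally fewer bits. The 15 Mbps channel figure below is an illustrative assumption, not from the post:

```python
# How many compressed bits each pixel gets at a fixed channel bitrate.
# Fewer bits per pixel means more aggressive compression and more
# visible artifacts in fast motion.
CHANNEL_BPS = 15e6  # assumed broadcast channel budget: 15 Mbps

def bits_per_pixel(width: int, height: int, fps: int) -> float:
    return CHANNEL_BPS / (width * height * fps)

formats = {
    "720p60":  (1280, 720, 60),
    "1080p60": (1920, 1080, 60),
    "4K60":    (3840, 2160, 60),
}
for name, (w, h, fps) in formats.items():
    print(f"{name}: {bits_per_pixel(w, h, fps):.3f} bits/pixel")
```

4K60 gets exactly a quarter of the bits per pixel that 1080p60 does at the same bitrate, which is why the less-compressed lower resolution can look better for sports.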
 
Intel CPU packages at the lower end of the laptop/mobile spectrum come with USB4 built in (so there's no choice). Those are the most at risk of being flipped over to Apple ARM. The high end of the Mac product line-up probably will not have USB4/Thunderbolt built into the CPU (because there is no iGPU there, so nothing "easing" integration between the TB controller and the DP output stream). The PCH (I/O chipset) on the higher-end CPUs tends to lag behind the mainstream CPU+PCH on USB adoption (most high-end server rooms don't have huge demands for USB devices).

What Apple doesn't have is something suitable for the iMac Pro and Mac Pro space. Those are the more likely "last to go". Add-in cards for the Mac Pro are also likely to be the "chopped down" USB4 (no TB3 and only the minimal required USB bus speed update to qualify for the marking); for the Mac space that probably won't be a good fit for expectations.

Again, in the desktop space, Intel may not be offering anything next year that you can't get with AMD plus a discrete chip. AMD will be trailing in the laptop space on deep integration.

Apple could just go with a discrete TB3 controller, with AMD also as an option, and just drag their feet on USB4. They basically did the same thing with USB 3.0: they avoided the year-one, version-one controllers and waited until later. That was in part to wait for USB 3.0 to merge into the Intel PCH, but if they fully intend to dump the Intel PCH completely, that probably isn't a big motivator. If they are moving to a new PCH (I/O chipset) vendor (e.g., ASMedia) long term, then that would more likely be Apple's priority.

All of the higher-end MBPs have iGPUs, even the ones that also have dGPUs... Also, if Intel rolls USB4 into the lower-end chipsets, they certainly will into the higher-end ones.
 
All of the higher end MBPs have iGPUs, even the ones that also have dGPUs...

True, but those come with UHD rather than Iris Plus. Part of the reason all large MBPs now have a dGPU is that Intel no longer does 45W CPUs with reasonably good iGPUs.
 
True, but those come with UHD rather than Iris Plus. Part of the reason all large MBPs now have a dGPU is that Intel no longer does 45W CPUs with reasonably good iGPUs.

I think you missed my point: if the argument is that USB4 wouldn't be included without a chipset that includes an iGPU, then it's an argument that doesn't apply to MBPs.
 
I thought Ethernet's fastest connection is Cat6e at 10Gbps? 100Gbps bidirectional, you mean 100 up and 100 down simultaneously?
No, there are also 25G/40G/100G, and I think 400G is just around the corner. For a home computer I doubt you will see those any time soon, but the enterprise equipment exists.
 
This is talking about compression/bandwidth... not about the number of pixels. There's nothing inherent to 4k that would cause it to be worse for motion than 1080p.

The issue is that satellites and radio waves have a maximum amount of bandwidth. To get 4k to fit they have to compress it quite a bit - giving lots of compression artifacts for moving images (like football). With the same amount of bandwidth 1080p can be _less_ compressed... and therefore have less compression artifacts.

This is, in fact, the same reason why ESPN and ABC chose to stay with 720p while all other channels were going to 1080p. For sports, a lower resolution with less compression will, most-likely, be better.
Or, they can simply get rid of 80% of channels, which are complete crap, and band together several channels to transmit one uncompressed (or less compressed) 4K channel.
 
Or, they can simply get rid of 80% of channels, which are complete crap, and band together several channels to transmit one uncompressed (or less compressed) 4K channel.
Sadly it's not that easy. Every single one of those crap channels are there for a reason. The media companies tell the cable providers to buy and carry all of their channels or none of them. Want ESPN? Then you must carry the Estonian knitting network.
 
Sadly it's not that easy. Every single one of those crap channels are there for a reason. The media companies tell the cable providers to buy and carry all of their channels or none of them. Want ESPN? Then you must carry the Estonian knitting network.
I wasn’t referring to the Estonian Knitting Network. That’s a must-have, and I can’t survive this lockdown without it. However, there are literally hundreds of trash channels that need to be eliminated. They are akin to junk mail and wasps: the mere purpose of their existence is to annoy people. We must fight back to free up the electromagnetic spectrum so that we can improve the quality of the channels that offer at least a semblance of usefulness.
 