1. Show me an AMD laptop with Thunderbolt.

2. The 4800U is mobile. Ultra-mobile would be under 12 W.

3. Nobody knows in what percentage of games each is faster.

1. We have desktop AMD parts with Thunderbolt, so it's a matter of OEM priorities rather than a technical limitation. Thunderbolt is still quite rare even on Intel laptops and usually doesn't run at full speed; most Thunderbolt 3 implementations are routed through the PCH.

2. Ultra-mobile is a joke for Intel; ARM chips are several times faster. There are almost no new Atom/Silver-based laptops anymore, as Intel has cut back on subsidizing OEMs for using those chips.

3. Mainstream eSports titles deliver similar performance on the best gaming CPUs from both sides.
 
Well, AMD is not able to support TB3, and now Intel is going to support TB4 only on Intel CPUs... I don't see much hope for AMD on the Mac.

Thunderbolt 4 is pretty likely to be just Thunderbolt 3 with some tweaks to keep it in compliance with USB 4. The numbers will also match up: TB 4 goes with USB 4, which will minimize consumer confusion.

Folks are saying it is faster and different based on flimsy evidence. Intel said it was 4x faster than USB, but they didn't say which USB. USB 3.1 Gen 2 (or whatever name you want to use for the 10 Gb/s generation) is 10 Gb/s. 4x that is 40 Gb/s. TBv3 is 40 Gb/s. Done: 4x!

The hand-waving occurs when folks run off and pick USB 3.2 Gen 2x2, which pragmatically nobody uses, and slap a 4x on its 20 Gb/s. That would imply a new 80 Gb/s mode, which is really suspect. If Intel were making that big a brag, they probably would have said USB 3.2. They didn't.
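The arithmetic being argued about here is simple enough to check. A quick sketch, using the published per-generation link rates, shows which reading of "4x faster than USB" actually lines up with Thunderbolt 3:

```python
# Which "USB" makes the "4x faster than USB" claim line up with
# known Thunderbolt speeds? Rates are published signaling rates in Gb/s.
usb_rates = {
    "USB 3.1 Gen 1 (USB 3.0)": 5,
    "USB 3.1 Gen 2": 10,
    "USB 3.2 Gen 2x2": 20,
}
TB3 = 40  # Thunderbolt 3 link rate, Gb/s

for name, rate in usb_rates.items():
    claimed = 4 * rate
    verdict = "matches TB3" if claimed == TB3 else "does not match TB3"
    print(f"4 x {name} ({rate} Gb/s) = {claimed} Gb/s -> {verdict}")
```

Only the 10 Gb/s generation gives 4x = 40 Gb/s, i.e. the same rate Thunderbolt 3 already runs at; the 20 Gb/s reading would require a new 80 Gb/s mode.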

There may be an 80 Gb/s mode in Thunderbolt 4, but that would be to support DisplayPort 2.0. That 80 only comes from making the whole link unidirectional, going outbound. It uses the base Thunderbolt protocol but isn't the primary Thunderbolt mode. It would, though, be consistent with the DisplayPort-only "alternate" mode that Thunderbolt controllers have supported from the start. If there is any 80 Gb/s, that's probably what it is (i.e., support for direct-connect 8K displays or super-high-refresh displays).

The other speculative push is that Thunderbolt 4 is going to move to PCI-e v4. I doubt that. First, it would be disruptive to the whole "merge" with USB that is going on now, and likewise to the DisplayPort usage. Second, the cables would get even more expensive... which again is the wrong timing. Third, the integrated TB controller design that Intel did with Ice Lake feeds x4 PCI-e v3 to each sub-controller.

[Image: Intel Blueprint Series slide (May 2019), via AnandTech, showing Ice Lake's integrated Thunderbolt sub-controller layout]


Each port can be driven at x4 by its controller. That is collectively x16 PCI-e v3 worth of bandwidth right there; optimize that before jumping to PCI-e v4. (Or Intel could use x8 PCI-e internally inside the CPU complex, which is easier to route, and then downshift and ship it out over the TB switch/router.)
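For context, here are the per-lane numbers behind that claim, assuming the standard PCIe 3.0 rate of 8 GT/s with 128b/130b line coding:

```python
# Aggregate PCIe 3.0 bandwidth across four x4 Thunderbolt
# sub-controllers, compared against a single 40 Gb/s TB3 link.
GT_PER_LANE = 8.0        # PCIe 3.0 raw rate, GT/s
ENCODING = 128 / 130     # 128b/130b line-coding efficiency
per_lane = GT_PER_LANE * ENCODING   # ~7.88 Gb/s usable per lane

x4 = 4 * per_lane        # one sub-controller: ~31.5 Gb/s
x16 = 16 * per_lane      # four sub-controllers: ~126 Gb/s

print(f"x4  PCIe v3: {x4:.1f} Gb/s per port")
print(f"x16 PCIe v3: {x16:.1f} Gb/s aggregate across four ports")
```

So even on PCI-e v3, the four sub-controllers collectively carry roughly three 40 Gb/s links' worth of payload bandwidth before v4 enters the picture.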

What Intel is likely doing in Tiger Lake is a straightforward iteration on what they did with Ice Lake.

Finally, it doesn't make any sense at all to ramp the CPU's controllers up to nosebleed levels if there are no discrete controllers that can also run at that speed. So far there are zero controllers built by anyone other than Intel. Let the third parties correctly implement TBv3 speeds and merge with USB 4. Only once there are multiple players can Thunderbolt pragmatically move forward (primarily under USB-IF control, which is likely going to be measured and relatively slow).
.....
This is no longer true

Intel dropped licensing costs and restrictions on the Thunderbolt standard, and there are now AMD-based boards available with Thunderbolt.

Here's an example: https://www.extremetech.com/computi...thunderbolt-3-only-supports-intel-cpu-coolers

Which uses an Intel TB controller. There are still no second-party TB controllers. That is exactly why it is grossly the wrong time for Thunderbolt to suddenly move out from under anyone who is working on a USB4 (merged with TBv3) controller.

Frankly, all this integrated-with-the-CPU stuff doesn't have much real impact until there are discrete controllers for the peripherals. Until that market is fleshed out, TBv4 in the CPU would be a "bridge to nowhere" if it were radically different from what USB 4 supports.



I'm not sure TB4 will be Intel-only. If anything, right now AMD is better positioned to get on board with TB4 quickly, since AMD has already made the jump to PCI-E 4. Intel is currently still on PCI-E 3.

If TB4 is just USB 4 with the somewhat optional TBv3 elements always turned on, then it probably won't be Intel-only. Technically, the USB 4 standard has some wiggle room not to implement TBv3 in certain contexts. A device will still have to have some of the "optional" stuff present to get the TB functionality. It is pretty likely that that stuff being unambiguously present is what Intel is calling TB4.




IIRC, to go TB4 on Intel, you'll either need the faster PCI-e 4 to keep the four lanes currently allocated to Thunderbolt...

There is no deep need for PCI-e v4 to be looped in here. The much higher priority should be completing the merge with USB4 and getting broader adoption. Maybe PCI-e v4 can be looped in later, but 40 Gb/s (if implementations get close to the full allocation for PCI-e v3 traffic on the wire when DisplayPort is not active) would already be highly useful. Especially with broader adoption (and hopefully economies of scale lowering entry prices).




Or use up more PCI-E lanes to get to the new bandwidth. Not everyone is going to be willing to do that, because there are only so many PCI-E lanes available, especially on Intel's lower-end CPUs with reduced lane counts.

The way Intel is doing it with the mainstream chips, the PCI-e lanes are allocated internally, so you don't really lose any external links. Ice Lake has more connectivity, PCI-e lane-wise, than any previous TB controller.
 
1. What matters is what you can actually buy.

2. Intel has Ice Lake Y. Only talking about x86.

3. You have to be rich to spend $700 on a CPU for eSports.
 
Thanks for the info :)
 
Except that Intel will sell eGPUs.
 

1. Only H-series CPUs have CPU PCIe lanes suited for eGPU, and only Alienware and Apple wire TB3 to them instead of to PCH lanes. eGPU is not an advantage of Intel's but an advantage of particular OEMs like AW/Apple.
Nobody else cares about eGPU; even in the foreseeable future with AMD CPUs, you'll probably still have only these two choices if you really care about eGPU.

2. Only talking about x86? Then why not talk only about Intel and pretend AMD doesn't exist?

3. Why do you need a top-of-the-line mainstream CPU for eSports gaming? AMD has a full product line; just pick the one that fits your budget. Oh, and you don't need an expensive Z370/Z390 just to run your 3000+ DDR4 RAM in XMP mode, since AMD supports 3200 out of the box and doesn't block you from overclocking the IMC on low-end motherboards.
 
1. I bought an Acer with TB3 in 2015 for eGPU.

2. The point is that if you want an ultra-mobile laptop AMD is likely going to be slower than Intel.

3. It is you who brought up eSports when talking about top CPUs.
 



1. eGPU off the PCH is a bad performer: any disk I/O badly hurts GPU performance and creates micro-stuttering, and since more games with larger maps rely on data streaming (and the upcoming all-SSD PS5/Xbox Series X will push this further), it will only get worse over time. Most eGPU guides totally ignore this, because most advertised eGPU devices are not actually designed for this workload, and pointing it out would kill sales for almost all of them. eGPU is supposed to future-proof your machine, but it fails at that.

2. I have an iPad Pro for ultra-mobile needs. Neither AMD nor Intel, as long as we're talking x86, is anywhere near that performance now. It's MacRumors, after all; that should be the expected answer.

3. You wrote: "Intel would still win in eGPU, ultra-mobile, and current pure dGPU gaming."

Only eSports titles are CPU-bound. Intel wins nothing in AAA dGPU gaming at 1440p/4K.
Intel only wins if you use an 8086K and overclock the IMC to run 4000+ DDR4.

The 9900K is 50% slower than the 8086K at the 99th percentile in PUBG, even with the same RAM speed, thanks to the shorter, lower-latency ring bus in the hex-core 8086K. So I guess Intel's old generation beats its current one.
 

1. An RX 460 in my TB2 case performed better than the 960M in my laptop (moved it, I need a faster 75W card). I will buy a TB3 case when there is an acceptable one.

2. Not everybody can make do with a tablet.

3. You can still see differences in games with a 2080 Ti.
 

1. You get random frame drops when a large I/O operation happens. It's not about which GPU you're using; putting your GPU in a PCH-connected slot on a desktop will also cause this. It's a cost-saving bad design in most TB3 laptops on the market today.

2. We were talking about performance.

3. If a game is GPU-bound, there's no difference from an i3 to an i9. There's no difference in games when you run at 1440p and above.

And even at lower resolutions, AMD wins on out-of-the-box RAM support. Intel is losing to AMD and even to its own last-gen CPUs; the 9900K being slower than the 8086K in games already shows Intel's problem.
 
1. I don't care about your theories. I care about my experience.

2. We were talking about PCs.

3. Well, differences can be found, even if they are often small.
 
Intel 10nm has the same density as TSMC N7, but the resulting CPU frequency/power efficiency is awful. AMD can reach 4.5 GHz at reasonable power, while Intel Ice Lake runs at 1.x GHz drawing more power than Intel 14nm at 4 GHz.

This has very little to do with the transistor. AMD and Intel take very different approaches to logic/circuit design. The transistor can toggle at least an order of magnitude faster than the clock. Clock rates are determined by how many gates and how much wire you stick between two flip-flops.
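A back-of-the-envelope sketch of that last point. All the delay numbers below are made-up placeholders for illustration, not real process data; the shape of the calculation is what matters:

```python
# Illustrative critical-path timing: the achievable clock is set by the
# delay budget between two flip-flops, not by raw transistor speed.
# All delay values are hypothetical placeholders.
t_clk_to_q = 25e-12   # flip-flop clock-to-Q delay (s)
t_setup    = 20e-12   # flip-flop setup time (s)
t_gate     = 15e-12   # delay per logic gate on the path (s)

def f_max(gates_on_path, wire_delay):
    """Max clock frequency for a path with N gates plus wire delay."""
    period = t_clk_to_q + gates_on_path * t_gate + wire_delay + t_setup
    return 1.0 / period

# A short pipeline stage vs. a deep one with more gates and wire:
print(f"10 gates: {f_max(10, 50e-12) / 1e9:.2f} GHz")
print(f"30 gates: {f_max(30, 150e-12) / 1e9:.2f} GHz")
```

With these placeholder numbers, tripling the logic depth and wire on the worst path drops the achievable clock from roughly 4 GHz to roughly 1.5 GHz, even though no individual transistor got any slower.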
 
Except that Intel will sell eGPUs.

eGPUs work now with TBv3. Intel wouldn't be barred in the slightest from selling its own eGPU in the future with that kind of bandwidth allocation, and yes, it would probably use its own discrete USB4/TB4 controller to do it. But the real kicker here is other folks getting into the business of making USB4/TB4 controllers; Intel moving the goalposts isn't going to help that happen.




".... UPDATE: Intel confirmed it referenced USB 3.1 in the presentation, meaning Thunderbolt 4 is in fact not faster than Thunderbolt 3. We updated the text accordingly. ...
We followed up with Intel, which initially provided this response:


"Thunderbolt 4 continues Intel leadership in providing exceptional performance, ease of use and quality for USB-C connector-based products. It standardizes PC platform requirements and adds the latest Thunderbolt innovations. Thunderbolt 4 is based on open standards and is backwards compatible with Thunderbolt 3. We will have more details to share about Thunderbolt 4 at a later date."

.....
"

USB 3.1 Gen 2 is 10 Gb/s ... 4x == 40 Gb/s.

Same 40 as now. The "later date" will probably be about formal certification with USB 4 and other stuff at the edges, and about flow rates inside the 40 Gb/s boundary. There is a very good chance that "later date" will come after some second party has had a chance to get a controller out the door, at least to sampling with system vendors.

[Very similar tensions arose at the USB 2.0 transition, where third-party controller vendors were very unhappy with Intel trying to squash the market for discrete controllers by bundling Intel's implementation into pragmatically mandatory chip buys by CPU customers. There has to be some space for a viable open market around an open standard.]


USB4 isn't going to be 100% uniform (nowhere near as uniform as Thunderbolt was along its independent evolution).

 
2012 was the end of the line. That is just one more.
Yeah, I helped a bunch of folks migrate off of macOS. Good idea at the time, but always wondered if they’d hate me if Apple EVER started giving any attention back to their pro line. Lucky for me, Apple stayed the course!
I'm not sure Apple will want to bother with a large architectural change to the MBP and write new high-performance drivers -- but if they did, they'd take AMD's parts. At this point, though, I think the next time we see a major change to the MBP (or any of Apple's laptops, for that matter), it'll have Apple's own silicon in it.
I agree. If they decide to put that much effort into rearchitecting anything, it will be to support their own CPU.
Yup, now we just have to wait until they actually ship to see how they perform in a shipping system.
 

1. I don't take your experience at face value, as I've fixed several cases where someone put a GPU in the wrong slot and got frame drops.

2. Disqualifying ARM devices as PCs is a huge mistake. Windows already runs on ARM.

3. What are you talking about? AMD Ryzen is faster than Intel CPUs in the same price range. What small differences do you mean?
Gaming on laughable DDR4-2666 with an Intel CPU? Or are you saying that even if the difference is small, AMD is still winning in gaming?

I understand. I'm basing that on Intel 14nm++ Skylake and GF 12nm Zen+ having similar perf/power.

And Intel 10nm Ice Lake has lower perf/power than 14nm++ Skylake. This is a combined architecture + fab-node comparison; I'm not blaming the node alone, but Intel is definitely doing something really badly right now.
 
1. I am tired of your mansplaining.

2. And native code does not run like it should.

3. I am talking about small fps differences. I did not say Intel is good value.
 

1. I'm explaining based on my experience helping my friends build PCs. GPU over PCH is never an ideal setup, and there are at least two OEMs wiring TB3 through CPU lanes.

2. Native code includes native ARM64 code. CPU architecture is not as important as it was back in the '90s. Adobe has already announced ARM64 support for Windows and already has an iPad version of Photoshop.

3. Value aside, Intel is losing on gaming performance. Or do you think fewer fps is better?

Currently the best gaming CPU is an overclocked 8086K. It beats Intel's whole current lineup into the dust in extreme situations, by more than 50% fps.

Do you care about those extreme situations (PUBG at 1080p low, 200+ fps) enough to void your warranty?

If we take overclocking off the table, Intel only supports 2666 MHz DDR4, and I can't say that boosts your FPS compared to AMD's 3200 MHz default.
 
1. Should I be impressed?

2. The vast majority of PC games are compiled for x86, for example.

3. I don't care about buying a CPU only for gaming but it seems many people do.
 

2. That's not important, as you can just recompile them for ARM; most of them are written in C++ or even C#. Civ6 has an iOS version that shares mods with the PC version.

3. Intel asks you to buy a last-gen chip, void your warranty, and tinker around to get this win. And yet people claim Intel is less hassle.
 
2. I don't think many developers are recompiling for ARM on Windows, and sometimes it would be impractical or too much work to port.
 
1. We have desktop AMD parts with Thunderbolt, so it's a matter of OEM priorities rather than a technical limitation. Thunderbolt is still quite rare even on Intel laptops and usually doesn't run at full speed; most Thunderbolt 3 implementations are routed through the PCH.

In the $1,100 & up range for Intel laptops from major players, Thunderbolt is not all that rare.

In the sub-$900 range it is. Until very recently, that sub-$800 zone is where the overwhelming majority of AMD laptops were. That is going to change a bit with the new Ryzen 4000 options, but the "low priority" was more about a cheap-as-possible priority than about whether Thunderbolt (TB) was useful. TB isn't aligned with race-to-the-bottom system pricing.

2. Ultra-mobile is a joke for Intel; ARM chips are several times faster. There are almost no new Atom/Silver-based laptops anymore, as Intel has cut back on subsidizing OEMs for using those chips.

Atom was never most of the ultra-mobile space at all. The Celeron/Pentium chips aren't Atom-based, and a very large fraction of Chromebooks aren't on Atom anymore either.

Even though AMD was (is?) largely relegated to the lower half of the laptop segment, they didn't have much leverage in the Chromebook space. How much work the chip vendor puts into reference designs and system R&D support matters. Intel largely swapped its Atom designs for Core-based ones in the lower-mid-range Chromebook space and kept ARM implementations from sweeping them away. There aren't zero ARM solutions there, but ARM didn't sweep Intel away like a plague of locusts either.
 
PCIe 3.0 x8 is not enough for the RX 5500 4GiB (8GiB is fine).

Trying to force PCI-e v4 onto TB isn't going to fix that. The very bottom-end GPU cards are hard to make into "exciting" eGPU solutions, because the performance gap between them and internal integrated GPUs is going to shrink to an increasingly narrow one. That says next to nothing about eGPUs and the ability to sell them; you just can't sell them around "too cheap" video cards.

You are just spinning a position that doesn't have any real technical foundation and throwing out misdirection.

There is extremely little tactical or strategic need for Thunderbolt to move off PCI-e v3 before implementations can fully exploit the x4 PCI-e v3 bandwidth they already have. (The first iteration of TBv3 was largely capped in practice at 22 Gb/s, well short of 32 Gb/s. Get to dynamically flow-controlling a full 30 Gb/s and then there's some "need" to move on. Otherwise this is just chasing "PCI-e v4" tech-porn chatter that isn't going to help Thunderbolt.)
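To put numbers on that gap, here's a quick sketch using the 22 Gb/s practical cap cited above against the x4 PCI-e v3 nominal rate of 32 Gb/s:

```python
# How much of a TB3 link's PCIe allocation is actually used in practice.
# 22 Gb/s is the commonly cited real-world TBv3 PCIe data ceiling;
# x4 PCIe 3.0 nominal is 4 lanes at 8 GT/s each.
x4_pcie3_nominal = 4 * 8.0   # Gb/s, raw signaling
observed_tb3_cap = 22.0      # Gb/s, typical measured ceiling

headroom = x4_pcie3_nominal - observed_tb3_cap
utilization = observed_tb3_cap / x4_pcie3_nominal

print(f"unused headroom: {headroom:.0f} Gb/s "
      f"({utilization:.0%} of x4 PCIe v3 actually used)")
```

Roughly a third of the existing allocation goes unused, which is the argument for optimizing current implementations before reaching for PCI-e v4.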
 
".... UPDATE: Intel confirmed it referenced USB 3.1 in the presentation, meaning Thunderbolt 4 is in fact not faster than Thunderbolt 3. We updated the text accordingly. ...
We followed up with Intel, which initially provided this response:

False. There is no USB 3.1 anymore; all of USB 3 has been renamed under USB 3.2, with Gen 1 and Gen 2 designations.
 