USB 3.2 Gen 2 is 10Gb/s, so 4 times that is 40Gb/s

USB 3.2 Gen 2 is the new name for "USB 3.1 Gen 2"

There is physically no way to double the bandwidth of Thunderbolt now, as 40Gb/s is the current limit for signal integrity, unless they go full fiber and rename Thunderbolt back to its original codename, "Light Peak".

USB 3.2 Gen 1x1 = USB 3.1 Gen 1
USB 3.2 Gen 1x2
USB 3.2 Gen 2x1 = USB 3.1 Gen 2
USB 3.2 Gen 2x2

Clearly, you are confused about the USB 3.2 standards. There are four versions of USB 3.2, and what you explained is wrong. If you keep getting USB 3.2 mixed up, what's the point? USB 3.2 Gen 2x2 is 20Gb/s, so four times that would be 80Gb/s. You see, a lot of people, yourself included, get the USB 3.2 naming wrong, and we have no idea whether Intel was referring to a specific USB version.

Thunderbolt 3 uses four lanes of PCIe 3.0, and TB4 is rumored to use PCIe 4.0. Clearly, there is no reason to stay at 40Gb/s of bandwidth.
 
USB 3.2 Gen 1x1 = USB 3.1 Gen 1
USB 3.2 Gen 1x2
USB 3.2 Gen 2x1 = USB 3.1 Gen 2
USB 3.2 Gen 2x2

Clearly, you are confused about the USB 3.2 standards. There are four versions of USB 3.2, and what you explained is wrong. If you keep getting USB 3.2 mixed up, what's the point? USB 3.2 Gen 2x2 is 20Gb/s, so four times that would be 80Gb/s. You see, a lot of people, yourself included, get the USB 3.2 naming wrong, and we have no idea whether Intel was referring to a specific USB version.

Thunderbolt 3 uses four lanes of PCIe 3.0, and TB4 is rumored to use PCIe 4.0. Clearly, there is no reason to stay at 40Gb/s of bandwidth.

"Intel did confirm that that they were referencing USB 3.2 Gen 2 - the 10 Gbps version - in the keynote presentation."

It's you who is confused about USB 3.2.

They are not mentioning USB 3.2 Gen 2x2.

They are mentioning 4 times USB 3.2 Gen 2 (x1).
 
USB 3.2 Gen 1x1 = USB 3.1 Gen 1
USB 3.2 Gen 1x2
USB 3.2 Gen 2x1 = USB 3.1 Gen 2
USB 3.2 Gen 2x2

Clearly, you are confused about the USB 3.2 standards. There are four versions of USB 3.2, and what you explained is wrong. If you keep getting USB 3.2 mixed up, what's the point? USB 3.2 Gen 2x2 is 20Gb/s, so four times that would be 80Gb/s. You see, a lot of people, yourself included, get the USB 3.2 naming wrong, and we have no idea whether Intel was referring to a specific USB version.

Thunderbolt 3 uses four lanes of PCIe 3.0, and TB4 is rumored to use PCIe 4.0. Clearly, there is no reason to stay at 40Gb/s of bandwidth.

If Tiger Lake even ships this year (which I severely doubt) I can see Intel and/or Apple hanging 40Gbps TB4 ports off of the CPU because they will use fewer PCIe 4 lanes, especially on H-Series. Should Tiger Lake-H have the requisite x16 lanes of PCIe 4, x8 would go to the GPU, x2 to two TB4 ports, x2 to another two TB4 ports and x4 to the SSD, all of which would make for an incredibly speedy machine, given that Apple would be able to connect the SSD directly to the CPU and bypass the PCH for the most critical components. A dGPU such as the 5600M/5700M, together with an Xe-based iGPU and a PCIe 4.0-based SSD, should give a future 16-inch MBP quite a healthy boost.
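Purely as a sanity check of that lane math (the device labels and lane counts below are just my own illustration of the split described above, not anything Intel or Apple has published), a trivial C sketch:

[CODE]
/* Back-of-the-envelope check of the hypothetical Tiger Lake-H lane split
 * described above. Labels and lane counts are assumptions for illustration. */
#include <stdio.h>

int main(void) {
    struct { const char *dev; int lanes; } budget[] = {
        { "dGPU",          8 },
        { "TB4 ports 1-2", 2 },
        { "TB4 ports 3-4", 2 },
        { "NVMe SSD",      4 },
    };
    int total = 0;
    for (int i = 0; i < (int)(sizeof budget / sizeof budget[0]); i++)
        total += budget[i].lanes;
    printf("PCIe 4.0 lanes used: %d of 16\n", total); /* 8 + 2 + 2 + 4 = 16 */
    return 0;
}
[/CODE]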

The real trick for Thunderbolt 4 would be to be able to accommodate the full 40Gbps for data AND not take any overhead away for attached DisplayPort devices.
 
"Intel did confirm that that they were referencing USB 3.2 Gen 2 - the 10 Gbps version - in the keynote presentation."

It's you who is confused about USB 3.2.

They are not mentioning USB 3.2 Gen 2x2.

They are mentioning 4 times USB 3.2 Gen 2 (x1).

How do you even know it's USB 3.2 Gen 2x1? Then how come Tom's Hardware reported that Intel compared it with USB 3.1? What about AnandTech? "USB 3.2 Gen 2" on its own is neither Gen 2x2 nor Gen 2x1.

And you see, it's not even certain which USB 3.2 version Intel used for the comparison. Intel mentioned only USB 3, and I already checked their keynote from CES 2020. Since many reviewers and websites get the USB 3 naming wrong, it is not even confirmed whether Intel used USB 3.2 Gen 2x1 for the comparison or not.
If Tiger Lake even ships this year (which I severely doubt) I can see Intel and/or Apple hanging 40Gbps TB4 ports off of the CPU because they will use fewer PCIe 4 lanes, especially on H-Series. Should Tiger Lake-H have the requisite x16 lanes of PCIe 4, x8 would go to the GPU, x2 to two TB4 ports, x2 to another two TB4 ports and x4 to the SSD, all of which would make for an incredibly speedy machine, given that Apple would be able to connect the SSD directly to the CPU and bypass the PCH for the most critical components. A dGPU such as the 5600M/5700M, together with an Xe-based iGPU and a PCIe 4.0-based SSD, should give a future 16-inch MBP quite a healthy boost.

The real trick for Thunderbolt 4 would be to be able to accommodate the full 40Gbps for data AND not take any overhead away for attached DisplayPort devices.

If Tiger Lake comes out later this year, it won't be possible to use it in any Mac right away, because it takes time to optimize both software and hardware, usually 3~6 months. Ice Lake was released in Q3 2019, and yet no Mac has a 10nm Ice Lake CPU. The performance itself is already in doubt, except for the GPU. Apple tends to adopt new technology more slowly than others.

The 5600M and 5700M won't work with the current MBP because of the 100W limit: the CPU uses 45W and the GPU would use 50W. Even if they wanted to use them, they would have to change the charging port itself, because USB-C can only charge at up to 100W. Also, performance without a charger would decrease.

At this point, we have no idea about TB4's performance and spec, since Intel wasn't specific about which USB 3 they were referring to at CES 2020.
 
Ice Lake implements 14 out of 20 instructions, and Tiger Lake will add a 15th.
Those are instruction subsets. According to the Clang count, there are 651 instructions and about 5000 intrinsics total.
 
USB 3.2 Gen 1x1 = USB 3.1 Gen 1
USB 3.2 Gen 1x2
USB 3.2 Gen 2x1 = USB 3.1 Gen 2
USB 3.2 Gen 2x2

Clearly, you are confused about the USB 3.2 standards. There are four versions of USB 3.2

How do you even know it's USB 3.2 Gen 2x1? Then how come Tom's Hardware reported that Intel compared it with USB 3.1? What about AnandTech? "USB 3.2 Gen 2" on its own is neither Gen 2x2 nor Gen 2x1.

There’s gen 1, which is identical to 3.1 gen 1, which is identical to 3.0. 5 Gb/s

There’s gen 2, which is identical to 3.1 gen 2. 10 Gb/s

And then there’s gen 2x2, which is two lanes of gen 2. 20 Gb/s

It’s obnoxious marketing, but it’s not that hard.
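For anyone still untangling it, here's a throwaway C sketch (my own table, nothing official) that just lists the three rates and what "four times" each one works out to, which is the whole disagreement above:

[CODE]
/* The three USB 3.2 transfer rates (raw signaling, Gb/s) and the 4x
 * aggregate everyone in this thread is multiplying out. */
#include <stdio.h>

int main(void) {
    struct { const char *name; int gbps; } modes[] = {
        { "USB 3.2 Gen 1   (= 3.1 Gen 1 = 3.0)",  5 },
        { "USB 3.2 Gen 2   (= 3.1 Gen 2)",       10 },
        { "USB 3.2 Gen 2x2",                     20 },
    };
    for (int i = 0; i < (int)(sizeof modes / sizeof modes[0]); i++)
        printf("%-38s %2d Gb/s -> 4x = %2d Gb/s\n",
               modes[i].name, modes[i].gbps, 4 * modes[i].gbps);
    return 0;
}
[/CODE]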
Those are instruction subsets. According to the Clang count, there are 651 instructions and about 5000 intrinsics total.

Yes, fair enough. My point stands — Tiger Lake will include most instructions, and when Cooper Lake’s successors eventually move to Willow Cove, so will they. It’s fragmented for now, but not quite that bad.
 
There’s gen 1, which is identical to 3.1 gen 1, which is identical to 3.0. 5 Gb/s

There’s gen 2, which is identical to 3.1 gen 2. 10 Gb/s

And then there’s gen 2x2, which is two lanes of gen 2. 20 Gb/s

It’s obnoxious marketing, but it’s not that hard.
I'm sure glad they simplified things 🤣
 
If Tiger Lake comes out later this year, it won't be possible to use it in any Mac right away, because it takes time to optimize both software and hardware, usually 3~6 months. Ice Lake was released in Q3 2019, and yet no Mac has a 10nm Ice Lake CPU. The performance itself is already in doubt, except for the GPU. Apple tends to adopt new technology more slowly than others.

Ice Lake-U is only available in low volume and has poor CPU perf (albeit better GPU perf). It's not really that interesting for Apple. They should probably use Comet Lake-U instead, and I'm guessing we'll see the 13-inch MacBook Pro move to that in spring.

Ice Lake-Y seems to basically not be available at all. And it ups the TDP by 30%, so that's not great. It might be a contender for the MacBook Air, but I haven't been able to find any benchmarks whatsoever.

Ice Lake is mostly interesting as a reboot of Intel's attempts to go 10nm, and hopefully, we'll see some more viable products in Tiger Lake some time this year.
 
View attachment 887517

Not bad for 15 watts from AMD....
The iGPU would also blow away the pathetic Skylake one that Apple/Intel has used for the last 4 years...
AND it supports LPDDR4
It seems one would have to buy a very expensive laptop.

But the most I see in an LPDDR4X laptop is 32GiB.

These prices are no-go.
 
USB 3.2 Gen 1x1 = USB 3.1 Gen 1
USB 3.2 Gen 1x2
USB 3.2 Gen 2x1 = USB 3.1 Gen 2
USB 3.2 Gen 2x2

Clearly, you are confused about the USB 3.2 standards. There are four versions of USB 3.2, and what you explained is wrong. If you keep getting USB 3.2 mixed up, what's the point? USB 3.2 Gen 2x2 is 20Gb/s, so four times that would be 80Gb/s. You see, a lot of people, yourself included, get the USB 3.2 naming wrong, and we have no idea whether Intel was referring to a specific USB version.

Thunderbolt 3 uses four lanes of PCIe 3.0, and TB4 is rumored to use PCIe 4.0. Clearly, there is no reason to stay at 40Gb/s of bandwidth.

There are three versions of USB 3.2, not four.

The USB 3.2 specification absorbed all prior 3.x specifications. USB 3.2 identifies three transfer rates:

  • USB 3.2 Gen 1: SuperSpeed USB 5Gbps
  • USB 3.2 Gen 2: SuperSpeed USB 10Gbps
  • USB 3.2 Gen 2x2: SuperSpeed USB 20Gbps


It’s clear that USB 3 and Thunderbolt 3 are meant to merge into USB 4. However Intel wants to spin things, this is where we are at, which is still a pretty good place to be. Intel cannot increase Thunderbolt bandwidth using PCIe 3.0 simply due to the limited number of lanes on their consumer CPUs. Thunderbolt 4 is them simply trying to keep up with USB 4 to make Thunderbolt 3 look fresh and new as 4 must be better than a 3, right?

While rumors swirl that Tiger Lake will bring PCIe 4.0 to consumer desktops, I see no concrete information to substantiate those rumors in the least. Intel dragged their heels on adopting USB 3 inside their PCH and will do the same with PCIe 4.0 if they think it will be short-lived relative to PCIe 5.0. Everyone arguing that PCIe 4 is for consumers and PCIe 5 is for the data center is forgetting that the computing public abhors multiple standards like this existing and, beyond the normal transition period required, is going to coalesce around the lowest common denominator, and right now that is PCIe 3.0. Those of us here can argue all day long about PCIe 4 or 5 and the need for it, but just as many in these forums will argue that they don't need it and don't care. Translate that to the real world, and the only people fussing about whether Thunderbolt 4 is going to be 40Gbps or 80Gbps are those of us who thrive on specs, which may or may not matter in the real world. We just want to believe they do, otherwise this handwringing is just a colossal waste of time and energy, which it is.
 
Ice Lake-U is only available in low volume and has poor CPU perf (albeit better GPU perf). It's not really that interesting for Apple. They should probably use Comet Lake-U instead, and I'm guessing we'll see the 13-inch MacBook Pro move to that in spring.

Ice Lake-Y seems to basically not be available at all. And it ups the TDP by 30%, so that's not great. It might be a contender for the MacBook Air, but I haven't been able to find any benchmarks whatsoever.

Ice Lake is mostly interesting as a reboot of Intel's attempts to go 10nm, and hopefully, we'll see some more viable products in Tiger Lake some time this year.

We might expect Tiger Lake if Apple can adopt the new Intel CPU within a short period of time.
 

It seems Intel hasn't clarified TB4 at this point.

They left themselves just enough wiggle room to backtrack any rumors, guesses or semi-obvious conclusions when they inevitably disappoint everyone with a rehash.

Typical Intel...again, trying to seize the narrative away from AMD and continually tripping on a banana peel.

Now, if they would stop the FUD and just do what they used to do best, which was make bad-ass CPUs. Clock's ticking, Mr. Swan.
 
The cheapest Intel laptop (N4000, Windows 10, 11" 1080p) costs 175 euro.

The cheapest AMD laptop (Stoney Ridge, FreeDOS, 15" 768p) costs 220 euro.

The second has twice as much RAM and storage.


No graphics comparison.
Now there is a 9120C Chromebook similar to that N4000, but 14", for 200 euro.

Slower CPU but better graphics than the 9125. And it is a lower power part.
 
The CISC vs. RISC fight ended over 20 years ago with the release of the Pentium MMX, which is a RISC-style microcoded CPU with an x86 CISC decoder.

Modern Intel chips are neither CISC nor RISC, but rather a VLIW-style backend with a CISC decoder.
CISC has no logical advantage compared to RISC, let alone a performance advantage.
iOS runs the same kernel as macOS.
ARM chips stopped being strictly RISC a long time ago.

CISC vs. RISC has not been important for more than two decades. The "complex" in CISC does not mean doing calculus in one instruction; it just means some operations can be fused into one instruction for easy access. FMA, for example, does a*b + c in one instruction, and ARM already supports this kind of instruction.
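To make that concrete, here's a tiny C snippet using the standard fma() from math.h (C99); whether the compiler actually emits a single FMA instruction depends on the target CPU and build flags:

[CODE]
/* fma(a, b, c) computes a*b + c in one step with a single rounding.
 * x86 exposes this via the FMA extension, ARMv8 via FMADD; both ISAs have it. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 2.0, b = 3.0, c = 4.0;
    printf("fma(%g, %g, %g) = %g\n", a, b, c, fma(a, b, c)); /* 2*3 + 4 = 10 */
    return 0;
}
[/CODE]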

iOS is nowhere near lightweight compared to macOS. It can run everything macOS could run. After the 64-bit transition, all iOS apps are source-code compatible with macOS and vice versa.



Extra knowledge: CISC was well known for its memory savings: the same assembly code in CISC is much shorter than the RISC version, which reduced instruction memory usage, and that mattered a lot in the 1980s.

You're probably thinking fewer instructions run faster, but that's not the case here, as one RISC instruction runs much faster than one complex CISC instruction, so the performance ends up about the same.

Today our memory is filled with pictures and media instead of CPU instructions.

As hard as you try, you are supporting the argument! The CPU construct of a RISC- or CISC-based system is just that at the upper level of the encoding, while the lower core layers have leveraged technologies from the opposing camp. The code runs in a radically different format across the chips.

Do take some time to read up on iOS and macOS. iOS was based on the RISC-era OS X and was trimmed to the bare essentials, as at the time the iPhone APU could not handle the full OS. Even today it is about 60% of what OS X offered. Today's macOS is a complete rewrite of the code and can't run on the ARM instruction set, and there is nothing related to 64-bit here.
I concur. And also for compatibility with the rest (95%) of the world using Windows (Microsoft Office, Clarivate Analytics EndNote, VMware Fusion, etc). Otherwise, the Mac will be a deal breaker for us.

So that's why two of the fastest supercomputers are using AMD EPYC Rome CPUs, because they can't run as well as Intel's Xeon CPUs? ;-}

I think you got it backwards!
 
Now there is a 9120C Chromebook similar to that N4000, but 14", for 200 euro.

Slower CPU but better graphics than the 9125. And it is a lower power part.
The cheapest laptop with a RAM slot also costs 200 euro (Celeron, Windows 10, 15" 768p).
 
Do take some time to read up on iOS and macOS. iOS was based on the RISC-era OS X and was trimmed to the bare essentials, as at the time the iPhone APU could not handle the full OS. Even today it is about 60% of what OS X offered. Today's macOS is a complete rewrite of the code and can't run on the ARM instruction set, and there is nothing related to 64-bit here.

I… what? “RISC OS-X”? Are you thinking of Archimedes? macOS as a rewrite? Of what? It still shares code with NeXTSTEP. “Nothing related to 64bit”??

iOS was trimmed because they had an opportunity to kill some deprecated code like QuickTime.
 
back to the clock speed race?
Clearly, Intel is behind on process node (14 vs 7, which is really more like 14 vs 10), and they have focused on backend improvements, which will pay off in the future when we near the end of the node race (5nm... 3... there is a physical end to this)

Those using Adobe products are happy about the clock speed race.
 
As hard as you try, you are supporting the argument! The CPU construct of a RISC- or CISC-based system is just that at the upper level of the encoding, while the lower core layers have leveraged technologies from the opposing camp. The code runs in a radically different format across the chips.

Do take some time to read up on iOS and macOS. iOS was based on the RISC-era OS X and was trimmed to the bare essentials, as at the time the iPhone APU could not handle the full OS. Even today it is about 60% of what OS X offered. Today's macOS is a complete rewrite of the code and can't run on the ARM instruction set, and there is nothing related to 64-bit here.


So that's why two of the fastest super computers are using AMD EPYC Rome CPU chips as they can't run as well as Intel's Xeon CPU ;-}

I think you got it backwards!

The OS doesn't care about RISC or CISC. Modern OSes are written in high-level languages like C/C++ instead of 100% assembly code and are highly portable, and that's why OS X migrated from IBM PowerPC (RISC, if you really care) to Intel in one year. iOS and macOS are source-code compatible, and you can now run UIKit apps on macOS, and that's the future for the Mac.

Actually, nobody has cared about RISC or CISC for more than 20 years. Just like nobody cares about a pure microkernel anymore, as everything became hybrid.


PS: before the transition to 64-bit, iOS source code targeted 32-bit ARM, and in C the basic variable lengths are different from AMD64, creating potential issues if you use them unaware of the difference or rely on sizeof().

After ARM64, all basic variables are the same length on both CPU architectures, so they are fully source-code compatible now.
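A minimal C illustration of that point, assuming nothing beyond the standard data models (ILP32 on 32-bit ARM, LP64 on arm64 and x86_64):

[CODE]
/* Basic C type sizes. On 32-bit ARM (ILP32) long and pointers are 4 bytes;
 * on arm64 and x86_64 (both LP64) they are 8 bytes, and they now match. */
#include <stdio.h>

int main(void) {
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* 4 on all of them */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 on armv7, 8 on arm64/x86_64 */
    printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 on armv7, 8 on arm64/x86_64 */
    return 0;
}
[/CODE]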


BTW, please read up on how iOS was built. The original plan was to build iOS from the iPod OS, and since that didn't meet the "smart" requirements, they scrapped it and ported the OS X kernel instead. Today macOS is still running the same old Darwin Mach kernel with all the legacy stuff that was ported from IBM CPUs. The next port would take even less time, as we already have a really good ARM library to work with.

UIKit for macOS / Mac Catalyst ("iPad apps for Mac") was built ABI-incompatible with the iOS emulator. And they finally released the xcframework format, which can pack multiple CPU binaries with multiple ABIs into one framework. Currently we have iOS arm64 / iOS x86_64 (emulator) and Mac Catalyst x86_64 / AppKit x86_64. Previously this wasn't possible, as only one x86_64 slice can exist in a framework, so Mac Catalyst x86_64 would conflict with iOS x86_64 (emulator). This paves the road for a "Mac Catalyst arm64" build.
 
and that's why OS X migrated from IBM PowerPC

[..]

Today macOS is still running the same old Darwin Mach kernel with all the legacy stuff that was ported from IBM CPUs. The next port would take even less time, as we already have a really good ARM library to work with.

Yup.

I mean, really, OS X / NeXTSTEP started on 68k, then was briefly on Intel, SPARC, and PA-RISC, and was only ported to PowerPC when Apple acquired it (only to be ported back to Intel a few years later). Then it was ported to ARM and rebranded as iOS.

When you're abstract enough to run on two or three architectures, you're abstract enough to run on almost any architecture.
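In source terms, "abstract enough" mostly looks like this sketch: the same C file builds for any of those targets, and the rare architecture-specific bits hide behind predefined compiler macros (the usual Clang/GCC ones, nothing Apple-specific):

[CODE]
#include <stdio.h>

int main(void) {
#if defined(__x86_64__)
    puts("Built for x86_64");
#elif defined(__aarch64__) || defined(__arm64__)
    puts("Built for ARM64");
#elif defined(__powerpc__) || defined(__ppc__)
    puts("Built for PowerPC");
#else
    puts("Built for some other architecture");
#endif
    return 0;
}
[/CODE]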
 
The OS doesn't care about RISC or CISC. Modern OSes are written in high-level languages like C/C++ instead of 100% assembly code and are highly portable, and that's why OS X migrated from IBM PowerPC (RISC, if you really care) to Intel in one year. iOS and macOS are source-code compatible, and you can now run UIKit apps on macOS, and that's the future for the Mac.

Actually, nobody has cared about RISC or CISC for more than 20 years. Just like nobody cares about a pure microkernel anymore, as everything became hybrid.

PS: before the transition to 64-bit, iOS source code targeted 32-bit ARM, and in C the basic variable lengths are different from AMD64, creating potential issues if you use them unaware of the difference or rely on sizeof().

After ARM64, all basic variables are the same length on both CPU architectures, so they are fully source-code compatible now.

BTW, please read up on how iOS was built. The original plan was to build iOS from the iPod OS, and since that didn't meet the "smart" requirements, they scrapped it and ported the OS X kernel instead. Today macOS is still running the same old Darwin Mach kernel with all the legacy stuff that was ported from IBM CPUs. The next port would take even less time, as we already have a really good ARM library to work with.

UIKit for macOS / Mac Catalyst ("iPad apps for Mac") was built ABI-incompatible with the iOS emulator. And they finally released the xcframework format, which can pack multiple CPU binaries with multiple ABIs into one framework. Currently we have iOS arm64 / iOS x86_64 (emulator) and Mac Catalyst x86_64 / AppKit x86_64. Previously this wasn't possible, as only one x86_64 slice can exist in a framework, so Mac Catalyst x86_64 would conflict with iOS x86_64 (emulator). This paves the road for a "Mac Catalyst arm64" build.

OK, let's go through your errors!
RISC & CISC still have a large bearing on how the OS and apps run on a given CPU. Modern Intel CPUs are CISC at the assembly layer before the instructions are decoded and dispatched by the microcode, which at the lower layers has some RISC elements. The instructions themselves are all CISC.

OS-X & MacOS contains a lot of Objective-C, kernel is in C as well as Embedded C++, as well as assembler code for low level file system for performance. Windows 7 and newer was written in C++, kernel is in C.

ARM64 has no bearing here, as Apple has their own chip design and instruction set (most of it is not disclosed). If Apple used a plain-jane ARM64 APU then you might have something, but Apple is not likely to do that.

No, iOS was ported from OS X when it was on the PowerPC CPU. The iPod was still very different; it was never a full OS!

When Apple ported OS X from PowerPC to Intel, it did a full rewrite of the Darwin microkernel and the rest of the code to work on CISC processors.

When Apple ported over iOS, it needed to trim back a lot of the bulk, removing lots of code, as it just wouldn't fit in the limited RAM and storage the first iPhone had (iPhone OS is the original name of iOS).

I don't understand your referencing an emulator, as that has no bearing on this.
I… what? “RISC OS-X”? Are you thinking of Archimedes? macOS as a rewrite? Of what? It still shares code with NeXTSTEP. “Nothing related to 64bit”??

iOS was trimmed because they had an opportunity to kill some deprecated code like QuickTime.

Some of us old farts remember the first generation of OS X, which ran on IBM's PowerPC (RISC-based). NeXTSTEP was the source of Apple's OS X kernel, Darwin, and most of the supporting elements.

When IBM/Motorola couldn't match the performance of what Intel was doing, Apple jumped to Intel's CISC CPUs: the Core Duo, then on to the Core 2 Duo, and then onto the i3/i5/i7 CPUs.

The Core i CPUs were a big change for Intel, as the microcode leveraged some RISC technology, which gets people confused!

Adding ARM CPUs into the mix then gets into which flavor of ARM! Arm's ARM or Apple's ARM, which are similar but still very different from each other!
 