
Pangalactic
macrumors 6502a, Original poster
Hi! I have a question: I'm not a super technical user by any stretch of the imagination, but what is the point of having the T2 chip? From what I understand, the T2 chip handles some of the functionality of a regular chip, while the Intel chip does the rest. The operating system that connects them is Bridge OS, which is the source of the infamous Bridge OS errors.
Wouldn't it be simpler for Apple to build their own chips that do everything and get rid of Intel chips? And if that's not possible, why didn't they just stick to using the Intel chips without the T2, in order to avoid excessive complexity in the system?
 

I guess this is because all current macOS programs are written to run on the x86 architecture, and Apple would need to create a Rosetta-like feature for the current and near-future OSes to make sure everything keeps working. This would probably make apps perform much worse than they do now while developers transition all of their programs to the new architecture.
 
ARM is still far behind x86 for general-purpose computing. Power and temperature are also a huge issue for ARM-type chips.

The T2 is there because of how the MacBook is set up with the Touch Bar and Siri interaction. It's meant to operate independently from the main CPU.
 


Is it really that far behind? Especially for general purpose? I'd argue the complete opposite. The new iPad Pro has as much computing power as many 15" MacBook Pros (2016 Touch Bar models with the 2.9 GHz i7 and Pro 460!), whether or not you believe that benchmarks across architectures are comparable. The power is clear as day when doing real-world CPU-intensive stuff in iOS, like handling a 42-megapixel RAW image. And what's wrong with its power and temperature needs? The A12X runs cooler (with no heatsink or fan) and consumes less power than an equivalent x86 chip. Intel is where power and temperature are a huge issue, not ARM. Hell, I bet you could run two or three overclocked A12X chips simultaneously where one 45 W quad-core i7 sits today and still make less heat and consume less power than the Intel setup.

We're on the verge of a computing boom again, I think. I can't wait to see what an A-series chip can do with a proper cooling setup and a laptop-sized battery. What's difficult - and what's holding it back - is software. Just as with the move from PowerPC to x86 back in the day, it's a big ask for legacy software (like Adobe's CC apps) to be rewritten for a new ARM-based desktop OS. But they did it before, and they'll do it again very soon. We're already getting full Photoshop for iPad.
 

Say what you like and believe whatever benchmarks you wish to (delusional though that may be), but Photoshop on iPad won't be anything like Photoshop on a 15" MacBook Pro performance-wise. Watching the demo was actually quite cringeworthy for me.

People on this forum have some fantasy idea of ARM - someone recently mentioned MacBooks at triple the speed!
 
Could you elaborate on that? I know benchmarks alone usually don't tell the whole story, but the performance of the new iPad Pros, for example, is pretty much universally praised by reviewers, not just in benchmarks but also in all the real-world applications they threw at them. Why do you believe this edge in performance won't carry over to a hypothetical future ARM-based macOS? Not saying you're wrong, I'm just curious what your reasoning for that conclusion is.
 
ARM is still far behind x86 for general-purpose computing.

Where do you get this from? It's like a continuation of some late-90s thing that doesn't reflect how ARM-based chips are today.

ARM is just one part of the A-series chip. The total package can do anything a mainstream Intel chip can, at much lower power consumption, without throttling badly.

We can very confidently state that in 1-2 years Apple will be selling A-series laptops, because they will outperform Intel's and AMD's best offerings.

Specialist chipsets are awesome, as Amiga fans from the late 80s to early 90s will tell you.
 
Say what you like and believe whatever benchmarks you wish to (delusional though that may be), but Photoshop on iPad won't be anything like Photoshop on a 15" MacBook Pro performance-wise. Watching the demo was actually quite cringeworthy for me.

People on this forum have some fantasy idea of ARM - someone recently mentioned MacBooks at triple the speed!
I think you should stop burying your head in the sand. It's quite obvious that Apple's own ARM chips are fast and sip very little power. It's only a matter of time until Apple switches.
Could you elaborate on that? I know benchmarks alone usually don't tell the whole story, but the performance of the new iPad Pros, for example, is pretty much universally praised by reviewers, not just in benchmarks but also in all the real-world applications they threw at them. Why do you believe this edge in performance won't carry over to a hypothetical future ARM-based macOS? Not saying you're wrong, I'm just curious what your reasoning for that conclusion is.
It's not just benchmarks - a colleague who recently got a top-of-the-line 12.9" 1TB model says that Adobe Lightroom on his iPad far outperforms his top-of-the-line 2017 15" MacBook Pro. I've got the 2017 iPad Pro, which is no slouch - but I'm tempted to upgrade.
 
While I would like to see some in-depth performance analysis, I think it's safe to say that Apple's ARM chips are catching up with the x86 world. There is this misconception of ARM chips as being fundamentally slow and only suitable for low-power, low-performance applications. I believe it's part of the "RISC vs. CISC" myth, which still somehow stubbornly lingers on...

There is very little principal difference between modern x86 chips and ARM chips as Apple uses them. Both are super-scalar architectures with advanced branch prediction and high-performance SIMD units. Intel doesn't disclose the transistor counts of their CPUs, but the A12X is an absolute monster with over 10 billion transistors - almost twice as many as an 18-core Haswell Xeon from 2014 (even though most of those transistors are probably cache). Not to mention that it's among the first 7 nm CPUs to ship in a consumer product.

If the x86 world continues to stagnate, Apple will have every reason to move to their own high-powered ARM chips on the desktop. Software compatibility is less of a problem than many posters here claim (although it's still a significant one). The transition from x86-64 to 64-bit ARM is very different from the earlier transition from PowerPC to x86. The main difference is that x86-64 and ARM are datatype-compatible: their basic types have identical sizes, layouts, and alignment requirements. And x86 instructions can be compiled to ARM instructions. The main difficulty is translating SIMD code, since those instructions are quite different. But it's not impossible. I can even imagine Apple leveraging future iterations of the T2 chip for this, running an LLVM-derived x86-to-ARM compiler securely...
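
To make the datatype-compatibility point concrete, here's a tiny illustrative C sketch (my own, not anything from Apple): it compiles cleanly on both x86-64 and arm64 macOS because the two LP64 ABIs agree on the sizes and alignments of the fundamental types (long double being the one well-known exception).

[CODE]
/* abi_check.c - sanity check for the datatype-compatibility claim.
   Build with `clang -c abi_check.c` on an Intel Mac (x86-64) and on an
   arm64 device: every assertion holds on both, because the two LP64
   ABIs agree on basic type sizes and alignments. */
#include <stddef.h>
#include <stdint.h>

_Static_assert(sizeof(int)      == 4, "int is 4 bytes on both ABIs");
_Static_assert(sizeof(long)     == 8, "long is 8 bytes on both (LP64)");
_Static_assert(sizeof(void *)   == 8, "pointers are 8 bytes on both");
_Static_assert(sizeof(double)   == 8, "double is 8 bytes on both");
_Static_assert(_Alignof(double) == 8, "double alignment matches");

/* Structs get identical layout too: 4-byte id, 4 bytes of padding,
   then an 8-byte payload, on both architectures. */
struct Packet { uint32_t id; uint64_t payload; };
_Static_assert(sizeof(struct Packet) == 16, "identical struct layout");
[/CODE]

That agreement is what makes mechanical x86-to-ARM translation plausible: data structures in memory don't need to be converted, only the instructions that operate on them (with SIMD being the hard part, as noted above).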
 
Wouldn't it be simpler for Apple to build their own chips that do everything and get rid of Intel chips?
Yes, it would be simpler, but then what happens to all the developers and consumers who use the Mac and would have to change platforms? Apple would lose a segment of customers, and it's my belief that segment would be major, i.e., dropping Intel would result in consumers dropping Macs and buying a Windows PC. I'm certainly bringing my personal opinion and perspective into this: if Apple were to drop Intel and migrate to their own chipset, I would never buy another Mac.

why didn't they just stick to using the Intel chips without the T2, in order to avoid excessive complexity in the system?
I really wish they hadn't added the T1/T2. Yes, the prospect sounded cool on paper, and remember that the initial purpose of this proprietary chipset was to drive the Touch Bar. Yet we see Apple improving the T1 into the T2 and rolling it out on the iMac Pro, Mac mini, and MacBook Air, but so far, with three MBP generations under their belt, they've done nothing to extend or improve the Touch Bar.

The T2 now seems to be focused on security. Is that level of security actually needed? For most consumers, I'd say no, and the downside is that it introduces a complex interplay of two distinct CPUs (Intel and ARM) and two operating systems (macOS and bridgeOS), and so far that hasn't resulted in stability, i.e., too many crashes. I'm off on a rant, but I don't see the reason for the T2 chip, and its downsides so far have outweighed any possible upsides.
 
T2 can also be used for HEVC encoding. It was in the press release and briefly mentioned in the keynote. There are no signs of any current software supporting it yet, but I'm sure Final Cut Pro and Compressor will shortly. If the A12X's encoding performance can be replicated in the T2 (Laptop Mag measured its encoding at roughly 4x faster than the latest 13" MBP), and hopefully with good quality as well, it will be an important differentiator for Macs.
 
I don't think it will be an important differentiator. Just my opinion, but I see it more as Apple trying to make their Macs proprietary and less compatible, and to have more control.
Well, 4x faster video encoding would be beneficial for many.
 
Microsoft already has a version of Windows 10 that runs on ARM. It does x86 emulation to run regular programs. I've seen some reviews calling it slow, but those are just early systems. Apple is probably already working on an ARM-compatible version of macOS.

Here is the Microsoft site dealing with making programs for Windows on ARM:
https://docs.microsoft.com/en-us/windows/arm/

ARM is probably here to stay in the desktop world and it will at least create more competition and motivate Intel to step up their game.
 
I expect Apple has had macOS (back when it was OS X, even) running on ARM pretty much as long as iOS has existed, given how much of the underpinning code for each OS is shared. Certainly since the A5/A6 days, when Apple began really getting into designing their own chips, I think they've seen it as a possible future path for macOS. Personally it doesn't hugely affect me one way or the other; I can do my work on macOS, Windows, even iOS, though that wouldn't be quite as suitable for me as things stand. Overall I think bringing their two platforms closer together architecturally makes sense as they develop things like Marzipan. Intel seems to have stalled, and x86 has largely stalled with it, while Apple's efforts are still going great guns.
 
Yes, it would be simpler, but then what happens to all the developers and consumers who use the Mac and would have to change platforms? Apple would lose a segment of customers, and it's my belief that segment would be major, i.e., dropping Intel would result in consumers dropping Macs and buying a Windows PC.

And you say that based on what? If Apple rolls out an ARM Mac, you'd be able to run a modern macOS app on it without even noticing that it doesn't have an Intel CPU. The only problem I see is Boot Camp. But even that is solvable if they use a translation layer via the T2.


The T2 now seems to be focused on security.

Security is only one part. It seems that what Apple wants is an AI controller that will take over low-level management of the machine (this includes power management, among other things), with the goal of making all these tasks more efficient and more effective. Sure, the current iteration seems to have a bug. No reason to overdramatise it. They will solve it, and then you will have an extended warranty with free replacement, as has always been done in these cases.

I expect Apple has had macOS (back when it was OS X, even) running on ARM pretty much as long as iOS has existed, given how much of the underpinning code for each OS is shared.

Exactly this. I have little doubt that they have a bunch of ARM-based Macs running in a secret facility somewhere. Not to mention that they have been working towards CPU-target independence for years. When you look at it more closely, it's all part of the big plan: binary compatibility between data types, LLVM bitcode as an intermediate binary representation, LLVM itself.
 
T2 can also be used for HEVC encoding. It was in the press release and briefly mentioned in the keynote. There are no signs of any current software supporting it yet, but I'm sure Final Cut Pro and Compressor will shortly. If the A12X's encoding performance can be replicated in the T2 (Laptop Mag measured its encoding at roughly 4x faster than the latest 13" MBP), and hopefully with good quality as well, it will be an important differentiator for Macs.
Actually, regarding the bolded part: back in July, professional video editor Austin Mann wrote in his 2018 MacBook Pro review that the maxed-out 2018 MacBook Pro encoded a video file into HEVC using QuickTime more than four times as fast as the maxed-out 2016 MacBook Pro (24 seconds vs. 99 seconds) - roughly a 4x speedup. He attributes this to the T2 chip most likely assisting the CPU/GPU during the encoding, which sounds like the most plausible explanation to me.

Keep in mind that both the mobile Skylake and the Coffee Lake chips already have Intel's Quick Sync HEVC functionality, so while the jump in CPU performance might have contributed somewhat to that speed increase, it alone shouldn't make the process anywhere near four times as fast (the maxed-out mobile Coffee Lake chips are somewhere around 50% faster than the Skylake ones, thanks to the two additional cores). For comparison, the 2013 MacBook Pro (which had neither HEVC hardware encoding built into its CPU nor a T2 chip) comes in at 33 minutes for the same file in Austin Mann's review.

Based on this, my guess would be that at the very least Apple's own software, such as QuickTime, FCPX, Compressor and iMovie, can already take advantage of the T2 chip's hardware encoding.
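
For what it's worth, apps don't talk to the T2's encoder directly - they ask VideoToolbox for hardware-accelerated HEVC and the system decides where the work runs. Here's a minimal sketch of that request (my own illustration using the public VideoToolbox API; makeHEVCSession and onEncodedFrame are hypothetical names, and real code would also set bitrate/profile properties and feed in frames):

[CODE]
// Minimal sketch: request hardware-accelerated HEVC encoding via VideoToolbox.
// Whether the work lands on the T2's encoder, Intel Quick Sync, or a software
// fallback is the system's decision - the app only states a preference.
#include <VideoToolbox/VideoToolbox.h>

static void onEncodedFrame(void *refcon, void *frameRefcon, OSStatus status,
                           VTEncodeInfoFlags flags, CMSampleBufferRef sample) {
    // Compressed HEVC sample buffers arrive here; write them to the output.
}

OSStatus makeHEVCSession(int32_t width, int32_t height,
                         VTCompressionSessionRef *outSession) {
    // Ask for (but don't strictly require) a hardware encoder.
    const void *keys[] = {
        kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder
    };
    const void *values[] = { kCFBooleanTrue };
    CFDictionaryRef spec = CFDictionaryCreate(
        kCFAllocatorDefault, keys, values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    OSStatus err = VTCompressionSessionCreate(
        kCFAllocatorDefault, width, height, kCMVideoCodecType_HEVC, spec,
        NULL,            // source image buffer attributes
        NULL,            // compressed data allocator
        onEncodedFrame, NULL, outSession);
    CFRelease(spec);
    return err;
}
[/CODE]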
 
Apple's corporate memory includes almost losing the business because of their selection of the Motorola CPU family. While it was clearly superior to Intel in addressing, Intel gained market share while Motorola languished, and Apple's computers with it. Until Apple went to Intel processors on the Mac, Macs were cool but underpowered, used primarily by Apple enthusiasts, and the Wintel platforms crushed them in the marketplace.
 
Nice .. didn't know it was already implemented. It also matches the 4x increase in speed I alluded to. Looks like the T2 has the same HEVC encode unit as the A-series chips.

I wonder if they will add even more to future T-series chips, like the Neural Engine or even the GPU. The GPU on the A12X is impressive - about the same speed as the high-end Radeon in the 15-inch MBP while using at least 5-10x less power.
 
At this point I would prefer Apple chips. Intel has been lagging and Apple has clearly caught up. Take the A12X and turn the other four efficiency cores into performance cores and I'd be pretty happy. Most of the apps I use are Apple apps, and I have no doubt those would be supported well (Logic Pro X). Along with the power/heat savings of the A-series CPUs, I honestly can't wait. Apple's CPU/GPU cadence is much better than Intel's now, too.

Of course, this is speaking strictly about notebooks. I don't know what the desktop situation would be. Maybe that's the hold-up? Great notebook CPUs/GPUs, but they may not scale up well for a desktop.
 

Maybe they can make a 32-core, 64-thread chip, like the AMD Ryzen Threadripper 2990WX, for the redesigned (2019?) Mac Pro machines.
https://www.amd.com/en/products/cpu/amd-ryzen-threadripper-2990wx
 
This depends more on the code than on the processor per se.


Right, but the fact that we're getting so close with such a small, passively cooled ARM chip shows that we could likely scale this up and be in jaw-dropping performance territory if Intel continues to stagnate. And if an architecture change is what it takes to nudge major software players into putting in some innovation effort instead of relying on legacy code (Adobe?), all the more reason to go for it.
Say what you like and believe whatever benchmarks you wish to (delusional though that may be), but Photoshop on iPad won't be anything like Photoshop on a 15" MacBook Pro performance-wise. Watching the demo was actually quite cringeworthy for me.

People on this forum have some fantasy idea of ARM - someone recently mentioned MacBooks at triple the speed!


I don't know if it's a fantasy - we don't want to just plonk an A12X in place of an i7 and call it a day. What if we had an Apple ARM chip developed for the TDP of a MacBook Pro enclosure? A Mac Pro enclosure? We could up the core count and the clock speed immensely to fit, for example, the 45 W TDP of a 15" MacBook Pro, or go through the roof with a Xeon-level TDP of 150 W in a desktop cooling solution. The iPad "X"-type chips have a TDP of around 5 watts.

All we're saying is, if Apple can do this much with a ~5 W TDP, one can only imagine what they could do with a chip designed for a 45 W application. With active cooling and more input power, I don't see why it wouldn't be an absolute screamer of a chip. Add Apple's software/hardware integration and their experience transitioning major computing platforms between architectures, and I think you've got a winning strategy for the next five years.
 
Yes, it would be simpler, but then what happens to all the developers and consumers who use the Mac and would have to change platforms? Apple would lose a segment of customers, and it's my belief that segment would be major, i.e., dropping Intel would result in consumers dropping Macs and buying a Windows PC. I'm certainly bringing my personal opinion and perspective into this: if Apple were to drop Intel and migrate to their own chipset, I would never buy another Mac.

I really wish they hadn't added the T1/T2. Yes, the prospect sounded cool on paper, and remember that the initial purpose of this proprietary chipset was to drive the Touch Bar. Yet we see Apple improving the T1 into the T2 and rolling it out on the iMac Pro, Mac mini, and MacBook Air, but so far, with three MBP generations under their belt, they've done nothing to extend or improve the Touch Bar.

The T2 now seems to be focused on security. Is that level of security actually needed? For most consumers, I'd say no, and the downside is that it introduces a complex interplay of two distinct CPUs (Intel and ARM) and two operating systems (macOS and bridgeOS), and so far that hasn't resulted in stability, i.e., too many crashes. I'm off on a rant, but I don't see the reason for the T2 chip, and its downsides so far have outweighed any possible upsides.


Why stick with Intel? What's the rationale for that?

It wouldn't surprise me if the T2 and Bridge OS turn out to be the heroes/villains in a combined ARM/Intel deployment, primed by ARM, before dropping Intel and moving to emulation.

I might of course have no idea what I'm talking about.
Apple's corporate memory includes almost losing the business because of their selection of the Motorola CPU family. While it was clearly superior to Intel in addressing, Intel gained market share while Motorola languished, and Apple's computers with it. Until Apple went to Intel processors on the Mac, Macs were cool but underpowered, used primarily by Apple enthusiasts, and the Wintel platforms crushed them in the marketplace.


Empires fall.

Look at IBM. Digital. Compaq. Motorola. Palm. BlackBerry. Microsoft to some degree. And Apple, of course.

There's no reason Intel cannot also join that list. The x86 architecture would survive Intel's demise through AMD, and a competing product would have to be spectacular, but the A-series has been pretty spectacular.
 