Hands up, how many of you understood this? Seems great, but this is getting a little past my level of expertise. Headline: "TSMC Makes Better Chips for Apple." Good for all of us!
 
You can’t just throw more cores at it because you can only get so much parallelism in the code that is executed. This is why Apple’s processor improvements are slowing down.
<snip>
This is a crucial point, often lost when some talk about Apple replacing Intel processors next year with A-series.

Yes, Apple has a world class silicon design team, but it isn’t possible to get the performance of Intel 90-140W CPUs just by scaling current A-series processors to 64 or more cores. Certain workloads could utilize that type of multicore processor very efficiently of course, but many times performance is limited by how much work can be done when executing a single thread.

I’d be interested to hear your take on a timeline for Apple to transition to their own CPU.
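For anyone who wants a number on the "only so much parallelism" point, here's a minimal sketch of Amdahl's law in Swift; the 90% parallel fraction and the core counts are just illustrative assumptions, not measurements of any real workload:

```swift
import Foundation

// Amdahl's law: the best-case speedup from `cores` cores when only a
// fraction `parallel` of the work can actually run in parallel.
func amdahlSpeedup(parallelFraction parallel: Double, cores: Double) -> Double {
    1.0 / ((1.0 - parallel) + parallel / cores)
}

// Even for a generously parallel (90%) workload, adding cores flattens out fast.
for cores in [2.0, 8.0, 16.0, 64.0] {
    let speedup = amdahlSpeedup(parallelFraction: 0.9, cores: cores)
    print("\(Int(cores)) cores -> \(String(format: "%.1f", speedup))x")
}
// Prints roughly: 2 cores -> 1.8x, 8 -> 4.7x, 16 -> 6.4x, 64 -> 8.8x.
// A hypothetical 64-core A-series chip would still be limited by the serial 10%.
```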
 
Apple never uses state-of-the-art memory inside any of its devices. They just wait until memory prices fall and then buy the cheapest memory chipset and speed. I think LPDDR3 and the lowest-speed LPDDR4 are neck and neck on pricing. Bandwidth is the primary difference between the two memory types for the most part. The more lanes, the faster the overall system!
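On the "more lanes" point, peak bandwidth is just bus width times transfer rate, which a quick sketch makes concrete; the interface widths and data rates below are generic illustrative figures, not the configuration of any particular Apple device:

```swift
// Theoretical peak bandwidth in GB/s: (bus width in bits / 8) bytes per transfer,
// times millions of transfers per second, divided down to gigabytes.
func peakBandwidthGBps(busWidthBits: Double, megaTransfersPerSec: Double) -> Double {
    (busWidthBits / 8.0) * megaTransfersPerSec / 1000.0
}

// Illustrative comparison: 64-bit LPDDR3-1600 vs. 64-bit LPDDR4-3200.
let lpddr3 = peakBandwidthGBps(busWidthBits: 64, megaTransfersPerSec: 1600) // 12.8 GB/s
let lpddr4 = peakBandwidthGBps(busWidthBits: 64, megaTransfersPerSec: 3200) // 25.6 GB/s
print("LPDDR3-1600 x64: \(lpddr3) GB/s, LPDDR4-3200 x64: \(lpddr4) GB/s")
// Widening the bus ("more lanes") scales these numbers again, independent of the standard.
```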
 
It's been remarkable to see how silicon has shrunk all the way from 1 micron (1,000 nm) to where we are today. It's what has made our current mobile computing world possible. However, as silicon continues to shrink, engineers will hit a limit: atom size. We probably have another decade before engineers will have to think differently and come up with a groundbreaking solution, just as they worked around the GHz limit imposed by heat, gate delays, and the speed of electrical transmission by adding more cores, threads, and levels of cache and moving to 64-bit.
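A quick back-of-the-envelope count shows how close the atom-size limit is; it assumes roughly 0.2 nm per silicon atom (the figure a later post in this thread uses), which is only a rough scale, not a precise lattice constant:

```swift
// Roughly how many silicon atoms span a feature of a given size,
// assuming ~0.2 nm per atom (order-of-magnitude only).
let atomWidthNm = 0.2

for featureNm in [1000.0, 90.0, 14.0, 7.0, 3.0] {
    let atoms = Int(featureNm / atomWidthNm)
    print("\(featureNm) nm feature ≈ \(atoms) atoms across")
}
// 1 micron was ~5,000 atoms across; a 3 nm feature is only ~15,
// which is why "just shrink it again" eventually stops working.
```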
 
Oh god, that artwork looks like it's from 1994.
Yeah, and there is no actual roadmap illustration either. :p Argh. I would have liked to see something like this:

[Image: ARM process technology roadmap from 2014]


BTW, this is from ARM, and the roadmap is from 2014. It's impressive that they are actually almost on time... because of TSMC.
 
Why should it? My iPhone X with iOS 11.3 is smooth as butter.

You need to buck up and decide not to respond to troll posts. I've been given a number of timeouts here for responding to stuff like that. It's not worth it. The best thing to do with a troll is nothing.
 
Apple never uses state-of-the-art memory inside any of its devices. They just wait until memory prices fall and then buy the cheapest memory chipset and speed. I think LPDDR3 and the lowest-speed LPDDR4 are neck and neck on pricing. Bandwidth is the primary difference between the two memory types for the most part. The more lanes, the faster the overall system!

LPDDR3 wasn't really the cheapest when Apple started using it. It's superior to regular DDR3 and DDR3L, and much more efficient on standby than DDR4 @ 2133 MHz, which is why Apple has continued to use it in their notebooks. As soon as LPDDR4 is viable, we'll see it introduced into the MacBook Pro. They use DDR4 ECC @ 2666 MHz in the iMac Pro and DDR4 @ 2400 MHz in the latest 4K and 5K iMacs.

The reason the MacBook Pros don't have any sort of DDR4 is not because Apple is being cheap. Apple would've loved to offer 32 GB on the MacBook Pros; they would make a killing on the price since most people would pay for it. They could easily have charged $400 for it on the 13" base versions ($200 for 16 GB even on the $1799 model) and perhaps another $200 on the 15" base models, since those have already come with 16 GB standard since 2014. Of course, they could be greedy and still charge $400, but that would be outrageous.
Yeah, and there is no actual roadmap illustration either. :p Argh. I would have liked to see something like this:

BTW, this is from ARM, and the roadmap is from 2014. It's impressive that they are actually almost on time... because of TSMC.

ARM is clearly the future. It's only a matter of time before they leave Intel in the dust. Don't get me wrong, Intel will remain competitive and probably the gold standard on the desktop, but the pace at which ARM's chips have evolved has been extraordinary. The A11 is already nearly on par with the dual-core i7 U-series chip in the 13" MacBook Pro. If I were Intel I would be extremely worried! Licensing designs will always be cheaper than buying finished chips as well, so we get great performance at a much lower price.
No roadmaps go beyond 3nm. The trouble, aside from increasing manufacturing challenges, is that quantum effects start to dominate and the transistors don't behave like they used to at larger geometries. Modern transistor models have hundreds of device parameters that attempt to track all the relevant physical effects that influence their performance on modern nodes.

The width of a silicon atom is 0.2 nanometers, so we are talking transistors with features in the tens of atoms already. There is a real physical limit, even if the quantum effects weren’t in play. This is why there are efforts to find a replacement for silicon that allows circuits to switch faster. There are already materials out there, but they cannot be manufactured on the same scale and density as the current CMOS processes.

3DIC techniques tackle it from the energy per bit angle by making the interconnects closer together, making them easier to drive between interfaces, and thus, able to drive faster. Thermal management techniques will also help because heat has a negative influence on transistor performance, and thus, speed.

Your explanation could not be any clearer. Well done!
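To put one rough number on the energy-per-bit point in the quoted explanation: dynamic switching energy goes roughly as C·V², and wire capacitance grows with wire length, so the shorter interconnects in a 3D stack directly cut the energy needed to move a bit. The capacitance-per-millimeter and supply voltage below are generic assumed values, not any specific process:

```swift
// Dynamic switching energy to drive a wire once: E ≈ C * V^2.
// Wire capacitance scales roughly linearly with length, so shorter
// 3DIC interconnects mean proportionally less energy per bit.
func switchingEnergyPicojoules(lengthMm: Double,
                               capPerMmFemtofarads: Double,
                               supplyVolts: Double) -> Double {
    let capacitanceFarads = lengthMm * capPerMmFemtofarads * 1e-15
    return capacitanceFarads * supplyVolts * supplyVolts * 1e12 // joules -> picojoules
}

// Assumed values: ~200 fF/mm of wire capacitance, 0.8 V supply.
let longLink  = switchingEnergyPicojoules(lengthMm: 10.0, capPerMmFemtofarads: 200, supplyVolts: 0.8)
let shortLink = switchingEnergyPicojoules(lengthMm: 0.5, capPerMmFemtofarads: 200, supplyVolts: 0.8)
print("10 mm link: \(longLink) pJ/bit vs 0.5 mm stacked link: \(shortLink) pJ/bit")
```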
 
I almost NEVER use anywhere near the power of the A chips, but I always get so excited reading about the future chips!

Well, I guess maybe Face ID uses quite a bit?
 
This is a crucial point, often lost when some talk about Apple replacing Intel processors next year with A-series.

Yes, Apple has a world class silicon design team, but it isn’t possible to get the performance of Intel 90-140W CPUs just by scaling current A-series processors to 64 or more cores. Certain workloads could utilize that type of multicore processor very efficiently of course, but many times performance is limited by how much work can be done when executing a single thread.

I’d be interested to hear your take on a timeline for Apple to transition to their own CPU.

The most interesting part about Apple transitioning to a custom CPU for macOS products would likely be the extensions they make to the ISA rather than just the architecture. They could add a lot of heavy-lifting vector instructions and other desktop-class features to really balloon the TDP, but they'd also likely need to add a large number of their own instructions to adapt it to the desktop space. There would also be the question of whether they include a GPU in the design, design their own discrete GPU, or go to AMD or Nvidia for a more traditional route. They recently ceded the mobile GPU performance crown to Qualcomm, so it will be interesting to see how their custom GPUs develop.

I have no questions about their technical capability. They beat the market to 64 bit by a wide margin, have churned out multiple custom designs in the same year across multiple product lines, and have validated a new design to two new foundry processes at once. They simply choose and execute.
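As a toy illustration of what "heavy-lifting vector instructions" do, here's a sketch using Swift's standard-library SIMD types; these are generic vector wrappers the compiler maps onto whatever vector hardware is available, not any specific Apple ISA extension:

```swift
// One SIMD operation works on several lanes at once; wider vector units
// (the kind of desktop-class ISA extension discussed above) just add lanes.
let a = SIMD8<Float>(1, 2, 3, 4, 5, 6, 7, 8)
let b = SIMD8<Float>(8, 7, 6, 5, 4, 3, 2, 1)

let sum = a + b           // eight additions expressed as a single operation
let scaled = 2.0 * sum    // eight multiplications, likewise
print(scaled)             // SIMD8<Float>(18.0, 18.0, ..., 18.0)
```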

Apple never uses state-of-the-art memory inside any of its devices. They just wait until memory prices fall and then buy the cheapest memory chipset and speed. I think LPDDR3 and the lowest-speed LPDDR4 are neck and neck on pricing. Bandwidth is the primary difference between the two memory types for the most part. The more lanes, the faster the overall system!

Apple has typically adopted the latest mobile memory standard within a year of it appearing on the market in competing products. They equipped an iPad with 128-bit off-package memory to meet their high bandwidth requirements, which you don't see with other vendors. They were also the first to really raise the bar for NAND performance in mobile devices. I would say Apple has a history of leadership in memory adoption in the mobile space.
 
If you have a mic nearby, you should drop it. Well done.

The most interesting part about Apple transitioning to a custom CPU for macOS products would likely be the extensions they make to the ISA rather than just the architecture. They could add a lot of heavy-lifting vector instructions and other desktop-class features to really balloon the TDP, but they'd also likely need to add a large number of their own instructions to adapt it to the desktop space. There would also be the question of whether they include a GPU in the design, design their own discrete GPU, or go to AMD or Nvidia for a more traditional route. They recently ceded the mobile GPU performance crown to Qualcomm, so it will be interesting to see how their custom GPUs develop.

I have no questions about their technical capability. They beat the market to 64 bit by a wide margin, have churned out multiple custom designs in the same year across multiple product lines, and have validated a new design to two new foundry processes at once. They simply choose and execute.



Apple has typically adopted the latest mobile memory standard within a year of it appearing on the market in competing products. They equipped an iPad with 128-bit off-package memory to meet their high bandwidth requirements, which you don't see with other vendors. They were also the first to really raise the bar for NAND performance in mobile devices. I would say Apple has a history of leadership in memory adoption in the mobile space.
 
I've long been under the impression that reliability starts becoming an increasingly big problem well before transistors get much smaller than this. Unless something has changed, I suspect the "true 3D ICs" part of the article may be the bigger deal.

However, I'm no expert on this topic, and did not stay at a Holiday Inn Express last night either.;)

Perhaps someone much more knowledgeable, who is up to date on where Moore's Law and this limit are currently believed to max out (or is that min out?), can chime in here?

I remember seeing a derivation of the ultimate classical computer chip purely based on quantum physics and thermodynamics. I don't remember exactly the specs, but in terms of transistor size, we're pretty close to fundamental limits. It's important to note that those fundamental limits do not consider that the chip has to actually be made, resulting in unavoidable engineering compromises.

What will have to happen instead of making things smaller, is making things faster. The 5GHz 'limit' is purely a materials and thermal limit. Changing to much more exotic materials should allow higher clock rates, lower switching power, and much greater tolerances to heat.

If we change to carbon nanotubes or graphene, with appropriate packaging a CPU could run at thousands of degrees, no problem, with sky-high clock speeds.
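For the curious, the "ultimate classical chip" derivations usually bottom out at the Landauer limit, the minimum energy to erase one bit at a given temperature: E = k·T·ln 2. A quick sketch of that number at room temperature (nothing here is chip-specific, it's just the textbook formula):

```swift
import Foundation

// Landauer limit: minimum energy to erase one bit, E = k_B * T * ln(2).
let boltzmannJPerK = 1.380_649e-23   // Boltzmann constant (exact, SI definition)
let roomTempK = 300.0

let landauerJoules = boltzmannJPerK * roomTempK * log(2.0)
print("Landauer limit at 300 K: \(landauerJoules) J per bit")   // ≈ 2.9e-21 J
// Real logic today spends many orders of magnitude more than this per switch,
// so materials and thermal limits bite long before this fundamental floor does.
```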
 
That was a verbose, highly technical, and dry MR article. It could probably have been shortened to about 1/5 its size and said essentially the same thing.
 
Hands up, how many of you understood this? Seems great, but this is getting a little past my level of expertise. Headline: "TSMC Makes Better Chips for Apple." Good for all of us!
I'm getting a t-shirt made with "I read the whole article but all I got was a headache" printed on it. Maybe a QR code as a link so others can join the party.
 
Hands up, how many of you understood this? Seems great, but this is getting a little past my level of expertise. Headline: "TSMC Makes Better Chips for Apple." Good for all of us!

I did, having been an engineer at a silicon valley startup that designed custom chips.
 
Thank you to all those who chimed in with excellent explanations, particularly about why we are closing in on the limit of silicon-based architecture.

To those who are clearly more knowledgeable than me - I have several questions (please point me to links/references if you find it easier than answering):

- what are the limits of alternatives like Gallium Arsenide?
- could changes in thermal management allow improved performance without requiring further reductions in transistor size?
- outside of using the third dimension more effectively, where does the next wave of improvements seem likely to come from?

It's totally outside my line of work; however, I find this all so fascinating...
 
Impressive technology, and undoubtedly quite a bit of trade-secret art and know-how. All the A12 eggs in one basket, on a far-off island called Taiwan, a place mentioned quite often in political circles. Tim Cook is smarter than that, isn't he?
 
Great... Fine... Glad to hear about these technological breakthroughs and manufacturing miracles, but where and when am I going to see a Mac mini upgrade?

Still waiting!
 
This might have been the first article ever on MacRumors that I felt might be worth pointing people towards for some insight.
Good job Chris!
 
I adore the sound of all of this.

But PLEASE.....................

Apple (and others), give us reasons/software that's going to use all of this power.

Actually DO SOMETHING new with the hardware that's going to push developers and users to want and use this ever increasing power.

I always seem to come back to the same idea in my mind, and that is a LARGE screen at home.
That could be a computer monitor or a TV with a dock in its base that you place your phone in, and you can then use your phone to power the LARGE screen.

For games and apps.

Naturally the phone would be being recharged at the same time.

Of course then, this MIGHT kill the iMacs for your casual users?

Let's have full desktop apps running on the phone.

I know Samsung are kinda trying it, but it's a crying shame we have these new chips and people are just going to be running email and flappyturd on them.

There NEEDS to be a reason to use this amazing stuff more than we have now
 
So in other words, the A13 or A14 may be the first processor from Apple/TSMC that doesn't get a speed gain or a mention from Tim/Phil, because the main reason for mentioning it at the iPhone announcement keynote, the performance gain, won't be an attribute of the processor.
 