Hands up how many of you understood this? Seems great, but this is getting a little past my level of expertise. Headline: "TSMC Makes Better Chips for Apple." Good for all of us!
> This is a crucial point, often lost when some talk about Apple replacing Intel processors next year with A-series.

You can't just throw more cores at it, because you can only get so much parallelism out of the code being executed. This is why Apple's processor improvements are slowing down.
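To put rough numbers on that diminishing-returns point, here is a minimal Python sketch of Amdahl's law; the 90% parallel fraction is just an assumed (and optimistic) figure, not anything measured:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of
# the workload, no matter how many cores you add.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assume 90% of the workload parallelizes perfectly.
for cores in (2, 8, 64, 1024):
    print(f"{cores:4d} cores -> {amdahl_speedup(0.9, cores):.1f}x speedup")
# Prints roughly 1.8x, 4.7x, 8.8x, 9.9x: the curve flattens toward
# 10x because the serial 10% dominates.
```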
<snip>
> Why should it? My iPhone X with iOS 11.3 is smooth as butter.

TSMC's chip technology certainly is impressive, but odds are pretty much 100% that the new iPhone XI with the new A12 TSMC chip running iOS 12 will lag & stutter anyway. Sigh.
> Yeah, and there is no actual roadmap illustration either.

Oh god, that artwork looks like it's from 1994.
Why should it? My iPhone X with iOS 11.3 is smooth as butter.
Apple never uses state-of-the-art memory inside any of its devices. They just wait until memory prices fall and then buy the cheapest memory chipset and speed. I think LPDDR3 and the lowest-speed LPDDR4 are neck and neck on pricing. Bandwidth is the primary difference between the two memory types for the most part. More lanes, the faster the overall system!
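To make the "more lanes" point concrete, here is a quick Python sketch of peak-bandwidth arithmetic. The bus widths and transfer rates below are illustrative ballpark figures for LPDDR3/LPDDR4-class parts, not the spec of any particular device:

```python
# Peak memory bandwidth = bus width (in bytes) x transfer rate.
def peak_bandwidth_gb_s(bus_width_bits: int, megatransfers_per_s: int) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * megatransfers_per_s * 1e6 / 1e9

print(peak_bandwidth_gb_s(64, 1600))   # 64-bit LPDDR3-class:    12.8 GB/s
print(peak_bandwidth_gb_s(64, 3200))   # 64-bit LPDDR4-class:    25.6 GB/s
print(peak_bandwidth_gb_s(128, 3200))  # 128-bit bus doubles it: 51.2 GB/s
```

Doubling the bus width buys as much as a generational jump in transfer rate, which is why the 128-bit iPad interface mentioned further down matters.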
> Yeah, and there is no actual roadmap illustration either.

Argh. I would have liked to see something like this:
BTW, this is from ARM, and the roadmap is from 2014. It's impressive that they are actually almost on time... because of TSMC.
No roadmaps go beyond 3nm. The trouble, aside from increasing manufacturing challenges, is that quantum effects start to dominate and the transistors don't behave like they did at larger geometries. Modern transistor models have hundreds of device parameters that attempt to track all the relevant physical effects on their performance at modern nodes.
The width of a silicon atom is about 0.2 nanometers, so we are already talking about transistors with features tens of atoms across. There is a real physical limit, even if the quantum effects weren't in play. This is why there are efforts to find a replacement for silicon that allows circuits to switch faster. There are already materials out there, but they cannot be manufactured at the same scale and density as current CMOS processes.
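As a back-of-envelope illustration of that limit (treating the node name as if it were a literal feature size, which marketing node names no longer strictly are):

```python
# Rough count of silicon atoms across a 3 nm feature.
feature_nm = 3.0
si_atom_nm = 0.2   # approximate width of a silicon atom
print(feature_nm / si_atom_nm)  # -> 15.0, i.e. ~15 atoms across
```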
3DIC techniques tackle it from the energy-per-bit angle: stacking puts interconnects closer together, which makes them easier to drive between interfaces and thus able to run faster. Thermal management techniques will also help, because heat degrades transistor performance, and thus speed.
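Here is a rough Python sketch of that energy-per-bit argument; the capacitance and voltage values are assumed purely for illustration, not measured from any real process:

```python
# Switching energy for a full-swing transition scales as E = C * V^2,
# so a shorter, lower-capacitance 3D interconnect costs less per bit
# and presents a lighter load that is also faster to drive.
def switching_energy_fj(capacitance_ff: float, voltage_v: float) -> float:
    """Energy in femtojoules to charge a wire of given capacitance (fF)."""
    return capacitance_ff * voltage_v ** 2

long_2d_wire_ff = 200.0  # assumed long planar interconnect
short_3d_via_ff = 20.0   # assumed short stacked (TSV-style) link
for c in (long_2d_wire_ff, short_3d_via_ff):
    print(f"{c:6.1f} fF -> {switching_energy_fj(c, 0.8):.1f} fJ/bit")
# 128.0 fJ/bit vs 12.8 fJ/bit: a 10x drop in capacitance is a 10x
# drop in energy per bit.
```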
This is a crucial point, often lost when some talk about Apple replacing Intel processors next year with A-series.
Yes, Apple has a world-class silicon design team, but it isn't possible to get the performance of Intel's 90-140W CPUs just by scaling current A-series processors to 64 or more cores. Certain workloads could utilize that type of multicore processor very efficiently, of course, but performance is often limited by how much work can be done when executing a single thread.
I’d be interested to hear your take on a timeline for Apple to transition to their own CPU.
The most interesting part of Apple transitioning to a custom CPU for macOS products would likely be the extensions they make to the ISA, rather than just the architecture. They could add a lot of heavy-lifting vector instructions and other desktop-class features and really balloon the TDP, but they'd also likely need to add a large number of their own instructions to adapt it to the desktop space. There is also the question of whether they include a GPU in the design, design their own discrete GPU, or go to AMD or Nvidia for a more traditional route. They recently ceded the mobile GPU performance crown to Qualcomm, so it will be interesting to see how their custom GPUs develop.
I have no questions about their technical capability. They beat the market to 64-bit by a wide margin, have churned out multiple custom designs in the same year across multiple product lines, and have validated a new design on two new foundry processes at once. They simply choose and execute.
Apple has typically adopted the latest mobile memory standard within a year of it appearing in competing products. They equipped an iPad with a 128-bit off-package memory interface to meet their high bandwidth requirements, which you don't see from other vendors. They were also the first to really raise the bar for NAND performance in mobile devices. I would say Apple has a history of leadership in memory adoption in the mobile space.
I've long been under the impression that reliability starts becoming an increasingly big problem at sizes much smaller than this. Unless something has changed, I suspect the "true 3D ICs" part of the article may be the bigger deal.
However, I'm no expert on this topic, and did not stay at a Holiday Inn Express last night either.
Perhaps someone much more knowledgeable, who is up to date on where Moore's Law and this limit are currently believed to max out (or is that min out?), can chime in here?
> Hands up how many of you understood this? Seems great, but this is getting a little past my level of expertise. Headline: "TSMC Makes Better Chips for Apple." Good for all of us!

I'm getting a t-shirt made with "I read the whole article but all I got was a headache" printed on it. Maybe with a QR code as a link so others can join the party.
You need to buck up and decide not to respond to troll posts. The best thing to do with a troll is nothing.
Deep cynicism supported by frequenting MR.