5 nm is on 300 mm ("12 inch") wafers, so there is a hell of a lot of surface area available, and I've found an Anandtech report showing 2600 good die on a test wafer, with 600 failed die and some number of edge die (die at the edge of the wafer are generally discarded as a quality protection against field returns).

I would assume that TSMC would not release a die to production with less than 90-95% yield, so, being the lazy mathematics engineer that I am, I would expect 3000 good die per wafer.

https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020

This needs a major caveat: the test die that was referenced by Anandtech could be either smaller or larger than an Apple-designed die, so the good die per wafer yield is still very much up in the air.

Not really. From the Anandtech article:

"... . Using the calculator, a 300 mm wafer with a 17.92 mm2 die would produce 3252 dies per wafer. An 80% yield would mean 2602 good dies per wafer, and this corresponds to a defect rate of 1.271 per sq cm. ..."

The A12X is around a 110-120 mm2 die.

"... . Using the calculator, a 300 mm wafer with a 17.92 mm2 die would produce 3252 dies per wafer. An 80% yield would mean 2602 good dies per wafer, and this corresponds to a defect rate of 1.271 per sq cm. ..."


The A13 is around 98 mm2:

"... Total Die: 98.48 mm2 ..."


So that test chip is off by an order of magnitude from a real Apple die. We're talking hundreds of dies, not thousands. There's a high probability that Apple will take the 5nm shrink and add more to some "A14" variant for the Mac (more GPU cores, more ARM cores, a wider and more numerous display output subsystem, more I/O streams (multiple USB ports, Thunderbolt ports), etc.). The die is pretty unlikely to get smaller, because they are going to increase the transistor budget.

So

10.5 x 11.2 mm, 0.1 defect density, and a 300 mm wafer

comes out to 540 dies. A vastly different number. Apple's mobile processors are fast in part because they are relatively big. (The "relatively" qualifier at this point being that there is no cellular radio subsystem present, versus other players in the phone space.)
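
For anyone who wants to sanity-check these numbers, here's a rough sketch in Python (the gross-die approximation and Poisson yield model are common back-of-envelope choices; the die sizes and defect densities are just the assumptions discussed above, not TSMC-published figures):

```python
import math

# Rough dies-per-wafer sketch. Die sizes and defect densities (D0) are
# the assumptions from this thread, not TSMC-published data.

def gross_dies(wafer_d_mm: float, die_w_mm: float, die_h_mm: float) -> int:
    """Approximate gross die on a round wafer, discounting edge loss."""
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of good die under the simple Poisson defect model."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

cases = [
    # (label, die width mm, die height mm, defects per sq cm)
    ("17.92 mm2 test die", 4.0, 4.48, 1.271),           # Anandtech's D0
    ("10.5 x 11.2 mm A14X-class die", 10.5, 11.2, 0.1), # assumed mature D0
]
for name, w, h, d0 in cases:
    g = gross_dies(300, w, h)
    y = poisson_yield(d0, w * h)
    print(f"{name}: ~{g} gross, ~{g * y:.0f} good ({y:.0%} yield)")
```

The test-die case reproduces Anandtech's ~80% yield, and the larger die comes out around 540 gross, which lines up with the calculator number above.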
 
Yup, how silly of me to want to see Apple put their best foot forward and maybe push computers out of the lull they have been in for the last decade. How silly of me and all those people buying Mac Pros, AMD Threadrippers, and 8K-capable R5 cameras. How could anyone want to do local development as stipulated by their company? Or, I dunno, how most development is done?

Hmmm...not sure how much computing power you really need for local development unless you need to run large databases or application servers on your laptop/desktop. A 4 or 6 core Intel i7 with 16GB RAM is sufficient for most development tasks that I've seen.

Even if your company doesn't use public cloud yet (which is definitely the future for most businesses), surely you have network compute or storage as part of your build & deployment pipeline? How else do you integrate your work?

A few years ago, I would have agreed with your position. I bought a used Dell Xeon workstation with 64GB RAM and stuffed it full of disks, because I couldn't give enough cores & memory to my VMs, but this is largely unused for work now (it's good for photo & video editing). Pretty much all my compute, database, storage and CI/CD workflow is running on cloud services. But then again, this is my business, so most of my clients are doing the same. What I can say is that there's a booming market in migrating customer data centers and infrastructure to the public cloud, and this trend shows no signs of abating.
 
50+ Docker images on your local dev machine is "very common"? Come on.


It doesn't sound very "monolithic" to me. It sounds like an architectural nightmare that only makes sense for massive teams, in which case… why on earth aren't you sharing a build host somewhere?

I asked him the same question...unless you are a one-man band, you are generally integrating your work on a build server that is running your delivery pipeline. This certainly needs more grunt than your laptop, depending on the size of the team and complexity of the build.

In my experience, it is unusual to have every single developer recreating the entire stack on their local machine, and even if they do, they are typically using downscaled instances or mocks.

Just about every developer I come across is using a laptop with 16-32GB RAM and a 4/6/8 core Intel CPU. A few have desktops or workstations. I see a tendency to use fewer large application servers and RDBMS - a lot more lightweight frameworks like NodeJS, Python Flask and NoSQL DBs.

The elephant in the room is cost. Sure, many developers would love to have a 32-core beast machine to work on (not portable though!), but their employers won't be buying these if they can get away with a 4-core laptop...

If Apple can produce a 16-32 core developer machine at the same cost that is clearly faster than Intel, then there is a case for having more power "just because you can".

I'm not knocking progress...and I'm interested to see what Apple comes up with...but the realities are that developers don't generally need huge computing power, and no-one is going to want to pay for unnecessary power. This is why I spend so much time optimizing customer cloud instances to get the best value for them - I make them as small as possible with the flexibility to scale up or out.
 
Do you think this will be "Blazing Fast" as some outlets/people are suggesting? Or do you think the harsh reality will be that it's REALLY fast at the few specific apps and tasks which the chip has been tweaked to perform well at, but when it comes to being put against a general CPU and confronted by general apps, it's not going to look anything special at all? And rather than admit it's just ok-ish, will the apps simply get the blame for not being optimised for the CPU?

Personally, if we are going to put it up against CPUs from other brands, it needs to be able to handle anything that's thrown at it, not just a limited number of specially optimised apps.

Thoughts?

I think your hypothesis may be right. I expect Apple to show significant performance increases in a few key applications or tasks, but be fairly average in others (maybe a modest increase of up to 20-30%). Apps running under Rosetta will be labelled as such to ensure that users know they're "not optimized / not ARM-native" - assuming performance is lacklustre.

General performance for native ARM apps will need to match or exceed existing Intel versions on similarly-priced hardware, though.
 
The 18-core iMac Pro and 24/28-core Mac Pro are not for Apple customers? The Afterburner, Final Cut, and Logic are not for Apple customers?
This round of Apple Silicon SOCs are not intended for the replacements for the iMac Pro or the Mac Pro. They will likely be the last ones moved to AS at the end of the 2 year migration. The chips then will likely be significantly upgraded.

In the meantime, lower-end models like the MacBook and maybe the low-end iMac and mini would be well suited to a version of the A14X expected in the next iPad Pros. The current iPad Pro chip, the A12Z, is as fast as a MacBook Pro.

The other consumer Macs will likely get either another version of the A14X (or A15X, depending on timing) with more cores and/or faster clocks.
 
As an AWS user, I'd say those costs add up quickly. So, easy yes, cost effective, I'd be skeptical. It's also introduced a new kind of anxiety of forgetting to ramp down hardware when unused. YMMV.

Besides, if Apple Silicon is so great - it should have compute to spare, no?

Yes, cost is the big "gotcha!" on cloud platforms :) I can't be the only person who has accidentally spun up an expensive stack at the push of a button and not realized the cost until it's too late. Or forgotten to shut down instances that were left to run for months... Curious how they make it really easy to create the resources, but quite hard to set up decent reporting on exactly how much it is costing you. Hence the spread of 3rd-party apps for cloud management and cost control.

Cloud platforms are probably the most cost-effective when running short-lived infrastructure. If you only need the 32 vCPU licenced database for a few weeks, and can turn it off after working hours, then Cloud is a great solution. If you have some long-running task (3D renders, bitcoin mining, batch compute etc.) that you will repeat over months or years, then there is a cut-over point where it makes more financial sense to buy your own hardware, and then enjoy having a powerful machine for other day-to-day tasks.
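
As a toy illustration of that cut-over point, here's a minimal sketch (every price below is a made-up placeholder, not an actual cloud rate; plug in your own quotes):

```python
# Rough cloud-vs-buy break-even sketch. All figures are hypothetical
# placeholders, not real AWS/Azure/GCP prices.
CLOUD_PER_HOUR = 1.50      # assumed on-demand rate for a 32 vCPU instance
HOURS_PER_MONTH = 8 * 22   # only powered on during working hours
OWN_HW_COST = 4000.0       # assumed workstation purchase price
OWN_HW_MONTHLY = 50.0      # assumed power + maintenance per month

def break_even_months() -> float:
    """Months until owning the hardware is cheaper than renting cloud."""
    cloud_monthly = CLOUD_PER_HOUR * HOURS_PER_MONTH
    return OWN_HW_COST / (cloud_monthly - OWN_HW_MONTHLY)

print(f"Break-even after ~{break_even_months():.1f} months")
# With these assumptions: $264/month cloud vs $50/month running costs,
# so 4000 / (264 - 50) ≈ 18.7 months before buying wins.
```

If your workload finishes well before the break-even point, cloud wins; if it runs for years, buying does.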
 
This round of Apple Silicon SOCs are not intended for the replacements for the iMac Pro or the Mac Pro. They will likely be the last ones moved to AS at the end of the 2 year migration. The chips then will likely be significantly upgraded.

In the meantime, lower-end models like the MacBook and maybe the low-end iMac and mini would be well suited to a version of the A14X expected in the next iPad Pros. The current iPad Pro chip, the A12Z, is as fast as a MacBook Pro.

The other consumer Macs will likely get either another version of the A14X (or A15X, depending on timing) with more cores and/or faster clocks.

I agree. The first Apple Silicon is most likely to be in a new MacBook (12" or maybe 14") that competes favorably with the MacBook Air or entry-level MBP13 - i.e. is usefully faster with better battery life, and some application-specific enhancements (e.g. faster video rendering in iMovie and FCPX).

They will be good enough to generate interest and positive reviews of Apple Silicon, but not cannibalize sales of the MBP16 or higher-end MBP13 (probably!). A new iMac (24"?) and Mac Mini will follow mid-2021, with a new MBP16 at the end of 2021. A high-end iMac 27" (maybe new 30-32"?) could arrive at the end of 2021 or early 2022, with a new MacPro sometime in 2022 - possibly with an early announcement, followed by a 6 month wait, as previously.
 
There are few applications that really need 64 cores. Video editing tends to level off after 24 cores and there is little benefit in more cores (see https://www.pugetsystems.com/labs/a...formance-AMD-Threadripper-3990X-64-Core-1659/). Even multiple GPUs don't add benefit beyond 2 or 3, depending on the application.

In servers, especially those running VMs or containers, sure 64 cores or more is common. But these are running dozens or hundreds of separate instances for multiple users / applications.

I'm really interested to know which single application needs or greatly benefits from really high core-counts.
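
For what it's worth, plain Amdahl's law already predicts this flattening. Here's a quick sketch (the 95% parallel fraction is my assumption, roughly in line with the video-editing scaling in the Puget link above):

```python
# Amdahl's law: why adding cores hits diminishing returns.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup over 1 core for a given parallelizable fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 8, 16, 24, 32, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.95, cores):5.2f}x")
# At 95% parallel: 24 cores gives ~11.2x, but doubling again to
# 64 cores only reaches ~15.4x.
```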

Exactly. (The other person has since responded, saying they run 50 Docker containers on their workstation, and, uh, sure…)
 
I asked him the same question...unless you are a one-man band, you are generally integrating your work on a build server that is running your delivery pipeline.

But especially if you are a one-man band, maybe don’t architect your software such that it needs so many containers.

In my experience, it is unusual to have every single developer recreating the entire stack on their local machine, and even if they do, they are typically using downscaled instances or mocks.

Well, it depends on how much time it takes to set up the build environment. If every time I want to test locally, I need to launch fifty Docker containers, I wouldn’t enjoy the situation where there’s a critical bug in production and I need to track it down. It would probably drive me bananas and make me turn that app into a monolith again.

Just about every developer I come across is using a laptop with 16-32GB RAM and a 4/6/8 core Intel CPU. A few have desktops or workstations. I see a tendency to use fewer large application servers and RDBMS - a lot more lightweight frameworks like NodeJS, Python Flask and NoSQL DBs.

Exactly.

Desktops are the minority for devs now, and, again, most of the actual code you’ll be writing won’t be CPU-bound, much less parallelizable. It’ll wait for your DB, or for some API, etc. Especially when it’s so highly modularized that very little code runs in one and the same process.
 
As an AWS user, I'd say those costs add up quickly. So, easy yes, cost effective, I'd be skeptical. It's also introduced a new kind of anxiety of forgetting to ramp down hardware when unused. YMMV.

Besides, if Apple Silicon is so great - it should have compute to spare, no?

It should. But we don’t need to be hyperbolic about it. A 50% core increase on all Macs would already be quite a lot. This user seems to be advocating for far more than that. Diminishing returns, and unreasonable expectations.
 
Back in June, during the peak of Covid, they said they expect to ship by the end of this year. I highly doubt they are impacted by Covid with regards to making the hardware. They probably have all the PPE and money to make things smooth.
The majority of TSMC capacity is working on orders from Huawei because US sanctions will block TSMC from fulfilling Huawei orders in the future. TSMC is in overdrive stockpiling for Huawei. All other orders are pushed back.
 
What if the A14X had the ability to be doubled or quadrupled up as a multiprocessor system, e.g.:

- A14X
- A14X2 (two chips)
- A14X4 (four chips)

If these higher-end versions of the A14 allow for running on multi-processor motherboards, then Apple could have a really flexible platform with essentially only two CPUs, the A14 and the A14X. Imagine what a Mac Pro with 4x A14X chips in it would be like 🤯
Then you would need another piece of advanced silicon: a controller for the CPUs.
 
That's not the idea behind the headless Mac. It's to do things the (thermally limited) Mac mini can't, without having to shell out the $6k needed for a base Mac Pro (which, in its base configuration, is a poor value).

Apple could probably produce an i9-10900K headless Mac, at its current profit margins, for ~$3k. That would be a beast of a machine for people that need power but don't need, or have the budget for, Xeons. That's the basis of the interest in such a machine. It would be of interest to hobbyists, graphics/video pros that are independents and don't have the budget for a Mac Pro, scientists wanting to supply their staff with Macs for development work, etc. etc.
The vast majority of Apple customers don’t seem to want the mythical xMac. In fact they don’t want any kind of desktop.

80% of Mac buyers purchase laptops. Another 10-15% (est.) buy iMacs. That means that between the Mac mini, iMac Pro and Mac Pro, there is a very small number of units sold. Splitting those 2-3 million units between four different models instead of three doesn’t seem to be anything Apple is interested in, regardless of how badly you might want it.

Maybe that will change in the future, with the move to Apple silicon, there’s at least some small possibility I suppose.
 
The majority of TSMC capacity is working on orders from Huawei because US sanctions will block TSMC from fulfilling Huawei orders in the future. TSMC is in overdrive stockpiling for Huawei. All other orders are pushed back.

What?! No way that’s true. Apple would have signed on for capacity that TSMC is obligated by contract to fulfill. Also, reports say the majority of TSMC’s 5nm capacity is going to Apple.
 
What?! No way that’s true. Apple would have signed on for capacity that TSMC is obligated by contract to fulfill. Also, reports say the majority of TSMC’s 5nm capacity is going to Apple.

The vast majority of TSMC production capacity is not 5nm. Huawei has probably been racking up a ton of 7nm and older production capacity. All the wafers they could get their hands on. (Probably turning in and/or getting rebates on future 5nm orders and buying up all of the 7nm capacity, especially that which doesn't use EUV, they can get for their current products.)

Apple has a lock on the bulk of early 5nm high-volume manufacturing, but that lockout is probably going to end over the next couple of months. 5nm isn't what Huawei needs to continue to ship and get revenue for products at this point in time anyway. It would be better to be on newer 5nm parts for products that could ship soon, but at this point they need to keep the lights on.

There aren't that many EUV steppers out there anywhere. We're talking low double digits, and TSMC only has about half of them (so even lower double digits). The vast bulk of fab production equipment isn't EUV at this point.
 
The vast majority of TSMC production capacity is not 5nm. Huawei has probably been racking up a ton of 7nm and older production capacity. All the wafers they could get their hands on. (Probably turning in and/or getting rebates on future 5nm orders and buying up all of the 7nm capacity, especially that which doesn't use EUV, they can get for their current products.)

Apple has a lock on the bulk of early 5nm high-volume manufacturing, but that lockout is probably going to end over the next couple of months. 5nm isn't what Huawei needs to continue to ship and get revenue for products at this point in time anyway. It would be better to be on newer 5nm parts for products that could ship soon, but at this point they need to keep the lights on.

There aren't that many EUV steppers out there anywhere. We're talking low double digits, and TSMC only has about half of them (so even lower double digits). The vast bulk of fab production equipment isn't EUV at this point.

Yeah, but I don’t think anything Huawei can do will affect Apple’s guarantee on chips. TSMC wouldn’t risk jeopardizing their commitments to Apple... ’cause if they fall short, I’m sure Apple has clawback clauses for the money Apple paid to guarantee specific amounts of chips.
 
Yeah, but I don’t think anything Huawei can do will affect Apple’s guarantee on chips. TSMC wouldn’t risk jeopardizing their commitments to Apple... ’cause if they fall short, I’m sure Apple has clawback clauses for the money Apple paid to guarantee specific amounts of chips.

Apple is contracting for more than just 5nm chips at the moment. There are other products on older stuff. Apple's exclusivity lockouts are likely only on bleeding-edge stuff.

This is all a bit overhyped though. Apple generally isn't getting better wafer-start contracts than other folks. There is always some "slop" where folks aren't using all of their quota once they get to the more mature technology (ebb and flow in product demand trickles into flow through the fab). That is really where Huawei going into burst mode causes bigger issues. Doubtful Huawei crowds any of the other very large players out to the point they are completely "starved out" of wafer starts. (So Apple isn't the only company holding its ground.)

Apple has products with "hand-me-down" processors on 7nm, 10nm, and even older nodes.

The major disruptive factor is the relatively smaller players who mainly buy wafer starts on more of a "spot market" basis. Relatively short runs that fit in between the "very high volume" runs. That stuff has been blown up. On 7nm it was already stressed because AMD and Nvidia were ramping up this year. But the Huawei thing cranked it up to a whole other level. If Apple had any demand bubble along the way, they'd be hit also.
 
Yeah, absolutely pointless to upgrade to the i5 or i7, unless you loooove that fan sound.

I just checked: one person "liked" what I wrote, and one person "downvoted" it, ha ha. Apple is a deity for some people, and I think I just blasphemed!
If anyone thinks that Apple can do no wrong, I have two words for them: 'butterfly keyboard'.
 
If anyone thinks that Apple can do no wrong, I have two words for them: 'butterfly keyboard'.
You can even start back at the Apple III with the socketed chips on the motherboard, where the "fix" was to raise it a couple inches from your desk and drop it to reseat the chips. Maybe that’s why they want to solder everything now LOL
 
I think your hypothesis may be right. I expect Apple to show significant performance increases in a few key applications or tasks, but be fairly average in others (maybe a modest increase of up to 20-30%). Apps running under Rosetta will be labelled as such to ensure that users know they're "not optimized / not ARM-native" - assuming performance is lacklustre.

General performance for native ARM apps will need to match or exceed existing Intel versions on similarly-priced hardware, though.

My thought is this.

If you went to Intel or AMD with Photoshop and its internal routines, gave Intel/AMD all the Photoshop code, and said "I want you to develop parts of your CPU/GPU to specifically run these functions Photoshop does",
I'm sure both companies could create something that would blast through Photoshop functions at a blazing speed which is totally unmatchable by anything else.

Likewise, Apple can build into its chips ways for its own video editing and other functions to run super fast, so you end up with a system/chip that, when it's placed into a device and running these apps, is blindingly fast.
One could say that's amazing.
One could also say that's kinda cheating.

It would be like making a car that's only ok-ish, but you've built the car for one specific type of road, and when it's on that specific road, OMG, it's so fast nothing else can touch it.
But when on other normal roads, it's nothing special really.

I guess it depends what roads you want to drive on, as to whether that's a great approach or not.

I'd love to think Apple's ARM version will be blazing fast with all software.
My gut tells me it will be specifically designed to run specific software really well, but will be just ok-ish when presented with other general software.

Not that this is a bad thing, and it can be great if you can get all the devs to spend the time and money to fully optimise their apps for your specific silicon.

I guess we'll soon see what ARM can do on BIGGER machines that really need more power to run much heavier, full-fat software.
 