My thought: Apple now has identified the Jon Prosser leaker. They have been canned.
I now have trust issues with Digitimes and Jon Prosser and those who lied about this morning's Apple Watch release...
5 nm is on 300 mm ("12 inch") wafers, so there is a hell of a lot of surface area available, and I've found an Anandtech report showing 2,600 good die on a test wafer, with 600 failed die and some number of edge die (die at the edge of the wafer are generally discarded as a quality protection against field returns).
I would assume that TSMC would not release a die to production with less than 90-95% yield, so, being the lazy mathematics engineer that I am, I would expect roughly 3,000 good die per wafer (rough arithmetic below).
https://www.anandtech.com/show/15219/early-tsmc-5nm-test-chip-yields-80-hvm-coming-in-h1-2020
This needs a major caveat: the test die that was referenced by Anandtech could be either smaller or larger than an Apple-designed die, so the good die per wafer yield is still very much up in the air.
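For anyone who wants to check the arithmetic, here is a rough Python sketch using the figures above (2,600 good die and 600 failed die from the test wafer) plus the assumed 90-95% production yield; the numbers are the post's, not TSMC's.

```python
# Back-of-envelope die-per-wafer estimate (illustrative only).
# Figures come from the Anandtech test-wafer report quoted above;
# the 90-95% production-yield range is the post's assumption.

good_die_test = 2600      # good die reported on the 5 nm test wafer
failed_die_test = 600     # failed die on the same wafer (edge die excluded)

total_die = good_die_test + failed_die_test
test_yield = good_die_test / total_die
print(f"Test-wafer yield: {test_yield:.1%}")          # ~81%

# If the same die count reached production-level yield (90-95%),
# the good-die count per 300 mm wafer would be roughly:
for production_yield in (0.90, 0.95):
    good_die = total_die * production_yield
    print(f"At {production_yield:.0%} yield: ~{good_die:.0f} good die per wafer")
# -> roughly 2,880-3,040 good die, i.e. about 3,000 as estimated above.
```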
Yup, how silly of me to want to see Apple put their best foot forward and maybe push computers out of the lull they have been in for the last decade. How silly of me and all those people buying the Mac Pro, AMD Threadrippers, and 8K-capable R5 cameras. How could anyone want to do local development as stipulated by their company? Or, I dunno, how most development is done?
50+ Docker images on your local dev machine is "very common"? Come on.
It doesn't sound very "monolithic" to me. It sounds like an architectural nightmare that only makes sense for massive teams, in which case… why on earth aren't you sharing a build host somewhere?
Do you think this will be "Blazing Fast", as some outlets/people are suggesting?
Or do you think the harsh reality will be that it's REALLY fast at the few specific apps and tasks which the chip has been tweaked to perform well at,
but that when it's put up against a general CPU and confronted with general apps it's not going to look anything special at all?
And rather than admit it's just OK-ish, the apps will simply get the blame for not being optimised for the CPU?
Personally, if we are going to put it up against CPUs from other brands, it needs to be able to handle anything that's thrown at it, not just a limited number of specially optimised apps.
Thoughts?
This round of Apple Silicon SOCs is not intended as the replacement for the iMac Pro or the Mac Pro. They will likely be the last ones moved to AS at the end of the 2-year migration. The chips by then will likely be significantly upgraded.
The 18-core iMac Pro and 24/28-core Mac Pro are not for Apple customers? Afterburner, Final Cut, and Logic are not for Apple customers?
This round of Apple Silicon SOCs is not intended as the replacement for the iMac Pro or the Mac Pro. They will likely be the last ones moved to AS at the end of the 2-year migration. The chips by then will likely be significantly upgraded.
In the meantime, lower-end models like the MacBook and maybe the low-end iMac and mini would be well suited to a version of the A14X expected in the next iPad Pros. The current iPad Pro chip, the A12Z, is as fast as a MacBook Pro.
The other consumer Macs will likely get either another version of the A14X (or A15X, depending on timing) with more cores and/or faster clocks.
There are few applications that really need 64 cores. Video editing tends to level off after 24 cores and there is little benefit in more cores (see https://www.pugetsystems.com/labs/a...formance-AMD-Threadripper-3990X-64-Core-1659/). Even multiple GPUs don't add benefit beyond 2 or 3, depending on the application.
In servers, especially those running VMs or containers, sure 64 cores or more is common. But these are running dozens or hundreds of separate instances for multiple users / applications.
I'm really interested to know which single application needs or greatly benefits from really high core-counts.
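One hedged way to see why the benefit levels off is Amdahl's law; the sketch below is purely illustrative, and the 95% parallel fraction is an assumed figure, not a measurement of any real video editor or other application.

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the workload that parallelizes. Illustrative only;
# p = 0.95 is an assumption, not a measured figure.

def speedup(n_cores: int, p: float = 0.95) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

for n in (8, 16, 24, 32, 64):
    print(f"{n:>2} cores: {speedup(n):5.2f}x")

# Output: 8 cores ~5.9x, 24 cores ~11.2x, 64 cores ~15.4x.
# Going from 24 to 64 cores (2.7x the cores) only buys ~1.4x more
# speedup, consistent with benchmarks that level off past ~24 cores.
```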
Gonna need a lot more than 8 performance cores for it to be useful, even if those are super fast. I'm expecting 30-odd cores in my next desktop and perhaps at least 12 in a MacBook Pro.
I asked him the same question... Unless you are a one-man band, you are generally integrating your work on a build server that is running your delivery pipeline.
In my experience, it is unusual to have every single developer recreating the entire stack on their local machine, and even if they do, they are typically using downscaled instances or mocks (sketched below).
Just about every developer I come across is using a laptop with 16-32GB RAM and a 4/6/8 core Intel CPU. A few have desktops or workstations. I see a tendency to use fewer large application servers and RDBMS - a lot more lightweight frameworks like NodeJS, Python Flask and NoSQL DBs.
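To illustrate the "downscaled instances or mocks" point above, here is a minimal Python sketch of stubbing out a downstream service in a test instead of running it as yet another local container; the InventoryClient name and its behaviour are hypothetical examples, not anyone's real stack.

```python
# Minimal sketch: mock a downstream dependency in a test rather than
# running it as another local container. InventoryClient is hypothetical.
from unittest import mock
import unittest


class InventoryClient:
    """Stand-in for a client that would normally call a remote service."""

    def get_stock(self, sku: str) -> int:
        raise NotImplementedError("real implementation calls the network")


def can_ship(client: InventoryClient, sku: str, quantity: int) -> bool:
    return client.get_stock(sku) >= quantity


class CanShipTest(unittest.TestCase):
    def test_can_ship_uses_stubbed_inventory(self):
        client = mock.create_autospec(InventoryClient, instance=True)
        client.get_stock.return_value = 5   # no container, no network
        self.assertTrue(can_ship(client, "SKU-123", 3))
        client.get_stock.assert_called_once_with("SKU-123")


if __name__ == "__main__":
    unittest.main()
```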
As an AWS user, I'd say those costs add up quickly. So, easy yes, cost effective, I'd be skeptical. It's also introduced a new kind of anxiety of forgetting to ramp down hardware when unused. YMMV.
Besides, if Apple Silicon is so great - it should have compute to spare, no?
The majority of TSMC capacity is working on orders from Huawei because US sanctions will block TSMC from fulfilling Huawei orders in the future. TSMC is in overdrive stockpiling for Huawei. All other orders are pushed back.
Back in June, during the peak of Covid, they said they expect to ship by the end of this year. I highly doubt they are impacted by Covid with regards to making the hardware. They probably have all the PPE and money to make things smooth.
Then you would need another advanced silicon, a controller for the CPUs.
What if the A14X had the ability to be doubled or quadrupled up as a multiprocessor system, e.g.:
- A14X
- A14X2 (two chips)
- A14X4 (four chips)
If these higher-end versions of the A14 allow for running on multi-processor motherboards, then Apple could have a really flexible platform with essentially only two CPUs, the A14 and the A14X. Imagine what a Mac Pro with 4x A14X chips in it would be like 🤯
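As a rough illustration of how such a multi-chip setup might scale, here is a tiny sketch that assumes, purely for the sake of argument, an A14X with the same 4+4 core layout as the A12Z; the A14X2/A14X4 names and the perfectly linear scaling are this thread's speculation, not announced products.

```python
# Hypothetical scaling of the multi-chip idea above. The 4 performance +
# 4 efficiency layout is borrowed from the A12Z purely for illustration;
# A14X2 and A14X4 are speculative names from the post, not real parts.
PERF_CORES_PER_CHIP = 4
EFF_CORES_PER_CHIP = 4

configs = {"A14X": 1, "A14X2": 2, "A14X4": 4}

for name, chips in configs.items():
    perf = chips * PERF_CORES_PER_CHIP
    eff = chips * EFF_CORES_PER_CHIP
    print(f"{name}: {chips} chip(s) -> {perf} performance + {eff} efficiency cores")

# Even a 4-chip board only reaches 16 performance cores under these
# assumptions, so real scaling would also hinge on interconnect,
# memory bandwidth and scheduler support.
```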
The vast majority of Apple customers don’t seem to want the mythical xMac. In fact they don’t want any kind of desktop.
That's not the idea behind the headless Mac. It's to do things the (thermally limited) Mac mini can't, without having to shell out the $6k needed for a base Mac Pro (which, in its base configuration, is a poor value).
Apple could probably produce an i9-10900K headless Mac, at its current profit margins, for ~$3k. That would be a beast of a machine for people who need power but don't need, or don't have the budget for, Xeons. That's the basis of the interest in such a machine. It would be of interest to hobbyists, graphics/video pros who are independents and don't have the budget for a Mac Pro, scientists wanting to supply their staff with Macs for development work, and so on.
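As a quick sanity check on the "~$3k at current margins" figure, the sketch below works backwards from the price; the gross-margin range is an assumed, approximate figure rather than an official Apple number, and the $3k price point is the post's own estimate.

```python
# Rough check on the "~$3k headless Mac at current margins" claim.
# The gross-margin range is an assumed, approximate figure for Apple's
# hardware business, not an official number; the $3k price is the
# post's own estimate.
price = 3000.0

for gross_margin in (0.30, 0.35, 0.40):
    allowed_cost = price * (1 - gross_margin)   # cost of goods the price supports
    print(f"At {gross_margin:.0%} margin, ~${allowed_cost:,.0f} is left for the hardware")

# -> roughly $1,800-$2,100 of component/build cost would have to cover
#    an i9-10900K, board, RAM, SSD, PSU and enclosure.
```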
The majority of TSMC capacity is working on orders from Huawei because US sanctions will block TSMC from fulfilling Huawei orders in the future. TSMC is in overdrive stockpiling for Huawei. All other orders are pushed back.
From Chinese supply chain sources and industry connections.
Any evidence of that?
The majority of TSMC capacity is working on orders from Huawei because US sanctions will block TSMC from fulfilling Huawei orders in the future. TSMC is in overdrive stockpiling for Huawei. All other orders are pushed back.
What?! No way that's true. Apple would have signed on for capacity that TSMC is obligated by contract to fulfill. Also, reports say the majority of TSMC's 5nm process is going to Apple.
The vast majority of TSMC production capacity is not 5nm. Huawei has probably been racking up a ton of 7nm and lower production capacity, all the wafers they could get their hands on. (Probably turning in and/or getting rebates on future 5nm orders and buying up all of the 7nm capacity, especially what doesn't use EUV, that they can get for their current products.)
Apple has a lock on the bulk of early 5nm high volume manufacturing, but that lockout is probably going to end over the next couple of months. That isn't what Huawei needs to continue to ship and get revenue for products at this point in time. It would be better to be on newer 5nm parts for products that could ship soon, but at this point they need to keep the lights on.
There aren't that many EUV fab steppers out there anywhere. We're talking low double digits, and TSMC only has about half of them (so even lower double digits). The vast bulk of fab production equipment isn't EUV at this point.
Yeah, but I don't think anything Huawei can do will affect Apple's guarantee on chips. TSMC wouldn't risk jeopardizing their commitments to Apple... because if they fall short, I'm sure Apple has clawback clauses on the money Apple paid to guarantee specific quantities of chips.
If anyone thinks that Apple can do no wrong, I have two words for them: 'butterfly keyboard'.
Yeah, absolutely pointless to upgrade to the i5 or i7, unless you loooove that fan sound.
I just checked: one person "liked" what I wrote, and one person "downvoted" it, ha ha. Apple is a deity for some people, and I think I just blasphemed!
You can even start back at the Apple III with the socketed chips on the motherboard, where the "fix" was to raise it a couple inches from your desk and drop it to reseat the chips. Maybe that's why they want to solder everything now LOL
If anyone thinks that Apple can do no wrong, I have two words for them: 'butterfly keyboard'.
No you wouldn't. The bus controllers would be on the CPUs. Same as back in the HyperTransport days with DEC and AMD.
Then you would need another advanced silicon, a controller for the CPUs.
I think your hypothesis may be right. I expect Apple to show significant performance increases in a few key applications or tasks, but be fairly average in others (maybe a modest increase of up to 20-30%). Apps running under Rosetta will be labelled as such to ensure that users know they're "not optimized / not ARM-native", assuming performance is lacklustre.
General performance for ARM-native apps will need to match or exceed the existing Intel versions on similarly priced hardware, though.