It's doable, I guess: with a footprint small enough to fit in mobile devices, maybe it's only a matter of time before they stack up many of them and add active cooling to boot (that's one running theory).
At the same time, 12 cores in 2010 were nowhere near as powerful or efficient as 12 cores today... even with the crazy stagnation Intel has had since ~2015.
I don't know exactly what you're doing that requires such a heavily multithreaded approach. I know 3D renderers and some compilation phases can eat as many cores as they're given, for example, but for 99% of users 12 cores is already above and beyond.

Ah, even today core count matters more than per-core speed for many workloads. And there is a strong case for coming out with something like 16 cores as the base: application developers would know that even the 'worst' ARM Mac has a load of grunt.
 
When I'm developing, I'm spinning up 50+ Docker containers to run tests against.

Ah, I see I've found an artisanal microservices expert.
Ah, even today core count matters more than per-core speed for many workloads.

Nonsense. Taking single-threaded code and running it on a faster-per-core CPU is far easier than adapting the code to be parallelized.
 
  • Haha
Reactions: pasamio
Ah, I see I've found an artisanal microservices expert.

I wouldn't go that far; this is a very common setup for developers today. There'll even be a few monolithic apps where you still have a dozen Docker images for the database, redis, elasticsearch, varnish cache, mailing service, etc. And with all that you still want to run your IDE or code editor, iTunes, Teams, Outlook, and gosh knows what else.
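For a rough picture of what that looks like in practice, here's a minimal sketch using the Docker SDK for Python (pip install docker); the image tags and container names are illustrative, not anyone's actual stack:

```python
# Spin up throwaway service containers for a test run, then tear them down.
import docker

client = docker.from_env()

# Illustrative services; a real stack (database, redis, elasticsearch,
# varnish, mail, ...) would be listed the same way.
services = {
    "ci-redis": "redis:6-alpine",
    "ci-memcached": "memcached:alpine",
    "ci-nginx": "nginx:alpine",
}

containers = [
    client.containers.run(image, name=name, detach=True, auto_remove=True)
    for name, image in services.items()
]

# ... point the test suite at these containers here ...

for c in containers:
    c.stop()  # auto_remove=True tells the daemon to delete each one on stop
```

Multiply that by a dozen services and a few parallel test runs and you can see how the container count, and the appetite for cores, climbs fast.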
 
Nonsense. Taking single-threaded code and running it on a faster-per-core CPU is far easier than adapting the code to be parallelized.

More and faster cores are preferable, of course, but in many cases more cores beat everything else. Batch processing images is just one example where even people who only use the pre-installed apps would benefit from more cores.
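To make that concrete, here's a minimal sketch of an embarrassingly parallel batch job in Python; the Pillow dependency (pip install Pillow), the folder name and the thumbnail size are assumptions for illustration:

```python
# Each file is independent, so a process pool keeps every core busy.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from PIL import Image  # Pillow

def make_thumbnail(path: Path) -> Path:
    out = path.with_suffix(".thumb.jpg")
    with Image.open(path) as img:
        img.thumbnail((512, 512))             # CPU-bound decode + resize
        img.convert("RGB").save(out, "JPEG")
    return out

if __name__ == "__main__":
    photos = list(Path("photos").glob("*.jpg"))
    # Defaults to one worker per core: with 12 cores, ~12 images in flight,
    # and doubling the core count roughly halves the wall-clock time.
    with ProcessPoolExecutor() as pool:
        for out in pool.map(make_thumbnail, photos):
            print("wrote", out)
```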

I don't see what the resistance is here. Don't we want better computers? AMD has been putting Intel to shame on multicore performance and has only just caught up on single-core, but I guess some people want Apple to get a free ride here. Might as well just stick with Intel if they aren't going to push past.
 
This event is going to be hard on Intel and AMD. For the next 6 months there will be benchmark after benchmark blowing Intel and AMD out of the water.
Competition is good. If there were no competition for the iPhone, we would all be lining up to buy an iPhone 5S this fall.
 
Gonna need a lot more than 8 performance cores for it to be useful, even if those cores are super fast. I am expecting 30-odd cores in my next desktop and perhaps at least 12 in a MacBook Pro.
I guess it depends how you define useful. If you are getting 30 cores in a desktop, you are in Xeon or upper-end AMD territory, way above what most people would call "useful". For the record, many people find far less performance, you know, quad-core i5s, 6-core i7s and 8-core i9s, to be plenty "useful". Just sayin'.
 
I now have trust issues with Digitimes and Jon Prosser and those who lied about this morning's Apple Watch release...
There was an Apple Watch release this morning? An event is scheduled for the 15th. BTW, guessing what the future may hold and being wrong is not a lie. Lying is saying something that you know, or should know, to be untrue; by definition, guessing at the future cannot be a lie, because it involves uncertain future events.
 
  • Like
Reactions: Tagbert
I guess it depends how you define useful. If you are getting 30 cores in a desktop, you are in Xeon or upper-end AMD territory, way above what most people would call "useful". For the record, many people find far less performance, you know, quad-core i5s, 6-core i7s and 8-core i9s, to be plenty "useful". Just sayin'.

I am sure a quad-core is still useful, but we aren't talking about the last decade. These are new Apple CPUs for the next decade, and I have said more than once that 30+ cores on desktop and 12/16 on mobile would be a good start. Starting off at or behind AMD and Intel is not a good start.
 
Yup. Hotspot is not really a great alternative to embedded cellular. Yeah, it can somewhat substitute, especially for infrequent users.

But you burn through your phone battery, there's no GPS so Find My isn't nearly as useful as it could be (the Mx co-processor could easily wake up occasionally and phone home long after the battery could no longer start the Mac itself), and I'm sure there are a number of other reasons I haven't thought of yet.

Anyone who has ever used an iPad with cellular, or a cellular-equipped notebook (Dell, Lenovo, etc.) knows how great they are. The plans are affordable too, at least in the US.
Really? I have always avoided the cellular tablets, preemptively labeling the feature as useless (for me, mostly).
So I only rarely find myself hotspotting to watch, read and do assorted stuff at a park table or bench (and hence burning phone battery, plus the phone enters a strange non-Wi-Fi mode and I wonder what that entails: does AirDrop still work, for example?). Maybe I should start giving the LTE ones a try for my next ones.

At least, for what it's worth and what it offers, hotspot works amazingly well: easy one-click discoverability and connection between iOS devices.
 
Keeping my fingers crossed they put some of their better HW engineers on the Performance Controller, both WRT design & verification.

Don't need another A11/A12 fiasco!
 
I am sure a quad-core is still useful, but we aren't talking about the last decade. These are new Apple CPUs for the next decade, and I have said more than once that 30+ cores on desktop and 12/16 on mobile would be a good start. Starting off at or behind AMD and Intel is not a good start.
Wow, I missed where Intel and AMD discontinued all of their lower end lines because no one was buying them. If I'm following your logic correctly, Apple is behind because it doesn't release the fastest supercomputer either?
 
  • Haha
Reactions: PickUrPoison
Wow, I missed where Intel and AMD discontinued all of their lower end lines because no one was buying them. If I'm following your logic correctly, Apple is behind because it doesn't release the fastest supercomputer either?

What on earth are you on about? It's as if you want Apple to fail before they even get started. Surely you can see there is a lot of promise in the new CPU and that a lot of people want Apple to come out with two guns blazing. If they come out with a whimper of an ultrabook CPU it will immediately set the tone and continue this assumption people have that ARM is for your Raspberry Pi projects.
 
$25 billion is a lot of money. Can someone explain why it costs so much to create a new manufacturing process?
I'm sure if you go onto a chip manufacturer's web site you might find out. I know next to nothing, but they have ultra-clean rooms, custom-made lithography equipment, cutting, furnaces... Factories in general are pretty expensive.
 
I am not sure that comparing RISC cores with CISC cores is a useful exercise.
It is my understanding that RISC cores are typically smaller and require fewer transistors, so it is quite likely that we will see many more cores crammed onto a single chip. But the way they operate is different, and I don't think an analysis based purely on core counts is particularly useful.
 
What on earth are you on about? It's as if you want Apple to fail before they even get started. Surely you can see there is a lot of promise in the new CPU and that a lot of people want Apple to come out with two guns blazing. If they come out with a whimper of an ultrabook CPU it will immediately set the tone and continue this assumption people have that ARM is for your Raspberry Pi projects.
Have you seen any benchmarks? The current line of A-series chips is in line with mainstream Intel and AMD. I do not see them targeting the upper end of the market on the first go-round. Anyone who needs a 30-core chip is probably not going to risk their livelihood on the first generation of anything. On the other hand, the iPad Pros already using less involved A-series chips are plenty fast for the mainstream market, and the first iteration of the A14X sounds like it leapfrogs the current generation substantially.

Sorry, I am not getting your point at all. Are you saying that the first generation ought to target a Geekbench 5 score of 19000 (28-core Xeon-based Mac Pro) or something? I would expect them to target, at the high end, a Geekbench 5 score around 5000, which is equivalent to the 6-core i7 16" MacBook Pro.
 
  • Like
Reactions: agsystems
Really? I have always avoided the cellular tablets, preemptively labeling the feature as useless (for me, mostly).
So I only rarely find myself hotspotting to watch, read and do assorted stuff at a park table or bench (and hence burning phone battery, plus the phone enters a strange non-Wi-Fi mode and I wonder what that entails: does AirDrop still work, for example?). Maybe I should start giving the LTE ones a try for my next ones.

At least, for what it's worth and what it offers, hotspot works amazingly well: easy one-click discoverability and connection between iOS devices.
I hear you; Instant Hotspot is nice, as far as it goes.

But yeah, maybe give it a test drive and see how you like it, and whether the monthly cost (it does add up) is worth the added utility.

In a business context I’d say it’s a no-brainer, but it’s a different calculation when it’s coming out of your own pocket 🤣
 
  • Like
Reactions: amartinez1660
I am not sure that comparing RISC cores with CISC cores is a useful exercise.
It is my understanding that RISC cores are typically smaller and require fewer transistors, so it is quite likely that we will see many more cores crammed onto a single chip. But the way they operate is different, and I don't think an analysis based purely on core counts is particularly useful.
You could do some more reading. From what I understand, a CISC core runs denser instructions, so one CISC instruction can do the work of 3 or 4 RISC instructions; you measure ultimate performance, not core counts. Besides, the last article I read said that even Intel's cores actually use a RISC-like architecture underneath, decoding x86 instructions into simpler micro-ops. Now this is well above my pay grade, but it is amazing what you can find out when you Google it (even if you do get a lot of ads).
 
Keeping my fingers crossed they put some of their better HW engineers on the Performance Controller, both WRT design & verification.

Don't need another A11/A12 fiasco!
I guess by definition it wasn't a fiasco if the chips still performed as required in the products/markets they were targeted at (which they did). I did hear some people ramble on about something wrong with some instruction set or other, but it was never quantified as the chips failing to run the software or render the graphics on the devices they were used in, so it was a big meh.
 
  • Like
Reactions: PickUrPoison
Sorry, I am not getting your point at all. Are you saying that the first generation ought to target a Geekbench 5 score of 19000 (28-core Xeon-based Mac Pro) or something? I would expect them to target, at the high end, a Geekbench 5 score around 5000, which is equivalent to the 6-core i7 16" MacBook Pro.

The first-gen chip should show us performance that isn't just on par with AMD/Intel; it should show us a solid progression to something not possible before. The dev kit is impressive, but still at the level of an i3 Mac mini. ARM has a stigma of being for netbooks; at least one of the new machines this year needs to show off some real power.

We had this with the Intel transition: the first Intel Macs made the G4 and all but the best of the G5 line look like old tech, and that inspired confidence. I am looking forward to Apple showing off what they can do, and something that just keeps up with AMD/Intel isn't going to be a strong selling point, no matter how big an accomplishment that is in its own right.

I know many will just be happy if the laptops don't get a wee bit too toasty on their lap, but a new CPU with 1:1 performance against existing machines isn't going to capture the imagination.
 
  • Disagree
Reactions: jdb8167
Does anyone know (on average) how many (usable) chips they are likely to get per wafer?

That's a very difficult thing to say; TSMC don't usually go public with their yield specifics.
From what I've seen their 5nm chips have strong yields - better than what they had with 7nm at this stage due to EUV simplifying a number of patterning steps.

So to answer your question: I don't know. If I had to take a wild stab in the dark, maybe 80% ± 5%. We don't know the die size of these A14 chips, so it's impossible to say how many fit on a 300mm wafer; maybe 550 chips if they are around 100mm², so ~450 usable. These are all very rough numbers, but they aren't unrealistic.
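As a rough sanity check on those numbers, here's the standard first-order dies-per-wafer estimate worked out in Python (it ignores scribe lines and edge exclusion, so it comes out a bit optimistic); the 100mm² die size and 80% yield are the same guesses as above:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Gross die count: wafer area over die area, minus an edge-loss term."""
    gross = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
             - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
    return int(gross)

gross = dies_per_wafer(300, 100)  # ~640 candidate dies on a 300mm wafer
good = int(gross * 0.80)          # ~510 good dies at an assumed 80% yield
print(f"{gross} gross dies, ~{good} good at 80% yield")
```

Same ballpark as the ~550 gross / ~450 usable guess above once you knock off a bit more for scribe lines and edge exclusion.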

Also I'm a TSMC shareholder and EEE, so I follow this stuff pretty closely - TSMC is miles ahead of the pack.
 
If Apple is going to use iPad chips in full-fledged computers, I am worried...

Apple planning to move to its own chip designs is at least two-year-old news. No one knew when or how they intended to introduce it across their entire line, but this has been openly talked about, even by Apple, for a while.
 