The fact remains that for desktops thermal envelopes are much less critical
I see you never owned a G5 PowerMac, or as my friends referred to it at the time "Is the 747 taking off again?"

How long do you think it will be before Apple CPUs can outpace a 64 core Threadripper?

I mean... we have literally no idea what their first machines will ship with. The dev kit is based on their "beefiest" iPad CPU so far, which is 4+4 cores and runs in production devices thinner than a pizza base. Let's say, worst case scenario, they at most swap the low-power cores for high-performance cores and call it a day.

So it's got 8 of the "high performance" cores from the current A12X/Z... How much more cooling does that need than the current one? Double? Triple? OK, great. So they can comfortably run in a device... the size of a tissue box?

Now... how big is a Mac Pro? Wait. What? It's bigger than a tissue box? What do you mean it's a lot ****ing bigger? Well, how many tissue boxes?

And yet they could have put a Threadripper in a Mac Pro tomorrow and retained full x86 compatibility for all the various apps and plugins,
What exactly would that gain? And more specifically, how exactly does that solve the issue of better energy efficiency (and thus less heat) in laptops/small form factors?

You can't say you seriously imagined a scenario where Apple has two ongoing product lines, both running macOS, but using two different, incompatible CPU architectures?

Seriously though, your logic is just baffling. I completely understand your goal: retain x86 compatibility. But holy ****, those are some weird arguments you're making. I've worked with/on x86 computers and servers for the best part of two decades.

I have never, ever, on any device, in any situation, thought "you know what this needs, is more unnecessary heat".
 
It's gonna be rated at 10 hours, why would it be more? They could have made it more with Intel, but they wanted the thinnest product possible that gives 10 hours, so why would it change?

Because this is Apple's own silicon vs. Intel. They'll want to show it's better at more than just speed. So my guess is they'll tweak the performance to the point where it's balanced between having a bit more speed, a bit less heat and a bit more battery life. That's way more marketable than just a lot more speed.
 
Just to clarify the post above, if it wasn't obvious from the tissue-box analogies and sarcasm:

If you already have a CPU design that has good performance/efficiency characteristics, scaling up to "more cores" is hardly brain surgery. It wasn't that long ago that Mac Pros (and PowerMacs before them) provided more processing capability using dual-CPU setups. I'm not saying this is necessarily Apple's plan for the Mac Pro (or iMac Pro, if it remains in the lineup). But scaling up a very efficient processor to give more processing power seems like a much easier problem to solve than "sorry boss, we can't release a laptop with a faster processor, they keep melting".
 
I use UNIX, Windows and macOS on my iMac. Indeed, I'd say my iMac is the best Windows machine I have ever had. I need this capability for work, because a lot of scientific software is Windows-only. To me it seems like Apple is throwing away a lot of flexibility that many people rely on. I remember the transition from the 68000 CPUs to PowerPC processors. It was painful, and virtualization was a nightmare. The switch to Intel was fantastic. If Apple doesn't get this working well, they're going to be the laughing stock of the computer world, and it will be compared to the disastrous transition to PowerPC that nearly killed Apple.
 
Apple Silicon is rumored to have performance 70-100% greater than Intel x86; why would the price come down?
The production cost of an Apple A12Z is around $50-60, while an i9 is about $400. Whether Apple pockets all those savings or passes them on to customers is yet to be seen, but my feeling is they'll want the market share to make this a viable direction; without sales of the new architecture, somebody has to answer to the investors.
 
So it's got 8 of the "high performance" cores from the current A12X/Z... How much more cooling does that need than the current one? Double? Triple? OK, great. So they can comfortably run in a device... the size of a tissue box?

Now... how big is a Mac Pro? Wait. What? It's bigger than a tissue box? What do you mean it's a lot ****ing bigger? Well, how many tissue boxes?
...

I have never, ever, on any device, in any situation, thought "you know what this needs, is more unnecessary heat".

Funnily enough, I don't remember suggesting more unnecessary heat as a design objective.

And your extrapolations of what might be possible with Apple silicon are simplistic and naive in the extreme. Getting from where they are today with chips designed to power a phone, to an enterprise-class server architecture CPU(s) supporting multiple PCIe 4.0 lanes, ECC memory and the rest, is not simply a matter of doing more of what they do already, and putting it in a tissue box.

And yes, I don't see any problem with retaining x86 for desktops and Apple silicon for mobiles. That's what Apple do AT THE MOMENT. Hardly challenging.

They just don't like paying Intel and care not one jot that this move is ****ing inconvenient for their users. That is the beginning and end of it. Everything else is just spin.
The production cost of an Apple A12Z is around $50-60, while an i9 is about $400. Whether Apple pockets all those savings or passes them on to customers is yet to be seen,

But you do not need to be clairvoyant to guess the answer. Businesses exist to make profits, and to charge as much as the market can bear. It's as simple as that. Future pricing will be governed by market opportunity, not by reduced costs.
 
They were demoing those apps on a maxed out Mac Pro. Of COURSE it's going to scroll smoothly. Would be pretty scary if it didn't on one of those beasts. What about a lesser, more realistic machine?
How do you think it was “maxed out”? The demo was running on 8 GB of RAM with a passively cooled processor designed for the iPad. And it was running an x86 Tomb Raider under Rosetta translation with what seemed like acceptable frame rates at 1080p - I mean, that was using an SoC and not a dedicated GPU.
I'm not investing in something Apple is clearly transitioning away from. If I had bought a Mac in the last year, I wouldn't be very worried. But I wouldn't buy one now.
Definitely makes sense to wait it out.
 
Apple could easily be pairing two or three chips for the high end and still be more energy efficient and more cost effective. Two 8-core A14-series chips, anyone? I think the Intel emulation on Apple CPUs will be quite fine once the real Macs come to market.

Apple may not need to pair CPUs for some time. They are using 5nm technology, so they can fit more cores on the die. Also, ARM chips usually require much less die area than x86, so increasing the die size would easily accommodate Ryzen-like core counts on the same die without a chiplet design, because some 30% of the die on Intel/AMD CPUs is microcode for the instruction decoder. Intel's x86 is a terrible architecture; I am happy we are getting rid of it, and if all goes well the world will follow in Apple's footsteps.
 
Funnily enough, I don't remember suggesting more unnecessary heat as a design objective.
But you did downplay concern for heat or power efficiency in desktop computers, while simultaneously suggesting that Apple could adopt a CPU that its manufacturer has previously recommended pairing with a water cooler. Oh yeah, that sounds like a great road to go down again. Nothing bad ever came from sticking pumped fluids inside a computer :rolleyes:


And yes, I don't see any problem with retaining x86 for desktops and Apple silicon for mobiles. That's what Apple do AT THE MOMENT. Hardly challenging.

Funny how you conveniently forgot about the most popular form of Mac, that is very much affected by (a) heat and (b) power requirements: laptops.

If you can say with a straight face, that power and heat are not an issue for laptops, you should go work for a politician.
 
Just to clarify the post above, if it wasn't obvious from the tissue-box analogies and sarcasm:

If you already have a CPU design that has good performance/efficiency characteristics, scaling up to "more cores" is hardly brain surgery. It wasn't that long ago that Mac Pros (and PowerMacs before them) provided more processing capability using dual-CPU setups. I'm not saying this is necessarily Apple's plan for the Mac Pro (or iMac Pro, if it remains in the lineup). But scaling up a very efficient processor to give more processing power seems like a much easier problem to solve than "sorry boss, we can't release a laptop with a faster processor, they keep melting".
It isn’t that easy. Just ask Intel. There’s a reason their desktop chips have lagged for a while. They started focusing on power efficiency instead of raw performance with their 14nm process. They wanted to concentrate on mobile, and this in turn made their higher-performing CPUs more “leaky” and less able to handle current, generating much more heat at higher performance levels. The positive trade-off is better efficiency at lower power. Broadwell and then Skylake were the beginning of this philosophy, and notice how their desktop performance gains after that came in much smaller increments, following the success stories of Sandy Bridge and Ivy Bridge.

Even now, Intel can’t make their 10nm process run at higher power levels. Notice that the true 10th-gen 10nm chips are only being released in the under-45-watt class? Everything else, from the MacBook Pro 16’s 45-watt CPU to their desktop chips, is still 14nm!

If it were so easy to just add more power and cooling to their low-power designs like Ice Lake, don’t you think Intel would be selling 10nm desktop-class chips that would be much more competitive with AMD in performance per watt?
 
As a CPU designer give me that problem every time. (“What? I have >100w power budget and all I have here is this 30W SoC? Gee, sounds hard. Call me in a year.” .... spends year playing solitaire .... “ok, here you go. Problem solved.” ... turns up power supply voltage and speeds up clock...)

So what chip did you design?
It wouldn’t provide context other than “it will run a lot faster than this,” but the fact that people seem to think it would provide context is exactly why they won’t allow it.

99% of the people on here will pay no attention, see the benchmark, and start shouting “see! It runs slower than intel’s [insert chip here]” and then that narrative starts to spread and it’s all based on ********.

People are already saying an ARM Mac will massively outperform Intel/AMD chips now, because of Geekbench scores.
 
The game demos they are showcasing look bad. Honestly, I don't know who exactly they are targeting by showcasing these.
For gamers, those demos look like early-2000s PC gaming, if not earlier. The Tomb Raider demo looked flat: no proper illumination, low poly count in a confined space (basically the lowest-settings scenario for a high frame rate).
For casual gamers, sure, but a mention of an Angry Birds type of game would be enough.
Apple is not a gaming company, but it's trying to seem like one and failing. I don't understand why they even bother; it's not like this is important for Apple.
It was using an iPad graphics core from 2 years ago. The point was to show emulation speed, not to lure gamers.
Just to clarify the post above, if it wasn't obvious from the tissue-box analogies and sarcasm:

If you already have a CPU design that has good performance/efficiency characteristics, scaling up to "more cores" is hardly brain surgery. It wasn't that long ago that Mac Pros (and PowerMacs before them) provided more processing capability using dual-CPU setups. I'm not saying this is necessarily Apple's plan for the Mac Pro (or iMac Pro, if it remains in the lineup). But scaling up a very efficient processor to give more processing power seems like a much easier problem to solve than "sorry boss, we can't release a laptop with a faster processor, they keep melting".
This is exactly right.

Adding cores is comparatively easy so long as you have thought ahead of time about a good bus protocol - most of the trick is then in having an intelligent scheduler, which is really a software problem - and in how you deal with memory accesses.

It’s also easier to make a fast processor if you start from an efficient one because raising the voltage and clock is more than a squaring factor on power while speed increases only about linearly. But if you start with a desktop processor and reduce speed, you only affect dynamic power dissipation, not static.
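A rough sketch of the power model behind that dynamic-vs-static point, in the standard textbook CMOS form (just the usual first-order approximation, not figures from any specific chip):

```latex
% First-order CMOS power model (alpha = activity factor, C = switched capacitance):
\[
  P_{\text{total}} =
    \underbrace{\alpha \, C \, V^{2} f}_{\text{dynamic (switching)}}
    + \underbrace{V \, I_{\text{leak}}}_{\text{static (leakage)}}
\]
% Lowering f shrinks only the first term; leakage persists unless V is also
% reduced or blocks are power-gated, which is why simply down-clocking a
% desktop part is a poor substitute for a design that is efficient to begin with.
```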

On the other hand, often “efficient” processors have critical timing paths that nobody bothered to optimize, unless they contemplated ahead of time that the chip might be sold at higher clock bins.
Because some 30% of the die on Intel/AMD CPUs is microcode for the instruction decoder.

No, the entire instruction decoder is only maybe 20% of the size of a core, and the cores are only part of the die (caches, I/O blocks, clock modules, buses, etc. take up the rest, with cache often being by far the biggest part of the die).

The microcode ROM is a sizeable chunk of the instruction decoder. But it’s a small percentage of the die. (Still a bad thing and permanent disadvantage of x86).
So what chip did you design?

F-RISC/G, Exponential x704, Exponential x705, Sun UltraSparc V, AMD K6-II, K6-II+, K6-III, K6-III+, Athlon 64, Opteron, and various variations of some of those.
 
But you do not need to be clairvoyant to guess the answer. Businesses exist to make profits, and to charge as much as the market can bear. It's as simple as that. Future pricing will be governed by market opportunity, not by reduced costs.

Apple has maintained a surprisingly consistent profit margin. It seems pretty unlikely that they will change that now. However, remember that the $400 i9 price includes all of Intel’s R&D costs, which are not included in the $50-$60 production cost of the A12Z, so figure double that cost fully burdened - still about a quarter of the i9’s price. Apple can either provide cheaper systems at the same performance point, systems with better specs at the same price point, or - their most likely choice - a blend (somewhat less money, somewhat better specs), all while maintaining their margins.
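Rough arithmetic, using the figures above (the 2x fully-burdened factor is just my rough guess, not a known number):

```latex
% "About a quarter of the price", checked with the quoted figures:
\[
  2 \times \$55 \approx \$110,
  \qquad
  \frac{\$110}{\$400} \approx 0.28
\]
```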

You are absolutely right that businesses exist to make profit; however, your presumption that they will use these cost drops (eliminating Intel’s profit margin, marketing and other non-required costs) to radically increase margin has no basis in history (easily seen by looking at their margins over time).
 
ARM wasn't relevant back when Apple made the switch to Intel. ARM has progressed massively during the last decade thanks to the luxury of having a massive market behind it to fund the kind of R&D that only x86 enjoyed during the late '90s and 2000s.
Neither was PPC.
The reason x86 came out on top against the RISC contenders isn't that it was a superior architecture, but that it had a huge market to keep funding the R&D needed to keep making improvements.
x86 matured to the point where it was competitive with, and in some respects faster than, RISC-based processors. Enough so that RISC vendors could not justify the continued cost of developing their own processors.

If we're to believe RISC, in and of itself, is superior to CISC (which essentially means x64 for this discussion), then this would not have been the case. The developers, who were for the most part also the users of these processors, would have continued to improve them, enjoying the fruits of their superior and faster designs. It would have been worth the cost to continue to design and produce these processors. As it turns out, it was not. The expense was too high and the performance delta was no longer compelling. Over time each abandoned their own processor designs and migrated to x86.
 
It isn’t that easy. Just ask Intel. There’s a reason their desktop chips have lagged for a while.
I'm not going to debate the intricacies of specific CPUs and why they are or aren't "efficient" or "performant" - I'm not an electrical engineer, and to be honest, debating with someone whining about coulda woulda shoulda isn't really ever gonna end, is it? But I want to reiterate what I said before: Apple, and many other vendors, have been in a very similar scenario before: "We have this CPU with a handful of cores. It's operating at pretty much the highest frequency it can. Well, how can we make the computer using this faster?" "Why don't you just put two of the little ****ers in there, Bob?"

All this talk about 64 core Rip Snorters and the existing 28 core Xeons proves that we're all agreed that the primary way to improve performance in 2020 is the ability to run more tasks in parallel, not so much a single-core 20GHz processor.
 
I'm not going to debate the intricacies of specific CPUs and why they are or aren't "efficient" or "performant" - I'm not an electrical engineer, and to be honest, debating with someone whining about coulda woulda shoulda isn't really ever gonna end, is it? But I want to reiterate what I said before: Apple, and many other vendors, have been in a very similar scenario before: "We have this CPU with a handful of cores. It's operating at pretty much the highest frequency it can. Well, how can we make the computer using this faster?" "Why don't you just put two of the little ****ers in there, Bob?"

All this talk about 64 core Rip Snorters and the existing 28 core Xeons proves that we're all agreed that the primary way to improve performance in 2020 is the ability to run more tasks in parallel, not so much a single-core 20GHz processor.

And the reason for that is simple physics. Power is proportional to frequency times voltage squared. So 20GHz requires, at a minimum, 5x more power than an identical 4GHz core. But to get to 20GHz you also need to increase the voltage quite a bit, otherwise the capacitances charge and discharge too slowly to support that clock rate. And that’s a squared effect. So a 20GHz core burns far more than 5x the power of a 4GHz core, whereas 5 4-GHz cores burn “only” 5x what one 4-GHz core burns.
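For anyone who wants the back-of-the-envelope version of that argument, here it is in standard textbook form (α and C are workload/chip constants; the cubic scaling is an approximation, not a measurement of any real part):

```latex
% CMOS dynamic power:
\[
  P_{\text{dyn}} \approx \alpha \, C \, V^{2} f
\]
% Raising only the frequency gives 20/4 = 5x the power. If the voltage must
% also rise roughly in proportion to frequency to meet timing, then P grows
% like f^3, so a single 20 GHz core costs on the order of
\[
  \left(\tfrac{20}{4}\right)^{3} = 125\times
\]
% the power of one 4 GHz core, while five 4 GHz cores cost only 5x.
```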
 
I think I already explained the Office situation earlier. Microsoft had two options: either compile it for ARM and lose compatibility with 3rd party plugins, or emulate Office and keep it compatible with all 3rd party plugins. However, they figured there could be a third option.

Short excursion on the topic:
An application consists of the executable itself (e.g. the *.exe), supplemental libraries (*.dll), system libraries provided by Microsoft (*.dll), and then the OS kernel services plus the kernel drivers.
If you start a 3rd party x86 program, Windows will use the x86 executable and the x86 versions of all DLLs and emulate all of that code - when the app makes a kernel or driver call, execution jumps out of emulation and runs the driver and kernel code as native ARM. In this sense every application is only partially emulated and partially runs native.

So what Microsoft did is compile special versions of the DLLs as ARM, but with a special shim/wrapper that makes the ABI compatible with x86, so that an emulated x86 application can call directly into native ARM libraries. Microsoft calls these CHPE (compiled hybrid portable executable) libraries.

That's what they are doing with Office. Only the application level (*.exe) is emulated, but all library, kernel and driver calls are native ARM. This technique makes Office fly on the Surface Pro X, as there is only a very small emulation penalty and all the 3rd party plugin compatibility is preserved.
Thanks for taking the time to write that very thorough explanation. Genuinely appreciated. So then the assumption is that these supplemental extension libraries, aka DLL hell (sorry, couldn’t resist), were never part of Office for Mac functionality, and so they didn’t have to worry about these for macOS? Is this also part of the reason I am always told that Office for Mac is missing functionality/features in comparison to the Windows versions?
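To make the CHPE idea described above a bit more concrete, here is a rough, purely conceptual sketch in plain C. The names and the marshalling step are invented for illustration; this is not Microsoft's actual implementation, just the shape of the mechanism: the application code runs under emulation, while library calls land on a thin thunk that crosses over to native code.

```c
/* Conceptual sketch (NOT Microsoft's real code) of a CHPE-style library:
 * the application is emulated, but exported library entry points are thin
 * thunks that adapt the emulated caller's ABI and then run native code,
 * so only the boundary crossing pays any translation cost.
 */
#include <stdio.h>

/* "Native ARM64" implementation of a library routine. */
static int native_strlen16(const char *s) {
    int n = 0;
    while (s[n] != '\0') n++;
    return n;
}

/* CHPE-style thunk: entered with the emulated caller's (x86) ABI, it would
 * fix up whatever the two ABIs disagree on (argument registers, stack layout,
 * ...) and then tail-call the native routine. Here the fix-up is just a comment. */
static int chpe_thunk_strlen16(const char *s) {
    /* ...marshal x86 ABI -> ARM64 ABI here... */
    return native_strlen16(s);
}

/* The "emulated x86 application": it only ever sees the thunk's address,
 * exactly as if it had linked against an ordinary x86 DLL export. */
static void emulated_app(void) {
    const char *doc = "quarterly-report.xlsx";
    printf("emulated app: length of \"%s\" is %d\n",
           doc, chpe_thunk_strlen16(doc));
}

int main(void) {
    emulated_app();   /* the emulator would be interpreting/JITting only this part */
    return 0;
}
```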
 
And the reason for that is simple physics. Power is proportional to frequency times voltage squared. So 20GHz requires, at a minimum, 5x more power than an identical 4GHz core. But to get to 20GHz you also need to increase the voltage quite a bit, otherwise the capacitances charge and discharge too slowly to support that clock rate. And that’s a squared effect. So a 20GHz core burns far more than 5x the power of a 4GHz core, whereas 5 4-GHz cores burn “only” 5x what one 4-GHz core burns.

What surprised me was that the A12X/Z already runs at 2.49GHz. I am genuinely curious to see how Apple ramps this thing up to usage beyond a device as thick as a pizza base - is it faster cores? Is it more cores? Is it multi-CPU? Is it a combination of all those things? How relevant are the low-power cores in actual desktop (i.e. iMac/Mac mini/Mac Pro; those where it'll never be running on an internal battery) usage? Are they still beneficial to let the machine run at ultra-low power?
 
What surprised me was that the A12X/Z already runs at 2.49GHz. I am genuinely curious to see how Apple ramps this thing up to usage beyond a device as thick as a pizza base - is it faster cores? Is it more cores? Is it multi-CPU? Is it a combination of all those things? How relevant are the low-power cores in actual desktop (i.e. iMac/Mac mini/Mac Pro; those where it'll never be running on an internal battery) usage? Are they still beneficial to let the machine run at ultra-low power?
The low-power cores are very useful. Look at Activity Monitor on a Mac and see how many processes are running at “0%” CPU. By scheduling those on the low-performance cores you free up the big cores to run in longer bursts without having to share time with those other processes. And those cores generate less heat to do the same job as the big cores (because they are doing things more slowly and switching fewer transistors), so the overall die is a little cooler than it would otherwise be, which lets the big cores heat things up more. I think of the small cores mostly as improving overall system performance rather than really making much difference in power consumption on a Mac.
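If you want to see what that looks like from the software side, here's a minimal macOS sketch in plain C using the public libdispatch API. The QoS classes are real; whether a given block actually lands on an efficiency core is entirely the scheduler's decision, so treat this as illustrative only:

```c
/* Minimal libdispatch QoS sketch: low-QoS work is the kind of thing the
 * system may steer toward efficiency cores, leaving performance cores free.
 * Build on macOS: clang -fblocks qos_demo.c -o qos_demo
 */
#include <dispatch/dispatch.h>
#include <stdio.h>

static void spin(const char *label) {
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 200000000UL; i++) x += i;  /* busy work */
    printf("%s done (x=%lu)\n", label, (unsigned long)x);
}

int main(void) {
    dispatch_group_t group = dispatch_group_create();

    /* Background QoS: housekeeping the user never waits on. */
    dispatch_group_async(group,
        dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0),
        ^{ spin("background task"); });

    /* User-initiated QoS: something the user is actively waiting for. */
    dispatch_group_async(group,
        dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0),
        ^{ spin("user-initiated task"); });

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}
```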
 
What surprised me was that the A12X/Z already runs at 2.49GHz. I am genuinely curious to see how Apple ramps this thing up to usage beyond a device as thick as a pizza base - is it faster cores? Is it more cores? Is it multi-CPU? Is it a combination of all those things? How relevant are the low-power cores in actual desktop (i.e. iMac/Mac mini/Mac Pro; those where it'll never be running on an internal battery) usage? Are they still beneficial to let the machine run at ultra-low power?
I'm also very interested to see what they will do. It seems the core design is the same for the A and AX variants of a chip (e.g. Lightning (high performance) and Thunder (high efficiency) in the A13), just used in differing amounts - always 2x high performance on the iPhone chip, with varying numbers of high-efficiency cores, and 4 of each on the A12X/Z - and clocked at different speeds.

Assuming the A14 is as big an improvement as has been rumoured, these are already going to be very powerful chips, and considering the leak was for a 12-core (8 high performance, 4 high efficiency) Mac processor, that potentially sounds like it would fit the bill for the 16" MBP. It looks like more cores, probably somewhat higher clock speeds, and of course better graphics are the recipe Apple are going for.

I wonder if the MacBook Air will share the basic iPad A14X though? The A12Z seemed to already be more than adequate for the sorts of things a MBA will be used for...
 
As a CPU designer give me that problem every time. (“What? I have >100w power budget and all I have here is this 30W SoC? Gee, sounds hard. Call me in a year.” .... spends year playing solitaire .... “ok, here you go. Problem solved.” ... turns up power supply voltage and speeds up clock...)

watch those mintime violations tho
 
If I post a link to a high court judge with the same name as me, it does not mean I am a high court judge.
It makes you someone to be ignored. *blocked*

How long do you think it will be before Apple CPUs can outpace a 64 core Threadripper?

Not a cost efficient way to do things (look at the price of those chips), so maybe never. Better to use a farm of servers, GPUs and TPUs on an HPC interconnect, each module with its own cooling. But that's not Apple's market.
I get the feeling that their next big hobby after turning their attention away from the massive pile of work they have for the next year or so is to focus on gaming big time. We’ll see.

IMO, they first go after Google and Nvidia to compete with TPU-like stuff in the ML training & inference market (which will turn out to be more useful for the future of gaming and AR than just pushing pixels fast). Plus the developed IP will be very useful for the cost efficiency of Apple's data centers.
 
watch those mintime violations tho

Yep. Hate those ****ers. Actually, the first chip I worked on at one of my employers involved redesigning the integer ALU for a follow-on version of a chip. Can’t remember exactly, but it was probably something like a 300MHz part that we wanted to move to 350MHz or so. The design used latches instead of flip-flops, and was clock-borrowing all over the place.

First thing I did was to recenter all the clocks, so that I could see how bad the critical paths actually were, and see if there was a way to move logic around so I wasn’t relying on clock tricks. After about two months I had the design working at the right frequency with only the tiniest bit of clock borrowing. Then I started the hold time analysis, and found the thing was completely ****ed in the bottom half of the speed bins. So another month screwing with it to try to get it to work properly with mintime.

After that project I got involved with EDA and methodology, and we never used latches again, never used clock borrowing again, and we used a static timing methodology that ensured min and max times were both solved simultaneously from the beginning. I even wrote a tool to let me drag cells and wires around and instantly see new min and max times for all paths through a gate in real time. Synopsys wasn’t happy that I figured out how to replicate the same answers that PrimeTime would give. (Never did get my capacitance extraction to be very good, but it was good enough to cut down on a lot of work.)
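For anyone wondering what "mintime" actually means here, these are the standard static-timing constraints in textbook form (nothing specific to that particular chip):

```latex
% t_skew is the clock skew between the launching and capturing flops.
% Setup ("maxtime"): the slowest path must fit within a clock period:
\[
  t_{c \to q} + t_{\text{logic,max}} + t_{\text{setup}} \le T_{\text{clk}} + t_{\text{skew}}
\]
% Hold ("mintime"): the fastest path must not race past the capture edge:
\[
  t_{c \to q,\min} + t_{\text{logic,min}} \ge t_{\text{hold}} + t_{\text{skew}}
\]
% Note the hold constraint contains no T_clk: slowing the clock down never
% fixes a mintime violation, which is why it has to be closed separately.
```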
 