Apple said they are developing a family of SoCs for the Mac. Why do you assume they're going to draw only 15W?
If you start with an iPad SoC that draws maybe 5 watts, going to 15 watts alone will give you an awful lot more performance: say, twice as many cores and 25% higher clock speed. I'd expect ARM at 15 watts to beat Intel at 30 watts. And of course you're right, there's no reason why Apple would stop at 15 watts.
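For anyone who wants to sanity-check those numbers, here is a rough sketch. The 5-watt baseline, the linear scaling with core count, and the square-law clock/power relation are all simplifying assumptions, not measured figures:

```python
# Back-of-envelope: scale a ~5 W iPad-class SoC into a 15 W envelope.
# Assumptions (simplified): power scales linearly with core count and
# with the square of clock speed; no process or voltage changes.

base_power_w = 5.0    # assumed iPad SoC package power
core_factor = 2.0     # twice as many cores
clock_factor = 1.25   # 25% higher clock speed

power_w = base_power_w * core_factor * clock_factor ** 2
print(f"estimated power: {power_w:.1f} W")  # ~15.6 W

# Rough throughput gain for parallel workloads: cores x clock
print(f"estimated speedup: {core_factor * clock_factor:.2f}x")  # 2.5x
```

Under those assumptions, doubling the cores and raising the clock 25% lands almost exactly in a 15-watt envelope, so the numbers above are at least self-consistent.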
 
Maybe he's right - it would seem awfully shortsighted for them to change just the CPU and not the GPU at the same time. I'm already sceptical they can build anything like iMac Pro or Mac Pro replacements in two years with ARM chips, let alone with the level of graphics crunching power those devices need.

Big Sur for devs probably has everything ripped out of it so they can't find out what's happening, leaving only what they need to work at the most basic level, but I guess we'll see.
The rumours I have heard are that the Intel Iris-style replacements could be ready this year, but the heavier, more dedicated offerings will be 2021, as Apple needs the time to get the GPU side ready.
 
If you start with an iPad SoC that draws maybe 5 watts, going to 15 watts alone will give you an awful lot more performance: say, twice as many cores and 25% higher clock speed. I'd expect ARM at 15 watts to beat Intel at 30 watts. And of course you're right, there's no reason why Apple would stop at 15 watts.
Apple traditionally stops where the performance per watt peaks, though that is within its self-imposed heat and wattage constraints.
 
It won't be better though; it might be more power efficient, but not more powerful. I mean, they skimp on the hardware as it is :/ You have to pay through the roof to spec it up, and even then you can build yourself something much better on Windows. With the likes of Nvidia bypassing the CPU to offload what they want onto the GPU... it's gonna be crazy for encoding and stuff that has been throttled by the CPU.

Well, look at it from this perspective. An A13 core running at 2.66GHz performs within a 15% margin of a 5.0GHz Intel desktop CPU while consuming 10x less power. Now, that clock seems to be an upper limit for the A13, but if Apple is able to push the clocks to, say, 3.3GHz without blowing up power consumption (which should be realistic with the new 5nm process), they will outperform anything Intel has to offer in the coming years while still having a 3x-5x power consumption advantage.

Conservatively, I'd expect the peak performance of shipping Apple Silicon to be at least 20% higher than Intel or AMD chips within the same TDP bracket.

Meanwhile Apple moves in-house, and their GPU on the mobile side has never been anything special.

I would disagree. Their mobile GPUs compare quite favorably to discrete desktop GPUs. The A12Z SoC offers almost identical performance to a 50W Pascal part (the GTX 1050) and seems to have at least a 20-50% power-efficiency advantage over Turing and/or Navi (looking at graphics). Of course, we are excluding the high-end GPU spectrum, like the segment occupied by the new RTX 3000 series (which is less relevant to Apple anyway). In that segment, performance scales more or less linearly with the number of shader units: there is always more work to do than the GPU can handle, and this work is inherently massively parallel. Taking the performance of the A12Z and scaling it to a 50W part (which should yield around 36-40 GPU cores, or 2304-2560 "shader cores") would net you performance close to an RTX 2070. Of course, this is just a math exercise in the end, with very little relation to reality, but it is useful for estimating the relative power of these GPUs.
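Here is that scaling exercise written out as a quick script. The A12Z GPU power draw is an assumption (Apple publishes no figure); the 8-core, 64-shaders-per-core layout is implied by the numbers above:

```python
# Sketch of the linear-scaling estimate above. Assumptions: the A12Z
# GPU has 8 cores of 64 "shader cores" each and draws roughly 10-11 W;
# performance and power scale linearly with core count (plausible for
# massively parallel graphics work, ignoring memory bandwidth limits).

a12z_cores = 8
shaders_per_core = 64
assumed_gpu_power_w = (10.0, 11.0)  # assumed range, not an official figure
target_power_w = 50.0

for p in assumed_gpu_power_w:
    cores = round(a12z_cores * target_power_w / p)
    print(f"{p:.0f} W baseline -> ~{cores} cores, "
          f"~{cores * shaders_per_core} shader cores")
# 10 W -> ~40 cores / 2560 shaders; 11 W -> ~36 cores / 2304 shaders
```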

To add to this: on the low end and midrange, Apple has a very good chance to outperform competitors by a healthy margin, simply because their GPUs need to do less work (thanks to their TBDR approach). It is not clear whether this will scale to the high end, though. GPU compute boils down to the instruction scheduler and the number of ALUs. The former is state of the art in Apple's GPU architecture; the latter is on the low side (since they have only been making ultra-low-power mobile chips). Still, looking at compute benchmarks, Apple seems to be doing more work per ALU per watt than, say, Navi.
 
That's my whole point. Intel released this just after the new Intel iMac, knowing that Apple will be on ASi before Intel releases the heavier versions of 11th gen. So Apple will be comparing ASi to the previous iMac (10th gen), whereas Intel will be comparing ASi to their new (yet-to-be-released) heavier 11th-gen CPUs.

I have no idea what your point is. These CPUs have nothing to do with the iMac. They're not remotely in the right power bracket. Making any comparisons is moot until either Apple ships an ARM iMac, Intel ships Rocket Lake-S, or both.
 
I have no idea what your point is.
I'll explain it again for your benefit.

Intel moving to 11th gen for this is just the start of a full 11th gen transition. Intel choosing now to start this transition possibly shows their intentions. Or maybe the Apple/Intel contract/relationship is nearing the end of its life. Who knows? Why now? It sounds very "timed" for a reason.

Think about the broader picture, not just this specific CPU (which is out of the iMac's power/watt range, assuming the ASi iMacs are similar to the current Intel iMacs on this point).
 
Intel moving to 11th gen for this is just the start of a full 11th gen transition.

True.

Intel choosing now to start this transition possibly shows their intentions.

Huh?

Tiger Lake-U shipping in fall of 2020 was basically known for years.

What… "intentions"?

Or maybe the Apple/Intel contract/relationship is nearing the end of its life. Who knows? Why now? It sounds very "timed" for a reason.

It isn't.

Ice Lake-U was announced in August 2019, and this is the natural progression of that.

Think about the broader picture, not just this specific CPU

The broader picture has nothing to do with Apple here.
 
I will take a wait-and-see approach. Apple is going to attempt to push ARM on the Mac. They can do that at the hardware level, no problem. They have dumbed down their own apps enough to run on both iOS and macOS, so that is not a problem either.

Its success is going to rely on NATIVE software from the likes of Adobe, Microsoft, and others. The iPad versions of these apps are not going to cut it. Emulated software is not going to cut it either. It will take native app support for this push to ARM to be successful.

Then again, if the cost of supporting x86-64 Macs comes close to what they make on Macs, and Mac revenue is a small portion of their overall revenue, then maybe Apple does not care whether it gets native app support from those vendors. They will make more money per device sold once they control the whole Mac (no Intel), and that might make up for the loss of Mac sales from people leaving for Windows to run Adobe CC, MS Office, and other apps that do not get ported.

You may not have noticed the demo of native MS Office, Photoshop, and Lightroom during the Apple Silicon reveal at WWDC.
 
Apple has yet to ship performance desktop silicon.
Let's wait and see.

They will have the same issues as Intel, and possibly more, since they depend on another company for their process technology.
If TSMC falters, Apple is dead in the water for silicon.

Only dead in the water if TSMC falters and is the only fab on Earth (it's not). That's why it's better to outsource process tech. Just ask Intel.
 
The consumer benefits of this Apple transition remain to be seen. I think there will be some sort of deficit of apps working on Apple Silicon, at least for the first year or so. I'm not talking about the iPhone/iPad apps that will clearly work from the get-go; there are many other apps that currently only work on the x86 architecture. I guess we're going to have to wait and see how it all pans out.

No, the big devs will port quickly (much easier than past transitions), and Rosetta 2 looks solid.
 
Only mentioned it once and I should stop comparing it? I didn't want to get into a big console discussion as it's far off topic, but I'll just say this.

As discussed many times around here in the Epic 30% battle, consoles are subsidized, so they cost more to make than their sale price, and they are designed to last 6-7 years without upgrades. The new-gen consoles clearly are not going to be competing on graphics. Yes, you can squeeze more performance out of a consistent console platform, but you are failing to note that that's no longer how game developers work. They aren't hand-optimizing games; they are targeting the Xbox One S, Xbox One X, Xbox Series S, and Xbox Series X all at the same time, not to mention PCs, and are using the same APIs to go cross-platform with all of the DirectX technologies. Gone are the days when Xbox console games were well optimized for a single platform. Even Sony is allowing exclusives to launch on the PC at the same time. So they are, in fact, directly competing, and you can't fake ray tracing performance: you either have it or you don't.

Nvidia had a huge swing and a miss with the RTX 2000 series: it was overpriced and way underperforming for ray tracing. Some people thought ray tracing was a gimmick that wouldn't take off; now we have the ability to ray trace 4K at 60fps. Nvidia is actually targeting people who are waiting to buy new consoles this fall with its super-low RTX 3070/3080 pricing compared to where the RTX 2070/2080 was. Last I heard, the 4K output of the Xbox Series X is actually upsampled, not real 4K. AMD probably could have done better with a discrete chip like older consoles had, even if it still didn't match.

Now, back to my original point.
Apple needs to let Nvidia back into the game and continue working with AMD. Without discrete graphics as an option on the new Apple Silicon Macs, they are going to be unappealing to many. It will be even worse if they get rid of third-party graphics drivers on Apple Silicon for eGPUs and expansion cards on the Mac Pro.

"Go buy a PC to game" shouldn't be the refrain. Apple isn't even used that much in business, its become for of a luxury consumer brand. We need more options to make the graphics appealing, not less.

Apple will need discrete graphics on the Mac Pro. Perhaps they will roll their own?
 
Regardless of the implications for the MacBooks, looks like I'll definitely wait until fall before buying a Windows laptop.
 
Well, look at it from this perspective. An A13 core running at 2.66GHz performs within a 15% margin of a 5.0GHz Intel desktop CPU while consuming 10x less power. Now, that clock seems to be an upper limit for the A13, but if Apple is able to push the clocks to, say, 3.3GHz without blowing up power consumption (which should be realistic with the new 5nm process), they will outperform anything Intel has to offer in the coming years while still having a 3x-5x power consumption advantage.
Power grows roughly with the square of the clock speed. Going from 8/3 GHz (2.666) to 10/3 GHz (3.333) would increase power by (10/8) squared, i.e. 100/64, a factor of about 1.56. That is without any process change.
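Written out under that same square-law assumption:

```latex
P \propto f^2
\quad\Longrightarrow\quad
\frac{P_2}{P_1}
  = \left(\frac{f_2}{f_1}\right)^2
  = \left(\frac{10/3\,\text{GHz}}{8/3\,\text{GHz}}\right)^2
  = \left(\frac{5}{4}\right)^2
  = \frac{25}{16}
  = 1.5625
```

In practice the factor would likely be worse, since higher clocks usually also require a voltage bump.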
 
I think it is possible they'll ship an updated 16" this fall with an Intel CPU for the last time before moving the entire MacBook Pro line to ARM. I don't know if it will have this CPU or the 10th gen, but I'd expect a CPU update before they ditch Intel completely.
This CPU is inappropriate for MBPs.
 
It won't be better though; it might be more power efficient, but not more powerful. I mean, they skimp on the hardware as it is :/ You have to pay through the roof to spec it up, and even then you can build yourself something much better on Windows. With the likes of Nvidia bypassing the CPU to offload what they want onto the GPU... it's gonna be crazy for encoding and stuff that has been throttled by the CPU.

Meanwhile Apple moves in-house, and their GPU on the mobile side has never been anything special.
It’s already more powerful. And more power efficient means more powerful - it gives you headroom to increase voltage and clock speed while staying within the power budget of the enclosure.
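To put a rough number on that headroom, here is a sketch inverting the same square-law approximation used earlier in the thread. The 5 W chip and 15 W enclosure budget are illustrative assumptions only:

```python
# How much clock headroom does a bigger power budget buy?
# Inverting the square-law approximation P ~ f^2: f2/f1 = sqrt(P2/P1).
from math import sqrt

chip_power_w = 5.0         # assumed phone-class SoC power
enclosure_budget_w = 15.0  # assumed laptop enclosure budget

headroom = sqrt(enclosure_budget_w / chip_power_w)
print(f"clock could rise ~{headroom:.2f}x within budget")  # ~1.73x
# Real chips need higher voltage at higher clocks (P ~ f * V^2),
# so the usable headroom is smaller than this idealized figure.
```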
 
The integrated Iris Xe graphics are better than 90 percent of all discrete notebook GPUs sold last year

Apple said something like this about the iPad Pro's A12X.

Apple said that the A12X Bionic was faster than 92% of PC laptops available at the time.

Yeah, the A14X, or whatever the next Apple Silicon chip in the MacBook is called, will blow these Tiger Lake chips away.
 
This is an incredibly impressive mobile jump for Intel. I hope it makes its way to the MBA. I’m not in the market for a new notebook anytime soon, but this makes me optimistic about the future.
 
I'm sorry, but you are mixing a bunch of basic facts with a load of random hogwash. Yes, ARM is a load/store ISA, so it sometimes needs two instructions to do what x86 can do with one, but then again ARM has plenty of instructions that don't exist in x86, not to mention that it has twice as many registers and generally needs less stack juggling. The "running more instructions takes more time" claim is plainly wrong, since both ARM and x86 CPUs translate instructions into RISC-like micro-ops which then get reordered on the fly. It's just that x86 can sometimes encode a sequence of such micro-ops more compactly, but then again it uses variable-length instructions, which are more costly to decode.

Apple CPUs have been analyzed in detail; they are very wide superscalar designs with large caches, perfectly capable of executing any kind of code (save for HPC workloads, where the newest Intel has an advantage with 512 bits per clock of AVX; Apple's mobile chips can currently only do 384 bits per clock). No, you don't need to tweak your code to take advantage of this: the CPU will execute it out of order automatically to ensure that the back end is efficiently utilized, just like on any other high-performance superscalar CPU.

And yes, all it takes is a recompile, unless your program has latent bugs (that might not become apparent when executed on an x86-64 CPU, such as code that relies on x86's stronger memory ordering) or you are using CPU-specific features (which you will need to rewrite). Obviously you'll need to test your apps on the new architecture, but that goes without saying; it's the same as updating your app for a new OS.
Howdy gnasher729,

I take it that you are not a developer? It is very naïve to think that all it takes is a quick recompile to get a program to work on Apple Silicon. From a pure "will it run" standpoint, what you say is technically true. The program will open, but there is no guarantee that it will run very well. It may run very slowly, or it may run too fast (not likely, but possible); things that just worked before may take too long to execute, causing the OS to assume the program has hung. It is more complex than just a recompile. Simple applications that do not require a ton of computational power should perform fine, even if a bit slower due to their design, but performance-sensitive applications will need to be tweaked a bit to run effectively. Apple Silicon uses the ARM instruction set, which is RISC, meaning that it runs fixed-length, simple (Reduced Instruction Set Computing) instructions. Operations that take only one instruction on an Intel CPU will have to be broken down into multiple instructions on RISC (for example, an x86 add that works directly on memory becomes a separate load, add, and store on ARM), and running more instructions of course takes more time. There are things that can be done to mitigate this (pipelining, increasing the number of instructions that can be executed per clock, etc.), but it has to be done. That is why Apple made the dev kits available so early, and also why they announced that your iOS apps can run, because I imagine that, for a little while at least, these apps will perform better than the initial set of recompiled apps. Good luck!

Rich S.

Edit and update: After seeing all of the negative responses to my post, I went back and re-read what I had posted. My overall intent is still correct, but some of what I wrote is wrong; thank you to the folks who pointed it out. Some tasks that can be done with one instruction on CISC have to be done with multiple instructions on RISC. This is a fact, and one of the differences between CISC and RISC. Where I was wrong was in the idea that a complex instruction on a CISC processor takes only one clock cycle to execute. It may take the CISC CPU multiple clock cycles to finish the task, meaning that a RISC CPU taking multiple instructions to complete it will not necessarily take more overall execution time.
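A toy model of that correction; the cycle counts are invented to illustrate the principle, not taken from any real CPU:

```python
# Instruction count alone does not determine execution time;
# cycles per instruction matter just as much. Invented numbers:
cisc = {"instructions": 1, "cycles_per_instruction": 3}  # one complex op
risc = {"instructions": 3, "cycles_per_instruction": 1}  # load / add / store

for name, cpu in (("CISC", cisc), ("RISC", risc)):
    total = cpu["instructions"] * cpu["cycles_per_instruction"]
    print(f"{name}: {total} cycles")
# Both take 3 cycles here, despite the 3x difference in instruction count.
```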

This aside does not invalidate my assertion that some applications will take more than a simple recompile to run well. Yes, some applications will work fine with just a recompile. Others will work, perhaps a little slower, or perhaps a little faster, due to the differences in how the CPUs work. We will not really know until Apple Silicon gets out into the wild. There are reasons Apple has been telling folks not to publish benchmarks from the Mac mini developer system, as it does not represent what they (Apple) will be shipping.

Architecture transitions are never simple.

Lastly, and perhaps most importantly, I want to apologize to gnasher729 for my opening statement. I did not mean for it to sound like an attack, but looking back at how I wrote it, I don't see how it couldn't be perceived as one. An example of a current program (and a rather timely one, given the current situation with Epic) which will take more than a simple recompile is the Unreal Engine development environment. The engine itself is already used in multiple games on iOS (as well as Switch and Android), but the development environment runs on x64 macOS, GNU/Linux, and Windows. They will need to port this to Apple Silicon macOS, and I think, but am not sure, that it still uses OpenGL on macOS. I also know from firsthand experience that the development environment performs better on Windows than on macOS on the same hardware.

Thanks!
 
In the Business World, it's the software that matters, NOT the hardware!

Apple's custom Si Macs will very likely establish a beachhead with Gamers & Hobbyists first!

What good is having a Mac that mostly runs iOS apps?

Isn't that what an iPad is?

You need to educate yourself, because the new Macs will still be able to run pretty much all of your old software. Software will either get updated to run on Apple Silicon, or it will run under Rosetta 2 with only a minor performance hit.
 