The best ARM Mac hardware will be in notebooks. I imagine one will last something like 24 hours before needing a charge...
 
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya, but still well beyond 16 GB RAM.

Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.

But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.

(Usual disclaimer: these are my opinions, not my employer's)
I don’t doubt anything you say.

What you can expect is that the first systems will be geared to users who don't use high-end software. They will run Apple apps, Office, and iOS apps well, and some other basic apps via Rosetta. These prerelease dev kits are meant for developing apps for those people.

High-end Intel systems will remain the go-to for your customers for the next two years.

In that time you will have access to a much more robust setup to develop on.
 
Dear Tim Cook, I'm a games developer for macOS and Windows (Boot Camp), and I need Intel processors on professional Macs because a lot of game engines compile for Intel processors! (Unreal Engine 5 (Intel, macOS), Unity (Intel, macOS), Decima (Windows), Frostbite (Windows), DUNA (Windows), id Tech (Windows))
If you are a game developer you probably have a decent gaming rig anyway.

And did you miss that the Unity development kit is already running on the new ARM machines?
 
I find it amusing that so many are mentioning that the chip benchmarked here is underclocked. The clocks are only 90 MHz lower; that hardly has a noticeable impact on performance.
 
I find it amusing that so many are mentioning that the chip benchmarked here is underclocked. The clocks are only 90 MHz lower; that hardly has a noticeable impact on performance.

About 4% impact. A lot? Not a lot? Since we are pretending these numbers mean something, that’s up to you.
 
I have run some benchmarks on an iPhone XR (2 + 4 cores), and each little core has about 10% of the performance of a fast core. But iOS knows how to handle this, so you get 2.4 times the performance of a single fast core, a 20% gain over the two fast cores alone. Note that this didn't work on the A11, because running the little cores at max slowed down the fast cores a little, so you ended up gaining very little.


It's not their purpose to run "demanding tasks like a benchmark", but every bit helps. On an iPhone XR (2 + 4) it makes 20% difference, on this machine it would be 10% difference. Same as changing the clock rate from 2500 to 2750 MHz. And since A12 you can run the slow cores forever as hard as you can, without slowing anything else down.
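The arithmetic behind those percentages can be sketched with a toy throughput model (the 10%-per-little-core figure comes from the post above; everything else is illustrative):

```python
# Toy throughput model: each efficiency ("little") core contributes roughly
# 10% of a performance core, per the estimate in the post above.
def total_throughput(fast_cores, little_cores, little_ratio=0.10):
    return fast_cores + little_cores * little_ratio

# iPhone XR (2 fast + 4 little): 2 + 4*0.1 = 2.4x one fast core,
# i.e. a 20% gain over running the 2 fast cores alone.
xr_gain = total_throughput(2, 4) / 2 - 1    # 0.20

# A hypothetical 4 fast + 4 little machine: only a 10% gain,
# matching the "2500 -> 2750 MHz" comparison above.
dtk_gain = total_throughput(4, 4) / 4 - 1   # 0.10
```

The gain shrinks as you add fast cores because the little cores' fixed contribution is spread over a larger baseline.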

Thx for this info!
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for air flow so there ought not be a thermal reason and it's always plugged in to a wall socket so saving power ought not be an issue either.

This is puzzling for me as well. How long can the iPad sustain 2.5 GHz? Maybe they want a safety margin, as they know many will run it at 100% a lot.
 
I'll toss in one observation: if Maya needs 96 GB to compile while 32 GB compiles Blender master within 10 minutes on an old FX-8350, that speaks poorly to the state of Maya's code base. That being said, 16 GB for a processor designed to work with 2-4 GB on a very locked-down system tells me most of that 16 GB is there for Rosetta's caches, to compensate for the poor system performance. Maya is just going to expose it and be overkill.

On the plus side, Autodesk has an opportunity to optimize the hell out of Maya for x86_64 and ARM64 by the time it reaches Apple Silicon in two years and by then Zen 5 should really scream on it.
 
About 4% impact. A lot? Not a lot? Since we are pretending these numbers mean something, that’s up to you.
Actually it's 3.68% and performance doesn't scale perfectly with clocks anyway so the actual performance loss is lower than that.
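The exact percentage depends on the baseline you assume; taking the iPad Pro's 2.49 GHz as the reference (my assumption), the 90 MHz deficit works out to roughly 3.6%:

```python
base_mhz = 2490   # assumed A12Z clock in the iPad Pro
dtk_mhz = 2400    # reported DTK clock, ~90 MHz lower

# Fractional clock deficit, expressed as a percentage.
deficit_pct = (1 - dtk_mhz / base_mhz) * 100
print(f"{deficit_pct:.2f}%")   # 3.61%
```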
 
If you are a game developer you probably have a decent gaming rig anyway.

And did you miss that the Unity development kit is already running on the new ARM machines?

This is wonderful news to hear. Now bring on zbrush and the arm macs will be the perfect laptop for me
 
Dear Tim Cook, I'm a games developer for macOS and Windows (Boot Camp), and I need Intel processors on professional Macs because a lot of game engines compile for Intel processors! (Unreal Engine 5 (Intel, macOS), Unity (Intel, macOS), Decima (Windows), Frostbite (Windows), DUNA (Windows), id Tech (Windows))
Dude, it's just an instruction set. Heck, Unreal Engine 5 already is targeting iOS (read: "Apple Silicon"). I didn't even check the others because it's so "*shrug* it will be ported easily".

Edit: Unity also targets iOS. The Decima game studio targets iOS. Frostbite Go targets iOS. id Tech has games running on iOS. All of these can be brought over with relative ease.

Sure it takes some time, but it's not a super high hurdle, compared to a whole game engine itself.
 
Actually it's 3.68% and performance doesn't scale perfectly with clocks anyway so the actual performance loss is lower than that.
Turns out it scales pretty closely with clock, especially for small changes in clock. I always thought it shouldn't, but every time I worked on a CPU and sped up the clock by x%, benchmarks improved, on average, by x%.

With bigger clock changes you can start getting into non-linear behavior. And of course some workloads will gain less (e.g. memory-access-dominated workloads, where you don't also speed up the memory "clock"), and rarely some will gain more (certain race conditions will resolve in your favor sometimes and buy you a cycle).
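That dilution effect can be illustrated with a toy latency model (all numbers are made up; the point is only that a fixed memory-stall term shrinks the clock gain):

```python
# Toy model: runtime = core cycles / frequency + fixed memory-stall time.
# DRAM latency doesn't shrink when you raise the CPU clock.
def runtime(freq_hz, compute_cycles, memory_seconds):
    return compute_cycles / freq_hz + memory_seconds

f0, f1 = 2.50e9, 2.75e9   # a +10% clock bump

# Compute-bound: no fixed memory term, so speedup tracks the clock exactly.
speedup_compute = runtime(f0, 1e10, 0.0) / runtime(f1, 1e10, 0.0)   # 1.10

# Memory-bound: half the baseline time is DRAM stalls, so the same
# clock bump yields well under 10%.
speedup_memory = runtime(f0, 1e10, 4.0) / runtime(f1, 1e10, 4.0)    # ~1.05
```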
With all the speculation on this thread about how easy or hard it will be to port apps to Apple Silicon, I thought it would be useful to hear from an actual developer of a well-known and respected Mac app (one of the "boots on the ground", as it were), so I posted on Keyboard Maestro's forum, asking the dev for his thoughts on the transition. Here are my questions and his reply:


BTW, while I have no affiliation with KM, I don't mind giving a strong plug for the product: I find it indispensable for creating app-specific and global macros and linking them to keyboard shortcuts. So I'd like to see its continued availability on future Macs; it's one of my most important productivity tools.

I thought one of his most telling comments was the following:

"But of course, a lot of this is up to Apple as well, they may cause any number of deliberate or unintentional blockers for Keyboard Maestro in Big Sur or on ARM. Many of the things that Keyboard Maestro does and that Keyboard Maestro customers depend on are unfortunately not things that are a focus for Apple. Time will tell."

This is an important practicality that is often missed in these discussions.
 
Apple decided to launch a two-year-old CPU design, run it on just 4 of the 8 cores, under-clock it slightly, and push everything through Rosetta.

Many people don't realize this, but Apple is making a clear statement here. Meaning, if we can do this, can you imagine what we could do with a Mac dedicated chip? Exciting times ahead.
 
Makes you seriously wonder what they are going to be able to pull off with a production Mini and a decent heatsink and fan: eight A14 performance cores, all at a higher clock? Double the multicore score and another third on the single-core score?
 
Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya, but still well beyond 16 GB RAM.

With all due respect, that is barely something to boast about.
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for air flow so there ought not be a thermal reason and it's always plugged in to a wall socket so saving power ought not be an issue either.

A completely wild, unverified, baseless supposition: my guess is that these chips might be factory rejects that work but don't qualify for being used in an actual iPad Pro. Lower clocks then would be to increase stability.
 
Both this ARM and my 2600 have 4 cores. But my 2600 has the advantage of hyper threading.

You keep harping on about your 230 W Core i7 2600, which, checking Geekbench, gets a slightly lower single-thread score at 3.4 GHz than this 2.4 GHz Apple Silicon chip.

This developer machine is essentially an iPad Pro internally, with more RAM. If it draws more than 20-30 W at the socket I would be surprised, because a 7.5 W SoC + LPDDR4 + SSD don't use much. So it's beating your CPU in single-thread at a fraction of the power.

This is a two-generation-old A12 core; the ARM Macs will use A14 cores, which should have around 30-40% higher IPC. They will have a higher TDP, and hence higher clocks. And they will be running native code, which is worth around 30% on its own.

We don't know how many cores Apple will include, but rumours suggest 8 large cores (and 4 small cores), so multithreaded benchmark scores will skyrocket.
 
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya, but still well beyond 16 GB RAM.

Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.

But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.

(Usual disclaimer: these are my opinions, not my employer's)
All this is irrelevant. The main task of the DTK is to test your builds, not do the actual build. You can do the actual build on any system with Xcode installed.
 

"ARM has two different instruction encoding modes: ARM and THUMB. In ARM mode, you get access to all instructions, and the encoding is extremely simple and fast to decode. Unfortunately, ARM mode code tends to be fairly large, so it's fairly common for a program to occupy around twice as much memory as Intel code would."

I'd trust Stack Overflow more than I'd trust you.

StackOverflow in this case has exaggerated the differences, and simplified to the point of unhelpfulness.

Firstly, ARM and THUMB (A32 and T32) are not AArch64. In AArch64 (which is all Apple Silicon will run) you execute A64 code, which uses fixed-width 32-bit instructions operating on 32-bit and 64-bit registers. You cannot change execution state except on an exception boundary, and it is suggested that Apple Silicon doesn't even support A32/T32. http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0801b/IBAIEGDJ.html

A64 code is denser than A32 code.
It is nearly as dense as x86-64 code.
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya, but still well beyond 16 GB RAM.

Your developers will build on the Mac Pro in Xcode as per normal, but they will select the universal binary option.
The final application will be copied to the dev ARM Mac Mini.
The QA testers will run the application to find issues - and for Maya/Fusion this will take a very long time I'm sure, each function would need to be tested. I'm sure a lot of automation exists, but maybe the automation tool needs to be ported too.
The main issue I would expect is where you have optimised x86-64 assembly in the codebase with a low-performance C/C++ fallback. Someone will have to write either optimised C/C++ or hand-optimised ARM for these compute kernels. They may decide to switch to GPU compute via Metal, etc.

This is what this box is for. It's not a development platform, you'll use your existing machines for that. It's a QA platform.
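For the curious, the universal binary a team ships in that workflow is a "fat" Mach-O: a small header listing one slice per architecture. This sketch builds a minimal fat header by hand and parses it back, just to illustrate the layout (offsets, sizes, and alignments here are placeholder values, not a real binary):

```python
import struct

FAT_MAGIC = 0xCAFEBABE                     # big-endian fat binary magic
CPU_TYPES = {0x01000007: "x86_64",         # CPU_TYPE_X86_64
             0x0100000C: "arm64"}          # CPU_TYPE_ARM64

def make_fat_header(cputypes):
    """Build a minimal fat header: magic, arch count, then one fat_arch
    record per slice (cputype, cpusubtype, offset, size, align)."""
    header = struct.pack(">II", FAT_MAGIC, len(cputypes))
    offset = 4096
    for ct in cputypes:
        header += struct.pack(">IIIII", ct, 0, offset, 0, 12)
        offset += 4096
    return header

def parse_archs(blob):
    """Return the architecture names listed in a fat header."""
    magic, nfat = struct.unpack_from(">II", blob, 0)
    assert magic == FAT_MAGIC, "not a fat binary"
    archs = []
    for i in range(nfat):
        cputype, = struct.unpack_from(">I", blob, 8 + i * 20)
        archs.append(CPU_TYPES.get(cputype, hex(cputype)))
    return archs

blob = make_fat_header([0x01000007, 0x0100000C])
print(parse_archs(blob))   # ['x86_64', 'arm64']
```

On a real Mac, `lipo` reports the same information for an actual binary; the QA box simply loads the arm64 slice.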
 
I'd trust Stack Overflow more than I'd trust you.

You shouldn't; the accepted answer there is overly simplistic. For example, as an ISA comparison they use "repe cmpsb", which is terribly slow on modern CPUs and therefore not used. It is a remnant of old x86, where Intel tried to be more CISC-y. When you look at modern x86 instructions, like AVX, they all follow RISC design principles.

Regarding code size, here is a small comparison using a simple quicksort C implementation (https://godbolt.org/z/t54RiQ):

                                   ARM64        x86-64
optimized for size (-Os)           108 bytes    75 bytes
optimized for performance (-O2)    128 bytes    171 bytes
optimized for performance (-O3)    128 bytes    258 bytes

Generally, for short trivial code (like the examples people like to show but that will never occur in real life), x86 will have denser code, since the more frequently used instructions are shorter. This is also why the compiler can optimize x86 better for size: it can choose shorter instructions even if the generated code is suboptimal (there's not much freedom on ARM in this regard). For complex algorithms (especially ones involving a lot of computation), ARM will often produce shorter code, since (a) newer Intel instructions are longer and (b) ARM has more registers, so the compiler has more room to work.
 
You keep harping on about your 230 W Core i7 2600, which, checking Geekbench, gets a slightly lower single-thread score at 3.4 GHz than this 2.4 GHz Apple Silicon chip.

The i7 2600 is a 95 W chip, tops.
And it uses the 32 nm node, it's 10 years old, and it's not even the best i7 Intel launched that generation. The 2700K gets better Geekbench scores, and Intel's 15 W, 10 nm mobile i7 easily outscores the 2700K in Geekbench.
 
The i7 2600 is a 95 W chip, tops.
And it uses the 32 nm node, it's 10 years old, and it's not even the best i7 Intel launched that generation. The 2700K gets better Geekbench scores, and Intel's 15 W, 10 nm mobile i7 easily outscores the 2700K in Geekbench.

This was a specific response to someone talking about his specific system and its system TDP, so I mentioned the likely system TDP of the dev Mini.

Of course Intel moved on since 2011, and current 15W chips outperform 95W a decade ago.

What we have is what a ~7.5 W TDP chip (we don't know if the dev box's is higher, letting it hold turbo longer than the iPad) performs like under Rosetta, plus a range of benchmark comparisons, like 15 W Ice Lake, where it compares similarly.
We can guess that the final silicon will be around +20% (A13) + 20% (A14) + 30% (native) + 30% (higher clocks), plus more cores, but we don't know the exact figures.
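Compounding those guessed factors (all of them speculative, as the post says) gives a rough per-core ceiling:

```python
# All four factors are the guesses from the post above, not measurements.
a13_ipc = 1.20    # A12 -> A13 generational gain
a14_ipc = 1.20    # A13 -> A14 generational gain
native = 1.30     # dropping the Rosetta translation overhead
clocks = 1.30     # higher sustained clocks in a desktop enclosure

# Gains multiply rather than add, since each applies to the previous total.
combined = a13_ipc * a14_ipc * native * clocks
print(f"{combined:.2f}x")   # 2.43x per core vs. the DTK under Rosetta
```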
 
This was a specific response to someone talking about his specific system and its system TDP, so I mentioned the likely system TDP of the dev Mini.

Of course Intel moved on since 2011, and current 15W chips outperform 95W a decade ago.
You said "your 230W Core i7 2600", which is the CPU, not the system.
Also, what's the point of mentioning the TDP of a PC tower when you can just measure the power draw at the wall?

What we have is what a ~7.5 W TDP chip (we don't know if the dev box's is higher, letting it hold turbo longer than the iPad) performs like under Rosetta, plus a range of benchmark comparisons, like 15 W Ice Lake, where it compares similarly.
We can guess that the final silicon will be around +20% (A13) + 20% (A14) + 30% (native) + 30% (higher clocks), plus more cores, but we don't know the exact figures.

Why wouldn't it turbo at max frequency when it has much better cooling than on a tablet?
Anyway, Geekbench isn't really a good benchmark for x86 CPUs in the first place.
 
You shouldn't; the accepted answer there is overly simplistic. For example, as an ISA comparison they use "repe cmpsb", which is terribly slow on modern CPUs and therefore not used. It is a remnant of old x86, where Intel tried to be more CISC-y. When you look at modern x86 instructions, like AVX, they all follow RISC design principles.

Regarding code size, here is a small comparison using a simple quicksort C implementation (https://godbolt.org/z/t54RiQ):

                                   ARM64        x86-64
optimized for size (-Os)           108 bytes    75 bytes
optimized for performance (-O2)    128 bytes    171 bytes
optimized for performance (-O3)    128 bytes    258 bytes

Generally, for short trivial code (like the examples people like to show but that will never occur in real life), x86 will have denser code, since the more frequently used instructions are shorter. This is also why the compiler can optimize x86 better for size: it can choose shorter instructions even if the generated code is suboptimal (there's not much freedom on ARM in this regard). For complex algorithms (especially ones involving a lot of computation), ARM will often produce shorter code, since (a) newer Intel instructions are longer and (b) ARM has more registers, so the compiler has more room to work.

Lol, the point was that I'd trust Stack Overflow over some random dude on these forums. Nobody uses -O3; the benefit is tiny and it carries some added risk. Even the speed benefit of -O2 over -O1 in production is small, and I work on very large software packages.
 