I don’t doubt anything you say.
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.
This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.
Maya needs 96 GB or more of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. Those CPUs are 4x faster than the A12Z. Fusion's requirements for decent compile performance are somewhat lower than Maya's, but still well beyond 16 GB of RAM.
Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.
But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.
(Usual disclaimer: these are my opinions, not my employer's)
If you are a game developer you probably have a decent gaming rig anyway.
Dear Tim Cook, I am a games developer for macOS and Windows (Boot Camp), and I need Intel processors on professional Macs, because a lot of game engines compile for Intel processors!!! (Unreal Engine 5 (Intel, Mac OS X), Unity (Intel, Mac OS X), Decima (Windows), Frostbite (Windows), Dunia (Windows), id Tech (Windows))
I find it amusing that so many are mentioning that the chip benchmarked here is underclocked. The clocks are only 90 MHz lower; that hardly has a noticeable impact on performance.
I have run some benchmarks on an iPhone XR (2 + 4 cores), and each little core has about 10% of the performance of a fast core. But iOS knows how to handle this, so you get 2.4 times the performance of a single fast core: again a 20% difference over the two fast cores alone. Note that this didn't work on the A11, because running the little cores at max slowed the fast cores down a little, so you ended up gaining very little.
It's not their purpose to run "demanding tasks like a benchmark", but every bit helps. On an iPhone XR (2 + 4) it makes a 20% difference; on this machine it would be 10%. Same as raising the clock rate from 2500 to 2750 MHz. And since the A12 you can run the slow cores as hard as you like, for as long as you like, without slowing anything else down.
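A quick sanity check of those figures, as a minimal sketch: the 0.1-fast-core weighting per little core is taken from the posts above, and treating this machine as a 4 + 4 layout is an assumption.

```c
/* Back-of-the-envelope check of the figures above. The 0.1 weighting
 * per little core comes from the posts; the 4 + 4 layout is an assumption. */
#include <stdio.h>

int main(void) {
    /* iPhone XR: 2 fast + 4 little cores */
    double xr = 2.0 + 4 * 0.1;                                 /* 2.4 fast-core units */
    printf("XR gain:  %.0f%%\n", (xr / 2.0 - 1.0) * 100.0);    /* -> 20% */

    /* This machine: 4 fast + 4 little cores */
    double dtk = 4.0 + 4 * 0.1;                                /* 4.4 fast-core units */
    printf("4+4 gain: %.0f%%\n", (dtk / 4.0 - 1.0) * 100.0);   /* -> 10% */

    /* The equivalent clock bump on the fast cores alone: */
    printf("clock:    %.0f MHz\n", 2500.0 * dtk / 4.0);        /* -> 2750 MHz */
    return 0;
}
```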
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not be a thermal reason, and it's always plugged into a wall socket, so saving power ought not be an issue either.
Actually it's 3.68%, and performance doesn't scale perfectly with clock anyway, so the actual performance loss is lower than that.
About 4% impact. A lot? Not a lot? Since we are pretending these numbers mean something, that's up to you.
And did you miss that the Unity development kit is already running on the new ARM machines...
Dude, it's just an instruction set. Heck, Unreal Engine 5 is already targeting iOS (read: "Apple Silicon"). I didn't even check the others because it's so "*shrug* it will be ported easily".
Turns out that it scales pretty closely with clock, especially for small changes in clock. I always thought it shouldn't, but every time I worked on a CPU and sped up the clock by x%, benchmarks improved, on average, by x%.
”page doesn’t exist or is private”
With all the speculation on this thread about how easy or hard it will be to port apps to Apple Silicon, I thought it would be useful to hear from an actual developer of a well-known and respected Mac app (one of the "boots on the ground", as it were), so I posted on Keyboard Maestro's forum, asking the dev for his thoughts on the transition. Here are my questions and his reply:
BTW, while I have no affiliation with KM, I don't mind giving it a strong plug: I find it indispensable for creating app-specific and global macros and linking them to keyboard shortcuts. So I'd like to see its continued availability on future Macs; it's one of my most important productivity tools.
I thought one of his most telling comments was the following:
"But of course, a lot of this is up to Apple as well, they may cause any number of deliberate or unintentional blockers for Keyboard Maestro in Big Sur or on ARM. Many of the things that Keyboard Maestro does and that Keyboard Maestro customers depend on are unfortunately not things that are a focus for Apple. Time will tell."
This is an important practicality that is often missed in these discussions.
Apple did decide to launch a two-year-old CPU design, run it on just 4 of its 8 cores, underclock it slightly, and run everything through Rosetta.
Both this ARM chip and my 2600 have 4 cores. But my 2600 has the advantage of Hyper-Threading.
All this is irrelevant. The main task of the DTK is to test your builds, not do the actual build. You can do the actual build on any system with Xcode installed.
How does the ARM architecture differ from x86?
Is the x86 architecture specially designed to work with a keyboard while ARM expects to be mobile? What are the key differences between the two? (stackoverflow.com)
"ARM has two different instruction encoding modes: ARM and THUMB. In ARM mode, you get access to all instructions, and the encoding is extremely simple and fast to decode. Unfortunately, ARM mode code tends to be fairly large, so it's fairly common for a program to occupy around twice as much memory as Intel code would."
I'd trust Stack Overflow more than I'd trust you.
You keep harping on about your 230W Core i7 2600, which, checking Geekbench, gets slightly lower single-thread scores at 3.4 GHz than this 2.4 GHz Apple Silicon.
The i7 2600 is a 95W chip, tops.
And it's on the 32nm node, it's 10 years old, and it's not even the best i7 Intel launched that generation. The 2700K gets better scores in Geekbench, and Intel's 15W, 10nm mobile i7 easily outscores the 2700K in Geekbench.
You said "your 230W Core i7 2600", which is the CPU, not the system.
This was a specific response to someone talking about his specific system and its system TDP, so I mentioned the likely system TDP of the dev mini.
Of course Intel has moved on since 2011, and current 15W chips outperform the 95W chips of a decade ago.
What we have is what a ~7.5W-TDP part performs like under Rosetta (we don't know if the power limit is higher in the dev box, so it may hold turbo clocks longer than the iPad), plus a range of benchmark comparisons, such as 15W Ice Lake, where it compares similarly.
We can also guess that the final silicon will be +20% (A13) +20% (A14) +30% (native) +30% (higher clocks), plus more cores, but we don't know the exact figures here.
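Taken at face value, those guesses compound multiplicatively rather than additively. A minimal sketch of the arithmetic, using the poster's own speculative percentages (none of these are confirmed figures):

```c
/* The guessed gains compound multiplicatively, not additively.
 * Every percentage here is the poster's speculation, not a known figure. */
#include <stdio.h>

int main(void) {
    double factor = 1.20   /* A12Z -> A13 generation    */
                  * 1.20   /* A13 -> A14 generation     */
                  * 1.30   /* native code vs. Rosetta   */
                  * 1.30;  /* higher clocks in a Mac    */
    printf("combined: %.2fx, before any extra cores\n", factor); /* ~2.43x */
    return 0;
}
```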
You shouldn't; the accepted answer there is overly simplistic. For example, as an ISA comparison it uses "repe cmpsb", which is terribly slow on modern CPUs and therefore not used. It is a remnant of older x86, where Intel tried to be more CISC-y. When you look at modern x86 instructions, like AVX, they all follow RISC design principles.
Regarding code size, here is a small comparison using a simple quicksort C implementation (https://godbolt.org/z/t54RiQ):
| | ARM64 | x86-64 |
| --- | --- | --- |
| optimized for size (-Os) | 108 bytes | 75 bytes |
| optimized for performance (-O2) | 128 bytes | 171 bytes |
| optimized for performance (-O3) | 128 bytes | 258 bytes |
Generally, for short, trivial code (the kind people like to show in examples but that never occurs in real life), x86 will have denser code, since the more frequently used instructions are shorter. This is also the reason the compiler can optimize x86 better for size: it can choose shorter instructions even if the generated code is suboptimal (there's not much freedom on ARM in this regard). For complex algorithms (especially ones involving a lot of computation), ARM will often produce shorter code, since a) newer Intel instructions are longer, and b) ARM has more registers, so the compiler has more room to work.
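The exact code is behind the godbolt link above; as a stand-in, here is a minimal sketch of the kind of quicksort such a size comparison compiles, assuming a plain recursive Lomuto-partition implementation rather than the linked code:

```c
/* A minimal in-place quicksort of the kind such a size comparison would
 * compile. This is an assumed stand-in, not the code behind the godbolt link. */
#include <stdio.h>

static void swap(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* Lomuto partition: v[hi] is the pivot; elements <= pivot move left of the split. */
static int partition(int *v, int lo, int hi) {
    int pivot = v[hi];
    int i = lo;
    for (int j = lo; j < hi; j++)
        if (v[j] <= pivot)
            swap(&v[i++], &v[j]);
    swap(&v[i], &v[hi]);
    return i;
}

static void quicksort(int *v, int lo, int hi) {
    if (lo < hi) {
        int p = partition(v, lo, hi);
        quicksort(v, lo, p - 1);
        quicksort(v, p + 1, hi);
    }
}

int main(void) {
    int v[] = {5, 2, 8, 1, 9, 3};
    int n = (int)(sizeof v / sizeof v[0]);
    quicksort(v, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", v[i]);          /* prints: 1 2 3 5 8 9 */
    printf("\n");
    return 0;
}
```

The size numbers in the table would come from compiling just the sort routines (not main) at each optimization level and comparing the emitted code bytes per target.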