But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.

Stop caring about OpenGL and Vulkan. You will never see this again on macOS.

Your job is to rewrite the rendering engine in Metal. That's it. If you don't, others will, and they'll have far better performance, stability and support than your product will ever have. And you probably have the workforce at Autodesk to do this. Nothing is impossible in software engineering. If you can't do this, your software is just poorly written. I can't believe Autodesk's developers are still focused on OpenGL when Metal has been out on the Mac since 2015 and Apple clearly states, WWDC after WWDC, that this is the only graphics API that will be supported on all Apple products in the near future. If Autodesk hasn't begun to rewrite the whole thing in Metal, they are f*cking late, nothing else.

RDNA2 will have ray tracing. Pretty sure Apple Silicon will have some kind of hardware ray tracing too when they launch high-performance silicon for the highest-end Macs. NVIDIA has it, AMD will have it, Apple will have it, especially if they wanna grow in the gaming market.
 
Apple needs to do better than this for desktop performance. I am a little scared now.

EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared though.

You missed a VERY IMPORTANT point, Mad Hatter - these results are for an x86 Geekbench running in EMULATION on an ARM chip, not the ACTUAL Geekbench reflecting the actual speed of the machine. No computer on Earth has ever been able to emulate an x86 at better than 60% native speed, and Rosetta 2 does amazingly well - usably well - close to the theoretical maximum. The idea is NOT that you're gonna be running NEW software in emulation - new software will be recompiled to be native. It's so you can run the OLD software - software that was designed for slower Macs in the first place.

You're not the only one who seems to have misunderstood benchmarks running in emulation - many online reviews are doing the same thing, and it's a complete misunderstanding. This really needs to be better understood by the public. Apple in no way designed this to be like Transmeta - a device purely for emulating x86 code. Quite the contrary - it is an extremely nice courtesy of them to put so much effort into simulating an x86 for legacy code.

Do you remember the first PowerPC Macs running 68K Mac software in emulation? That was UNUSABLY slow - hundreds of times slower. Half speed is quite usable. I can still use my 17" 2010 MacBook Pro quite well, and it's about half speed.
 
Thanks for your gracious explanation, JDIAMOND. It seems that many have attacked my initial remark on the matter - as cursory as it was - as an affront to everything decent in the world, or some kind of blasphemy from the devil himself. The irony is that I am not an Apple hater. I have been using Apple products for longer than most on this forum, and I am not an opponent of the ARM transition at all - as long as Apple can do it right for those of us who depend upon and enjoy macOS and need to use our computers as a means to make a living rather than just to consume products for entertainment. If you are telling me that this chip can deliver i7 or i9 performance with a full-featured OS running several concurrent professional-grade programs, I will be very happy.
 
Important Q....

Does anyone think Apple will bring touch screen capabilities to their ARM macs?
 
If you are telling me that this chip can deliver i7 or i9 performance with a full-featured OS running several concurrent professional-grade programs, I will be very happy.

No, this chip won't be able to do that, because it's literally an eviscerated iPad stuffed into the body of a Mac mini.

The "real" Apple Silicon Mac CPUs are expected to come out this autumn and are expected to be very competitive with the Intel hardware they are replacing. My personal expectation is a CPU at least 20% faster and a GPU 2x faster than those in the current i9 MacBook Pro.
 
Do you remember the first PowerPC Macs running 68K Mac software in emulation? That was UNUSABLY slow - hundreds of times slower.

I do remember, and on the slowest Power Mac at that: the 6100/60. It was a bit slow, but more than that, it was buggy (the dreaded Error 11). "Hundreds of times slower", though? Nah. Large portions of the OS were still running in 68k emulation for years to come.

You're misremembering.
Important Q....

Does anyone think Apple will bring touch screen capabilities to their ARM macs?

Maybe. Maybe not. It has nothing at all to do with ARM.

It may have something to do with how macOS 11 Big Sur has larger padding in many of its controls.

In any case, if Apple wants to add touch, they will. The architecture isn't stopping them.
 
The reason there is a penalty due to Rosetta is that the translation from x86 to Arm is not perfectly smart. You are taking code that was optimized for x86 (in terms of the order of instructions, which instructions were chosen, etc.) and translating it without access to the source code. The result is NOT what you would get if you took the same source code and simply compiled it for Arm. It will be less efficient: executing instructions that may not be required, not using Arm instructions that might be faster than the Arm instructions that directly correspond to the x86 instructions, etc.

I like to think of Rosetta as using Google Translate. It works in that you can have a conversation, but the results will never be as good as being a native speaker.
 
Apple needs to do better than this for desktop performance. I am a little scared now.

EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared though.

I have to ask. Just what part of "prototype development machine" did you not understand?
1) Guess this person doesn’t depend upon income from the App Store.

2) This is NOT the final chip that is going into production. It’s for the DTK only. The benchmarks are irrelevant.

Yes. This belongs in the 'why did you waste our time with this nonsense?' category.
 
I have to ask. Just what part of "prototype development machine" did you not understand?
Save us the faux insight, please. Whether something is a prototype or not does not necessarily speak to the performance or capability of the device in question. It really only speaks to its purpose and the fact that it is not for production.
 
These two sentences don't seem to be connected.

Compilers are optimized to convert developers' intent into assembly language instructions. If you then convert those instructions to another assembly language, it's like making a photocopy of a photocopy. You lose information each time.

A compiler does a better job of optimizing code than a binary translator, because (1) it has more information about designer intent and (2) it can take as much time as it wants to do so.

So, for example, if I have code like:

a = a + 1

In Intel code it might be:

add [memory a] 1 -> [memory a]

That's one instruction but it takes a lot of time because it accesses memory twice (once for a read, once for a write).

Rosetta might translate that to Arm like this:

load r0, [memory a]
add r0, 1 -> r0
store r0, [memory a]

This would take essentially the same amount of time as the Intel instruction.

But a compiler that understands Arm may instead have simply done:

add r0, 1 -> r0

That would be much smarter. The load/store were there because x86 doesn't have a lot of registers. Not necessary on Arm.


These sorts of optimizations CAN be done by Rosetta, but it gets harder and harder the more you try to do it, and it will likely never be as good as just compiling to Arm in the first place.

Good example! It also demonstrates that the recompiler has to faithfully perform all writes, even ones that would not be necessary in native Arm code, because it cannot prove that a memory write has no side effect later. Part of the reason is that the information about memory objects from the high-level language is gone.
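
To make that concrete, here is a minimal C sketch (function and variable names are invented for illustration). A compiler working from source may only merge the two increments if it can prove that p never points at counter; a binary translator, which sees only loads and stores with no type or object information at all, has even less to go on, so it must keep every store.

#include <stdio.h>

static int counter = 0;

void bump(int *p) {
    counter = counter + 1;   /* store #1 */
    *p = 42;                 /* may point at counter itself */
    counter = counter + 1;   /* store #2: cannot be merged with #1
                                unless p != &counter is provable */
}

int main(void) {
    bump(&counter);          /* aliasing actually happens here */
    printf("%d\n", counter); /* prints 43, not 2 - both stores mattered */
    return 0;
}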
 
Others have made this point, but I'll reiterate it: We shouldn't take Geekbench scores too seriously, especially when it comes to a cross-platform comparison.

Linus Torvalds posted this on May 31, 2019:

"...But yes, I very much agree that you shouldn't treat GB as a "cross system" benchmark.

We've seen this before: cellphones tend to have simpler libraries that are statically linked, and at least iOS uses a page size that would not be relevant or realistic on a general purpose desktop setup.

At the other spectrum of issues, cellphones obviously tend to have thermal limits that may not be relevant in other form factors, although some of the "benchmark mode" tweaks might hide some of that."

Source: https://www.realworldtech.com/forum/?threadid=185109&curpostid=185132
 

I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than the A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya's, but still well beyond 16 GB of RAM.

Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.

But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.

(Usual disclaimer: these are my opinions, not my employer's)

The fact that you work at Autodesk and make this apples-to-oranges comparison makes me a little worried.

"Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable"
What is the above line supposed to correspond to in the context of the discussion? Can you please show me the Intel/AMD portable machine on the market that has 96GB++ of memory with a 7W SoC (i.e., including DRAM and GPU)?

It's like me giving you my Fiat Punto to test drive, and you say, "WTF, my Porsche runs circles around the Ferrari!!!"
Just to make it clear: you compare top-of-the-line, not-yet-released chips (Zen 3 Ryzen 9 at $700, Threadripper at $3000) to a 2-year-old iPad chip and make a direct logical deduction that future high-end Apple Silicon won't be sufficient.

Talk about logic jumps and common sense...
 
Others have made this point, but I'll reiterate it: We shouldn't take Geekbench scores too seriously, especially when it comes to a cross-platform comparison.

Linus Torvalds posted this on May 31, 2019:

"...But yes, I very much agree that you shouldn't treat GB as a "cross system" benchmark.

We've seen this before: cellphones tend to have simpler libraries that are statically linked, and at least iOS uses a page size that would not be relevant or realistic on a general purpose desktop setup.

At the other spectrum of issues, cellphones obviously tend to have thermal limits that may not be relevant in other form factors, although some of the "benchmark mode" tweaks might hide some of that."

Geekbench is not a good benchmark. It's like running a distance race where everyone runs a different course. Geekbench does not execute the same code on different platforms, and it is not really transparent about what exactly it measures. They claim to measure real-world performance, but then they end up throwing everything together into one big confusing score.

But at least in this case, I'm OK with using Geekbench, because it can provide some idea of the efficiency of Rosetta on ARM relative to a native Intel Mac. Of course, these tests need to be run on the actual Apple Silicon Macs, not the iPad box they threw together for devs.

P.S. In regards to page size - 16KB page sizes are coming to the Mac. Nobody talks about this since people prefer to complain about icons, but this is a very important change that was 20 years overdue. As usual, Apple takes the lead in things that need to be done. Not sure why Torvalds refers to 16KB pages as not relevant or unrealistic, but then again he has always been a very opinionated guy (for better or worse).
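
If you're curious what your own machine uses, here is a minimal sketch using the standard POSIX sysconf() call (nothing Apple-specific assumed): Intel Macs report 4096 here, and code that hardcodes that number instead of asking the OS is exactly the kind of thing a page-size change breaks.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* ask the OS for the VM page size instead of assuming 4 KB */
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}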
 
Macs are going to suck for years now. It's hard enough to get incompetent dinosaurs like Avid to do relatively easy things like move to 64-bit.

But to reissue every app and then every plug-in for ARM?

Hahaha! Apple is headed back to the backwater it wallowed in through the ‘90s.
 
The fact that you work at Autodesk and make this apples-to-oranges comparison makes me a little worried.

"Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable"
What is the above line supposed to correspond to in the context of the discussion? Can you please show me the Intel/AMD portable machine on the market that has 96GB++ of memory with a 7W SoC (i.e., including DRAM and GPU)?

It's like me giving you my Fiat Punto to test drive, and you say, "WTF, my Porsche runs circles around the Ferrari!!!"
Just to make it clear: you compare top-of-the-line, not-yet-released chips (Zen 3 Ryzen 9 at $700, Threadripper at $3000) to a 2-year-old iPad chip and make a direct logical deduction that future high-end Apple Silicon won't be sufficient.

Talk about logic jumps and common sense...

He only said that the kit given by Apple is not suitable for him to do his job as a developer.
This is a real issue.
 
He only said that the kit given by Apple is not suitable for him to do his job as a developer.
This is a real issue.

With ridiculous demands like this, I am sure they have some sort of build server setup that can do that comfortably. You don't have to build the software on the DTK, it's more for testing (as was pointed out by multiple people in this thread).
 
The fact that you work at Autodesk and make this apples-to-oranges comparison makes me a little worried.

"Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable"
What is the above line supposed to correspond to in the context of the discussion? Can you please show me the Intel/AMD portable machine on the market that has 96GB++ of memory with a 7W SoC (i.e., including DRAM and GPU)?

It's like me giving you my Fiat Punto to test drive, and you say, "WTF, my Porsche runs circles around the Ferrari!!!"
Just to make it clear: you compare top-of-the-line, not-yet-released chips (Zen 3 Ryzen 9 at $700, Threadripper at $3000) to a 2-year-old iPad chip and make a direct logical deduction that future high-end Apple Silicon won't be sufficient.

Talk about logic jumps and common sense...

That's not what @warrenr is saying. They're saying they wish Apple had provided a DTK that's a bit more powerful, to make recompiling a more pleasant experience.

It's kind of moot, though. I don't think Apple is providing these as a build machine, but as a target. You build on your existing computers, then try it out on the DTK.
 
That's not what @warrenr is saying. They're saying they wish Apple had provided a DTK that's a bit more powerful, to make recompiling a more pleasant experience.

It's kind of moot, though. I don't think Apple is providing these as a build machine, but as a target. You build on your existing computers, then try it out on the DTK.

It is still worrisome that an Autodesk developer thinks he needs to compile on the test machine... Hopefully he was just the most clueless developer in the Autodesk workforce.
 
With ridiculous demands like this, I am sure they have some sort of build server setup that can do that comfortably. You don't have to build the software on the DTK, it's more for testing (as was pointed out by multiple people in this thread).
I used to work at HP.
I can confirm we had build servers for this with insane hardware: quad-socket Intel 18-core machines (the max at the time), 1 TB of RAM, RAID arrays of NVMe SSDs. These machines were dedicated only to building software.
So "it takes 96 GB of RAM to build Maya"? That's cute...
 
I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not be a thermal reason, and it's always plugged into a wall socket, so saving power ought not be an issue either.

Why underclock a temporary quick-and-dirty machine that will never go into production, aside from some beta testing?

- The whole marketing dept. will not want to show off yet, so they'll be asking engineering to hold back what can be done.
- Reliability: less heat, fewer problems, less cutting edge of what's possible, less testing. From an engineering point of view, this is not a finished product that underwent everything a real product has to go through to be ready for the market.
- Less engineering effort means getting the machines into the hands of those who need to port their applications sooner, instead of pouring everything you know how to do into it and then having to make sure it all works. Just do enough that it'll be a solid thing, not a cutting-edge thing.
- It's not just a CPU; you have I/O and other supporting systems that need to integrate with it all. The iPad Pro this CPU was originally made for doesn't have TB3 ports; it has some internal stuff (speakers, microphone, camera, digitizer, WiFi, GPS receiver (in some), LTE/GSM/... modem, Bluetooth, some more sensors, etc.) and just one USB-C port. For this machine, I've actually seen nowhere that mentions what ports it has. But those ports (whatever they are) need to integrate into a single machine and need to perform well enough to get the apps ported. Now, if they took those components out of a regular Mac mini (which might be hard with, e.g., the TB3 ports, as that's Intel stuff right there), those components were made with certain expectations about how the rest of the motherboard operates. You'd have to match them somehow to the actual performance of the CPU, so an A99 CPU that's 5 times as fast as the i7 it replaces would be throttled by all the supporting electronics in the machine that aren't designed to operate that fast.

So it's a matter of work to solve all these things in a reliable fashion before releasing the machine to your beta group.
 
But at least in this case, I'm OK with using Geekbench, because it can provide some idea of the efficiency of Rosetta on ARM relative to a native Intel Mac. Of course, these tests need to be run on the actual Apple Silicon Macs, not the iPad box they threw together for devs.
Except that most here have been using the GB results to do a cross-platform comparison of Intel vs. Apple Silicon, and not merely to assess the effect of Rosetta.

And when it comes to Rosetta, here we are only seeing the effect of Rosetta on one particular application—Geekbench. I'm not sure if that can be generalized, since the extent to which the Rosetta-produced binary is suboptimal for Geekbench may be different from the extent to which it is suboptimal for other programs.
 
Important Q....

Does anyone think Apple will bring touch screen capabilities to their ARM macs?


Nope, we're heading for unification of apps, so there's no need. iPad and Mac will be the same thing, with the iPad just being another item in the lineup of computers, all of which will be able to run the same software since they will share a common architecture.
 