Apple specifically mentioned they were helping Blender with the transition during the WWDC keynote.
Apple needs to do better than this for desktop performance. I am a little scared now.
EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared, though.
Thanks for your gracious explanation, JDIAMOND. It seems that many have attacked my initial remark on the matter -- as cursory as it was -- as an affront to everything decent in the world, or some kind of blasphemy from the devil himself. The irony is that I am not an Apple hater. I have been using Apple products for longer than most on this forum, and I am not an opponent of the ARM transition at all -- as long as Apple can do it right for those of us who depend upon and enjoy macOS and need to use our computers as a means to make a living rather than just consume product for entertainment purposes. If you are telling me that this chip can deliver i7 or i9 performance with a full-featured OS running several concurrent professional-grade programs, I will be very happy.

You missed a VERY IMPORTANT point, Mad Hatter: these results are for an x86 Geekbench running in EMULATION on an ARM chip, not the ACTUAL Geekbench reflecting the actual speed of the machine. No computer on Earth has ever been able to emulate an x86 at better than 60% native speed, and Rosetta 2 does amazingly well -- usably well, close to the theoretical maximum. The idea is NOT that you're gonna be running NEW software in emulation - new software will be recompiled to be native. It's so you can run the OLD software - software that was designed for slower Macs in the first place.
You're not the only one who seems to have misunderstood benchmarks running in emulation - many online reviews are doing the same thing, and it's a complete misunderstanding. This really needs to be better understood by the public. Apple in no way designed this to be like Transmeta - a device purely for emulating x86 code. Quite the contrary - it is an extremely nice courtesy of them to put so much effort into simulating an x86 for legacy code.
Do you remember the first PowerPC Macs running 68K Mac software in emulation? That was UNUSABLY slow, hundreds of times slower. Half speed is quite usable. I can still use my 17" 2010 MacBook Pro quite well, and it's about half speed.
No.

Important Q....
Does anyone think Apple will bring touch screen capabilities to their ARM macs?
If you are telling me that this chip can deliver i7 or i9 performance with a full featured OS running several concurrent professional grade programs, I will be very happy.
Do you remember the first PowerPC Macs running 68K Mac software in emulation? That was UNUSABLY slow, hundreds of times slower.
The reason there is a penalty under Rosetta is that the translation from x86 to Arm is not perfectly smart. You are taking code that was optimized for x86 (in terms of the order of instructions, which instructions were chosen, etc.) and translating it without access to the source code. The result is NOT what you would get if you took the same source code and simply compiled it for Arm. It will be less efficient: executing instructions that may not be required, not using Arm instructions that might be faster than the Arm instructions that directly correspond to the x86 instructions, and so on.
Apple needs to do better than this for desktop performance. I am a little scared now.
EDIT: I retract this statement due to all the negative reactions. But Apple still needs to do better than this for desktop performance. I am not scared, though.
1) Guess this person doesn’t depend upon income from the App Store.
2) This is NOT the final chip that is going into production. It’s for the DTK only. The benchmarks are irrelevant.
I have to ask. Just what part of "prototype development machine" did you not understand?
Save us the faux insight, please. Whether something is a prototype or not does not necessarily speak to the performance or capability of the device in question. It really only speaks to its purpose, and that it is not for production.
These two sentences don't seem to be connected.
Compilers are optimized to convert developers' intent into assembly language instructions. If you then convert those instructions to another assembly language, it's like making a photocopy of a photocopy. You lose information each time.
A compiler does a better job of optimizing code than a hardware translator, because (1) it has more information about designer intent and (2) it can take as much time as it wants to do so.
So, for example, if I have code like:
a = a + 1
In Intel code it might do:
add [memory a] 1 -> [memory a]
That's one instruction but it takes a lot of time because it accesses memory twice (once for a read, once for a write).
Rosetta might translate that to Arm like:
load r0, [memory a]
add r0, 1 -> r0
store r0, [memory a]
This would take exactly the same amount of time as the Intel instruction.
But a compiler that understands Arm may instead have simply done:
add r0, 1 -> r0
That would be much smarter. The load/store were there because x86 doesn't have a lot of registers. Not necessary on Arm.
These sorts of optimizations CAN be done by Rosetta, but it gets harder and harder the more you try to do, and it will likely never be as good as just compiling to Arm in the first place.
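To make the idea above concrete, here is a hypothetical sketch in Python (none of this is Rosetta's actual code or design): a naive instruction-by-instruction translator turns each x86 "add to memory" into a load/add/store triple, and a simple peephole pass then removes a load that immediately follows a store to the same address, since the value is still live in the register.

```python
def naive_translate(x86_ops):
    """Translate each ("add_mem", addr, imm) x86 op into three Arm-style ops."""
    arm = []
    for op, addr, imm in x86_ops:
        if op == "add_mem":
            arm.append(("load", "r0", addr))
            arm.append(("add", "r0", imm))
            arm.append(("store", "r0", addr))
    return arm

def peephole(arm_ops):
    """Drop a ("load", reg, addr) that directly follows ("store", reg, addr)."""
    out = []
    for ins in arm_ops:
        if (out
                and ins[0] == "load"
                and out[-1][0] == "store"
                and ins[1:] == out[-1][1:]):
            continue  # value is already in the register; skip the reload
        out.append(ins)
    return out

# Two back-to-back increments of the same memory location:
prog = [("add_mem", "a", 1), ("add_mem", "a", 1)]
naive = naive_translate(prog)   # 6 ops, including a redundant reload
optimized = peephole(naive)     # 5 ops after removing the load-after-store
```

A compiler working from source would do even better (it could keep `a` in a register across the whole sequence and store it once), which is exactly why translated code rarely matches a native build.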
Only Italians will understand.

Ohhh, I just realized why it's called Rosetta... wow, I feel dumb.
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.
This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.
Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than the A12Z. Fusion's requirements for decent compile performance are somewhat lower than Maya's, but still well beyond 16 GB of RAM.
Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.
But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.
(Usual disclaimer: these are my opinions, not my employer's)
Others have made this point, but I'll reiterate it: We shouldn't take Geekbench scores too seriously, especially when it comes to a cross-platform comparison.
Linus Torvalds posted this on May 31, 2019:
"...But yes, I very much agree that you shouldn't treat GB as a "cross system" benchmark.
We've seen this before: cellphones tend to have simpler libraries that are statically linked, and at least iOS uses a page size that would not be relevant or realistic on a general purpose desktop setup.
At the other spectrum of issues, cellphones obviously tend to have thermal limits that may not be relevant in other form factors, although some of the "benchmark mode" tweaks might hide some of that."
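The page-size point Torvalds raises is easy to check on whatever machine a benchmark is running on. A minimal sketch (Python used purely for illustration):

```python
# Print the virtual-memory page size the OS hands to processes; this is
# one of the cross-platform variables that can skew benchmark comparisons.
# Typical x86 systems report 4096 (4 KiB), while Apple Silicon reports
# 16384 (16 KiB).
import mmap

print(mmap.PAGESIZE)
```

Larger pages mean fewer TLB misses for the same working set, so two chips can score differently for reasons that have nothing to do with raw core speed.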
The fact that you work at Autodesk and still make this apples-to-oranges comparison worries me a little.

"Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable"

What is that line supposed to prove in the context of this discussion? Can you show me an Intel/AMD portable machine on the market that pairs 96GB++ of memory with a 7W SoC (i.e., including DRAM and GPU)?

It's like me handing you my Fiat Punto for a test drive and you saying, "WTF, my Porsche runs circles around this thing!!!"

Just to make it clear: you compare top-of-the-line, not-yet-released chips (a $700 Zen 3 Ryzen 9, a $3000 Threadripper) to a two-year-old iPad chip, and from that you make a direct logical deduction that future high-end Apple silicon won't be sufficient.

Talk about logic jumps and common sense......
He only said that the kit given by Apple is not suitable for him to do his job as a developer.
This is a real issue.
That's not what @warrenr is saying. They're saying they wish Apple had provided a DTK that's a bit more powerful, to make recompiling a more pleasant experience.
It's kind of moot, though. I don't think Apple is providing these as a build machine, but as a target. You build on your existing computers, then try it out on the DTK.
I already worked for HP. With ridiculous demands like this, I am sure they have some sort of build-server setup that can handle it comfortably. You don't have to build the software on the DTK; it's more for testing (as multiple people in this thread have pointed out).
I can understand not using the "efficiency" cores versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not be a thermal reason, and it's always plugged into a wall socket, so saving power ought not be an issue either.
Except that most here have been using the GB results to do a cross-platform comparison of Intel vs. Apple Silicon, not merely to assess the effect of Rosetta.

But at least in this case, I'm OK with using Geekbench, because it can provide some idea of the efficiency of Rosetta on ARM relative to a native Intel Mac. Of course, these tests need to be run on actual Apple Silicon Macs, not the iPad box they threw together for devs.