I can understand not using the "efficiency" versus the "performance" cores, but why would Apple underclock them in a Mini compared to an iPad Pro? There's more room in the Mini enclosure for airflow, so there ought not be a thermal reason, and it's always plugged into a wall socket, so saving power ought not be an issue either.
Perhaps the DTKs can sustain the higher speeds for longer, so Apple limited the top speed for heat reasons. Just because an iPad Pro can hit 2.5GHz doesn’t mean it sustains those speeds for very long.

In any case, Apple did say the DTKs have some limitations that won’t be present in the consumer products. Functionality and stability are more important than raw performance in a developer kit.
 
"... Apple's translation layer Rosetta 2..."
very aptly named
hope all those developers complain to aapl
about the mini
not having a discrete gpu
not having easily user-upgradable ram
not having a hot-swappable ssd
not having a mini led next to the on switch
 
Arm working sets are not much bigger than Intel working sets. Given current compiler technology and assuming we are talking about x86-64 and not using 32-bit instructions, the main arm overhead is just additional bits for the predicate field, and occasionally more opcode bits.


"ARM has two different instruction encoding modes: ARM and THUMB. In ARM mode, you get access to all instructions, and the encoding is extremely simple and fast to decode. Unfortunately, ARM mode code tends to be fairly large, so it's fairly common for a program to occupy around twice as much memory as Intel code would."

I'd trust Stackoverflow more than I'd trust you..
 
This is so funny. Apple isn't stupid. They knew the first thing everyone would do was get these things onto benchmark sites. The comparisons would be endless. For what this chip is the results are nothing short of outstanding. This processor runs on nothing and has this kind of performance... All I can say is wow.

As for all the haters just slow your roll. The transition Mac mini is just a box to help get folks over the hump into ARM. With developers utilizing Xcode and Apple using its own custom chips there are endless possibilities for invisible optimizations that can kick in the moment it runs on these new Macs.

Apple was never going to put their new chips into this Mac mini. The Mini chassis just happens to be small and compact. I assume the thinking behind these was more along the lines of "get that chip in there and get this out the door". Apple is requiring them to be returned anyway. It's just a test mule and there's no point wasting time trying to optimize the setup.

Like I said, Apple isn't stupid. They are not going to show up on the grid with a push bike sporting a weedwhacker engine. I have a good feeling their chips are going to be serious monsters, especially for those in the higher-end Macs. To help make this transition successful, Apple will be out to silence anyone worried about performance.
 
Just checked and those Rosetta scores are on par with my MacBook Pro i7 from 2012. Though I have the GPU, I have been fine with that performance for years.

I would assume that the production base mini will be twice as fast in Rosetta.
 
Exactly, rather than simply putting competent thermals in their products, they’re doubling down on their own hubris.

The Mac lineup has been one misstep after another; I don't trust them to get this right.

This will create more problems than it solves, not the least of which is another pain point: the inability to run Windows natively.

If they change the designs to fix thermals, the MacBook Pro might be an inch thick. How are they going to make the all-new MacBook SuperPro that will only be 1/16 of an inch thick? /s
 
Just remember, Apple has surely benchmarked the actual chips they’ll be using...so I wouldn’t be concerned. They’re not blindly going into this.
They usually under promise and over deliver with these things.
 
They have the iPad running already, so they just have to add the hinge and keyboard, trackpad and extra USB* port and boom, MBP with touch
 

"ARM has two different instruction encoding modes: ARM and THUMB. In ARM mode, you get access to all instructions, and the encoding is extremely simple and fast to decode. Unfortunately, ARM mode code tends to be fairly large, so it's fairly common for a program to occupy around twice as much memory as Intel code would."

I'd trust Stackoverflow more than I'd trust you..
You are trusting a random person on stackoverflow, not stackoverflow.

http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document.pdf

It ranges from 30% to 100%, depending on the kind of algorithm and what the compiler chooses to do (assuming not thumb), but when x86 is at its best in terms of code density it is also doing the highest number of implicit load/stores, so you pay for that with hugely slow data memory accesses.
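If you want to eyeball the density difference yourself, here's a minimal sketch (assuming you have a clang that can target both x86_64 and arm64; the file and function names are just examples, and the numbers will vary with compiler version and flags):

[CODE]
/* density.c - compile the same code for both ISAs and compare text size:
 *   clang -O2 -c -target x86_64-apple-macos11 density.c -o density_x86.o
 *   clang -O2 -c -target arm64-apple-macos11  density.c -o density_arm.o
 *   size density_x86.o density_arm.o    (compare the __TEXT columns)
 */
#include <stddef.h>

/* A simple loop-heavy function so the generated code is non-trivial. */
long dot(const long *a, const long *b, size_t n) {
    long acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
[/CODE]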
 
You are trusting a random person on stackoverflow, not stackoverflow.

http://web.eece.maine.edu/~vweaver/papers/iccd09/ll_document.pdf

It ranges from 30% to 100%, depending on the kind of algorithm and what the compiler chooses to do (assuming not thumb), but when x86 is at its best in terms of code density it is also doing the highest number of implicit load/stores, so you pay for that with hugely slow data memory accesses.

My original statement still stands... ARM has larger instructions than x86...
 
My original statement still stands... ARM has larger instructions than x86...

And my statement stands. It makes no real difference in modern computing. What % of the RAM used on your machine do you think is instructions? How big do you think your instruction cache is?
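If anyone actually wants the numbers, here's a minimal sketch for macOS (assuming the usual sysctl keys, which aren't guaranteed to exist on every model):

[CODE]
/* icache.c - compare L1 instruction cache size to total RAM on macOS.
 * Build: clang -O2 icache.c -o icache
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/sysctl.h>

static int64_t read_i64(const char *name) {
    int64_t value = 0;
    size_t len = sizeof(value);
    if (sysctlbyname(name, &value, &len, NULL, 0) != 0)
        return -1;                        /* key not present on this machine */
    return value;
}

int main(void) {
    int64_t l1i = read_i64("hw.l1icachesize"); /* bytes */
    int64_t ram = read_i64("hw.memsize");      /* bytes */
    if (l1i > 0 && ram > 0)
        printf("L1 i-cache: %lld KB of %lld GB RAM\n",
               l1i / 1024, ram / (1024LL * 1024 * 1024));
    return 0;
}
[/CODE]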
 
Well, I don’t know how to address concerns that are based on the willful misunderstanding of “completely useless when the transition completes in 2 years”.

Why do so many people here hold this opinion that is completely unfounded? It’s just as bad as the “my two month old computer is now obsolete because a new model has come out, thanks apple 🤬” posts.

I think it probably has to do with Apple’s history in its last major transition. Some of us bought really expensive PowerBooks right before the Intel chip transition was announced.

There were some of us who bought brand new PowerPC computers who were told that they would be supported "for years to come". Let me tell you how long a $2,500 computer (in 2003-04 money) is supported during a chip transition at Apple: two years on the outgoing platform.

On Intel Macs you got two OS versions and support with Rosetta. My PowerBook, bought in 2004, came with one version of Mac OS and saw one more update, to Leopard, by 2006. Then all of the new features came out for Intel Macs with Snow Leopard. Your software could move to new hardware with translation software, but your three-year-old machine would get no new OS, no new software features, and about four years of bug and security patches: basically life support.

By the way, developers leave universal binaries behind when they move to a new architecture. Web browsers start to fall behind when they are on an OS that no one supports. When your machine turns four years old, goes on life support, and suddenly feels like a ten-year-old paperweight, that's why people claim the machines will be useless in two years when the transition is complete.

My $3,500 MacBook Pro that I purchased in December 2019 is now in vintage status if the history of Apple is anything to go by. Every Intel Mac owner with a new, expensive machine is in vintage status and they don't know it yet.
 
3 years is a good life span for a laptop regardless of price. And at $3,000-$4,000 it should be making back that investment in a few months. If you are a consumer that's a bit different, but why would you need a $3,000-$4,000 laptop in that case?

A $3,000-$4,000 machine should have the power to live longer than three years, even if it has paid for itself. That Apple will have abandoned expensive hardware three years down the road is where people get distressed.
 
Maybe I am misunderstanding Rosetta 2, but why would there be much in terms of runtime performance penalty? I thought it was doing at-install translation of the x86 -> ARM calls so it then runs native from that point on. It is not doing on-the-fly emulation -- once the app runs it is going direct to ARM.

This is correct. Me saying 'emulating' is wrong in the sense that it is not runtime emulation of the software in a virtual machine or anything like that. I meant emulating as in "translating the binary calls into a compatible set of ARM instructions so that the program is guaranteed (as much as possible) of performing the same binary executions."

You are essentially removing any and all compiler optimizations and tricks, as you no longer have the full semantic code and are just re-translating the machine code as best you can to 'emulate' (mimic) the original program. That is super hard. The only reason Apple can do it (I would guess) is that they fully own both software stacks and can somewhat guarantee what APIs they are attempting to emulate? Not sure. It's tough, and they are doing it with a 30% hit, which sounds pretty gosh darn good to me.
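To make the idea concrete, here's a toy sketch of what ahead-of-time translation looks like at the instruction level (purely illustrative, obviously not Apple's implementation; the register mapping eax -> w0, ebx -> w1 is just an assumption for the example):

[CODE]
/* Toy ahead-of-time translation of one x86-64 instruction to arm64.
 * "add eax, ebx" (bytes 01 D8) becomes "add w0, w0, w1" (0x0B010000),
 * under the hypothetical register mapping eax -> w0, ebx -> w1.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Returns the number of x86 bytes consumed, or 0 if unrecognized. */
static size_t translate_one(const uint8_t *x86, uint32_t *arm64_out) {
    if (x86[0] == 0x01 && x86[1] == 0xD8) {     /* add eax, ebx */
        *arm64_out = 0x0B010000u;               /* add w0, w0, w1 */
        return 2;
    }
    return 0; /* a real translator handles far more, or falls back */
}

int main(void) {
    const uint8_t code[] = { 0x01, 0xD8 };
    uint32_t out = 0;
    if (translate_one(code, &out))
        printf("x86 'add eax, ebx' -> arm64 word 0x%08X\n", out);
    return 0;
}
[/CODE]

Doing that once, ahead of time, across the whole binary is why there's no interpreter running at execution time; what's lost is the higher-level context the original compiler had, which is presumably where the ~30% goes.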
 
My original statement still stands... ARM has larger instructions than x86...

You are aware that this 7-year-old answer from Stack Overflow refers to ARMv7 (32-bit) and not to ARMv8 (64-bit)?

In any case I would expect an architecture with 32 GP registers and 3-operand instructions to have somewhat lower code density. What is more important is that AArch64 has fixed-length instruction encoding, like any reasonably designed ISA of the last 30 or so years (if we exclude the compressed instruction sets for embedded) - there's a small sketch of why that matters for the decoder below.
@cmaier: Why didn't you go fixed-length for AMD64? Would have been an opportunity, wouldn't it?
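Here's the sketch of why fixed-length encoding makes the front end's life easier (illustrative only, not a real AArch64 or x86 decoder):

[CODE]
/* With 4-byte instructions, instruction i starts at offset i*4, so
 * several decoders can work on consecutive instructions at once.
 * With a variable-length ISA like x86, you can't know where
 * instruction i+1 starts until instruction i has been length-decoded.
 */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* AArch64-style fetch: the boundary of instruction i is always i*4. */
static inline uint32_t fetch_fixed(const uint8_t *code, size_t i) {
    uint32_t insn;
    memcpy(&insn, code + i * 4, sizeof(insn));
    return insn;
}

/* x86-style: a (hypothetical) length decoder must walk serially. */
typedef size_t (*length_fn)(const uint8_t *bytes);   /* returns 1..15 */

static size_t start_of_nth(const uint8_t *code, size_t n, length_fn len) {
    size_t off = 0;
    for (size_t i = 0; i < n; i++)
        off += len(code + off);   /* each step depends on the previous */
    return off;
}
[/CODE]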
 
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

Maya needs 96GB++ of memory to compile at a speed even vaguely approaching usable, and the development teams are typically using Ryzen 9 or Threadripper systems running Linux or Windows. These CPUs are 4x faster than A12Z. Fusion's requirements for decent compile perf are somewhat lower than Maya, but still well beyond 16 GB RAM.

Like many here, I am confident that the systems Apple introduces over the next year will be a significant step forward from this kit, but the gap they have to cross to meet what Autodesk's pro customers are using today -- never mind what they will expect two years from now -- is still really vast. They can't just catch up with the perf of the current Mac Pro. AMD Zen 3 is just around the corner and it's going to kick the everlovin' bejeezus out of every Mac and Intel system in 3D workloads.

But even if the CPU perf situation at the high-end turns out okay, what is really going to make or break it for us is video driver quality. It always has. NVIDIA and AMD are many, many years ahead of Apple on driver quality and developer relations.... if Apple continues to be uncaring about OpenGL and Vulkan, and if they don't have comprehensive raytracing support on next-gen AMD Radeon GPUs in a timely fashion, then Apple is going to lose the 3D market almost completely.

(Usual disclaimer: these are my opinions, not my employer's)
 
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

The transition kit is mostly for testing, not necessarily for compilation.
 
I work at Autodesk and have the source code for both Maya and Fusion 360 (two of our Mac products) on my system.

This developer transition kit is completely and utterly inadequate. 16GB of RAM? Man, I feel bad for whoever gets stuck with the arduous task of getting our stuff running on this system. It's going to be awful.

They didn’t give your employer access to a lab with a “real” machine?
 
Why should they? Any reasonable company dispatches compile jobs to on-premises server farms (or optionally to the cloud) and not to some local machine with 16 GB of RAM.

For performance testing, etc. Just like they did during the prior transitions - strategic software companies like Microsoft and Adobe surely have access to something better than the DTKs, and I’m sure the access is on Apple’s premises.
 
For performance testing, etc. Just like they did during the prior transitions - strategic software companies like Microsoft and Adobe surely have access to something better than the DTKs, and I’m sure the access is on Apple’s premises.

Could be, but the problem statement was compilation, not running the actual executable, which surely needs less than 96 GB of RAM.
 
Could be, but the problem statement was compilation, not running the actual executable, which surely needs less than 96 GB of RAM.
Yeah, a little weird to compile on the desktop for an app like that. My question was unrelated, though.
 
So, without even trying to make a shipping ARM Mac chip, a two-year-old tablet SoC translating the more complicated x86 ISA to ARM still performs better than the Qualcomm SQ1 in the Surface Pro X running natively, and just above a 2012 iMac, which, while old, is still a 91W-TDP part running its own native ISA in the benchmark, again.

Man, the silicon when they’re actually trying is going to be bonkers insane.
 
Dear Tim Cook, I am a games developer for Mac OS and Windows (Boot Camp) and I need Intel processors in professional Macs, because a lot of game engines compile for Intel processors!!! (Unreal Engine 5 (Intel, Mac OS X), Unity (Intel, Mac OS X), DECIMA (Windows), FrostByte (Windows), DUNA (Windows), IDTech (Windows))

Dear Tim Cook, I beg you on my knees, I beg you, please, please!!! Keep the professional Macs (Mac Pro, iMac Pro, MacBook Pro) on Intel processors!!! Because I love playing special AAA-class Windows games on an Intel-based Mac Pro with Boot Camp!!! Intel processors are a god of processors for developers like me!!!
 
Dear Tim Cook, I am a games developer for Mac OS and Windows (Boot Camp) and I need Intel processors in professional Macs, because a lot of game engines compile for Intel processors!!! (Unreal Engine 5 (Intel, Mac OS X), Unity (Intel, Mac OS X), DECIMA (Windows), FrostByte (Windows), DUNA (Windows), IDTech (Windows))

Dear Tim Cook, I beg you on my knees, I beg you, please, please!!! Keep the professional Macs (Mac Pro, iMac Pro, MacBook Pro) on Intel processors!!! Because I love playing special AAA-class Windows games on an Intel-based Mac Pro with Boot Camp!!! Intel processors are a god of processors for developers like me!!!
This will change his mind for sure.
 