Never say never. One thing I didn't get about the Mac Studio was: why put the power supply inside the box, where it needs cooling? Wouldn't using a brick, like the iMac and the laptops, yield a better solution with no extra cooling requirements? If they did that, they could make a Mac mini Pro with lots of extra cooling capacity (I mean, not enough for an Intel-class chip's thermals, but still). The small footprint would limit connectivity options, though, so there is that.

Because the PSU is placed inside purposely, to receive active cooling/airflow. The heat output from the PSU is taken into account in the overall thermal design. This increases the stability, reliability, and longevity of the PSU and the system.
 
Unless you re-compile them using Xcode. Of course, applications with low-level assembly code would have to be rewritten, but ARM is the future. There is a difference between apps that merely run on Apple Silicon and those optimized for Apple Silicon: a big difference.
I see that. A large proportion of users will be excluded in the future; they will stick with Intel / AMD platforms running Windows and/or *BSD. *BSD is on the rise in mission-critical systems.
This is similar to the situation where people don't want to hear about other operating systems, because the application they depend on exists only for one specific operating system, and often only for one specific version of that operating system.

// rough translation
 
Actually, ARM seems to be on the rise in server contexts, and x86 servers are also facing competition from RISC-V. In general, x86 is likely to lose market share and eventually stick around mainly for legacy applications, as ARM's economies of scale grow even further, more and more consumer computers go ARM, and servers go RISC-V.
 
I'm hoping there WILL be an M1/M2 Pro-level Mini introduced, but there have been good points raised herein suggesting that Apple wants users who need the extra power to move onto the Studio line. The fact that there's an Intel Mini still available seems incongruous given the desire to transition all Mac hardware to AS. I think at some point this Intel Mini will be switched to AS and an M2 Pro would be a good option.
 

The new M2 Mac mini might have the 10Gb Ethernet port option right out of the gate!
 
In a lot of cases where I've seen performance like this, it comes down to AVX. I'm not actually sure whether Mathematica is running under Rosetta or is native ARM, but Rosetta cannot run AVX; it only handles SSE-family instructions. And even if you make a native build, intrinsics don't come for free, so optimising with vector extensions to speed up that sort of thing requires some extra work.
...

But there are some cases where a Rosetta build will actually run faster than a naïve native build, because the Rosetta build will enable SSE and translate that to NEON where the native build doesn't use the vector extensions at all because the code hasn't been written to use those intrinsics.

...Yeah, that all checks out. Thanks for the link. I personally use Maple and various other mathematical tools, and my main interaction with Wolfram is through the Wolfram Alpha website, so I've not had much personal experience with Mathematica. Here they also mention the bit about some things actually being faster under Rosetta, which I can only assume is for the reason I mentioned: SSE acceleration and missing NEON code paths.
So getting back to this: where do we see AVX, such that porting that code from x86 to ARM in an optimized way would require manually translating AVX to NEON? Is it confined to low-level libraries like Intel's MKL, or could it also be found in the core code of large programs like Word, Excel, Photoshop, Premiere, and Mathematica? Translation: the libraries aside, do many large programs contain SIMD instructions that need to be manually translated from x86 to ARM, such that a simple automated recompilation won't give an optimized native port?

And will we be seeing significant speed-ups with ARMv9, because of SVE2 (https://levelup.gitconnected.com/armv9-what-is-the-big-deal-4528f20f78f3)?
 
Apple never seems to do exactly as expected, but . . .

You would assume all the "headline" releases from here on will have M2-based chips inside, which makes me wonder whether Apple would want to tie the timing of an M2 Pro Mac mini release to a refresh of the Mac Studio. The current speculation is that the M2 Pro and Max will have 2 extra CPU cores. That would see a 12-core M2 Pro Mac mini outperform the Mac Studio's 10-core M1 Max in CPU performance, probably by a notable margin.

Edit - Or, maybe the Mac Studio is more stopgap than permanent fixture. The Mac Studio wasn't listed among the M2 Macs in testing by Gurman: https://www.bloomberg.com/news/arti...s-with-next-generation-m2-chips?sref=9hGJlFio
 

An interesting revisit of this Gurman article, dated April 15, 2022. To quote Gurman:
  • A Mac mini with an M2 chip, codenamed J473. This machine will have the same specifications as the MacBook Air. There’s also an “M2 Pro” variation, codenamed J474, in testing.
  • Apple is also testing a Mac mini with an M1 Pro chip, the same processor used in the entry-level 14-inch and 16-inch MacBook Pros today. That machine is codenamed J374.
  • The company has tested an M1 Max version of the Mac mini as well, but the new Mac Studio may make these machines redundant.
To the naysayers: time to prepare your new arguments 🤣
 
I'm hoping there WILL be an M1/M2 Pro-level Mini introduced, but there have been good points raised herein suggesting that Apple wants users who need the extra power to move onto the Studio line. The fact that there's an Intel Mini still available seems incongruous given the desire to transition all Mac hardware to AS. I think at some point this Intel Mini will be switched to AS and an M2 Pro would be a good option.
But that doesn't make sense to me. The Studio is Max / Ultra. The Mini now is the M1 base. The jump from the M1 base to the Max is huge. And many like me don't need a $2,500+ Studio config, but would be very happy with a $1,500-1,700 Pro config. I don't mind if it is not passively cooled. Heck, look at how they cool the MacBooks: almost silent.
 
Apple never seems to do exactly as expected, but . . .

You would assume all the "headline" releases from here on will have M2-based chips inside, which makes me wonder whether Apple would want to tie the timing of an M2 Pro Mac mini release to a refresh of the Mac Studio. The current speculation is that the M2 Pro and Max will have 2 extra CPU cores. That would see a 12-core M2 Pro Mac mini outperform the Mac Studio's 10-core M1 Max in CPU performance, probably by a notable margin.

Edit - Or, maybe the Mac Studio is more stopgap than permanent fixture. The Mac Studio wasn't listed among the M2 Macs in testing by Gurman: https://www.bloomberg.com/news/arti...s-with-next-generation-m2-chips?sref=9hGJlFio
Not yet. The Studio was like the last of the M1 generation products.

Right now they're banking on people using the laptops as both a mobile and a desk solution. I'm also guessing the MacBooks yield higher profit margins, since they can mark up quite a bit for extras like the screen and the keyboard.

So what's likely going to happen is we'll probably see a repeat of the M1 cycle. The next ones to show up are the M2 Pro/Max MacBook Pros. Then half a year down the road we'll see the M2 Max/Ultra desktop machine. Will it be another Studio-sized cube or a Mac Pro tower? I don't know. But either way it's not going to be a Mini with a Pro/Max/Ultra chip in it.
 
There's also the fact that, industry-wide, laptops are more popular than desktops and have been for some time. And Apple has been selling more laptops than desktops for even longer than the rest of the industry. Apple was likely already selling more laptops than desktops before the Intel transition (which was probably part of the motivation behind it, since the G5 PowerBook seems to have died on the vine).
 
The Intel Mac mini would still have been a fine machine if Apple had opted for an LGA (socketed) CPU; that would have let people upgrade to a 10th-gen CPU and get more life out of their Macs. But no, Apple has to be Apple.

(disregard: Intel changed the socket again, so it would have been of no use even if they had gone with the classic desktop CPU option)

I think I'll just get a next gen Mac Studio anyway
 
I think it's out of the question to expect a Mini with a Max or Ultra processor as that's what the Studio is for.

The high-end Mini that's still on Intel, though, could quite reasonably get the M1/M2 Pro at around $1,399-$1,499. That would fit nicely, not just to fill the gap between the M1 and M1 Max desktop offerings ($699 and $1,999 respectively), but also to keep a more performant, distinctively high-end option for all the cloud IaaS / server farms currently using the Mini for all kinds of services, from macOS in the cloud to FCP to Xcode app building / signing, etc.
 
That's pretty much how I feel it should work. The Mini gets the regular M1/2 and Pro chips, while the Studio gets the Max and Ultra chips.
 
So getting back to this: where do we see AVX, such that porting that code from x86 to ARM in an optimized way would require manually translating AVX to NEON? Is it confined to low-level libraries like Intel's MKL, or could it also be found in the core code of large programs like Word, Excel, Photoshop, Premiere, and Mathematica? Translation: the libraries aside, do many large programs contain SIMD instructions that need to be manually translated from x86 to ARM, such that a simple automated recompilation won't give an optimized native port?
So first off, I just want to clarify that both AVX and SSE are x86 extensions, not ARM-native. Rosetta can, however, translate SSE instructions to run on the ARM SIMD hardware, but it cannot run AVX code at all. So if a program has an optimised path for SSE and no native replacement, compiling a native ARM build could be slower than the Rosetta-translated x86 build that made use of SSE -> Rosetta -> NEON. Any AVX code will fail to run on Apple Silicon, Rosetta or not. But often what you'll see in programs is that they check whether a feature is available and have several possible code paths based on that; for simplicity you can think of it as
Code:
if (avx) {                       // 256-bit x86 vectors; Rosetta can't translate these
    useAVX();
}
else if (sse) {                  // 128-bit x86 vectors; Rosetta maps these to NEON
    useSSE();
}
else {
    useNoSIMDAcceleration();     // plain scalar fallback
}
That's for a program generally targeting x86. Rosetta would be able to use the useSSE() path here, but if we were to #ifdef out the x86 specifics to get something that compiles for ARM, we would fall through to the useNoSIMDAcceleration() path.
NEON is the ARM native SIMD acceleration for ARMv8.
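To make that concrete, here's a quick sketch of what a useSSE() path and the scalar fallback might contain; my own illustration, not taken from any real codebase. Under Rosetta the first function gets translated to NEON, while an ARM port that just #ifdefs the SSE path out is stuck with the second, unless the compiler happens to auto-vectorise it.

Code:
// What a useSSE() path might look like: 4-wide packed float adds.
// x86 only; under Rosetta these instructions are translated to NEON.
#include <xmmintrin.h>

void add_sse(const float *a, const float *b, float *out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   // load 4 floats, unaligned OK
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)                     // scalar tail for leftover elements
        out[i] = a[i] + b[i];
}

// The useNoSIMDAcceleration() equivalent: one element at a time.
void add_scalar(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}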

Now on to that point. I highly expect programs like Photoshop and Premiere to have some code using intrinsics like these. However, I would also expect it to be relatively easy for them to change most of those uses to NEON equivalents. I don't know NEON well enough to know the specifics of moving logic from SSE to NEON, but given that Rosetta can do it automatically, I assume there is at least a straightforward way to convert your logic to NEON registers and instructions. Whether there is a clear path from AVX I don't know; I still mostly know x86.
In most cases, use of intrinsics will probably go through libraries, though. I would imagine companies like Adobe maintain their own libraries for it, but a lot of others will use what's out there. The SIMD types provided by Apple are in a lot of cases architecture-agnostic and can vectorise things across both x86 and ARM. Furthermore, you have things like MPS (Metal Performance Shaders) to take problems with huge amounts of vectorisable work and put them on Metal devices instead of CPUs.

So SSE/AVX mainly occupies the middle ground where the vectorised work isn't big enough to justify the overhead of moving it to the GPU, setting up the necessary GPU encoding jobs, and synchronising with it, but there is still enough work that vectorising makes sense. And while I do expect AVX/SSE code to be in a fair amount of higher-end applications out there, I expect it to make up a very small part of the overall codebase, and you're likely to see more SIMD instructions coming from GCC's or Clang's optimisers automatically vectorising loops than from the use of intrinsics.

What's interesting, though, is that vectorised work like AVX and SSE is often accompanied by memory-bound operations, and Apple Silicon has much greater memory bandwidth than your average x86 chip, so in that respect it could potentially handle vectorised accesses much better. That's more relevant for large and potentially sporadic access patterns; if things fit in cache it's less of a concern.
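To show what I mean by architecture-agnostic, here's a minimal sketch using Apple's simd library: the same source compiles to SSE on Intel Macs and NEON on Apple Silicon, so there's no per-architecture path to maintain.

Code:
// simd_float4 is a Clang extended vector type; +, -, * are element-wise.
// The compiler emits SSE or NEON depending on the target architecture.
#include <simd/simd.h>
#include <stdio.h>

int main(void) {
    simd_float4 a = {1.0f, 2.0f, 3.0f, 4.0f};
    simd_float4 b = {5.0f, 6.0f, 7.0f, 8.0f};
    simd_float4 c = a + b;   // one vector add across all 4 lanes
    printf("%g %g %g %g\n", c.x, c.y, c.z, c.w);
    return 0;
}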
But depending on how closely NEON and SSE/AVX match each other, it may even be possible to just write a C header file that declares the same intrinsics used for SSE/AVX but implements them with ARM instructions and registers, and be done with it. I again unfortunately don't know NEON well enough to comment further on that.
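Just to illustrate that header idea, here's a minimal sketch of what such a shim could look like; nowhere near a complete mapping, and open-source shims along these lines do exist, so nobody has to write it from scratch.

Code:
// Sketch of an SSE-to-NEON shim: declare the SSE intrinsic names but
// implement them with NEON instructions. Illustrative only.
#include <arm_neon.h>

typedef float32x4_t __m128;   // 4 packed floats, same 128-bit width as SSE

static inline __m128 _mm_add_ps(__m128 a, __m128 b)    { return vaddq_f32(a, b); }
static inline __m128 _mm_mul_ps(__m128 a, __m128 b)    { return vmulq_f32(a, b); }
static inline __m128 _mm_loadu_ps(const float *p)      { return vld1q_f32(p); }
static inline void   _mm_storeu_ps(float *p, __m128 v) { vst1q_f32(p, v); }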

And will we be seeing significant speed-ups with ARM9, because of SVE2 (https://levelup.gitconnected.com/armv9-what-is-the-big-deal-4528f20f78f3)?
Who knows. SVE2 defines the instructions and their semantics; the performance is entirely in the hands of the chip designers and the implementations they come up with. I can say that SVE2 looks super nice from a development perspective: you can assume super-wide registers and let the hardware map them to its actual register sizes and work out the rest, though that also sounds more complex from a hardware perspective. I think it was Chris Lattner who, after starting work at SiFive, said something to the effect that "instruction sets don't matter; they can give you ±20%, but past that it's all design". Obviously ±20% is still significant in a lot of environments, but the key point is that we can't really predict the performance impact of ARMv9 and SVE2 on Apple Silicon, because the instruction set only means so much; how the chip designers implement it matters a whole lot more. My expectation is that yeah, it'll help vectorised code compared to NEON, but it won't be a huge difference.
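For what it's worth, this is roughly what vector-length-agnostic SVE code looks like with the C intrinsics. A sketch assuming a toolchain with arm_sve.h; since no Apple chip implements SVE2 today, it's purely illustrative. The same binary would use 128-bit vectors on one implementation and 512-bit vectors on another, with the predicate mask handling the loop tail.

Code:
// Vector-length agnostic loop: the hardware decides the register width.
#include <arm_sve.h>

void add_sve(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i += svcntw()) {      // svcntw() = floats per vector
        svbool_t pg = svwhilelt_b32(i, n);       // mask off lanes past n
        svfloat32_t va = svld1_f32(pg, a + i);
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, out + i, svadd_f32_x(pg, va, vb));
    }
}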
 
Which makes me wonder whether Apple would want to tie the timing of an M2 Pro Mac mini release to a refresh of the Mac Studio. The current speculation is that the M2 Pro and Max will have 2 extra CPU cores. That would see a 12-core M2 Pro Mac mini outperform the Mac Studio's 10-core M1 Max in CPU performance, probably by a notable margin.

You're presuming that the "2 extra CPU cores" are P cores. It is about as likely that these two extra cores are E cores, not P, especially if this is TSMC N5P like the plain M2. The additional 1-2 GPU cores per GPU cluster will bump a "Pro"-class die up by 4 GPU cores; that will take up space. The P cores getting an L2 cache upgrade will take up space too.

Four E cores would make it more consistent with the M2 on that core count; the same goes for the Max. The Ultra, still built from two identical Max-size dies, would jump to 8 E cores. But if Apple is chasing Intel/AMD on maximum core-count numbers, that probably won't hurt (it would be at 24).

There are rumblings that the M2 Pro/Max will be on TSMC N3. But if one of the main motivations there is to get to smaller die sizes (so they can get more dies out of an N3 wafer), then again the E core is a very credible option. Again, the emphasis is on making the GPU bigger, not the CPU.

The P cores naturally come in clusters of 4. Adding "half" a P-core cluster would be odd; technically doable, but odd. The E cores naturally come in clusters of 4 also; the Pro/Max have that chopped down. Decent chance that is driven by the Max needing to stay small enough that the packaging requirements for the Ultra keep it incrementally under the 1x reticle limit.

There is enough density increase with N3 that Apple could add a "half" P cluster and still be under the reticle limit. However, even with just 8 P cores, an N3 implementation at higher clocks would likely come close to the old N5 M1 Max on CPU performance.

They don't have to be exactly timed. If there is a leapfrog for 3-4 months, that would not be the end of the world for the Studio. While the CPU performance would be close, the GPU performance would not be: 24 GPU cores isn't likely to catch 32 GPU cores plus twice the memory bandwidth, even with some clock-speed bumps and an incremental cache increase. That memory-bandwidth gap is significant (even if it doesn't matter for the CPU cores).

A Mini Pro stays gapped from the Studio as long as Apple keeps the "Max" out of the Mini (which it probably would if reusing the legacy Mini case).



Edit - Or, maybe the Mac Studio is more stopgap than permanent fixture. The Mac Studio wasn't listed among the M2 Macs in testing by Gurman: https://www.bloomberg.com/news/arti...s-with-next-generation-m2-chips?sref=9hGJlFio

Really? That rumor is from April 14, 2022. The Studio was only released in March 2022. Why would Apple have the replacement in testing in the labs when they had just released the Studio? And they can't even keep up with demand; 3 months later, the wait list for the Studio is still super long. A product so "hot" you can't keep it in stock is in no dire need of a replacement. Not at all.

If Apple refreshed the Studio in Feb-March 2023, that would work just fine; that would be faster than any M-series Mac replacement so far. A Mini Pro and MBP 14"/16" refresh would likely consume all the M2 Pro/Max chips Apple could get its hands on for a quarter. Even more so if there is some kind of quad-die Mac Pro out there sucking up wafers at a rate disproportionate to its unit sales.

What could be coupled with the Studio is a return of the large-screen iMac (if they give the large-screen iMac a shot at the Max by not gutting the thermal system).
 
Also, the iMac has lost the guts of its huge cooling system. It's unnecessary now.

It is 'unnecessary' now because Apple has gated the iMac 24" to a performance zone below the MBP 14".
The logic board is too small for an Mx Pro-class SoC. The cooling system is chopped mainly because the performance range is chopped. Constraining the logic board (along with the speakers and fans) to the chin shrinks the available space for any SoC substantively bigger than the baseline plain Mx.

It is now a "fast enough for most people" system. It is not even close to filling the role that the larger-screen iMac's "better, best" configurations have filled for the last 10 years or so.
 
Gurman doubles down on his "insider info" about the upcoming Mac mini on Jun 26, 2022.

Here are the M2 Macs I’m told to expect beyond the first two:
  • an M2 Mac mini.
  • an M2 Pro Mac mini.
  • M2 Pro and M2 Max 14-inch and 16-inch MacBook Pros.
  • the M2 Ultra and M2 Extreme Mac Pro.
To be released between Fall 2022 and the first half of 2023.

A 48GB or 64GB M2 Pro Mini is going to be a dream machine for many kinds of devs and power users 😁
 
Why do people doubt there will be a Pro-based mini?

The M1 mini lost two TB3 ports and triple external display capability when it moved from Intel chips. Apple clearly targets the mini towards developers. It will gain those features back one way or another.
 
Please define a pro user.

Pro software vendors will drop macOS support in the future, because Apple is dropping Intel CPU support.
And there's the economic side as well: any skilled computer builder can offer a better machine at a lower cost than Apple can.
And which software company is going to drop support for Macs just because they're on Apple silicon? You do realise that there was pro software on Mac OS X long before the switch to Intel chips, right?
 
Pro software vendors will drop macOS support in the future, because Apple is dropping Intel CPU support.
The opposite has been happening. Software vendors catering to the professional market, and that had software which worked under Intel/MacOS, have ported their apps to Apple Silicon/Mac OS.

Any skilled computer builder can offer a better machine at a lower cost than Apple can.
Can you name any PC notebook that has the M2 Air's size (volume), weight, and expected battery life and that equals or surpasses the M2 Air for single-core, multi-core, and GPU performance?

The closest I know of is the Dell XPS 13 Plus with the Intel Alder Lake i7-1280P. It has about the same weight and screen size as the M2 Air, and beats it for CPU multi-core, but is 22% larger (volume), and loses out on CPU single-core, GPU, and battery life. [Here I'm assuming the M2 in the Air will perform about the same as the M2 in the MBP.]
 
The CPU must be INTEL, not ARM! The machine code for Intel and ARM is different. ARM cannot run applications designed for Intel. Parallels doesn't help when the host is ARM; this has been tested and isn't subject to further discussion!

That's what Rosetta 2 does. It translates from x86-64 machine code to ARM machine code.
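You can even check this from inside a program: Apple documents a sysctl key, sysctl.proc_translated, that tells a process whether it is currently running under translation. A small sketch:

Code:
// Prints whether this process is being translated by Rosetta 2.
#include <stdio.h>
#include <sys/sysctl.h>

int main(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    // Key is absent on systems without Rosetta (e.g. older Intel macOS).
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) != 0)
        translated = 0;
    printf("Running under Rosetta 2: %s\n", translated ? "yes" : "no");
    return 0;
}

Compile it with clang -arch x86_64, run it on an Apple Silicon Mac, and you should see "yes"; a native arm64 build prints "no".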
 
Apple gave the Studio an £800 'discount' compared to the equivalent 14-inch M1 Max, reflecting the loss of the laptop-specific features.

Apply the same kind of proportionate discount to a theoretical Pro Mini relative to a 14-inch M1 Pro and you get a Pro Mini starting at around £1,300, which, with Apple's expensive upgrades, means higher-end models creeping up to the Studio's £2,000 starting price.

That seems to fit nicely with Apple's strategy of tempting buyers to move on to the more expensive machines.
 