The Mac Studio starts with an M1 Max chip. On the desktop side there's the base M1 Mac Mini, a pending update Intel high-end Mini, the M1 Max Studio and the M1 Ultra Studio... To me it's obvious the Intel high-end Mini will get an M2 Pro when it gets refreshed: it's the missing M variant in the desktop space, as there's currently no M1 Pro desktop Mac.

My bet is we'll see it this fall: a redesigned Mac Mini with a base M2 and a high-end M2 Pro replacing the Intel model, followed next year by M2 Max and M2 Ultra Studios.
I was thinking that the Mini would come as follows:
  • M2 Mac Mini
  • M2 Pro Mac Mini (replacing Intel)
Anything else would mean a Mac Studio.
 
a pending update Intel high-end Mini,

I see that more along the lines of Apple continuing to sell the non-Retina MBPs long after the Retina models came out, or the dreaded 2-core iMac that was an abomination when it launched in 2017 but still stuck around until late 2021.

it's the missing M variant in the desktop space, as there's currently no M1 Pro desktop Mac.

A few months ago there was nothing headless above the Mac Mini and below the Mac Pro, and that gap had existed since the death of the Cube, so.....

If an M2 Mini starts at the same price as the M1 (the new M2 MBA suggests otherwise), the BTO options just to reach a 16/512 M2 Pro will land quite close to the base-model Studio.

It's also not clear whether there will be an M2 Pro, or how many will be made: the M1 Pro is just an M1 Max with half the GPU missing, which IMO suggests they may only make enough of them to better fill a wafer, and that those few can easily be sold in 14"/16" MBPs.

But really, IF Apple wants to put the M1/M2 Pro into a desktop, I'd guess the iMac would be the perfect candidate: ample cooling with the two-fan version and nothing nearby to bump into with pricey BTO options.
 
Yes, but you can say that about any Mac. If you spec up the new M2 MacBook Air then you're within touching distance of the 14" MacBook Pro, for instance.
Or even within the same product. For example, if you're speccing out a MacBook Pro with 32GB of memory and a maxed-out M1 Pro, then you're only $200 away from an M1 Max.

The nice advantage desktops have, I feel, is obviously the form factor, and I would still appreciate having the Mac mini's smaller size.
 

IMHO both the redesigned MBP and MBA went up in price due to the full array of new tech going into them: upgraded screens, audio arrays, cameras, keyboards and so on, all things that are not present on a Mac Mini, so I don't see an M2 Mini being more expensive. As a side note, I expect the prices of the redesigned MBP and MBA to come back down eventually in future iterations as their technology stack becomes cheaper.

I get your point about Apple not caring about gaps between desktop systems; still, I feel strongly about something along the lines of:

M2 Mac Mini - $699 (only change would be the switch from M1 to M2)
M2 Pro Mac Mini - $1299
M2 Max Mac Studio - $1999
M2 Ultra Mac Studio - $3999

In between you get the different storage, RAM and core configurations / binned parts we already know.
 
Both of us are doing some guesstimating here without detailed justification. Not sure it's worth the digression, but perhaps you're expecting a justification for mine. So here we go.
.....

For your 8500B/8700B, the sustained max power is about 65W. From benchmarks I've seen, the M1 Pro in gaming can sustain ~60W, and gaming hardly pushes both CPU and GPU to their limits at the same time. ..... Say it's 65W. I don't know if you're happy with me hypothetically putting it there for the M1 Pro.

Apple isn't known for provisioning adequate cooling for Intel Macs (the MacPro7,1 excepted). When an 8500B/8700B is sustained at 65W, the temperature is around 100°C, the fan spins up, and noise and throttling crank up. Not something Apple wants for their new Macs.

So the current 2018-era chassis is capable of cooling 65W. This justification essentially agrees that it does indeed fit. Your baseline justification has little to do directly with thermals and more to do with noise.

Three major problems with that as an Apple "non-starter" criterion. First, Apple shipped the 2018 Mini in the first place. The noise can't be so horrible that users in large numbers are going to reject the product. It worked before, so it clearly could still work now.

Second, among "average Joe" users who in the world operates their Mini at 100% sustained power draw for 5-6 hours blocks of time per day. About no one. The bigger deal thermal load that the 2018 cooling system had to cover is the P1 , P2 power spikes that the 8500B-8700B would do that push way past 65W. Intel's default TDP numbers are about "sustained" which implicitly is where the clocks are dropped to base clock levels. You can get fan ramp just by doing burst loads just for several minutes at P1 power, which is higher than 65W. The other "accounting" problem is that the T2 was a separate chip consuming TDP before and that gets subsumed into the M1 Pro chip. M1 Pro also has more cores ( 8 cores ( 10 if weave in the E cores) versus 6). If you ramp the M1 Pro 6 cores it isn't going to be at 65W ( again in the "average Joe " context where they have some 4-5 core constrained app it is just going to run cooler. ) . So "average Joe" who overbuys into a Mx Pro Mini.

Third, it really isn't "average Joe" this product is primarily aimed at. If 40-55% of the buyers are folks provisioning these systems into data-center rack deployments, departmental servers, etc., then they are not going to be placed on people's desks (and near their ears). "Average desktop PC" noise is not a show stopper for a data-center deployment. If a Mini is located 10 ft or 100 miles away from the user, the noise isn't going to make a material difference to the service it provides. The Mini form factor with an Mx Pro-class SoC is also more efficient: it gets more "CPU cores per square foot of rack space" than the Studio can. The Studio is 3x bigger for only a bit less than a 2x increase in core count when using a Max (and the Ultra is more of a 2x bump in $/Mac).

"Bu there are 45-60% of the users that are not data center deployment". Yeah but that noise threshold is also an Apple mark not necessarily an end user mark. The general PC market has lots of folks who have systems as loud as a Mini 2018. If do some simple things like attach the Mini to bottom desk surface ( incrementally farther away from user's ears with some sound deflects. ), then it is even more in scope. There are decent number of folks who will say "Yeah it would be "nice" to be quieter, but I have work to do". Folks who can chuck the eGPU because the 16-20 GPU cores is not enough will likely be happy to make the trade off ( two fan noises down to just one and more desk space at lower cost. Those folks will likely buy. )


For the user who puts a big premium on low noise, there would be other options on either side of the price point: the plain "Mx" Mini (lower price) and the Studio (with a Max "floor" starting point at a higher price). If noise were a show stopper for them, they could move to a different product in the Mac range. That is just product segmentation.

The notion that because Apple removed "X from product Y" they'll have to remove it from all of the Macs is flawed. Dropping Ethernet jacks from laptops has not nuked them from most Mac desktops (the iMac 24" has an easy way to put it back). Dropping the headphone jack from the iPhone has not led to widespread removals from Macs. Apple has not removed stuff except where the "thinness politburo" has chopped so hard that they had to make space trade-offs. If they use the old space in the current Mac Mini chassis, there is no "thinness" chop problem; the space and vents are there. Apple doesn't have to remove the noise here if keeping it brings in $100M in profits from server-room deployments.



I would think the M1 Pro is indeed likely hitting the thermal design limit if put inside the current Mac Mini.

Which is not really a good reason to kill the product. The chassis/cooling was designed for that limit.
 
M2 Pro Mac Mini - $1299
A 16/512 M1 is already $1099, so there's not much of a (price) gap here.
M2 Pro Mac Mini - $1299

Bump that to a full Pro and 32GB and it will be $1999 (based on 14" BTO prices), so no price gap at all.

Remember, Apple likes nothing more than people clicking BTO options, because that's where most of the profit is. With your suggestion there is little incentive to do so on the M1/M2 Mini and none on an M2 Pro Mini.

Sure, Apple could introduce different pricing models or only offer the M2 Pro in 1 or 2 versions (instead of 3 for the M1 Pro) to give them some wiggle room, but I just don't see it.

Or ask yourself, of those potential M2 Pro Mini buyers, how many will:
- buy a maxed-out M2 Mini (a win for Apple)
- buy a base-model Studio (still a win for Apple)
- not buy a Mac at all (Apple loses)

Versus Apple running either three different headless desktops, or two with many more SKUs (+1 once the Mac Pro hits), and all the logistics behind that.

It just doesn't seem like a good business case to me.
 
If you go back and read the Ars article carefully, the "native" bit was editorialising by the article's writer; we weren't told what the actual question was, and in context it probably just means "ARM Windows" (as opposed to emulated x86). Earlier paragraphs were clearly referring to virtualisation and/or technologies like Crossover/WINE, and the "core technologies" would be the "Hypervisor kit" in macOS.

Compare that with some words that actually came out of Craig Federighi's mouth in a previous interview, which leave no wriggle room: "We're not directly booting another operating system - It's purely virtualization".


That said, it's perfectly true that Microsoft could produce a direct-boot Windows for Apple Silicon if they wanted to commit to writing all the bare-metal drivers (and fixing them every time Apple changed the design), just as the Asahi Linux folks are doing for Linux.
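For anyone curious what that "purely virtualization" path looks like in practice, macOS exposes it directly through the Virtualization framework. Below is a minimal, illustrative Swift sketch for booting a Linux guest as a VM (not bare metal); the kernel/initrd/disk paths are placeholders I made up, and a real app needs the com.apple.security.virtualization entitlement.

```swift
import Foundation
import Virtualization

// Illustrative only: boots a Linux guest as a VM via Apple's Virtualization framework.
// File paths are hypothetical placeholders.
let kernelURL = URL(fileURLWithPath: "/path/to/vmlinuz")     // placeholder
let initrdURL = URL(fileURLWithPath: "/path/to/initrd.img")  // placeholder
let diskURL   = URL(fileURLWithPath: "/path/to/disk.img")    // placeholder

let bootLoader = VZLinuxBootLoader(kernelURL: kernelURL)
bootLoader.initialRamdiskURL = initrdURL
bootLoader.commandLine = "console=hvc0 root=/dev/vda"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 4
config.memorySize = 4 * 1024 * 1024 * 1024  // 4 GiB

let net = VZVirtioNetworkDeviceConfiguration()
net.attachment = VZNATNetworkDeviceAttachment()
config.networkDevices = [net]

let disk = try VZDiskImageStorageDeviceAttachment(url: diskURL, readOnly: false)
config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]

try config.validate()

let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    switch result {
    case .success:
        print("Guest is running under virtualization, not a bare-metal boot.")
    case .failure(let error):
        print("Failed to start VM: \(error)")
    }
}

RunLoop.main.run()  // keep the process alive while the guest runs
```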

I saw The Talk Show bit as well. But that was also in the context of "what are we doing right now to let you run Linux stuff", not necessarily long-term options. But I see what you're saying, and my memory may very well have been clouded by the article I linked and a "want to believe" :p Though I feel like I have a clear memory of an interview where they said "Boot Camp is an option but that's in Microsoft's hands for now", that could just be clouded memory too.

As you point out though, with the Asahi folks being able to reverse-engineer drivers and make Linux bootable (albeit by grabbing some blobs straight from the macOS install and all), it would also definitely be possible for Apple, with access to their own source code and understanding of their hardware, to make Windows drivers that work, even if they didn't offer constant "game ready" driver updates like the other GPU makers. I'm sure that if Microsoft approached Apple and said "We want to collaborate on making this work, with this big bag of money", Apple would be ready to set up a team to work with them on that. But I guess the incentive structure for that isn't really there for either of them.
 

I read the first sentence and the last sentence of your long thesis. I would refer you to re-read my previous response that you quoted. Perhaps multiple times, if it wasn't understood well.

I appreciate you follow my posts closely.
 
Because Apple's EULA disallows leasing them as VMs, you have to lease them as dedicated machines.

If I recall correctly, that is slightly off in a way that matters for why a Mini Pro makes lots of sense. It is multi-tenant use that Apple disallows, more than directly running virtual machines.

First, these dedicated machines are often put in a mode where there is a hypervisor anyway, even if hosting just one VM instance. It just makes failover easier if there is a hardware problem: restart the VM image on another machine that isn't broken and you can meet much better service-level-agreement uptime than if the workload were tied to the failed box. Apple isn't banning low-level hypervisors.

Second, if the customer's workload is less than half a machine and they need two instances, you could put two virtual machines assigned to that one customer on a single box.

Apple's limit is more about their macOS license being tied to that one Mac and its macOS instance(s). If you buy a VM program on macOS you can fire up macOS VM images on that machine; Apple isn't stopping that. What Apple is trying to block is high-multiplier load consolidation, serving VM instances from fewer (or no) Macs.

After Big Sur, the minimum leasing period is 24 hours. This makes it extremely hard to offer a CI/CD service at a reasonable cost.

Yeah, I'm kind of curious how Apple thinks they aren't going to get sued (or at least intensely grilled by some major government regulators) when the lowest-level offering in their Xcode Cloud service is for 15 hours. They are contractually making their competitors charge more money while not playing by their own rules for their own service. [Wouldn't be surprising if the number changes with macOS 13 (Ventura) to drop down to Apple's minimums. Apple should have to 'eat their own dog food' on licensing. Yeah, they probably aren't paying full price for the hardware, but on licensing they don't get the hand-waving they can do with "we give ourselves the volume discount".]


Apple is releasing Xcode Cloud, so perhaps Apple doesn't want any third-party CIs anymore.

There are still 3rd-party monitor makers after Apple released the XDR and the Studio Display. Xcode Cloud doesn't make economic sense if you are not deploying an app into the Apple App Store. Lots of folks who do CI/CD deploy their apps to more than just the App Store, so Xcode Cloud isn't really a 'competitor' there. An integration target perhaps, but there is enough non-overlap in the services provided that it doesn't work as a replacement. Xcode Cloud also does zero source-code management.


There was a session on deploying Swift to the server side, which really isn't a good match for Xcode Cloud either. For now, if the application "back end" has a hefty server component deployed to AWS/Azure/Google/Tencent/etc. hosted services, then the other, more heterogeneous deployment services have far more traction (not to mention inertia, since that is probably how it has been done for the last 2-3 years).
[This may evolve narrowly at Apple over time. However, that is more about how, having leaned on developers to heavily adopt Swift, Apple can apply the same "hammer" to banging out server-side code in Xcode too (so developers don't have to keep up skills in another language). It's more about sucking more code into Xcode's wheelhouse than trying to go toe-to-toe with AWS/Azure/etc.]

MS Office continued after Apple rolled out Pages/Numbers/Keynote. It's a somewhat similar path here. Pages works nicely on Apple products, but if you need to do stuff with Windows/Linux/Android... not so much.
 
To be fair, the current Mini is designed to fit easily into specific rack mounts. Surely there will be customers who want to keep using that existing space but with a better SoC powering their server farm?
 
But does Apple want that to "work" in this way today?

Efficiency and quietness have become a major focus and selling point for them with the AS transition; I'm not sure how a noisy Mini would fit into that.

Mac Pro Rack 2019.

If it is high-margin profit without doing much extra work.... why not?

Re-use the same 2018 Mini chassis that is pragmatically already paid for. Check. (*cough* M2 MBP 13" 2022... same game plan.)

Use M-series SoCs in multiple products. Check.

M1 - MBA, MBP 13", iMac 24", iPad Pro, iPad Air (5)
M1 Pro - MBP 14"/16" (2)
M1 Max - MBP 14"/16" and Studio (3)
M1 Ultra - Studio (for this iteration) (1)

The M1 Pro being stuck at 2 products, for as low a price as is charged, is an oddball. If the iMac 27" came back with an M1 Pro it would be back to a matching 3, but it hasn't so far, and even if it did that would still be below the 5-product mark set by the 'plain' Mx SoC. Selling more M1 Pro SoCs gets the unit costs down, and if it's restricted to just two products that basically differ only in screen size, there is little scope to crank up the volume.

The Ultra is low-spanning, but it is also extremely high priced, which shrinks the addressable market. There's a pretty good chance though that something "Ultra-like" in the M2/M3 generation will get placed in a Mac Pro, so Apple can get that number off of just 1 product.


The HomePod mini is using a watch chip. The Apple TV is using an A-series chip. The lower half of the iPad line-up is getting "hand-me-down" A-series chips. It is just Apple's standard modus operandi for their SoCs: drive the volume up, costs down, and margins up.

If there were not already a relatively low-cost chassis to throw this into, perhaps it would be different. If there weren't already very substantial infrastructure spend by folks with custom rack mounts for the current Mini dimensions, then again it would be different.

Apple skipping the Mx Pro-class chip for a Mini Pro would be Apple jumping out of their normal approach of covering more products with a minimal set of SoCs. They have an Mx Pro... they are likely looking for more places to "stuff it into". That's how they roll.
 
In short, there are things that genuinely don't run yet on M1, but so far I've been able to work around them, and it won't affect an average user. In my case at least it's only been development tools and virtualisation bits and bobs. And that's all getting smoothed out over time anyway.
For me it's more about performance than things not running at all. I understand that Mathematica runs fine on Apple Silicon, but it's much slower than it should be. For instance, note these WolframMark benchmark scores. Based on all other benchmarks, a 2021 M1 should be nearly twice as fast as a 2014 MacBook Pro:

2014 MacBook Pro*: 3.0
2019 Intel iMac*: 4.6
2021 M1**: 3.2

*See configuration details in signature line.
**Highest score I've seen posted on Mathematica Stack Exchange.

I've read this is at least partly because the Intel MKL used by Mathematica on Intel chips is very highly optimized, and the replacement math library available to Apple on ARM simply isn't as fast. Writing a fast math library is non-trivial. E.g., AMD's first math library was significantly slower than Intel's, and their replacement remains slower.

Plus there may be other things going on as well.
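(I don't know exactly which library Wolfram links against on ARM, but Apple's own bundled BLAS/LAPACK lives in the Accelerate framework, and it exposes the same CBLAS-style interface that MKL does, which is why these libraries are largely interchangeable at the API level even when their performance isn't. A minimal Swift sketch, with made-up 2x2 example data:)

```swift
import Accelerate

// Minimal example: C = A * B via the CBLAS interface bundled in Accelerate.
let m: Int32 = 2, n: Int32 = 2, k: Int32 = 2
let a: [Double] = [1, 2,
                   3, 4]                    // 2x2, row-major
let b: [Double] = [5, 6,
                   7, 8]                    // 2x2, row-major
var c = [Double](repeating: 0, count: 4)    // result, 2x2

cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            m, n, k,
            1.0, a, k,      // alpha, A, lda
            b, n,           // B, ldb
            0.0, &c, n)     // beta, C, ldc

print(c)  // [19.0, 22.0, 43.0, 50.0]
```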
 

In a lot of cases where I've seen performance like this, it comes down to AVX. I'm not actually sure if Mathematica is Rosetta or native ARM, but Rosetta cannot run AVX; it only runs SSE/SSSE3. And even if you make a native build, intrinsics don't come for free, so optimising for vector extensions to speed up that sort of thing requires some extra work.

The QEMU workaround I mentioned before, which I used to get something running, was slow as hell too, but it was a backup; I have an Intel 2020 iMac as well.

I think it's worth bearing in mind though that "it runs" will get you where you need to be, eventually. It may not be as speedy as it should or could be, but at least it'll get you to the finish line. And most things are still fast - I would still argue that for an average consumer the performance bits aren't problems either, and everything the average consumer wants to do will run faster on M1 than on prior Intel hardware. In cases like your Mathematica example, yeah... it ain't as good as it could be. But you can get work done and it'll improve. Slowly but steadily I'm sure we'll get the math stuff moved to NEON too (the ARMv8 equivalent of vector instruction extensions like AVX and SSE). Or hell, even better: accelerate some of it with Metal too.

But there are some cases where a Rosetta build will actually run faster than a naïve native build, because the Rosetta build will enable SSE and translate that to NEON where the native build doesn't use the vector extensions at all because the code hasn't been written to use those intrinsics.
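(One low-effort middle ground, if anyone's curious: you don't necessarily have to hand-write AVX and NEON intrinsics separately. Portable SIMD types, like Swift's SIMD vectors below, or clang's vector extensions in C/C++, compile down to SSE/AVX on x86_64 and NEON on arm64 from the same source. A toy sketch:)

```swift
// Toy example: the same source vectorises to SSE/AVX on x86_64 and NEON on arm64,
// so there's no per-ISA intrinsics code to maintain for simple element-wise math.
let a = SIMD8<Float>(1, 2, 3, 4, 5, 6, 7, 8)
let b = SIMD8<Float>(repeating: 0.5)
let c = a * b + b          // element-wise multiply-add across all 8 lanes
print(c, c.sum())          // SIMD8<Float>(1.0, 1.5, ...), 22.0
```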

PS. Funny you compare a 2014 with an M1, because I went from a 2014 Pro (4770HQ CPU though) to my M1 Max :D (PSS. 2014 is definitely PCIe 3.0, not 2.0)
 
I think the pricing would bring an M1 Pro mini into roughly the same ballpark as the entry-level Mac Studio. So it's not going to happen, on that basis.

I would absolutely buy a £1199 M1 Pro 16/512 Mac mini instantly though. But I don’t think it will ever exist.
 
For the first one, why wouldn't they keep the lower-end i3 mini? It seems like the best choice for a reasonably priced Intel device. Also, I honestly don't think they'd be interested in doing that for this transition (they also didn't for PPC, IIRC). I feel like they kept the higher-end models to signify that something > M1 is going to go into those, similar to how they didn't touch the higher-end 13" MBPs when the lower one went M1.

For the second one I’d figure they’d have more stock of the lower end i3 version than the others.

You are aware the only Intel Mac mini desktop systems available on the Apple website are the i5 & i7 variants; the i3 was axed when the M1 Mac mini debuted...

Use M-series SoCs in multiple products. Check.

M1 - MBA, MBP 13", iMac 24", iPad Pro, iPad Air (5)
M1 Pro - MBP 14"/16" (2)
M1 Max - MBP 14"/16" and Studio (3)
M1 Ultra - Studio (for this iteration) (1)

Actually, the M1 is in SEVEN products; the iPad Pro tablets are available in 11" & 12.9" variants, and you did not even list the M1 Mac mini...! ;^p
 
More practically, we will never again see a second Thunderbolt monitor supported on the low-end Mac mini (not since the 2018 Mac mini).
 
I'm not actually sure if Mathematica is Rosetta or native ARM,
Mathematica introduced a native ARM build with 12.3.1, and they're now on 13.0.1, so I'd say they're on their 2nd major ARM version; but some of the libraries may run under Rosetta (the latter isn't clear to me).
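(Side note: if you ever want to check from code whether the copy you're running is translated or native, Apple documents a sysctl key for exactly this. A minimal Swift sketch:)

```swift
import Darwin

// Returns true when the current process is running under Rosetta 2 translation.
// Uses the "sysctl.proc_translated" key documented by Apple; the key is absent
// on Intel Macs and older systems, which we treat as "not translated".
func isTranslatedByRosetta() -> Bool {
    var flag: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname("sysctl.proc_translated", &flag, &size, nil, 0) == 0 else {
        return false
    }
    return flag == 1
}

print(isTranslatedByRosetta() ? "Running under Rosetta 2" : "Running natively")
```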
In a lot of cases where I've seen performance like this, it comes down to AVX. I'm not actually sure if Mathematica is Rosetta or native ARM, but Rosetta cannot run AVX; it only runs SSE/SSSE3. And even if you make a native build, intrinsics don't come for free, so optimising for vector extensions to speed up that sort of thing requires some extra work.

The QEMU workaround I mentioned before, which I used to get something running, was slow as hell too, but it was a backup; I have an Intel 2020 iMac as well.

But there are some cases where a Rosetta build will actually run faster than a naïve native build, because the Rosetta build will enable SSE and translate that to NEON where the native build doesn't use the vector extensions at all because the code hasn't been written to use those intrinsics.

Here's some more interesting detail about this (though it's 8 mos. old), including a reply by Itai Seggev, who is one of the Mathematica devs:


I think it's worth bearing in mind though that "it runs" will get you where you need to be, eventually. It may not be as speedy as it should or could be, but at least it'll get you to the finish line.
The thing is, I don't know when it will improve. It may be quite a while before MMA's performance on AS equals what it should. In the meantime, it would be irritating to go from a 2014 machine that takes 15 mins to run a calc, to a 2019 machine that takes 10 minutes, and then have to return to waiting about 15 mins on a 2022 machine.

(PSS. 2014 is definitely PCIe 3.0, not 2.0)
Can you cite a source for the 2014 MBP using PCIe 3.0? My source for it being 2.0 was Wikipedia's data table for the Intel MacBook Pros ( https://en.wikipedia.org/wiki/MacBook_Pro_(Intel-based) ), where it says that about the 2014's storage—though it does miss the detail that, if you have the 1 TB SSD (the largest available), it uses 4 lanes instead of 2. It seems this is unlikely to be a typo, since there's accompanying text indicating that they changed some of the MBPs to PCIe 3.0 in 2015.
 
Can you cite a source for the 2014 MBP using PCIe 3.0? My source for it being 2.0 was Wikipedia's data table for the Intel MacBook Pros ( https://en.wikipedia.org/wiki/MacBook_Pro_(Intel-based) ), where it says that about the 2014's storage—though it does miss the detail that, if you have the 1 TB SSD (the largest available), it uses 4 lanes instead of 2. It seems this is unlikely to be a typo, since there's accompanying text indicating that they changed some of the MBPs to PCIe 3.0 in 2015.
Oh sorry, I wasn't aware we were talking storage. I just thought we were talking available PCIe lanes, and for the models equipped with a GPU, the GPU goes over PCIe 3.0 for sure. Storage would make sense to run over PCIe 2.0 lanes though.
Here's some more interesting detail about this (though it's 8 mos. old), including a reply by Itai Seggev, who is one of the Mathematica devs:
Yeah, that all checks out. Thanks for the link. I personally use Maple and various other mathematical tools, and my greatest interaction with Wolfram is through the Wolfram Alpha website, so I haven't had much personal experience with Mathematica. Though here they also mention the bit about some things actually being faster under Rosetta, which I can only assume is also because of what I mentioned with the SSE acceleration and the lacking NEON code paths.
The thing is, I don't know when it will improve. It may be quite a while before MMA's performance on AS equals what it should. In the meantime, it would be irritating to go from a 2014 machine that takes 15 mins to run a calc, to a 2019 machine that takes 10 minutes, and then have to return to waiting about 15 mins on a 2022 machine.
Oh definitely. I'm sure it will improve eventually, but it can definitely be frustrating not to have a clear timeline for it; it could be anywhere from tomorrow to five years out, or theoretically even longer.
 
Oh sorry, I wasn't aware we were talking storage. I just thought we were talking available PCIe lanes, and for the models equipped with a GPU, the GPU goes over PCIe 3.0 for sure. Storage would make sense to run over PCIe 2.0 lanes though.
I thought it was PCIe 2.0 for I/O as well, because the 2014 MBP has TB2; and, IIUC (which I might not—these standards are confusing), TB2 uses PCIe 2.0, and TB3 uses PCIe 3.0.

That is interesting, though. I wasn't aware that you could have one PCIe generation for storage and a different one for the GPU. I thought if a chip was described as, e.g., PCIe 4.0, it would be PCIe 4.0 everywhere.

Does this hybrid scenario still happen for current chips? E.g., are there some that use a PCIe 4.0 GPU but are only PCIe 3.0 for storage? And what about the new PCIe 5.0 chips—will some of them be PCIe 5.0 for the GPU only?
 
So what will frequently happen is that the CPU offers x number of PCIe lanes and the logic board's chipset offers some additional PCIe lanes. For one thing, they don't have to be of the same generation, but for another, you can "split" them. So if your logic board's chipset provides 4x PCIe 3.0, you can split that out into 8x PCIe 2.0, because the bandwidth of 3.0 is basically double that of 2.0; you just need a chip to multiplex it out. Then there's also the other end of the equation: you can have a slot and chip lane configuration that supports PCIe 3.0 but attach a PCIe 2.0 device to it, in which case it runs at PCIe 2.0 speed regardless.

For a long time Intel's consumer CPUs 'only' offered (still only offer?) 16 lanes of fast PCIe from the CPU, basically just enough for a GPU, and the remaining PCIe lanes for I/O would come from the logic board chipset. This is one of the aspects in which AMD's Ryzen chips upped the game a bit.
As an example, a Ryzen 7 5700X offers 20 PCIe lanes from the CPU, and the AMD chipset offered on logic boards for it provides 4 more, all PCIe 4.0. But some logic boards may choose to divide those 4x PCIe 4.0 chipset lanes out into more 3.0 slots, or in any configuration that pleases them.
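(To put rough numbers on the "3.0 is basically double 2.0" point, here's a quick back-of-the-envelope sketch; the per-lane figures come from each generation's transfer rate and line encoding.)

```swift
import Foundation

// Approximate usable bandwidth per PCIe lane:
//   PCIe 2.0: 5 GT/s with 8b/10b encoding
//   PCIe 3.0: 8 GT/s with 128b/130b encoding
//   PCIe 4.0: 16 GT/s with 128b/130b encoding
let perLaneGBps: [(gen: String, gbps: Double)] = [
    ("PCIe 2.0", 5.0 * (8.0 / 10.0) / 8.0),     // ~0.50 GB/s
    ("PCIe 3.0", 8.0 * (128.0 / 130.0) / 8.0),  // ~0.98 GB/s
    ("PCIe 4.0", 16.0 * (128.0 / 130.0) / 8.0), // ~1.97 GB/s
]

for entry in perLaneGBps {
    let x4 = entry.gbps * 4
    // e.g. 4 lanes of 3.0 move about as much data as 8 lanes of 2.0
    print(String(format: "%@: %.2f GB/s per lane, x4 is about %.1f GB/s", entry.gen, entry.gbps, x4))
}
```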
 
Addendum:

Here's Intel's page for the 4980HQ chip you have listed as being in your 2014 MacBook Pro.
If you search the sheet for "PCI" you can see the revision is 3.0, and that it offers 16 lanes from the CPU. IIRC Apple dedicated 8 lanes to graphics on those models (though it may have been 16) and used the remaining CPU lanes for I/O like the Thunderbolt controller, USB, etc., with the chipset-provided lanes going to the SSD. I could be misremembering the layout, but in any case the Intel ARK page at least provides some evidence that the CPU supplies PCIe 3.0 lanes. As for lane allocation, you can see the x# lane count in System Information under Graphics (at least for what's assigned to the GPU) :)
 