What "many things"? Apple's product lineup has been pretty stagnant, and continues to remain so, other than some spec bumps.

You're really gonna make the case that the year that brought an entire new Mac (Mac Studio) and the most significant Air redesign since its introduction in 2008 was… stagnant?
 
You're really gonna make the case that the year that brought an entire new Mac (Mac Studio) and the most significant Air redesign since its introduction in 2008 was… stagnant?
The Mac Studio is just a spec-bumped Mac mini, and I don't really care about the Air since I don't use notebooks.
 
We already have a replacement for the Afterburner card: the Media Engine in all M-series SoCs since the M1 Pro

[Image: Apple M1 Pro Media Engine]


Just one Media Engine outperforms the Afterburner Card, effectively turning it into retroware

[Image: M1 Max ProRes benchmark]


And the Max chips got 2 of these, the Ultra chips having 4.

So this compute module is something else entirely.
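For what it's worth, apps never talk to the Media Engine directly: they ask VideoToolbox for a hardware encoder, and on M-series chips the ProRes blocks pick up the work. A rough sketch of such a request (the resolution is arbitrary and error handling is omitted; this is an illustration, not an Apple sample):

Code:
import VideoToolbox

// Ask VideoToolbox for a hardware-accelerated ProRes 422 encoder.
// On Apple silicon with ProRes accelerators, this work lands on the Media Engine.
var session: VTCompressionSession?
let encoderSpec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder: true,
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true,
]
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 3840,
    height: 2160,
    codecType: kCMVideoCodecType_AppleProRes422,
    encoderSpecification: encoderSpec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)
assert(status == noErr, "No hardware ProRes encoder available")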


You are right - I was trying to make the point that the compute module may be part of the SoC and got waylaid
 
We already have a replacement for the Afterburner card: the Media Engine in all M-series SoCs since the M1 Pro

[..]

So this compute module is something else entirely.

Yeah, but "some kind of add-on card/chip to boost specialized performance" seems like the most plausible explanation to what a "Compute Module" could be.
 
Some researchers looked at Xcode and iOS and didn't find any of this to back up 9to5Mac's claims. There isn't even an "Xcode 16.4 beta", so that's quite a careless article...

If you search the strings of libMobileGestalt.dylib (MobileGestalt; used by low-level system APIs to get device info like the UDID, IMEI, device color, model identifier, etc.) in the iOS 16.4 Simulator runtime in Xcode 14.3 beta 1, you'll find mentions of a "ComputeModule" alongside other devices. I'm not sure how they found out the model identifiers as 13,1 and 13,3 though. There didn't seem to be any mentions of a "ComputeModule" in the developer disk image either.

  • iPhone
  • iPod
  • iPad
  • AppleTV
  • iFPGA (this appears to be Apple's internal prototype boards)
  • Watch
  • AudioAccessory (HomePod)
  • iBridge (Touch Bar)
  • AppleDisplay (Studio Display)
  • ComputeModule

Here is the command I ran:

Code:
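# Dump every printable string in libMobileGestalt; the grep pattern matches all lines
# but highlights any "ComputeModule" hits, and less -r keeps the colour codes intact.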
strings /Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libMobileGestalt.dylib | grep -E "ComputeModule|$" --color=always | less -r
 
Got source or more info to read up on, please? I'm curious

There is a mysterious section of 16 extra (32 total) Neural Engine matrix cores turned off in software in the M1 Max (64 NE cores in the Ultra). They take up a fair amount of space on the die.
 
Got source or more info to read up on, please? I'm curious

There were some (now-deleted) tweets by Hector Martin aka marcan, one of the lead developers of Asahi Linux, about this. Unfortunately he seems to have deleted his Twitter account when he moved to Mastodon, and someone else took over the username.

From what I remember the second cluster of Neural Engine cores is called "ane1" in the device tree (the first is called "ane0"), so searching for "ane1" on google might pull up some results.

EDIT: Here is a post made by Martin about it on another site: https://news.ycombinator.com/item?id=30852818
 
These modules will allow you to keep the same set of 900 EUR wheels over several processor generations -
Everybody wins - simples
 
Apple's GPU is pretty well suited for VR. Apple's upscaling tech both upscales resolution and fills in extra frames for a higher frame rate. It is also low latency. It can handle a lot of operations without needing to use RAM as a scratch buffer (like an AMD or Nvidia chip would), which is great for performance and power efficiency. Just a plain M2 or M3 is likely going to be enough for top-tier experiences, and an M-series Ultra would certainly be a VR beast if the headset supports remote rendering at all.
You realize you can't just slap generic upscaling onto VR, right? It's why DLSS isn't supported in VR yet. TL;DR: there will be inconsistencies in upscaling artifacts across the stereoscopic images, which will make things look terrible.

And the original point stands: none of Apple's GPU capabilities exceed the rasterization firepower of a mid-range desktop GPU from a full generation ago (e.g., a 3060). They need something much more powerful to run high-end applications at the high resolutions their HMD is supposed to support.
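For context, the "upscaling tech" being debated above is presumably MetalFX. A minimal spatial-upscaling setup looks roughly like this; the device, textures, and resolutions are placeholders, and nothing here addresses the stereo-consistency problem raised above:

Code:
import Metal
import MetalFX

// Sketch: upscale a 1080p render target to 4K with the MetalFX spatial scaler.
let device = MTLCreateSystemDefaultDevice()!
let desc = MTLFXSpatialScalerDescriptor()
desc.inputWidth = 1920
desc.inputHeight = 1080
desc.outputWidth = 3840
desc.outputHeight = 2160
desc.colorTextureFormat = .rgba16Float
desc.outputTextureFormat = .rgba16Float
desc.colorProcessingMode = .perceptual

guard let scaler = desc.makeSpatialScaler(device: device) else {
    fatalError("MetalFX spatial scaling not supported on this device")
}
scaler.inputContentWidth = 1920
scaler.inputContentHeight = 1080

// Per frame: point the scaler at the low-res render and the full-res target,
// then encode it into the frame's command buffer.
func upscale(lowRes: MTLTexture, highRes: MTLTexture, commandBuffer: MTLCommandBuffer) {
    scaler.colorTexture = lowRes
    scaler.outputTexture = highRes
    scaler.encode(commandBuffer: commandBuffer)
}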
 


An all-new "compute module" device has been spotted in Apple beta code, hinting that new hardware may soon be on the way.

[Image: Mac Pro feature graphic]

The new "ComputeModule" device class was spotted in Apple's iOS 16.4 developer disk image from the Xcode 16.4 beta by 9to5Mac, indicating that it runs iOS or a variant of it. The code suggests that Apple has at least two different compute modules in development with the identifiers "ComputeModule13,1" and "ComputeModule13,3."

The modules' purpose is unclear, but speculation suggests that they are designed for the Apple silicon Mac Pro – potentially serving as a modular interface for swappable hardware components, or adding extra compute power via technologies like Swift Distributed Actors. There is also a chance that the compute modules could be designed for Apple's upcoming mixed-reality headset or something else entirely.

Yesterday, new Apple Bluetooth 5.3 filings were uncovered, a move that often precedes the launch of new products, so the compute module finding could be the latest indication that new Apple hardware is on the horizon.

Article Link: Mysterious New 'Compute Module' Found in Apple Beta Code
It sounds like they are possibly redesigning the board for the Mac Pro in a way that professionals can still use the modular frame, but with Apple-made CPUs.
 
They just want to capture the Kubernetes bramble market from Raspberry Pi...

Seriously though, what's the highest-density board-to-board interconnect that might be usable for something like this? If Apple's UltraFusion has 10,000 connection signals, that's 100 of the 100-pin Hirose connectors the Raspberry Pi CM4 uses...
 
Everything is "modulerable" if you pad it with enough logic.


But dreamers like me are just going to dream of a Raspberry Pi CM4-alike Compute Module with an A15 Bionic instead of a Broadcom.
I think the dream will be coming from the bottom up! What's to stop Raspberry Pi from using TSMC for their next-gen processors? They're already using them to etch their RP2040 microcontroller.

Why not flood the world with cheap, 'close-enough' ARM modules built on TSMC's less-than-cutting-edge 16/12/10/7 nm nodes? Maybe something with the power of Apple's 'ancient' A10? RPi is due for an update...
 
If you search the strings of libMobileGestalt.dylib (MobileGestalt; used by low-level system APIs to get device info like the UDID, IMEI, device color, model identifier, etc.) in the iOS 16.4 Simulator runtime in Xcode 14.3 beta 1, you'll find mentions of a "ComputeModule" alongside other devices. I'm not sure how they found out the model identifiers as 13,1 and 13,3 though. There didn't seem to be any mentions of a "ComputeModule" in the developer disk image either.

  • iPhone
  • iPod
  • iPad
  • AppleTV
  • iFPGA (this appears to be Apple's internal prototype boards)
  • Watch
  • AudioAccessory (HomePod)
  • iBridge (Touch Bar)
  • AppleDisplay (Studio Display)
  • ComputeModule

Here is the command I ran:

Code:
strings /Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libMobileGestalt.dylib | grep -E "ComputeModule|$" --color=always | less -r

This is interesting because it strongly suggests it is a standalone device in a new category. Maybe some kind of device for home or office where you want more compute than a single Mac can provide? Maybe an array of M-series SoCs? Bet it will be expensive.
 
The "13" in the "ComputeModule13,1" model might mean these have the M1 Ultra chip. This has interesting implications for possible back-enabling hardware raytracing. Either the new Mac Pro just won't get hardware ray tracing which would be a disappointment to pro users in visual effects (a big market for the Mac Pro) or this may be the reason for the secret silicon discovered by Asahi Linux developers in the Max/Ultra chips. Read on for the technical explaination.

There is a mysterious section of 16 extra (32 total) Neural Engine matrix cores turned off in software in the M1 Max (64 NE cores in the Ultra). They take up a fair amount of space on the die. It is like they are scaling the NE cores 1-to-1 with the number of GPU cores. They are not used for binning and macOS hides the extras right after boot so they don't show up in hardware reports. Hardware ray-tracing is primarily a matrix math problem so the NE cores are potentially relevant.

These cores are smaller and more programmable than what would be in an Nvidia or AMD hardware ray tracing accelerator, so they are easy to dismiss at first as unrelated. However, Apple's solution is not going to look much like the incumbents'. Apple's future ray tracing tech was licensed from Imagination Technologies in 2020. Rather than cores with a large fixed pipeline like in Nvidia/AMD chips, Apple's chips will have a simpler ray estimator that gets close to finding a ray intersection, then hands off to the normal GPU compute units to refine that result into the final answer. The GPU cores can find the intersection much more efficiently when they start from an estimate. Possibly these mysterious NE cores, which are adept at matrix math, will serve as the intersection estimators that then hand off to standard GPU cores. I think the fact that they are 1-to-1 with GPU cores and perform the right type of math makes it pretty credible that this lines up with the algorithm. The details of Imagination's algorithm are also likely to run better on a GPU of Apple's design, which leans heavily on on-chip caches, than on the immediate-mode rendering style (using GPU RAM heavily as scratch storage) employed by AMD/Nvidia.
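To make the hand-off idea concrete, here is a toy CPU-side sketch of the estimate-then-refine pattern. It is purely illustrative and not Apple's or Imagination's actual pipeline: a cheap bounding-box test stands in for the "estimator" stage, and an exact Moller-Trumbore intersection stands in for the GPU refinement stage.

Code:
import simd

struct Ray { var origin: SIMD3<Float>; var dir: SIMD3<Float> }   // dir components assumed non-zero
struct Tri { var a, b, c: SIMD3<Float> }

// Stage 1, the cheap "estimator": a slab test against the triangle's bounding box.
// This is the kind of regular, branch-light math a matrix engine is good at.
func roughHit(_ ray: Ray, _ tri: Tri) -> Bool {
    let lo = simd_min(simd_min(tri.a, tri.b), tri.c)
    let hi = simd_max(simd_max(tri.a, tri.b), tri.c)
    let t0 = (lo - ray.origin) / ray.dir
    let t1 = (hi - ray.origin) / ray.dir
    let tmin = max(max(min(t0.x, t1.x), min(t0.y, t1.y)), min(t0.z, t1.z))
    let tmax = min(min(max(t0.x, t1.x), max(t0.y, t1.y)), max(t0.z, t1.z))
    return tmax >= max(tmin, 0)
}

// Stage 2, the "refiner": exact Moller-Trumbore ray/triangle intersection,
// standing in for the work the ordinary GPU compute units would do.
func exactHit(_ ray: Ray, _ tri: Tri) -> Float? {
    let e1 = tri.b - tri.a
    let e2 = tri.c - tri.a
    let p = simd_cross(ray.dir, e2)
    let det = simd_dot(e1, p)
    if abs(det) < 1e-8 { return nil }          // ray is parallel to the triangle
    let s = ray.origin - tri.a
    let u = simd_dot(s, p) / det
    if u < 0 || u > 1 { return nil }
    let q = simd_cross(s, e1)
    let v = simd_dot(ray.dir, q) / det
    if v < 0 || u + v > 1 { return nil }
    let t = simd_dot(e2, q) / det
    return t > 0 ? t : nil
}

// Only rays that survive the cheap estimate pay for the exact test.
func trace(_ ray: Ray, against tris: [Tri]) -> Float? {
    tris.lazy.filter { roughHit(ray, $0) }.compactMap { exactHit(ray, $0) }.min()
}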

If this is true, the Apple chip might potentially blow Nvidia/AMD out of the water. An Nvidia chip uses just a small part of its silicon for ray tracing while most of it sits idle. The design Apple licensed and may have implemented would use all the cores on the GPU and all the NE cores (including the secret ones) for accelerated ray tracing. There is probably a significant software component, which is why this tech may have been sitting idle.

There is one other mysterious aspect. Apple was rumored to bring the same tech to iPhone, but apparently they had to postpone the next-gen A-series GPU design a year because hardware ray tracing was using too much power. It isn't clear how this GPU relates to the M-series GPU. Possibly the hardware ray tracing algorithm needs more work for mobile devices, but it could be ready for desktop devices, including older Apple silicon devices, soon.

There certainly is a fair amount of speculation in this take, but the mysterious extra cores and Apple's 2020 licensing of Imagination Technologies' hardware ray tracing method are definite facts.
What the… do you have any more news or a brief pointer of where to keep track of this potential progress?
Curious to know if the trail went cold or similar or how much of only speculation it turns out to be.
Honestly, it would kick some serious butt if it were true… also it's a bit irritating that the Neural Engine is so black-boxed. So many supposed uses, like denoising a Blender Cycles render, but it isn't used anywhere like that yet.
 
What the… do you have any more news or a brief pointer of where to keep track of this potential progress?
Curious to know if the trail went cold or similar or how much of only speculation it turns out to be.
Honestly, it would kick some serious butt if it were true… also it's a bit irritating that the Neural Engine is so black-boxed. So many supposed uses, like denoising a Blender Cycles render, but it isn't used anywhere like that yet.

It is also worth mentioning that the M2 Max seemingly lacks a second Neural Engine, at least on the die shot Apple has given us. There are some errors in the below die analysis from Twitter (in my opinion; see end of this post), but it is clear that there is not a second ANE. Perhaps the ComputeModule is a stopgap product and the only one intended to take advantage of M1 Max's second ANE? That could explain why the model number is a 13,x variant (M1-generation) as opposed to a 14,x variant (M2-generation).

EDIT: I wanted to add that M1 Max's second neural engine could have also been removed due to physical space constraints, as the CPU, GPU, etc. have gotten bigger but the process node has stayed largely the same (TSMC N5 => N5P). There simply might not have been space for it on the die, and considering that macOS doesn't use the second neural engine, Apple engineers might have seen fit to remove it. In any case, we'll know the real answer depending on whether the second ANE returns with M3 Max or not. (end of edit)

My pet theory is that this is a GPU-like device intended to slot into a PCIe slot to help developers port their games to Metal and Apple silicon. It is rather interesting how iOS 16.4 beta 1 turned up with references to this ComputeModule device (and in libMobileGestalt, no less -- see my earlier post) at the same time that Apple held a press event showing off gaming on Apple silicon.

The fact that references to this device have shown up in iOS 16.4 suggests that Apple might be planning to release it in an upcoming event along with the final release of iOS 16.4, possibly the same event where the rumored 15" MacBook Air will be announced. If Apple were planning to release it at WWDC along with iOS 17, then it would not make sense to add support for it in iOS 16.4.

[Image: annotated M1 Max and M2 Max die shots from Twitter]


Errors I saw:
  • Both: The "Media Engine" blocks should be "Display Engine".
  • M2 Max only: The two blocks marked "ProRes Codec" include additional silicon to the right that doesn't match (looking at the unannotated image from Apple's release video); they are in fact only the same physical width as the SLC cache blocks between them. In addition, the bottom 19 cores of the GPU and the bottom two display engine blocks are marked incorrectly: the bottom 19 GPU cores actually wrap around the two additional display engines, as seen in the image below (from https://www.theregister.com/2023/01/17/apple_m2_max_pro/). This leaves some unknown silicon at the bottom edge of the chip that is likely part of the die-to-die interconnect for the predicted M2 Ultra.

[Image: M2 Max die shot from The Register]


cc: @4nNtt
 
This is interesting because it strongly suggests it is a standalone device in a new category. Maybe some kind of device for home or office where you want more compute than a single Mac can provide? Maybe an array of M-series SoCs? Bet it will be expensive.

I‘m actually beginning to think a multiple-SoC standalone box as a computing device would be a clever play by Apple. Using Intel processors you’d have to create a whole rack mount to do this, because of the power consumption of the processors, and you’d have something that takes up a lot of physical space, power cabling, networking connectivity and so on.

The high performance-per-watt and low power consumption of Apple's processors allow them to fit multiple processors in one device that you can run off a standard electricity socket, and where all the interconnects between processors are internal to the device instead of external cabling. You could have maybe six M2 Pro processors in a single box, with suitable bus interconnects.

That would give you 48 performance cores and 24 efficiency cores (six 12-core M2 Pros at 8P + 4E each) for a lot of parallel compute, using a similar amount of power to one Intel device. It's like having a mini compute cluster under the desk, instead of having to spin up a whole lot of cloud instances or buy three Mac Studios with the M2 Ultra. I think there'd be quite a few uses for a device like that in scientific computing.
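If Apple really exposed a box like that through Swift Distributed Actors, as the article speculates, offloading work could look roughly like this. Only a sketch: ComputeWorker and its dot-product payload are made up, and LocalTestingDistributedActorSystem from the standard library stands in for whatever transport a real compute module would actually use.

Code:
import Distributed

// A distributed actor: the same call site works whether the actor is hosted
// locally or (with a real actor system) on another node in the box.
distributed actor ComputeWorker {
    typealias ActorSystem = LocalTestingDistributedActorSystem

    // Any Codable work item can cross the wire; here it's a trivial dot product.
    distributed func dot(_ a: [Double], _ b: [Double]) -> Double {
        var sum = 0.0
        for (x, y) in zip(a, b) { sum += x * y }
        return sum
    }
}

// Top-level async code (Swift 5.7+): spin up the actor system and call the worker.
let system = LocalTestingDistributedActorSystem()
let worker = ComputeWorker(actorSystem: system)
let result = try await worker.dot([1, 2, 3], [4, 5, 6])   // 32.0
print(result)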
 
So here's my crazy idea about these compute modules. Maybe they are not related to the Mac, but CarPlay instead?

I mean, the new CarPlay was already announced, and it's much more advanced - to the point that it could probably replace the whole infotainment system in a car. So it would make sense to run it from a dedicated piece of hardware built into the car rather than from an iPhone, just like Google did with Android Automotive and Android Auto.

Apple, being Apple, would probably want as much control over this as possible. So instead of providing just the software, they decided to design a whole module that runs iOS so they can sell the whole package to car manufacturers.

As I said, it's just a crazy idea, feel free to poke holes in my logic😄
 
So here's my crazy idea about these compute modules. Maybe they are not related to the Mac, but CarPlay instead?

I mean, the new CarPlay was already announced, and it's much more advanced - to the point that it could probably replace the whole infotainment system in a car. So it would make sense to run it from a dedicated piece of hardware built into the car rather than from an iPhone, just like Google did with Android Automotive and Android Auto.

Apple, being Apple, would probably want as much control over this as possible. So instead of providing just the software, they decided to design a whole module that runs iOS so they can sell the whole package to car manufacturers.

As I said, it's just a crazy idea, feel free to poke holes in my logic😄

This makes sense. M1 Max has a second neural engine that is unused and disabled in software (see #109, #119). Having two neural engines instead of one (or four instead of two for an M1 Ultra-based product) might come in handy for computer-vision applications such as self-driving features.
 
Maybe one of the two "discovered" ComputeModules is a Mn Ultra and the other is an ASi (GP)GPU, and both have proprietary SuperDuperUltraHighSpeed connectors to the "backplane"; said "backplane" having one SDUHS connector for the Mn Ultra and two SDUHS connectors for ASi (GP)GPUs, the rest of the backplane has a handful of PCIe slots for other uses...?

Maybe these ASi (GP)GPUs are also of an Ultra configuration, but comprised of two GPU-specific SoCs (see my other ramblings elsewhere) capable of handling as much RAM as the Mn Ultra SoC; and the SuperDuperUltraHighSpeed connections allow the system to see the Mn Ultra SoC & the ASi (GP)GPUs as one mass unit, and thereby increasing total system UMA RAM to 576GB (hopefully they solve the ECC issue) and pumping a bunch of cores into the GPU subsystem...?

Maybe Apple also offers a headless desktop variant with one less ASi (GP)GPU specific slot and zero PCIe slots, and the Mn Ultra & ASi (GP)GPU cards with the SDUHS connections are more compact; they could call it the Mac Cube...?

;^p
 
Assuming there is an ASi Mac Pro, which feels more likely by the day (saw an Apple reseller selling a 16-core, 96 GB, W5500X 2019 MP for $2,999 last night), how Apple proceeds will be really telling. Do they EoL the MP? Do they phone it in as basically a Mac Studio M2 or M3 in a bigger box? Do they mothball the Mac Studio, skip the M2 update, or only refresh it every other year? Or does Apple delay the Mac Pro further because they can't make it meet the needs of a "proper" workstation? An M2 Ultra would top out at only 192 GB of RAM, and an M3 Ultra at 256 GB supposedly. Then there are the even more paltry GPU compute numbers if you are constrained to a single M2/M3 SoC. Does Apple keep selling a Xeon MP option? The Mac Pro really is the most interesting product space by a country mile.

Here's hoping the "modular" idea pans out, granted I'll assume that each daughter card (another Ultra SoC) is another $2K to get more RAM, GPU, and CPU - though I'd guess people are buying the card for the first two versus the diminishing returns of more CPU.
 