AI, neural engine, all these terms have always made me wonder: is it just an enticing sales pitch, or is it actually using repeating patterns of some sort to make 'educated' guesses? I mean, I'm nowhere near informed enough to understand, but I see the OP's point.

Machine learning models that use neural nets are a whole field of research and engineering. Something like the Neural Engine is really just a hardware accelerator for the sort of math that a neural net uses.

The simple version of the idea is that you define a network's inputs and outputs, then train the network on example data until it produces the expected outputs for the given inputs. The idea is that by mimicking something like what the neural network of an animal brain does, we can create models that are more flexible, or that handle more complicated situations, than fixed algorithms can. For example, driving a car is a very complicated task that no fixed, hand-written algorithm has really managed. Identifying an object in a photo is similarly complex.
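
To make that concrete, here's a toy sketch of the train-until-the-outputs-match loop: a single sigmoid neuron learning the AND function by gradient descent. The data, names, and learning rate are all made up for illustration; real frameworks do the same basic thing at vastly larger scale.

import Foundation

// Training data: inputs and the outputs we want the neuron to learn (AND).
let inputs: [[Double]] = [[0, 0], [0, 1], [1, 0], [1, 1]]
let targets: [Double] = [0, 0, 0, 1]

var weights = [0.0, 0.0]
var bias = 0.0
let learningRate = 0.1

func sigmoid(_ x: Double) -> Double { 1 / (1 + exp(-x)) }

for _ in 0..<5000 {
    for (x, target) in zip(inputs, targets) {
        let output = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        // Gradient of squared error through the sigmoid (chain rule).
        let grad = (output - target) * output * (1 - output)
        weights[0] -= learningRate * grad * x[0]
        weights[1] -= learningRate * grad * x[1]
        bias -= learningRate * grad
    }
}

// After training, outputs should be near 0, 0, 0, 1.
for x in inputs {
    print(x, sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias))
}

Worth noting: the Neural Engine doesn't handle the training step at all. It accelerates the multiply-and-add arithmetic in the middle of that loop (the inference part) once a model has already been trained.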

But there are a lot of issues with the tech and a long way to go in learning how to use it properly. Per the old adage: garbage in, garbage out. Bad training data, biases in the data, etc., get reproduced in the result. So while there are those touting it as a way to take “human bias” out of the equation, the results tend to show that it really doesn’t. Gaps in Google’s training data meant their photo identification neural net tagged black people as gorillas. Similar gaps mean facial recognition used to identify criminal suspects leads to a lot of false positives for certain skin colors (“y’all look alike to me” in computer form). Using models to hand down sentences to avoid judicial bias just replicated the bias, because it was trained on historical data and had limited inputs. The list goes on.

As for Apple, the machine learning models tend to feed into Siri or image processing (as others have pointed out). Things like “when should I charge the battery to 100% instead of 80%?” or “should I suggest this common action to the user?” are using models run on the Neural Engine. Many companies prefer to run these models in the cloud when they can, so they can iterate quickly, but Apple is investing in running them locally to keep personal information on the device where possible.
 
I used to watch powermetrics to try to figure out when the Neural Engine would kick into high gear. The only instances I saw frequently were when viewing photos and typing text.
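
For anyone who wants to try the same thing: on Apple silicon, powermetrics can report ANE power draw directly. The sampler name below is from memory, so check man powermetrics on your machine before trusting it:

sudo powermetrics --samplers ane_power -i 1000

A sustained nonzero ANE power reading while an app is in use is a decent hint that its work is actually landing on the Neural Engine rather than the CPU or GPU.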

Of course it's likely used for numerous other things as well (including, as others have mentioned, optimized battery charging and so forth). But it seems to be most often used to accelerate text prediction and pattern matching/text detection in photos, alongside certain image processing algorithms for meeting applications and so forth.

I'm curious to see what the other uses for it are as well. It's interesting that they included the Neural Engine at all, since GPUs already do a decent job with neural-net workloads. Of course the ANE can accelerate those workloads at much lower power, so my guess is that Apple has big plans for it long-term.
 
Anything using the Core ML framework in macOS can (potentially) utilize the ANE. Core ML is actually the only way to access it, afaik. The direct ANE framework is “private” and can’t be used otherwise.

The model processing automatically gets scheduled to the CPU, GPU, or ANE.
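
If it helps, this is roughly what that looks like from the developer side. You don't schedule anything yourself; you just state which compute units Core ML is allowed to use, and it partitions the model across them. The model path here is a placeholder:

import CoreML
import Foundation

let config = MLModelConfiguration()
config.computeUnits = .all                    // default: Core ML may use CPU, GPU, or ANE
// config.computeUnits = .cpuAndNeuralEngine  // macOS 13+: keep work off the GPU
// config.computeUnits = .cpuOnly             // e.g. for debugging numerical differences

let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc") // placeholder
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    // model.prediction(from:) runs inference; where each layer executes is up to Core ML.
} catch {
    print("Failed to load model: \(error)")
}

Note that computeUnits is a constraint, not a command: even with .all, Core ML decides per layer what actually runs on the ANE, and falls back to the GPU or CPU for operations the ANE doesn't support.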
 
It can.

When I know the cores are thermally throttling, and therefore slowing down a render, I can manually set the fans to max speed, which stops the throttling so the render completes in less time.
That's what the "High Performance" mode is for: System Settings > Battery.
 