Nah, not quite. Siri will SOUND like it's making sense and give you a very eloquent, detailed answer which will be completely wrong in every way.

Why do people trust generative AI? It's the latest stupid fad in the tech industry like NFTs were. It's going to make our current torrent of misinformation worse and cause a lot more harm than good. Just, no.
There's actually no way people are this short-sighted. You genuinely believe that no error/fact-checking mechanisms can be developed for generative AI? Bing's GPT-based chat already does something similar, grounding its answers in cited web sources.
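
For what it's worth, the basic grounding idea is easy to sketch. This is a minimal illustration, not Bing's actual pipeline; the model name and the pre-fetched `sources` list are assumptions for the example (OpenAI Python SDK, v1-style):

```python
from openai import OpenAI  # assumes openai>=1.0 is installed

def grounded_answer(question: str, sources: list[str]) -> str:
    # Number the pre-fetched sources so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below, citing them like [1]. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```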

I agree with you on the NFT take, they're truly useless, but generative AI has actual tangible benefits. I've already used DALL·E to make images for my PowerPoints, and I've used ChatGPT to help me put together small scripts for random stuff.
 
Smaller models also execute faster. I'm sure everyone is trying for that; however, there are still real differences between optimizing to run on-device and running in the cloud. Apple is likely the only one really pushing on-device execution.
The open source community has spent the better part of this year getting smaller, optimized models to run on a PC. It's a compromise though, and I have a hard time seeing this working on a phone in the near-to-mid term.
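
Roughly what that looks like in practice; a minimal sketch assuming a transformers build with bitsandbytes 4-bit support, and the model id is just a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-7b-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,   # ~4x smaller than fp16 weights, at some quality cost
    device_map="auto",   # let accelerate place layers on available hardware
)

inputs = tokenizer("Quantization trades quality for", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```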

That aside, Apple first has to worry about getting it to work at all in a way that's appropriate and useful for the average user. It's not a given they'll manage to have something robust and GPT-3/4-level ready for the next iPhone launch. They'll probably go for something more limited, targeting specific use cases.
 
It's strange, but it's kind of like a way of programming using language.
In some ways, yes. Most people using Python today have no experience or interest in machine-level programming, dealing with registers and interrupts, etc.

So an LLM like ChatGPT can be thought of as a giant interpreter that accepts natural English and outputs a product.
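
In that framing, the "program" is just an English sentence. A minimal sketch with the OpenAI Python SDK (v1-style; the model name is a placeholder):

```python
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "source code" is plain English; the output is the product.
resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Write a Python one-liner that reverses a string."}],
)
print(resp.choices[0].message.content)
```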

Still not AI, though.
 
The M1/M2 has such limited GPU compute that it's around 100x slower than an H100.
Strikes me as a bit harsh on the M1/M2, but I also think we're missing a big point here.

These Apple silicon chips have what Apple markets as the "neural engine". It's not used much, especially by third-party software developers; as near as I can tell, said developers have no direct control over the engine, Apple's own proclamations (https://machinelearning.apple.com/research/neural-engine-transformers) aside.
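
To make the "no control" point concrete: the closest a developer gets is a compute-unit hint through Core ML. A sketch assuming coremltools 6+, with a hypothetical model file:

```python
import coremltools as ct  # assumes coremltools >= 6

# You can *ask* for the Neural Engine, but you can't force it:
# Core ML decides per-layer whether work runs on CPU, GPU, or ANE.
model = ct.models.MLModel(
    "MyModel.mlpackage",                      # hypothetical model file
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # a hint, not a guarantee
)
```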

Apple claims said engine is capable of tens of TOPS (trillions of operations per second).

Down at the device level it will be some sort of SIMD, an array processor of some iteration (Apple plays coy with the details.)

Assuming this engine is actually useful (prove it to me!), nothing is stopping Apple from making a greatly scaled up version of it.

Nothing except money.

And why should Apple invest billions of dollars in chip development that cannot be directly commercialized and would serve only their own (LLM) development, when another party (say, Nvidia) already has product available to do the job?

Still, Apple may decide to go the route of making a unique hardware architecture simply for their own consumption.

Whatever Apple is doing, I doubt Kuo's rumor machine is going to give us the truth.
 
If Siri can finally understand who is adding stuff to the shopping list, it will be $4.75 billion well spent.
 
Reminder that Apple is already investing heavily in AI hardware, on the client side.

The ANE in the A17 Pro is 2x as fast as that in the A16. I expect the same 2x improvement between the ANE in the M2 series and the M3 series. The M2's ANE was 1.4x as fast as the one in the M1, so Apple actually seems to be accelerating its investment in neural network performance on Apple Silicon.
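
Using Apple's published throughput figures (in trillions of operations per second), the ratios work out like this:

```python
# Apple's published ANE figures, in trillions of ops per second (TOPS)
ane_tops = {"M1": 11.0, "M2": 15.8, "A16": 17.0, "A17 Pro": 35.0}

print(f"M2 vs M1:       {ane_tops['M2'] / ane_tops['M1']:.2f}x")        # ~1.44x
print(f"A17 Pro vs A16: {ane_tops['A17 Pro'] / ane_tops['A16']:.2f}x")  # ~2.06x
```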
 
Apple doesn't have to worry about this. They can be penny pinchers and they'll still do fine. Because TINA: there is no alternative.
 
General AI is coming, whether you like it or not. It is hugely promising (and potentially very dangerous), but I haven't seen a killer application of the current tech that will make the masses drop their iPhones or MacBooks. Apple is still in the (AI) race, but they will have to reboot Siri and make it feel like a truly reliable personal assistant instead of a party trick… oh, and make it secure, encrypted, and private please. That last bit seems to be sorely missing from Meta, Microsoft, and Google…
 
Strikes me as a bit harsh on the M1/M2, but I also think we're missing a big point here.

It's not harsh, it's reality. The M1/M2 is terrible for deep learning, and Apple itself says the Neural Engine is only good for inference.

If Apple Silicon were useful for this, companies would be buying loads of Mac minis instead of paying Nvidia billions of dollars for 10,000 servers at $500k each.

You can't just scale up a processor. The transistors need to be designed for high speed and power. Why do you think AMD and Intel still beat the M2 Ultra when power doesn't matter? For AI training, power draw and cooling aren't the constraint. You also need to design a very high-speed interconnect, and you can't magically have one appear.

If engineering teams needed nothing except money, then Apple would have shipped their own 5G modem years ago. You need talent. You need leadership and foresight.
 
And does Apple know that 20k+ servers need to be administered and maintained? I mean, iCloud is already crashing weekly, just saying.
 
Tim, don't waste your money on this; AI is just another name for the Metaverse. Just focus on making the computers faster. Focus on things that matter, like:

- 1 month battery life on a single charge
- Instant on for all devices
- Improve the satellite services further: FaceTime over satellite, plus off-grid tracking so family and close friends can know exactly where you are
- Translation: the ability to easily communicate across Mac, iPad, and iPhone users in different languages through iMessage
- Ability to record a Hollywood feature film on a single charge
- 100x zoom
- Finder for iPad
- Detachable MacBook Display that becomes an iPad with Pencil support
- 32 inch iMac with OLED
- 100 GB of free iCloud storage by default
- 1 TB of physical storage
- Give Mac Minis and MacBook Airs to schools and colleges at deep discounts
- Have a $1-billion-a-year developer academy
- Seamless Windows integration by including a personality mode licensed from Microsoft, optional for users who need it
- No need to ever reboot a device for point updates, they just rapidly and transparently install in the background with no disruption
- Ability to use multiple apps at the same time on an iPhone, e.g. take a FaceTime call while web browsing
- Seamless migration from any other platform: Linux, Windows, ChromeOS to an Apple device
- Device pre-emptively knows my presence and brings the right apps up on screen based on which device I was working on last and what time of day it is
- A communication status feature that tells others the user is not at work, so stop bothering them
 
Yes, but that's also an opportunity perfectly suited for Apple... as they can optimize the training models to be smaller and more efficient to process. I've appreciated that Apple keeps their resources (RAM, etc.) lighter... as it forces developers to work harder to optimize their apps. As soon as you give developers massive amounts of headroom, they get lazy, fast.
But guess what that requires? For iOS to be open and to let data flow back and forth, adapting and getting better, etc. That is Apple's kryptonite, and it's now coming home to roost as Alexa, Bixby, and Google Assistant run off over the horizon.
 
Kuo thinks that Apple is purchasing servers equipped with Nvidia's HGX H100 8-GPU for generative AI training, with the company planning to upgrade to B100 next year.
So I guess the M2 Ultra with Neural Engine just doesn't cut it, huh? :p I remember Apple talking such a big game about how the M2 Ultra was better than using discrete GPUs for machine learning during WWDC this year:

"Finally, the 32-core Neural Engine is 40% faster. And M2 Ultra can support an enormous 192GB of unified memory, which is 50% more than M1 Ultra, enabling it to do things other chips just can't do. For example, in a single system, it can train massive ML workloads, like large transformer models that the most powerful discrete GPU can't even process because it runs out of memory." (source)
 
Better hope China does not invade Taiwan. Where do you think Nvidia fabricates all these chips? With the AI boom, TSMC is an even more critical supplier than before. I'm glad to see several chip fabs being built in the US again, but it will take years to rival the sheer scale and wafers-per-day output of TSMC's super fabs.
You know it's useless for China to invade Taiwan just for TSMC, right? TSMC is useless if ASML stops selling them machines.
 
Not surprising. A good investment that could pay off in future Apple products.

Adobe is certainly onboard with AI.
 
Also the same guy who was absolutely positive that titanium = overheating.

I had my doubts about that one when I ran a Geekbench 6 CPU test on my 15 Pro and it was the first iPhone I've ever had that didn't dim the screen during the test.

That said, I do hope Apple invests heavily in AI if they're going to take a stab at it. Siri still feels primitive after all these years and I wouldn't want any potential AI features to have the same fate.
 
It's not harsh, it's reality.
It's harsh because you are criticizing the M1/M2 for not being able to do something it was never intended to do.

The M2 was never intended for large research and development projects whose goal is to replicate or surpass ChatGPT-style LLMs.

The M1/M2 are intended for end-user applications.

You might as well criticize the M2 for not replacing IBM's Watsonx or IBM's Quantum System One.
 
it can train massive ML workloads
The word "massive" is doing a lot of work in that claim.

For people who want to do their own projects, sure, get an M2 Ultra.

But from what the OP implies (not that I trust these rumor mongers), it seems like Apple wants to deploy across their server farms. The M2 Ultra (or its follow-on chips) may be fine for work in those server farms.

But for development and truly massive training (if one wants to train an LLM on internet-scale data, for example; and remember that more than English is required, we're talking training in many languages), one will want a system where teraflops are not enough.
 
It's easy to have Siri be the butt of every joke on here. But I personally think she's gotten monumentally better. The old stigma around Siri is still reverberating around the internet, much like "Apple Maps is still horrible compared to Google Maps," "Verizon is still way better than T-Mobile," etc. Time to embrace some change, people.

I for one am ecstatic at this news.
 