Off the top of my head:
1. No one is making any money in it.
2. Open source means there are no secrets, no moats.
3. The easily available data sources have already been scraped.
4. LLMs may not be the future of AI; they're not in any way ready for prime time or critical systems, and possibly never will be.
5. The inherent nature of today's LLMs means they need prompts and "alignment" under the hood to keep them from straying.

The foolish belief in a race to AGI with a winner-takes-all jackpot. This is a fallacy of people who live inside cubicles and have never experienced the real world or touched grass. To believe that once you achieve AGI everything stops for everyone else and no one will ever catch up is something only homeschooled children could believe. The world does not work that way.
On what are you basing your assumption about AGI? Do you have experience in the field?
 
On what are you basing your assumption about AGI? Do you have experience in the field?
No one does. It doesn't exist. It's all assumptions, all predictions, all guesses, but there are plenty of real-world examples and plain common sense pointing to it being a fallacy, whereas there are zero, none, supporting the winner-takes-all scenario or any of the doom prophecies.
 
No one does. It doesn't exist. It's all assumptions, all predictions, all guesses, but there are plenty of real-world examples and plain common sense pointing to it being a fallacy, whereas there are zero, none, supporting the winner-takes-all scenario or any of the doom prophecies.
Ok.
 
Off the top of my head:
1. No one is making any money in it.
2. Open source means there are no secrets, no moats.
3. The easily available data sources have already been scraped.
4. LLMs may not be the future of AI; they're not in any way ready for prime time or critical systems, and possibly never will be.
5. The inherent nature of today's LLMs means they need prompts and "alignment" under the hood to keep them from straying.

The foolish belief in a race to AGI with a winner-takes-all jackpot. This is a fallacy of people who live inside cubicles and have never experienced the real world or touched grass. To believe that once you achieve AGI everything stops for everyone else and no one will ever catch up is something only homeschooled children could believe. The world does not work that way.
Absolutely correct. Not one AI company is making money on its AI; all they have is investor money and contracts with larger tech firms like Amazon, Google, Microsoft, the DoD, etc. And by "making money" I mean profit.

OpenAI is about the worst. They even lose money per query on their most expensive paid tier. Even Microsoft's CEO, in a recent interview, dismissed the notion that AGI is happening anytime soon from anyone, which flies directly in the face of Sam Altman, who said that ChatGPT would soon be able to achieve AGI. Part of the reason for Microsoft's broadside against Sam Altman and OpenAI is that their deal requires OpenAI to meet certain performance and profitability goals for continued Microsoft investment. In return, Microsoft has agreed NOT to develop its own in-house AI until the exclusivity clause it signed with OpenAI has been fulfilled through those performance metrics. It looks like Microsoft MAY be looking for an out and is telegraphing this by dogging on ChatGPT and Sam Altman's claims of AGI.

Which, if true… and if Apple really is about to ink a deal with either OpenAI or Anthropic to replace Siri, then my money is on Apple choosing OpenAI. Why? Sam Altman may see the writing on the wall with Microsoft and is desperate for another high-profile deal to keep investor interest and their money coming in. I doubt Anthropic gets chosen, simply because they are tied way too much into Amazon and AWS, along with their shady ties to the mass AI surveillance firm Palantir.
 
Absolutely correct. Not one AI company is making money on its AI; all they have is investor money and contracts with larger tech firms like Amazon, Google, Microsoft, the DoD, etc. And by "making money" I mean profit.

OpenAI is about the worst. They even lose money per query on their most expensive paid tier. Even Microsoft's CEO, in a recent interview, dismissed the notion that AGI is happening anytime soon from anyone, which flies directly in the face of Sam Altman, who said that ChatGPT would soon be able to achieve AGI. Part of the reason for Microsoft's broadside against Sam Altman and OpenAI is that their deal requires OpenAI to meet certain performance and profitability goals for continued Microsoft investment. In return, Microsoft has agreed NOT to develop its own in-house AI until the exclusivity clause it signed with OpenAI has been fulfilled through those performance metrics. It looks like Microsoft MAY be looking for an out and is telegraphing this by dogging on ChatGPT and Sam Altman's claims of AGI.

Which, if true… and if Apple really is about to ink a deal with either OpenAI or Anthropic to replace Siri, then my money is on Apple choosing OpenAI. Why? Sam Altman may see the writing on the wall with Microsoft and is desperate for another high-profile deal to keep investor interest and their money coming in. I doubt Anthropic gets chosen, simply because they are tied way too much into Amazon and AWS, along with their shady ties to the mass AI surveillance firm Palantir.
Zuckerberg is supposedly buying up all the top talent to catch up, reportedly paying $100 million signing bonuses on top of salaries to poach people for his Meta Superintelligence Labs. Things have gone off the rails, but no one wants to bet against Mark on this. The media points out that "Mark bet on Instagram," so he has to be right about everything in tech.
 
Rumors like this and the rumor of Apple buying Perplexity give me no hope that a decent, working Siri is coming out in Spring. I think they are still very early in the planning stages and just trying to figure out who they can hire, who they can cut a check to, and what they need to do to get out of last place.
There's a lot of money in this space - so there's a lot of money in speculation as well.

Apple's Siri strategy is to have three tiers:

A (series of) local model(s) for actions. This is a small model that runs off of local data and doesn't need world knowledge (it isn't meant to answer trivia questions about WWI). If it can't answer the question, it needs to package the question up along with a subset of local information and send it to private compute.

Remote action model(s) that run on Apple's private compute. Again, no need for world knowledge.

Escalation to a third-party service like ChatGPT, with knowledge models, to ask it to search the internet for information, etc.

There's no place in that architecture for ChatGPT or Claude except the final tier - unless Apple is asking them to do custom, Apple-specific work building new action-oriented models to run on-device or on private compute. Considering OpenAI is hoping to replace the iPhone in the future, this would also be a very difficult business deal to negotiate.
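
To make that concrete, here's a rough sketch of the routing logic in Swift. This is purely illustrative; the tier names and types are mine, not anything Apple has published.

```swift
// Illustrative sketch of the three-tier routing described above.
// These types and names are invented for the example; they are not Apple APIs.

enum Tier {
    case onDevice        // small local action model, no world knowledge
    case privateCompute  // larger action model on Apple's private compute
    case thirdParty      // external knowledge model (ChatGPT-style) for world knowledge
}

struct SiriRequest {
    let text: String
    let needsWorldKnowledge: Bool        // trivia, web search, open-ended questions
    let localContext: [String: String]   // subset of on-device data relevant to the action
}

func route(_ request: SiriRequest, localModelIsConfident: Bool) -> Tier {
    // Anything needing world knowledge escalates straight to the third-party tier.
    if request.needsWorldKnowledge { return .thirdParty }
    // The small on-device model handles the action when it's confident.
    if localModelIsConfident { return .onDevice }
    // Otherwise, package the question plus the local context subset
    // and hand it to the remote action model on private compute.
    return .privateCompute
}
```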
 
If you still believe the Apple Silicon chip is so great, why don't you explain why Apple is not able to develop AI on its own instead of relying on others, just like this article says?
This article states (be it speculation or not) that these companies are porting to Apple's compute servers - which are M3 Ultra based, and where nary an Nvidia core exists.
 
There are OS's one can run (desktop and mobile) that aren't infiltrated with AI slop, yes.
Desktop, yes, because of Linux. While there are smartphones around that run Linux as their OS, they are not great, and you will not get the apps, so you may as well go for a normal non-smart phone.
I doubt Apple will force AI onto you; it can be disabled now, so I doubt that will change.
 
There's a reason why every processor in every device from Apple, Android, and now Windows PCs has an NPU (Neural Processing Unit). AI is not just Genmoji and chatbots. The days of AI "opt-in or out" are quickly diminishing. By 2030 or sooner those options will be gone on all platforms.
We will wait and see. Not all processors have NPUs, and while those without one can still be used for AI, they will be slower. I doubt very much that the end of opting in and out of AI is as close as you think.

If I update my PC, the CPU I am looking at, an AMD Ryzen 5, doesn't have any AI capabilities.
At the end of the day, why should we be forced to use AI stuff? Or are they that worried that people will not trust it, so they have to force it onto us?

It will happen at some point, I expect, but not in the next five years; they need to make it more reliable and accurate first.
 
This article states (be it speculation or not) that these companies are porting to Apple's compute servers - which are M3 Ultra based, and where nary an Nvidia core exists.
Why would they rely on OTHER companies? It only proves that Apple can't make such things due to poor hardware and software. Besides, the M3 Ultra is not even close to being "enough". Is the M3 Ultra even close to an Nvidia A100?
 
This article states (be it speculation or not) that these companies are porting to Apple's compute servers - which are M3 Ultra based, and where nary an Nvidia core exists.
Let's clear things up. Apple is training their in-house LLMs using Nvidia GPUs, a metric crap ton of them. Once they are trained, they are segmented into large-, medium-, and small-parameter models for cloud-based, edge-based, and local on-device deployment. The cloud-based ones are being deployed on Apple servers built on the M3 Ultra or, as I've also heard, on M4 Ultra based servers, because they run much cooler than M3s.
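
If that's roughly how it works, the segmentation step boils down to one trained model family cut to different parameter budgets by deployment target. A toy sketch with made-up sizes, since I'm only guessing at the actual parameter counts:

```swift
// Illustrative only: "segment into large/medium/small" as a mapping from
// deployment target to a parameter budget. The numbers are guesses,
// not Apple's actual model sizes.

enum DeploymentTarget {
    case cloud      // Apple silicon servers / private compute
    case edge       // heavier near-device hardware
    case onDevice   // iPhone / iPad / Mac local model
}

func parameterBudgetInBillions(for target: DeploymentTarget) -> Double {
    switch target {
    case .cloud:    return 70  // large-parameter variant
    case .edge:     return 7   // medium-parameter variant
    case .onDevice: return 3   // small-parameter variant
    }
}
```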
 
Tell that to Nvidia's more-than-90% market share in the AI field. The M3 Ultra is only at an RTX 5070 Ti's level of performance, and that's nothing compared to an A100 and newer versions. Besides, Apple doesn't even have anything like CUDA, which is how Nvidia dominates the market. Yet you admitted yourself that it's only good for LOCAL AI models, which is a joke.

If you still believe the Apple Silicon chip is so great, why don't you explain why Apple is not able to develop AI on its own instead of relying on others, just like this article says?

It's only DENYING the truth and the facts, after all.
Uhhh what other kind of AI besides local would you need a powerful GPU for? Your statement makes no sense at all. I admitted what exactly? What other kind of AI besides local do you run on those Nvidia GPUs that you are so hot on?

If you want to use remote AI servers, you could do that on an Apple II with an ethernet card. If you aren't using remote servers, then you are using local AI... lol.

Apple's problems developing AI have absolutely nothing to do with their chips. We're running all the open-source LLMs on Apple Silicon right now and it's very powerful.

Also, the reason the M3 Ultra is so good for AI is the 512GB of unified memory. It's obvious you don't understand any of this from your absolutely absurd statement that the M3 Ultra is like an RTX 5070 Ti... a card that only has 16GB of VRAM. Lol.
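
Quick back-of-the-envelope, since capacity is the whole point here: the first question for running an LLM locally is whether the weights fit in fast memory at all. My own rough numbers, not a benchmark:

```swift
// Rough weight footprint: parameters (in billions) x bytes per parameter.
// A 70B model at FP16 (2 bytes/param) is ~140 GB of weights before you even count the KV cache.
func weightFootprintGB(parametersInBillions: Double, bytesPerParameter: Double) -> Double {
    return parametersInBillions * bytesPerParameter
}

let seventyB = weightFootprintGB(parametersInBillions: 70, bytesPerParameter: 2) // ≈ 140 GB
print(seventyB <= 16)   // false: nowhere near fitting in a 16 GB card without aggressive quantization
print(seventyB <= 512)  // true: fits in 512 GB of unified memory with room to spare
```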
 
Uhhh what other kind of AI besides local would you need a powerful GPU for? Your statement makes no sense at all. I admitted what exactly? What other kind of AI besides local do you run on those Nvidia GPUs that you are so hot on?

If you want to use remote AI servers, you could do that on an Apple II with an ethernet card. If you aren't using remote servers, then you are using local AI... lol.

Apple's problems developing AI have absolutely nothing to do with their chips. We're running all the open-source LLMs on Apple Silicon right now and it's very powerful.
There are many applications that require high GPU performance, such as 3D, gaming, AI, and more. Besides, even photography, video, and design now need higher GPU performance thanks to AI. Now you are only justifying NOT needing it.

Yes, it does. They don't have any hardware to run and develop their own AI and LLMs, while Apple themselves are disputing whether LLMs are even useful. And tell that to the Nvidia A100. You must be dreaming if you think the M3 Ultra is powerful.

Also, the reason the M3 Ultra is so good for AI is the 512GB of unified memory. It's obvious you don't understand any of this from your absolutely absurd statement that the M3 Ultra is like an RTX 5070 Ti... a card that only has 16GB of VRAM. Lol.
This is one of the biggest misconceptions about unified memory. Yes, it gives you a lot of memory, BUT the speed or bandwidth is WAY slower than a server-grade GPU or even an RTX 5090, there is no ECC, which largely defeats the purpose of having that much memory, and it is only good for a single device, not servers.

Don't forget that having multiple M3 Ultras is the same thing as adding more GPUs, which also makes unified memory's bandwidth advantage moot, since they are separate devices and not directly connected to each other in a server.

Ironically, if you have ever run Stable Diffusion on both a PC and a Mac, an Nvidia GPU with WAY less VRAM runs WAY faster than an Apple Silicon Mac with 128GB of RAM, because RAM size is NOT everything; you also need to consider the speed of the GPU itself and of the VRAM.
 
There are many applications that require high GPU performance, such as 3D, gaming, AI, and more. Besides, even photography, video, and design now need higher GPU performance thanks to AI. Now you are only justifying NOT needing it.

Yes, it does. They don't have any hardware to run and develop their own AI and LLMs, while Apple themselves are disputing whether LLMs are even useful. And tell that to the Nvidia A100. You must be dreaming if you think the M3 Ultra is powerful.


This is one of the biggest misconceptions about unified memory. Yes, it gives you a lot of memory, BUT the speed or bandwidth is WAY slower than a server-grade GPU or even an RTX 5090, there is no ECC, which largely defeats the purpose of having that much memory, and it is only good for a single device, not servers.

Don't forget that having multiple M3 Ultras is the same thing as adding more GPUs, which also makes unified memory's bandwidth advantage moot, since they are separate devices and not directly connected to each other in a server.

Ironically, if you have ever run Stable Diffusion on both a PC and a Mac, an Nvidia GPU with WAY less VRAM runs WAY faster than an Apple Silicon Mac with 128GB of RAM, because RAM size is NOT everything; you also need to consider the speed of the GPU itself and of the VRAM.
I don't even know what this argument is about anymore. I never said Nvidia GPUs were bad or worse than Apple Silicon, I just said Apple Silicon is powerful for AI purposes and Nvidia is overrated.

Same for your memory bandwidth argument. I'm not claiming the M3 Ultra is faster than Nvidia, nor that memory size is "everything", but if you have a model that is too big to fit in VRAM or unified memory, then memory bandwidth is worthless, because the model becomes so slow it's unusable. The M3 Ultra is a compelling option compared to spending the same amount on crazy Nvidia server GPUs, and it is better *in some ways*.
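
To put very rough numbers on that fit-versus-speed point (my own figures, not benchmarks): single-stream decoding is mostly memory-bound, since the weights have to stream through memory roughly once per generated token.

```swift
// Crude upper bound on decode speed for a memory-bound model:
// tokens/sec ≈ memory bandwidth / bytes of weights read per token.
func decodeTokensPerSecUpperBound(modelSizeGB: Double, bandwidthGBps: Double) -> Double {
    return bandwidthGBps / modelSizeGB
}

// ~140 GB of FP16 weights against ~800 GB/s of unified memory bandwidth: slow, but usable.
print(decodeTokensPerSecUpperBound(modelSizeGB: 140, bandwidthGBps: 800)) // ≈ 5.7 tokens/sec

// The same model spilling past 16 GB of VRAM onto a ~64 GB/s PCIe/system-RAM path: effectively unusable.
print(decodeTokensPerSecUpperBound(modelSizeGB: 140, bandwidthGBps: 64))  // ≈ 0.46 tokens/sec
```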

Also, your premise of using market share as a mark of superiority is ridiculous. By that logic, Windows is the best OS on the planet. Nvidia has a monopoly that didn't even start with AI; it started with PC gaming. It was a dumb argument then and it's still dumb now in the AI age.

Bottom line, Apple can do whatever they want. They are not technically incapable. If they wanted to compete with Nvidia in the server AI space, they could. Apple's priorities are often not fully understood, and sometimes they are just downright screwy. I don't agree with all their decisions... or maybe even most. But Nvidia sucks as a company and their products are way overpriced and overrated. And again, Apple's inability to make a smart Siri or do whatever else with AI models has nothing to do with their ability to make hardware. Honestly, I strongly believe Apple's inability to turn Siri into a smart LLM comes down to one thing: LLMs are extremely chaotic and unpredictable, and Apple hates that. They are control freaks and they prefer a dumb Siri to an unpredictable one. Their rigidity is holding them back, as it often does.

If you ever watch Alex Ziskind's YouTube channel, he has tested LLMs on lots of different hardware, and Apple Silicon often beats Nvidia in many use cases, especially when price is a factor. It is a viable competitor, whether you want to admit it or not.
 
I don't even know what this argument is about anymore. I never said Nvidia GPUs were bad or worse than Apple Silicon, I just said Apple Silicon is powerful for AI purposes and Nvidia is overrated.
You already contradicted your own statement, which is ironic.

Same for your memory bandwidth argument. I'm not claiming the M3 Ultra is faster than Nvidia, nor that memory size is "everything", but if you have a model that is too big to fit in VRAM or unified memory, then memory bandwidth is worthless, because the model becomes so slow it's unusable. The M3 Ultra is a compelling option compared to spending the same amount on crazy Nvidia server GPUs, and it is better *in some ways*.

Also, your premise of using market share as a mark of superiority is ridiculous. By that logic, Windows is the best OS on the planet. Nvidia has a monopoly that didn't even start with AI; it started with PC gaming. It was a dumb argument then and it's still dumb now in the AI age.

Bottom line, Apple can do whatever they want. They are not technically incapable. If they wanted to compete with Nvidia in the server AI space, they could. Apple's priorities are often not fully understood, and sometimes they are just downright screwy. I don't agree with all their decisions... or maybe even most. But Nvidia sucks as a company and their products are way overpriced and overrated. And again, Apple's inability to make a smart Siri or do whatever else with AI models has nothing to do with their ability to make hardware. Honestly, I strongly believe Apple's inability to turn Siri into a smart LLM comes down to one thing: LLMs are extremely chaotic and unpredictable, and Apple hates that. They are control freaks and they prefer a dumb Siri to an unpredictable one. Their rigidity is holding them back, as it often does.

If you ever watch Alex Ziskind's YouTube channel, he has tested LLMs on lots of different hardware, and Apple Silicon often beats Nvidia in many use cases, especially when price is a factor. It is a viable competitor, whether you want to admit it or not.
A single computer versus a server is a whole different story, and do companies use only one computer for AI development and everything else? Seriously, nothing you're saying makes sense when you are trying to defend Apple's problem. Go tell other people that the M3 Ultra is more powerful than Nvidia A100 servers; I would just laugh.

No matter what you say, Apple's current situation is already great evidence for my claim, since they can't even develop their own AI services and do their own AI development.
 