> It's all very clever that it can do it, but to what end? I keep hearing of a nebulous "AI Researcher" having a use for this, but to do what exactly? Running LLMs locally has fascinating potential, but who practically needs to run the gigantic model locally?

Probably what Apple Intelligence was hoping to do...
> It's all very clever that it can do it, but to what end? I keep hearing of a nebulous "AI Researcher" having a use for this, but to do what exactly? Running LLMs locally has fascinating potential, but who practically needs to run the gigantic model locally?

Running AI models to do market research, analyze health data of individuals, patient groups, or populations, run patent searches, develop a business strategy, analyze people's reactions to communications (political or just advertising), develop clinical development strategies, and draft clinical trial protocols. All stuff I would do with an AI (current LLMs are not there yet, but agentic models will be soon) and where I do NOT want anyone knowing what I do or that I do it.
> 448GB of VRAM, not virtual memory (V = Video)!

Yes. That should be corrected in the article. I was a bit confused when I read "virtual memory".
If virtual memory were a usable solution to anything in this particular computing space, there would be nothing terribly special about this machine.

It's all very clever that it can do it, but to what end? I keep hearing of a nebulous "AI Researcher" having a use for this, but to do what exactly? Running LLMs locally has fascinating potential, but who practically needs to run the gigantic model locally?
> Yes, but more 10 than 5.

If SanDisk's high-bandwidth Flash takes off, then it might be sooner rather than later. It'll allow LLMs to use cheap, high-density flash memory instead of expensive main memory.
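For memory-bound autoregressive decoding, a rough tokens-per-second figure is just memory bandwidth divided by the bytes streamed per token (roughly the active weights). A quick sketch of why the memory tier matters; the bandwidth numbers below are illustrative assumptions, not vendor specs:

```python
# Back-of-envelope decode speed for a memory-bound LLM:
# each generated token streams the active weights once, so
# tokens/sec ~= memory bandwidth / bytes of active weights.
# All bandwidth figures here are illustrative assumptions.

def tokens_per_sec(bandwidth_gbps: float, active_params_b: float,
                   bytes_per_param: float) -> float:
    """bandwidth_gbps: memory bandwidth in GB/s;
    active_params_b: parameters read per token, in billions;
    bytes_per_param: 2.0 for fp16, 0.5 for 4-bit quantization."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbps * 1e9 / bytes_per_token

# Hypothetical comparison: fast unified DRAM vs. a slower flash tier,
# for a model with 37B active parameters at 4-bit quantization.
for name, bw in [("unified DRAM (~800 GB/s)", 800),
                 ("hypothetical flash tier (~100 GB/s)", 100)]:
    print(f"{name}: ~{tokens_per_sec(bw, 37, 0.5):.1f} tok/s")
```

Flash is cheap per gigabyte, but the bandwidth gap is why it has to get much faster before it can substitute for main memory rather than just hold the cold weights.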
Really impressive, but... I'm predicting AMD and Intel will deliver something similar for half the price pretty soon.
AMD may be reluctant (they probably prefer selling pro GPUs for more) but Intel probably will.
What it does prove, though, is that the $500bn server centre for ~~Skynet~~ ChatGPT is largely a load of bollocks.
Maybe Apple could sell a dedicated LLM version of their Mac Studio to offer middle-to-small businesses their own localized AI server. They could allow other "open source" systems to run on it and earn from the hardware sold.

Since Apple is *already* making their own AI servers, this is a distinct possibility.
Whether or not they'll do it is another question. I think they will, but they have other issues to fix first. The optics of offering an AI server when Siri and Apple Intelligence are struggling would be poor.
> Would it be safe to say that in 5-10 years a smartphone will be able to run a model like this internally and without the internet?

Not without some major technology breakthrough, no. Phones are not meant for this type of massive computing problem. They can run smaller models, but I would never, ever expect a small device such as a phone to efficiently run a large AI model. It makes more sense to offload this task to servers that are able to run models like this.
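Whether a phone could ever hold such a model comes down to simple arithmetic: weight storage is parameter count times bytes per parameter, before any KV cache or runtime overhead. A sketch comparing the models mentioned in this thread against an assumed ~12 GB of phone RAM:

```python
# Minimum weight memory = parameters * bytes per parameter.
# This ignores KV cache and runtime overhead, so real needs are higher.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Gigabytes needed just to hold the weights."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

PHONE_RAM_GB = 12  # assumption: a high-end phone

for name, params in [("DeepSeek R1 (671B)", 671), ("QwQ (32B)", 32)]:
    for bits in (16, 4):
        need = weight_gb(params, bits)
        fits = "fits" if need <= PHONE_RAM_GB else "does not fit"
        print(f"{name} @ {bits}-bit: ~{need:.0f} GB -> "
              f"{fits} in {PHONE_RAM_GB} GB RAM")
```

Even at 4-bit quantization, a 671B-parameter model needs hundreds of gigabytes just for weights, which is why the question is about servers, not handsets.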
> Wasn't a huge portion of that for training the models? Not using the models?

Without access to copyrighted data (everything from blog posts and YouTube videos to literature, music, and consumer data), they've reached the limits of what an LLM can do. They won't admit this to Wall Street; if they do, funding dries up and the lights go off, because customers are not paying for it and probably never will.
> Would it be safe to say that in 5-10 years a smartphone will be able to run a model like this internally and without the internet?

No. There is already the QwQ model, comparable to R1 with a 20 times smaller parameter count, so you will be able to do it much sooner, if not already.
> Nope. Apple is way out ahead here.

What are you talking about? Do they have some alien tech others don't?