"Would it be safe to say that in 5-10 years a smartphone will be able to run a model like this internally and without the internet?"

Since you will be using 8GB RAM iPhones in ten years' time, that is a definite "no".
"…hospitals or medical offices or labs that want to crunch tons of patient data and get insights from it, within the scope of HIPAA privacy regulations…"

Please provide a source for where DeepSeek running locally can achieve this. Not being a jerk, but all AI models aren't created equal, and there is a lot of proprietary work going on at OpenAI, Microsoft, Anthropic, etc. that is not published research.
Basically, for only $8k you can have a magical box that takes in someone's X-ray and tells you every single issue they have, 10% more accurately than a human doctor, as per the latest studies.
"Can somebody explain the actual use of an LLM that cannot search for up-to-date info online? I'm not being sarcastic, I actually want to know."

My personal use case right now is brainstorming a novel I'm writing. Online AIs tend to be too highly censored to be useful.
"Please provide a source for where DeepSeek running locally can achieve this…"

I don't think there is one, because DeepSeek has not demonstrated this capability, especially with respect to multimodality. If there is, I'd love to read it and welcome you to share it. Perhaps someone fine-tuned DeepSeek within two months to do this, and I'd really like to read that research if it exists, but I imagine this was a broad statement and not a specific one, which is important to understand.
"You expect Apple to increase RAM 64-fold in 11 years? I don't buy it."

How much has top-spec VRAM in a Mac increased in the past 11 years? 12GB in 2014 to 512GB today. That is over a 42x increase, and now VRAM is significantly more important.
448GB of VRAM, not virtual memory (V = Video)!
According to Lee's testing, the 671-billion-parameter AI model can be executed directly on Apple's high-end workstation, but it requires substantial memory resources, consuming 404GB of storage and requiring the manual allocation of 448GB of video RAM through Terminal commands.
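The article doesn't spell out what those Terminal commands are. In community reports, the usual approach on Apple Silicon is raising the GPU wired-memory ceiling via the iogpu.wired_limit_mb sysctl, so treat the knob and the numbers below as assumptions rather than Lee's exact steps. A minimal sketch:

```python
# Sketch (assumption): raise the Apple Silicon GPU wired-memory limit so ~448GB
# of unified memory can be used as VRAM. The sysctl name (iogpu.wired_limit_mb)
# comes from community reports, not from the quoted article.
import subprocess

target_gb = 448
target_mb = target_gb * 1024  # the sysctl expects megabytes: 458752

# Equivalent Terminal one-liner: sudo sysctl iogpu.wired_limit_mb=458752
subprocess.run(["sudo", "sysctl", f"iogpu.wired_limit_mb={target_mb}"], check=True)
```

A value set this way does not persist across reboots, so it would need to be reapplied each session.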
"How much has top-spec VRAM in a Mac increased in the past 11 years? 12GB in 2014 to 512GB today…"

I wouldn't be surprised if Apple sold accelerator cards for the next-gen Mac Pro featuring M3 Ultra chips with 512GB each… package them as server racks and you have something that could hurt Nvidia badly.
"My personal use case right now is brainstorming a novel I'm writing… I don't need to use massively large models for this (between 10B and 27B is fine), but I do want a decent-sized context window, and that takes additional RAM."

Thanks for the info.
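The extra RAM for a longer context window is mostly the KV cache, which grows linearly with token count. A back-of-the-envelope sketch; the layer and head dimensions below are assumptions for a generic model in the 10B-27B class, not any specific release:

```python
# Back-of-the-envelope KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
# * bytes_per_value * context_tokens. All model dimensions below are assumed
# values for a generic ~27B transformer, not a specific model's published specs.
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_value: int = 2) -> float:
    total = 2 * layers * kv_heads * head_dim * bytes_per_value * context_tokens
    return total / 1024**3

# e.g. 46 layers, 16 KV heads, head_dim 128, fp16 cache:
print(f"{kv_cache_gb(46, 16, 128, 32_768):.1f} GB at 32k tokens")    # ~11.5 GB
print(f"{kv_cache_gb(46, 16, 128, 131_072):.1f} GB at 128k tokens")  # ~46 GB
```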
"Thanks for the info. Next year I need to change my desktop, and I'd like to carefully consider the amount of RAM I need. I'm on 128GB now (M1 Ultra), but I'd consider getting 256."

My experience is that there are diminishing returns with Macs when it comes to LLMs.
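On the 128GB-versus-256GB question, a rough rule of thumb is that quantized weights need about params * bits / 8 bytes, plus runtime overhead. A sketch with an assumed ~10% overhead factor (actual overhead varies by runtime):

```python
# Rough fit check: memory for quantized model weights, with an assumed ~10%
# overhead for buffers and activations (the real figure varies by runtime).
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.10) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

for params, bits in [(27, 4), (70, 4), (671, 4)]:
    print(f"{params}B @ {bits}-bit ~ {model_ram_gb(params, bits):.0f} GB")
# 27B @ 4-bit ~ 14 GB, 70B @ 4-bit ~ 36 GB, 671B @ 4-bit ~ 344 GB
```

By that estimate, 128GB already covers the 10B-27B models mentioned above with room left for context, while the full 671B model only fits on the 512GB configuration.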
"Nope. Apple is way out ahead here."

At this point I'm sincerely curious to know what you were talking about.