Strange, you can order them in China and Hong Kong.

I wonder if this is a Vietnam thing since U.S. market Macs are assembled there.

[Attachments: screenshots of Apple Store order pages for China and Hong Kong]
 
Assuming the sentiment is a genuine one, and even assuming we both agree on what you mean by “governments” and “ai companies”—can you explain in some detail how you think this could actually be done in practice?
Companies that manufacture these components must first satisfy demand for consumer products before supplying companies looking to build massive server farms to run AI.
 
Companies that manufacture these components must first satisfy demand for consumer products before supplying companies looking to build massive server farms to run AI.
Ok. I’m interested to hear more.

How does a bulk memory or GPU reseller in Hong Kong or hell, even Nvidia determine who is and who isn’t building an AI datacenter with their order?

Are they supposed to tell their company stakeholders that they rejected maximizing their profit because the buyer admitted to building an AI with it?

And what if the buyers all lie?
 
Yes, I know that is what happens, and the massive increase in demand is why this is a problem. It is the government's job to know when enough is enough, though. Should we let companies buy all the water too because there is high demand for it?
That is also a problem with Hershey's, Walmart, and other conglomerates buying up water for their bottled-water businesses.
 
Ok. I’m interested to hear more.

How does a bulk memory or GPU reseller in Hong Kong or hell, even Nvidia determine who is and who isn’t building an AI datacenter with their order?

Are they supposed to tell their company stakeholders that they rejected maximizing their profit because the buyer admitted to building an AI with it?

And what if the buyers all lie?
I never once suggested companies self-regulate, which is why I said the government should step in. And if you're suggesting that countries like the US don't know what is happening on massive plots of land, then I don't know what to say to that.
 
I never once suggested companies self-regulate, which is why I said the government should step in. And if you're suggesting that countries like the US don't know what is happening on massive plots of land, then I don't know what to say to that.
Step in how?

No, they don’t know what’s happening on every plot of land or warehouse. No government does!

I’m literally responding to what you literally wrote about “passing a law” requiring them to “sell to consumers first.”

I think a demand-side approach to increase supply would be better.

Pass laws to license data centers, regulate their imports, set up routine inspections, and limit the amount of equipment that they can buy, maybe requiring they prove that they have enough electricity to power the hardware.

No electricity, no bulk order of GPUs. Demand goes down.
 
The M5 Pro can't come soon enough. I want to see how well gemma4:31b will run on it.
Gemma 4 31B will use more than 64 GB of memory, FWIW. I've been doing a lot of testing lately, and depending on what you're doing it can hit 80–90 GB wired.

Some user-quantized versions may use less, but I have to use what ships from the creators for what I'm doing.
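For a rough sanity check on those numbers, you can estimate a model's footprint from its parameter count and bytes per parameter. This is a back-of-the-envelope sketch, not vendor figures; the overhead allowance for KV cache and runtime buffers is an assumption.

```python
# Rough memory estimate for running an LLM locally.
# All constants here are illustrative assumptions, not official specs.

def model_memory_gb(params_billions: float,
                    bytes_per_param: float,
                    overhead_gb: float = 10.0) -> float:
    """Weights plus a flat allowance for KV cache / runtime overhead."""
    weights_gb = params_billions * bytes_per_param  # 1e9 params * B/param = GB
    return weights_gb + overhead_gb

# A 31B-parameter model at bf16 (2 bytes/param) vs a 4-bit quant (~0.5 bytes/param)
full = model_memory_gb(31, 2.0)   # 31*2 + 10 = 72 GB
q4 = model_memory_gb(31, 0.5)     # 31*0.5 + 10 = 25.5 GB
print(f"bf16: ~{full:.0f} GB, 4-bit quant: ~{q4:.0f} GB")
```

That lands in the same ballpark as the 80–90 GB wired figure once you add real-world context length and runtime overhead, and it shows why a heavy quant could fit in 32–64 GB.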
 
Most are not running AIs locally. They're running agent harnesses that connect with cloud AIs.
True, the advantages a lot of them see with a Mac mini are that it's tiny, it runs headless, and it sips less power than a light bulb.
Honestly, the ones with more RAM make a lot of sense for always-on, privacy-centric local AI. The buy-in price and the electricity use are tiny in comparison to a Windows machine. I would think a 32 or especially a 64 GB Mac mini would allow you to easily run a decent reasoning agent (Qwen 3.5 or Gemma 4 or similar), with some headroom for other models if needed, and only dip into APIs rarely if ever. LLMs aren't really my jam, but I would think there are a variety of Mac mini configs that would work well for agents, local or over API.
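For the always-on local-agent setup described above, a common pattern is a headless Mac mini running an Ollama server that other machines query over HTTP. A minimal sketch, using only the standard library; the model tag is an assumption, so substitute whatever you have pulled locally.

```python
# Minimal sketch: querying a local Ollama server (default port 11434)
# from an always-on agent. Assumes `ollama serve` is running and the
# model tag below has been pulled; both are placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Non-streaming request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(prompt: str, model: str = "gemma3:27b") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
# print(ask_local("Summarize this meeting note: ..."))
```

An agent harness can call `ask_local` first and fall back to a cloud API only when the local model can't handle the task, which is the "dip into APIs rarely" pattern.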
 
While you were reporting that the Mac mini delays are only due to RAM shortage, several people pointed out that it is due to an imminent refresh. Now the lower RAM configs are also out of stock, indicating that it is indeed due to the refresh.
Except in this case the "out-of-stock" status wasn't applied to all the Minis at once, but instead hit the higher-RAM configs first.

If this were due to an imminent refresh, wouldn't Apple have applied that to all the Minis at once?

Or, when Apple has done a model refresh in the past, have they staggered the out-of-stock status among different configs within that model?

If not, that would suggest this is due to a shortage rather than a refresh decision.

Though that wouldn't preclude a refresh also coming, if Apple's sourcing for the LPDDR5X-9600 used in the M5 is better than its sourcing for the LPDDR5X-7500 used in the base M4 and the LPDDR5X-8333 used in the M4 Pro and Max.
 
I find it fascinating that this Mac mini is so popular for local AI models. Without even knowing this was a trend, I bought mine explicitly for local AI in the middle of last year. Nothing crazy, just to speed up my speech-to-text, do some advanced text processing, and answer the occasional question.
 