> Another misstep by current Apple leadership. Hopefully, it doesn't come back to bite them.

Absolutely. It was obvious at the time that it was a three-pronged loss. Just like not spending the money on Nvidia AI chips for Apple Intelligence work when the team was begging for them, and announcing it before it was ready and making fools of themselves.
> Hey! That's my name!

Apple is now actively using MacRumors to seek out new codenames. lmao.
I would try to use that as an excuse to upgrade, but I just got an M4 Max so I think I'm good until at least M8
> You seem to misunderstand RAID.

Apparently, so does Apple. Still plenty of bugs in their systems.
> Guess that is the reason Apple Intelligence sucks balls - chips not good enough!

I have yet to get Genmoji to actually give me either a) anything, or b) something that represents what I asked for.
> I mean a big chunk of the original Apple Silicon team left Apple to develop it on their own. They lost the same amount of people either way. And the Nuvia team was posting about their server chips leapfrogging Apple in performance and power efficiency before being acquired by Qualcomm.

My point was more that it's easy to say this in hindsight, but at the time there was no way to know what the future held, and AI was not that widely popular then. And their CPUs didn't beat Apple's CPUs; their claims were based on testing at full power and then measuring battery longevity at lower power. But their chips were excellent and really made Intel and AMD increase their efforts.
It was a huge three-pronged, short-sighted management failure:
1. They lost a big chunk of top Apple Silicon engineers.
2. They gave the competition, Windows and Qualcomm, a nice post-Intel, post-x86 gift.
3. Saddest and worst of all, they needed a server and AI chip development team anyway.
Their short-sightedness left them behind in AI and in the server space, which, unlike Apple, everyone else could clearly see coming.
> ...but Apple don't make servers, so why would they want to build server chips?

The team that went on to make Arm chips for Qualcomm was eager to build server chips for Apple a decade ago, before being told no and leaving Apple to develop them on their own.
> Why not bring back Mac OS Server while they're at it?

Servers, so:
hot-swap storage?
at least RAID 1 for main / boot storage? (see the sketch after this list)
IPMI, or IPMI that can do DFU mode from the web?
more than 128GB max RAM?
more than one NIC?
PCIe slots for networking / storage devices?
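For what it's worth, macOS still ships software RAID through diskutil's AppleRAID, so the mirroring piece is scriptable today; the catch is that modern Macs can't boot from an AppleRAID set, which is exactly the "main / boot storage" gap. A minimal sketch, assuming two spare whole disks (the disk identifiers and set name below are placeholders, not anything Apple-blessed):

```python
import subprocess

# Drive macOS software RAID (AppleRAID) from Python via diskutil.
# "disk2"/"disk3" and the set name are placeholders -- run `diskutil list`
# first and substitute your own. Creating a set ERASES the member disks.
# Caveat: modern macOS cannot boot from an AppleRAID set, so this covers
# data volumes only, not the boot-volume mirroring asked for above.

def create_mirror(set_name: str, members: list[str]) -> None:
    """Create a RAID 1 (mirror) set over the given whole-disk identifiers."""
    subprocess.run(
        ["diskutil", "appleRAID", "create", "mirror", set_name, "JHFS+", *members],
        check=True,
    )

def raid_status() -> str:
    """Return the AppleRAID set listing; degraded members show up here."""
    result = subprocess.run(
        ["diskutil", "appleRAID", "list"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    create_mirror("DataMirror", ["disk2", "disk3"])  # destructive!
    print(raid_status())
```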
> Apple should already have a server room filled with thousands of Mac Minis running M4s crunching AI. Shame on them if they don't.

My understanding is that since Apple dropped Mac servers several decades ago, they rely exclusively on x86 servers running Linux and programmed in C++. Apple definitely does not eat their own dog food. And that also explains why there are so many bugs in Apple's non-C++ Arm software.
SHAME!!
> Do we know anything new about M5 chips? I assume they've been working on M6 and M7, but what do we know about the chips that are already being produced? No new leaks about the base M5?

Not that I am aware of. I don't think there has even been a codename leak for M5. The big M5 rumor that seems solid, now from at least two different industry sources, is that the M5 Pro/Max will use SoIC instead of SoC.
> Not that I am aware of. I don't think there has even been a codename leak for M5. The big M5 rumor that seems solid, now from at least two different industry sources, is that the M5 Pro/Max will use SoIC instead of SoC.

Yeah, I am aware of such rumors regarding the M5 Pro/Max. But apparently there are no leaks about the base M5 SoC. We don't know if they will improve the Neural Engine by adding more cores, or improve the GPU with a new architecture or more cores. Or whether it will be an overclocked M4 (likely) with bigger caches (less likely) and faster RAM (even less likely).
The Mac Pro is primarily a single-user workstation. It pragmatically needs to be able to drive multiple displays so that the single user can see the GUI. There is a very good chance that Apple's "AI server chip" has no display function units. First, Apple's display processing units are abnormally big. The die-space trade-off will likely lean toward the AI compute the chip needs to do rather than display tasks it is absolutely never going to do in a data center. Basically the same goes for the Thunderbolt subsystem: more likely traded away for the rumored Broadcom internode communication system.
A Mac Pro with no display processing and no Thunderbolt wouldn't meet the modern criteria for being a "Mac" anymore.
> Errr.. Nvidia?

Then where is the Mac Pro's chip? Apple can't even make server and supercomputer grade chips, unlike Nvidia. "Can't" and "can" is a huge difference.
Apple's solution is much more rigidly "homogeneous unified memory," but others are on the MCM and/or multiple-die path also. There are no DIMM slots for mega-modular memory there.
A Quick Introduction to the NVIDIA GH200 aka Grace Hopper: www.servethehome.com
> Nvidia SuperChips are a failure? Really???? The longer-term trendline toward far more power-efficient AI compute is heading directly at chiplets and/or MCM.

I said Apple Silicon, NOT Nvidia.
> Apple's possible resources are not limited to that. 'AI' is not equated with solely the largest-possible language models. Apple has some self-imposed constraints that go beyond available GPUs: privacy, not steamrolling copyrights, not hoovering up as much customer prompt data as possible, etc. Apple has been more focused on inference on modest resources than on powering the most monstrous MW-consuming machine possible.
>
> Back to the Mac Pro ... it has been capped at normal USA wall-socket power levels. Not 'brown out the neighborhood' power levels.
>
> Apple's AI server chip is far more likely to be the most power-efficient (Perf/W) of its immediate rivals than the "most powerful" option.

Doesn't change the fact that Apple Silicon sucks for GPU and AI resources. That's also why they are struggling with their own AI server, which amounts to only 50,000 units built from a GPU that is more than 5 years old. They can't even make a powerful desktop chip, and they're heavily limited by the SoC approach, since it forces them to make one huge chip.
> If you truly believe that Apple Silicon is the best, think again. There are reasons why Nvidia is dominating the entire AI market.

Yes there are, though your hating on Apple has clouded your summary.
> Yes there are, though your hating on Apple has clouded your summary.

Being logical is hating? Wow. If you think that way, tell me whether there is any Apple Silicon based supercomputer training and studying AI for their own company before you say anything else.
Nvidia succeeded because:
1) Intel has been flopping around, for several years, like a dying fish on a wharf;
2) Export bans have kept cutting-edge photolithography out of the hands of the PRC, therefore Chinese startups in their IC industry are hobbled;
3) the major ARM licensees, such as Apple and Samsung, were not in that business.
> How do they plan to be successful without CUDA support? Except for LLMs, all the cutting-edge AI imaging and video is developed in CUDA.

Apple has Metal of their own, but still, it's such a tiny market, especially since Apple lacks their own software. But once Apple can make a Mac Pro grade Apple Silicon chip, then they can at least try.
> As long as this "AI server chip" is 'behind the curtain' hardware solely for Apple Intelligence, it really isn't going to make much of a difference until Apple Intelligence as an overall system is competitive. Software and hardware.
>
> There are zero indications that Apple is trying to go into the general-purpose web services business (Microsoft Azure, Amazon AWS, Google Cloud, etc.). Pretty good chance this "AI server" chip never shows up in a product that is for sale at retail.
>
> Not really a 'decade behind' AWS, Azure, or G-Cloud if simply pulling workloads in house and/or creating new AI inference workloads that didn't exist before.

I get beefing up iCloud in order to better share and delineate personal AI/compute from the cloud. They've struggled with that. Even that's been on the roadmap, so where's the news here? Besides, there's been a sense of doing this out of order for a few years: build the reliable stack, then scale. Apple hasn't built the first part yet, so this is all putting the cart before the horse. It's not like Apple hasn't made bad bets before (trash can Mac Pro).
> Apple has Metal of their own, but still, it's such a tiny market, especially since Apple lacks their own software. But once Apple can make a Mac Pro grade Apple Silicon chip, then they can at least try.

The problem is that AI models are basically implemented by researchers, and researchers chose CUDA a long time ago, and they won't change (an unwise choice if you ask me, because it's vendor-specific, but most researchers are not smart when it comes to real implementation).
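To make the lock-in concrete: frameworks like PyTorch can already route the same tensor ops to Metal through the "mps" backend, so the barrier is less Metal itself than all the research code with "cuda" hard-wired in. A rough sketch of backend-agnostic device selection (the toy model and sizes are purely illustrative):

```python
import torch

# Pick whichever accelerator backend is present; hard-coding "cuda" is
# what ties research code to Nvidia.
def pick_device() -> torch.device:
    if torch.cuda.is_available():              # Nvidia GPUs via CUDA
        return torch.device("cuda")
    if torch.backends.mps.is_available():      # Apple Silicon GPUs via Metal
        return torch.device("mps")
    return torch.device("cpu")                 # portable fallback

device = pick_device()
model = torch.nn.Linear(512, 512).to(device)   # toy stand-in for a real model
x = torch.randn(8, 512, device=device)
print(model(x).shape, "on", device)
```

The same lines run unmodified on a CUDA box or an M-series Mac; where the lock-in actually bites is custom CUDA kernels, which have no automatic Metal equivalent.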