> It should be M4 Ultra based sometime 2025. When is Nvidia shipping theirs?

Nvidia's will probably cost $100K.
They don't even have to. Just a custom compilation of macOS with the Hypervisor framework and they can run any arm64 Linux distro. I wonder if they made an AS Linux for internal use.
> I don't see how any chip could "inherently protect user privacy" -- that is just nonsense.

A chip with no design flaws could protect user privacy.
> They don't even have to. Just a custom compilation of macOS with the Hypervisor framework and they can run any arm64 Linux distro.

The goal would be to not have a non-server OS running on a server.
> Seems solid to scale.

Are there more specifics to this data? Just wondering, because I would imagine that not all "TOPS" are equal. For example, a Jetson Orin Nano by Nvidia is capable of 40 TOPS.
- A14 Bionic (iPad 10): 11 Trillion operations per second (OPS)
- A15 Bionic (iPhone SE/13/14/14 Plus, iPad mini 6): 15.8 Trillion OPS
- M2, M2 Pro, M2 Max (iPad Air, Vision Pro, MacBook Air, Mac mini, Mac Studio): 15.8 Trillion OPS
- A16 Bionic (iPhone 15/15 Plus): 17 Trillion OPS
- M3, M3 Pro, M3 Max (iMac, MacBook Air, MacBook Pro): 18 Trillion OPS
- M2 Ultra (Mac Studio, Mac Pro): 31.6 Trillion OPS
- A17 Pro (iPhone 15 Pro/Pro Max): 35 Trillion OPS
- M4 (iPad Pro 2024): 38 Trillion OPS
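The listed throughputs can be compared directly. A quick sketch (using only the figures quoted above, in trillions of operations per second) shows that the M2 Ultra's NPU actually trails the phone-class A17 Pro, a point raised later in the thread:

```python
# Apple NPU throughput figures from the list above (trillions of ops/sec)
npu_tops = {
    "A14 Bionic": 11.0,
    "A15 Bionic": 15.8,
    "M2 family": 15.8,
    "A16 Bionic": 17.0,
    "M3 family": 18.0,
    "M2 Ultra": 31.6,
    "A17 Pro": 35.0,
    "M4": 38.0,
}

# Rank the chips from fastest to slowest NPU
ranked = sorted(npu_tops, key=npu_tops.get, reverse=True)
print(ranked[:3])  # ['M4', 'A17 Pro', 'M2 Ultra']

# The desktop M2 Ultra's NPU is rated below the iPhone's A17 Pro
print(npu_tops["M2 Ultra"] < npu_tops["A17 Pro"])  # True
```

Of course, as the comment notes, raw TOPS numbers across vendors (and even across Apple generations) aren't strictly comparable, since precision and benchmark conditions differ.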
> A chip with no design flaws could protect user privacy

lol
> The goal would be to not have a non-server OS running on a server.

There's no such thing as a "server OS" -- it's just Windows or Linux without end-user components like a desktop, apps, or media stuff. And that's exactly what I meant with "custom compilation of macOS".
As long as Apple is designing its own servers, they should release a new Xserve and sell it to the masses.
Apple plans to power some of its upcoming iOS 18 features with data centers that use servers equipped with Apple silicon chips, reports Bloomberg’s Mark Gurman.
M2 Ultra chips will be behind some of the most advanced AI tasks that are planned for this year and beyond. Apple reportedly accelerated its server building plans in response to the popularity of ChatGPT and other AI products, and future servers could use next-generation M4 chips.
While some tasks will be done on-device, more intensive tasks like generating images and summarizing articles will require cloud connectivity. A more powerful version of Siri would also need cloud-based servers. Privacy has long been a concern of Apple’s, but the team working on servers says that Apple chips will inherently protect user privacy.
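How Apple will decide which requests leave the device is not public. A minimal sketch of the split the article describes (the task names, the set of "cloud" tasks, and the function itself are all hypothetical, purely to illustrate the on-device/cloud routing idea):

```python
# Hypothetical on-device vs. cloud routing, following the split described
# in the article. Task names and the CLOUD_TASKS set are assumptions,
# not any real Apple API.

CLOUD_TASKS = {"image_generation", "article_summarization", "advanced_siri"}

def route(task: str) -> str:
    """Return where a task would run under the article's described split."""
    return "cloud" if task in CLOUD_TASKS else "on_device"

print(route("image_generation"))  # cloud
print(route("autocorrect"))       # on_device
```

The interesting design question is where that boundary sits in practice, since every task moved to the cloud trades latency and privacy exposure for model capacity.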
Gurman previously claimed that all of the coming iOS 18 features would run on-device, but it sounds like some capabilities coming this year will in fact use cloud servers. Apple plans to use its own servers for now, but in the future, it may also rely on servers from other companies.
Article Link: Apple to Power AI Features With M2 Ultra Servers
> Do you have a link that compares them for AI workloads?

I don't have a link, but I use an M1 Max and an Nvidia 4090. It depends on the usage and data size.
> They should make their own servers again! The Intel Xserves couldn't compete with any other x86/x86-64 server. But back when they were on PowerPC, there was at least a novelty. I think the one thing that would make an Apple Silicon Xserve difficult to be viable is the fact that it can only natively boot macOS. But toss on the open source equivalents to the utilities that set Mac OS X Server apart from its client counterparts, and you have a pretty decent alternative to the Windows/Linux duopoly in that space.

I totally agree. Apple has a fully certified UNIX system in macOS; the plumbing is enterprise-grade. Even if they created a downloadable suite of server tools that could be installed on the consumer version of macOS, along with a proper rack-mountable server hardware product with a RAID array, they could steal the show.
> Why pay Nvidia's 80% margin if you can use your own hardware?

That's a good point, but those chips are never going to be anywhere close to even the H100, let alone the H200 and B200, simply because they were not made specifically for AI. Remember that those so-called Ultra chips have a slower NPU than the iPhone 15 Pro Max.
You are also missing the obvious: the M2 Ultra is a niche product, and bigger chips (the so-called M1/M2 Extreme) were cancelled because of that.

Now Apple has an incentive to build those: not only can they sell them in the Mac Pro/Studio, they can run their own servers on them with massive cost savings compared to what AMD and Nvidia offer.

Big chips were given the green light, if this rumor is true.
> I'm not an average consumer, but I am really not seeing what the big deal is about AI.

It's the next 3D TV, AR/VR, or foldable phone. A dumb trend that no one will care about in a few years.
> And also need the power of the Hoover Dam to complete the workload. No thanks! Apple is way more efficient than Nvidia.

Man, I don't know; each generation of Nvidia GPU gets more and more power efficient. My latest GPU upgrade (a 4070) didn't require a PSU upgrade, but upgrading to a 3080 the previous year would have, for the same performance.
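The commenter's efficiency point can be made rough-and-ready concrete: Nvidia rates the RTX 4070 at about 200 W of board power versus about 320 W for the RTX 3080, and the commenter reports comparable performance from both. A back-of-envelope sketch (the equal-performance assumption is the commenter's claim, not a benchmark):

```python
# Rough perf-per-watt comparison across one Nvidia GPU generation.
# Board-power ratings are Nvidia's published TGP figures; the
# "same performance" assumption comes from the comment above.
rtx_3080_watts = 320
rtx_4070_watts = 200

relative_performance = 1.0  # commenter's equal-performance assumption

efficiency_gain = (relative_performance / rtx_4070_watts) / (
    relative_performance / rtx_3080_watts
)
print(f"~{efficiency_gain:.1f}x perf/watt gen over gen")  # ~1.6x
```

That generational ~1.6x perf/watt gain doesn't settle the Apple-vs-Nvidia efficiency question, but it does show Nvidia's efficiency is a moving target.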
> Apple plans to use its own servers for now, but in the future, it may also rely on servers from other companies.
> Maybe just use Nvidia GPUs that are like 50 times faster and are actually built for this type of workload?

True for onesies. Apple isn't building these in onesies.
I'm not an average consumer, but I am really not seeing what the big deal is about AI.
> I don't see how any chip could "inherently protect user privacy" -- that is just nonsense. Privacy is primarily a function of the software running on the chip. While there may be features of a chip that could help protect privacy in a multi-tenant environment (like a cloud server), that would be at a very low level, such as protecting memory from being read across processes or threads.

The idea is that your prompts would be going to Apple servers instead of OpenAI, Google, or M$. If you trust Apple to keep your PII private, then using their servers is inherently more private.