I have an Nvidia 4090 workstation, and my M1 Max runs things the 4090 can only dream of. Nvidia needs to go beyond 24 GB of VRAM or lower the prices of GPUs with more than 40 GB. If Apple can offer 256 GB of unified memory, I can cut my cloud costs (already low with the M1 Max).
Apple also needs to let multiple Studios be clustered so GPU and memory can be pooled.
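To make the memory point concrete, here's a back-of-the-envelope sketch in Python (my own illustration; the model sizes and precisions are hypothetical examples, not anything Apple or Nvidia has published):

```python
# Rough check: do a model's weights fit in a given memory pool?
# Weights only; the KV cache and activations add more on top.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    # params_billion * 1e9 params * bytes/param / 1e9 bytes-per-GB
    return params_billion * bytes_per_param

POOLS = {"RTX 4090 (24 GB VRAM)": 24, "hoped-for 256 GB unified": 256}

for model, params_b in [("7B", 7), ("70B", 70)]:
    for precision, bpp in [("fp16", 2.0), ("int4", 0.5)]:
        need = weights_gb(params_b, bpp)
        fits = ", ".join(f"{name}: {'fits' if need <= cap else 'no'}"
                         for name, cap in POOLS.items())
        print(f"{model} @ {precision}: ~{need:.0f} GB -> {fits}")
```

A 70B model needs ~35 GB even at 4-bit, so it never fits on a 24 GB card, which is exactly where big unified memory (or the cloud) comes in.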
I was talking about their H200 and soon the B200 series, not gaming GPUs lol.
H200 VRAM: up to 141 GB HBM3e @ 6.5 Gbps
And what Apple has is called unified RAM, not VRAM specifically; it has to be shared with everything else, and it maxes out at 400 GB/s of bandwidth. I'm pretty sure that's even slower than the RTX 3090's VRAM (as expected, of course).
Apple is currently busy selling a last-generation $1,800 laptop with 8 GB of RAM, which is less than the majority of Android phones have. I don't think those glorious days will come anytime soon.
Sure, those GPUs are much more expensive, but the M2 Ultra is around 31.6 TOPS, while the H200 is around 3,900 TOPS and the upcoming B200 around 20,000 TOPS.
Comparing the M2 Ultra to the H200 in AI is more like comparing a GT 710 to an RTX 4090 in gaming.
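For what it's worth, the bandwidth side of this can be sanity-checked. Single-stream LLM decoding is roughly memory-bandwidth-bound, so tokens/s is capped near bandwidth divided by the bytes touched per token. A sketch (the bandwidth figures are public spec-sheet numbers; the 14 GB model size is a hypothetical example):

```python
# Upper-bound estimate: tokens/s ~= memory bandwidth / bytes read per token.
# Ignores compute, caching, and overlap, so real numbers land lower.

def tokens_per_sec_ceiling(bw_gb_s: float, model_gb: float) -> float:
    return bw_gb_s / model_gb

MODEL_GB = 14  # e.g. a 7B model at fp16 (illustrative)

for device, bw in [("M1 Max", 400), ("M2 Ultra", 800),
                   ("RTX 3090", 936), ("H200", 4800)]:
    ceiling = tokens_per_sec_ceiling(bw, MODEL_GB)
    print(f"{device} ({bw} GB/s): ~{ceiling:.0f} tok/s ceiling")
```

By this measure, 400 GB/s does trail the 3090's VRAM, though capacity is the other half of the story.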
 
All Intel needed to do was make a more energy- and heat-efficient chip for Apple. Then Apple would be releasing the new iPad Pro with Intel inside and building server farms out of Intel chips.
 
Apple be writing Linux drivers. Not for public consumption, naturally!
 
Anything to save the stock price. This is a textbook case of using leverage: AI is the hot thing, so Apple is trying to announce as many AI-related things as it can, whether or not they actually amount to anything. When EVs were hot, they kept dropping news about the Apple Car; the moment EVs fell out of favor, they announced they were canning the effort. For all we know, this might just be an experimental project that goes nowhere in terms of production at scale.
 
I wonder if Apple will lock all the really interesting features behind a paywall (like Apple One or an iCloud+ subscription)? 🤔
 
In my experience talking about it with people, the only people who support and want AI are the ones who tend not to like thinking. Those of us who like to use our brains are fine without AI. Just my experience. YMMV.
I think a lot, and I use AI to complete tasks I'm unable to do myself (nobody can do everything, no matter how much they like thinking…). It's been a game changer for me.
 
In my experience talking about it with people, the only people who support and want AI are the ones who tend not to like thinking. Those of us who like to use our brains are fine without AI. Just my experience. YMMV.
My experience is the opposite. I am a scientist and professor (I like to use my brain), and I use ChatGPT almost every day to help speed up my research and work. In fact, every person I know who uses ChatGPT or something similar regularly is someone who thinks and does a lot -- researchers, artists, professors, engineers, etc. I'm sure my sample is biased, but LLMs have been one of the best tools ever developed for much of what I do. They help me get things done much more efficiently.
 
That doesn't negate what I said.

If you can't do something, and you have AI do it for you, you did not do it. Period. You did not accomplish anything. The computer/AI did. You simply took credit for it.
You can use it to assist you, but if you don't know what you're doing and the AI outputs garbage, you will think the garbage is good and consume it. In other words, even though AI may be capable of doing things you cannot, unless you know what the expected output should be, you may end up doing things completely wrong, which is worse than not doing them at all.
 
Will it be able to edit a mistyped number in the dial pad? Or will it need an M10 Ultra and 20 years of development to improve basic phone functions, such as quick contacts (aka Force Touch)? Ah, those were the good times of iOS.
 
You can use it to assist you, but if you don't know what you're doing and the AI outputs garbage, you will think the garbage is good and consume it. In other words, even though AI may be capable of doing things you cannot, unless you know what the expected output should be, you may end up doing things completely wrong, which is worse than not doing them at all.
And that's where the human thinking comes in.
 
My experience is the opposite. I am a scientist and professor (I like to use my brain), and I use ChatGPT almost every day to help speed up my research and work. In fact, every person I know who uses ChatGPT or something similar regularly is someone who thinks and does a lot -- researchers, artists, professors, engineers, etc. I'm sure my sample is biased, but LLMs have been one of the best tools ever developed for much of what I do. They help me get things done much more efficiently.
I hope you're double-checking the output or your research may contain hallucinations.

For fun, look up SCIgen, an automatic CS paper generator from 2005!
Apparently one of the papers it generated (complete garbage) was accepted for a conference.
 
That doesn't negate what I said.

If you can't do something, and you have AI do it for you, you did not do it. Period. You did not accomplish anything. The computer/AI did. You simply took credit for it.
That's like saying that because I used a calculator, I didn't actually solve a multi-part problem of which that arithmetic was only one small part.

I can't afford to hire an assistant editor, but I can use generative AI to point out flaws in my writing, like repetitive sentences, poor transitions, etc. Then I go back and rewrite those sections. I did all the thinking. All the AI did was offer a fresh pair of virtual eyes. It was up to me to agree/disagree with the AI and think about how to correct the flaws myself.
 
In my experience talking about it with people, the only people who support and want AI are the ones who tend not to like thinking. Those of us who like to use our brains are fine without AI. Just my experience. YMMV.
In my life, it's only useful for planning travel itineraries (and I could do without it).

If it were plugged into and actually understood the scientific literature, though, that would be neat. At some point, it might be able to form new hypotheses based on the current literature.

Consumer AI, though, is a big meh for me. It's mostly overhyped by companies trying to sell people a solution to problems that don't exist. Publicly traded companies always gotta come up with the next thing, regardless of what consumers need.
 
I was talking about their H200 and soon the B200 series, not gaming GPUs lol.
H200 VRAM: up to 141 GB HBM3e @ 6.5 Gbps
And what Apple has is called unified RAM, not VRAM specifically; it has to be shared with everything else, and it maxes out at 400 GB/s of bandwidth. I'm pretty sure that's even slower than the RTX 3090's VRAM (as expected, of course).
Apple is currently busy selling a last-generation $1,800 laptop with 8 GB of RAM, which is less than the majority of Android phones have. I don't think those glorious days will come anytime soon.
Sure, those GPUs are much more expensive, but the M2 Ultra is around 31.6 TOPS, while the H200 is around 3,900 TOPS and the upcoming B200 around 20,000 TOPS.
Comparing the M2 Ultra to the H200 in AI is more like comparing a GT 710 to an RTX 4090 in gaming.
The 3090 is useless next to an M2 Ultra with 192 GB of unified memory; a 3090/4090 can work well with toy models. Where did you get 400 GB/s of bandwidth for an Ultra? It's 800 GB/s on the Ultra.
And without knowing Apple's architecture, these numbers don't mean much. Is Apple running all inference in the cloud, or taking a hybrid approach where the device runs most inference and uses the cloud as needed? I didn't compare it to the H200; not sure how you got that impression, lol.
What's unknown is whether Apple will use the Ultra for training or inference; I know Apple used to use Nvidia H100/A100 for training.
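Combining both constraints from this exchange, here's a quick sketch of the capacity-vs-bandwidth tradeoff (my own illustration using the corrected 800 GB/s figure and spec-sheet numbers for the GeForce cards; the 140 GB model is a hypothetical example):

```python
# A device only benefits from its bandwidth if the model fits at all.

def local_feasibility(model_gb: float, mem_gb: float, bw_gb_s: float) -> str:
    if model_gb > mem_gb:
        return "doesn't fit -- offload or go to the cloud"
    return f"fits, ~{bw_gb_s / model_gb:.0f} tok/s bandwidth ceiling"

MODEL_GB = 140  # e.g. a 70B model at fp16 (illustrative)

for device, mem, bw in [("RTX 3090 (24 GB, 936 GB/s)", 24, 936),
                        ("RTX 4090 (24 GB, ~1008 GB/s)", 24, 1008),
                        ("M2 Ultra (192 GB, 800 GB/s)", 192, 800)]:
    print(f"{device}: {local_feasibility(MODEL_GB, mem, bw)}")
```

Which is the "toy models" point above: a slower memory bus the model actually fits in beats a faster one it doesn't.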
 