
Can we expect 4x better prompt processing?

There is some additional interesting tech likely included with the new GPU MXU units, so who knows, maybe the performance uplift will be even better.

Prompter prompts you say?

What areas, outside of LLMs and AI will this benefit?

The matrix stuff is pretty much limited to AI, but it’s becoming increasingly important in graphics too. Nvidia has been pushing neural shaders quite a lot, and there are nice things like procedural materials etc. that can be done.
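To make that concrete, here's a rough toy sketch (names and sizes are made up, and a real neural shader would run this inside a Metal shader rather than in Swift on the CPU) of the kind of per-pixel matrix-vector work a "neural material" does, which is exactly what dedicated matrix units are built to speed up:

```swift
import simd

// Toy "neural material" layer: one small fully-connected layer per evaluation.
// A real neural shader stacks a few of these per pixel inside a Metal shader;
// this CPU-side Swift version is only meant to show the shape of the math.
struct TinyLayer {
    var weights: simd_float4x4   // 4 inputs -> 4 outputs
    var bias: SIMD4<Float>

    func callAsFunction(_ x: SIMD4<Float>) -> SIMD4<Float> {
        // relu(W * x + b): a matrix-vector multiply, an add, and a clamp.
        simd_max(weights * x + bias, SIMD4<Float>(repeating: 0))
    }
}

// Example: feed a UV coordinate plus two extra features through two layers,
// the way a small procedural-material network might produce a colour.
let layer1 = TinyLayer(weights: matrix_identity_float4x4, bias: SIMD4<Float>(repeating: 0.1))
let layer2 = TinyLayer(weights: matrix_identity_float4x4, bias: SIMD4<Float>(repeating: 0))
let color = layer2(layer1(SIMD4<Float>(0.25, 0.75, 1.0, 0.0)))
```

Every shaded pixel pays for a handful of those matrix multiplies, which is why per-core matrix hardware matters for graphics and not just LLMs.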

They mentioned 2x FP16 also.

That could help some gaming shaders if they are optimized appropriately. For example, FP16 is sufficient for a lot of lighting-related operations.
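Something like this, just as a rough illustration (CPU-side Swift rather than actual shader code, and the numbers are invented): a diffuse lighting term fits comfortably in half precision, so a GPU that runs FP16 at twice the FP32 rate can push this kind of math through twice as fast per core.

```swift
import simd

// Lambertian diffuse term computed entirely in half precision (Float16).
// The result ends up in an 8-bit-per-channel framebuffer anyway, so FP16's
// ~3 decimal digits of precision are plenty for this kind of lighting math.
// (Float16 is available in Swift on Apple silicon.)
func lambertDiffuse(normal: SIMD3<Float16>,
                    lightDir: SIMD3<Float16>,
                    lightColor: SIMD3<Float16>) -> SIMD3<Float16> {
    // N · L, clamped to zero so back-facing surfaces receive no light.
    let nDotL = max(normal.x * lightDir.x + normal.y * lightDir.y + normal.z * lightDir.z,
                    Float16(0))
    return lightColor * nDotL
}
```

Position math and some specular terms still usually want FP32, which is why shaders have to be written with precision in mind before the 2x shows up in practice.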

For gaming, it'd be MetalFX. Nvidia's DLSS is done on the Tensor core. MetalFX can use these new cores.

I’d think that ANE is the better fit for MetalFX. It’s more energy efficient and frees up the GPU to do other work.
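For what it's worth, from the developer's side the choice of hardware is invisible anyway. Here's a minimal sketch of the MetalFX setup (the spatial variant, with made-up resolutions); the framework decides internally whether the work lands on the GPU shader cores, the new matrix units, or the ANE:

```swift
import Metal
import MetalFX

// Minimal MetalFX spatial-upscaler setup: a 720p render target upscaled to 1440p.
// Nothing in this API exposes which silicon block actually does the work.
guard let device = MTLCreateSystemDefaultDevice(),
      MTLFXSpatialScalerDescriptor.supportsDevice(device) else {
    fatalError("MetalFX spatial scaling is not supported on this device")
}

let desc = MTLFXSpatialScalerDescriptor()
desc.inputWidth = 1280
desc.inputHeight = 720
desc.outputWidth = 2560
desc.outputHeight = 1440
desc.colorTextureFormat = .rgba16Float
desc.outputTextureFormat = .rgba16Float
desc.colorProcessingMode = .perceptual

guard let scaler = desc.makeSpatialScaler(device: device) else {
    fatalError("Failed to create the MetalFX spatial scaler")
}

// Per frame: point the scaler at this frame's textures and encode the upscale
// into the same command buffer as the rest of the frame.
// scaler.colorTexture  = lowResRenderTarget
// scaler.outputTexture = fullResOutput
// scaler.encode(commandBuffer: commandBuffer)
```

The temporal scaler (the closer analogue to DLSS) adds depth and motion-vector inputs, but the pattern is the same, so whether Apple routes it to the ANE or the new GPU accelerators is their call, not the game's.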
 
I agree. For some strange reason, people are expecting LLMs to never make mistakes. Instead, they should always check the LLM's work if it's important. If a task is not that important, then don't check. Learn how to use LLMs instead of thinking it's not "AGI" so it's not worth using.
You started out more or less promoting LLMs as a magic button you press to get production worthy code, yet now that a few people have posted about their experiences with them making mistakes your tune has changed. Interesting.

If a LLM requires hypervigilance to use, why should I use it? Handing off work to something that's likely to screw up, in unpredictable ways, with greatly varying levels of subtlety from one attempt to the next? That's not my ideal productivity enhancer.

People have compared LLMs to junior devs, but at least a bright junior dev can be expected to (a) understand and reason about the problem statement to some degree and (b) learn from their mistakes, neither of which are properties of LLMs. There's also a long term problem here: if you're using LLMs instead of junior devs, where are you getting the next generation of experienced devs to watch over LLMs?

And if you are an experienced dev, be aware that there are studies showing that programmers who use LLMs are slower and more error prone than programmers who don't. Yes, these studies covered people who love LLMs and swear by them. There's even evidence that heavy reliance on LLMs decreases cognitive ability over time - instead of exercising your own reasoning, you're training yourself to stop thinking and ask the AI to think for you.

AI bubble hype is causing problems even for people who haven't opted in. For example, the curl project is being DDOS'd by fake AI-generated bug reports. Many of these seem to be filed by sincere people who got fooled into believing they could "contribute" to an open source project by just asking a chatbot to analyze it for security bugs. The project tried adding a rule that you must disclose up front whether you used AI or face an immediate ban, but that just pushed many of these AI believers to start hiding what they were doing. After all, they think the AI has told them about a real bug, so they'd better do what they can to get it in front of people's eyes.


There's also a ton of ethical and environmental issues with so-called "generative AI", but I bet you're one of the people who would just handwave such concerns away.
 
Will be really interesting if they keep the Neural Accelerator per GPU core for the M series. If so, for certain tasks it will be a monster performance improvement.

Really looking forward to M5 now and I'm glad they delayed it to (hopefully) get these improvements and others in.
They delayed it? I thought only the MacBook Pros are delayed, and those are rumoured to get the M6 family.
The M5 will probably show up in the iPad Pro and MacBook Air in the spring.
 
You started out more or less promoting LLMs as a magic button you press to get production worthy code, yet now that a few people have posted about their experiences with them making mistakes your tune has changed. Interesting.
You're completely missing the point. I said it can one shot new apps. It doesn't mean I'm going to let it run wild in my business critical application without review. It writes 90% of the code in my business critical app but all of it is reviewed by me.

The rest of your post is just regurgitating anti-LLM garbage, no offense. It looks like you just searched for why LLMs are bad, and then posted what you found.
 
A19 Pro Geekbench Metal score [screenshot]
 
That's what better cooling can do.
Even if they had kept last year's A18 Pro, it would still score better just from the vapour chamber and aluminium structure.
Kind of impressive.
 
People have compared LLMs to junior devs, but at least a bright junior dev can be expected to (a) understand and reason about the problem statement to some degree and (b) learn from their mistakes, neither of which are properties of LLMs. There's also a long term problem here: if you're using LLMs instead of junior devs, where are you getting the next generation of experienced devs to watch over LLMs?

I must say I was quite confused by the attitude regarding junior devs in some posts. In my organization people who are incompetent, rude, and unwilling to learn do not keep their role very long. It’s not my job to babysit less experienced people. Everyone contributes according to their ability and grows over time.

LLMs are a useful tool, as long as one uses them responsibly. And they are getting larger and more accurate at some tasks. But I don't see a qualitative improvement in the latest generation of models, at least not on tasks that are relevant to me.
 