What areas, outside of LLMs and AI will this benefit?
They mentioned 2x FP16 also.
For gaming, it'd be MetalFX. Nvidia's DLSS is done on the Tensor core. MetalFX can use these new cores.
> You started out more or less promoting LLMs as a magic button you press to get production worthy code, yet now that a few people have posted about their experiences with them making mistakes your tune has changed. Interesting.

I agree. For some strange reason, people are expecting LLMs to never make mistakes. Instead, they should always check the LLM's work if it's important. If a task is not that important, then don't check. Learn how to use LLMs instead of thinking it's not "AGI" so it's not worth using.
> Will be really interesting if they keep the Neural Accelerator per GPU core for the M series. If so, for certain tasks it will be a monster performance improvement. Really looking forward to M5 now and I'm glad they delayed it to (hopefully) get these improvements and others in.

They delayed it? I thought only the MBPs are delayed, and those are rumoured to get the M6 family.
> You started out more or less promoting LLMs as a magic button you press to get production worthy code, yet now that a few people have posted about their experiences with them making mistakes your tune has changed. Interesting.

You're completely missing the point. I said it can one-shot new apps. That doesn't mean I'm going to let it run wild in my business-critical application without review. It writes 90% of the code in my business-critical app, but all of it is reviewed by me.
Massive 41% better: https://browser.geekbench.com/v6/compute/compare/4765303?baseline=4764370

A19 Pro Geekbench Metal score (iPhone18,2: benchmark results for an iPhone18,2 with an ARM processor, browser.geekbench.com)
People have compared LLMs to junior devs, but at least a bright junior dev can be expected to (a) understand and reason about the problem statement to some degree and (b) learn from their mistakes, neither of which are properties of LLMs. There's also a long term problem here: if you're using LLMs instead of junior devs, where are you getting the next generation of experienced devs to watch over LLMs?
Will be really interesting if they keep the Neural Accelerator per GPU core for the M series. If so, for certain tasks it will be a monster performance improvement.
Really looking forward to M5 now and I'm glad they delayed it to (hopefully) get these improvements and others in.
That's what better cooling can do. Even if they had put in last year's A18 Pro, it would still get a better result just from the vapour chamber and aluminium structure. Kind of impressive.
> I must say I was quite confused by the attitude regarding junior devs in some posts.

Yeah, I mean we all were at that level at some point; it's kind of sad. I used to train programmers in my job, and it's something where the more you put in to help, the more you'll get out, both personally and professionally.
> These are pretty much the same cores, so yes, the new GPU cores will certainly be present in M5. Apple is serious about making the MacBook more interesting to ML researchers and developers.

Agree, but if the cooling cannot keep up with it, it's wasted tech, especially in something as thin as the Air.
Better cooling is just a small part of it. The GPU execution units got substantially wider. The new GPU can now execute two 16-bit precision operations per cycle, or a certain mix of 32-bit operations. These things add up.
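As a rough sketch of why dual-issue FP16 matters: peak throughput is roughly cores x ALUs x ops-per-cycle x clock, so doubling the 16-bit ops each ALU can issue per cycle doubles the FP16 ceiling. The core count, ALU width, and clock below are made-up illustrative numbers, not Apple's actual specs:

```python
# Back-of-the-envelope peak-throughput model. All hardware numbers here are
# hypothetical, chosen only to illustrate the effect of dual-issue FP16.
def peak_gflops(cores, alus_per_core, ops_per_alu_per_cycle, clock_ghz):
    """Theoretical peak throughput in GFLOPS: cores * ALUs * ops/cycle * clock."""
    return cores * alus_per_core * ops_per_alu_per_cycle * clock_ghz

# Hypothetical 6-core GPU, 128 ALUs per core, 1.5 GHz clock.
fp32 = peak_gflops(6, 128, 1, 1.5)   # one 32-bit op per ALU per cycle
fp16 = peak_gflops(6, 128, 2, 1.5)   # two 16-bit ops per ALU per cycle

print(fp32)          # 1152.0 GFLOPS
print(fp16)          # 2304.0 GFLOPS
print(fp16 / fp32)   # 2.0 -> the "2x FP16" headline number
```

In practice the real gain depends on how much of a workload is FP16-friendly and whether memory bandwidth can keep the wider ALUs fed, which is why benchmark deltas land below the theoretical 2x.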
> They delayed it? I thought only the MBPs are delayed, and those are rumoured to get the M6 family.

I don't think that's the plan; I meant MBP M5 in early 2026 vs. the yearly cadence, which is the rumor. I'm really looking forward to these new chips and may upgrade M4 Max -> M5 Max if they pan out, if the A19 Pro is an indicator of what's coming. Really good stuff.
M5 will probably come to the iPad Pro and MacBook Air in the spring.
> You're completely missing the point. I said it can one-shot new apps. That doesn't mean I'm going to let it run wild in my business-critical application without review. It writes 90% of the code in my business-critical app, but all of it is reviewed by me.

I get why you say this, but their point specifically about LLMs not learning from their mistakes is extremely valid. It's an incredible, probably intractable problem given the nature of the tools.
The rest of your post is just regurgitating anti-LLM garbage, no offense. It looks like you just searched for why LLMs are bad, and then posted what you found.
> I must say I was quite confused by the attitude regarding junior devs in some posts. In my organization people who are incompetent, rude, and unwilling to learn do not keep their role very long. It's not my job to babysit entitled beginners. Everyone contributes according to their ability and grows over time.

Exactly, you need to be able to learn, to have true agency and improve. Right now that's impossible.
LLMs are a useful tool, as long as one uses them responsibly. And they are getting larger and more accurate at some tasks. But I don't see a qualitative improvement in the last-generation models, at least not on tasks that are relevant to me. I do believe that future models will have better performance, but they will use a different architecture than token prediction.
> I'm really looking forward to these new chips and may upgrade M4 Max -> M5 Max

Same, though as I've mentioned, I'm really happy with my M4 Max as it stands. I'm more curious to see if Apple changes the architecture or just adds improvements to the existing setup.
> I get why you say this but their point specifically about LLMs not learning from their mistakes is extremely valid. It's an incredible, probably intractable problem given the nature of the tools.

Extremely valid to what? That AI is in a bubble? That we shouldn't use LLMs?
We do have better versions of MoE, and a bunch of things happening behind the scenes now where models can review their output, especially code, and find errors much of the time, which is great and makes them useful for scaffolding. The "thinking"/"reasoning" model paradigm plus web search has also opened up a lot more everyday use cases. But without being corrected and internalizing that knowledge, there will be a hard wall on how far these tools can advance, since this also blocks off self-learning, which is going to be an even greater challenge.
> What would constitute a change of architecture to you? From what we’ve seen yesterday there are major changes in both the CPU and the GPU in this new generation.

The two rumors I saw on various websites seem to fit the bill. I'm not saying these are viable, just stuff you see on the interwebs.
> The first, a change to the unified memory architecture, focusing on dedicated memory for GPU operations,

Jesus no.