Prompter prompts you say?
What areas, outside of LLMs and AI, will this benefit? They mentioned 2x FP16 also.
For gaming, it'd be MetalFX. Nvidia's DLSS is done on the Tensor core. MetalFX can use these new cores.
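If you're curious what that path looks like from the app side, here's a minimal sketch of the MetalFX spatial upscaler setup. The resolutions and texture names are placeholders I picked for illustration; whether the scaling work actually lands on the new neural accelerators is up to Apple's driver, not something the API exposes.

```swift
import Metal
import MetalFX

// Render at a lower resolution, then let MetalFX upscale to native.
let device = MTLCreateSystemDefaultDevice()!
let queue  = device.makeCommandQueue()!

let desc = MTLFXSpatialScalerDescriptor()
desc.inputWidth   = 1280   // internal render resolution (assumed)
desc.inputHeight  = 720
desc.outputWidth  = 2560   // display resolution (assumed)
desc.outputHeight = 1440
desc.colorTextureFormat  = .rgba16Float
desc.outputTextureFormat = .rgba16Float
desc.colorProcessingMode = .perceptual

guard let scaler = desc.makeSpatialScaler(device: device) else {
    fatalError("MetalFX spatial scaling not supported on this device")
}

// Helper to create the input/output textures for this sketch.
func makeTexture(_ w: Int, _ h: Int, usage: MTLTextureUsage) -> MTLTexture {
    let td = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Float, width: w, height: h, mipmapped: false)
    td.usage = usage
    td.storageMode = .private
    return device.makeTexture(descriptor: td)!
}
let lowResColor    = makeTexture(1280, 720,  usage: [.shaderRead, .renderTarget])
let nativeResColor = makeTexture(2560, 1440, usage: [.renderTarget])

// Per frame, after rendering into lowResColor:
let commandBuffer = queue.makeCommandBuffer()!
scaler.inputContentWidth  = 1280
scaler.inputContentHeight = 720
scaler.colorTexture  = lowResColor
scaler.outputTexture = nativeResColor
scaler.encode(commandBuffer: commandBuffer)
commandBuffer.commit()
```

The point of the comparison: like DLSS on Tensor cores, the upscaling model is just inference work, so any dedicated matrix hardware in the GPU cores is free speedup for the same API calls.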
Will be really interesting if they keep the Neural Accelerator per GPU core for the M series. If so, for certain tasks it will be a monster performance improvement.
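For a sense of what "certain tasks" means: dense FP16 matrix multiplication is the textbook case. Here's a rough MPSGraph sketch of that workload; the sizes are arbitrary, and the idea that per-core neural accelerators would pick this path up is my speculation, not anything Apple has documented.

```swift
import Metal
import MetalPerformanceShadersGraph

// FP16 matmul via MPSGraph -- the kind of workload dedicated
// matrix hardware in each GPU core would be aimed at.
let device = MTLCreateSystemDefaultDevice()!
let graphDevice = MPSGraphDevice(mtlDevice: device)

let n = 1024
let shape: [NSNumber] = [NSNumber(value: n), NSNumber(value: n)]

let graph = MPSGraph()
let a = graph.placeholder(shape: shape, dataType: .float16, name: "A")
let b = graph.placeholder(shape: shape, dataType: .float16, name: "B")
let c = graph.matrixMultiplication(primary: a, secondary: b, name: "C")

// Fill both inputs with ones (Float16 is native on Apple Silicon).
let ones = [Float16](repeating: 1, count: n * n)
let data = ones.withUnsafeBufferPointer { Data(buffer: $0) }
let aData = MPSGraphTensorData(device: graphDevice, data: data,
                               shape: shape, dataType: .float16)
let bData = MPSGraphTensorData(device: graphDevice, data: data,
                               shape: shape, dataType: .float16)

let result = graph.run(feeds: [a: aData, b: bData],
                       targetTensors: [c],
                       targetOperations: nil)
// Each output element should be 1024 (dot product of two all-ones rows).
print(result[c]!.shape)
```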
You started out more or less promoting LLMs as a magic button you press to get production-worthy code, yet now that a few people have posted about their experiences with them making mistakes, your tune has changed. Interesting.
I agree. For some strange reason, people expect LLMs to never make mistakes. Instead, they should always check the LLM's work if it's important; if a task is not that important, then don't check. Learn how to use LLMs instead of dismissing them because they're not "AGI".
they delayed it? I thought only the MBPs are delayed, and those are rumoured to get the M6 family.
Really looking forward to M5 now and I'm glad they delayed it to (hopefully) get these improvements and others in.
You're completely missing the point. I said it can one-shot new apps. That doesn't mean I'm going to let it run wild in my business-critical application without review. It writes 90% of the code in my business-critical app, but all of it is reviewed by me.
Massive 41% better: https://browser.geekbench.com/v6/compute/compare/4765303?baseline=4764370 (A19 Pro Geekbench Metal score; benchmark results for an iPhone18,2)
People have compared LLMs to junior devs, but at least a bright junior dev can be expected to (a) understand and reason about the problem statement to some degree and (b) learn from their mistakes, neither of which are properties of LLMs. There's also a long term problem here: if you're using LLMs instead of junior devs, where are you getting the next generation of experienced devs to watch over LLMs?