Just drop the AI nonsense altogether, IMO. Every LLM is supposedly getting better on metrics, but their real world performance, in my experience, is getting worse. It’s almost like how CPU and GPU makers optimize for benchmarking software instead of actual innovation that might not result in “number bigger now — bigger number good”.
I don’t think I’ve had a single LLM interaction in the past two months where I haven’t had to ask it whether I was unclear (“No, you were very clear.”), ask it repeatedly to double-check and verify its clearly-too-long answer, been gaslit about answers I know are wrong, or been told by the LLM to just do a search for the answer myself, etc., etc.