I have no interest in the online AIs once we can get local language models running on our own devices. And we’ll have that soon.
We already have some, but they are much worse than the online models. Whatever accuracy they claim, I've found them extremely lacking on corner-case questions, even when I prompt them the same way I prompt GPT. I'm hopeful they'll get there within a year, because I agree with you – I'd much rather have a local instance that I could have a very long-running relationship with, guiding / training the model for my particular usage.
Fingers crossed that happens before all of this magnificent technology gets locked behind insanely expensive paywalls. The value GPT provides right now is far greater than the paltry $20/mo they're asking, and once Microsoft gets involved and charges 10x that amount, we're going to see stratification, which I do not want – this technology should be democratized, ideally with functionality and privacy in mind. That's why I'd like Apple to be pushing hard on it behind the scenes. I just don't think they are, because it doesn't line up with any specific product they currently have.
There's too much misunderstanding, and the tech press has done a terrible job explaining to the layperson how to actually use this, instead sowing FUD or hyperbole. There are limitations, of course, but because I'm technically minded and understand the constraints, I can literally save the $20/mo in my own labor with 15-20 minutes of use, depending on the task. It's an amplification device if you use it correctly, but a total nonsense generator if you aren't technically minded and haven't read the documentation or don't understand the technology behind the scenes at all. I've never seen anything like it in 30+ years in this field, except for probably one thing: the internet. It's a total game changer, and the sky is the limit.
But much like NLP, there is just an utter flood of nonsense research being funded and done on these now, and it's very difficult to sift the good / novel approaches from the "look what I made this LLM do" pile. Real, hard science is happening, but it's being buried among the noise. I've been following this field closely for more than half a decade, and over the last 6 months it's become very difficult to keep up with the volume of research being done, and most of it frankly sucks. Exactly what happened when NLP was the hot thing. But it doesn't mean that this isn't a revolutionary technology, far from it. I'm pretty excited for the future, and I hope Apple gets on board soon, but I doubt it.