If Only Steve Jobs Were Still Here (OMG, I can't believe I said that - I'll put a buck in the swear jar!) he might have held his ground and not got into the "AI" snake oil business. Apple could always have licensed ChatGPT or something if they got it wrong.
Don't get me wrong - the tech behind so-called "AI" (LLMs, diffusion, neural processing etc.) is important for the future but the current crop of consumer applications don't seem to be fit for anything more than a "bit of fun". It's like early attempts to play jerky, grainy video on 8-bit computers - definitely a sign of "things to come" but not, at the time, remotely useful. What's worrying is the current "Emperor's new clothes" syndrome where people are pushing ahead and using it
anyway despite the fact that it is horribly unreliable and has the potential to cause all sorts of social problems.
It's telling that everybody - not just Apple - is playing the same game: rolling out AI "features" whether customers want them or not, making users jump through hoops to disable them, then stealthily re-enabling them with every update.
It also seems like current AI is a bit over-ambitious: it would be better used as "a second pair of eyes" rather than pretending that it can do things that it can't.
I don't want AI to categorise mail if I can't rely on the result and might have an important personal message filed under "clutter" or something. Now, if AI could scan my existing *junk* folder and flag up any messages that might be false positives, that would be something. Or, when I read a message, maybe AI could offer a "Move to Invoices?" button to help me sort messages as I go - that would be great... but, no, it has to pretend that it is clever enough to know better than me, because that looks more impressive.
I don't want AI to
summarise email messages and tell me that my relative has been euthanised (when they actually had their dog put down). It's not worth the 10 seconds of stress. There have been some "summaries" of news stories (not for discussion in this thread) that had the potential to cause riots.
When I search, I want to find an original source, not a possibly-hallucinated AI summary. Now, if AI could help narrow down tricky searches when asked to, that would be great - but understanding context isn't an LLM's strong suit...
If I write code then "dumb" templates and auto-complete are enough to cut down on typing boilerplate. The time-consuming part is not churning out code, but checking and testing it afterwards. Where's the point in getting AI to write code if you can't rely on it without painstakingly checking it line-by-line? Again - wake me up when AI can help with checking the code that I write.
We've had automatic code-writing software and other "rapid application development" gimmicks since at least 1981, but they fail to take account of the adage that "the last 10% of the work takes 90% of the time": they rip through the first 90% and leave you with something rough and inadequate that is a major pain to finish - at worst needing to be completely re-written because the RAD system lacks support for some essential feature.
If I'm writing a document, banging words down is the easiest and quickest part - it's the checking and refining that takes the time, which you have to do even more scrupulously with AI-written cobblers. Plus, of course, doing the research - where AI might be able to help with the searching but can't be trusted to automate it. If you struggle with writing - well, heck, it's the 2020s: record a podcast, or a video of you presenting your ideas as interpretative dance or a rap number - that's what the media revolution was supposed to be about!