It’s 2-3 years away at most. Enjoy.
I’d love to see an explanation or sources that lead you to believe that. Please cite research saying we’re remotely close that isn’t sponsored by or affiliated with a company that would benefit from further investment by publishing hyped projections. There’s a reason so many OpenAI people left.
“Enjoy” is kind of low-effort trolling. Instead of doing that, why don’t you point to some evidence so I can learn why you think this?
I work in the field and have studied and implemented research in this area for a living. Have you? If so, I’d love a detailed reply even more, and I’m being serious, because I’m sure there are things I’ve missed, but I frankly doubt you’re deeply familiar with the disciplines involved. I’d love to be wrong and learn something, though; the internet is great that way when it’s used for knowledge instead of arguing.
Feels ironic that you'd say that to me given how regularly I say that to others. Lots of people in management seem convinced that AI is ready to take over all office and programming jobs.
The only jobs it's ready to take over in that domain are jobs that shouldn't exist at all. I keep telling them that if AI can handle a task, that's a sign it's pointless busywork that should simply be eliminated.
Generating images or audio, though? It's astonishingly good at that already, and I haven't heard anyone say that AI is about to plateau on those tasks, even though it's getting close to matching professionals while producing output thousands of times faster.
Programming is one thing generative models, coupled with other technology, will be good at. I saw deterministic source code generation in labs years ago and worked directly with the person who invented the method; that work predated transformer models by a few years.
It’s a lot easier to verify that machine-generated code runs correctly than to verify that a generated song is evocative or a piece of art is novel, especially in the case of generative “AI” art, where novelty is almost impossible and iteration doesn’t work very well.
That said, there will be augmentations to creation tools, but those jobs and art aren’t going away; only low-quality work like background music, bad-looking app icons, or emojis will. Humans are extremely good at pattern recognition, so even the best “AI art” starts to look the same to us after a while.
Kurzweil should have stuck to talking about keyboards.
He was absolutely ahead of his time there, I’ll give him that. Those synthesizers are pretty interesting even compared to what we have today.
Kurzweil was correct about some of the timeline in his first singularity book, but that has more to do with how “AI” is defined, and especially with the flawed notion that the Turing Test is good enough to base any such definition on. Just because a system can fool a human with probabilistic text does not mean it is literally artificially intelligent. I really wish we had a better name for this technology.
One of my fears and assumptions is that before the decade is over, someone will “claim” they have AGI, massaging the definition because they made an agent really good at one specific domain, which should be disqualifying. Generalized intelligence is going to be very tough to crack and will require many disparate disciplines working together, not just computer scientists. It may also require biological processors, which are starting to be developed and used in small tests.
My personal opinion is that we’re at least 15-20 years out from true AGI, and even that is likely overconfident.