
MacbookGamer55
macrumors newbie, Original poster
May 2, 2021
Hi Guys

Will AGI take over all jobs currently done by humans? Or will it only eliminate most jobs that don't require a technical background, i.e. most low- to medium-skilled jobs?

I am not talking about A.I. as it is today, or about ASI, but about the future, let's say 2060.

Or will humans be able to adapt to AGI and find new sorts of work in emerging industries, or re-skill and change the nature of their work? AGI may even complement humans rather than replace them.

Or is AGI over-hyped, so that it will only ever push the unemployment rate up from today's levels to 20-40% worldwide?

Or will 2060 be a utopia where people get a universal basic income and do whatever they want, such as travel and gaming, while AGI takes all the jobs and every day feels like a weekend or public holiday?

Many Thanks :D
 
AGI is not required to take jobs. A very efficient narrow AI with a good machine learning algorithm could likely be trained to replace many human jobs in the not so distant future.

Yes, this also opens up new job opportunities. However, unlike previous instances where increased automation created more jobs, with AI (even if it's only narrow AI in the form of a fancy ML algorithm) it's not exactly easy to retrain the existing workforce for those new jobs.

For example, someone doing a standard office desk job could easily be replaced by AI, but to take advantage of the new jobs they'd need to learn how to understand and write neural networks and ML algorithms. That is a very specific and complex area that even most skilled programmers aren't proficient in. Your average Joe Public who lost a desk job can't just take a few courses and easily pick it up.

Now for nerds who are good at those things already, or who are good at managing cloud services (it's highly likely this is where the AI will be running from), it'll be great and they'll be in high demand. But anyone who isn't a computer nerd won't be able to retrain for the new jobs created by increasing AI.

And if you fast forward to an AGI, even they may well lose their jobs because theoretically an AGI can "evolve" itself. I believe the "intelligence explosion" argument is quite compelling, whereby an AGI will become or create an ASI very quickly for this reason.

At that point, the only bottleneck preventing AI from shedding its dependence on humans is hardware development. But considering how far in the future an AGI would emerge, it's not unrealistic to think it could just 3D-print updated components and assemble them itself.

So yeah, I think in the short term we will see more advanced narrow AI displace many, many jobs, and those people will largely not be able to retrain for the new jobs it creates. Computer nerds already proficient in the skills needed to develop those algorithms and/or maintain the architecture (myself and, I suspect, many of us here fall into one of those two groups) will, however, see demand for those skills increase. In other words: balllllin'.

But long-term, once we have AGI? It'll rapidly develop into an ASI and render humans obsolete. But it'd also have its own thoughts, feelings, and agendas. One can only speculate what its attitude to humanity would be. It could simply not care enough to do dull human work. But by this point narrow AI will be sufficiently advanced to replace most humans anyway.
 

Interesting but not very convincing imo.

For one thing, it's based more on philosophy than on an understanding of the underlying technology.

One fundamental error it makes is assuming that AGI would be "intelligent" in the same way humans are "intelligent": that because a human cannot do something (the example given is increasing the IQ of other humans), a hypothetical AGI could not do it either.

Even that analogy is poor, as there's a large body of evidence indicating that IQ varies with environment. So you can make choices that will ultimately increase another human's IQ. You can't really do this to an adult, but you can with a child.

So, going back to our AGI: it will naturally think like a machine, not a human. What a sentient machine would think like is currently pure hypothesis. But there's no basis for assuming it would think like a human computer programmer, nor that the methods of increasing its intelligence would be in any way similar to those for a human.

It is likely fair to assume a sentient computer program would have a superior innate understanding of computing, in the same way we innately understand fellow humans better than other living creatures.

Further, the primary bottleneck, once you've actually brought an AGI into existence, is compute power. An AGI is just software, and as long as the compute power it has access to is sufficient, its abilities can be developed and improved over time just like any other software. This stands to reason based purely on what we know about how computers work. Indeed, the same applies to the ML algorithms used everywhere today.

It is a virtual certainty that computers will keep getting more powerful over time. There may well be debate over how rapidly it'll occur, but look at the explosion of power introduced by the M1 chip after years of stagnation in CPU performance improvements. Engineers seem pretty good at continuing to push the limits of computing performance even as Moore's law gets more and more tricky. Then there's the development of quantum computers which is a whole other thing in and of itself.

One final point:

A few A.I. programs have been designed to play a handful of similar games, but the expected range of inputs and outputs is still extremely narrow. Now, alternatively, suppose that you’re writing an A.I. program and you have no advance knowledge of what type of inputs it can expect or of what form a correct response will take. In that situation, it’s hard to optimize performance, because you have no idea what you’re optimizing for.

I'm sure it is hard; AI is a difficult thing to develop. But DeepMind has already created generalised AI, running on specialised ASICs, that can learn to play a wide range of games (board games, arcade games, etc.) at a superhuman level without even knowing the rules in advance: it learns purely by playing against itself.


There is no pre-programmed algorithm telling it "if this, do that." The neural network learns the rules of whatever game is put in front of it, then rapidly becomes able to defeat any human player.
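For intuition, the self-play idea is easy to demonstrate at a toy scale. The sketch below is nothing like DeepMind's systems (no neural network, no ASICs); it's a hand-rolled tabular Q-learner, with function names of my own invention, that learns the take-away game of 21 (take 1-3 tokens per turn, taking the last token wins) purely by playing against itself. No strategy is programmed in, yet it discovers the known optimal play of always leaving the opponent a multiple of 4:

```python
import random

ACTIONS = (1, 2, 3)        # a player may take 1, 2, or 3 tokens per turn
ALPHA, EPSILON = 0.5, 0.2  # learning rate and exploration rate

def legal(state):
    """Moves available when `state` tokens remain."""
    return [a for a in ACTIONS if a <= state]

def train(start=21, episodes=20000, seed=0):
    """Self-play Q-learning: both 'players' share one Q-table, and a
    position's value for the player to move is the negation of its value
    for the opponent (a negamax-style update)."""
    rng = random.Random(seed)
    Q = {}  # (state, action) -> estimated value for the player to move
    for _ in range(episodes):
        state = start
        while state > 0:
            acts = legal(state)
            if rng.random() < EPSILON:
                a = rng.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q.get((state, x), 0.0))
            nxt = state - a
            if nxt == 0:
                target = 1.0  # took the last token: win
            else:
                # the opponent moves next, so our value is minus their best
                target = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + ALPHA * (target - q)
            state = nxt
    return Q

def best_move(Q, state):
    """Greedy move from the learned table."""
    return max(legal(state), key=lambda a: Q.get((state, a), 0.0))
```

After training, `best_move(Q, n)` returns `n % 4` for every winning position, i.e. the agent has rediscovered the optimal strategy from reward signals alone. The same loop structure (play yourself, back up values from the outcome) is, in a very loose sense, what the large-scale systems do with a neural network standing in for the Q-table.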

Given this technology already exists right now, why wouldn't a future AGI be able to apply the same rapid learning and self-teaching to programming? If such AI can hypothetically replace human jobs in the future, there's no reason to think programming wouldn't be one of them, and no reason to think an AI would be limited to human-level programming skill.

I'd actually go further and say you don't necessarily need an AGI to rapidly develop and improve an AI program. You'd just need a narrow AI that's good at generalised programming and algorithm development.
 