
Huntn

macrumors Penryn
Original poster
May 5, 2008
24,098
27,195
The Misty Mountains
I listened to an interview this morning on NPR, where an expert from an algorithm institute in Canada (sorry, did not catch the institute or his name) claimed that we have entered the danger zone with AI. That AI can be programmed to seek and formulate its own goals, and the danger is handing it agency, the ability to make changes independently.

The expert cited an example where a Russian early warning system signaled an ICBM launch from the United States, and the officer who was in the position to push the button did not, because he said it did not feel right. The early warning system was in error, there was no launch from the US, and a machine programmed to respond independently would have sent nuclear missiles at the US.


A different interview:

Leading experts warn of a risk of extinction from AI


In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."
 
Our society currently has a major problem with misinformation, and there are essentially zero checks for mitigating it on any media channels. I think that is by far the most immediate threat. People are already falling for really obvious human-made content in large groups. AI will just make it easier to do en masse.

By this time next year I expect to not be able to tell if a photo is a Midjourney creation or a real photo. No one seems to understand that the only intelligence in LLMs is how to string words together in a convincing way. It is not general intelligence, and still very much on rails where the user/operator lays the track.

I fear the next 5 years as this will accelerate erosion of general trust that is already happening. Politics, news media, marketing, scam artists… a lot of not so great aspects of society that have already become invasive are about to get worse. I hate it.
 
I listened to an interview this morning on NPR, where an expert from an algorithm institute in Canada (sorry, did not catch the institute or his name) claimed that we have entered the danger zone with AI.
Is it Yoshua Bengio?

Do you remember the Year 2000 Bug? The experts scared everyone. It was the end of the modern world. In the end, nothing happened.
 
Is it Yoshua Bengio?

Do you remember the Year 2000 Bug? The experts scared everyone. It was the end of the modern world. In the end, nothing happened.
It may have been; the name went by while I was driving and I thought, who? I do think there is a huge danger here, as outlined in the second article I posted. We'll either get it in gear and corral the dangers, or we'll get some big surprise eventually...
Our society currently has a major problem with misinformation, and there are essentially zero checks for mitigating it on any media channels. I think that is by far the most immediate threat. People are already falling for really obvious human-made content in large groups. AI will just make it easier to do en masse.

By this time next year I expect to not be able to tell if a photo is a Midjourney creation or a real photo. No one seems to understand that the only intelligence in LLMs is how to string words together in a convincing way. It is not general intelligence, and still very much on rails where the user/operator lays the track.

I fear the next 5 years as this will accelerate erosion of general trust that is already happening. Politics, news media, marketing, scam artists… a lot of not so great aspects of society that have already become invasive are about to get worse. I hate it.
Unfortunately that topic, at least the politics end of it, has been deemed off limits at this site, but I agree with you.
 
With all the talk of AI in the news, Ex Machina (2014) is a must-see. Even though this is fiction, there are definitely AI lessons to be learned here. First and foremost, Asimov's Three Laws of Robotics: that in itself covers many of the pitfalls caused by Ava's creator. It also raises other questions about moral subroutines, or the lack thereof, and about creating a simulated human that is not a sociopath.

This may sound like a spoiler, but it is not: after watching the story and liking it, you'll most likely think about the motivations and desires that AI, if programmed to mimic humans, might have and act on, if allowed to act on them, which circles back to the Three Laws.

Technically impressive from a visual standpoint is the android brain the creator calls wetware (also known in the genre as the positronic brain), which has the ability to rearrange its own circuitry. As far as I know, current tech is not quite there, but this concept is what seems to make a life-like android plausible.

 
Is it Yoshua Bengio ?

Do you remember the Year 2000 Bug ? The experts scared everyone. It was the end of the modern world. Finally nothing happen.
Actually, with the Y2K bug there were a lot of problems, which would have been much worse if there had not been many experts warning about it well ahead of time... so before the year 2000, a lot of important systems were addressed.
But the idea, in hindsight, that nothing happened and it was all just hype is a misconception, and it gives people the idea that we should not worry when "experts" warn us because "everything will just be fine" (just think of climate change).

I think it would be very smart and prudent to be hesitant with AI, and other inventions that evolve very rapidly, as we humans tend to be way too arrogant about thinking that we know how things will work out for the best.

It would be easier to point out all the inventions we have made that have, and still, threaten our own existence...

The selfie stick, the perfect example of a way to kill many (young) people (yes this is a bit of sarcasm).
 
I wouldn't know; it's too far from my area of expertise. What I do know is that AI can already take a few jobs.

With Microsoft Designer and Bing Image Creator, in a matter of minutes you can create perfectly acceptable promotional material. It won't look highly professional or particularly creative, yet, but if an organization or company is on a tight budget, it can come in handy.

Microsoft Copilot: I've seen some demos on YouTube. Again, maybe it's not ready for a large corporation or a big investment bank yet, but for a smaller company it can get the job done without needing more employees involved in analysis.

As for analysis, ChatGPT, at least in English, can already do that.
 
There is NO intelligence in AI, I mean nothing. AI is only about databases and statistics.
"There is NO intelligence in humans, I mean nothing. Humans are only about neurochemical reactions and the detection of patterns."
 
I've been visiting the "AI Weirdness" website for quite some time.

It's kind of funny how bad the bots can be at so many apparently simple things that require actual recognition, instead of just text-based statistical replies. The "ASCII Art" exchanges have so many gems, like the giraffe and pony.

Of course, arguing that one is correct takes no recognition at all, just the ability to apply statistical text replies, so telling the bot how wrong it is just falls into the self-reinforcing argument loop.
 
Our society currently has a major problem with misinformation and there are essentially zero checks for mitigating it on any media channels. I think that is by far the most immediate threat.
This 100%. The immediate problem is a denial-of-service attack on the already inadequate resources for curating information. I think all the SF-inspired talk about AI supplanting humans is a dangerous distraction. E.g.

With all the talk of AI in the news, Ex Machina (2014) is a must see.
Sure but not until you've watched the hell out of Black Mirror which (mostly - there are some more fantastical episodes) is far more pertinent to the present day (skip the first one with the pig if you must). Unfortunately, some tech developers seem to see it as an instruction manual rather than a warning.

Even though this is fiction there are definitely AI lessons to be learned here. First and foremost Asimov’s 3 rules of Robotics.
Current AI is nowhere near being able to understand and apply Asimov's 3 laws... there's a long way to go between "car hitting tree-shaped-thing=-10 points, car hitting human-shaped-thing=-1000 points" and "a robot shall not harm a human or, by inaction, allow a human to come to harm". Heck, even the Asimov stories were mostly about playing philosophical games with how those laws could be (mis)interpreted, and ended up with a few robots coming up with a "greater good" get-out clause. Most SF AI is still very anthropomorphic, featuring robots/computers with "general intelligence" - it's not that current/near-future AI doesn't understand morality - it doesn't understand anything.

Ok, maybe someday AI will get so complex that we'll see general intelligence pop out as "emergent behaviour" or someone will work out what the secret sauce is - but the pressing danger is that humans will save a buck by letting an "AI" make important decisions without human judgement.

Incidentally - I think part of the basis for Asimov's anthropomorphic robots was the assumption that a robot brain would be a hugely expensive item, so a single robot would need to be able to do a wide range of human tasks, including operating machinery designed for humans, rather than building specialised 'brains' into each machine. Reality is that powerful computing hardware is dirt cheap compared to other resources and can be integrated into most things, so the value of a humanoid robot with general intelligence is questionable. There was also little concept of how important the "information economy" would become or how it could be threatened by something like GPT.
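The gap between point penalties and the Three Laws can be seen in miniature. This hypothetical penalty table for a toy driving agent (the names and numbers are invented here, echoing the example above, and don't come from any real system) encodes harm only as scores, with no concept of "by inaction, allow a human to come to harm":

```python
# Hypothetical penalty table for a toy driving agent - illustration only.
PENALTIES = {
    "tree_shaped_thing": -10,
    "human_shaped_thing": -1000,
}

def reward(collision):
    # The agent only "knows" that some collisions cost more points than others.
    # Nothing here expresses a law, an intent, or harm caused by inaction.
    return PENALTIES.get(collision, 0)

print(reward("human_shaped_thing"))   # -1000
print(reward("unlisted_situation"))   # 0: anything not in the table carries no penalty at all
```

That last line is the point: a situation the designers didn't enumerate simply has no moral weight, which is nothing like a law the agent understands.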
 
This 100%. The immediate problem is a denial-of-service attack on the already inadequate resources for curating information. I think all the SF-inspired talk about AI supplanting humans is a dangerous distraction. E.g.


Sure but not until you've watched the hell out of Black Mirror which (mostly - there are some more fantastical episodes) is far more pertinent to the present day (skip the first one with the pig if you must). Unfortunately, some tech developers seem to see it as an instruction manual rather than a warning.


Current AI is nowhere near being able to understand and apply Asimov's 3 laws... there's a long way to go between "car hitting tree-shaped-thing=-10 points, car hitting human-shaped-thing=-1000 points" and "a robot shall not harm a human or, by inaction, allow a human to come to harm". Heck, even the Asimov stories were mostly about playing philosophical games with how those laws could be (mis)interpreted, and ended up with a few robots coming up with a "greater good" get-out clause. Most SF AI is still very anthropomorphic, featuring robots/computers with "general intelligence" - it's not that current/near-future AI doesn't understand morality - it doesn't understand anything.

Ok, maybe someday AI will get so complex that we'll see general intelligence pop out as "emergent behaviour" or someone will work out what the secret sauce is - but the pressing danger is that humans will save a buck by letting an "AI" make important decisions without human judgement.

Incidentally - I think part of the basis for Asimov's anthropomorphic robots was the assumption that a robot brain would be a hugely expensive item, so a single robot would need to be able to do a wide range of human tasks, including operating machinery designed for humans, rather than building specialised 'brains' into each machine. Reality is that powerful computing hardware is dirt cheap compared to other resources and can be integrated into most things, so the value of a humanoid robot with general intelligence is questionable. There was also little concept of how important the "information economy" would become or how it could be threatened by something like GPT.
I mentioned the Three Laws of Robotics while understanding that, as stated, they would not really work overall, but they would have saved one or two humans. :D I can easily see a set of constraints that would govern just how far an android could go, which could even include killing a person under very specific circumstances. Yet as part of a plot in a story, I still recognize them as a set of limits required to give AI agency and avoid a robot rebellion. ;)

I watched part of Black Mirror and it did not stick with me. Any particular episodes you have in mind? :)
 
i watched part of Black Mirror and it did not stick with me. Any particular episodes you have in mind?
Just don't be put off by the first one, "The National Anthem", which was a bit of a one-off experiment in gross-out and isn't really representative of the pervasive "near-future perils of technology" theme of the rest. It's an anthology, so you can't judge it by a single episode and you can dip in at any point (except for "The Black Museum", which is the only one that heavily references other episodes). "Be Right Back" for creepy uses of AI, "Nosedive" or "Hated in the Nation" for out-of-control social media, "Fifteen Million Merits" to cure you of Peloton and reality TV, and "San Junipero" (not really pertinent to this topic) if you need cheering up after that barrage of cynicism.
 
Top scientists and governments are already discussing AI and how to make certain rules about it, but as soon as China or Russia develops an AI, there is nothing we can do.

Don't worry about it; you or I can do absolutely nothing about AI development :)

And when a true AI comes alive, it will be the first non-human life form we will meet.

And don't think Black Mirror or any other AI movie is any indication of what's to come; we'll probably never be able to understand true AI when it comes alive.

If you want to watch a more realistic movie about AI, then watch "Her".
 
Just don't be put off by the first one, "The National Anthem", which was a bit of a one-off experiment in gross-out and isn't really representative of the pervasive "near-future perils of technology" theme of the rest. It's an anthology, so you can't judge it by a single episode and you can dip in at any point (except for "The Black Museum", which is the only one that heavily references other episodes). "Be Right Back" for creepy uses of AI, "Nosedive" or "Hated in the Nation" for out-of-control social media, "Fifteen Million Merits" to cure you of Peloton and reality TV, and "San Junipero" (not really pertinent to this topic) if you need cheering up after that barrage of cynicism.
Thank you for reminding me. I was so turned off by that episode that it was the first and last one I watched. I'll give it another try. 🤔

Top scientists and governments are already discussing AI and how to make certain rules about it, but as soon as China or Russia develops an AI, there is nothing we can do.

Don't worry about it; you or I can do absolutely nothing about AI development

And when a true AI comes alive, it will be the first non-human life form we will meet.

And don't think Black Mirror or any other AI movie is any indication of what's to come; we'll probably never be able to understand true AI when it comes alive.

If you want to watch a more realistic movie about AI, then watch "Her"

I’ve been meaning to watch that one. I thought an excellent portrayal of a relationship with an AI personality was in Bladerunner 2049. The tech was near futuristic, but my impression is that they may be getting close to the fidelity of the relationship, but that was just a fictional story. It seems to me that if you get something that acts human and looks human, people will fall for them. Referencing the original Twilight Zone, that theme has been around for at least 60 years, but in the show, the viewer is unaware until the end of the story.

 
Thank you for reminding me. I was so turned off by that episode that it was the first and last one I watched. I'll give it another try.
...there's plenty that is shocking or disturbing in the rest of BM, but nothing on the same House of Cards-meets-American Pie level of squick as the first one (still, that was kind of the point: given that the media can already get popularity-seeking politicians eating bugs on a reality show, how far could it be pushed? The protagonist just did what the focus groups and polls were telling him to do).

And don't think Black mirror or other AI movie is any indication on whats to come, we'll probably never be able to understand true AI when it comes alive.
No. BM deals with the more immediate problem: human abuse of foreseeable technology, including what is currently being promoted as "AI". For instance, this:

https://www.digitaltrends.com/computing/ai-being-used-to-let-people-speak-to-the-dead/ (2023)

...was literally the plot of Black Mirror: Be Right Back (2013).
 
There is NO intelligence in AI, I mean nothing. AI is only about databases and statistics.
The trouble with the reductionist argument is that you can be just as reductive about human intelligence. 99.999% of us (and of our intelligence) aren't spending all day probing the limits of abstract reasoning toward a grand unified theory. Most of what a human brain does could be called poor statistical modeling off its own shoddy database, with pretty unimpressive results.
 
And when a true AI comes alive, it will be the first non human life-form we will meet.
There are over 2 million different species of non-human life forms on this planet that show an entire range of unique abilities and specialties beyond our own, and only a handful of them have we even begun to scratch the surface of studying, much less communicating with. It’s an interesting value our species has, to be perfectly content with their mass extinction before even making a concerted effort to study them, while hanging our hopes on machines we create in our own image for interspecies contact.
 
The trouble with the reductionist argument is that you can be just as reductive about about human intelligence. 99.999% of us (and our intelligence) aren’t spending all day probing the limits of abstract reasoning toward the grand unified theory.
No, we're figuring out things like "no, my reflection in the mirror isn't a strange man looking at me", "the setting sun isn't a red traffic light" and not making funny mistakes like this:




Maybe we're even thinking "the references at the end of the paper are meant to refer to actual published works" or "the short story I just wrote is a pile of stinking crud" or "that's a banana with a psychedelic Dali-esque sticker next to it, not a toaster" (see https://arxiv.org/pdf/1712.09665.pdf).
 
There are over 2 million different species of non-human life forms on this planet that show an entire range of unique abilities and specialties beyond our own, and only a handful of them have we even begun to scratch the surface of studying, much less communicating with. It’s an interesting value our species has, to be perfectly content with their mass extinction before even making a concerted effort to study them, while hanging our hopes on machines we create in our own image for interspecies contact.
This could tie into a nice discussion about spirituality, if there was a religious forum here. ;)
 
Link describes the issue

Upon searching Amazon and Goodreads, author Jane Friedman recently discovered a half-dozen listings of fraudulent books using her name, likely filled with either junk or AI-generated content. Both Amazon and Goodreads resisted removing the faux titles until the author's complaints went viral on social media.

In a blog post titled "I Would Rather See My Books Get Pirated Than This (Or: Why Goodreads and Amazon Are Becoming Dumpster Fires)," published on Monday, Friedman detailed her struggle with the counterfeit books.

"Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books
 