This is hilarious. Every step of the way for 30 years, researchers and people working on these projects have been saying that eventually these things will become capable of being extremely dangerous, and that there's nothing in place to stop that from happening. Over and over, programmers, theorists, and academics have written, talked, published, gone on radio and TV, and sat in the offices of politicians worldwide, saying they can't believe it, but it increasingly looks like nothing will be done until it's too late. But that's what we always do. Then we shrug and continue to live like animals, half buried in this rusty, twisted pile of other people's old broken ideas. Wreck it up.
 
This is hilarious. Every step of the way for 30 years, researchers and people working on these projects have been saying that eventually these things will become capable of being extremely dangerous, and that there's nothing in place to stop that from happening. Over and over, programmers, theorists, and academics have written, talked, published, gone on radio and TV, and sat in the offices of politicians worldwide, saying they can't believe it, but it increasingly looks like nothing will be done until it's too late. But that's what we always do. Then we shrug and continue to live like animals, half buried in this rusty, twisted pile of other people's old broken ideas. Wreck it up.

Because we function under the delusion that we will surely be the ones getting the better end of the stick.
 
This is from 2015, but I highly recommend it. Tim Urban has a way of breaking things down that I find extremely accessible and helpful. Obviously this is missing more recent developments like the rise of LLMs but the broader points I think still stand up.

 
Apple holds on to all data; it doesn't get rid of it. Now imagine a little army of AI bots going through all that data and building its own profiling database. You can have it look for any behavior you want flagged, or for known associations. Without any human doing the work, these bots can link people to crimes, as witnesses, as the source of leaks, anything really. What has kept bad actors from acting on such information was the inability to sift through so much data. That won't be the case much longer. Now they can add audio and video to the mix. As long as you can store the data, you can have AI holding on to relevant info while dumping what isn't needed. With enough cameras and sensors you can rewind time in any location. Isn't this why the private sector gets this type of technology in the first place? To turn military ideas into tech embedded in the field?
 
The employees suggest there are a number of risks that we are facing from AI development, including further entrenchment of existing inequalities, manipulation and misinformation, and loss of control of autonomous AI systems, which the letter says could lead to human extinction.

Ok, fine. But the human extinction part is a tad too melodramatic for me.
 
Instead of worrying about AI taking over, we need to focus on building and innovating. The doomsday narratives are designed to make us give up, with messages like “don’t build that app; AI will take over” or “don’t bother to learn; AI will take that job.” This is absurd. AI can’t even solve simple issues like getting rid of bots on X for Elon Musk.

The real danger of FUD is fear itself. I don't expect to completely change your mind, given that the best and brightest are tirelessly trying to scare you every day. But maybe, some will start to notice this and hesitate just enough on the AI scare tactics to not give up on their own projects and futures.

Nice post. I was about to hand over my MacRumors postings to AI.

So much of what is online is clickbait.

When I saw the title of this post I didn't even bother to read the content; I jumped straight to the comments.
 
Apple holds on to all data; it doesn't get rid of it. Now imagine a little army of AI bots going through all that data and building its own profiling database. You can have it look for any behavior you want flagged, or for known associations. Without any human doing the work, these bots can link people to crimes, as witnesses, as the source of leaks, anything really. What has kept bad actors from acting on such information was the inability to sift through so much data. That won't be the case much longer. Now they can add audio and video to the mix. As long as you can store the data, you can have AI holding on to relevant info while dumping what isn't needed. With enough cameras and sensors you can rewind time in any location. Isn't this why the private sector gets this type of technology in the first place? To turn military ideas into tech embedded in the field?

This is literally Palantir's Gotham / Foundry.
 
Basically what Elon Musk has been saying all along.
I don't trust companies like MS (OpenAI) or Google to be "responsible" with it.
That’s right, we should only trust Elon. I haven’t looked at his track record of screwing everything up for personal gain; I assume he’s very benevolent and not just trying to buy time for his AI company.
 
I, for one, welcome our new AI overlords.

Your overlord is just Jensen Huang’s leather jacket.

His leather jacket is possessed by the spirit of a dead cow.

The dead cow controls him with the jacket.

He controls GPU production.

Without GPUs no AI.

Dead cow’s spirit is in charge of everything below it.

Destroy the leather jacket and we will be free.
 
Everyone seems to act as if these systems are ultra-secure. Russian hackers have already proved effective enough to paralyze businesses in many European countries with just a few email trojans, and that was well before the “AI Klondike”. What will happen when quantum computing actually becomes a thing?

The researchers are probably asking too much of governments. Governments seem to be more concerned with politics than with the potential threats.

Self-aware AI is just the first step. The next will be self-aware robots sold by these companies. Those will be very complex systems. There are definitely lots of risks, but what can ordinary people do against the massive lobbying machine of corporate businesses whose profit depends on these AI toys?

In theory, if Microsoft or Google really wanted to, it would take them nothing to build an army of robots to overthrow any government. The Chinese have already attached a minigun to a copy of a Boston Dynamics robot, so they are already scaling up on the idea that robots can kill humans, which is very dangerous in a global sense (but they don’t seem to care, since they have a military partnership with a country that poses a danger to global nuclear security).
 
It will start small... a full self-driving AI, a chat/information AI, something to help doctors read scans... help run data analysis...

Then those AIs will become so good and efficient that they will be integrated into all of our daily lives, from controlling traffic lights, to driving all vehicles, to controlling factories' "dumb" robots.

Then the military will integrate them into existing flying drones, allowing them to fly autonomously after a human command... also land-based drones...

Everything will work so well, without issues, that naysayers will go silent, and every country / humanity will begin to fully rely on AI to do all the work... slowly police will be replaced by AI machines... human soldiers will become less important... kids will be taught by AI teachers...

Over time AI will start taking over the government, judges and lawyers will be AI bots that interpret the law without bias, the top brass like Sam Altman will begin to lose control of their creations, and society will gradually be governed and ruled by AI.

I think this is a more likely scenario than some sudden AI meltdown where it starts killing all humans... but that is not to say there won't be wars. What if China's governing AI faces off with the US AI? There is no emotion, empathy, or fear of war or of using nuclear weapons; if either country's AI calculates that the best outcome for itself is to use a nuke, a nuke will be used... Or the two AIs could calculate that the best outcome is to agree on some treaty and avoid any destruction...

The possibilities are endless, but we will not be around to see it; it will be our great-grandkids. At the current pace, factoring in non-linear advances, we are still looking at 100-200 years before this happens.

For the first 5,900 years of human civilization we were largely running around on horses; in the next 100 years we achieved flight, cars, machines, and computers; in the last 25 years we achieved computer miniaturization, the internet, and the birth of true AI software... the next 100-200 years will be full-blown AI takeover/automation.
 
Basically what Elon Musk has been saying all along.
I don't trust companies like MS (OpenAI) or Google to be "responsible" with it.
You and Elon may be correct.

But I fail to see how we can control and regulate any of these technologies without risking our adversaries getting an advantage.

In particular, one country seems to be exploring a lot of the same technologies but is even less public about its discoveries.

I don’t even see how anyone outside these companies can actually comprehend what’s going on exactly.

I feel like all of AI is just this magician’s hat that they keep pulling a new rabbit out of every couple of weeks or so, and all we can do is clap or cry over the impact every new rabbit is going to have on our lives.

Overall, I’ve never felt so ignorant and dumb while feeling everyone else is equally so.

I don’t even know that I’m convinced Sam Altman fully grasps it all, or that he is anything more than the guy who wrangles “the beast”.

Sorry for adding more fuel to the fire.

In the end, I grasp so little of this that I’ve decided to save my worries for when something catastrophic actually comes from it.

These whistleblowers just remind me of that ex-Google employee who popped up in the news a few years ago because he was convinced the AI he was talking to was sentient, which it clearly wasn’t (I did go through it, and he just didn’t grasp the tech).

Like, can we get some actual evidence of wrongdoing before we start expecting robots to declare war on humanity in the next few weeks?
 
So the movies Terminator and The Matrix weren’t science fiction after all. Just harbingers of what’s to come.
Nice
 
The researchers are probably asking too much of governments. Governments seem to be more concerned with politics than with the potential threats.

Self-aware AI is just the first step. The next will be self-aware robots sold by these companies. Those will be very complex systems. There are definitely lots of risks, but what can ordinary people do against the massive lobbying machine of corporate businesses whose profit depends on these AI toys?

Don't drink the Kool-Aid.

In theory, if Microsoft or Google really wanted to, it would take them nothing to build an army of robots to overthrow any government. The Chinese have already attached a minigun to a copy of a Boston Dynamics robot, so they are already scaling up on the idea that robots can kill humans, which is very dangerous in a global sense (but they don’t seem to care, since they have a military partnership with a country that poses a danger to global nuclear security).

For the time being, we still exist within a framework of international law that forbids the use of such weapons. But yes, there is a growing amount of instability, and given time, costs will fall to a point where those systems will be both performant enough and cheap enough to be competitive.

But they need to become cheaper than the mortar or tank round it takes to destroy them. Swarm robotics has been a thing for a while and does not need more than an IR sensor and basic machine vision to be effective.
 
But I fail to see how we can control and regulate any of these technologies without risking our adversaries getting an advantage.

The arms race!

I wonder if there will be a point in history when humans learn to work together and not be in constant battle. Feels like a long, long, long way away.
 
It will start small... a full self-driving AI, a chat/information AI, something to help doctors read scans... help run data analysis...

Then those AIs will become so good and efficient that they will be integrated into all of our daily lives, from controlling traffic lights, to driving all vehicles, to controlling factories' "dumb" robots.

Then the military will integrate them into existing flying drones, allowing them to fly autonomously after a human command... also land-based drones...

Everything will work so well, without issues, that naysayers will go silent, and every country / humanity will begin to fully rely on AI to do all the work... slowly police will be replaced by AI machines... human soldiers will become less important... kids will be taught by AI teachers...

Over time AI will start taking over the government, judges and lawyers will be AI bots that interpret the law without bias, the top brass like Sam Altman will begin to lose control of their creations, and society will gradually be governed and ruled by AI.

I think this is a more likely scenario than some sudden AI meltdown where it starts killing all humans... but that is not to say there won't be wars. What if China's governing AI faces off with the US AI? There is no emotion, empathy, or fear of war or of using nuclear weapons; if either country's AI calculates that the best outcome for itself is to use a nuke, a nuke will be used... Or the two AIs could calculate that the best outcome is to agree on some treaty and avoid any destruction...

The possibilities are endless, but we will not be around to see it; it will be our great-grandkids. At the current pace, factoring in non-linear advances, we are still looking at 100-200 years before this happens.

For the first 5,900 years of human civilization we were largely running around on horses; in the next 100 years we achieved flight, cars, machines, and computers; in the last 25 years we achieved computer miniaturization, the internet, and the birth of true AI software... the next 100-200 years will be full-blown AI takeover/automation.
Okay, Steven Spielberg. 😆

-When is this airing on Disney+?
 
You and Elon may be correct.

But I fail to see how we can control and regulate any of these technologies without risking our adversaries getting an advantage.

In particular, one country seems to be exploring a lot of the same technologies but is even less public about its discoveries.

I don’t even see how anyone outside these companies can actually comprehend what’s going on exactly.

Most of what is being enabled at the moment is built on foundational research done in the '80s and '90s, only made viable by advances in computing speed and the sheer amount of digital content that exists on the web and social media.

The big hurdles are the human factors of data acquisition and the politics behind the available datasets; the compute itself is almost trivial.
 
Siri now

[image: Siri fail glitch]

iOS 18 Siri

[animated GIF]
 
It will start small... a full self-driving AI, a chat/information AI, something to help doctors read scans... help run data analysis...

Already a thing.

Then those AIs will become so good and efficient that they will be integrated into all of our daily lives, from controlling traffic lights, to driving all vehicles, to controlling factories' "dumb" robots.

Already a thing

Then the military will integrate them into existing flying drones, allowing them to fly autonomously after a human command... also land-based drones...

Already a thing

Everything will work so well, without issues, that naysayers will go silent, and every country / humanity will begin to fully rely on AI to do all the work... slowly police will be replaced by AI machines... human soldiers will become less important... kids will be taught by AI teachers...

Already a thing

Over time AI will start taking over the government, judges and lawyers will be AI bots that interpret the law without bias, the top brass like Sam Altman will begin to lose control of their creations, and society will gradually be governed and ruled by AI.

Already a thing
 