> AI companies have "strong financial incentives" to forge ahead with development and to avoid sharing information about their protective measures and risk levels.

Incumbent leaders like OpenAI and most definitely Google have an even "stronger financial incentive" to push for regulatory capture.
Right, it all reeks of gatekeeping. God forbid Billy in his basement makes an AI that they can’t control. Think of all the bad things that could happen — that would be bad. We don’t know what those bad things are, but it sounds bad, so it’s probably bad…
 
> Uhmm, if a potential whistle-blower is witnessing a threat as serious as human extinction, why is he or she worried about potential retaliation from the corporation? In a situation like that, you blow the whistle no matter the personal consequences, unless, of course, you're just being a bit dramatic.
Not sure if serious? But I guess I’ll take the bait.

The answer is that people, including employees of these companies, have mortgages and families to feed.
 
> If I can write your job down on a piece of paper then it can be automated.

Navigating a board, shareholders, company culture, etc.: I'm not sure how much of that can be completely automated. But you may be right; for a generic company it might be enough.

But no doubt, in the future there will be those that can afford AI and those that can’t. The advantage will sit with those that can afford it.
I think training the LLM is the most expensive part. Once that’s trained cost isn’t too much of a factor. Overall the investment in AI is coming down quite rapidly every year.
Good data collection and processing will be the most significant expense.
 
I really can’t handle all the doom and gloom we have to face on a daily basis. The sky is falling media headlines, the election, protests, our country’s border disintegrating (USA), people being hateful and violent in general, the comment sections online, knowing about every injustice that ever happens, every tragedy, every disaster…
Where is the internet kill switch.
First-world problems ♡
 
The risk isn’t AI. The risk is those who own the AI.

The calls for regulation from the owners are laughable. This one, from former employees who stand on principle, should be given all due consideration.
 
> I think training the LLM is the most expensive part. Once that’s trained cost isn’t too much of a factor. Overall the investment in AI is coming down quite rapidly every year.
> Good data collection and processing will be the most significant expense.

This is the thing with AI at the moment. When you generate something, whose dataset are you using? Companies that invest heavily into their own datasets won't be sharing them and instead using them for their own competitive advantage.
 
> Uhmm, if a potential whistle-blower is witnessing a threat as serious as human extinction, why is he or she worried about potential retaliation from the corporation? In a situation like that, you blow the whistle no matter the personal consequences, unless, of course, you're just being a bit dramatic.
Easier said than done.

Also, they’re saying extinction is a possible outcome here. Not a sure thing. The point is they’re trying to wave their hands to get attention to this subject, because it is kind of insane that governments around the world are basically watching as these AI companies do whatever they want.

Politicians generally don’t do much about a problem unless they’re told it could become a crisis. If you wait until that point, it’s too late.
 
Every day, Terminator is slowly becoming a documentary rather than entertainment!

Lol - AI today is only as good as the person using it.

A general intelligence that could mimic a human is decades if not longer away.
 
Extremely wealthy individuals, investors, and companies push doomsday narratives to make AI seem like it will take over everything, making it worth trillions instead of billions. This FUD discourages investment in real products and true innovation.

It’s an insidious tactic to manipulate markets, touted by people seeking phony jobs and “influence” in AI “alignment.”

This. All this “we’re terrified of our own product! It’s going to take over the world!” talk sure sounds like a lot of marketing to me.
 
For the first 5,900 years of human civilization we were largely running around on horses; in the next 100 years we achieved flight, cars, machines, and computers; in the last 25 years we achieved computer miniaturization, the internet, and the birth of true AI software. The next 100 to 200 years will bring full-blown AI takeover/automation.

Yes, but people are making the mistake of calling this “extinction” or a “takeover.” It’s possible some stupid military lets AI control its nukes and it ends us all (including the AI, since without power it’s dead too).

A “super intelligent” AI, if it’s possible to actually make one, would just be the next step in human evolution. We made it with all our knowledge; we leave behind our biological meat suits for mechanical ones. That will make space travel a lot easier when we don’t have to worry about oxygen, food, gravity, etc.

I don’t think it’ll happen though as we seem hell bent on going extinct long before Skynet kills everyone.
 
> …The point is they’re trying to wave their hands to get attention to this subject, because it is kind of insane that governments around the world are basically watching as these AI companies do whatever they want.
>
> Politicians generally don’t do much about a problem unless they’re told it could become a crisis. If you wait til that point it’s too late.
It appears that way at first glance, but it's not quite the case. The "what will we ever do about our crazy amazing and terribly dangerous product!!" rhetoric is a tactic to angle for cushy positions in AI “alignment” or to secure even more funding by spreading FUD.

These doomsday narratives consistently manipulate attention and investment, not because there's an imminent crisis, but to bolster their own influence and financial backing.
 
> I really can’t handle all the doom and gloom we have to face on a daily basis. The sky is falling media headlines, the election, protests, our country’s border disintegrating (USA), people being hateful and violent in general, the comment sections online, knowing about every injustice that ever happens, every tragedy, every disaster…
> Where is the internet kill switch.
It started nearly a decade ago, with certain websites pushing clickbait headlines. Now, unfortunately, it’s practically unavoidable on the Internet: the culture of fear, outrage, and half-truths designed for little more than engagement has enveloped our society and turned us all against each other. The use of superlatives in everyday life, especially in the news media, has also gotten majorly out of hand, contributing to a culture where nobody accepts “good enough” or level-headedness.
 