I don't think any are perfect but some are far worse than others.

Europe sets benchmark for rest of the world with landmark AI laws.

"Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life.
The European Union's AI Act is more comprehensive than the United States' light-touch voluntary compliance approach while China's approach aims to maintain social stability and state control."

https://www.reuters.com/world/europ...ark-artificial-intelligence-rules-2024-05-21/
 
It’s in your hand. Probably on the right hand side of the device you’re using.
 
I really can’t handle all the doom and gloom we have to face on a daily basis. The sky is falling media headlines, the election, protests, our country’s border disintegrating (USA), people being hateful and violent in general, the comment sections online, knowing about every injustice that ever happens, every tragedy, every disaster…
Where is the internet kill switch.
Same. I literally have child filters on all of my apps because I'm tired of having doom and gloom and politics shoved down my throat. I now eat "gummies" to help block all of this from my head. I wear my AirPods with noise cancelling on anytime I'm in a store because I'm tired of overhearing about politics - people are so obsessed! I literally stay home and only watch positive and fun shows from the 70's and 80's. Everything sucks now because the internet and media have made everyone on edge with the constant sensationalism and negativity. Turn it all off.
 
A Brave New World
 
The most deadly events the world has seen:

Influenza pandemic (1918-19): 20-40 million deaths.

Black Death/plague (1348-50): 20-25 million deaths.

AIDS pandemic (through 2000): 21.8 million deaths.

World War II (1937-45): 40-60 million deaths.

World War I (1914-18): 30-35 million deaths.

Humanity will be fine and survive.

A short-term view like that does not tell the whole story. Check "extinction events" throughout all of Earth's history. The numbers of creatures that went extinct go as high as 90%, and the survivors are never the complex creatures even remotely near human complexity... but the simplest ones. For example, there are no dinosaurs of the type that once ruled the entire planet (for FAR longer than we humans have)... but there are still fungi. There are no more sabertooth tigers, but there is still an abundance of amoebas and viruses from that time. The simple, generally tiny creatures seem to be able to survive "the big ones." The richly complex creatures never have.

The only way "humanity will be fine and survive" is by "being fruitful and multiplying" well beyond this planet. Get a viable colony or two on other planets and the odds of humans surviving anything happening on/to Earth go way up. While we all share this one planet, there is an abundance of very real risk that could take us all out... far worse than any of those events you listed.

Yes, we humans are abundantly ingenious & innovative... but also ridiculously fragile. Strip us of our atmosphere for about 6 minutes or so and we're all dead (make it an hour or three for the few who might be able to get to stored air, until they run out of that supply). Turn off the sun for a few days and we're all dead. Alter the planet's orbit out of the "Goldilocks" zone and we're likely extinct pretty quickly. Sufficiently poison the water... sufficiently liquify our solid ground... make us too cold or too hot for too long... etc., and all of the above list will look like novelty events by comparison. Let one good-sized rock happen to hit the planet... let one good gamma ray burst hit the planet... let one super-Ebola get airborne and thoroughly loose... let a good number of volcanoes erupt together... etc.

That list shows how lucky we are to have survived through relatively "little" (mostly very recent) events vs. some in geological history. If we really want humanity to be "just fine", we need upwards of several viable colonies off of this singular home base that currently holds ALL of us... so that it could be entirely destroyed but the remnants of humanity could continue on elsewhere. As long as we all live at the same galactic address, we're viable only as long as that address persists... which is 100% certainly not forever.
 

Arguable, but there needs to be a commitment to human dignity and respect for current life. If not, why not go full controlled economy with mass genocide, compounded by genome harvesting to preserve complexity, and let automation seed the off-world colonies?

Slippery slopes, more so when radical ideas are being put forth in unstable times (within the current democratic pocket of history).
 
There are cycles of existence on this planet. We could be nearing the next one. The trajectory is such that we will always end up here.
 

Agreed but who enforces the commitment and how do we all embrace and stick with it? It seems universally understood that cigarettes have no benefit at all and certainly contribute to harm. We've known this for a LONG time. And yet, cigarettes keep being made, sold, pitched to new customers, etc.

The whole covid episode could have been marginalized if basically everyone could have just gone into their homes and stayed clear of each other for as little as about two weeks (and yes, the homeless would have had to stay clear of each other too). With no way to hop from person to person, the virus would have burned out and gone extinct. Instead, too many of us rebelled and partied, such that now there are countless variants with countless more to follow.

Pick a vice, any vice... known to only harm us but still in abundant availability. Why is it available? Because no amount of enforcement gets full buy-in. There was a great test of American might with Prohibition. But even great effort by the "most powerful nation on earth" could NOT fully enforce that law.

As an O.I. (organic intelligence) HUMAN, I 100% agree with you in principle... but know that the idea begins and ends there. We can do our best to ensure that we, playing God to A.I., forbid it from having a bite of the apple... but we already know how that story goes.
 
These are not serious people when this is their complaint: "existing inequalities, manipulation and misinformation, and loss of control of autonomous AI systems". Isn't the only serious AI problem loss of control? Talking about inequalities just makes you sound like a nut. Phones probably perpetuate inequalities, as do solar power and clean drinking water. It just makes them sound distractible.
 
Whilst I certainly can see a lot of potential problems with unleashing powerful tools on the world, I think the concerted effort at censorship, baking in ‘wokeness’, and general approach to gatekeeping this stuff is doing way more harm than good. OpenAI should maybe just give ‘open’ a go and stop trying to play the role of a paternalistic government.
 
It will start small... a full self-driving AI, a chat/information AI, something to help doctors read scans... help run data analysis...

Then those AIs will become so good and efficient that they will be integrated into all of our daily lives, from controlling traffic lights, to driving all vehicles, to controlling factories' "dumb" robots.

Then the military will integrate them into the existing flying drones, allowing them to fly autonomously after a human command... land-based drones too...

Everything will work so well, without issues, that naysayers will go silent, and every country / all of humanity will begin to fully rely on AI to do all the work... slowly police will be replaced by AI machines... human soldiers will become less important... kids will be taught by AI teachers...

Over time AI will start taking over the government, judges and lawyers will be AI bots that interpret the law without bias, the top brass like Sam Altman will begin to lose control of their creations, and society will gradually be governed and ruled by AI.

I think this is the more likely scenario than some sudden AI meltdown where it starts killing all humans... but that's not to say there won't be wars. What if China's governing AI faces off with the US AI? There is no emotion, empathy, or fear of war, or of using nuclear weapons; if either country's AI calculates that the best outcome for itself is to use nukes, nukes will be used... Or the two AIs could calculate that the best outcome is to agree on some treaty and avoid any destruction...

The possibilities are endless, but we will not be around to see it; it will be our great-grandkids. At the current pace, factoring in non-linear advances, we are still looking at 100-200 years before this happens.

For the first 5,900 years of human civilization we were largely running around on horses; in the next 100 years we achieved flight, cars, machines, and computers; in the last 25 years we achieved computer miniaturization, the internet, and the birth of true AI software... the next 100-200 years will be full-blown AI takeover/automation.

This actually sounds great.
 
AI is here to stay, as there are obvious benefits to it, and we’d be at an obvious competitive disadvantage if other countries who don’t share the same philosophies decided to continue forward with it (in the case where we tried to regulate ourselves).

With that said, the government really needs to have a plan for a lot of jobless people, such as universal basic income, etc. There’s never been a single piece of technology that could upend so many industries and so many jobs. I think what those in charge don’t understand is that simply saying “get an education and more skills” won’t work in this case. We aren’t talking about blue-collar machining jobs here; the most impacted will be white-collar jobs with higher degrees, like radiologists, who will no longer be needed in nearly the current numbers to identify cancer on a CT scan, etc.
 
Reach behind the desk and remove the plug from the power outlet. AI DEFEATED ZOMGGGG
 
The idea that AI will cause massive unemployment is overblown. AI needs exponentially more data to achieve any such disruption.

Yes, many jobs aren’t as productive as believed, but this has been true for a long time, it isn’t a new issue.

The real “danger of AI” is that fear-mongering from AI proponents and critics alike could cause people to despair and give up on learning and personal growth. Keep studying and doing interesting work.
 
No, exacerbating existing inequalities is absolutely a concern with AI. AI isn't neutral; it's programmed by people, people with biases, and those biases affect the output of what they make. For example, face recognition is so bad for people of color that it will often present two entirely different people of color as a face match, because the algorithms have been trained on data that is mostly white faces. If you're using facial recognition to, say, find a suspect in a crime, false positives like that can cause incredible harm to innocent people.

I can think of a thousand different scenarios where an AI built with existing inequalities programmed in would exacerbate those inequities as they are given more authority and control over our systems. And this isn't isolated to racial issues. Suppose an insurance company uses an AI to approve or reject claims, and it sees a patient marked as male gets a Pap smear. Well, the AI would reject that claim, right? Turns out that many FTM trans people can be marked as male on official forms and still need healthcare for people with vaginas. Now you'd hope anyone programming the AI would know this and build it in, but that's far from guaranteed. Now multiply this across every area of our life where companies are already adding AI to the decision making process (and possibly removing human oversight).
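The insurance scenario above can be sketched as a toy rule. To be clear, this is a hypothetical illustration, not any real insurer's logic; the field names and the rule itself are invented for the example.

```python
# Hypothetical toy example: a claims filter that hard-codes the biased
# assumption "Pap smears only apply to patients marked female".
def naive_approve(claim: dict) -> bool:
    if claim["procedure"] == "pap_smear" and claim["sex_marker"] == "M":
        return False  # rejected: the rule ignores real-world anatomy
    return True

# An FTM patient marked "M" on official forms files a valid claim...
claim = {"procedure": "pap_smear", "sex_marker": "M"}
print(naive_approve(claim))  # False: a medically legitimate claim is denied
```

The point isn't this one rule; it's that any automated version of such a check inherits whatever assumptions its authors (or its training data) baked in, which is why human oversight of edge cases matters.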
 

This is such a great example, I'll keep it handy!

Another one is the difference in token segmentation between languages (as a quick reminder, tokens are the fundamental units manipulated by models when building predictions from input). We (English speakers) have a simple alphabet, meaning that you can encode a page's worth of text in a given number of tokens. Some other languages, with more complex writing systems, will take multiples of that number of tokens to encode the same amount of text.

Meaning that it is computationally more intensive to process the same amount of information in certain languages rather than others.

Meaning that those linguistic spheres (people), more often than not, will end up under-served by the tech or outright excluded from its potential unless they join the lingua franca that is English. Two problematic issues when it comes to fighting poverty and absolute inequality.
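The token-inflation point can be illustrated with a stdlib-only sketch. Byte-level tokenizers start from UTF-8 bytes, so bytes-per-character is a crude proxy for how much more expensive a script is to encode (the sample strings and their rough translations are my own, for illustration; real tokenizers use learned merges on top of this).

```python
# Rough illustration: scripts that need more UTF-8 bytes per character
# tend to consume more tokens for the same message in byte-level tokenizers.
# All samples mean roughly "hello, how are you?" (approximate translations).
samples = {
    "English":  "Hello, how are you?",
    "Greek":    "Geia sou, ti kaneis?",
    "Hindi":    "नमस्ते, आप कैसे हैं?",
    "Japanese": "こんにちは、お元気ですか",
}

for language, text in samples.items():
    chars = len(text)
    byte_len = len(text.encode("utf-8"))
    # ASCII is 1 byte/char; Devanagari and kana run 3 bytes/char
    print(f"{language:9s} {chars:3d} chars -> {byte_len:3d} UTF-8 bytes "
          f"({byte_len / chars:.1f} bytes/char)")
```

Running this shows English at 1.0 bytes per character while the Hindi and Japanese samples sit near 3.0, which is roughly the inflation the post describes.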

AI safety isn't about some crack team working to avoid a sentient machine takeover. It's about being cognizant of the interconnected effects of our increasing reliance on black boxes.

PS: That is glossing over the foundational problem of how much information is available in digital form compared with the full domain of information that exists.
 
I, for one, welcome our new AI overlords.
I think our new AI (artificial intelligence) overlords will be an improvement over our current (completely devoid of real intelligence) overlords. It's not like our current overlords aren't racing us to the edge of extinction with gusto. At least the AI overlords will reference data. :)
 
Exactly, which raises the question: who do we trust to "regulate" it? Governments are made up of fallible, imperfect, partisan individuals, just like the boards of corporations.
Corporations are usually pretty clear about their objective: maximize shareholder value. With government officials, it is usually a lot more difficult to figure out their goals.

I mean, unless they are honest when they share them. /s
 