
MacRumors

macrumors bot
Original poster
Apr 12, 2001
62,931
29,557


Apple and other top tech companies have joined a new U.S. consortium to support the safe and responsible development and deployment of generative AI, the Commerce Department announced on Thursday (via Bloomberg).

NIST-AI-consortium.jpg
Image credit: NIST

Apple, along with OpenAI, Microsoft, Meta, Google, and Amazon, is among more than 200 members joining the department's AI Safety Institute Consortium (AISIC), Commerce Secretary Gina Raimondo said.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

The group will work with the department's National Institute of Standards and Technology on priority actions outlined in President Biden's AI executive order, "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

Other technology companies, along with civil society groups, academics, and state and local government officials, will also be involved in establishing safety standards for AI regulation.

Generative AI has spurred excitement due to its potential to enhance creativity, improve efficiency, and advance technology. However, fears surrounding generative AI include ethical concerns like deepfakes, the potential impact on jobs, issues around information reliability, and challenges in ensuring privacy and effective regulation.

Apple is said to be spending millions of dollars a day on AI research as training large language models requires a lot of hardware. Apple is on track to spend more than $4 billion on AI servers in 2024, according to one report.

Apple is said to be developing its own generative AI model called "Ajax". Designed to rival the likes of OpenAI's GPT-3 and GPT-4, Ajax operates on 200 billion parameters, suggesting a high level of complexity and capability in language understanding and generation. Internally known as "Apple GPT," Ajax aims to unify machine learning development across Apple, suggesting a broader strategy to integrate AI more deeply into Apple's ecosystem.

Aspects of the model could be incorporated into iOS 18, such as an enhanced version of Siri with ChatGPT-like generative AI functionality. Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the ‌iPhone‌ and iPad later this year.

Article Link: Apple Joins US Commerce Department's AI Safety Institute Consortium
 
  • Like
Reactions: SFjohn

Skyscraperfan

macrumors 6502a
Oct 13, 2021
718
1,978
If tech companies talk about "safety", I read "censorship".

It is okay if they block fake porn of famous people, but even ChatGPT already goes much further. It blocks everything that could be seen as controversial. Computers that are smarter than humans may be creepy for many people, but even more creepy is the idea that tech companies or governments control those super intelligent computers.

Try asking ChatGPT what advantages climate change has. It will refuse to answer that question.
 

VulchR

macrumors 68040
Jun 8, 2009
3,322
14,139
Scotland
There really should be regulation of AI research: cost/benefit analysis, risk assessments, independent ethical peer review, measures to mitigate any harms, protections for people exposed to experimental AI (workers, consumers, etc.), and possibly containment measures akin to those in biology so AI doesn't run amok. Right now AI research seems to be guided by academics, corporate employees, and CEOs, all of whom have a conflict of interest and who, so far as I can tell, are working without formalised ethical frameworks and reviews.
 
  • Like
Reactions: Ajones330

contacos

macrumors 601
Nov 11, 2020
4,454
17,268
Mexico City living in Berlin
Isn't "AI" in todays context really just a more futuristic sounding "buzzword" for "automated tasks" like I use ChatGPT and the "AI" feature in Photoshop every day but I have yet to find something where it does or learn something on its own like so far it can only do what it is being fed?
 

IllegitimateValor

macrumors member
Nov 13, 2023
39
85
Watermarking isn't enough. The prompts used on AI-created or AI-manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with anti-screenshot tech turned on.
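
For illustration only, here's a minimal sketch of the kind of thing I mean, using Python and Pillow to stash a prompt in a PNG's text chunks (the "prompt" key, file names, and prompt string are all made up by me, not any vendor's actual scheme):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical example: record the generation prompt inside the PNG itself.
prompt = "a watercolor painting of a lighthouse at dusk"

img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("prompt", prompt)              # custom key, purely illustrative
meta.add_text("generator", "example-model-v1")
img.save("generated_tagged.png", pnginfo=meta)

# Anyone can read it back later:
tagged = Image.open("generated_tagged.png")
print(tagged.text.get("prompt"))
```

Of course, plain metadata like this is stripped by a screenshot or a resave, which is exactly the laundering loophole I'm worried about; that's why it would need to be encoded into the image itself, not just tacked on.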

Loopholes for laundering AI content to sell as real should be closed as much as possible. That is the greatest danger to our society: to exist in a permanent unreal present where the truth about the past and current events is in the hands of those with the power to manipulate the most believers.

I don’t trust even the companies involved to do right by us even with deep oversight.

We're approaching the threshold of history: a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
 

VulchR

macrumors 68040
Jun 8, 2009
3,322
14,139
Scotland
...

We're approaching the threshold of history: a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
We'll soon approach a point when so much internet content is generated by AI that AI 'hallucinations' might impair our ability to learn the truth about history or current affairs. We could have AI systems learning the hallucinations of other AI systems as they trawl the internet for training data.
 

mdriftmeyer

macrumors 68040
Feb 2, 2004
3,740
1,722
Pacific Northwest
Isn't "AI" in todays context really just a more futuristic sounding "buzzword" for "automated tasks" like I use ChatGPT and the "AI" feature in Photoshop every day but I have yet to find something where it does or learn something on its own like so far it can only do what it is being fed?
It's not Artificial Intelligence, not by a long shot. It's Machine Learning/Finite Automata over massive data sets, producing insights from far larger sets than were possible back in the '90s, and thus it's now useful across hundreds of disciplines.

No. No Singularity. No consciousness of an artificial net mesh, etc.
 
  • Like
Reactions: MRMSFC

coffeemilktea

macrumors 6502a
Nov 25, 2022
634
2,427
Other technology companies, along with civil society groups, academics, and state and local government officials, will also be involved in establishing safety standards for AI regulation.
I look forward to a day when the only way to get uncensored open-source AI models (like the kind you can get today on sites like HuggingFace or CivitAI) is to torrent them on shady sites because the government prevents people from getting them normally... for our own good, apparently. 🤡 /s
 
  • Like
Reactions: gusmula and MRMSFC

lkrupp

macrumors 68000
Jul 24, 2004
1,844
3,713
I look forward to a day when the only way to get uncensored open-source AI models (like the kind you can get today on sites like HuggingFace or CivitAI) is to torrent them on shady sites because the government prevents people from getting them normally... for our own good, apparently. 🤡 /s
Or go back to paper and couriers. 😜
 

Karma*Police

macrumors 68020
Jul 15, 2012
2,487
2,785
If tech companies talk about "safety", I read "censorship".

It is okay if they block fake porn of famous people, but even ChatGPT already goes much further. It blocks everything that could be seen as controversial. Computers that are smarter than humans may be creepy for many people, but even more creepy is the idea that tech companies or governments control those super intelligent computers.

Try asking ChatGPT what advantages climate change has. It will refuse to answer that question.
Which goes to show it’s not true intelligence because it’s not capable of critical thought. But I guess you could say the same for college grads who’ve been “taught” what to think instead of how to think 😂
 

Karma*Police

macrumors 68020
Jul 15, 2012
2,487
2,785
Watermarking isn't enough. The prompts used on AI-created or AI-manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with anti-screenshot tech turned on.

Loopholes for laundering AI content to sell as real should be closed as much as possible. That is the greatest danger to our society: to exist in a permanent unreal present where the truth about the past and current events is in the hands of those with the power to manipulate the most believers.

I don’t trust even the companies involved to do right by us even with deep oversight.

We're approaching the threshold of history: a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
The effects of postmodernism. The scary thing is, it was all intentional, led by New Left figures like Max Horkheimer and Herbert Marcuse of the Frankfurt School, further propagated by the KGB (there's an interview with a defector from the '80s on YouTube, and it's scary how their plan worked to perfection and in the timeframe he outlined), and now by China through TikTok; we know this because a lot of the nonsense on TikTok isn't allowed in China.
 
  • Wow
Reactions: gusmula

G5isAlive

macrumors 68020
Aug 28, 2003
2,324
3,983
To those opposed to the government looking into this: eh, you've got to start somewhere. Or maybe we should ask ChatGPT for the best way to regulate. Good to see Apple finally getting involved.
 
  • Like
Reactions: PantherKang

endemize

macrumors member
Mar 8, 2022
69
61
Isn't "AI" in todays context really just a more futuristic sounding "buzzword" for "automated tasks" like I use ChatGPT and the "AI" feature in Photoshop every day but I have yet to find something where it does or learn something on its own like so far it can only do what it is being fed?
That's exactly what it is. It takes an input to give an output. It's a tool for us right now. That's it. It is amazing and I use it every day.
 

jimbobb24

macrumors 68040
Jun 6, 2005
3,330
5,349
Isn't "AI" in todays context really just a more futuristic sounding "buzzword" for "automated tasks" like I use ChatGPT and the "AI" feature in Photoshop every day but I have yet to find something where it does or learn something on its own like so far it can only do what it is being fed?
Yes, AI today is simply rebranding. However, in the new approaches that have emerged there is a glimmer of what AI could become. But current tools are not thinking; ChatGPT doesn't understand anything. That doesn't mean it isn't insanely useful and going to change everything and every job.
 

jimbobb24

macrumors 68040
Jun 6, 2005
3,330
5,349
Watermarking isn't enough. The prompts used on AI-created or AI-manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with anti-screenshot tech turned on.

Loopholes for laundering AI content to sell as real should be closed as much as possible. That is the greatest danger to our society: to exist in a permanent unreal present where the truth about the past and current events is in the hands of those with the power to manipulate the most believers.
Most of this is simply impossible to implement in practice. We can pass all the laws we want, but in practice they won't stop this. Stable Diffusion already watermarks, but it's in the code, and you can strip the watermark or train your AI not to apply it in the first place. We want to be safe and wise, but some things are not practically possible to stop. It's like inventing the camera and then telling people they are only allowed to photograph what the government says. Only a totalitarian government could do it.
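
For context, my understanding is that this is roughly what the reference Stable Diffusion scripts do with the invisible-watermark package (a sketch from memory, so treat the exact payload and calls as approximate): the mark is just an optional post-processing step.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed a fixed byte string into the image via a DWT/DCT transform.
encoder = WatermarkEncoder()
encoder.set_watermark('bytes', 'StableDiffusionV1'.encode('utf-8'))

bgr = cv2.imread('generated.png')          # OpenCV reads images as BGR
marked = encoder.encode(bgr, 'dwtDct')     # simply not calling this = no watermark
cv2.imwrite('generated_marked.png', marked)

# Decoding side (payload length is given in bits):
decoder = WatermarkDecoder('bytes', len('StableDiffusionV1') * 8)
payload = decoder.decode(cv2.imread('generated_marked.png'), 'dwtDct')
print(payload.decode('utf-8', errors='replace'))
```

And since it's applied in the script after generation, anyone who controls the code controls whether it's there at all, which is my point about enforcement.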
 