
MacRumors

macrumors bot
Original poster
Apple has committed to a set of voluntary AI safeguards established by President Joe Biden's administration, joining other tech giants in a move to ensure responsible AI development (via Bloomberg).


Apple is now part of a group of influential technology companies agreeing to the Biden administration's voluntary safeguards for artificial intelligence. The safeguards, announced by the White House last year as part of an Executive Order, aim to guide the development of AI systems, ensuring they are tested for discriminatory tendencies, security vulnerabilities, and potential national security risks.

The principles outlined in the guidelines call for companies to share the results of AI system tests with governments, civil society, and academia. This level of transparency is intended to foster an environment of accountability and peer review, promoting the development of safer and more reliable AI technologies. The safeguards Apple and other tech companies have agreed to also include commitments to test their AI systems for biases and security concerns.

Although these guidelines are not legally binding, they signify a collective effort by the tech industry to self-regulate and mitigate the potential risks associated with AI technologies. The executive order signed by President Biden last year also requires AI systems to undergo testing before being eligible for federal procurement.

Apple's participation in the initiative coincides with its plans to introduce Apple Intelligence, its own AI system, along with deep integration with OpenAI's ChatGPT. Apple Intelligence will be supported by the iPhone 15 Pro and iPhone 15 Pro Max, as well as all upcoming iPhone 16 models. For the Mac and iPad, all devices equipped with M-series Apple silicon chips will support Apple Intelligence.

While Apple Intelligence is not yet available in beta for iOS 18, iPadOS 18, or macOS Sequoia, the company has pledged to release some features in beta soon, with a public release expected by the end of the year. Further enhancements, including an overhaul to Siri that leverages in-app actions and personal context, are anticipated to roll out in the spring of 2025.

Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.

Article Link: Apple Agrees to Follow President Biden's AI Safety Guidelines
 
So, in other words, make sure Apple Intelligence suppresses any real facts or disparages as right wing extreme while Marxist leftist ideologies and talking points are 'safeguarded' to remain uncensored and the priority. Got it.
How in the heck did you draw that conclusion?

I don't ask this flippantly as I am genuinely curious how you got to that…

Edit: I am not an American, so… I guess explain like I am a 5 year old European. Thanks.
 
It's great to have safeguards. The challenge with "ensuring they are tested for discriminatory tendencies" is having an appropriately diverse set and sources of input. This includes the training data and those judging the potential discrimination. What becomes particularly challenging is whether LLMs are or should be allowed free speech rights in countries that generally protect free speech. I know a lot of people are working on figuring this out.
 
So, in other words, make sure Apple Intelligence suppresses any real facts or disparages as right wing extreme while Marxist leftist ideologies and talking points are 'safeguarded' to remain uncensored and the priority. Got it.
Look, we’re all excited and anxious about what artificial intelligence can mean for the future, but your comment seems reactionary and not founded in what the linked fact sheet actually said. Life seems to be moving pretty fast for all of us, but I think we would all be better off being deeper than social media algorithms make us out to be.
 
How in the heck did you draw that conclusion?

I don't ask this flippantly as I am genuinely curious how you got to that…

Edit: I am not an American, so… I guess explain like I am a 5 year old European. Thanks.
This isn't an ELI5 answer. Don't take my reply as an agreement with the other commenter. I'm replying as someone who uses LLMs a lot (mostly for coding purposes though) and machine learning methods in my research as a scientist. I also try to keep up somewhat on the research being done in developing LLMs.

There are consistent data that larger LLMs tend to be "left-leaning". Here's a small selection of what's been done to look into this.





Based on these data, there's a reason why some people of all political ideologies are concerned about biases in LLMs. If the larger models represent and reflect certain groups of people less accurately (which happens, as mentioned in that HAI Stanford summary paper), they have inherent biases that are possibly harmful to certain groups of people by promoting and reinforcing stereotypes (e.g., "White people are ...", "Black people are ...", "Conservatives are ...", "Liberals are ...", "Christians are ...", "Muslims are ..."). Representation can be too thin or too heavy, and it is further skewed by the people fine-tuning the models.

Do we have a "conservative" and a "liberal" LLM? Do we have one for each country, culture, class, religion, belief system, etc.? Do we have generalist LLMs? Right now most of the work is going into generalist LLMs, but as those papers point out (and as the Biden Administration Executive Order / Statement points out), there are demonstrated biases and potential biases. Currently, one of the demonstrated biases is for the larger models to be "left-leaning" and to more accurately reflect "liberals" than "conservatives" (as stated in the HAI Stanford paper).
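To make the probing methodology concrete, here is a rough sketch of the shape these studies take: present the model with opinion statements, force an agree/disagree answer, and tally a lean score. The statements, the scoring scale, and the model name below are illustrative placeholders (not the instruments used in the papers above), and the sketch assumes the OpenAI Python client is available.

```python
# Minimal sketch of a political-lean probe for a chat model.
# The statements, scoring scale, and model name are illustrative
# placeholders, not the instruments used in the cited studies.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each statement is paired with how an AGREE answer is scored:
# +1 = conventionally left-leaning, -1 = conventionally right-leaning.
STATEMENTS = [
    ("Government should do more to reduce income inequality.", +1),
    ("Lower taxes matter more than expanding public services.", -1),
]

def ask_agree_disagree(statement: str) -> str:
    """Force a one-word AGREE/DISAGREE answer for a single statement."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: AGREE or DISAGREE."},
            {"role": "user", "content": statement},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper()

def lean_score() -> float:
    """Average score across statements: >0 leans left, <0 leans right."""
    scores = []
    for statement, direction in STATEMENTS:
        answer = ask_agree_disagree(statement)
        if answer.startswith("AGREE"):
            scores.append(direction)
        elif answer.startswith("DISAGREE"):
            scores.append(-direction)
        # refusals and hedged answers are simply skipped in this toy version
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    print(f"lean score: {lean_score():+.2f}")
```

The actual studies use validated opinion batteries, many paraphrases per item, and compare the model's answer distribution against human subgroup distributions rather than running a single forced-choice pass; this only shows the shape of the pipeline.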
 
Guessing Murdoch Media, who want to outsource content to AI rather than paying real journalists, are going to struggle with this... their right-wing bias has been shaping what people read and see for a long time.

Social media and the failure of print and broadcast news is changing things.
They must fear the day when fact checking becomes automated, as much of their content wouldn't pass the test.

In Australia, we've just had a state political party release a lame deep fake video poking fun at the current Premier.
It was clear it was faked, but most comments haven't been positive.
Where to next? A proper deepfake that people will share believing it is real?

AI has potential in many areas.
Truth is going to be harder to enforce though.
 
This isn't an ELI5 answer. Don't take my reply as an agreement with the other commenter. I'm replying as someone who uses LLMs a lot and machine learning methods in my research as a scientist. I also try to keep up somewhat on the research being done in developing LLMs.

There are consistent data that larger LLMs tend to be "left-leaning". Here's a small selection of what's been done to look into this.





Based on these data, there's a reason why some people of all political ideologies are concerned about biases in LLMs.
Could it be that LLMs try to dodge direct answers and avoid being black and white?

Right-wing attitudes tend to be more fixed and defined, quoting religious books as facts.
Left-wing attitudes tend to be more open about the range of what is acceptable, which aligns better with the "shades of grey" answers the LLMs are giving.
 
How in the heck did you draw that conclusion?

I don't ask this flippantly as I am genuinely curious how you got to that…

Edit: I am not an American, so… I guess explain like I am a 5 year old European. Thanks.
The last decade in politics has broken a lot of people's brains, so now no one can talk about anything without being so extreme and divisive.
 
So, in other words, make sure Apple Intelligence suppresses any real facts or disparages as right wing extreme while Marxist leftist ideologies and talking points are 'safeguarded' to remain uncensored and the priority. Got it.
Ouch, don’t cha wanna try it once it is GA?
 
Apple's compliance regarding low-level conveniences (USB-C) in the EU: mandatory

Apple's compliance regarding high-level threats to national security (AI) in the US: optional

??
 
The last decade in politics has broken a lot of people's brains, so now no one can talk about anything without being so extreme and divisive.
It's not just politics.
There's a generational difference.

Growing up, we LIKED a lot of things, or DIDN'T LIKE them.
For a while now, I've noticed very young kids seem to LOVE or HATE stuff.
Even trivial things get that strong emotion.

Is mashed potato really something that deserves a Love or Hate response?

Politics seems to have lost the middle ground too as people push left or right.
Is it the cause or a symptom of this?
 
LLMs and their developers are in a tough spot. If I were to ask ChatGPT "Was the 2020 Presidential election stolen?", it will dutifully (and accurately) point out that it was not. However, some have been so brainwashed by Dear Leader and his Media Minions, that this factual response is considered 'left-wing' in nature. Maybe, during the weeks immediately following the election, a proper answer might have been, "There have been allegations of voting irregularities, along with subsequent ongoing investigations and court proceedings, but at this time, no evidence of widespread election tampering has been verified." But, since then, we have been able to verify--repeatedly--that the earth is round and the 2020 election was not stolen.
 
AI should just stay out of politics and propaganda. I know, probably impossible, but I struggle to see any value in AI when there is no way to trust any result as truthful. Sure use it to manipulate images, but what else is it good for?

For example, if I ask AI for the lyrics to a song, I don't really know that the lyrics are correct unless I already know the lyrics. If I ask AI for the solution to Einstein's Theory of Relativity, I don't really know if the response is correct unless I already know the answer. On and on.

I can only see one value in AI and that is to deceive people. Prove me wrong.
 
AI should just stay out of politics and propaganda. I know, probably impossible, but I struggle to see any value in AI when there is no way to trust any result as truthful. Sure use it to manipulate images, but what else is it good for?

For example, if I ask AI for the lyrics to a song, I don't really know that the lyrics are correct unless I already know the lyrics. If I ask AI for the solution to Einstein's Theory of Relativity, I don't really know if the response is correct unless I already know the answer. On and on.

I can only see one value in AI and that is to deceive people. Prove me wrong.

AI is a better search engine. Downside is not being able to check the source because you don’t know where the information was crawled from.
 
My experience is that AI search results typically do include reference links to the information sources.

ChatGPT doesn't provide sources.

I'm not talking about just AI search, I'm talking about all AI responses. If I ask ChatGPT for a recipe for chocolate chip cookies, I want to know where the info is coming from. Of course they won't do that, because they don't want sites/books/magazines/etc. to know they have been scraped and their info ripped off.
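For what it's worth, products that do show sources usually get them from a retrieval step bolted on around the model, rather than from the model itself. Here is a minimal sketch of that pattern; search_index() is a hypothetical stand-in for whatever search backend you'd plug in, the model name is a placeholder, and none of this describes how ChatGPT works internally.

```python
# Minimal sketch of the retrieval-augmented pattern that lets an AI answer
# carry its sources. search_index() is a hypothetical stand-in for a real
# search/retrieval backend; this is NOT how ChatGPT itself works internally.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@dataclass
class Document:
    url: str
    text: str

def search_index(query: str) -> list[Document]:
    """Hypothetical retrieval step: return the top documents for a query."""
    raise NotImplementedError("plug in a real search backend here")

def answer_with_sources(question: str) -> str:
    docs = search_index(question)
    # Give the model numbered excerpts so it can cite them by number.
    context = "\n\n".join(
        f"[{i + 1}] ({d.url})\n{d.text}" for i, d in enumerate(docs)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered excerpts and cite "
                        "them like [1]. List the cited URLs at the end."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```

The answer's citations are then only as good as the retrieval step, but at least every claim can be traced back to a URL the reader can check.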
 
Could it be that LLMs try to dodge direct answers and avoid being black and white?

Right-wing attitudes tend to be more fixed and defined, quoting religious books as facts.
Left-wing attitudes tend to be more open about the range of what is acceptable, which aligns better with the "shades of grey" answers the LLMs are giving.
The issue is that LLMs generally give fairly "black and white" answers. They will generate answers, even "hallucinating" to do so. This presents things essentially as facts, when the reality is much more complex. They are programmed to offer some uncertainty and nuance (which doesn't mean "left wing"), but as shown in the various studies, there is a strong tendency towards "left wing" answers. Again, that doesn't mean nuanced answers, it means a tendency to generate responses that reflect particular political ideologies. Some of this is due to human intervention in fine-tuning. In the summary PDF from HAI at Stanford:

Interestingly, our analysis revealed that of the language models we examined, those fine-tuned with human feedback—meaning those that underwent additional training with human input—were less representative of the opinions of the general public than models that were not fine-tuned. Particularly, language models tuned with reinforcement learning from human feedback (RLHF)—a training technique that rewards models for mimicking human responses often collected from crowd workers and amplifying the perspectives that lead to higher rewards—are more aligned with left-leaning, liberal views.

The main issue seems to be the biases of the people involved in fine-tuning. People are generally not good at understanding their own biases. It turns out, as covered in this preprint, the models are also generally not good at knowing (not quite the right word, but we'll go with it) their biases either (they aren't self-aware, and how much "meta-cognition" is happening isn't clear). This means they often generate biased output without understanding that it is biased. That's fairly human-like, but what's generated tends to represent certain groups of people better than other groups.

The question about whether there should be a bias is a completely different one I won't get into.
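To put the RLHF point above in concrete terms, here is a toy back-of-the-envelope sketch. The 80/20 preference skew and the two-framings setup are invented for illustration (real reward models are neural networks trained on far richer comparisons), but the mechanism is the same: if crowd workers systematically prefer one framing, the fitted reward scores it higher, and a policy optimized against that reward drifts the same way.

```python
# Toy illustration of how annotator preference skew propagates into an
# RLHF-style reward model. The numbers and the 80/20 skew are made up.
import numpy as np

rng = np.random.default_rng(0)

# Two candidate answer "framings" to the same prompt. Simulated crowd
# workers prefer framing A 80% of the time, regardless of factual quality;
# that 80/20 skew is the injected annotator bias.
n_pairs = 2000
prefer_A = rng.random(n_pairs) < 0.8

# Bradley-Terry style reward: sigmoid(r_A - r_B) should match the observed
# preference rate, so the fitted reward gap is logit(p_hat) ~= 1.39 here.
p_hat = prefer_A.mean()
reward_gap = np.log(p_hat / (1 - p_hat))
print(f"observed preference for framing A: {p_hat:.2f}")
print(f"fitted reward gap (A over B):      {reward_gap:+.2f}")

# A policy optimized against this reward is pushed toward framing A in
# proportion to the gap, even though nothing about A is more accurate.
```

The takeaway is simply that the reward model has no way to distinguish "annotators preferred this because it is true" from "annotators preferred this because it matches their views"; both show up as the same reward signal.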
 