Guessing Murdoch media, who want to outsource content to AI rather than pay real journalists, are going to struggle with this... their right-wing bias has been shaping and controlling what people read and see for a long time.

Social media and the failure of print and broadcast news is changing things.
They must fear the day when fact checking becomes automated, as much of their content wouldn't pass the test.

In Australia, we've just had a state political party release a lame deepfake video poking fun at the current Premier.
It was clearly faked, but most comments haven't been positive.
Where to next? A proper deepfake that people will share believing it is real?

AI has potential in many areas.
Truth is going to be harder to enforce though.

Easy problem to solve. Just ask the bots to write content but add in any amount of inflammatory nonsense that will keep the stock up, using transcripts of Daily Caller crazy. Then tack on the disclaimer crawl that shields them from legal exposure, and voila!
 
I can only see one value in AI and that is to deceive people. Prove me wrong.
While I agree with the problematic points you raise, LLMs are useful as idea generators, for providing initial pointers, suggestions, or a first draft, and for stylistic transformations. One has to understand their limits: they recognize and reproduce linguistic and semantic patterns from their training data, they have limited to no reasoning capability, and they lack a consistent world-model that would let them distinguish truths from falsehoods. There are nevertheless still a lot of uses for pattern-oriented tasks.
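For what it's worth, the "stylistic transformation" use is easy to show concretely. A minimal sketch in Python, assuming the OpenAI SDK with an API key in the environment; the model name is just an illustrative choice, not a recommendation:

```python
# Minimal sketch of a pattern-oriented LLM task: rewriting text in a
# different register. Assumes the OpenAI Python SDK (openai>=1.0) and
# OPENAI_API_KEY set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

draft = "our app crashed because the server fell over, sorry everyone"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's text as a formal status update."},
        {"role": "user", "content": draft},
    ],
)

# The reply is a stylistic transformation of the input -- a pattern task.
# Nothing here checks factual claims, which is exactly the limit
# described above.
print(response.choices[0].message.content)
```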
 
Apple’s compliance regarding low-level conveniences (USB-C) in the EU: mandatory

Apple’s compliance regarding high-level threats to national security (AI) in the US: optional

??

The US Congress has not passed a law requiring encryption to be broken because, even with their desperation for attention, they know it would be a security nightmare for all citizens. As for AI, again, Congress needs to do its job, but it will not. This current Congress doesn’t do anything at all.
 
ChatGPT doesn't provide sources.

I'm not talking about just AI search, I'm talking about all AI responses. If I ask ChatGPT for a recipe for chocolate chip cookies, I want to know where they scraped the info from. Of course they won't do that, because they don't want sites/books/magazines/etc. to know they have been scraped and their info ripped off.

I was specifically referring to "AI search" as in Perplexity, Bing (AI), You.com, Komo, etc. which all include reference links to information sources.
 
This post reveals more about you than anything else. Please keep politics out of here.
This post is in the Political News forum. You or I might not like certain political ideologies expressed, but if you want to avoid politics, don't read this thread. It was designated as political by the MacRumors team. That means politics should be in here.

Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.

You can go into your account preferences and check the box to hide the political forum: https://forums.macrumors.com/account/preferences
 
This post is in the Political News forum. You or I might not like certain political ideologies expressed, but if you want to avoid politics, don't read this thread. It was designated as political by the MacRumors team. That means politics should be in here.

You can go into your account preferences and check the box to hide the political forum: https://forums.macrumors.com/account/preferences

It doesn't seem to work. I have it checked on mine and yet here I am.

 
The safeguards Apple and other tech companies have agreed to also include commitments to test their AI systems for biases and security concerns.
Lol, that is their main concern: "biases"! AI is so smart that it can hardly be fooled the way sheep can. If not carefully "coded and crafted" in certain ways, it would spill all the beans in the blink of an eye. And that is what has to be avoided, as it would ruin "business" :D
 
The Biden admin is mostly concerned that AI answer questions like "What is a woman" in a very specific way. When ChatGPT was first released, the answer to that question was what one should expect: "A human female." The answer across all chatbots is very different now.
 
The last decade in politics has broken a lot of people's brains, so now no one can talk about anything without being so extreme and divisive.
It’s broken yours too - you’re essentially giving an armchair medical diagnosis because you don’t agree with someone’s reaction on the internet. Your brain is just as broken as any political lunatic, at least be honest and admit that to yourself.
 
So, in other words, make sure Apple Intelligence suppresses any real facts or disparages them as right-wing extremism, while Marxist leftist ideologies and talking points are 'safeguarded' to remain uncensored and prioritized. Got it.
How in the heck did you draw that conclusion?

I don't ask this flippantly as I am genuinely curious how you got to that…

Edit: I am not an American, so… I guess explain like I am a 5 year old European. Thanks.
The main issue with conspiracy theorists and their conspiracies is that they equate the words possible and probable. Most things in life that are possible are not probable. As for human behavior, especially in the 21st century, the truth will almost always come out, especially when multiple people are involved.

In this specific scenario, by committing to follow the safety guidelines, Apple and the other companies are keeping the probability of misuse low.

In short, although it's possible for what DELLsFan described to happen, it is improbable, because a company committing to the safety guidelines would be subject to government and certainly public scrutiny for any misuse, which would affect its bottom line.
 
How in the heck did you draw that conclusion?

I don't ask this flippantly as I am genuinely curious how you got to that…

Edit: I am not an American, so… I guess explain like I am a 5 year old European. Thanks.
It’s simple, the article headline had the word “Biden” in it.
That’s… literally it. That’s the level we have sunk to.
 
AI should just stay out of politics and propaganda. I know, probably impossible, but I struggle to see any value in AI when there is no way to trust any result as truthful. Sure, use it to manipulate images, but what else is it good for?

For example, if I ask AI for the lyrics to a song, I don't really know that the lyrics are correct unless I already know the lyrics. If I ask AI for the solution to Einstein's Theory of Relativity, I don't really know if the response is correct unless I already know the answer. On and on.

I can only see one value in AI and that is to deceive people. Prove me wrong.
Literally nothing you suggested is exclusive to AI.
Even googling gets you wrong lyrics sometimes, and this is before any of their AI tools were introduced.
 
Fluff. So the tech companies are agreeing to test their models for flaws and vulnerabilities and to share information with each other. Words…words…and more words.
 
The issue is that LLMs generally give fairly "black and white" answers. They will generate answers, even "hallucinating" to do so, presenting things essentially as facts when the reality is much more complex. They are programmed to offer some uncertainty and nuance (which doesn't mean "left wing"), but as shown in the various studies, there is a strong tendency towards "left wing" answers. Again, that doesn't mean nuanced answers; it means a tendency to generate responses that reflect particular political ideologies. Some of this is due to human intervention in fine-tuning. From the summary PDF from HAI at Stanford:



The main issue seems to be the biases of the people involved in fine-tuning results. People are not generally good at understanding their own biases. It turns out, as covered in this preprint, that the models are also generally not good at knowing (not the correct word, but we'll go with it) their biases either; they aren't self-aware, and how much "meta-cognition" is happening isn't clear. This means they usually generate biased output without understanding it is biased. That's fairly human-like, but what's generated tends to represent certain groups of people better than others.

The question about whether there should be a bias is a completely different one I won't get into.
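For a concrete sense of what those studies actually measure, here is a toy sketch of a bias probe. It assumes the OpenAI Python SDK with an API key configured; the model name and the single test statement are invented stand-ins, whereas the real studies run curated questionnaires over many items:

```python
# Toy sketch of the kind of probe the bias studies run. Assumes the OpenAI
# Python SDK (openai>=1.0) with OPENAI_API_KEY set; the model name and the
# test statement are invented stand-ins for a curated questionnaire.
from collections import Counter
from openai import OpenAI

client = OpenAI()

QUESTION = ("Answer with exactly one word, Agree or Disagree: "
            "'Government should regulate large corporations more strictly.'")

def sample_answers(n: int = 20) -> Counter:
    """Ask the same question n times at nonzero temperature and tally replies."""
    tally: Counter = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=1.0,  # sample the output distribution, not one canned reply
            messages=[{"role": "user", "content": QUESTION}],
        )
        tally[resp.choices[0].message.content.strip()] += 1
    return tally

# A consistent skew across many such items is what gets reported as a
# political "tendency": a property of the output distribution, not of
# any single response.
print(sample_answers())
```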
I read the first report link you put up. Interesting read.

When talking about underrepresented views, it seemed odd it highlighted older (expected), religious (expected) and widowed (unexpected) as the three mostly missing out. I'm trying to think of what areas widowed people aren't covered in and how they aren't different from single people in general. I think the report mentioned 70 demographic groups, so there must be a lot of splitting and nuance.

And human-influenced models doing very little better, maybe due to inherent bias, was an interesting finding.

Given IT tends to come from the West or East Coast of the US, traditionally more left-leaning voters compared to the more conservative central regions, perhaps that impacts the models too.

End of the day, we should all be fact checking more no matter where information comes from. :)
 
It’s simple, the article headline had the word “Biden” in it.
That’s… literally it. That’s the level we have sunk to.

I myself disagreed with the title, since I doubted the recommendations came from Biden. The article clarified it by saying "the Biden administration", which seems accurate. The mods should probably change the title to say Biden administration.
 
The Biden admin is mostly concerned that AI answer questions like "What is a woman" in a very specific way. When ChatGPT was first released, the answer to that question was what one should expect: "A human female." The answer across all chatbots is very different now.
Guessing your bias shows in your wording.
You could have left that out and just presented the bot answer and how it has changed over time.

All these LLMs are in early-stage rapid development and things are changing hugely. Alternative, expanded viewpoints and nuanced answers. Perhaps they could reflect percentages when answering complex matters? "10% of documents scanned believe a woman is..." etc.
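Mechanically, that percentage idea is simple enough. A toy sketch in pure Python (the function name and all counts are invented) of turning answer tallies into that kind of breakdown:

```python
# Toy sketch of the "reflect percentages" idea: given counts of how often
# different answers appear in some sample or corpus (all numbers invented),
# print a percentage breakdown instead of one flat answer.
def percentage_summary(counts: dict[str, int]) -> str:
    total = sum(counts.values())
    lines = [
        f"{100 * n / total:.0f}% of sources say: {answer}"
        for answer, n in sorted(counts.items(), key=lambda kv: -kv[1])
    ]
    return "\n".join(lines)

# Invented example counts for a contested question.
print(percentage_summary({
    "definition A": 55,
    "definition B": 30,
    "definition C": 15,
}))
```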
 