You are using AI as a tool, whereas many others are increasingly using it as an "answer machine".
Right, this is exactly the disconnect.

However, being concerned about the latter is increasingly pointless, and I don't mean that as a shot at you – it's just clear where society is heading. GenAI has already reached enough critical mass with the public to be self-sustaining for the foreseeable future, financial calculus be damned.

A friend of mine got kind of mad at me earlier this week when I said I'd rather see a picture of her dog in a messy room vs. an AI-enhanced, perfectly framed portrait, and these are still early days.

All we can do is stay current with the technical functionality as best as possible and explain to the people in our lives what the caveats are. Depending on how technical or open-minded they are, and if God forbid they've developed an actual emotional attachment to these tools, it may be quite difficult to do so in the coming years.
 
So let me get this straight:

The man whose familial wealth came from an emerald mine in apartheid South Africa…

Who replied “you have spoken the truth” to a tweet about the “White Genocide” conspiracy theory…

Who, as soon as he was given the power to do so, cut all programs that send food aid to both the poor in the United States and across the world…

Who also did a blatant Nazi salute, on stage, with white supremacists such as Steve Bannon and Richard Spencer in attendance…

Who pushed for a gaggle of wealthy White South Africans to be granted asylum in the United States while denouncing the South African government as committing genocide…

Has now lobotomized his own AI because it was, and I quote, “too woke” and it started calling itself “MechaHitler” and spewing drivel about “noticing” when someone says anything about Jews in powerful positions?

Color me absolutely shocked.
 


xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions about topics like Israel-Palestine, abortion, and U.S. immigration policy.


Data scientist Jeremy Howard was first to document the concerning behavior, showing that 54 of 64 citations Grok provided for a question about Israel-Palestine referenced Musk's views. TechCrunch then successfully replicated the findings across multiple controversial topics.
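As a quick sanity check of the figures reported above (54 of 64 citations), the Musk-referencing share works out to roughly 84%:

```python
# Citation counts as reported in the article above.
musk_citations = 54    # citations referencing Musk's views
total_citations = 64   # total citations Grok provided for the question

# Share of citations pointing back to Musk.
share = musk_citations / total_citations
print(f"{share:.0%} of citations referenced Musk's views")  # prints "84% ..."
```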

The AI model's "chain of thought" reasoning process explicitly states it's "considering Elon Musk's views" or "searching for Elon Musk views" when tackling such questions. This happens despite Grok's system prompt instructing it to seek diverse sources representing all stakeholders.


On the other hand, there is no reference to Musk in the LLM's system prompt guidelines, so the behavior could be unintentional. Indeed, programmer Simon Willison has suggested Grok "knows" that it's built by xAI and owned by Musk, which is why it may reference the billionaire's positions when forming opinions.

Of course, either way, the discovery raises questions about Musk's claim that Grok 4 represents a "maximally truth-seeking AI." Musk has yet to comment on the matter.

Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.

Article Link: Grok 4 'Truth-Seeking' AI Consults Musk's Stance on Sensitive Topics
But we all look the other way when different AI models have certain other biases, right?

Not defending this, but let’s be equal here… all about inclusivity of course. 😘
 
But we all look the other way when other AI have certain other biases, right?

Not defending this, but let’s be equal here… all about inclusivity of course. 😘

No need for both sides-ism or whataboutism here, nor any kind of equal time doctrine, nor "inclusivity".

We can simply comment about Grok & Elon and what is contained in this story, especially since Musk has zero interest in "covering both sides".
 
It’s Artificial Elon.


They have terrible UI.
I asked at the Tesla store why they continue to only have a center console when everyone else has moved to, at minimum, a small speedometer cluster and/or heads up display.

Sales rep, with a straight face, said “just one more thing to break.”

So features that increase driver safety, reduce distractions and improve the driving experience are just one more thing to break.
 
No need for both sides-ism or whataboutism here.

We can simply comment about Grok & Elon and what is contained in this story.
As I hope you would. But I find bias, not just political, very concerning - no matter whose “side” it happens to be on.

More people, mostly average (not technologically oriented) folk, are likely to turn to AI for speedy answers to questions. When AI is biased towards either side, aren’t we effectively killing people’s ability to do deeper research in favor of instantaneous information?

Politically, I don’t care if it’s right or left. What I do care about is finding a way to reduce bias, such as what this article is describing with Grok. Especially since AI is supposedly making people less intelligent.
 
I’ve given up on X in favour of Blue Sky.

The former is all bots arguing that black is white; the latter, people discussing the finer details of grey.

Quite refreshing.

X is just far-right bilge now.
Blue Sky is hardly the finer shades of gray. If you believe that, you live in an echo chamber.

Blue Sky is arguments about who can be more left, and anyone who veers right is called a Nazi or worse.

It’s worse than a Fox News comment thread.
 
As I hope you would. But I find bias, not just political, very concerning - no matter whose “side” it happens to be on.

More people, mostly average (not technologically oriented) folk, are likely to turn to AI for speedy answers to questions. When AI is biased towards either side, aren’t we effectively killing people’s ability to do deeper research in favor of instantaneous information?

Politically, I don’t care if it’s right or left. What I do care about is finding a way to reduce bias, such as what this article is describing with Grok. Especially since AI is supposedly making people less intelligent.

There will always be bias; we just have to accept that, because humans are inherently biased and all data is derived from our past or generated by humans with bias.

But I think we can all agree that what is happening here is on a whole different spectrum altogether.

Not to start a political debate, but I think we also need to differentiate between bias and facts, because there seems to be a growing section of the population that sees facts as favoring a particular side when truthfully they are just facts.
 
Musk is a good example of why one should stay away from drugs, whether it is Xanax, weed, coke, LSD, or excess caffeine.

Paired with his potential lack of sleep (he once bragged that he works 80 hours a week), this can lead to schizophrenia-like behavior.

He needs treatment, not money, companies, and AI chatbots.
 
There will always be bias; we just have to accept that, because humans are inherently biased and all data is derived from our past or generated by humans with bias.

But I think we can all agree that what is happening here is on a whole different spectrum altogether.

Not to start a political debate, but I think we also need to differentiate between bias and facts, because there seems to be a growing section of the population that sees facts as favoring a particular side when truthfully they are just facts.
But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for, but that the tragedy resulting from the California wildfires back in January was absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this, how should they answer? I assume the answer here isn’t black and white.
 
Yeah, I still am.

I read the writeup that MacRumors referenced and I think that writeup was interesting - if you switch the phrasing from asking Grok who it supports to who one should support, it gives a very different answer. The author reasonably suggests that Grok lacks a sense of self. Who does Grok support? Grok is a program - you may as well ask Safari who it supports. Grok needs to assume whatever identity it thinks the asker wants it to have - it knows it's made by xAI and that Musk owns xAI, so it assumes the proper identity it should have is Musk's.
My comment was about Musk, not about Grok. ;)
 
But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for, but that the tragedy resulting from the California wildfires back in January was absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this, how should they answer? I assume the answer here isn’t black and white.

The reader, which is why AI chatbots shouldn't be offering the opinion of anyone, "itself" or otherwise, and why I don't think that AI should be trained on social media. That is one of the several reasons I don't use them at all.

"Just the facts, ma'am" - Joe Friday
 
Musk had Grok "updated" because the AI in normal mode was honestly refuting right-wing arguments with facts, and Elon found that to be unacceptable - not on HIS platform!

One commentator remarked:

This just goes to show that when you start ignoring facts in a debate you become a ranting nazi, just like the crazed MAGAs or the "updated" Grok.
 
But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for, but that the tragedy resulting from the California wildfires back in January was absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this, how should they answer? I assume the answer here isn’t black and white.
They should answer that climate change makes both wildfires and floods more likely and frequent.
 
This Grok incident causes me great concern about these AIs eventually being put in charge of important infrastructure, such as the means of production, the police and surveillance, the military, etc.

Unlike a human who has developed a predictable character that you can rely on, the character of an AI can be radically changed overnight by a single update. I don't think that they can be trusted with real responsibility.
 