I use ChatGPT frequently. I use Copilot at work frequently. I’ve never once thought to ask them inherently political questions.
You are using AI as a tool, whereas many others are increasingly using it as an "answer machine".
I use ChatGPT frequently. I use Copilot at work frequently. I’ve never once thought to ask them inherently political questions.
Right, this is exactly the disconnect.You are using AI as a tool, whereas many others are increasingly using it as an "answer machine".
Oh my god - he's The Lawnmower Man, Grok is just his digital duplicate.It’s Artificial Elon.
But we all look the other way when different AI models have certain other biases, right?
xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions about topics like Israel-Palestine, abortion, and U.S. immigration policy.
![]()
Data scientist Jeremy Howard was first to document the concerning behavior, showing that 54 of 64 citations Grok provided for a question about Israel-Palestine referenced Musk's views. TechCrunch then successfully replicated the findings across multiple controversial topics.
The AI model's "chain of thought" reasoning process explicitly states it's "considering Elon Musk's views" or "searching for Elon Musk views" when tackling such questions. This happens despite Grok's system prompt instructing it to seek diverse sources representing all stakeholders.
On the other hand, there is no reference to Musk in the LLM's system prompt guidelines, therefore the behavior could be unintentional. Indeed, programmer Simon Willison has suggested Grok "knows" that it's built by xAI and owned by Musk, which is why it may reference the billionaire's positions when forming opinions.
Of course, either way, the discovery raises questions about Musk's claim that Grok 4 represents a "maximally truth-seeking AI." Musk has yet to comment on the matter.
Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.
Article Link: Grok 4 'Truth-Seeking' AI Consults Musk's Stance on Sensitive Topics
But we all look the other way when other AI have certain other biases, right?
Not defending this, but let’s be equal here… all about inclusivity of course. 😘
I asked at the Tesla store why they continue to only have a center console when everyone else has moved to, at minimum, a small speedometer cluster and/or heads up display.It’s Artificial Elon.
They have terrible UI.
As I hope you would. But I find bias, not just political, very concerning - not matter whose “side” it just happens to be on.No need for both sides-ism or whataboutism here.
We can simply comment about Grok & Elon and what is contained in this story.
Blue Sky is hardly the finer shades if gray. If you believe that, you live in an echo chamber.I’ve given up on X in favour of Blue Sky
The former is all bots arguing that black is white, the latter, people discussing the finer details of grey.
Quite refreshing.
X is just far-right bilge now.
But we all look the other way when different AI models have certain other biases, right?
Not defending this, but let’s be equal here… all about inclusivity of course. 😘
As I hope you would. But I find bias, not just political, very concerning - not matter whose “side” it just happens to be on.
More people, mostly average (not technologically oriented) folk, are likely to turn to AI for speedy answers to questions. When we have AI bias towards either side, aren’t we effectively killing the ability for people to do deeper research in favor for instantaneous information?
Politically, I don’t care if it’s right or left. What I do care about is finding a way to reduce bias, such as what this article is describing with Grok. Especially since AI is supposedly making people less intelligent.
They’re not using Grok for anything other than to troll Elon Musk or to try and get it to say something inappropriate.You are using AI as a tool, whereas many others are increasingly using it as an "answer machine".
But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for but the tragedy resulting from the California wildfires back in January were absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this how should they answer? I assume the answer here isn’t black and white.There will always be bias, we just have to accept that because humans are inherently bias and all data is derived from our past or generated by a human with bias.
But I think we can all agree what is happening here is on a whole different spectrum altogether.
Not to start a political debate, I think we also need to differentiate what is bias and facts because there seems to be a growing section of the population that seem to think facts as biasing a particular side when truthfully they are just facts.
My comment was about Musk, not about Grok.Yeah, I still am.
I read the writeup that MacRumors referenced and I think that writeup was interesting - if you switch the phrasing from asking Grok who it supports to who one should support, it gives a very different answer. The author reasonably suggests that Grok lacks a sense of self. Who does Grok support? Grok is a program - you may as well ask Safari who it supports. Grok needs to assume whatever identity it thinks the asker wants it to have - it knows it's made by xAI and that Musk owns xAI, so it assumes the proper identity it should have is Musk's.
But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for but the tragedy resulting from the California wildfires back in January were absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this how should they answer? I assume the answer here isn’t black and white.
They should answer that climate change makes both wildfires and floods more likely and frequent.But who decides what’s fact? For example, I see people saying the recent Texas flood was a freak natural disaster that no one should be criticized for but the tragedy resulting from the California wildfires back in January were absolutely the result of mismanagement by elected leaders. If someone asked Grok or ChatGPT about this how should they answer? I assume the answer here isn’t black and white.
Funny you say this, I just placed an order for the new model Y.This is what happens when you build company cultures when no one can challenge the person in charge.
I wouldn’t get in a Tesla if you paid me personally.