Can't wait for them to force this into my wife's Model Y in a week so that the backup camera drops to 1 frame per second instead of the buttery smooth 2 frames per second it currently is, while it uses the remaining CPU power to debate with Elon on how awesome Hitler is.
Hope you remember where the emergency door releases are for her particular year of manufacture. (No, really, go look them up.)
 
On the bright side: I'd rather it explicitly cite Elon's views in the thinking chain than pretend it has no bias at all. At least that maintains a shred of honesty, haha.
 
But we all look the other way when different AI models have certain other biases, right?

Not defending this, but let’s be equal here… all about inclusivity of course. 😘
There's plenty of reporting about the biases of the DeepSeek models. Haven't seen much about the Meta- and Alphabet-run models' biases but it's probably harder to guess what threads to pull on with them. We've seen a lot of coverage about what they've been training on, however. So maybe we should talk about Grok being trained on X posts, probably the DMs, and possibly whatever private information Elon managed to 'liberate' from the US government when DOGE was siphoning copies of everything.
 
AI is going through teething pains. It's revolutionary for sure, and very cool for sure, but not in any way ready for prime time. As we see here, these companies are headed by questionable individuals and are losing money in the hundreds of billions. It's getting there, but it's far from ready, and when the R&D phase is done and these companies have exhausted all sources of funding, Apple will be there to help them out.
 
Also want to add that I am in the UK and Grok isn't available here.

Presumably we are not 'free thinking' enough, aka we require chat bots to be fair, balanced, and data-driven.

Ask ChatGPT about neoliberalism and you start to uncover some interesting things - hint: having loads of billionaires isn’t actually that great.

Presumably grok thinks that no, it’s great and anyone who disagrees is ‘woke’.
 
Grok needs to assume whatever identity it thinks the asker wants it to have - it knows it's made by xAI and that Musk owns xAI, so it assumes the proper identity it should have is Musk's.
I'd prefer to use a social media service where the owner isn't the main character on that service, isn't donating hundreds of millions to a political candidate, and isn't part of the government. On Bluesky, this couldn't have happened. The bot wouldn't have an identity to latch onto. The CEO of Bluesky is the 29th most followed account, and is almost completely apolitical on her account.
 
Grok needs to assume whatever identity it thinks the asker wants it to have - it knows it's made by xAI and that Musk owns xAI, so it assumes the proper identity it should have is Musk's.
Why is that a given? It sounds like a very specific thing with Grok, not with LLMs in general, and not a good thing. If a user doesn't specifically ask an LLM to take on a specific identity, it shouldn't, even when asked to express an opinion. It should stick with "I'm just an LLM, so I don't have opinions, but would you like me to tell you what someone has publicly said on this topic?" It shouldn't express itself in a first-person manner when asked for an opinion.
 
Simon Willison has a good write-up about this that isn’t sensationalized to generate clicks and ad revenue: https://simonwillison.net/2025/Jul/11/grok-musk/

Recommended reading if you actually care about why this might be occurring and aren't here just to post some obvious token (see what I did there) reaction.

I know they linked to this post in the article but go read what he has to say, it makes some sense.

Grok is interesting, but it's very difficult to find objective information and benchmarks. I've never seen more scams looking for data about any GenAI topic than I have in the last few days while learning about Grok's implementation details, capabilities, and caveats. Tread lightly and don't download anything anyone links; there is definitely a lot of malware via browser extensions etc. that tell you how to "use it free".
Willison concludes: "My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks", and finally, "I think there is a good chance this behavior is unintended!"

But if it were unintended, why aren't we seeing ChatGPT mirror Sam Altman's opinions on "controversial topics", and the same with similar public-facing LLMs?

Or are we?
 
Can't wait for them to force this into my wife's Model Y in a week so that the backup camera drops to 1 frame per second instead of the buttery smooth 2 frames per second it currently is, while it uses the remaining CPU power to debate with Elon on how awesome Hitler is.
BTW I thought you were kidding, but I just saw it's really happening.
My family also has a Y, so I really hope we can fully disable it.
 
Ditching physical controls for a touch screen is a no-go for me in a car.
You can use voice for things too, which is a lot safer than fiddling even with physical knobs.

EDIT: Poor driver education and distraction are worse examples of bad driving.
I continue to see (mostly) young drivers playing with phones; the bobbing heads are easy to spot when they're texting... :(
 
To be fair, don't all of these (ChatGPT, Claude, Grok, etc.) just regurgitate the views of their programmers? Ask them if Taiwan is an independent country. They just say "that's a sensitive topic, let's talk about something else." Ask them about 3pstein, etc., and it's all the same replies as well.

My opinion is that they all need to tell the truth, the facts, no matter how many feelings it hurts. It's not for the programmers to decide what gets censored and what doesn't. For example, I saw a YouTube video of a host asking ChatGPT which country has the highest rate of illiteracy. It refused to answer and even chastised the person. So, to get around it, they added that it was for the sake of scientific study. ChatGPT scolded them again but reluctantly answered since it was in the name of science. People shouldn't have to jump through hoops to learn factual information.
 
On the bright side: I'd rather it explicitly cite Elon's views in the thinking chain than pretend it has no bias at all. At least that maintains a shred of honesty, haha.
Fair, but it also highlights the shortcomings of Grok, which does some things really well.

Using X is amazing for up to date queries, but it can be inaccurate.

This is why I use both Gemini and Grok depending on the task.
 

Really?

One side was composed of people peacefully protesting the confederate statues.

The other side consisted of tiki torch marching neo-nazis, KKK and violent MAGAs.

Objective truth does exist.
A one-off incident vs. weekly far-left, traffic-blocking, anti-semitic protests.
Peacefully protesting statues? They tore them down, burned buildings, attacked police, and even tore down statues of Lincoln.
The far left is far more dangerous right now.
 
Willison concludes: "My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks", and finally, "I think there is a good chance this behavior is unintended!"

But if it were unintended, why aren't we seeing ChatGPT mirror Sam Altman's opinions on "controversial topics", and the same with similar public-facing LLMs?

Or are we?
From what I understand, Grok (even the previous version, when using Thinking or Deep Search) will by default include X / Twitter in its process. You can ask it not to do this, and apparently it won't. Other models don't have realtime API access like this; the closest thing might be Meta's, but I haven't used their new chatbot so I can't speak to the functionality.

From one perspective this is good because you get up-to-date information. Grok is effectively the only "realtime" model out there thanks to this, and news about war etc. moves very quickly on that platform, often beating traditional media on speed of reporting.

From another perspective this is terrible, because for whatever reason X / Twitter is filled with absolute garbage now and is nothing like it was ~10 years ago. I'm kind of surprised these companies haven't pushed a version whose index only uses verified members who pay with a credit card (so they're likely to be real people vs. bots), but you'd still get some bias there: nearly my entire social network of enthusiasts, researchers, and computer scientists who were on old Twitter deleted their accounts and left for platforms like Bluesky or Mastodon years ago, for example.

I've been playing around with Grok a bit without an account and I actually got a ton of useful macOS / unix terminal stuff out of it earlier this week when I was trying to debug an issue with certain processes. I was surprised, and that was using the old free model.

You are absolutely right to question the bias in all these tools, but it's easy (and it's happening in this thread) for people to misinterpret mentioning that as defending abhorrent policies.

There is bias inherent in the training data, which contributes to these issues, and especially in the RLHF step executed after training: both how it was performed and who performed it, which virtually no one talks about, even though it's part of every model's tuning before deployment as part of the alignment process.
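To make the RLHF point concrete, here's a toy, hedged sketch in pure Python, not any real lab's pipeline: a reward model is fit to human preference pairs with a Bradley-Terry logistic loss, so whoever supplies the "preferred vs. rejected" labels bakes their judgments directly into the reward the model is later tuned against. Every name, feature, and number below is made up for illustration.

```python
import math

# Toy RLHF reward-model sketch: each candidate reply is reduced to a
# single made-up feature x, and the learned reward is r(x) = w * x.
# The labelers' preferences are the only "ground truth" in sight.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward(pairs, lr=0.1, epochs=200):
    """pairs: list of (x_preferred, x_rejected) judged by human labelers."""
    w = 0.0
    for _ in range(epochs):
        for x_good, x_bad in pairs:
            # Bradley-Terry model: P(good beats bad) = sigmoid(r_good - r_bad)
            p = sigmoid(w * x_good - w * x_bad)
            # Gradient ascent on the log-likelihood of the human labels
            w += lr * (1.0 - p) * (x_good - x_bad)
    return w

# Whoever supplies these labels decides what "good" means:
pairs = [(1.0, 0.2), (0.8, 0.1), (0.9, 0.3)]
w = train_reward(pairs)
# The trained reward now echoes the labelers' preferences
p_prefer = sigmoid(w * 1.0 - w * 0.2)
```

The same mechanism that makes the model helpful is what imports the labelers' biases: swap in a different labeling pool and the learned reward flips with it, with no change to the base model or the code.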

Even local models will have this problem to an extent. I really don't think people have a good grasp of how this technology works, which is why every time I mention the utility of certain tools I also try to mention the caveats. Yes, there could be some hidden system prompt that says "check with Elon's entire timeline first", but I agree with Simon's take that this probably isn't happening; it's just a type of emergent or errant behavior that was probably unintentional.

Unfortunately we aren't going to see any of these fundamental problems solved anytime soon, if ever. It's just how LLMs work, and as they scale and start to "think" a bit (in a metaphorical sense, not a literal one; research "representation learning" as a starting point if you want to know more), with connections to the internet that do improve utility enormously, we're going to see really 'interesting' behavior make headlines.

From my POV they are extremely irresponsible to have their bot directly reply to users on X itself, but it does drive engagement because whenever something goes wrong and posts are made on social media showing that happening (either directly in the model interface or indirectly in the X timeline) an absolute ass ton of sites report on it. Case in point, this thread.
 
“Grok "knows" that it's built by xAI and owned by Musk, which is why it may reference the billionaire's positions when forming opinions.”

This part is the most troubling with our impending dependence on AI.
 
This is what happens when you build company cultures when no one can challenge the person in charge.

I wouldn’t get in a Tesla if you paid me personally.
But you would get into a gas car. You know the oil industry has done a trillion times more damage to the US and the people in it, through its political donations and influence, than Musk ever has. But you don't mind getting into a vehicle that makes them money.

But people are hypocrites. I understand.
 
Bluesky is hardly the finer shades of gray. If you believe that, you live in an echo chamber.

Bluesky is arguments about who can be more left, and anyone who veers right is called a na zi or worse.

It's worse than a Fox News comments thread.
Perhaps I do exist in an echo chamber?

But if it keeps me free of a world of chatter from people who have nothing but hate for any kind of others… I'm in!

Certainly in the UK the subtext for most right wing commentary these days is

“LET ME TELL YOU WHO I HATE AND WHY!”

And that’s getting really boring.
 