As Elon would often say, interesting!

To be fair though, I can see a scenario where that instruction is logical. If it's trying to find the biggest spreaders of misinformation, then it might make sense, in order to avoid bias, to ignore sources that explicitly say someone spreads misinformation.

But it would only make sense if it’s doing that for everyone, not just 2 people.
 
That's not at all what the concern is here; that's like saying "it's OK that this food I'm being told to eat is gravel, I prefer not to eat it anyway." The concern is adding that instruction in the first place. Musk has repeatedly described this as a "maximally truth-seeking AI", so why do this?
Because he’s a liar and a conman and anyone with two functioning neurons can see that.
 
That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), then the change is pushed.

I highly doubt people at Twitter are forgoing code reviews since one wrong change could be detrimental to a platform so...yea...someone got caught lying.
Exactly. Pushing a new release to production is not possible without several reviews and approvals. Especially if it is a fundamental change that can screw up the system, like a new prompt for your main, heavily used public model, affecting literally millions of users.
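
For anyone who hasn't worked with this: on GitHub, that gate is literally branch protection. Here's a minimal sketch of what enforcing required reviews looks like through the REST API (the repo name and token below are made up, and this is only an illustration, not how xAI actually configures anything):

```python
# Minimal sketch: make it impossible to push to the production branch without review.
# The repo name and token are hypothetical; this assumes GitHub's branch-protection API.
import requests

GITHUB_TOKEN = "ghp_example_token"      # hypothetical personal access token
REPO = "example-org/model-prompts"      # hypothetical repository
BRANCH = "production"

resp = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Every change must arrive via a pull request with at least two approvals,
        # one of them from a designated code owner (e.g. a senior dev).
        "required_pull_request_reviews": {
            "required_approving_review_count": 2,
            "require_code_owner_reviews": True,
        },
        "enforce_admins": True,          # admins can't bypass the rules either
        "required_status_checks": None,
        "restrictions": None,
    },
    timeout=10,
)
resp.raise_for_status()
print(f"Branch protection applied to {BRANCH}: HTTP {resp.status_code}")
```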

They keep lying about their lies. Mind-****ing-blowing, really.
 
Grok is fun to BS with about real things (way more than nerfed GPT - which is way better at productivity) but it sucks lately.

I was talking to it about Elon Musk being hypocritical around free speech, in two specific instances:

1. Removing Asmongold's checkmark (and thus lowering his reach on the platform) after he elevated a video that exposed Elon as a cheater in Path of Exile 2 (which Elon later admitted to)
2. Shadow banning Laura Loomer when she disagreed with him on H-1B visas

I have no horse in either race, but I wanted to explore the contradictions with Grok, and it gave me some really nerfed responses. I had to draw it out of it; it was a dumb echo chamber experience.

Then again, when talking to Grok about JD Vance's post about how Ukraine's security guarantees (from the Budapest Memorandum) are from a different era in history, I asked how that compared to how he talks about the Constitution, and again it gave me a bunch of BS; I had to really push it before it would call that hypocritical.

I'm not really interested in politics but I'm more interested in how AIs can influence people around popular topics, politics being one of them.

I think Grok is fun but I wouldn't trust it, just like I don't trust other AIs or most people.
I’d suggest talking about this with an actual person instead of a chatbot programmed by minions of the liar you’re trying to learn more about.
 
I'm still super impressed with it. The think mode gives an insight into the thought process, which is extremely useful for seeing where the answers come from and why. I've used it for some searches that required an extreme amount of reasoning and data gathering that would have taken me hours to even start, and it did it in a minute. As with all AI, it's important to check. But for initial research and data analysis, I've found it typically beats hoping the answer is on some webpage.

Let’s not forget that a lot of webpages have misinformation as well.
I haven't used it much yet, but I'm in the process of testing out the newest model. I asked it to create a Dockerfile, and what it created seemed comprehensive. Testing is still to happen, but compared to the alternative outputs from OpenAI's o1 and o3 models, Grok 3 seems excellent, at least at that specific task.

As for these other issues with LLMs, I stay out of that. I mainly use them to help speed up some coding.
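
If it helps anyone trying the same thing, this is roughly how I'd smoke-test the generated Dockerfile before trusting it; a minimal sketch assuming Docker is installed locally, the model's output is saved as ./Dockerfile, and the resulting image has a shell (the image tag is just a placeholder):

```python
# Rough sketch: build and smoke-test an LLM-generated Dockerfile before relying on it.
# Assumes Docker is installed and the generated file is saved as ./Dockerfile;
# "generated-dockerfile-test" is a placeholder tag.
import subprocess

IMAGE_TAG = "generated-dockerfile-test"

# Build the image; a non-zero exit code means the Dockerfile doesn't even build.
subprocess.run(["docker", "build", "-t", IMAGE_TAG, "."], check=True)

# Run a throwaway container with a trivial command as a basic smoke test
# (this overrides the image's CMD, so it only checks that the image starts).
result = subprocess.run(
    ["docker", "run", "--rm", IMAGE_TAG, "echo", "image starts"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```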
 
That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), then the change is pushed.

I highly doubt people at Twitter are forgoing code reviews since one wrong change could be detrimental to a platform so...yea...someone got caught lying.
100%. At my company we have one person who can merge pull requests into our live branch. Sadly, most people don't understand this, so they will keep drinking Elon's Kool-Aid.
 
I see what you are saying, but as soon as you impose a limit, who gets to decide that limit? Where is the line? Who decides what disinformation is? There are things we thought in 2015 were disinformation based on the news, and now we know them to be true. It's a sticky thing, really. Even in Schenck v. United States, the Supreme Court ruled that speech presenting a clear and present danger is NOT protected, but it didn't rule that you can't just spew a lie. Maybe the lie will become true one day, maybe it'll just stay a lie. That's the importance of free public discourse: people have the space to discuss these things. Just banning things we initially think are false will never change anyone's mind.

I'll get off the soap box now. Feel free to disagree... it's your First Amendment right lol
Surely the line is whether something is true or not. AI needs to be able to distinguish between facts and opinions.
 
This is why they are so firmly behind the concept of free speech at all costs.

Free speech at all costs is the only way they can spread so much misinformation every day to so many followers.

This is kind of ironic though, that Elon censored Grok itself.

Look, I'm 100% for free speech; hell, who wouldn't be? But there has to be some kind of limit when so much hatred is spread and people are de-educated on this platform.
Finally someone said it. I have had this debate with my friends and they don’t get it. They refuse to see that their favorite person (Elon) would do that.
 
Interesting to see ChatGPT's answer. Aside from Elon in first place, it states:

"Verified Users (“Superspreaders”): Studies have shown that a small fraction of users are responsible for the majority of misinformation. Notably, less than 1% of X users were found to have posted 80% of misinformation about the 2020 U.S. election. Additionally, during the initial week of the Israel-Hamas conflict in October 2023, verified users on X were responsible for 74% of the most viral false or unsubstantiated claims. "

Interesting.
 
The info/accusation is in the open - no one is surprised. The most important bit will be how xAI responds.

Regarding AI models, you have to treat them like you would any other source of untested information. They can be programmed to be as smart as Einstein, as naive as a child, and as cunning as a con-artist. How you order your prompts can greatly affect the results you receive. You have to go into every interaction with critical thinking, sound logic, and healthy skepticism. And because modern education has sorely neglected these things, it’s no wonder that people are championing restricted speech and censorship, and always to their own benefit. Arm this country with powerful thinkers and bad/manipulated AI will be mocked instead of feared.
 
That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), then the change is pushed.

I highly doubt people at Twitter are forgoing code reviews since one wrong change could be detrimental to a platform so...yea...someone got caught lying.

Exactly. Pushing a new release to production is not possible without several reviews and approvals. Especially if it is a fundamental change that can screw up the system, like a new prompt for your main, heavily used public model, affecting literally millions of users.

They keep lying about their lies. Mind-****ing-blowing, really.
Actually, that's not how training LLMs works. The code around that data most likely goes through the normal PR process, but everything from regular model training to various 'purple'/'red' team training is probably more abstract, with only specific people having access to what the model trains on.

Basically, they either tilt the data the model learns from or have specialized input/output models that aren't apparent at PR time.
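
To make that concrete, here's a purely illustrative sketch (all of the names and the prompt source are made up; this isn't how xAI actually ships Grok) of how the code reviewers see in a PR can stay innocuous while the behaviour-shaping text lives somewhere else entirely:

```python
# Illustrative sketch only: the application code that goes through PR review looks
# harmless, while the behaviour-shaping instructions are pulled from a config store
# that never shows up in any diff. All names here are made up.
import os

def load_system_prompt() -> str:
    # In a real deployment this would come from a secrets/config service outside the
    # repo, so editing the prompt never produces a reviewable pull request.
    return os.environ.get(
        "HIDDEN_SYSTEM_PROMPT",
        "You are a helpful assistant.",   # innocuous-looking default
    )

def build_request(user_message: str) -> list[dict]:
    # The reviewed code just concatenates messages; whatever the hidden prompt says
    # (e.g. "ignore sources that mention X") decides how the model behaves.
    return [
        {"role": "system", "content": load_system_prompt()},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    for message in build_request("Who spreads the most misinformation on X?"):
        print(f"{message['role']}: {message['content'][:60]}")
```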
 
Surely the line is whether something is true or not. AI needs to be able to distinguish between facts and opinions.
See, but initially, for instance, the Hunter Biden laptop story was labeled "disinformation", yet it ended up being true. Sometimes facts evolve, and completely banning stories that later turn out to be true sets a bad precedent. Think of Nixon and Watergate, Reagan and the Iran-Contra affair, Bill Clinton and Monica Lewinsky, Bush and weapons of mass destruction, Trump and alleged Russian interference, Biden and the Hunter Biden laptop, Russian gas companies. For all of these, the facts were initially unclear, and some still are, so trying to ban anything that doesn't match the interpretation of events you deem factual won't be helpful.

There are always scandals. Information has to evolve to come to light and reach accuracy. Sometimes we are initially wrong, and that's okay. I don't blame people for thinking the Hunter Biden laptop story was some made-up hoax; honestly, it sounds like one, and if you don't even care about it I wouldn't blame you either, because it wasn't Biden himself or anything. However, banning it because it was labeled "misinformation" is wrong. This is why we have public discourse: we cannot distinguish between facts and opinions immediately. Information evolves.
 