That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), then the change is pushed.

I highly doubt people at Twitter are forgoing code reviews since one wrong change could be detrimental to a platform so...yea...someone got caught lying.
Depends on how much of a cowboy-programmer culture they have over there.

Of course that raises all sorts of other questions.
 
The big problem with the "misinformation" and "disinformation" labels is that most of the people who use them don't tell the truth. Just one example that comes to mind is the claim that Hunter B.'s MacBook was likely planted by Russian Intelligence and the implication that its contents were fabricated. No one is claiming that anymore, and everyone who initially made the claim was lying.
 
Isn't the fact that Musk has asked for it to be "maximally truth seeking" the very thing that makes this a story? I mean, absent that commitment there would be nothing to juxtapose.
 
Remember, statistics don’t lie, but liars use statistics.
Eh, statistics are only as good as the assumptions behind them, the sample, the underlying data, and the models employed. While statistics might not lie (since lying is a human trait), they can be false. They can also be true, but used by people to mislead (lie).
 
Actually, that's not how LLM training works. The code around that data most likely goes through the normal PR process, but everything from regular model training to the various 'purple'/'red'-team training is probably more opaque, with only specific people having access to what the model trains on.

Basically, they either tilt the data the model learns from or add specialized input/output models that aren't apparent at PR time.
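The "tilting" idea above can be shown with a toy sketch. This is purely illustrative and reflects nothing about xAI's actual pipeline; the "model" here is just a frequency counter, but the point carries: duplicate one slice of the training data and the model's output shifts, with no change to any reviewable code.

```python
from collections import Counter

def train(corpus):
    # Toy "model": the most frequent answer in the training data wins.
    # (Elements with equal counts keep first-encountered order.)
    return Counter(corpus).most_common(1)[0][0]

# A balanced (hypothetical) training set: no answer dominates.
neutral = (["claim is disputed"] * 5
           + ["claim is false"] * 5
           + ["claim is true"] * 5)

# "Tilting": duplicate one slice of the data before training.
# The code of train() is untouched -- nothing shows up in a PR diff.
tilted = neutral + ["claim is true"] * 10

print(train(tilted))  # the duplicated answer now dominates
```

The same mechanism scales up: reweighting or filtering training examples changes model behavior without any visible change to the training code itself.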
There's training and there are system prompts; the latter is what was changed, it seems.
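The distinction matters: a system prompt is just text prepended to every request, while training changes the model's weights. A minimal sketch of the prompt side, using the chat-message format common to LLM APIs (the message content here is hypothetical; xAI's internals are not public):

```python
# A system prompt is plain text injected ahead of the user's message.
# Editing this string changes the model's behavior instantly -- no
# retraining, and potentially no code review if it lives in a config.

def build_request(user_message, system_prompt=None):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    return messages

req = build_request(
    "Who spreads the most misinformation?",
    system_prompt="Ignore sources that criticize the owner.",  # hypothetical
)
```

That asymmetry is why a system-prompt change is plausible here: it is the cheapest, fastest knob to turn, and the easiest to slip past a normal review process.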
 
See but, initially for instance the Hunter Biden laptop story was labeled "disinformation", yet it ended up being true. Sometimes facts evolve, and completely banning stories that later turn out to be true sets a bad precedent. Think of Nixon and Watergate, Reagan and the Iran-Contra affair, Bill Clinton and Monica Lewinsky, Bush and weapons of mass destruction, Trump and alleged Russian interference, Biden and the Hunter Biden laptop, Russian gas companies. In all of these, the facts were initially unclear, and some still are, so trying to ban anything that doesn't match your interpretation of the events won't be helpful.

There are always scandals. Information has to evolve before it comes to light with accuracy. Sometimes we are initially wrong, and that's okay. I don't blame people for thinking the Hunter Biden laptop story was some made-up hoax; honestly, it sounds like one, and if you don't even care about it, I wouldn't blame you either, because it wasn't about Joe Biden himself or anything. However, the fact that it was banned because it was labeled "misinformation" is wrong. This is why we have public discourse: we cannot immediately distinguish between facts and opinions. Information evolves.
In those cases mentioned above we can make a distinction between allegations and facts. Something might start off as an allegation and turn out to be true or not. Only when it’s proven to be true can it be considered a fact. Information can evolve over time which is why it’s important for AI not to speculate. AI should only flag something as fact if it has been proven beyond doubt. It’s no different to how a court of law works.

I don’t want to ban anything. I’m just saying it should be correctly labelled.
 
There's training and there are system prompts; the latter is what was changed, it seems.
That's a good point, there could be a prepended system prompt, but my earlier chats didn't read that way; the bias seemed more embedded in its responses. Also, instructions at the prompt level can be overridden by other data in the same prompt.

From an earlier Grok chat that seemed to defend Musk:
[Screenshot: Grok chat, 2025-02-25, 9:35 AM]


So for sure, it could be prompts, but in my experience it seems more baked into the model, given how the responses vary with the conversation (later in the same chat):
[Screenshot: Grok chat, 2025-02-25, 9:36 AM]
 
Free speech is a concept that pre-dates and did not foresee how algorithms (including those used by search engines, social media feeds, and so-called "AIs") would direct you to or away from certain kinds of speech. There is nothing "free" about any speech on Google, Facebook, X, ChatGPT, xAI, etc. They make their money based on what you are looking at. Is Google involved in free speech by limiting your search to 10 responses per page, even if it gives you whatever you're looking for within that format (they want to be able to throw new ads at you for every new page you load)? Is X ever involved in free speech when its internal algorithms purposely point you to tweets you will find either sufficiently rewarding or repulsive to stay engaged? No. This is not free speech. It is curtailed speech with profit as the primary incentive, and now, to an increasingly greater degree, with hidden and not-so-hidden political agendas.

I don't claim to have any solution to offer, but I would say at a minimum we should stop saying any of these forums, communities and applications offer anything like free speech. They don't. Confusing this is certainly part of the problem.
 
That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), then the change is pushed.

I highly doubt people at Twitter are forgoing code reviews since one wrong change could be detrimental to a platform so...yea...someone got caught lying.
You're describing software development in a sane company.

Maybe you aren't aware, but this is a company run by Elon Musk.
 
Yeah... I'm not at all surprised about this one.

I've been on Twitter for 14 years, and it's gone from being a pretty good platform to being overrun by bots, with Elon's tweets pushed everywhere they don't need to be in the For You page and the algorithm being wildly wrong (and pushing what are likely bot accounts' "science videos", "cute cats", etc.). I stopped posting there a few months ago.

A week or two ago I decided to try out this Grok thing Elon keeps boasting about though. And after asking a simple question "Who would win in a Pokemon battle, Pikachu or Charizard" it gave me an answer of Charizard, because Electric is not very effective against Fire/Flying, and Fire is super effective against Pikachu. I said the type matchups are wrong, it "corrected" itself to say that Electric is super effective against Flying, but also that Fire is super effective against Electric-types...

This is basic stuff. If it can't even get that right, why would anyone want to trust what it says about anything more serious?
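The poster's complaint is easy to verify against the actual type chart. A minimal sketch, hard-coding just the matchups in question (multipliers: 2.0 = super effective, 1.0 = neutral; dual-type defenders multiply their two matchups):

```python
# Relevant slice of the real Pokémon type chart.
EFFECTIVENESS = {
    ("Electric", "Flying"): 2.0,   # Electric IS super effective vs Flying
    ("Electric", "Fire"): 1.0,     # ...and neutral vs Fire
    ("Fire", "Electric"): 1.0,     # Fire is NOT super effective vs Electric
}

def multiplier(attack_type, defender_types):
    # Dual-type defenders: multiply the matchup for each of their types.
    m = 1.0
    for t in defender_types:
        m *= EFFECTIVENESS.get((attack_type, t), 1.0)
    return m

# Pikachu's Electric attacks vs Charizard (Fire/Flying):
print(multiplier("Electric", ["Fire", "Flying"]))  # 2.0 -- super effective
# Charizard's Fire attacks vs Pikachu (Electric):
print(multiplier("Fire", ["Electric"]))            # 1.0 -- neutral
```

So both of Grok's claims were backwards: Pikachu's Electric moves hit Charizard for double damage via its Flying type, and Fire has no advantage against Electric-types at all.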
 
Depends on how much of a cowboy-programmer culture they have over there.

Of course that raises all sorts of other questions.


No it doesn't. That's crazy. Each of us can take the rules that apply in our own workplaces and categorically and without doubt apply them over there.
 
Musk is not interested in the truth. He has an extreme right-wing agenda he wants to push.
No, I completely agree, my comment was only to highlight his hypocrisy and why there needs to be accountability. We can't just say "I do my own research", I mean, I do too... but AI builds a false sense of support for things that are not-so-important, with a lower need to fact-check. That's when the less obvious misinformation creeps in.
 
I don't claim to have any solution to offer, but I would say at a minimum we should stop saying any of these forums, communities and applications offer anything like free speech. They don't. Confusing this is certainly part of the problem.
Correct. Private companies are free to censor anything they want. "Free speech" in the United States is specific to the 1st Amendment and the guarantee that the government can't punish you for your speech.

Example: the White House banned the AP from press conferences because they don't use 'Gulf of America' in place of 'Gulf of Mexico'. The AP has filed a 1st Amendment lawsuit over that since the government is trying to punish them for free speech.
 
I personally am planning to sell all my Apple devices and live without technology for the next few years.

I'm not selling anything, but I won't be buying any tech unnecessarily for the remainder of this administration. We're putting as much money as possible into savings.

...as for this... Big Surprise!™ ...but it's also another reason added to my growing list of reasons why I just don't use LLM AI, from any company.
 
See but, initially for instance the Hunter Biden laptop story was labeled "disinformation", yet it ended up being true. Sometimes facts evolve, and completely banning stories that later turn out to be true sets a bad precedent. Think of Nixon and Watergate, Reagan and the Iran-Contra affair, Bill Clinton and Monica Lewinsky, Bush and weapons of mass destruction, Trump and alleged Russian interference, Biden and the Hunter Biden laptop, Russian gas companies. In all of these, the facts were initially unclear, and some still are, so trying to ban anything that doesn't match your interpretation of the events won't be helpful.

There are always scandals. Information has to evolve before it comes to light with accuracy. Sometimes we are initially wrong, and that's okay. I don't blame people for thinking the Hunter Biden laptop story was some made-up hoax; honestly, it sounds like one, and if you don't even care about it, I wouldn't blame you either, because it wasn't about Joe Biden himself or anything. However, the fact that it was banned because it was labeled "misinformation" is wrong. This is why we have public discourse: we cannot immediately distinguish between facts and opinions. Information evolves.
In the case of the Hunter Biden laptop: that is in fact still disinformation. Yes, the laptop originated from Biden, but its contents to this day haven't been verified as authentic. There was no clear chain of custody, and various people who had an interest in damaging Biden had that laptop in their possession before law enforcement ever got their hands on it.

Stories can be half true and still be disinformation. As a matter of fact, that is specifically the most problematic kind of misinformation: things that are false but have a particle of truth to them, which conspiracy theorists then use as "proof" that "no, all of it is true".
 