So, the pathological liar and scam artist ordered his minions to censor information about him lying and scamming.
Why is this news...
> That's not at all what the concern is here. That's like saying "it's OK that this food I'm being told to eat is gravel, I prefer not to eat anyway." The concern is adding that in the first place. Musk has repeatedly described this as a "maximally truth-seeking AI," so why do this?

Because he's a liar and a conman, and anyone with two functioning neurons can see that.
> That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), and then the change is pushed. I highly doubt people at Twitter are forgoing code reviews, since one wrong change could be detrimental to a platform, so... yeah... someone got caught lying.

Exactly. Pushing a new release to production is not possible without several reviews and approvals, especially if it is a fundamental change that can screw up the system, like a new prompt for your main, heavily used public model, affecting literally millions of users.
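For what it's worth, the review gate these comments describe is usually enforced mechanically, not just by convention. On GitHub that is typically a branch protection rule requiring code-owner review before merge, paired with a CODEOWNERS file; a minimal sketch (the org and team names are hypothetical):

```
# .github/CODEOWNERS — hypothetical example
# Every path in the repo requires an approving review from the
# senior-devs team before a pull request can be merged, assuming
# the main branch has a protection rule requiring code-owner review.
*   @example-org/senior-devs
```

With a rule like this in place, "pushed the change without asking" would mean either the rule was missing for that file, or someone with admin rights bypassed it.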
> Now this is a surprise… who'd have thought.

Yeah. Who would think any AI would be telling us the truth 😂
/s
> Grok is fun to BS with about real things (way more than nerfed GPT, which is way better at productivity), but it sucks lately.
>
> I was talking to it about Elon Musk being hypocritical around free speech, in two specific instances:
>
> 1. Removing Asmongold's checkmark (and thus lowering his voice on the platform) after he elevated a video that exposed Elon as a cheater in Path of Exile 2 (which Elon later admitted to)
> 2. Shadow banning Laura Loomer when she disagreed with him on H-1B
>
> I have no horse in either race, but I wanted to explore the contradictions with Grok, and it gave me some really nerfed responses. I had to draw it out of it; it was a dumb echo-chamber experience.
>
> Then again, when talking to Grok about JD Vance's post about how Ukraine's security guarantees (from the Budapest Memorandum) are from a different era in history, I asked how that compared to how he talks about the Constitution, and again it gave me a bunch of BS. I had to really push it to say it was hypocritical.
>
> I'm not really interested in politics, but I am interested in how AIs can influence people around popular topics, politics being one of them.
>
> I think Grok is fun, but I wouldn't trust it, just like I don't trust other AIs or most people.

I'd suggest talking about this with an actual person instead of a chatbot programmed by minions of the liar you're trying to learn more about.
"an ex-OpenAI employee that hasn't fully absorbed xAI's culture yet" who "pushed the change without asking."
> I'm still super impressed with it. The think mode gives insight into the thought process, which is extremely useful for seeing where the answers are coming from and why. I've used it for some searches that required an extreme amount of reasoning and data-gathering that would have taken me hours to even start, and it did it in a minute. As with all AI, it's important to check. But for initial research and data analysis, I've found it typically beats hoping the answer is on some webpage.

I haven't used it much yet, but I am in the process of testing out the newest model. I asked it to create a Dockerfile and what it created seemed comprehensive. Testing is still to come, but compared to alternative outputs from OpenAI's o1 and o3 models, Grok 3 seems excellent, at least at that specific task.
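For readers curious what "comprehensive" means for that kind of prompt, this is a minimal sketch of the sort of Dockerfile such a request might produce (the base image, file names, and entry point are all hypothetical, not what Grok actually generated):

```
# Hypothetical example of a generated Dockerfile for a small Python app
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

The usual things to check in a generated Dockerfile are layer ordering for caching, pinned base-image versions, and whether it runs as a non-root user.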
Let’s not forget that a lot of webpages have misinformation as well.
> That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), and then the change is pushed. I highly doubt people at Twitter are forgoing code reviews, since one wrong change could be detrimental to a platform, so... yeah... someone got caught lying.

100%. We have one person who can run pull requests against our live branch at my company. Sadly, most people don't understand this, so they will continue drinking Elon's Kool-Aid.
> Well, finally some censorship on the right! It seemed like the left were the only ones dipping into that pool for a while now. In all seriousness, I hope censorship on both sides goes away. Free speech FTW.

Please show multiple citations to demonstrate this. And not from fringe sources.
> I see what you are saying, but as soon as you impose a limit, who gets to decide that limit? Where is the line? Who decides what disinformation is? There are things in 2015 we thought were disinformation based on the news, and now we know them to be true. It's a sticky thing, really. Even in Schenck v. United States, the Supreme Court ruled that speech presenting a clear and present danger is NOT protected, but they didn't rule that you can't just spew a lie. Maybe the lie will become true one day; maybe it'll just be a lie. That's the importance of free public discourse: people have the space to discuss these things. If we just ban things we initially think are false, we will never change anyone's mind.
>
> I'll get off the soapbox now. Feel free to disagree... it's your First Amendment right lol

Surely the line is whether something is true or not. AI needs to be able to distinguish between facts and opinions.
> This is why they are so much behind the concept of free speech at all costs. Free speech at all costs is their only way to spread so much misinformation a day to so many followers.
>
> This is kind of ironic, though: Elon censored Grok itself.
>
> Look, I'm 100% for free speech, hell, who wouldn't be? But there has to be some kind of limit when it brings so much hatred and de-educates people on this platform.

Finally someone said it. I have had this debate with my friends and they don't get it. They refuse to see that their favorite person (Elon) would do that.
It’s disappointing that Apple has decided to resume advertising on X.
That is utter ********. ANYONE who is a software developer knows that you create pull requests, the code is thoroughly reviewed by a senior dev (or higher), and then the change is pushed. I highly doubt people at Twitter are forgoing code reviews, since one wrong change could be detrimental to a platform, so... yeah... someone got caught lying.
> Exactly. Pushing a new release to production is not possible without several reviews and approvals, especially if it is a fundamental change that can screw up the system, like a new prompt for your main, heavily used public model, affecting literally millions of users.

Actually, that's not how training LLMs works. The code around that data most likely goes through the normal PR process, but everything from regular model training to various red-team/"purple"-team training runs is probably more abstract, with only specific people having access to what the model trains on.
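One reason both sides of this argument can be partly right: a system prompt is often just data read at runtime, not application code, so editing it need not trigger the PR review that a code change would. A minimal sketch of that pattern, assuming a hypothetical file layout and a chat-completions-style message format:

```python
# Sketch (file names hypothetical): the application code below never changes
# when the system prompt does — only the contents of prompt.txt change, so a
# prompt edit can ship without touching any reviewed source file.
from pathlib import Path

def load_system_prompt(path: str = "prompt.txt") -> str:
    """Read the system prompt from a plain-text file at request time."""
    return Path(path).read_text(encoding="utf-8").strip()

def build_messages(user_input: str, prompt_path: str = "prompt.txt") -> list[dict]:
    """Assemble a chat-completions-style message list with the current prompt."""
    return [
        {"role": "system", "content": load_system_prompt(prompt_path)},
        {"role": "user", "content": user_input},
    ]
```

Whether such a data file is covered by the same review process as code is a policy choice, which is exactly what the dispute in the article is about.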
They keep lying about their lies. Mind-****ing-blowing, really.
> I'd suggest talking about this with an actual person instead of a chatbot programmed by minions of the liar you're trying to learn more about.

The point was to see what it said about it... like in the direct context of this article.
> Surely the line is whether something is true or not. AI needs to be able to distinguish between facts and opinions.

See, but initially, for instance, the Hunter Biden laptop story was labeled "disinformation," yet it ended up being true. Sometimes facts evolve, and completely banning claims that later turn out to be true sets a bad precedent. Think of Nixon and Watergate, Reagan and the Iran-Contra affair, Bill Clinton and Monica Lewinsky, Bush and weapons of mass destruction, Trump and alleged Russian interference, Biden and the Hunter Biden laptop, Russian gas companies. In all of these, initially, the facts were unclear, and some still are, so trying to ban anything that doesn't match your own interpretation of the events won't be helpful.