Since when do two wrongs make a right?
not that OpenAI is much better
Also, why is everyone suddenly interested in what happened at Tiananmen Square? It's not like "western" LLMs don't have biases of their own. Every output from AI needs to be taken with a grain of salt and checked for accuracy.
Sure. My point is, everyone knows how a Chinese model will answer (or skirt) questions about certain historic events. The more subtle biases are much more interesting. And western models will be full of those as well.

I think trying to rewrite history is something other than a bit of “bias.”
Nevertheless, it's hilarious to see OpenAI complain about a genuinely open (source) AI company stealing.
Yes, every LLM carries the bias of the data it is trained on. I can trust ChatGPT to answer any question with a western political bias. However, there is a difference between bias and censorship.

I know the difference between bias and censorship. I just don't think it's surprising or interesting. In fact, it's the most predictable thing about a Chinese AI service I can think of.
I read a neat article yesterday about how the DeepSeek programmers used PTX instead of CUDA because CUDA (a higher-level language than PTX) didn't expose certain functions the DeepSeek developers used to improve performance.

The issue does not seem to be what material was used for training, which at best could explain the low development cost. It does not explain the low hardware requirement. If it depended only on the training material, everyone, including OpenAI, could make models that run on the same cheap hardware.
I agree. They're good, aren't they?

Typical China.
It does if you misspell it (among many other ways): https://news.ycombinator.com/item?id=42859772

Were you asking it about Tiananmen Square by chance? It definitely doesn't want to answer any questions about that.
This difference is important to me because I use LLMs as a substitute for search engines.
What was the line in Pirates of Silicon Valley? Bill said he got to the TV first, Steve?

There's never been much honour among thieves.
They're all crooks, and the norm seems to be: say one thing, do another, deny or apologize when caught, and complain when you're the victim of your own deceitful behavior. 🤔🤔

Just wanted to second (or Nth) all the comments about OpenAI: having stolen IP from virtually everywhere it could get it, it now complains that someone used query responses from its own system to build their own model.
Ha ha f**king ha. All crooks.
Not coding, but using company-proprietary data that needs to be kept private. The solution may be convincing my boss to sign up for an enterprise account of ChatGPT, but I think he'd be more comfortable with it living locally (though not comfortable enough to buy me a new Mac 😂).

I run them on my 64 GB M1 Max. It depends on the use case: a decent 70B model will take 40-50 GB of memory, and if you are using it for coding within Visual Studio, you may need more than just the LLM's memory. Do you get paid for it? How crucial are data security and privacy? I would go for 128 GB of RAM, or a used M2 Ultra for cheap. That's a lot of money, though; cloud could be cheaper for occasional use.
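A rough way to sanity-check those memory figures: weight memory is roughly parameter count times bytes per weight, plus some runtime overhead. This is a minimal sketch, assuming a flat ~20% overhead for KV cache, activations, and buffers (the function name and overhead figure are my own illustrative choices, not from any vendor documentation):

```python
def estimate_llm_memory_gb(params_billions, bits_per_weight, overhead=0.20):
    """Approximate RAM footprint (GB) for running a quantized LLM locally.

    Assumes memory ~= weights + ~20% overhead for KV cache / activations.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 70B model at 4-bit quantization: ~35 GB of weights, ~42 GB total;
# at 5-bit, roughly the low 50s. That lines up with the 40-50 GB quoted above.
print(f"4-bit: {estimate_llm_memory_gb(70, 4):.0f} GB")
print(f"5-bit: {estimate_llm_memory_gb(70, 5):.0f} GB")
```

On a 64 GB machine that leaves little headroom beyond the model itself, which is why the 128 GB recommendation above makes sense if an IDE and other tooling have to share the same memory.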