> “Keep up” is such silly reporting.

But if they took a massive shortcut, then it's not so disruptive, right?
Deepseek has done what it’s done on cheaper hardware. That’s why they’ve been so disruptive in the space.
> The sooner the AI fad collapses, the better...

This is now part of the workflow in many enterprises. Once the "fad" collapses, the entire economy collapses with it.
> Not true. Have you even tried it?

Actually, it is true. It is only censored on their public web version, and in fact it answers, then post-censors and deletes the answer. If you download the model and run it yourself (you will need server-grade hardware for the 671B model), it will answer anything you ask it.
DeepSeek is OSS and can be run locally and it will gladly answer all the things its creators don't want it talking about
That's just the web version. The full open source model doesn't censor.
When I say that ChatGPT plagiarizes authors, I mean that they take other people's work that has been made available on the Internet, and then plagiarize it by rewording it just enough to circumvent copyright law.
> That also means that you don't use it, correct? Because if you use it, even without paying, you are still allowing them to earn revenue from you; while you are not writing a check to them, you still provide them revenue and thereby support them plagiarizing authors. I am not making an accusation, I just want to make sure you understand that by using the platform you ARE contributing to the issue... if you use it, which I am not saying you do.

Copyright is automatic for prose; there is no "real copyright". And shifting the blame isn't the way to go here.
Also, when you say plagiarizing authors, do you mean everyone who "contributes" or specifically authors with material under a real copyright?
> That's not the way machine learning or the transformer model works. The models do not contain any works, only the training data does.

Transformer models are trained to predict the next token in a text. Optimally they would always guess right, and if they always guessed right, they would have de facto memorized the training data. The larger the models get, the more capable of that they become. Even with very modestly sized models, you can ask for the first sentences of prominent books and they will quote them with high likelihood. So yes, in part, they absolutely do contain works, some more than others. They don't just passively consume books with little retention (as we might imagine a human would); they actively attempt to memorize everything and compress it as much as they can. The intelligence is in the act of compression.
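The memorization point can be illustrated with a toy stand-in (a hypothetical character-level n-gram lookup table, not a real transformer): once the context window is long enough that every training context is unique, "predict the next token" collapses into reproducing the training text verbatim.

```python
from collections import defaultdict

def train_ngram(text, n):
    """Map every n-character context in `text` to the characters that followed it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, n, length):
    """Greedily emit the most common continuation of the last n characters."""
    out = seed
    while len(out) < length and out[-n:] in model:
        continuations = model[out[-n:]]
        out += max(set(continuations), key=continuations.count)
    return out

book = "It was the best of times, it was the worst of times."

# With a short context the "model" must generalize (and may garble); with a
# context long enough to make every training context unique, greedy decoding
# simply regurgitates the training text.
small = train_ngram(book, 3)
print(generate(small, book[:3], 3, len(book)))    # fluent-looking, not faithful

big = train_ngram(book, 12)
print(generate(big, book[:12], 12, len(book)))    # → the training sentence, verbatim
```

The same trade-off the comment describes, in miniature: more capacity per context means less forced generalization and more verbatim recall of the training data.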
You can ask the best models to output text in the style of Bukowski or Hemingway and it still won’t read like Bukowski or Hemingway. A bad impersonation at best, and a bad impersonation is not plagiarism.
The models are trained on vast amounts of text to learn to predict sequences of words and patterns.
All they need is copious amounts of text. You could have a dataset without any books in it; it could be nothing but Reddit posts or MacRumors posts. The models would still learn the same linguistic patterns and still be able to output fictional content such as novels.
That doesn’t mean they will be good, of course. Only tech nerds and dimwits look at the novels produced with ChatGPT or Llama and think they are good writing.
I would never pay OpenAI, because that would be allowing them to profit by plagiarizing authors whom OpenAI neither credits nor financially compensates.
In other words, I don't pay thieves for stolen property. Thieves like Sam Altman should be in jail for theft, not financially rewarded for it.
> DeepSeek does the same thing for 97% less, so unless o1 is 34x better, it’s a worse value.

No, it doesn’t. DeepSeek is not multimodal. DeepSeek cannot search the web. Its answers are nowhere near as complete as o3's. And ChatGPT does not censor its answers to comply with the political mandates of the Chinese Communist Party.
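The price arithmetic behind the "34x" figure is easy to verify: paying 97% less means paying 3% of the price, so the expensive model must be roughly 33x better (the commenter rounds up to 34x) just to break even on value per dollar. A quick check:

```python
# Break-even quality multiple implied by a 97% price discount.
discount = 0.97
cost_fraction = 1 - discount       # you pay 3% of the original price
break_even = 1 / cost_fraction     # how much better the pricier model must be
print(round(break_even, 1))        # → 33.3
```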
> "Rivaling DeepSeek". What a time to be alive. Don't sleep on OpenAI, they still have the best models. For now.

Didn't they leak over a million chats, though?
> You can ask it about the Tiananmen Square massacre and it will answer. This alone makes it superior to DeepSeek.

And ChatGPT hasn't leaked over a million chats yet.
> OpenAI is aiming to keep up with Chinese company DeepSeek.

Aren't they supposed to be ahead? How the tables have turned.
> I will trust a US company any time of the day before trusting a Chinese company. I'd be scared to even try DeepSeek.

What a joke. I'm enjoying DeepSeek, you know, a version that's free for something that works.
I will trust a US company any time of the day before trusting a Chinese company. I'd be scared to even try DeepSeek.
o3-mini is the first reasoning model that OpenAI is making available to free users.