So Apple will only choose woke media to train its AI? What could go wrong? Or right…?
Yes, include all sources. An AI should be "smart" enough to figure out any fake news without needing to give it only curated information.
ChatGPT already shows a problematic bias. I asked it the simple question "Are there any advantages of climate change?" and it refused to answer, instead telling me all the well-known reasons why climate change is bad. Imagine, though, that you are a lawyer and want to defend your client against the attacks of a smart lawyer on the other side. Then you want to know all the arguments that contradict your position, because those are the arguments the other side could come up with. I am sure that ChatGPT will also take a one-sided position on other controversial topics: gun laws, abortion, military conflicts, and so on. The more people use ChatGPT or any other AI, the more influence it will have on public opinion.
That was always my biggest fear when I heard of a "general AI": that some programmer taught it some ethics instead of letting it develop its own. In the future, AI will have to make a lot of life-and-death decisions, for example in the case of an unavoidable accident. Is it better to kill one child in an accident, or two very old people who might die soon anyway? On many of those questions society comes to different conclusions than I do. I, for example, don't think that the life of an old person is worth less than the life of a young person.
I learned that in one of the biggest military conflicts at the moment, AI already chooses the targets for rockets. I did not see that dystopian future coming so quickly. It already reminds me of "Skynet" from the Terminator movies. I expected something like that in 2060, not in 2023.
Good grief. Talk about choosing completely unbiased and factually reliable publication groups to train data on.
I second this. I also very much appreciate their transparency whenever reporting on an issue that relates to a person or company that donates to NPR. They do their best to tell the news without opinions, and when opinions are given, I feel they are fluid and based on the best available facts. A source that doesn't rely on advertising dollars for funding is the best starting place for this discussion. NPR and PBS aren't perfect, but I would trust their news and opinions over any other sources, even though I do not agree with some of their programming.
It may not really "understand" what it says, but there is still a lot of security built in to prevent a disaster like that one chatbot that started writing hateful and racist things. Many people try to get ChatGPT to say something that is "politically incorrect" or even illegal. That would of course hurt the reputation of OpenAI, so the programmers try everything to prevent ChatGPT from saying anything controversial.

Wait wait wait… ChatGPT is NOT a knowledge base that can give you facts or fiction. It has no idea what climate change or fake news is.
It is a language model that basically produces a word soup on command. It does not understand what it is saying. So yes, you could train a model on the Bible, on all the works of Shakespeare, on Fox News reports, but it will not reason or take sides. It will not "develop" its own ethics. It will still only reproduce words in a soup that go well together.
If you only train the model with poor data, let’s say climate hoax arguments, it will only reproduce those. So, if you’re looking for confirmation bias, it will give you that on the condition it was trained with that kind of data.
The reason an LLM like ChatGPT works so well is the sheer amount of data fed into it. So if it's trained on 99.5% scientific data and 0.5% conspiracy theories, it will still produce answers that 'sound' scientifically correct.
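To make that proportion argument concrete, here's a deliberately silly toy sketch in Python. This is NOT how a real LLM works internally (real models generalize, they don't parrot); it just shows that if outputs are sampled in proportion to the training mix, the 0.5% slice almost never surfaces. All names here are made up for the illustration.

```python
import random

# Toy illustration of the proportion argument above -- NOT a real LLM.
# This "model" just parrots sentences with the same frequency they
# had in its training data: 99.5% vs 0.5%.
training_data = (["answer that sounds scientific"] * 995
                 + ["conspiracy theory"] * 5)

random.seed(42)  # make the sampling reproducible
samples = [random.choice(training_data) for _ in range(10_000)]
majority_share = samples.count("answer that sounds scientific") / len(samples)
print(f"{majority_share:.1%} of outputs echo the majority data")
```

The printed share lands near 99.5%, matching the training mix: the minority view isn't refuted, it's simply drowned out.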
But you’re probably right: lots of people will use it as a source of the ‘truth’. Without critical thinking. Scary.
BTW: there are advantages to climate change. But they are local, not global. Your local weather could become more favourable for human living or specific commercial activities, specific plant species could grow faster at specific sites, … Just to fact-check you: I also asked ChatGPT and it gave me a correct answer in line with what I just wrote. So no issue there.
Humans by nature are biased. I guess the better question is who is less biased, and again that depends on the individual. To some, Fox News is the most accurate news organization; to others it's pure trash. To some, CNN is more accurate and honest, while to others it's the spawn of Satan. It's hard to answer.
This thread is a great discussion. Enjoying all the various opinions.
You’re right about the reputation and making sure it doesn’t produce politically incorrect or illegal things. But is that wrong? What would you expect? That it repeats the garbage people post all over the net? Remember the Tay disaster from 2016. ChatGPT would be over in seconds.
My goal was not to make ChatGPT a climate change denier, but at least it could have listed advantages like "fewer people freeze to death" or "fewer accidents from slippery roads". That is what a large language model without any intervention would have done.
I wonder what would happen if it is trained on those sources. They provide complete opposite views of each other. Will the result be an insane AI 😂
End users (us) couldn’t give a rat’s ass about AI.
It’s all techno-nerd hype.
Techno-nerd hype, it certainly is. But it sure as hell ain't only hype. We've seen the future before: People didn't give a rat's ass about cars in the 1890s. Or about personal computers in the 1970s. Or about short-form video online in 2012. Or about Taylor Swift in 2004.
At this point maybe people don't give a rat's ass about AI.
But AI gives a rat's deep indexed, cross referenced, meta tagged, psych profiled, IOT surveilled, geolocated and credit scored ass about them.
The premise is that AI systems will prove a general benefit. But the reality is that AI poses more of a threat. AI systems displace workers, and to date there’s no plan for how to support all these people who won’t have any job prospects. And that’s just one of the major issues AI systems pose.
Thanks. I never heard of that film and just looked it up. It sounds great.
"How to Frame a Figg", Universal Pictures, 1971
The problem is that one side does not even claim to follow journalistic principles, but since their audience doesn’t press them on it, it never comes up. It doesn’t help that one man owns 600 media outlets in the US and is heavily involved in what messages go out. So a wide number of Americans really only hear his voice.

I'm not sure why they wouldn't train their bots on two or more sides of the same story. I still think open-source chatbots are more balanced than the agenda-driven chatbots of big corporations.
I don't think any of the publications listed are trying to be unbiased. They're just glorified opinion pieces for readers of a certain political persuasion, and of course Apple already has relationships with many of them via its News app. But that raises the question: who is unbiased and factual in the publishing space?
Apple can set the standard for not freely scraping; we will all be better off! I'm just not sure if this is AI's moment, or a passing fad like 3D televisions. Remember how they tried to become a thing three or four times over the decades…
I imagine there is a ton of copyright infringement going on. Personally speaking, shouldn't this be the sort of thing that content creators have the right to opt out of, if they do not wish for their work to be used for training AI?
Concur. Information-based workers aren't necessarily OWED a living, any more or less than other economic sectors, because capitalism. Between the leading AI vendors and Boston Dynamics (which is to say SoftBank and Hyundai), there's no denying the potential SCALE of a hypothetical problem.
Why shouldn’t we freely scrape off every source imaginable?
No one. That is why you train it with a much broader spectrum. Otherwise you get an AI that is crippled by having only one viewpoint to train on.