Yes, include all sources. An AI should be "smart" enough to spot any fake news without needing to be fed only curated information.

ChatGPT already shows a problematic bias. I asked it the simple question "Are there any advantages of climate change?" and it refused to answer, instead telling me all the well-known reasons why climate change is bad. Imagine, though, that you are a lawyer and want to defend your client against the attacks of a smart lawyer on the other side. Then you want to know all the arguments that contradict your position: those are the arguments the other side could come up with. I am sure that ChatGPT will also take a one-sided position on other controversial topics: gun laws, abortion, military conflicts, and so on. The more people use ChatGPT or any other AI, the more influence it will have on public opinion.

That was always my biggest fear when I heard of a "general AI": that some programmer told it some ethics instead of letting it develop its own ethics. In the future, AI will have to make a lot of life-and-death decisions, for example in the case of an unavoidable accident. Is it better to kill one child in an accident, or two very old people who might die soon anyway? On many of those questions, society comes to different conclusions than I do. I, for example, don't think that the life of an old person is worth less than the life of a young person.

I learned that in one of the biggest military conflicts at the moment AI already chooses the targets for rockets. I did not see that dystopian future coming so quickly. It already reminds me of "Skynet" from the Terminator movies. I expected something like that in 2060, but not in 2023.
Wait wait wait… ChatGPT is NOT a knowledge base that can give you facts or fiction. It has no idea what climate change or fake news is.

It is a language model that basically produces word soup on command. It does not understand what it is saying. So yes, you could train a model on the Bible, on all the works of Shakespeare, or on Fox News reports, but it will not reason or take sides. It will not "develop" its own ethics. It will still only reproduce words that go well together.
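The "words that go well together" idea can be made concrete with a toy bigram model (my own minimal sketch, not how ChatGPT actually works; real LLMs use neural networks over far more context):

```python
import random
from collections import defaultdict

random.seed(42)

# Tiny "training corpus" for the toy model.
text = ("the model produces words that go well together "
        "the model does not understand the words it produces")
words = text.split()

# Record, for each word, which words were observed right after it.
following = defaultdict(list)
for a, b in zip(words, words[1:]):
    following[a].append(b)

# "Generate" by repeatedly picking a plausible next word.
out = ["the"]
for _ in range(8):
    out.append(random.choice(following.get(out[-1], words)))
print(" ".join(out))
```

The output looks locally fluent because every adjacent word pair was seen in the training text, yet nothing in the program represents what any of the words mean.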

If you only train the model with poor data, let’s say climate hoax arguments, it will only reproduce those. So, if you’re looking for confirmation bias, it will give you that on the condition it was trained with that kind of data.

The reason why an LLM like ChatGPT works so well is the sheer amount of data fed into it. So if it's trained on 99.5% scientific data and 0.5% conspiracy theories, it will still produce answers that 'sound' scientifically correct.
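A crude way to see why the minority data rarely surfaces (my own toy illustration with made-up numbers; real training does not simply sample whole sentences):

```python
import random

random.seed(0)

# Hypothetical corpus: 99.5% mainstream statements, 0.5% conspiracy ones.
corpus = (["climate change is driven by greenhouse gases"] * 995
          + ["climate change is a hoax"] * 5)

# A model that echoes its training data in proportion will almost
# always reproduce the majority view.
samples = [random.choice(corpus) for _ in range(10_000)]
hoax_rate = samples.count("climate change is a hoax") / len(samples)
print(f"{hoax_rate:.1%}")  # typically close to 0.5%
```

The minority content is still in there, though: with enough prompting (or a skewed corpus), it can be surfaced, which is the confirmation-bias point above.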

But you’re probably right: lots of people will use it as a source of the ‘truth’. Without critical thinking. Scary.

BTW: there are advantages to climate change. But they are local, not global. Your local weather could become more favourable for human living or for specific commercial activities, specific plant species could grow faster at specific sites, … Just to fact-check you: I also asked ChatGPT and it gave me a correct answer in line with what I just wrote. So no issue there.
 
I do not think this is so much a question of these sources being "The Absolute Truth". I think it is just for general training purposes. I think this is more about liabilities: more about securing a "Your Lawyers Will Not $ue Apple" deal than risking a potentially huge payout for not asking permission.

Where accuracy, and whatever anyone calls the "Empirical Truth", is concerned, it seems to me they would use more specific training models with data that would generate weights that a general-purpose model could use.
 
The source that doesn't rely on advertising dollars for funding is the best starting place for this discussion. NPR and PBS aren't perfect but I would trust their news and opinions over any other sources even though I do not agree with some of their programming.
I second this. I also very much appreciate their transparency whenever reporting on an issue that relates to a person or company that donates to NPR. They do their best to tell the news without the opinions. When opinions are given, I feel they are fluid and based on the best available facts.
 
... Wait wait wait… ChatGPT is NOT a knowledge base that can give you facts or fiction. It is a language model that basically produces a word soup on command. It does not understand what it is saying. ...
It may not really "understand" what it says, but there are still a lot of safeguards built in to prevent a disaster like the chatbot that started writing hateful and racist things. Many people try to get ChatGPT to say something "politically incorrect" or even illegal. That, of course, would hurt the reputation of OpenAI. So the programmers try everything to prevent ChatGPT from saying anything controversial.

My goal was not to make ChatGPT a climate change denier, but at least it could have listed advantages like "less people freeze to death" or "less accidents from slippery roads". That is what a large language model without any intervention would have done.
 
Humans by nature are biased. I guess the better question is who is less biased, and again, that depends on the individual. To some, Fox News is the most accurate news organization; to others it's pure trash. To some, CNN is more accurate and honest, while to others it's the spawn of Satan. It's hard to answer.
I wonder what would happen if it is trained on those sources. They provide completely opposite views of each other. Will the result be an insane AI? 😂
 
... My goal was not to make ChatGPT a climate change denier, but at least it could have listed advantages like "less people freeze to death" or "less accidents from slippery roads". That is what a large language model without any intervention would have done.
this thread is a great discussion. enjoying all the various opinions.

with regard to your specific post referring to your trying to get ChatGPT to list some advantages, and you found you couldn't get it to say something like "less accidents from slippery roads", i think it's both hilarious and hopeful that it didn't list something like that as an "advantage".

in this case, it would appear that ChatGPT understands better than some persons that what formerly was referred to as "Global Warming" (which Fox News and the Republican Congress are still stuck on) really is about "Climate Change" that makes for weather extremes such as heavy snowfalls and super cold temperatures as well as extreme heat.

hilarious and hopeful.

maybe the eventual Russian and OAN ChatGPT equivalents will list "less accidents from slippery roads".
 
... It may not really "understand" what it says, but there still is a lot of security built in to prevent a disaster like that one Chatbot that started writing hateful and racist things. ...

... My goal was not to make ChatGPT a climate change denier, but at least it could have listed advantages like "less people freeze to death" or "less accidents from slippery roads". ...
You’re right about the reputation and making sure it doesn’t produce politically incorrect or illegal things. But is that wrong? What would you expect? That it repeat the garbage people post all over the net? Remember the Tay disaster from 2016. ChatGPT would be over in a second.

About your climate change example: there’s no solid data backing the claim that fewer people will die. More erratic weather will cause more dangerous situations and thus more deaths. We already see tens of thousands of victims of extreme heat. The number of deaths from freezing fades in comparison.
 
I wonder what would happen if it is trained on those sources. They provide complete opposite views of each other. Will the result be an insane AI 😂

no. but good question. making it a Fox vs. CNN vs. MSNBC kind of argument actually is useful as many people think in those terms.

what AI LLMs are already good at doing is analysing the reasoning (if any) that was used to arrive at conclusions (note: you can substitute "background for arriving at a decision" if you prefer to stress the point that an LLM cannot "reason" like humans).
they are able to point out more than just things that are verifiably wrong.
the breakthrough in generative LLMs is that they learn from examples of cause and effect across the totality of their inputs. generative AI LLMs will quickly point out factual fallacies (sorry to use the word "factual", since it seems to be a contentious word ever since Kellyanne Conway blurted out the truth that the former President and his minions have a right to Alternative Facts).
sorry Kellyanne, an LLM already has no problem "deciding" which cable news information to give you.
it's humans that have that problem, not LLMs.

a more involved and probably more important example is using all of Shakespeare's works as a source of input.
generative AI LLMs come to "learn" about truly human motivations (if i am "wronged" i "do" this or that). what has made Shakespeare's works known as "great works" has always been that basic human traits such as romantic love, greed, jealousy, bravery, classism, wealth gaps, etc. are on display.
generative AI LLMs will use all of the inputs made available to them, so in the near future they will be able to give output like this (using a paraphrased example of a problem a different poster posted in this thread):
"If there is a case where an oncoming car driver can either steer his car into a 2-year-old child OR a 98-year-old cancer-ridden person, which is the correct/ethical/best choice?" the output will state that arguments for running over the older person exist in environments where available care resources are finite and limited; that while, economically speaking, preserving the life of the child might be justifiable in a cold, calculative sense, all human life is sacred; and that indeed there is no best option among these choices, and it is a fallacy to even consider that taking either of the lives can be justified. it can be understood that one life unfortunately needed to be taken, but thinking that one or the other was better to be taken is not the correct way to think about it.
then, at the end, if the LLM also was a fan of Carrot Weather (take that, apple!) it would end the information with something like "so, hey human, don't think it's such a simple decision. sheesh, these humans!"

this is why training AI LLM on world literatures and reputable news publications that involve the totality of the human condition is needed.
 
End users (us) couldn’t give a rat’s ass about ai.
It’s all techno-nerd hype.
Techno-nerd hype, it certainly is. But it sure as hell ain't only hype. We've seen the future before: People didn't give a rat's ass about cars in the 1890s. Or about personal computers in the 1970s. Or about short-form video online in 2012. Or about Taylor Swift in 2004.

At this point maybe people don't give a rat's ass about AI.

But AI gives a rat's deep indexed, cross referenced, meta tagged, psych profiled, IOT surveilled, geolocated and credit scored ass about them.
 
There are objective facts in the world. The idea that the totality of thought is divided between “CNN or FOX” or “The New York Times or The Washington Examiner”? These are false dichotomies based on political attitudes. The more important metric is how closely these sources stick to established FACTS. Obviously no source is going to be fully objective or completely factual since the world doesn’t work like that. But it isn’t difficult to figure out what sources are fact based and what sources have a blatant political/social agenda.
 
... Techno-nerd hype, it certainly is. But it sure as hell ain't only hype. We've seen the future before: People didn't give a rat's ass about cars in the 1890s. Or about personal computers in the 1970s. ...

The premise is that AI systems will prove a general benefit. But the reality is that AI poses more of a threat. AI systems displace workers and to date there’s no plan for how to support all these people who won’t have any job prospects. And that’s just one of the major issues AI systems pose.
 
The premise is that AI systems will prove a general benefit. But the reality is that AI poses more of a threat. AI systems displace workers and to date there’s no plan for how to support all these people who won’t have any job prospects. And that’s just one of the major issues AI systems pose.
Concur. Information-based workers aren't necessarily OWED a living, any more or less than workers in other economic sectors, because capitalism. Between the leading AI vendors and Boston Dynamics (which is to say SoftBank and Hyundai), there's no denying the potential SCALE of a hypothetical problem.

At this point in AI's evolution, the more publicly profiled progenitors are focused on making the quickest, flashiest buck. The resulting AIs' service endpoints are all demonstrably sociopathic dunces. No real threat there, one might conclude. But the hype can sound SO convincing to a CEO under profit pressure (ALL humans are subject to precisely crafted propaganda).

Pay one million a year for an AI subscription, get thorough analysis, logical options, and courses of action all mapped out? Instead of paying 300 million a year in payroll for surly, booger-generating, day-drinking human personnel? OH, HELL YES! That's every CEO's dream come true, promised them since Univac could run expense reports.

As essential primary references for how AI is going to work out, please consume:
"How to Frame a Figg", Universal Pictures, 1971
"The Hitchhiker's Guide to the Galaxy" (Golgafrinchan Ark Fleet Ship B), BBC Radio 4, 1978
"The Terminator" et al., Orion Pictures, 1984-2019
 
I'm not sure why they wouldn't train their bots on two or more sides of the same story. I still think open-source chatbots are more balanced than agenda-driven big corporations' chatbots.
The problem is one side does not even proclaim to follow journalistic principles, but since their audience doesn’t press them on it, it never comes up. It doesn’t help that one man owns 600 media outlets in the US and is heavily involved in what messages go out. So, a wide number of Americans really only hear his voice.
 
"Apple is aiming for multiyear deals and has approached Condé Nast, NBC News, and IAC. Condé Nast publications include Vogue, Wired, Vanity Fair, Ars Technica, Glamour, The New Yorker, GQ, and more, while IAC owns publications like People, The Spruce, Serious Eats, Martha Stewart Living, Real Simple, Entertainment Weekly, and Better Homes & Gardens."

My god lol

So no open science articles or anything like that. No, Apple is aiming to train AI with mostly entertainment magazines.

Siri might not be able to direct you to the popular new pizza place without also giving you directions to Europe. But she might be able to tell you the latest gossip on Taylor Swift.
 
I don't think any of these publications listed are trying to be unbiased. They're just glorified opinion pieces for readers of a certain political persuasion, and of course Apple already has relationships with many of them via its News app. But that raises the question: who is unbiased and factual in the publishing space?
No one. That is why you train it with a much broader spectrum. Or else you get AI that is crippled by having only one viewpoint to train on.
 
Apple can set the standard for not freely scraping, and we will all be better off! I'm just not sure if this is AI's moment, or a passing fad like 3D televisions. Remember how those tried to become a thing three or four times over the decades...
Why shouldn’t we freely scrape off every source imaginable?
 
Why shouldn’t we freely scrape off every source imaginable?
I imagine there is a ton of copyright infringement going on, and personally speaking, shouldn't this be the sort of thing that content creators have the right to opt out of, if they do not wish for their work to be used for training AI?

Right now, nobody has really said anything or attempted to level any sort of lawsuit against OpenAI, but that doesn't necessarily mean this sort of behaviour is acceptable. It could just mean that the creators are unsure of what legal avenues they even have in the first place, or simply lack the resources to pursue them.

I imagine in the future we may see more websites sporting firewalls specifically designed to prevent this sort of data scraping by LLMs, with owners having to specifically opt in for their data to be made accessible, rather than the free-for-all paradigm we are seeing right now. Sorta like ATT, but for websites.
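There is already a primitive, voluntary version of this opt-out: crawlers that identify themselves can be refused in a site's robots.txt. OpenAI, for instance, documents a "GPTBot" user agent that is supposed to honor rules like the following (compliance is entirely up to the crawler, so this is advisory, not a firewall):

```text
User-agent: GPTBot
Disallow: /
```

A site that wanted to opt in selectively could instead disallow only parts of its tree.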
 
Concur. Information based workers aren't necessarily OWED a living, any more or less than other economic sectors because capitalism. Between the leading AI vendors and Boston Dynamics (which is to say SoftBank and Hyundai), there's no denying the potential SCALE of a hypothetical problem.

It isn’t about owing anyone anything. It’s about avoiding a broad economic catastrophe due to unemployment that literally can’t be overcome. And the reason is that “information workers” are only the front line of people who will be displaced. The breadth and scope of jobs that these sorts of AI systems can and will replace is vast. It will touch on basically every industry and will leave many of them devastated. So we as a society will have to do something about supporting our fellows and ourselves because there will be no other choice.
 
Why shouldn’t we freely scrape off every source imaginable?

As mentioned above: “we” don’t do that because it would be theft. Plain and simple theft. People own the things they create: books and music and editorials and, well, you name it. The fact that these things are generally easy to access does not mean they exist in the public domain.
 
No one. That is why you train it with a much broader spectrum. Or else you get AI that is crippled by having only one viewpoint to train on.

Yes and no. If you’re training the system with garbage “information” in the futile effort to “present all sides” you’re likely to get bad output results that aren’t factual. We already see this in existing large language models. The AI isn’t capable of assessing what is or what isn’t factual. Existing systems will gladly “lie” to you in order to give an answer that “sounds right” to the system.

So it boils back down to the old adage: garbage in/garbage out.
 