this is excellent news.

generative language models are like children.
children, like leaves, don't fall far from their parental tree trunk.

it's clear that the results from each of the dozen or so emerging LLM giants depend on the databases used to shape their output.

diverse cultures (read: foreign cultures) need to be included in the original databases, because what can (and probably will) emerge are culturally specific results (an LLM shaped largely by usamerican input will bend towards a usamerican bias; an LLM trained on western european databases will yield a western european bias).

generative AI LLMs represent a once-in-a-thousand-years opportunity to help the world come closer together instead of building walls between people.

great literature and thorough journalism can come from many, many countries. usa, egypt, russia, china, japan, greece, italy, south africa. everywhere really.

people use a combination of deductive and inductive reasoning; objective experience and learned personal biases. all of us on this planet are living results of these biases, received from our parents and from our cultures.

i hope apple partners with many, many non-usa publishers as well.
as John Lennon told us: Imagine…
What a great point. It hadn’t even occurred to me that these LLMs know every language on Earth and we should therefore be feeding the great works from each language, at a minimum.
 
means Apple is lagging behind when it comes to AI technology

FFS. Just because Apple doesn’t have a chatbot does not mean it’s lagging behind in AI. Seriously, how hard is it to scrape data and make an LLM? Given that everyone and their brother is building their own LLM, it just goes to show it’s not technically difficult… it’s mainly time consuming and resource draining.

Apple’s new auto correct and predictive text use the same transformer language model architecture as ChatGPT and every other chatbot. Apple uses “AI” for everything from managing power efficiency in their silicon, to selecting handwritten text in Notes, to object occlusion in augmented reality scenes. It’s pervasive throughout their devices and operating systems.
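For anyone curious what "transformer language model" means for predictive text: the last step of any such model is the same, whatever the vendor. It assigns a raw score (logit) to every candidate next word, converts the scores to probabilities with a softmax, and surfaces the top few as suggestions. This toy sketch is purely illustrative (the words and scores are invented, and it is not Apple's implementation):

```python
import math

def softmax(logits):
    # Subtract the max score for numerical stability before exponentiating.
    m = max(logits.values())
    exps = {word: math.exp(score - m) for word, score in logits.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def predictive_text(logits, k=3):
    # Return the k most probable candidate words, best first.
    probs = softmax(logits)
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Hypothetical scores a model might assign after "I'll call you ..."
logits = {"later": 4.1, "tomorrow": 3.7, "banana": -2.0, "tonight": 3.2}
print(predictive_text(logits))  # ['later', 'tomorrow', 'tonight']
```

The keyboard suggestion bar is essentially this, repeated after every keystroke with a far bigger vocabulary.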
 
Humans by nature are biased. I guess the better question is who is less biased, and again that depends on the individual. To some, Fox News is the most accurate news organization; to others it’s pure trash. To some, CNN is more accurate and honest, while to others it’s the spawn of Satan. It’s hard to answer.

But that’s not how news should work in a functioning democracy. You can’t mention CNN and Fox News in one breath as if they’re two sides of the same coin.

News channels will always somewhat colour and frame the news, but they should at least agree on the facts. In the US even that is no longer the case.

It’s understandable Apple will not fall into that trap. They’ll most likely focus on pop culture and entertainment for their LLM. And if you want the news, you’ll be able to get your flavour of poison directly from the source.
 
I have noticed that the new Siri has stopped understanding my English.

It is like a different person has replaced her: she misunderstands even searches for bus connections to the city centre,

she wants to remind me that the flashlight is already off when I want to turn it on,
and she misunderstands other basic functions.

It annoys me so much that I think I must stop using Siri.
 
Good grief. Talk about choosing completely unbiased and factually reliable publication groups to train data on.
Facts don’t matter for training a language model. It’s supposed to teach it language.

I know people are selling language models as knowledge databases, but I hope Apple isn’t as reckless. LLMs should be the "UI" to reliable information, not sort it out themselves among their hallucinations.
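The "LLM as UI to reliable information" idea has a name: retrieval-augmented generation, where the system first looks up trusted documents and answers only from what it found, refusing otherwise. A minimal sketch, with invented sources and naive keyword matching standing in for real vector search:

```python
import re

# Hypothetical curated corpus; a real system would index far more.
TRUSTED_SOURCES = {
    "apple-silicon": "Apple silicon chips integrate the CPU, GPU and Neural Engine.",
    "chatgpt": "ChatGPT is a chatbot built on a transformer language model.",
}

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question):
    # Keyword-overlap scoring stands in for real vector similarity search.
    q = tokens(question)
    scored = {key: len(q & tokens(doc)) for key, doc in TRUSTED_SOURCES.items()}
    best = max(scored, key=scored.get)
    return TRUSTED_SOURCES[best] if scored[best] > 0 else None

def answer(question):
    context = retrieve(question)
    if context is None:
        # Refuse rather than let the model hallucinate an answer.
        return "I don't have a reliable source for that."
    return f"According to a trusted source: {context}"

print(answer("What is ChatGPT?"))
```

In a production system the retrieved text is handed to the LLM as context and the model is instructed to answer only from it; the point of the sketch is the refusal path, which is exactly what keeps the model from "sorting it out among its hallucinations."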
 
These are mostly entertainment publications; the need for unbiasedness is lessened by the subjectivity inherent to this kind of content. "News" it is not.
The same principle applies as with history: it is written by the winners. Here the rules are set by those who rule. So forget about unbiased info/answers, even though AI has far greater potential than we do to analyze information and decide what is more probable. With the exception of some small teams, AI is programmed to give the "right" answers with one "truth". Otherwise those winners/rulers won't be winners for long lol.
 
It stands to reason that if you train an LLM on a more literate, more erudite set of texts, you get a more well-spoken chatbot out of it. That has a certain intangible value; I think it is a very Apple thing to do.
 
What is Apple going to use this for? Training data on publications like People and GQ for what? TV+ features?
 
I have noticed that the new Siri has stopped understanding my English.

It is like a different person has replaced her: she misunderstands even searches for bus connections to the city centre,

she wants to remind me that the flashlight is already off when I want to turn it on,
and she misunderstands other basic functions.

It annoys me so much that I think I must stop using Siri.
This is so true. Way too often it says "The lights are already on" when I say "turn the lights off". And seriously, how likely am I to ask it to turn them on if they're already on? It just goes with the first thing it thinks it heard instead of applying some logic :/ Maybe there's another phrase I can use like "switch the lights" without specifying, but I haven't tried it.
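The "some logic" being asked for can be sketched in a few lines: when the command as heard would be a no-op given the current device state, assume the speech recognizer misheard the far more plausible opposite. Everything here is invented for illustration, not how Siri or HomeKit actually work:

```python
def handle_light_command(heard, lights_on):
    """Return (new_state, spoken_reply) for a heard on/off command."""
    want_on = "on" in heard.split()
    if want_on == lights_on:
        # The command as heard is redundant; the opposite intent is
        # far more likely, so assume a misrecognition and flip it.
        want_on = not want_on
    return want_on, f"Turning the lights {'on' if want_on else 'off'}."

# Lights are on; recognizer heard "on" but the user almost surely said "off".
state, reply = handle_light_command("turn the lights on", True)
print(reply)  # Turning the lights off.
```

A cautious assistant would confirm ("They're already on — did you mean off?") instead of silently flipping, but even that requires consulting the state first, which is the poster's point.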
 
So Apple will only choose woke media to train its AI? What could go wrong?
Ah yes. The woke media that is Martha Stewart Living.

So what are you proposing? Adding a bit of Fox News or Newsmax to the mix, so Siri sounds more like your racist uncle? Or going all in on so-called free speech and letting it train on the sewage of Truth Social?
 
Another thing that comes to mind: if Apple can set the expectation that content creators should be compensated for the data scraped by LLMs, what impact will this have on existing companies like OpenAI, which is still not profitable? Apple has more than enough money to bankroll this for all eternity. So once again, it may not be a question of who is first, but who the last man standing will be.
Remember that Microsoft is investing money in OpenAI, gave them access to Azure, and even has a non-voting seat on the board. If for some reason OpenAI cannot survive by themselves, MS could acquire them. With MS's support, I don't see OpenAI having money issues.
 
So what are you proposing? Adding a bit of Fox News or Newsmax to the mix, so Siri sounds more like your racist uncle? Or going all in on so-called free speech and letting it train on the sewage of Truth Social?
Yes, include all sources. An AI should be "smart" enough to figure out fake news without needing to be given only curated information.

ChatGPT already shows a problematic bias. I asked it the simple question "Are there any advantages of climate change?" and it refused to answer, instead telling me all the well-known reasons why climate change is bad. Imagine, though, that you are a lawyer and want to defend your client against the attacks of a smart lawyer on the other side. Then you want to know all the arguments that contradict your position, because those are the arguments the other side could come up with. I am sure that ChatGPT will also take a one-sided position on other controversial topics: gun laws, abortion, military conflicts, and so on. The more people use ChatGPT or any other AI, the more influence it will have on public opinion.

That always was my biggest fear when I heard of a "general AI": that some programmer told it some ethics instead of letting it develop its own. In the future, AI will have to make a lot of life-and-death decisions, for example in the case of an unavoidable accident: is it better to kill one child, or two very old people who might die soon anyway? On many of those questions society comes to different conclusions than I do. I, for example, don't think that the life of an old person is worth less than the life of a young person.

I learned that in one of the biggest military conflicts at the moment, AI already chooses the targets for rockets. I did not see that dystopian future coming so quickly. It already reminds me of "Skynet" from the Terminator movies. I expected something like that in 2060, not in 2023.
 
FFS. Just because Apple doesn’t have a chatbot does not mean it’s lagging behind in AI. Seriously, how hard is it to scrape data and make an LLM? Given that everyone and their brother is building their own LLM, it just goes to show it’s not technically difficult… it’s mainly time consuming and resource draining.

Apple’s new auto correct and predictive text use the same transformer language model architecture as ChatGPT and every other chatbot. Apple uses “AI” for everything from managing power efficiency in their silicon, to selecting handwritten text in Notes, to object occlusion in augmented reality scenes. It’s pervasive throughout their devices and operating systems.
You have to consider that Apple cannot make just another chatbot / AI service. They have to make something at the level of OpenAI / Microsoft and Google, and that's not easy.
 
Looks like the rest of the world needs to build their own LLMs. Having two to five different LLMs in each country would be broad enough to capture variances in culture, religion, and political systems.
As a scientist I suggest:
Nature (Springer)
Science
Wiley
Elsevier
PLOS Journals
 
I'm not sure why they wouldn't train their bots on two or more sides of the same story. I still think open-source chatbots are more balanced than agenda-driven big-corporation chatbots.

The presumption is that both sides are equally valid points of view, which isn't always the case. Presenting facts from both sides is important, but you need not, nor should you, present some crackpot POV simply because it represents the other side.

The source that doesn't rely on advertising dollars for funding is the best starting place for this discussion. NPR and PBS aren't perfect but I would trust their news and opinions over any other sources even though I do not agree with some of their programming.

I agree, and have found them to offer balanced reporting and differing viewpoints to weigh, although they have to skew some of their opinion programming toward the viewpoint of their supporters to keep getting enough funds to stay on the air.
 
Remember 2013 when people said Apple cares too much about privacy and no one else really cares about it? And because of this privacy obsession Apple is doomed? And then after a few years the same people said privacy was very important, but Apple is just doing it for money and not because they care?

Well, in a few years the same people will say that getting rights to train AI on copyrighted material is important, else large companies are just stealing from content creators and owners. But Apple doesn’t care, they just did it for money.
 