Right now, nobody has really said anything or attempted to bring any sort of lawsuit against OpenAI, but that doesn't necessarily mean this sort of behaviour is acceptable. It could just mean that the creators are unsure of what legal avenues they even have in the first place, or simply lack the resources to pursue them.

False.

In fact, OpenAI is facing a growing mountain of legal challenges over its use of copyrighted material to train its systems.
 
Fair enough point, though I would like to highlight that the companies suing OpenAI are all pretty big themselves (meaning they have the resources to hire the best lawyers and engage in a protracted lawsuit). I am thinking more about the everyday person who may have their own blog. It's not much, and it doesn't necessarily make them any money, but it's still their content, so what recourse do they have?

At least when Google indexes their links, there is a chance that this can help direct more traffic to their blogs in the future, so there's a win-win scenario here. But with LLMs, the whole idea is that they make visiting websites redundant because the information would be summarised for you right away.

While we are at it, what's MacRumors' stance on this matter? Any preventive measures taken so far?
 

Several of the suits are being brought by individual writers. This is a major problem for all AI systems going forward. You can’t just take and use copyrighted material and not pay for it.
 
My biggest issue with Siri is that when setting up a smart device, you have to train it with every command. If it's a smart light, why can't Siri just know how to turn it on or set it to a colour or light level?
 
I'm stunned how many posts here complain about "bias" or being "woke" in the publications whose work Apple is reportedly looking to license for this.

This piece is about Apple looking to train a LLM. So it can, in effect, learn to speak and write. To learn language and how it is used. To learn words and grammar. To learn how to write everything from a simple declarative sentence to an essay.

Is the alphabet "biased?" Are the rules of clear and communicative writing "woke?" Because how to express thoughts, not which thoughts to express, is what this is about.

If this deal happens and if Apple does create an AI chatbot or whatever it is, it will have been trained to "write" on the edited, curated articles found in the archives of those publications, after they have been paid by Apple for that access.

The only biases I am seeing are the ones that people are bringing to this article, by presuming that Apple's product won't adhere to a political leaning that they agree with.
 
Apple is aiming to train AI with mostly entertainment magazines.
Let's see: "Vogue, Wired, Vanity Fair, Ars Technica, Glamour, The New Yorker, GQ" - you know what they have in common?

It's this: high level of English writing. As in, content presented with a mastery of the English language. Both Vanity Fair and The New Yorker have for many years represented high water marks for widely available periodicals.

Now, you may not like their content (... why?)

Professional journals tend to be written in jargon that has a very limited audience, so they are not the best choices for an LLM, at least for the majority of the training.
 
Again, people have this misconception that LLMs learn the content that they're being trained on, but the AI (or more accurately, a neural network) is only learning the grammar and how things should be written.

There does not need to be a political balance because the model does not learn politics; it's learning how to write.

I'm assuming Apple is going to use this in a mix of "find info" and "generate contextual phrase" with Siri, so when you ask her "Hey Siri, when did George Washington assume the role of president?" another model fetches the data online and asks the LLM to form a phrase using that info.

It's like asking a vocal synthesis model to generate a song: yes, it can sing, but the model only learns contextual acoustic data, not the composition of the song itself.
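That retrieve-then-phrase split can be sketched roughly like this. Everything here is hypothetical (the function names, the canned fact table) and stands in for a real retrieval step; the point is only that the language model is handed the fact and asked to word the answer, not to recall it:

```python
# Sketch of a retrieve-then-phrase pipeline. All names and the canned
# fact are hypothetical stand-ins for a real search/knowledge-base step.

def fetch_fact(question: str) -> str:
    """Stand-in for a real retrieval step (web search, knowledge base)."""
    facts = {
        "when did george washington assume the role of president?":
            "George Washington became president on April 30, 1789.",
    }
    return facts.get(question.lower().strip(), "No fact found.")

def build_prompt(question: str) -> str:
    """Combine the user's question with the retrieved fact, so the model
    only has to compose a sentence, not recall the fact itself."""
    fact = fetch_fact(question)
    return (f"Using only this fact: {fact}\n"
            f"Answer the question in one natural sentence: {question}")

print(build_prompt("When did George Washington assume the role of president?"))
```

The prompt that comes out carries the retrieved date, so any hallucination risk is limited to the phrasing, not the fact.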
 
Again, people have this misconception that LLMs learn the content that they're being trained on, but the AI (or more accurately, a neural network) is only learning the grammar and how things should be written.
this.
thanks for posting this important point.
it is, after all, why it is called a LANGUAGE model.

it is reasonable to say that these generative large language models are gathering data like this:

1 factual info (even though some people want to say "but MSNBC facts are different from Fox facts," which is not true; these people unfortunately cannot see the way that info is introduced or interpreted and is leading them towards a certain conclusion). sorry people, facts are still facts.
"Biden won the 2020 Presidential election. However, there are conspiracy theorists who maintain, without evidence, that the election was tainted with irregularities, which has not been proven in any court of law." Sorry, Kellyanne.

2 deep structural grammar (not the grammar that people think of from elementary school)
this is how we communicate and interpret (meaning: make sense of) our surrounding environment.
structure of adjectival phrases; copula; the associative ability to rearrange the order of words and phrases to be able to communicate with each other.
morning. sky. red.

3 but i have also mentioned that there is an additional important feature that LLMs are at the same time also "learning," which is gained from seeing trillions and trillions of examples of human logic and emotions (what people say when various things are happening, and the key motivations of what people do and say when they face certain situations).
morning. sky. red. nice!
the "computer" doesn't get all warm and fuzzy by seeing a nice morning sunrise, but it has observed that humans do.
this is why training LLMs on the world's great literature, which contains the breadth of our human experience, is necessary.
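to make the "it learns how to write, statistically" point concrete, here is a toy bigram model. this is nothing like a real transformer LLM, just next-word counts over a made-up sentence, but it shows the core idea of predicting what word tends to follow what:

```python
from collections import Counter, defaultdict

# toy bigram model: count which word follows which in the training
# text, then predict the most common follower. a crude illustration
# of statistical next-word prediction, not how a real LLM works.

def train(text: str) -> dict:
    words = text.lower().replace(".", "").split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def next_word(model: dict, word: str) -> str:
    """Return the most frequent follower of `word`, or "" if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

model = train("the red sky in the morning means the sky is red")
print(next_word(model, "red"))  # the only word ever seen after "red" is "sky"
```

scale that counting idea up by many orders of magnitude and add context beyond one word, and you get the flavour of what "learning language" means here.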
 
it is, after all, why it is called a LANGUAGE model.
as a person that is interested in ai (and not in the nft-bro way of OMG THIS IS SO SMART I HOPE IT REPLACES WRITERS AND ARTISTS IT TOTALLY IS SENTIENT AND KNOWS EVERYTHING!!!!!), i hate the way the current scene is being misrepresented. the technology is not a fad nor a replacement; it is a tool, like photoshop is to photo editing. people either have this totally incorrect acceptance of it or a complete aversion, and neither is right! even generative models for music or images can be made and used in the correct way, but people don't know how (except for the singing vocal synth scene, which already implements some sinsy/tacotron-derivative stuff in CeVIO AI/Synthesizer V AI/VOCALOID6 in an ethical and commercial way)
 
this is why training LLMs on the world's great literature, which contains the breadth of our human experience, is necessary.

Only if it’s in the public domain.
 

I can tell you as a graphic designer who uses Photoshop in my work, Adobe’s AI features are already doing the work that entry level designers would do. It’s only a matter of time before Adobe can basically replicate any style of art via text prompts. A lot of graphic artists will be out of a job.
 
Disappointing.

That’s a very skewed list.

I’d expect apple to go for a more well rounded approach.

If you want nothing but a liberal opinion echo chamber, then you’ll be happy with that list.

But if you’re more conservative or somewhere in the middle, that’s a major letdown.

Personally, I go to a variety of sources before I form an opinion: CNN, Fox, Sky, BBC, MSNBC, etc. I used to be a fan of Ars Technica, but over the last few years they've devolved into amateurish opinionators on the scale of, say, Engadget. I'd rather have some real journalism, something to rely on without the need for concern over tainted or skewed results based on some author's or publisher's opinion.

If the world thought Russia "hacked" a US election by posting ads and messages on Facebook, how much worse are such tactics when baked into a source you're supposed to trust as your go-to?

If it were skewed conservative, that would be lame. If it were skewed liberal, that's lame too. Better to get actual factual data without opinion attached. None of the publishers on that list seem capable. So at the very least, get a broader set of data from a wider group more indicative of the world at large, not just a subset.
 
You and me both. It's a sad, sad day. It basically amalgamates the work of so many true artists and combines it.

Hopefully there are some honest laws put in place to constrain what often amounts to theft, even if it's just stealing 5% from this artist and 5% from that artist, with a bit of algorithm to mash it up enough to be "original."

Soon AI will be combined with robotics and mechanics; nurses, surgeons, automakers, etc. will all be out of jobs.
 
yeah, the same way very cheap projects were replaced by people ****ing around with clipart and canva. it is what it is: graphic designers are in a downhill fight against technology bros and companies looking to save money, and you just gotta adapt or die (as a graphic designer, not literally).
big projects and studios will still exist for the people that actually give a darn, because simply typing "good pretty ad for headphones render trending in artstation" won't cut it for them. note that this does not mean endorsing unethical ai models. if it's gonna be trained off art, it better be off people that consented to it.

i say this as a graphic designer too, but one's gotta adapt and move on from small projects, or else you're gonna be cut or become an "ai prompter" designer. it is what it is.
 
No, it’s far beyond that already.
 