

OpenAI, known for its ChatGPT chatbot, today submitted AI recommendations to the Trump administration, calling for deregulation and policies that give AI companies free rein to train models on copyrighted material in order to compete with China on AI development.


AI companies cannot freely innovate while having to comply with "overly burdensome state laws," according to OpenAI. The company claims that laws regulating AI are "easier to enforce" with domestic companies, imposing compliance requirements that "weaken the quality and level of training data available to American entrepreneurs." OpenAI suggests that the government provide "private sector relief" from 781+ AI-related bills introduced in various states.

OpenAI outlines a "copyright strategy" that would preserve "American AI models' ability to learn from copyrighted material." OpenAI argues that AI models should be able to be trained freely on copyrighted data, because they are "trained not to replicate works for consumption by the public" and thus align with the fair use doctrine. OpenAI says that the European Union, with its AI copyright laws, has repressed AI innovation and investment.

OpenAI claims that if AI models are not provided with fair use access to copyrighted data, the "race for AI is effectively over" and "America loses." OpenAI asks that the government prevent "less innovative countries" from "imposing their legal regimes on American AI firms."

For AI data sharing, OpenAI suggests a tiered system that would see AI tech shared with countries that follow "democratic AI principles," while blocking access to China and limiting access to countries that might leak data to China. The company also suggests government investment in utilizing AI technology and building out AI infrastructure.

The use of copyrighted material for AI training has angered artists, journalists, writers, and other creatives who have had their work absorbed by AI. The New York Times, for example, has sued Microsoft and OpenAI for training AI models on news articles. Many AI tools assimilate and summarize content from news sites, driving users away from primary sources and oftentimes providing incorrect information. Image generation engines like DALL-E and Midjourney have been trained on hundreds of millions of images scraped from the internet, leading to lawsuits.

OpenAI has submitted its proposals to the Office of Science and Technology Policy for consideration during the development of a new AI Action Plan that is meant to "make people more productive, more prosperous, and more free." The full text is available on OpenAI's website.

Note: Due to the political or social nature of the discussion regarding this topic, the discussion thread is located in our Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.

Article Link: OpenAI Calls on U.S. Government to Let It Freely Use Copyrighted Material for AI Training
You either have copyright or you don't. If someone is allowed to use copyrighted material for free, then you don't have copyright. Maybe Sam Altman's bank account should be available to everyone in order to make the world better.
 
Yeah, this is not great. Normally I would be 200% against it, but at the same time China is stealing everything and using it for free to gain an advantage. So it is a tricky situation.

Nothing about this is "tricky"

OpenAI is a private company with no legitimate Fair Use claim here

This isn't "USA vs China" in Olympic sports
 
  • Like
Reactions: Manzanito
Amazing how people are so angry about this.

You think music writers don't write music after listening to copyrighted music? You think one could just say "oh no, my brain listened to The Killer, so I need to completely shut off that part of my brain before I write music"?

Even Louis CK accused Dane Cook of listening to his material and unintentionally writing the same jokes. That's just the nature of the human brain.

Age of abundance is happening. People who are against this are for age of starvation.
 
I wonder what Trump has allowed Elon to hoover up into Grok through all of this. With so much DOGE secrecy and a lack of accountability, we don't know.
 
Amazing how people are so angry about this.

You think music writers don't write music after listening to copyrighted music? You think one could just say "oh no, my brain listened to The Killer, so I need to completely shut off that part of my brain before I write music"?

Even Louis CK accused Dane Cook of listening to his material and unintentionally writing the same jokes. That's just the nature of the human brain.

Age of abundance is happening. People who are against this are for age of starvation.

I see where you’re coming from—human creators inevitably draw on their influences. No one is writing music (or jokes, or novels) with their mind wiped of everything they’ve ever heard. However, it’s a leap to say that this is equivalent to AI companies scraping copyrighted material to train a single, commercial, ultra-scalable model. A human might create one new work at a time, with limited reach and output; meanwhile, an AI can generate massive amounts of content—instantly and on a global scale—yielding unprecedented wealth for its creators.

This difference in scale and capacity is exactly why the question of fair use and copyright for AI isn’t the same as “Dane Cook’s brain accidentally internalizing Louis CK’s jokes.” It’s one thing when an individual person is shaped by their influences; it’s quite another when a corporation ingests massive copyrighted datasets to produce infinite creative outputs at near-zero marginal cost.

Now, I understand the argument that loosening copyright restrictions may be necessary to stay competitive with authoritarian regimes that don’t respect IP rights. If that’s the path taken, however, the logical follow-through is that any AI-generated material—derived from that non-consensually acquired training data—should be placed in the public domain. If the rationale is that “we have to do this for the good of society,” then that benefit should flow to everyone, not just the commercial entity that built the AI.

“Age of abundance” sounds great, but if the abundance of output is locked behind a paywall or serves primarily to enrich a small group, we’re still circling back to “age of starvation” for the original creators or for the public that sees none of the direct benefit. Essentially, if the idea is that we have to do this, let’s make sure we do it in a way that genuinely serves all.
 
So OpenAI believes that authors should not be financially compensated for OpenAI stealing their intellectual property, but OpenAI should be allowed to profit from that theft.

It's sad and pathetic that so many people support billionaires like Scum Altman at the expense of the common man.

No wonder many Silicon Valley oligarchs like Scum Altman (and Tim Crook) donated millions of dollars to Donald Trump's inauguration. They not only want Trump to continue to shift the tax burden from the rich to the working class, but now also want Trump to allow the theft of intellectual property of the non-rich so that billionaire Silicon Valley oligarchs involved in AI can profit even more.


Good! I want American tech oligarchs to lose. They are fascists whose goal is to increase income inequality as much as possible in order to profit as much as possible.
The real fascists are the ones who facilitate them: ordering them around, praising them, and using and abusing them for their own goals.
 
After skimming some of the other comments here, this is probably an unpopular opinion but - how is this any different than how a human learns? I feel like a good example would be an artist. An artist can look at all the paintings and pictures they want and never have to actually buy a single one … that artist can then use what they learned from observations of other art, create their own, and sell it to make money. How is that different than what OpenAI wants to do?
 
I see where you’re coming from—human creators inevitably draw on their influences. No one is writing music (or jokes, or novels) with their mind wiped of everything they’ve ever heard. However, it’s a leap to say that this is equivalent to AI companies scraping copyrighted material to train a single, commercial, ultra-scalable model. A human might create one new work at a time, with limited reach and output; meanwhile, an AI can generate massive amounts of content—instantly and on a global scale—yielding unprecedented wealth for its creators.

This difference in scale and capacity is exactly why the question of fair use and copyright for AI isn’t the same as “Dane Cook’s brain accidentally internalizing Louis CK’s jokes.” It’s one thing when an individual person is shaped by their influences; it’s quite another when a corporation ingests massive copyrighted datasets to produce infinite creative outputs at near-zero marginal cost.

Now, I understand the argument that loosening copyright restrictions may be necessary to stay competitive with authoritarian regimes that don’t respect IP rights. If that’s the path taken, however, the logical follow-through is that any AI-generated material—derived from that non-consensually acquired training data—should be placed in the public domain. If the rationale is that “we have to do this for the good of society,” then that benefit should flow to everyone, not just the commercial entity that built the AI.

“Age of abundance” sounds great, but if the abundance of output is locked behind a paywall or serves primarily to enrich a small group, we’re still circling back to “age of starvation” for the original creators or for the public that sees none of the direct benefit. Essentially, if the idea is that we have to do this, let’s make sure we do it in a way that genuinely serves all.


your argument gets creativity wrong. humans and ai both borrow from prior work; ai just does it more quickly and cheaply. restricting ai training on copyrighted works is like barring artists from learning from the masters. ai doesn't "steal", it creates, the way human artists do, only at scale.

the premise that ai output has to be public domain doesn't hold up either. these models represent heavy corporate investment, in the same way publishers invest in human writers. it's a double standard to insist that ai output must be free while human output stays under copyright.

ai is not killing creativity, it is enhancing it. history shows that new tools broaden art rather than replace it. instead of fighting it, we should embrace ai's ability to foster innovation.

like i said, "age of abundance".
 
After skimming some of the other comments here, this is probably an unpopular opinion but - how is this any different than how a human learns? I feel like a good example would be an artist. An artist can look at all the paintings and pictures they want and never have to actually buy a single one … that artist can then use what they learned from observations of other art, create their own, and sell it to make money. How is that different than what OpenAI wants to do?

OpenAI ≠ a human being

It should not be surprising that we have different goals and opinions based upon what we let individuals do with information on their own, or in fair use, vs a private corporation hoovering things up directly for repackaging and use to support their business use cases (and only their business use cases).
 
  • Like
Reactions: Bungaree.Chubbins
OpenAI ≠ a human being

It should not be surprising that we have different goals and opinions based upon what we let individuals do with information on their own, or in fair use, vs a private corporation hoovering things up directly for repackaging and use to support their business use cases (and only their business use cases).
Yes, an AI model is not a human being - we agree on that haha.

We have different goals and opinions based upon what we let individuals do with information. You know my opinion, but what are my goals? Also, you don’t know what I let people do with my information. The second part of your post sounds like you pulled it from the middle of a conversation we have never had.

I don’t think you know what Fair Use is. Fair Use is not limited to private use of a copyrighted work; it can also cover commercial use for profit.

Are you only against private companies doing this? Google is publicly traded, so you’re good with them doing it?
 
They are entitled to violate as many copyrights as they want, as long as they give Trump enough money first.
 
  • Like
Reactions: darkpaw
But the proposal by OpenAI doesn't seem to include purchasing books, music, graphic materials, etc. from their creators, at least not in the same ways or at the same monetary levels as humans, using the systems that have been in place for many years to properly compensate the creators of those works. And the mass distribution of information that AI enables, unbound by copyright, is one of the things that makes AI literally very different from how humans distribute what they've learned.
Yeah they should access all material in the same way as everyone else!

There is, however, no way to extract or even deduce the material upon which LLMs have been trained; it’s simply not there anymore but for a combined abstraction over all content consumed.
So after it’s been consumed, we can no more tax the LLM than we can humans who produce YouTube material based on knowledge they gathered.

So yes; of course OpenAI etc. should pay for consuming material exactly like the rest of us, especially when they themselves make money off the end product.
 
  • Love
Reactions: turbineseaplane