
MacRumors

macrumors bot
Original poster


Sora, OpenAI's AI video app, will no longer allow users to create videos featuring celebrity likenesses or voices.

[Image: OpenAI Sora app]

OpenAI, SAG-AFTRA, actor Bryan Cranston, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents today shared a joint statement about "productive collaboration" to ensure voice and likeness protections in content generated with Sora 2 and the Sora app.

Cranston raised concerns about Sora after users were able to create deepfakes that featured his likeness without consent or compensation. Families of Robin Williams, George Carlin, and Martin Luther King Jr. also complained to OpenAI about the Sora app.

OpenAI has an "opt-in" policy for the use of a living person's voice and likeness, but Sora users were able to create videos of Cranston even though he had not permitted his likeness to be used. To fix the issue, OpenAI has strengthened guardrails around the replication of voice and likeness without express consent.

Artists, performers, and individuals are meant to have the right to determine how and whether they can be simulated with Sora. Along with the new guardrails, OpenAI has also agreed to respond "expeditiously" to any received complaints going forward.

OpenAI first tweaked Sora late last week to respond to complaints from the family of Martin Luther King Jr., and the company said that it would strengthen guardrails for historical figures. OpenAI said there are "strong free speech interests" in depicting deceased historical and public figures, but authorized representatives or estate owners can request that their likeness not be used on Sora cameos.

Sora launched on September 30, and it has since become one of the most popular apps in the App Store.

Article Link: OpenAI Strengthens Sora Protections Following Celebrity Deepfake Concerns
 
Forget celebrities, this sort of tool can quickly ruin the life of the common person. Just imagine someone making a fake video of you doing something offensive, and then posting it on social media. Before you could defend yourself, you'd be executed by the court of public opinion. Even if there were a retraction or correction, too late, damage done. Soon we will get to a place where we believe nothing we see or hear unless it's happening right in front of us.
 
Absolutely disgusting and reprehensible use of technology, electric power, water, etc. Can't wait till this AI bubble pops.
All these resource-hungry services like cloud computing, digital currency, and AI... Some people made drastic cuts to reduce their daily carbon footprint, and now all those years of lifestyle adjustments have been erased in seconds.
 
Imagine being fired, turned down for an interview, or passed over for a job or promotion because of a fake video. Do we litigate against OpenAI, or against some random person for using these tools?
 
I'm afraid this app is going to be discontinued soon. It's a liability because it's very hard to control the output; people will always find a way around the restrictions. I believe this tech is targeted at big companies anyway, for movies, shows, and ads. That's where the money is, and AI companies can shield themselves by making their big customers sign contracts that prevent them from generating celebrities or other copyrighted content.
 
Fortunately for the average guy, it currently doesn't work that well, because most people aren't in the training data, so the videos will be a very rough approximation of you. The more public videos and photos of you there are out there, from different angles, the more accurate the generation will be.
 
Great that celebrities and the rich might get protection, but what about the rest of us? We all just have to live with the fact that someone could use an app to make deepfakes of us? What good will come of an app like this?

All OpenAI cares about is making the rich richer and lining their pockets. They’re willing to destroy society and the earth to do it.
 
Consider how many millions of people post countless videos of themselves, usually talking extensively. There's probably more than enough high-quality video and audio to get the job done. The part I've never understood about social media is how folks surrender their identities for a handful of likes and followers. We would question it if the government asked to collect all the information people freely give about themselves to a handful of trillion-dollar corporations. 🤷‍♂️
 