OpenAI’s Sora Deepfake Videos: What’s Going On?

OpenAI has launched a new AI tool called Sora, and it’s changing how people make videos.
With just a short text prompt, Sora can create realistic videos — even of people who are no longer alive.

This sounds amazing, but it has also raised serious ethical and legal questions. Let's look at what's happening and why it matters.

[Image: OpenAI’s Sora can generate lifelike videos from text — a breakthrough that’s raising both excitement and concern.]

🎥 What Is OpenAI Sora?

Sora is a text-to-video generator. You type a short prompt, and it produces a video that looks almost real.

Unlike earlier deepfakes, Sora’s videos have smooth motion, natural lighting, and lifelike faces. Because of this, many users are sharing them across social media platforms such as TikTok and YouTube.
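
To picture how simple this is, here is a minimal sketch of what a text-to-video request might look like in Python. It assumes the official `openai` package exposes a Sora-style videos endpoint; the model name, method signatures, and polling flow shown here are illustrative assumptions, not a documented API.

```python
# Illustrative sketch only: the model name and the videos.* method
# signatures below are assumptions, not a confirmed OpenAI API.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a short clip from a plain-English prompt.
video = client.videos.create(
    model="sora-2",  # assumed model identifier
    prompt="A golden retriever chasing autumn leaves in slow motion",
)

# Video generation takes time, so poll until the job finishes.
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    # Save the finished clip locally (assumed download helper).
    client.videos.download_content(video.id).write_to_file("clip.mp4")
else:
    print(f"Generation ended with status: {video.status}")
```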

[Image: Sora can transform a simple text prompt into a realistic video with smooth motion and natural lighting.]

⚠️ Why It’s Causing Problems

The controversy started when users began making videos of famous people who are no longer alive — like Martin Luther King Jr., Kobe Bryant, and Queen Elizabeth II.

Sadly, not all these videos are respectful. Some creators have used the tool to make silly or offensive clips, which has upset families and fans.

Kelly Carlin, daughter of comedian George Carlin, said she found AI videos of her late father “depressing and disturbing.”

So the issue isn’t only about technology; it’s also about respect, consent, and ethics.

[Image: Sora’s power to recreate lifelike videos has sparked debates about respect, consent, and the misuse of AI.]

⚖️ The Legal Grey Area

There are still no clear laws about using the likeness of someone who has passed away.

  • In most countries, privacy and publicity rights end when a person dies.

  • Some U.S. states allow families to control likeness rights, but many do not.

  • Platforms like OpenAI may be protected by Section 230, a U.S. law that limits a platform’s responsibility for user-made content, though whether it covers AI-generated output is still debated.

Because of this, it’s unclear who is responsible if a fake video harms someone’s reputation. The law simply hasn’t caught up with this fast-moving technology.

[Image: As AI advances faster than the law, it’s unclear who bears responsibility for harmful or misleading deepfake videos.]

🧩 What OpenAI Is Doing About It

After public backlash, OpenAI promised to improve its policies.
It now allows families or representatives to request removal of fake videos. Moreover, it plans to add watermarks and content labels to make it clear when a video was created by AI.
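
A practical question follows from this: how would an app or platform actually read such a label back? The sketch below is conceptual only. The sidecar `.provenance.json` file and its fields are invented for illustration; they are not OpenAI’s real watermark or metadata format.

```python
# Conceptual sketch: how a viewer app might surface an "AI-made" label.
# The sidecar .provenance.json format and its fields are invented for
# this example; they are not OpenAI's real watermark or metadata scheme.
import json
from pathlib import Path

def label_for(video_path: str) -> str:
    """Return a user-facing label based on sidecar provenance metadata."""
    sidecar = Path(video_path).with_suffix(".provenance.json")
    if not sidecar.exists():
        return "Provenance unknown"
    meta = json.loads(sidecar.read_text())
    if meta.get("ai_generated"):
        generator = meta.get("generator", "an AI tool")
        return f"This video was created using AI ({generator})"
    return "No AI generator declared"

print(label_for("downloaded_clip.mp4"))
```

In practice, a real system would rely on cryptographically signed metadata, such as C2PA Content Credentials, rather than trusting a plain text file that anyone could edit.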

However, critics argue that these steps are not enough. They believe OpenAI should completely block fake videos of deceased people until stronger rules are in place.

[Image: OpenAI is introducing new safeguards — including watermarks and removal requests — to reduce misuse of its Sora technology.]

🌍 Why This Matters

Sora shows how fast AI is changing the world. On one hand, it opens creative doors for storytelling, marketing, and education. On the other hand, it also blurs the line between reality and illusion.

If we can make anyone “speak” or “move” again, what will happen to history and truth? Will people start doubting everything they see online?

These questions show why we need to think carefully about how such technology is used.


💡 How We Can Handle It Better

Here are a few steps that can make AI video tools safer and fairer:

  1. Ask for consent first: No one’s image should be used without permission.

  2. Show clear AI labels: Every AI-made video should say “This video was created using AI.”

  3. Create stronger laws: Governments should protect digital likeness rights.

  4. Educate the public: Everyone should learn how to identify deepfakes and misinformation.

🧠 Moving Toward Safer AI Videos

OpenAI’s new rules show that the company understands how serious this issue is. By adding watermarks and labels, it wants people to know when a video is made by AI, which can help slow the spread of fake or misleading clips. Still, many experts say laws and technology need to improve together so that people’s privacy and memories stay protected.
