Imagine conjuring vivid, lifelike scenes straight out of a Hollywood blockbuster with just a few lines of text. In a matter of minutes, OpenAI's Sora technology can bring cinematic visions to life, transforming mundane prompts into captivating, hyper-realistic video clips.
With OpenAI’s Sora, woolly mammoths tread through the snow, a papercraft coral reef teems with marine life, and a cat tries to wake its sleeping human in a cozy bed. It’s the latest wave in generative AI models, taking text-to-video creation to new levels of animation and computer-generated imagery.
But media professionals, industry critics, and security experts are concerned about how this technology might be used. Can people from all walks of life tell the difference between video footage and AI-generated videos?
How quickly and easily could Sora spread misinformation and disinformation about science, politics, health and medicine, economics, history and culture, consumer products and services, consequential events around the globe, and more? And what biases might it perpetuate?
Luckily, for now, flaws in OpenAI's Sora make it possible to detect fake videos, as seen above. This WSJ Explains video shares what to look for.
Soon, AI technology may be powerful enough to resolve many of these flaws, and it could become a valuable video tool for creatives. But could it also easily trick people? Could people begin to mistrust what they see in real videos?
Are these golden retriever puppies real or AI-generated?
File under media literacy and The Need for Media Literacy Education.
Plus, watch these videos next:
• Why do people fall for misinformation?
• How to spot a misleading graph
• How to spot a pyramid scheme
• Debunking fake "kitchen hacks" that have billions of views
Curated, kid-friendly, and independently published. Support this mission by becoming a sustaining member today.