Transcript
00:00Sam Altman here, reporting from Straight Arrow News.
00:02Standing in for Kennedy Felton today, we're taking a closer look at the impact of Sora on the future of AI-generated media.
00:08Wait.
00:10Hold on, something's not right.
00:12No, I knew it wasn't going to work.
00:17Okay, you got me.
00:19No, that wasn't the real Sam Altman, but it was a video I created using OpenAI's new Sora 2 app.
00:30The term deepfake first popped up back in 2017, coined by a Reddit user who used AI to swap celebrity faces into existing videos.
00:42Since then, generative AI has evolved fast, from image apps like Midjourney to video creators like Sora,
00:50a new invite-only social platform where every clip is completely AI-generated.
00:56It surpassed 1 million downloads during its first week.
01:00People often tell me I'm a woman of many talents.
01:04I mean, did you see my recent appearance at Fashion Week?
01:09Pretty good, right?
01:10I had to redeem myself after a failed time travel experiment that somehow became a big-budget movie.
01:17Hi, I just popped up out of nowhere, don't know what this place is called, but if there's...
01:21And yes, I even filmed a new cooking show last week.
01:25Well, Sora did.
01:26First things first, positive affirmations.
01:28You're magnificent marbling.
01:30You are worthy of a cast-iron throne.
01:32You can create just about anything on the app.
01:35Even the animation Sora 2 creates looks professionally made.
01:39You're slowing down, Russ.
01:41Try it.
01:43All right, keep your hands where I can see them.
01:44I don't want to do this, little guy.
01:44Turn around for me.
01:45I wasn't stealing!
01:46Just playing the game!
01:47Now, sure, some of the so-called AI slop looks fake, but a recent poll found more than 50% of
01:54people aren't confident they can detect whether something is made by AI or a human.
01:59And while most of the AI, especially on Sora 2, is made to be comical, the silly can sometimes
02:06have serious consequences.
02:08100% zany juice!
02:10There was a very famous robocall that used the voice of Joe Biden to try to convince voters
02:16in New Hampshire and Vermont not to go and vote.
02:19And that was used by the other side to try to manipulate the election.
02:24Northern Illinois University professor David Gunkel says the biggest danger is deception
02:30and our laws aren't ready for it.
02:32Technology moves at light speed.
02:35Law and policy move at pen and paper speed.
02:38So we are always playing catch-up.
02:40We are always trying to make existing laws fit novel circumstances and then trying to write
02:46new laws to cover unanticipated opportunities and challenges that these technologies make available
02:52to us.
02:53But Gunkel says generative AI isn't so much a turning point as it is an evolution of things
02:59that have been happening for years.
03:01In photography, for instance, you can capture something real or you can use lighting, angles
03:07and editing to create a reality that doesn't exist.
03:11AI, he says, is just the next step in that evolution, another tool that blurs the line between
03:16what's real and what's made.
03:19Dozens of lawsuits are now testing those boundaries.
03:22In a recent win for authors, AI company Anthropic agreed to pay $3,000 for each of an estimated
03:29half-million books used to train its models without permission, roughly $1.5 billion in total.
03:33Even as apps try to limit abuse, their own rules are raising eyebrows.
03:38Sora 2's strict content filters block certain requests and have become a running joke on
03:43the app.
03:43Just put a nice little tree right over, huh?
03:46What is this?
03:46I can't, it won't let me.
03:47Let me paint!
03:50That's it.
03:51Ha!
03:51I'm coming for you, Sam Altman.
03:53Even I was flagged for violating the terms and conditions.
03:56When I instructed Sora to insert my likeness into a workout class, it flagged me for depictions
04:02of teens and children.
04:03I guess that's a compliment.
04:05Maybe I should do my next story on my skincare routine?
04:09Anyways, even OpenAI didn't realize how big of a problem these flagged requests would
04:14be.
04:15Tonight, in this very arena, my dream is to make freedom ring.
04:19But not everyone finds the app funny.
04:22You just saw artist Bob Ross in a video someone created.
04:26Other public figures like Robin Williams and Dr. Martin Luther King Jr. are being recreated,
04:31which isn't against the terms of service, since deceased figures aren't protected.
04:36It's even prompting backlash from their families.
04:39Robin Williams' daughter, Zelda, posted on Instagram, begging people to stop sending her
04:44AI videos of her father.
04:46She said, in part, to watch the legacies of real people be condensed down to this, vaguely
04:51looks and sounds like them, so that's enough, just so other people can churn out horrible
04:56TikTok slop puppeteering them, is maddening.
04:59Martin Luther King's daughter, Bernice King, echoed those concerns online, urging people to
05:05stop.
05:05There's less of a risk of creating scary, hyper-realistic deepfakes that damage reputations or cause
05:17disruption because they've implemented a lot of these safety guardrails to make it lean
05:24funny.
05:25While safety guardrails have reduced the amount of inappropriate content, not everyone is
05:31using the tech responsibly.
05:33Some users are pushing the limits by making inappropriate or sexualized content, and that's
05:38becoming a whole other issue in itself, porn AI.
05:41Somebody was making all these videos of me and my clones, like, hanging out, which I thought
05:47was so funny at first.
05:49And then I see more and more videos, and he's trying to make my clones, like, make out.
05:54And he does this with a lot of girls.
05:57And you can read their prompts trying to get around these guardrails because you can get
06:01around anything.
06:02It's the internet.
06:03And to confuse an AI is not that hard.
06:08Currently, not only are there no federal regulations governing generative AI, but it's unclear who
06:14would be held responsible if something harmful happens.
06:17Gunkel said we'll likely see a lawsuit over the next few years on the topic.
06:21Usually, when you use a tool to do something, it is the user of the tool and not the tool
06:27or the manufacturer of the tool who is held accountable for the good or bad outcomes.
06:33But we are seeing this as a kind of moving target now.
06:39While the technology might be scary, AI is being woven into our daily lives more and more.
06:44Gunkel wants to remind people that this sort of pushback happens any time something new
06:49is introduced.
06:50Case in point, Socrates once came out against writing when it was first introduced because
06:55he thought it wouldn't be an effective means of communicating knowledge.
06:58We are only three years out from ChatGPT being released.
07:03That's really early on in a new technology.
07:07And if there is a lot of hyperbole on both sides of the debate, people are really excited
07:11about it, people are really afraid of it, that's par for the course.
07:15We've been here before.
07:16And I think it is a matter of some thoughtful response to this technology, some critical
07:23perspective and recognition that, you know, we've done this before and we can be confident
07:28in the face of these new challenges.
07:30So for now, Sam Altman won't be filling in for me.
07:34No hard feelings, Kennedy.
07:38But if my AI twin starts reporting from Cabo, don't say I didn't warn you.
07:44With Straight Arrow News, I'm Kennedy Felton.
07:47For more on this story and others, visit san.com or download our mobile app.