
AI deepfakes are cheap, easy, and coming for the 2024 election

Decoder with Nilay Patel

English - February 29, 2024 10:00 - 41 minutes - ★★★★ - 2.4K ratings


Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week we’re continuing our mini-series on one of the biggest topics of all: generative AI. Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system. 

A bigger problem right now is that AI systems are really good at making just believable enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too. And of course, it’s once again a presidential election year here in the US. So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge disinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.

Links: 

How the Mueller report indicts social networks

Twitter permanently bans Trump

Meta allows Trump back on Facebook and Instagram

No Fakes Act wants to protect actors and singers from unauthorized AI replicas

White House calls for legislation to stop Taylor Swift AI fakes

Watermarks aren’t the silver bullet for AI misinformation

AI Drake just set an impossible legal trap for Google

Barack Obama on AI, free speech, and the future of the internet

Credits:

Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today’s episode was produced by Kate Cox and Nick Statt and was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
