Is that really me?
Imagine opening your phone or laptop to find a video of you saying things that you never said. Or worse: a pornographic clip starring your face. Or your brokerage account emptied because a voice-authentication system “recognized” an AI-generated clone of your voice. This isn’t science fiction anymore. Generative AI can now create photos, videos and audio so convincing that telling fake from real is nearly impossible.
Until recently, deepfakes were a novelty — silly memes or awkward imitations. Today, they are powerful tools for fraud, blackmail and disinformation. A few seconds of your voice scraped from a podcast or a voicemail, paired with a handful of online photos, is enough to generate a synthetic version of “you.” That false version can authorize a wire transfer, spread racist propaganda or destroy a reputation overnight. Proving it wasn’t you is nearly impossible.
The law hasn’t kept up. Copyright protects movies and music, but not your likeness or voice. Some countries are exploring limited identity protections, but in the U.S., the legal recourse for someone targeted by a deepfake is minimal. The problem is magnified by the platforms that spread this content.
Social media companies — Facebook, YouTube, TikTok, X — are the megaphones for deepfakes and misinformation. They don’t just host content; they curate it, recommend it and monetize it. Their algorithms favor whatever grabs attention, even if it’s false or harmful. Yet under Section 230 of the Communications Decency Act, they enjoy sweeping immunity from liability for what their users post.
That means if someone defames you in a TikTok video or posts a fake image of you on Instagram, you can’t sue the platform. Newspapers and broadcasters can be sued for false or defamatory content they publish. But online platforms get a free pass. Section 230 was written in 1996, when the internet meant static bulletin boards and dial-up modems. It was designed to protect neutral hosts, not trillion-dollar companies deploying AI-driven feeds to billions of people.
Defenders of Section 230 argue that eliminating it would stifle free speech. If platforms could be sued for every false restaurant review or mean tweet, they say, no company could function. But that argument exaggerates the threat. Traditional publishers face lawsuits too, and courts have long recognized defenses for opinion, fair comment and protected speech. The real issue isn’t whether every post creates liability — it’s whether platforms that profit from amplifying harmful content should be able to shrug off responsibility altogether.
Victims of online defamation or harassment often can’t even identify the anonymous user behind a post, let alone sue them. Section 230 blocks them from suing the platform, leaving them without a remedy. Meanwhile, platforms have little incentive to act responsibly. Why invest in stronger verification, better moderation, or safer algorithms when the law shields you from consequences?
The harm is no longer abstract. Deepfakes can sway elections, fuel conspiracy theories and incite violence. They can bankrupt families, wreck careers and destroy trust in public life. If we can’t believe what we see or hear online, the very foundation of democratic debate erodes.
It’s time to update the law. Section 230 was the right solution for a different internet. But today’s platforms are publishers in all but name: they edit, amplify and profit from content. They should be held to the same standards as newspapers and broadcasters. Eliminating, or at least narrowing, Section 230 immunity would force companies to design safer systems and put user protection above ad revenue.
Defamation, fraud, threats and exploitation are already illegal. Yet online platforms routinely shelter those who spread them, citing Section 230 as a shield. That outdated protection comes at the expense of users, victims and society as a whole. The law must change before the next wave of deepfakes leaves us asking, “Is that really me?”
Lee Keet lives in Saranac Lake.