Taylor Swift, Deepfakes, and the First Amendment: Changing the Legal Landscape for Victims of Non-Consensual Artificial Pornography
In January 2024, artificially generated pornographic images (also known as “deepfakes”) of pop superstar Taylor Swift circulated on the social media platform X (formerly Twitter) at an alarming rate. Within hours, some images had been viewed more than 45 million times and had accrued thousands of shares and likes before eventually being taken down. The incident, which gained media attention in part because of Swift’s mega-star status and passionate fan base, raised pressing First Amendment questions about the role of social media platforms in regulating obscene speech and protecting victims, especially minors, from these types of attacks. Legal advocates also pondered the remedies available to victims under current legal standards.

Part I of this Article will walk through the history of “deepfakes” and the role of artificial intelligence in the development and circulation of fake pornographic images. Part II will discuss the First Amendment standard for obscene speech and how social media platforms may regulate these harmful images and videos to mitigate harm. Part III will consider the New York Times libel standard for public figures and officials and the immunity granted by Section 230 of the Communications Decency Act. Part IV will highlight the disproportionate impact and harm that unregulated, widely available deepfakes inflict on women and girls. Finally, Part V will discuss proposed legislation at the state and federal levels and the ways these bills could support victims of cruel deepfakes and prevent future images and videos from circulating widely.