The Weekly Guide to Employment Law Developments

The Rocky Mountain Employer

Labor & Employment Law Updates

“Take It Down Act” Criminalizes Nonconsensual AI-Generated Content in the Workplace and Elsewhere, on a Federal Level

Rob Thomas, Of Counsel

            Workplace sexual harassment and abuse are not uncommon phenomena, and the United States Congress recently passed, and President Trump signed into law, the “Take It Down Act,” which criminalizes at the federal level the nonconsensual publication of intimate or sexually explicit images of another person, whether authentic or artificially generated (e.g., via artificial intelligence), and creates avenues of redress for persons affected by such conduct.  The Act also sets forth compliance requirements for web-based platforms to address complaints of AI-generated “deepfakes,” and declares a platform’s failure to comply to be an unfair or deceptive trade practice.

The Take It Down Act (“TIDA”).[1]

          TIDA authorizes the Federal Trade Commission to enforce notice-and-removal requirements regarding nonconsensual or AI-generated content of a sexually explicit nature.  TIDA is unique in that it extends to nonprofit organizations, closing a significant loophole for victims of AI-generated, nonconsensual (or “deepfake”) pornography.  In particular, TIDA criminalizes the use of interactive computer services to publish digital forgeries of persons, both minors and adults, without consent.  The law responds to the recent proliferation of artificially generated pornographic or otherwise malicious content intended to harm another person, a growing concern in the workplace that is often referred to as “revenge porn.”

            The penalties for using deepfake technology to publish such content about another adult are steep and may include up to two years of imprisonment.  Where the victim is a minor, the penalties are unsurprisingly and justifiably more punitive and may warrant up to three years of imprisonment.  As a further protection for affected individuals, the Act provides that consent to the creation of an “intimate visual depiction” does not establish consent to its publication.

            Covered web-based platforms (which can include employers) must provide a clear and conspicuous notice, in plain language, explaining how a victim covered under the statute may submit a demand for removal of the content.  In response to such a request, the platform must remove the objectionable content, and make reasonable efforts to identify and remove any known identical copies of such content, within 48 hours.  Under TIDA, a platform’s failure to comply is an “unfair or deceptive” trade practice, which can subject the platform to liability under the Federal Trade Commission Act.  For their part, individuals making a demand must provide information substantiating their “good faith belief” that the image was published without consent, plus sufficient information for the platform to contact them.

            Notably, TIDA’s criminal provisions took effect upon its signing, but the notice-and-removal requirements for covered web-based platforms do not go into effect until May 19, 2026.  If an employee uses an employer’s resources to publish covered, prohibited content, the employer could attempt to establish one of a handful of defenses, such as: 1) the content was obtained under circumstances in which the person knew or reasonably should have known that the identifiable individual had no expectation of privacy, or the content was published with the person’s consent; 2) the conduct depicted was voluntarily exposed in a public or commercial setting; or 3) what was depicted was not a matter of public concern.  The last element is difficult because it requires a showing that no harm was caused by the publication, a tall order for any defense if the person did not provide consent.

            As relevant to employers, a covered platform is any website, online service, online application, or mobile application that provides a forum for user-generated content, but it does not include broadband internet service providers, websites or applications that provide pre-selected content, or email hosting.  Nonetheless, the statute’s reach is broad, given employers’ reliance on Teams, Zoom, WhatsApp, and other similar applications.

Employer Considerations

            Interestingly, Colorado has been at the forefront of regulating deepfake technology, although not initially in the pornographic context.  Colorado’s first deepfake law, passed in 2024, focused on election fraud and was the first statute of its kind.[2]  Since then, Colorado legislators have followed up with laws concerning AI-generated sexual content.[3]  Since the passage of Colorado’s political and election-related deepfake laws, 45 states have also passed their own laws with similar proscriptions against artificially generated sexual content, and the federal government has now moved in step with the vast majority of states addressing these issues.  In any event, employers must be vigilant about the content and substance of electronic communications occurring on their platforms, or they may face consequences at both the state and federal level.

[1] S. 146, 119th Cong. § 55 (2025).  The full name of the Act is the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act.”

[2] See Colo. Rev. Stat. §§ 1-45-111.5 et seq.

[3] Colo. SB25-288 (signed into law, June 2, 2025).