New Senate Bill Aims to Combat AI-Generated Deepfakes with Content Watermarks
A bipartisan group of U.S. senators, led by Senator Maria Cantwell of Washington, has unveiled a new Senate bill that targets AI-generated deepfakes. Called the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, the legislation seeks to address growing concerns surrounding deepfake technology by introducing a standardized method for watermarking AI-generated content. Such watermarking would make deepfakes easier to detect and help hold creators accountable for their content. The bill also requires AI tool providers to let creators attach provenance information to their content, and it would prohibit the removal of that information, ensuring transparency.
The COPIED Act has garnered support from both sides of the aisle, with Senators Marsha Blackburn (R-Tenn.) and Martin Heinrich (D-N.M.) signing on as co-sponsors. Senator Cantwell emphasized the need for transparency and control over AI-generated content, stating that the bill will put creators, including journalists, artists, and musicians, back in control of their work.
The legislation proposes that the National Institute of Standards and Technology (NIST) develop the method for watermarking content and data. Additionally, the COPIED Act calls for a ban on unauthorized use of content for AI training, while ensuring that creators have control and are compensated for their work. The enforcement of the act would fall under the jurisdiction of the U.S. Federal Trade Commission (FTC) and state attorneys general.
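The bill leaves the actual watermarking and provenance standard to NIST, so any concrete format is speculative. Still, as a rough illustration of what "attaching tamper-evident provenance information" can mean in practice, here is a minimal Python sketch: it binds a hash of the content to creator and tool metadata and signs the record. All names (`attach_provenance`, `verify_provenance`) are hypothetical; real-world schemes such as C2PA use public-key certificate chains rather than the shared-secret HMAC used here for brevity.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

def attach_provenance(content: bytes, creator: str, tool: str, secret_key: bytes) -> dict:
    """Build a tamper-evident provenance record for a piece of content.

    The record binds a content hash to creator/tool metadata; an HMAC over
    the record lets a key holder detect alteration of the provenance fields.
    (Illustrative only -- not the NIST standard the COPIED Act calls for.)
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generating_tool": tool,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict, secret_key: bytes) -> bool:
    """Check both the record's signature and that it matches this content."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed.get("content_sha256") == hashlib.sha256(content).hexdigest())
```

Under a scheme like this, stripping or editing the provenance record breaks verification, which is the behavior the bill's anti-removal provision is meant to guarantee.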
The introduction of the COPIED Act follows the FTC’s claim of authority to enforce laws related to artificial intelligence in the United States. The FTC expressed concerns about the proliferation of deepfakes, which could be used for fraudulent activities, and stressed its role in regulating AI to protect consumers.
Deepfake technology has also been a subject of legal disputes, as generative AI developers clash with media outlets and entertainment industry professionals over copyright infringement claims. To counter unauthorized data harvesting by AI bots, internet security company Cloudflare recently launched a tool to block them from scraping websites.
Industry organizations, including the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) and the Recording Industry Association of America (RIAA), have praised the COPIED Act’s introduction. They see it as a necessary step toward ensuring transparency and traceability in the use of AI technology, particularly in relation to artists’ control over their image and voice.
Last year, SAG-AFTRA and the Writers Guild of America (WGA) staged a strike due, in part, to disagreements over the use of AI in Hollywood. The COPIED Act aims to address these concerns and protect the work and legacies of artists.
The importance of artists controlling their digital likeness was also highlighted in testimony by British actor and musician FKA Twigs before the U.S. Senate Judiciary Committee, where she spoke about the need for legislation to prevent the misuse of artists' work without their consent.
The COPIED Act represents a significant step in the ongoing efforts to address and mitigate the risks posed by AI-generated deepfakes. The legislation seeks to provide transparency, accountability, and protection for content creators, while also deterring the malicious use of deepfake technology.
