In response to increasing concerns about AI-generated deepfakes and the misuse of original content, the United States has introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act). This groundbreaking bill, which has garnered strong bipartisan support, aims to safeguard the integrity of original works and mitigate the misuse of AI technology.
Bipartisan Support for the COPIED Act
The introduction of the COPIED Act follows the recent proposal of the “Take It Down Act” in the Senate, which targets the removal of AI deepfakes depicting non-consensual intimate imagery. The legislative push for the COPIED Act comes after a series of high-profile incidents, such as the viral spread of AI-generated deepfake nude images of Taylor Swift on social media platforms like X (formerly Twitter), Facebook, and Instagram in January. These incidents have ignited a nationwide debate on the ethical implications and dangers of AI technology.
Addressing Content Creator Concerns
The COPIED Act aims to protect content creators, journalists, artists, and musicians who have seen AI systems profit from their work without acknowledgment or fair compensation. A recent Forbes report accused Perplexity AI, an AI-enabled search engine, of content theft, a claim corroborated by Wired magazine's own investigation. Wired found that Perplexity was summarizing its articles despite the presence of the Robots Exclusion Protocol (robots.txt), which is designed to tell crawlers to stay away from such content.
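To illustrate how the Robots Exclusion Protocol works, the sketch below uses Python's standard-library parser to check a hypothetical robots.txt policy. The bot names and URL are invented for the example; real publishers list the actual crawler user-agents they want to block.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to opt out of AI crawlers.
# "ExampleAIBot" is a made-up user-agent standing in for an AI scraper.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The AI crawler is barred from every path; other agents are not.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The protocol is purely advisory: it signals the publisher's wishes but does not technically prevent a crawler from ignoring them, which is why alleged violations like the one Wired reported become a legal rather than a technical question.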
Ensuring Content Authentication
The COPIED Act proposes attaching a machine-readable record called "content provenance information" to all types of content, including news articles, artistic works, images, and videos; it functions like a logbook documenting a work's origin. This mechanism would support the authentication of original works and the detection of AI-generated content. The bill would also make it illegal to tamper with or remove this information, helping journalists and creative artists safeguard their work from AI exploitation.
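As a rough illustration of why tampering with provenance records is detectable, the sketch below binds a piece of content to an origin record with a keyed hash, so that editing either one invalidates the record. This is a hypothetical toy, not the bill's mechanism: real provenance standards such as C2PA use cryptographic signatures and structured manifests, and the key, function names, and origin string here are all invented for the example.

```python
import hashlib
import hmac

# Assumption: a secret signing key held by the publisher (placeholder value).
SECRET_KEY = b"publisher-signing-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Bundle content with an origin record and a tag binding the two."""
    record = origin.encode() + b"\n" + hashlib.sha256(content).digest()
    tag = hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()
    return {"content": content, "origin": origin, "tag": tag}

def verify_provenance(bundle: dict) -> bool:
    """Recompute the tag; any edit to content or origin breaks verification."""
    record = bundle["origin"].encode() + b"\n" + hashlib.sha256(bundle["content"]).digest()
    expected = hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["tag"])

bundle = attach_provenance(b"original article text", "Example News")
print(verify_provenance(bundle))        # True
bundle["content"] = b"edited deepfake"  # tampering invalidates the tag
print(verify_provenance(bundle))        # False
```

The point of the analogy is that provenance information is only useful if stripping or altering it is either detectable or unlawful, which is exactly what the bill's anti-tampering provisions target.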
Legal Enforcement and Protection
The bill empowers state officials to enforce its provisions, creating a legal pathway for suing AI companies that remove watermarks or use content without consent and compensation. This approach aims to provide robust protection for original content creators against unauthorized use by AI systems.
Comparison with International Regulations
The European Union (EU) has already established comprehensive legislation to regulate AI, known as the EU Artificial Intelligence Act. This act classifies AI systems into four categories based on risk level: Unacceptable Risk, High-Risk AI, Limited-Risk AI, and Minimal-Risk AI. Systems deemed to pose an unacceptable risk, such as the social-scoring systems used in China to rate citizens, are prohibited outright under the EU's regulations.
In contrast, India has yet to implement specific AI regulatory laws. However, a directive from the Ministry of Electronics and Information Technology in March required AI systems deemed "under-tested" or "unreliable" to seek government approval before deployment. This directive was later withdrawn to avoid stifling innovation, reflecting a cautious approach to AI regulation.
As the U.S. moves forward with the COPIED Act, it joins a global effort to regulate AI technology and protect original content creators. The bill’s success could set a precedent for future legislation, aiming to balance innovation with ethical considerations and the protection of intellectual property. Your thoughts on the COPIED Act and its potential impact on AI regulation and content protection are welcome.