
The Legal Status of Deepfakes and AI-Generated Media

Introduction

The emergence of deepfake technology and AI-generated content has fundamentally changed how people create, consume, and interact with digital content. Deepfakes use sophisticated machine learning algorithms, especially generative adversarial networks (GANs), to create realistic videos, images, and audio, for example by overlaying one person's face or voice onto another person's body and speech. While the possible uses for this technology in innovation, entertainment, and education are plentiful, its ethical, social, and legal repercussions are equally concerning. This article examines the legal landscape surrounding deepfakes and AI-generated media, with special focus on their regulation, existing laws, landmark cases, and judicial analysis, and asks how society can deal with the challenges this new technology brings.

Understanding Deepfakes and AI-Generated Media

Deepfakes are the result of highly sophisticated artificial intelligence techniques built on GANs. A GAN pits two neural networks against each other: a generator that creates content, and a discriminator that tries to distinguish the generated content from real examples. With each training round, the discriminator gets better at spotting fakes, and the generator gets better at producing fakes that evade detection. The result is media content that is extremely convincing but fabricated. AI-generated media includes deepfakes, but also synthetic images and audio, computer-generated art, music, literature, and much more. These developments are transforming what is understood as creativity and raising moral and legal questions about creation, copyright, and responsibility.
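
The adversarial loop described above can be sketched in a few lines. The toy below is illustrative only: the real "data" are numbers near 4.0, the generator is a single learnable scalar, and the discriminator is one logistic unit; all names and hyperparameters are invented for the example, and real deepfake models use deep networks over images or audio.

```python
import math
import random

# Toy 1-D GAN: real samples cluster near 4.0; the "generator" is a single
# learnable scalar theta; the "discriminator" is a logistic unit
# D(x) = sigmoid(w*x + b). Illustrative only, not a production model.

random.seed(0)

def sigmoid(x: float) -> float:
    # Clipped for numerical stability.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

REAL_MEAN = 4.0
LR_D, LR_G, STEPS = 0.1, 0.02, 4000
w, b = 0.0, 0.0   # discriminator parameters
theta = 0.0       # generator output

for _ in range(STEPS):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = theta

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += LR_D * ((1 - d_real) * real - d_fake * fake)
    b += LR_D * ((1 - d_real) - d_fake)

    # Generator step: move theta so the discriminator scores it as real
    # (gradient ascent on log D(fake)).
    d_fake = sigmoid(w * fake + b)
    theta += LR_G * (1 - d_fake) * w

# After training, theta should have drifted toward the real mean (4.0):
# the generator's output has become hard to tell apart from real data.
print(f"generator output after training: {theta:.2f}")
```

This is the dynamic that makes deepfakes convincing: neither network "swaps roles" — each simply improves against the other until the generated output is statistically close to the real data.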

Concern about image and video manipulation technology has shifted toward the damage it can do to individuals and to society as a whole. Harmful uses include non-consensual pornography, identity deception, political manipulation, and financial fraud. Legal systems in many regions are struggling to enforce laws against this advanced technology without limiting freedom of expression and creativity.

Regulatory Frameworks Governing Deepfakes

Regulating deepfakes involves a delicate balance between mitigating harm and upholding freedom of expression and technological progress. Different jurisdictions have adopted varied approaches, reflecting their legal traditions, cultural values, and levels of technological advancement.

United States

The approach to regulating deepfakes in the US is disjointed and fragmented, varying widely by state. Some states, such as California, Texas, and Virginia, have taken steps to legislate against certain malicious applications of deepfake technology. For instance, California's AB 730 prohibits the distribution of materially deceptive audio or video of a political candidate within 60 days of an election. AB 602 protects victims of non-consensual pornographic deepfakes by giving them a legal remedy against those who create or distribute such material. Texas has likewise legislated against the dangers of deepfake technology, criminalizing deepfakes created with intent to harm individuals or influence election outcomes.

At the federal level, the proposed DEEPFAKES Accountability Act aims to counter the misuse of deepfake technology from a more holistic point of view. The Act is not yet in effect, but it would require deepfake content to carry identifying labels and would impose severe penalties for abusive uses. Other laws, such as Section 230 of the Communications Decency Act and certain intellectual property statutes, touch on parts of the deepfake problem, but their influence is indirect and limited.

European Union

The European Union has a broader strategy for regulating AI-generated media. The proposed Artificial Intelligence Act (AIA) classifies AI systems into distinct risk classes and lays down strict obligations for high-risk applications, with specific rules for deepfakes. Transparency is one of the cornerstones of the AIA: it requires disclosure whenever content has been created or manipulated by an AI system.

The EU’s General Data Protection Regulation (GDPR) is also an important tool against deepfakes. Creating or sharing deepfake content typically involves processing personal data without consent, in a manner prohibited by the GDPR. In addition, the Digital Services Act (DSA) and the Digital Markets Act (DMA) seek to strengthen the responsibility of online platforms for tackling harmful content, including deepfakes.

India

In India, the legal framework for dealing with deepfakes is still in its infancy. Although no law specifically criminalizes the use of deepfake technology, the Information Technology Act, 2000, and the Indian Penal Code (IPC) are used to prosecute offences related to this technology. Section 67A of the IT Act makes it unlawful to publish sexually explicit material, which covers non-consensual pornographic deepfakes. Other relevant provisions address defamation (Section 499 of the IPC) and identity theft (Section 66C of the IT Act). Nevertheless, enforcement remains difficult because of the anonymity afforded by digital platforms and jurisdictional issues.

Key Legal Issues Surrounding Deepfakes 

Privacy and Consent

Privacy violations and lack of consent are among the most pressing legal concerns associated with deepfakes. Non-consensual pornographic deepfakes disproportionately target women and have devastating consequences for their victims. Legal systems are increasingly recognizing the need to criminalize such conduct. However, the enforcement of privacy laws remains challenging, particularly in the digital age, where anonymity and cross-border platforms complicate accountability.

Intellectual Property

Deepfakes and AI-generated media raise a host of intellectual property questions. The central issue is whether AI-generated media is copyrightable and, if so, who should own the copyright. The United States Copyright Office has clarified that a work created solely by AI is not eligible for copyright protection, because such works lack human authorship. However, when an AI is used as a tool by a human creator, the resulting work may qualify for protection. Similar questions are being raised in the EU and other jurisdictions, where laws are grappling with the concept of authorship in the context of AI.

Defamation and Misinformation

Deepfakes have been used to create false and damaging representations of individuals, leading to defamation claims. The difficulty lies in proving the falsity and harm caused by the deepfake, as well as identifying the creator. The use of deepfakes in spreading political misinformation further complicates matters, raising concerns about the integrity of democratic processes. Legal frameworks must address these risks while safeguarding freedom of speech and expression.

National Security and Public Safety

Deepfakes pose significant risks to national security and public safety. They can be weaponized to spread disinformation, impersonate public officials, or incite panic. For example, a deepfake of a government leader issuing a false directive could have catastrophic consequences. Addressing these risks requires a multi-faceted approach, including robust legal and regulatory measures, technological interventions, and public awareness campaigns.

Landmark Cases on Deepfakes and AI Media

A number of legal cases have framed the debate on deepfakes and AI-generated media, showing how the field is shifting:

People v. Tracey (California, 2020) – The case dealt with the production and distribution of non-consensual deepfake pornography. The court upheld California's AB 602, reinforcing the need for stronger legal boundaries against such invasions of privacy.

Deepfakes in Political Campaigns – Case law is still developing, but courts have begun to consider the use of deepfakes in elections. Proceedings under California's AB 730 illustrate the judiciary's role in curbing electoral manipulation.

Thaler v. Copyright Office (2022) – This case addressed the copyrightability of AI-created works. The United States Copyright Office denied a copyright application for a piece of art generated by an AI program with no human involvement, restating the requirement of human authorship.

EU Jurisprudence on GDPR Violations – European courts are increasingly dealing with personal data being used without consent to create deepfakes, demonstrating the interplay between law and technology.

The Path Forward for Deepfakes and AI-Generated Media

Strengthening Legal Frameworks

To address the challenges posed by deepfakes and AI-generated media effectively, legal systems must evolve. Comprehensive legislation should explicitly define and regulate the creation, distribution, and use of deepfakes. Transparency requirements, such as labelling AI-generated content, should be mandated, and malicious uses of the technology, including non-consensual pornography and disinformation campaigns, must be penalized.
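
Transparency mandates of this kind are often imagined as machine-readable labels attached to AI-generated files. As a rough sketch — not any statutory scheme, and with invented field names and a toy shared key; real provenance systems such as C2PA "Content Credentials" use public-key signatures — a label might bind a file's hash to a disclosure of its AI origin:

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key; a real scheme would use asymmetric keys.
SECRET_KEY = b"publisher-signing-key"

def make_label(media_bytes: bytes, tool: str) -> dict:
    """Create a disclosure label binding this exact file to its AI origin."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    manifest = {"sha256": digest, "ai_generated": True, "tool": tool}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the label matches this exact file."""
    claimed = dict(manifest)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"...synthetic video bytes..."
label = make_label(video, "example-gan-v1")
print(verify_label(video, label))         # True: label matches the file
print(verify_label(video + b"x", label))  # False: the file was altered
```

The design point is that a label must be cryptographically bound to the content: a bare "AI-generated" tag could be stripped or copied onto other media, whereas a signed hash fails verification the moment the file is altered.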

Enhancing International Cooperation

The borderless nature of the internet necessitates international collaboration to combat the misuse of deepfake technology. Harmonizing legal standards and facilitating cross-border enforcement through treaties and agreements are crucial steps in this direction.

Leveraging Technology

Regulators and law enforcement agencies can harness AI and machine learning to detect and combat deepfakes. Developing robust detection tools and integrating them into online platforms can help mitigate the spread of harmful content and reduce the technology’s misuse.

Promoting Ethical AI Development

Governments, tech companies, and civil society must share the responsibility of ensuring that AI technologies are developed and deployed responsibly. Ethical guidelines and industry standards can play a pivotal role in minimizing the risks associated with deepfakes.

Conclusion

The rise of deepfakes and AI-generated media creates unprecedented legal difficulties that must be dealt with creatively and proactively. While existing laws provide some protection, they cannot address all of the issues that the rapid evolution of this technology creates. A forward-thinking view must be taken, alongside innovative solutions, to make use of the potential offered by these technologies while also protecting individual rights, public safety, and democracy. Robust legal frameworks, international cooperation, technological development, and ethical AI practices will be essential in navigating this crucial turning point.

Bhatt & Joshi Associates