Combating Deepfake Content
August 25, 2025

Deepfake technology uses artificial intelligence (AI) — specifically deep learning techniques — to create highly realistic fake images, videos and audio recordings.
Such content can be based on modifying existing material (e.g., face-swapping) or generating entirely new material in which a person appears to say or do something they never actually did.
While such uses may seem harmless at first glance, they infringe upon an individual’s right to their own likeness and voice. This was, among other reasons, one of the drivers behind the 2023 strike of American actors, which halted film and TV production for several months.
Why are deepfakes a threat?
Beyond entertainment purposes, deepfakes are often used in far more harmful contexts — spreading disinformation, engaging in financial fraud, and facilitating cybercrime.
Government measures
Deepfake fraud is a global issue, most prevalent in technologically advanced regions, but increasingly present in emerging markets as well.
The United States has adopted the Take It Down Act, which requires non-consensual intimate imagery — including AI-generated deepfakes — to be removed within 48 hours of a valid request and imposes federal penalties for its distribution.
In the European Union, the Digital Services Act (DSA) became fully applicable in 2024, aiming to prevent illegal and harmful online activities and curb the spread of disinformation. The United Kingdom pursued similar objectives with the Online Safety Act, passed in 2023, whose key duties began taking effect in early 2025.
Danish legislative proposal
In June 2025, Denmark announced amendments to its Copyright Act to directly address the misuse of deepfake technology. The changes protect not only artists but also any individual whose likeness could be digitally reproduced. Creating deepfake content would not be prohibited, but its public distribution without the person’s consent would be restricted. Protection applies only to realistic imitations, while clearly artificial content, satire, and parody are excluded.
The proposal grants individuals and artists the right to request the removal of unauthorized digital imitations from social media and other platforms without having to prove damages. Compensation may be sought under general Danish law, and platforms that fail to remove infringing content after an official notice may face financial liability. The right to control one’s digital likeness would last for 50 years after the death of the person or performer being imitated.
The amendments are expected to be introduced to Parliament in October 2025 and to come into force by the end of March 2026. However, open questions remain: the lack of a clear standard for determining when an imitation is “realistic,” the proposal’s limited territorial application, and the absence of punitive provisions against the creators of deepfake content themselves.
Author: Ana Radojević
This article is for informational purposes only and does not constitute legal advice. If you need additional information, please feel free to contact us.