The dynamic development of generative artificial intelligence (GenAI) algorithms has presented the Polish justice system with challenges that until recently belonged to science fiction. Deepfake technology, which enables near-perfect synthesis of image and sound, poses one of the most serious contemporary threats to individuals' personal rights. In the era of "post-truth," the key question is which procedural instruments will effectively protect one's image and secure adequate financial compensation.
Substantive legal basis of protection: Duality of claims
In the Polish legal system, protection against the effects of deepfakes rests on a duality of legal bases. On the one hand, we are dealing with an infringement of the right to one's image (Article 81, Section 1 of the Act on Copyright and Related Rights), and on the other – with the classic infringement of personal rights under the Civil Code (Articles 23 and 24 of the Civil Code).
Contemporary case law (including the Supreme Court in its judgment of 20 May 2022, file reference II CSKP 433/22) increasingly emphasizes that an image is not merely an "outward appearance" but an emanation of a person's identity. In the case of deepfakes, not only is the likeness used without consent (the lack of permission referred to in Article 81 of the Copyright Act), but the victim is also frequently placed in a "false light": presented in an untrue, often humiliating context that directly undermines their honor and dignity.
AI Act and unlawfulness of action
The AI Act is currently becoming a key point of reference for lawyers. Article 50 of this regulation imposes a transparency obligation on providers and deployers of AI systems – synthetically generated materials must be clearly identified as deepfakes.
From a procedural perspective, the deepfake creator's failure to fulfill this obligation is a powerful argument for demonstrating the unlawfulness of the action. In personal rights cases, a presumption of unlawfulness applies (Article 24 of the Civil Code), and invoking a violation of the AI Act's standards further burdens the defendant, making defenses such as parody or freedom of expression much more difficult to maintain.
Property claims – Compensation and damages
A victim of a deepfake can claim two types of monetary benefits:
- Monetary compensation for harm suffered (Article 448 of the Civil Code):
When determining the amount of compensation, courts consider the perpetrator's degree of fault, the territorial and temporal scope of the publication, and the intensity of the violation. In cases of non-consensual deepfakes (particularly pornographic material), case law tends to award amounts in the tens or even hundreds of thousands of zlotys, treating such actions as a form of digital violence that destroys the victim's private and professional life.
- Compensation (repair of property damage):
If unlawful AI material has led to tangible losses (e.g., an influencer's loss of an advertising contract, or dismissal from employment), the plaintiff may demand full compensation (Article 415 of the Civil Code). The concept of the "market value of an image" is helpful here: if the image has commercial value, compensation should correspond to the amount the perpetrator would have had to pay for lawful use of the likeness in the given context.
Platform Accountability: The DSA Standard
In the era of online creator anonymity, claims are often brought against intermediaries. The Digital Services Act (DSA) has revolutionized the liability rules for platforms (Facebook, X, TikTok). Under the notice-and-action mechanism, a platform that receives a credible report of an unlawful deepfake and fails to promptly remove it loses its immunity and may be held jointly and severally liable with the perpetrator for the consequences of the infringement of personal rights.
Evidential Challenges and Prevention
From a procedural perspective, it is crucial to secure evidence using blockchain technology or qualified trust services (so-called e-securing of online content), which prevents the authenticity of the secured material from being effectively questioned in court. In deepfake cases, expert evidence from computer forensics and AI specialists is often necessary to confirm that the material is not merely a re-edited recording but an advanced AI synthesis, which increases the perpetrator's culpability.
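At a technical level, the first step of such evidence securing typically amounts to recording a cryptographic fingerprint of the downloaded material together with a timestamp, so that any later alteration of the file is detectable. The sketch below is purely illustrative (the file name and helper function are assumptions, not part of any specific trust service or blockchain product):

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_evidence(path: str, chunk_size: int = 65536) -> dict:
    """Compute a SHA-256 fingerprint of a file plus a UTC timestamp.

    The hash uniquely identifies the file's exact content; if even one
    byte changes later, the recomputed hash will no longer match.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not need to fit in memory.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return {
        "sha256": digest.hexdigest(),
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage: fingerprint a saved copy of the disputed video.
# record = fingerprint_evidence("disputed_deepfake.mp4")
```

In practice, the resulting hash is what a trust service anchors (e.g., in a qualified timestamp or a blockchain entry); the file itself stays with the party securing the evidence.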
Fighting deepfakes in court is no longer tilting at windmills. Combining the classic protection of personal rights with new EU regulations (the AI Act and the DSA) creates a comprehensive system of protection. For a lawyer representing a victim, it is crucial not only to demonstrate the infringement, but above all to quantify the damage precisely – both in its intangible dimension and in purely market terms. In the age of image manipulation, the right to control one's image is becoming one of the most important subjective rights in the catalogue of civil liberties.
This article is for informational purposes only and does not constitute legal advice.
The law is current as of February 25, 2026.
Author:
Series editor:
