Milan Filipović
Director of Research at Lawyers Committee for Human Rights
Artificial intelligence can today generate text, believable imitations of a person's voice, and realistic photos and videos. The real danger lies not in the creation of humanoid robots, but in the mastery of language. Deepfake technology can be used to discredit the organizers or participants of protests by fabricating false evidence of violence, hate speech or other illegal activity.
Deepfake technology can portray peaceful protesters as violent, thereby justifying a harsh police response or the imposition of restrictive measures
Deepfakes and artificial intelligence challenge our ability to distinguish reality from manipulation. In the near future, realistic videos of violent demonstrations, or of peaceful protest leaders calling for violence, may become ubiquitous.
This technology could be used to discourage citizens from joining protests or to sow division over the brutal crackdown on peaceful protests
In addition to generating deepfake content, AI algorithms can be used to spread disinformation through social networks, amplifying the impact of disinformation campaigns.
This poses a serious threat to public gatherings, as deepfakes are strategically used to spread false narratives, manipulate public opinion, and incite conflict
The viral nature of digital content makes it difficult to counter the effects of disinformation based on deepfakes, which further threatens the process of informed decision-making and makes it difficult to organize peaceful protests.
Faced with new challenges, we often hastily reach for the idea of improving legislation, without considering how well the existing regulations are actually implemented.
Serbia does not lack a legal framework for punishing the prevention of public gatherings through threats, force or deception
Article 151 of the Criminal Code prescribes the criminal offense of Preventing a Public Assembly. However, this and many other crimes are rarely prosecuted despite the serious social need to combat them. The existing mechanisms of legal protection often lose their purpose due to the length of proceedings, as well as other obstacles. Defamation lawsuits will not discourage tabloids from breaking the law if the state, through funding schemes, pays them more money than they owe in damages for defamation. If today we cannot understand how anyone can believe the tabloids' incredible headlines, imagine what it will be like tomorrow, when tabloid televisions materialize those headlines as believable moving pictures.
Legal actions can play an important role in curbing the malicious use of deepfakes and remedying the damage caused. However, it should be borne in mind that they often cannot fully restore the situation that existed before the harmful event, especially once disinformation has already spread.
To reduce the profitability of the malicious use of deepfakes, a holistic approach combining legal, technological and educational measures should be implemented
This includes passing clear regulations that prohibit the abuse of deepfake technology, improving the efficiency of the judiciary in these cases, developing technological tools for detecting deepfakes, and running campaigns that inform the public about the risks of fake content and how to recognize it. The goal is to create an environment in which the malicious use of deepfakes is less attractive, while at the same time preserving freedom of expression.
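To make the "technological tools for detection" measure concrete, here is a minimal sketch of how a platform might turn per-frame scores from a deepfake classifier into a labeling decision. The classifier itself is a stand-in (the scores are invented for illustration), and the threshold is an assumed policy value, not one drawn from any regulation or real system.

```python
# Sketch: aggregating per-frame manipulation scores into a label decision.
# The scores would come from a real detection model; here they are stubs.
from statistics import mean

FLAG_THRESHOLD = 0.7  # assumed policy threshold, purely illustrative


def should_label_as_synthetic(frame_scores):
    """Return True if the average per-frame manipulation score
    crosses the policy threshold, meaning the video gets a
    'synthetic content' label before publication."""
    if not frame_scores:
        return False  # no frames analyzed, no basis to label
    return mean(frame_scores) >= FLAG_THRESHOLD


# Stubbed model output for two hypothetical videos
print(should_label_as_synthetic([0.9, 0.85, 0.8]))   # high scores: label it
print(should_label_as_synthetic([0.1, 0.2, 0.05]))   # low scores: leave it
```

Labeling rather than deleting, as in this sketch, is one way to reconcile the goals named above: it warns viewers without suppressing the content outright.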
The rise of artificial intelligence and deepfakes raises complex legal and ethical issues. Artificial intelligence has the potential to improve society, but it also poses a serious risk to human rights. The erosion of trust, manipulation of public discourse, suppression of dissent, spread of disinformation and attendant legal challenges all require urgent attention. Calls to ban the use of artificial intelligence are heard more and more often, mainly out of fear of losing jobs to this technology. However, states' current efforts focus mainly on mandatory labeling of fake or manipulated content, although it is hard to imagine such regulations stopping those determined to abuse the technology.
The new regulations target tech companies, which have an obligation to moderate content
According to EU requirements, users of chatbots must be clearly informed that they are not talking to a real person. In addition, the European Union has asked Facebook, TikTok and Google to begin labeling content created using artificial intelligence. At the beginning of the year, the Cyberspace Administration of China (CAC) issued guidelines on the creation of deepfake content: it is now prohibited to create such content without the consent of the person it depicts, or where it conflicts with that country's national interests, and deepfake content must be clearly labeled in order to combat online fraud and defamation. In line with this, the popular platform TikTok, owned by Chinese entrepreneurs, has already aligned its guidelines. In other countries, such as Great Britain and the US, the emphasis is on combating the use of deepfake content in revenge pornography.
The European Court of Human Rights has already held a platform liable for failing to moderate hate-speech comments, in Delfi AS v. Estonia. While the worst consequence of that judgment was some media outlets' decision to shut down their comment sections and lose part of their interaction with the audience, the effect on platforms hosting user-uploaded video is not easy to predict.
One possible consequence of curbing the misuse of deepfakes could be barring unverified users from uploading content, which would hamper dissidents in authoritarian regimes who, fearing reprisals, want to anonymously post footage of police brutality at protests
What is even more worrying is that, in the absence of human capacity, the moderation will be left to artificial intelligence.
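A back-of-the-envelope calculation shows why AI-only moderation is worrying at platform scale: even a highly accurate classifier wrongly removes an enormous amount of legitimate content. The numbers below are illustrative assumptions, not figures from any platform.

```python
# Illustrative arithmetic: false positives of automated moderation at scale.
# Both numbers are assumptions chosen for the example.
authentic_uploads = 1_000_000    # genuine videos uploaded in some period
false_positive_rate = 0.01      # a classifier that is wrong 1% of the time

wrongly_removed = int(authentic_uploads * false_positive_rate)
print(wrongly_removed)  # authentic videos taken down with no human review
```

Among those wrongly removed videos could be exactly the protest footage the previous paragraph describes, with no human moderator ever seeing the appeal.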
In Serbia, a proactive approach to deepfake content has not yet been adopted
However, creating and publishing a deepfake video of a real person without their consent can be a violation of Serbian law if it invades their privacy, damages their reputation or infringes their copyright.
Public figures, including celebrities from the world of entertainment and sports, often face a lower level of privacy than ordinary citizens. However, even though they are exposed to more public attention, this does not mean that they do not have the right to privacy.
It is important to emphasize that the unauthorized use of deepfake material of public figures for advertising purposes should be prohibited.
As holders of public office, politicians face pressures arising from their role. In many democratic societies, freedom of expression and criticism are important values to protect. Politicians should have a greater tolerance for public criticism, including political parody and satire; clearly labeled deepfakes of politicians should therefore be allowed.
Photographing and filming people who are not public figures in a public place is in principle allowed, as long as it does not significantly intrude on their privacy. However, if such a recording was used to create deepfake content showing that person in a private act, it could be a criminal offense.
Through its content, a deepfake video can harm a person's reputation, and if it was created by processing an author's work (e.g. a lecture), it can constitute a copyright violation. Depending on the circumstances of the case, in addition to criminal proceedings, victims of the abuse of deepfake technology may also seek protection in civil proceedings for damages for injury to honor and reputation, or for infringement of copyright, as well as in proceedings before independent and regulatory bodies.
For more information on the rights and obligations of organizers of public gatherings in the digital age, see "Protests and digital technologies - Guide for organizers", prepared by the Lawyers Committee for Human Rights - YUCOM.