A newly surfaced video allegedly featuring fugitive Miloš Medenica has ignited debate on social media, leaving many questioning whether it is a product of artificial intelligence or a genuine recording of the fugitive himself. Medenica, who was sentenced to ten years and two months in prison on January 28, 2023, for organizing a cigarette smuggling network, reportedly addressed the Director of the Police Administration, Lazar Šćepanović, stating that he would continue to speak publicly until his arrest.
In the video, Medenica purportedly says, “I will comment every day until I am arrested or until they deny that I am a bot.” He also described Montenegro as “captured” and dismissed claims that the video was generated by AI. This followed earlier reports in which the Police Administration identified a previous video as AI-generated; no official statement has yet been issued on the latest footage.
Experts in digital forensics have weighed in on the authenticity of such videos. Dr. Nikola Cmiljanić, a professor at the Faculty of Information Technology, noted that many recent videos of wanted individuals have raised suspicion about their legitimacy. He emphasized that a serious investigation should not rely on a single “quick check” but should instead combine several methods and multiple independent indicators before drawing conclusions.
Understanding AI-Generated Content
AI-generated videos, according to Dr. Cmiljanić, involve recordings where parts of the image, sound, or both are created or altered using generative models. This technology can convincingly change facial features, synchronize lip movements with audio, or even create voices that resemble real individuals. The quality of these videos can be so high that the average viewer may struggle to discern authenticity based solely on visual inspection.
Deepfake technology, a narrower subset of AI video content, aims to impersonate specific individuals, making it look as if they are saying or doing things they have not. This complicates the landscape further, as the broader category of AI-generated videos can also include entirely synthetic scenes or alterations that do not involve face-swapping.
Dr. Cmiljanić pointed out that forensic analysis heavily relies on the quality and originality of the material. The most reliable results are obtained from original files, which enable analysts to examine technical traces within the recording itself, such as file structure and consistency in lighting and movement.
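By way of illustration, one common first step in examining an original file is simply dumping its container and stream metadata. The short Python sketch below does this with ffprobe (part of FFmpeg); the choice of tool and the file name are assumptions made for the example, not a description of Dr. Cmiljanić’s own workflow, and such metadata is only one indicator among the many he says an investigation should combine.

```python
import json
import subprocess

def probe_video(path: str) -> dict:
    """Dump container and stream metadata using ffprobe (part of FFmpeg)."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    info = probe_video("suspect_clip.mp4")  # hypothetical file name
    # Encoder tags, creation times, and codec parameters are examples of the
    # technical traces that tend to survive only in an original file.
    print(json.dumps(info.get("format", {}).get("tags", {}), indent=2))
    for stream in info.get("streams", []):
        print(stream.get("codec_type"), stream.get("codec_name"))
```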
The Challenges of Detection
Even with such forensic methods, detecting AI-generated content remains a significant challenge. Dr. Cmiljanić explained that tools for automatically identifying such videos are still at an early stage of development, so their reliability varies, especially when the footage comes from social media platforms as “re-uploads” or compressed copies.
He cautioned that many traces can be lost in these processes. The problem becomes even harder when it comes to identifying who created the content: even when it is reasonable to conclude that the material is synthetic or significantly manipulated, pinpointing the author typically requires a combination of digital forensics and traditional investigative methods.
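To make the point about re-uploads concrete, the minimal sketch below assumes two hypothetical files, a copy supplied directly by the source and a copy re-downloaded from a social network, and compares their cryptographic digests. A mismatch only shows that the platform has produced a different file, but that is exactly why traces present in an original may no longer exist in the circulating copy, and why analysts record which exact file they examined.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical file names: a copy received directly from the source and a
    # copy re-downloaded from a social network. Different digests mean the
    # platform has re-encoded or otherwise altered the file, so traces present
    # in the original may already be gone from the circulating version.
    print("source copy:   ", sha256_of("original_submission.mp4"))
    print("re-upload copy:", sha256_of("social_media_reupload.mp4"))
```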
Key traces are often found on the devices used for processing and publishing the content, including project files and application logs. However, the rapid distribution of videos through fake accounts and VPN services often obscures the chain of responsibility.
Dr. Cmiljanić urged caution when sensational videos emerge, particularly in sensitive situations involving wanted individuals, and advised waiting for official verification rather than drawing conclusions from first impressions. “It is technically possible to create convincing videos that could incite panic, compromise investigations, or damage reputations. Therefore, verifying sources and original materials is crucial,” he stated.
The case of Miloš Medenica underscores the importance of a careful approach in the digital age. After being convicted of leading a cigarette smuggling operation, he reportedly fled Montenegro via an illegal border crossing into Serbia, where he may seek assistance from influential contacts.
In the wake of the recent video, Milica Kovačević, the Program Director of the Center for Democratic Transition, noted that their attempts to verify the video yielded inconclusive results, even with the best available tools and expert consultations. “We are obligated to document and record our methodology. If we assert that something is AI-generated, we must provide proof,” she said.
As the investigation into Medenica’s case continues, the intersection of technology and law enforcement remains a pressing concern, highlighting the need for robust verification processes in the face of rapidly evolving digital capabilities.
