
PM Modi Issues Warning: Creating Deepfake Videos Can Lead to a ₹1 Lakh Fine and 3 Years in Jail

In a notable development today, Prime Minister Narendra Modi addressed the growing misuse of artificial intelligence (AI) to create deepfake videos, emphasizing the gravity of the situation. Expressing deep concern over the potential ramifications of such manipulated content, he said he had engaged the ChatGPT team to play a pivotal role in identifying and flagging deepfake videos circulating on the internet.

Acknowledging the transformative power of technology and AI, the Prime Minister highlighted the alarming ease with which images, videos, and audio can be manipulated, resulting in content that is entirely misleading. This concern has been further amplified by recent incidents, such as the viral deepfake video featuring actress Rashmika Mandanna, crafted using AI-based tools.

In response to this emerging threat, Prime Minister Modi has urged the media to take an active role in educating the public about the dangers associated with deepfake technology. By fostering awareness, the media can contribute significantly to empowering individuals to discern between authentic and manipulated content, thereby fortifying the digital landscape against deceptive practices.

Prime Minister Modi Sounds Alarm on Deepfake Technology, Calls for Vigilance to Prevent Social Unrest

In a recent statement, Prime Minister Narendra Modi voiced serious concerns regarding the potential societal repercussions of deepfake technology. Emphasizing the need for caution, he underscored that deepfake content has the capacity to generate significant unrest within society. The Prime Minister advocated for transparency in the use of generative AI for creating photographs or videos, urging that such content must carry a clear disclaimer acknowledging its origin as a product of deepfake technology.

Recognizing the profound impact of deepfakes on public perception, Prime Minister Modi highlighted the importance of individuals and media outlets exercising extreme care when dealing with content generated by AI. He stressed the necessity of heightened awareness and responsible practices to mitigate the risks associated with the proliferation of deepfake material.

During his remarks, Prime Minister Modi shared a personal experience, recounting an incident where he came across a deepfake video featuring himself performing a garba dance. Despite the video appearing remarkably convincing, the Prime Minister clarified that he had not danced garba since childhood. This anecdote is a poignant example of the potential for deepfakes to deceive even those depicted within the manipulated content.

The Prime Minister’s remarks come in the wake of a recent viral video purportedly showing him participating in a garba dance during Navratri. However, an investigation by Aaj Tak revealed that the video actually featured actor Vikas Mahante, who bears a striking resemblance to Prime Minister Modi. This incident underscores the need for heightened scrutiny and fact-checking in an era where deepfake content can easily be mistaken for the genuine article, potentially leading to misinformation and societal discord.

Stringent Legal Measures Imposed: ₹1 Lakh Fine and 3 Years’ Imprisonment for Viral Deepfake Creation

In a decisive move, the Central government has instituted stringent penalties for the creation and dissemination of viral deepfake content. Offenders now face a substantial fine of ₹1 lakh and a prison sentence of three years. This robust legal framework aims to curb the proliferation of deepfake videos, which pose a significant threat to public trust and safety.


Victims of deepfake incidents have been advised by the Centre to file a formal police complaint and to use the remedies provided under the Information Technology Rules. Union Minister Rajeev Chandrasekhar underscored the legal responsibility of online platforms to actively prevent the spread of misinformation. Emphasizing the government’s commitment to the safety and trust of citizens, particularly children and women, who are often targeted by such content, the minister affirmed that these concerns are taken very seriously.

Deepfake Menace Persists: Kajol Becomes Latest Victim Following Rashmika’s Controversial Video

The troubling trend of deepfake videos continues to plague the entertainment industry, with Bollywood actress Kajol being the latest target after Rashmika Mandanna. A video featuring Kajol has surfaced on social media, purportedly showing the actress changing her outfit. This incident comes on the heels of a similar deepfake video featuring Rashmika, which gained widespread attention on social media platforms.

In the viral video, Kajol’s likeness is manipulated using AI technology to create a false narrative of the actress changing her attire. The emergence of deepfake content raises serious concerns about the potential misuse of technology to deceive and manipulate public perception.

Rashmika Mandanna, whose own deepfake video recently circulated on social media, took a stand against the malicious use of her image. Notably, her advocacy garnered support from prominent figures in the industry, including Amitabh Bachchan and Mrunal Thakur.

Deepfakes: Unveiling the Technology Behind the Illusion

Coined in 2017, the term “deepfake” has become synonymous with a disturbing trend that emerged on the American social news aggregator Reddit, where numerous manipulated videos featuring celebrity faces began to surface. Notably, actresses such as Emma Watson, Gal Gadot, and Scarlett Johansson found themselves unwittingly portrayed in explicit content through these deepfake creations.

At its core, deepfaking is the act of seamlessly grafting someone else’s face, voice, and expressions onto genuine video, photo, or audio footage. The level of realism achieved in these manipulations is often so convincing that distinguishing the authentic from the fabricated becomes a formidable challenge. The technique leverages machine learning and artificial intelligence, employing advanced software to create deceptive video and audio.
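One common face-swap architecture trains a single shared encoder together with one decoder per identity; swapping then amounts to encoding face A and decoding with B’s decoder. The toy sketch below (plain NumPy, untrained random weights, hypothetical dimensions — purely illustrative, not a working deepfake) shows the data flow of that idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": flattened 8x8 grayscale faces; hypothetical sizes.
DIM, LATENT = 64, 16

# One shared encoder learns features common to both identities;
# each identity gets its own decoder.
W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def encode(x):
    # Shared encoder: compress a face into a latent code
    # capturing pose and expression.
    return np.tanh(x @ W_enc)

def decode(z, W_dec):
    # Identity-specific decoder: render the latent code
    # with that identity's appearance.
    return z @ W_dec

face_a = rng.normal(size=DIM)

# Normal reconstruction: encode face A, decode with A's decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The "swap": encode face A, decode with B's decoder -- after real
# training, the output keeps A's pose/expression but B's appearance.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)
```

In a real system the encoder and both decoders are deep convolutional networks trained on thousands of frames of each face, which is what makes the swapped output photorealistic.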

Punit Pandey, an expert in AI and cybersecurity, underscores that deepfaking has evolved into a readily available technology accessible to virtually anyone. The present state of technology not only excels in visual manipulations but has also made substantial strides in replicating authentic sound. Particularly, voice cloning has emerged as a potent and potentially perilous aspect of this technology, further amplifying the risks associated with the proliferation of deepfakes. As the accessibility and sophistication of these tools continue to grow, society is faced with the imperative of addressing the ethical, legal, and societal implications arising from the misuse of deepfake technology.

Niyati Rao

Niyati Rao is a seasoned writer and avid consumer who specializes in crafting informative and engaging articles and product reviews. With a passion for research and a knack for finding the best deals, Niyati enjoys helping readers make informed decisions about their purchases.