KI News

Deepfake is a big challenge in the era of Artificial Intelligence


The challenges posed by deepfakes are first and foremost the erosion of trust and reputation.

By: Dr Satyavan Saurabh

Deepfakes are video or audio recordings manipulated using artificial intelligence (AI) to make it appear that someone is saying or doing something they never did. They have emerged as a significant technological threat with far-reaching implications. It is said that in this era of information technology, data is the new oil: its importance is immense, and it has become very difficult to do almost anything without it.

As the use of data grows, so do the methods of refining it, and technologies have emerged that can do remarkable things with your data. One such technology is artificial intelligence, which has changed the technological world forever. Tools like Google Bard have become a treasure trove of information, taking research, education, and productivity to another level, while neural networks, deep learning, and machine learning have transformed fields such as sales, marketing, social media, and entertainment.

The challenges posed by deepfakes are first and foremost the erosion of trust and reputation. Deepfakes can be used to spread misinformation, damage reputations, and incite social unrest. Once a deepfake video goes viral, it may be difficult to contain the damage, as people may not be able to distinguish between genuine and manipulated content.

The threat to individuals and society has come to a head. Deepfakes can be used for cyberbullying, blackmail, and even interference in elections, and the potential for such misuse to harm individuals and society is very high. Tracing perpetrators and holding them accountable has become harder than ever: the sophistication of deepfakes is constantly evolving, making them increasingly difficult to detect and debunk.

This is a challenge for law enforcement and social media platforms alike. Legal and ethical safeguards also appear to be weakening, as the emergence of deepfakes has raised complex questions regarding privacy, freedom of expression, and the limits of artificial intelligence (AI).

Deepfakes are being misused in every field, and they are a threat to democracy. They can undermine democracy by distorting democratic discourse and spreading public distrust of important institutions. Deepfakes could become the biggest threat to politics and democracy, because a fabricated video of a leader making a false or inflammatory statement can be made to go viral, which can lead to riots in the country.

Deepfakes can be used in elections to spread caste hatred, rejection of election results, or other kinds of misinformation, which can pose a major challenge to a democratic system. According to this year's State of Deepfakes report, India is the sixth most vulnerable country to deepfakes.

Deepfakes can change a voter's mind. Suppose a voter has decided, two days before polling, to vote for or against a party or candidate, and in the meantime an objectionable photo, message, video, or audio clip of a senior leader of that party or of a candidate goes viral on social media; the voter may then change their decision. In the last general election in Britain, a deepfake video surfaced in which the Labour Party and Conservative Party candidates appeared to endorse each other, and in India, during the 2019 Lok Sabha elections, deepfake videos of some politicians went viral on social media and were later removed.

Victims of deepfakes in India have several legal remedies. Social media platforms are required to address complaints related to cybercrime and to remove deepfake content within 36 hours. To report cybercrime, victims can call the National Cyber Crime Helpline (1930) and seek assistance from a cyber-lawyer. Section 66 of the Information Technology Act, 2000 deals with cybercrime offenses, including counterfeiting and the dissemination of false information.

Under the Copyright Act, 1957, deepfakes may violate copyright law if they involve unauthorized use of copyrighted material. Provisions of the Indian Penal Code (IPC), such as defamation (Section 499) and criminal intimidation (Section 506), can also be invoked depending on the nature of the deepfake.

Among the measures that can be taken to combat deepfakes, technological solutions are paramount. Researchers are developing AI-powered tools to detect and authenticate deepfakes. These tools analyze subtle imperfections in deepfake content, such as inconsistencies in facial expressions or eye movements. Governments and international organizations are also exploring legal and regulatory frameworks, including defining deepfakes as a form of cybercrime, establishing reporting mechanisms, and holding producers and distributors accountable.
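To give a flavour of the detection idea described above, here is a toy sketch (not a real detector, and far simpler than the AI tools researchers build): genuine video tends to change smoothly from frame to frame, so an abrupt statistical anomaly in a measured signal, such as how open an eye appears, can be a hint of manipulation. All names and the signal values below are hypothetical, for illustration only.

```python
def flag_anomalous_frames(signal, threshold=1.5):
    """Flag frames whose frame-to-frame change deviates strongly from
    the average change (a crude stand-in for the subtle inconsistencies
    real deepfake detectors look for)."""
    diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    std = var ** 0.5 or 1e-9  # avoid a zero threshold on flat signals
    # A frame is suspicious if its change exceeds mean + threshold * std.
    return [i + 1 for i, d in enumerate(diffs) if d > mean + threshold * std]

# Example: a mostly smooth "eye-openness" signal with one abrupt jump
# at frame 4, as might appear at a badly blended manipulated frame.
frames = [0.50, 0.51, 0.52, 0.51, 0.95, 0.52, 0.51, 0.50, 0.52, 0.51]
print(flag_anomalous_frames(frames))  # flags the jump and the snap back: [4, 5]
```

Real systems, of course, learn such cues from millions of examples across many signals at once rather than relying on a single hand-set threshold.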

It is important to educate the public about deepfakes to reduce their impact through public awareness and education. This includes teaching people to identify deepfakes, recognize the potential for manipulation, and be cautious about sharing content online.

Social media platforms and technology companies have a responsibility to combat deepfakes. This includes investing in detection technologies, implementing clear policies, and working with law enforcement. An AI-based solution could involve training models to identify deepfakes from subtle anomalies in facial movements, skin texture, and voice patterns. Watermarking, in which unique marks are embedded in digital content at the source, can help trace the origin of deepfakes and identify perpetrators. AI-powered fact-checking tools can assist social media platforms in verifying the authenticity of user-generated content. Open media-authentication standards, such as those of the Coalition for Content Provenance and Authenticity (C2PA), can help establish the authenticity of digital content.
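The watermarking and provenance idea can be sketched in a few lines. This is a minimal illustration, assuming a publisher holds a secret key; real provenance standards such as C2PA use cryptographically signed manifests rather than the bare keyed hash shown here. The key and content values are hypothetical.

```python
import hashlib
import hmac

def tag_content(content: bytes, key: bytes) -> str:
    """Compute an authenticity tag the publisher releases alongside the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag; any alteration of the content changes it."""
    return hmac.compare_digest(tag_content(content, key), tag)

key = b"publisher-secret"           # hypothetical publisher key
original = b"official video bytes"  # stand-in for the real media bytes
tag = tag_content(original, key)

print(verify_content(original, tag, key))              # True: untouched content
print(verify_content(b"manipulated bytes", tag, key))  # False: tampering detected
```

A platform holding the tag can thus tell whether a clip was altered after publication, which is the core of what the article calls tracing the origin of deepfakes.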

India needs to develop and train its own AI algorithms to propagate the worldview of Indian thought and spirituality. This is perhaps the only way to combat AI bias. If anti-India forces are indeed actively using AI to propagate their own biases, then we need to create our version of AI that is pro-India.

In a world increasingly mediated by multiple interest groups and stakeholders, knowledge can no longer remain neutral. We need to think beyond our role as passive consumers of AI and white-collar workforce for big tech companies and become active players in the ever-evolving AI discourse.

Deepfakes pose a significant challenge to our digital society; however, they also present an opportunity for innovation and collaboration. By combining technological advances, legal frameworks, and public awareness, we can work towards a future in which deepfakes are less harmful and their creators more accountable. In conclusion, what is needed above all is a comprehensive approach that addresses the technical, legal, and ethical dimensions of this emerging threat.

The writer is a poet, freelance journalist and columnist, and an All India Radio and TV panellist.
