Deepfake: Dual threat to democracy and social institutions in the digital age

Deepfake is a form of synthetic media created with artificial intelligence, particularly deep learning techniques, which can make manipulated videos, images, and audio appear strikingly authentic.

Deepfakes have emerged as a serious threat to democracy and social institutions across the world. The propagation of deepfake content on social media platforms has aggravated this challenge. The Ministry of Electronics and Information Technology has, from time to time, advised social media platforms to exercise due diligence and take expeditious action against deepfakes. The IT Minister recently interacted with representatives from academia, industry bodies, and social media companies on the need to ensure an effective response to deepfakes. It was agreed that, within a few days, actionable items would be identified on four pillars: detection, prevention, reporting, and creating awareness. The Centre will also soon appoint an officer to take appropriate action against such content.

As noted above, deepfakes are produced using deep learning algorithms, a branch of artificial intelligence. AI itself is a disruptive technology that has transformed many aspects of our lives and society, and it has many legitimate uses. The same techniques can be used to create learning videos, tutorials, movies, and films, improving the quality of education and entertainment in society. The real challenge arises when deepfakes are used to portray celebrities in a negative light or to create unlawful images and content. In recent years, their prevalence has increased significantly owing to the easy availability of the technology and widespread internet access in our country; cheap mobile data, otherwise a boon for society, has accelerated this spread. Under Section 79 of the IT Act, read with Sections 79(2) and 79(3), intermediaries must exercise due diligence to ensure their platforms are not used by technology or content developers to create misinformation, deepfake videos, and the like. Responsibility does not rest with content creators alone; intermediaries play a significant role in proliferating such material at scale and are therefore accountable. The government is considering making these due diligence provisions even more stringent.
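For technically inclined readers, the core mechanism behind many deepfake generators is "adversarial" training: one neural network forges data while a second network learns to catch the forgeries, and each improves against the other. The following is a minimal conceptual sketch, assuming the PyTorch library and using toy one-dimensional numbers rather than images; it illustrates the idea only and is not a recipe for producing deepfakes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a forged sample.
gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is authentic.
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "authentic" data: N(3, 0.5)
    fake = gen(torch.randn(64, 4))          # the generator's forgeries

    # The discriminator learns to tell real from fake...
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"mean of forged samples: {gen(torch.randn(1000, 4)).mean().item():.2f} (target ~3.0)")
```

After enough rounds, the forger's output becomes hard for the detector to distinguish from the real data, which is precisely why deepfakes can look so authentic.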

The IT Rules provide mechanisms for addressing grievances. If aggrieved individuals find that certain content is inappropriate or defamatory towards them, they can approach the platform's grievance officer, who is responsible for handling such complaints and should act to remove the content within 36 hours. Intermediaries also have a duty to exercise due diligence on their own and can deploy machine learning and AI tools to identify problematic content, as they already do for child sexual abuse material and gang-rape videos. The Supreme Court has also expressed the view that such content should not be present on these platforms, and on that front intermediaries are performing their responsibilities diligently, with positive results. When it comes to other kinds of videos, however, such as spoof videos that demean celebrities, intermediaries may not act as aggressively. In response, the government has issued advisories periodically, and aggrieved parties can approach the grievance officer of the social media intermediary. If the grievance is not addressed satisfactorily, the individual can appeal to the Grievance Appellate Committee (GAC). The process is entirely digital: an online hearing is conducted, in which the GAC can summon both parties and render a decision that is binding on the intermediaries and the parties involved. This is the existing mechanism under the IT Rules in our country.
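To illustrate one such automated tool in the simplest terms: platforms commonly compare uploads against a database of "fingerprints" (perceptual hashes) of content already known to be unlawful. The sketch below, assuming the Python Pillow imaging library, implements a toy average-hash matcher; the fingerprint value and names here are hypothetical, and production systems such as PhotoDNA are vastly more robust.

```python
from PIL import Image  # the Pillow imaging library

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to 8x8 grayscale, one bit per pixel."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p >= mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprints of content already ordered to be taken down.
KNOWN_UNLAWFUL = {0x9F3B00FF9F3B00FF}

def should_flag(path: str, threshold: int = 5) -> bool:
    """Flag an upload whose hash nearly matches a known-bad fingerprint."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_UNLAWFUL)
```

Because near-duplicates of an image produce similar hashes, a small Hamming-distance threshold lets such a system catch re-uploads of known material even after minor edits.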

Deepfakes are just one facet of the broader framework of generative AI, a technology that amplifies creativity. That creativity can be harnessed for positive purposes, such as enhancing efficiency and productivity, but it can equally be misused to spread misinformation or pornography. Because generative AI has both positive and negative aspects, caution is essential while framing legal provisions governing its growth in the Indian context. Deepfakes pose significant risks, as they can disrupt social harmony, and the government is already taking steps in the right direction. It is essential to raise public awareness that generating and circulating deepfakes is illegal, and effective detection mechanisms need to be established, whether through legal provisions or by encouraging social media platforms to implement such measures. This will remain an ongoing challenge because the technology is continuously evolving: deep learning and generative AI are still rapidly developing fields. Tools like ChatGPT and DALL·E have demonstrated their capabilities, and as they continue to improve, they will enable new types of deepfakes and the spread of misinformation on a larger scale. We must therefore remain vigilant and adapt to these evolving dangers. By maintaining vigilance, developing effective detection tools, and establishing robust legal provisions, we can effectively address the risks that accompany generative AI.
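What might such a detection mechanism look like in outline? The sketch below, again assuming PyTorch, shows the skeleton of a frame-level classifier that scores a face crop with a probability of being synthetic. The architecture is purely illustrative and the network is untrained; a real detector would be trained on large labelled datasets of genuine and forged faces.

```python
import torch
import torch.nn as nn

class FakeFrameDetector(nn.Module):
    """Illustrative CNN that scores a face crop as real or synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # probability the frame is fake

model = FakeFrameDetector().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for a detected face crop
with torch.no_grad():
    print(f"P(fake) = {model(frame).item():.2f}")  # ~0.5 until the model is trained
```

The catch, as noted above, is that detectors and generators co-evolve: each improvement in generation erodes yesterday's detection cues, which is why detection tools must be continuously retrained and updated.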

Deepfake technology is certainly a dangerous and negative facet of generative AI. But, as with other aspects of AI, the technology amplifies both good and harm: while AI enhances our capacity to do good, it equally amplifies our ability to cause damage. The government is playing its part in addressing these issues, but the general public must stay vigilant too. When encountering any information, especially on social media, we should be cautious and not trust it blindly; it is crucial to verify the authenticity of content before forwarding or sharing it with others. The responsibility falls on each individual to ensure that they are not contributing to the spread of misinformation or deepfakes. By being alert and practising verification, we can collectively combat the negative impacts of deepfakes and other misleading content. It is a shared responsibility that requires active participation from every individual in safeguarding the integrity of information and promoting a more informed and responsible digital environment.

(The author is an Advocate by profession)
