Deep Fake: Emerging Challenge to Cyber Security

The term “deep fake” combines “deep learning” and “fake”. A deep fake can be a digital photo, video, or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they never actually did or said.

  • It is Artificial Intelligence (AI) software that superimposes a digital composite onto an existing video (or audio file), creating a fake version of the content.
  • The term “deep fake” can be traced back to 2017, when a Reddit user with the username “deepfakes” posted manipulated explicit videos of celebrities.

Threats of Deep Fakes

  • Deep fakes are created by machine learning models that use neural networks to manipulate images and videos, distorting reality and swaying public opinion for purposes such as defamation, propaganda, and psychological terrorism. Because these technologies are technically feasible and affordable, they are accessible to criminal organizations of all sizes, making deep fakes a genuine threat to cyber security.
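
The generation step described above can be sketched, purely illustratively, as the face-swap architecture commonly used for deep fakes: one shared encoder trained with two person-specific decoders, where swapping decoders at inference time renders person A's expression as person B's face. All dimensions and weights below are invented placeholders, not a real trained model.

```python
import numpy as np

# Illustrative sketch only: a typical face-swap deep fake pipeline trains
# ONE shared encoder with TWO decoders, one per person. Decoding person
# A's latent code with person B's decoder produces the fake frame.
# Weights are random placeholders standing in for trained networks.

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened 64x64 grayscale face crop (assumed size)
LATENT_DIM = 128        # compressed "expression/pose" representation

# Shared encoder: maps any face to a compact latent code.
encoder = rng.normal(size=(LATENT_DIM, FACE_DIM)) / np.sqrt(FACE_DIM)

# Person-specific decoders: map a latent code back to that person's face.
decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def deep_fake_swap(face_of_a: np.ndarray) -> np.ndarray:
    """Encode a frame of person A, then decode with person B's decoder."""
    latent = encoder @ face_of_a      # capture pose and expression
    return decoder_b @ latent         # render them as person B

frame = rng.normal(size=FACE_DIM)     # stand-in for one video frame
fake = deep_fake_swap(frame)          # one fake frame, same shape as input
```

Run per frame of a video, this decoder swap is what lets the forged footage track the source actor's movements convincingly.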

The challenges posed by deep fakes to cyber security are as follows:

1. Manipulation on Social media

  • Deep fakes are used on social media platforms, often to provoke strong reactions. Posts backed by convincing manipulations can misguide and inflame the internet-connected public, and deep fakes supply the media that make fake news appear real.

2. Disrupting Social Order

  • Content created with deep fake technology is a potential threat to social harmony and social order. It can be used to radicalize and polarize people and communities on the basis of caste, class, religion, etc.
  • It can play a major role in spreading fake news, hate speech, and misinformation.

3. Undermining Political Systems

Deep fakes have the potential to undermine political systems and disrupt democratic processes. Examples:

  • In India, a candidate used deep fake videos in different languages to criticize the incumbent legislator.
  • In Gabon, a video of the president that was widely suspected to be a deep fake contributed to an attempted military coup in the Central African nation.

4. Terrorist Propaganda

Deep fake technologies can be used by terrorist organizations and insurgents to further their agenda of destabilizing governments. They can spread false information about institutions, public policy, and politicians for this purpose.

5. Effects on Businesses

The easy availability of huge amounts of corporate digital data online gives cyber-attackers the opportunity to impersonate prominent people in order to manipulate others and commit crimes against companies. Companies are concerned about several scams that rely on deep fake technology.

  • Multiple cases have been reported of cybercriminals using artificial intelligence-based deep fake technologies to impersonate CEOs to request the fraudulent transfer of corporate funds.
  • Deep fakes are also used to tarnish the reputations of individuals and spread propaganda against them. Several extortion scams have been executed using this technology. Thus, deep fakes raise major security concerns.

Possibilities and Prospects

In the information age, technologies like deep fakes have the potential to threaten the socio-economic and political security of a nation. As existing laws are not enough to protect individuals against deep fakes, the following measures can be taken:

  • Policymakers need to understand how deep fakes can threaten polity, society, economy, culture, individuals, and communities.
  • Specific legislation should be enacted to cover criminal activities committed using deep fakes.
  • Social media platforms should be held accountable for detecting and removing such content.
  • AI-based tools capable of detecting deep fakes should be developed.
  • Platform policies on hateful manipulated media, propaganda, and disinformation campaigns must explicitly cover deep fakes.
  • Journalists should be provided with tools, resources and proper training to examine the authenticity of images, video, and audio recordings.
  • Noise pixels can be added to videos to hinder modification, and frames or the acoustic spectrum can be analyzed to detect distortions.
  • It is essential to start a scalable training program on how to identify and handle deep fake content.
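
The spectrum-analysis measure listed above can be sketched in a minimal, illustrative way: synthesized or manipulated audio often carries energy in frequency bands where a natural voice recording has little. The 4 kHz cutoff and 10% energy-ratio threshold below are assumed values for illustration, not tuned detector parameters.

```python
import numpy as np

# Minimal sketch of "analyze the acoustic spectrum to detect distortions":
# flag a clip when an unusually large share of its spectral energy sits
# above a cutoff frequency. Cutoff and threshold are illustrative guesses.

RATE = 16_000                      # samples per second
t = np.arange(RATE) / RATE         # one second of audio

def high_band_ratio(signal: np.ndarray, cutoff_hz: float = 4_000.0) -> float:
    """Fraction of the signal's spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / RATE)
    return spectrum[freqs > cutoff_hz].sum() / spectrum.sum()

def looks_distorted(signal: np.ndarray, threshold: float = 0.10) -> bool:
    """Crude distortion check: too much high-band energy is suspicious."""
    return high_band_ratio(signal) > threshold

# Stand-in "natural" voice: a few low-frequency harmonics.
natural = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
# Stand-in "manipulated" audio: same voice plus a high-frequency artifact.
manipulated = natural + 0.8 * np.sin(2 * np.pi * 6_000 * t)

print(looks_distorted(natural))      # clean harmonics stay below threshold
print(looks_distorted(manipulated))  # the 6 kHz artifact trips the check
```

Real detection tools train classifiers on many such spectral and per-frame features rather than a single threshold, but the principle, looking for statistical traces that generation leaves behind, is the same.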