India, 25 April 2024 – McAfee, a global leader in online protection, today released new research exploring the impact that Artificial Intelligence (AI) and the rise of deepfakes are having on consumers. The data, from research conducted in early 2024, reveals that nearly 1 in 4 Indians (22%) said they had recently come across a political deepfake they later discovered to be fake. With elections and major sporting events under way in India, the actual number of people exposed to deepfakes is expected to be much higher, given that many Indians are unable to tell what is real from what is fake because of the sophistication of AI technologies.
Discerning truth from fiction has become harder as the ease with which AI can replicate voices and images raises serious concerns about the spread of misinformation. There has been a massive surge in deepfake scams that impersonate not only consumers but also prominent public figures across business, politics, entertainment, and sports. The issue is magnified in India, where many people unknowingly forward deepfake content on social media, mainly in WhatsApp and Telegram groups, without verifying its origin, creating a multiplier effect. Paid troll armies further amplify the spread of such content.
Misinformation and disinformation emerged as key concerns for the Indians surveyed, with recent incidents involving Sachin Tendulkar, Virat Kohli, Aamir Khan, and Ranveer Singh serving as examples of what could become a widespread issue. When asked which potential uses of deepfakes are most concerning, 55% said cyberbullying, 52% said creating fake pornographic content, 49% said facilitating scams, 44% said impersonating public figures, 37% said undermining public trust in media, 31% said influencing elections, and 27% said distorting historical facts.
“In this day and age, anyone can create deepfakes and cloned audio in a matter of minutes using readily accessible tools. Recently, India has witnessed an unprecedented surge in deepfake content featuring both public and private figures. The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content, particularly during a critical election year,” said Pratim Mukherjee, Senior Director of Engineering, McAfee. “It is imperative that consumers be cautious and take proactive steps to stay informed and safeguard themselves against misinformation, disinformation, and deepfake scams. We encourage consumers to maintain a healthy sense of skepticism. Seeing is no longer believing, and it is increasingly important to take a step back and question the veracity of the content one is viewing. Fortunately, there are now AI tools to beat AI – from robust detection, such as McAfee’s deepfake audio detection technology showcased at CES 2024, to online protection that uses AI to analyze and block dangerous links in text messages, on social media, or in web browsers to help protect your privacy, identity, and personal information.”
It is vital for consumers to become more aware of deepfakes and to improve their media literacy. Even as deepfakes become more convincing, there are effective ways to identify them and stay safer from scammers and misinformation. Use fact-checking tools and reputable news sources to validate information before passing it along, to prevent the spread of misinformation and harmful content. Staying alert for distorted images and robotic-sounding voices is a good way of spotting AI-generated content. Also keep an eye out for emotionally charged content: much like phishing emails that urge readers to act without thinking, fake news reports stir up a frenzy to sway your thinking. For an extra layer of safety, investing in cybersecurity tools and software can help identify online scams.
Consumers are increasingly concerned about telling truth from fiction.
Major ongoing and upcoming political and sporting events, both local and global, such as elections, the Olympics, and the IPL, are expected to bring an increase in such deepfake content in India.
How to stay safe and promote information integrity
About McAfee’s Deepfake Detection Technology: Project Mockingbird
McAfee’s proprietary AI-powered Deepfake Audio Detection technology, known as Project Mockingbird, was developed to help defend consumers against the surging threat of cybercriminals utilising fabricated, AI-generated audio to carry out scams that rob people of money and personal information, enable cyberbullying, and manipulate the public image of prominent figures. Its industry-leading AI model was developed and trained by McAfee Labs, the innovation and threat intelligence arm of McAfee, to identify whether the audio in a video is likely AI-generated. This helps people understand their digital world and assess the likelihood of content being different from what it seems.
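For readers curious about what audio deepfake detection involves at a high level, the sketch below shows a generic baseline approach: summarise each audio clip with spectral features and train a binary classifier to score how likely the audio is AI-generated. This is purely illustrative and is not McAfee's Project Mockingbird model; the directory layout, file names, and the simple logistic-regression classifier are assumptions made for the example.

```python
# Illustrative sketch only: a generic audio classifier, NOT McAfee's
# proprietary Project Mockingbird model. Paths and labels are hypothetical.
from pathlib import Path
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def load_labelled_clips(root="deepfake_audio_data"):
    """Hypothetical dataset layout: <root>/authentic/*.wav and <root>/ai_generated/*.wav."""
    paths, labels = [], []
    for label, sub in [(0, "authentic"), (1, "ai_generated")]:
        for p in Path(root, sub).glob("*.wav"):
            paths.append(str(p))
            labels.append(label)
    return paths, labels

def audio_features(path, sr=16000, n_mfcc=20):
    """Load a clip and summarise it as the mean and std of its MFCCs (a common baseline)."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths, labels = load_labelled_clips()
X = np.stack([audio_features(p) for p in paths])
y = np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]  # probability each held-out clip is AI-generated
print("ROC-AUC:", roc_auc_score(y_test, probs))
```

Production systems rely on far larger labelled datasets and more sophisticated models, but the overall shape (audio features in, a likelihood score out) reflects the kind of assessment described above.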
About McAfee
McAfee Corp. is a global leader in online protection for consumers. Focused on protecting people, not just devices, McAfee’s consumer solutions adapt to users’ needs in an always online world, empowering them to live securely through integrated, intuitive solutions that protect their families, communities, and businesses with the right security at the right moment. For more information, please visit https://www.mcafee.com
About the Research
McAfee conducted this research in January and February 2024, across multiple countries, to understand how artificial intelligence and technology are changing the future. The study was conducted by MSI-ACI with 7,000 consumers globally, across the US, UK, France, Germany, Australia, India, and Japan.
Forward-Looking Statement
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Nothing in this document shall be considered an offer by McAfee, create obligations for McAfee, or create expectations of future releases which impact current purchase or partnership decisions.
Please note: References to any events, individuals, or institutions within this content are purely for illustrative purposes, aimed at fostering awareness and caution regarding such misinformation and misrepresented facts, and are not intended to cause harm or offense. The primary goal is to promote understanding and critical thinking in navigating information.