
Deepfake: No one is safe in the cyber world


In May this year, a disturbing piece of news came out of China: a man from the country’s Baotou City became the victim of a cunning scam involving AI-backed deepfake technology.

The scammer used AI-powered face-swapping technology to impersonate the victim’s close friend during a video call and asked him to transfer 4.3 million yuan. The man reportedly believed that his ‘buddy’ urgently needed the funds for a deposit during a bidding process, and thus fell for the scam.

He only learned of the crime when the real friend turned up and said that the person on the video call had not been him. As per a Reuters report, the police managed to recover the lion’s share of the stolen funds. However, the incident is one more addition to the growing global trend of AI-driven fraud. China, for its part, has had a tough set of deepfake regulations in place since January 2023.

Knowing the crime’s anatomy

Deepfake, in plain language, is a product of artificial intelligence, in which deep learning (a method that teaches computers to process data in a way inspired by the human brain) is deployed to create fake videos and images with next-level realism.

In a deepfake, algorithms analyze and learn patterns from vast amounts of data to generate highly convincing and realistic outputs. Once the tool is ready, it can be applied to heinous tasks like manipulating videos and images to the point where the outputs look too real to doubt.

Before conning someone with a deepfake, the threat actors start their homework by collecting a huge tranche of visual and audio data about the target individual, drawing on public sources like social media or photos of the individual’s public appearances. This data is then used to train a deep learning model to mimic the target.
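For the technically curious, the classic face-swap recipe pairs one shared encoder with a separate decoder per identity. The PyTorch sketch below is a purely conceptual, toy-scale illustration of that architecture; the layer sizes, toy tensors and training loop are illustrative assumptions, and real systems involve far more data, capacity and engineering.

```python
# Conceptual sketch of the shared-encoder, two-decoder face-swap recipe.
# Each decoder learns to reconstruct its own person's face; at inference
# time, feeding person A's face through person B's decoder yields the swap.
# All sizes and the random stand-in data below are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the shared latent space; one per identity."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

# Random stand-ins for the scraped photo collections the article describes.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(3):  # real training runs for many thousands of steps
    opt.zero_grad()
    # Each decoder is trained to reconstruct only its own identity.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": person A's face, rendered through person B's decoder.
swapped = decoder_b(encoder(faces_a[:1]))
```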

“Deepfake scams have been around for some time primarily focusing on misinformation and social engineering. With advances in AI, deepfake scams are likely to focus on becoming more convincing to the intended victims primarily by engaging in adapted and manipulated live conversations and engagements,” Mr. Muzammil Patel, Head of Strategy, Finance and Operations and Co-founder of Acies, told the Global Business Outlook.

“Deepfake scams are likely to become more and more sophisticated as the innovations in AI move from conversational text engagements to image and video-based generation. We are likely to move into an era where you cannot believe what your eyes see without a form of revalidation,” the official stated further.

The cancer is spreading fast

Here are some compelling facts and incidents about the disturbing trend.

These deepfake crimes mostly bank on exploiting raw human emotions. For example, as per an Ars Technica report, one couple in the United States sent $15,000 through a Bitcoin terminal to a scammer after believing they had spoken to their son. The ‘son’ was an AI-generated voice, which mimicked the young man’s nervous voice and told the couple that he needed legal fees after being involved in a car accident that killed a diplomat.

According to the US Federal Trade Commission, impostor scams have lately emerged as extremely common in the country, generating the second-highest financial losses of any fraud category in 2022. Out of 36,000 reports, over 5,000 victims were scammed out of $11 million over the phone. With deepfakes getting sophisticated faster than anyone imagined, expect the tally to go higher in 2023 too.

While 2023 has so far been all about generative AI tools like ChatGPT bringing disruptive changes to the day-to-day activities of the global economy and opening up new possibilities, it has also created a situation where AI voice-modelling tools can be used to improve text-to-speech generation, create new modes of speech editing, and even bring elements like voice cloning into play. While these innovations can be used for online humour, the same tools can also be exploited by threat actors to produce near-perfect voice simulations and cause financial scandals.

Noted computer scientists Matthew Wright and Christopher Schwartz, in an article on The Conversation, stated, “As computer security researchers, we see that ongoing advancements in deep-learning algorithms, audio editing and engineering, and synthetic voice generation have meant that it is increasingly possible to convincingly simulate a person’s voice.”

“Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation,” the experts added further.

As of June 2023, there is a steady growth of services ready to produce moderate- to high-quality voice clones for a fee. Some of these deepfake tools need a sample only a minute long, or even just a few seconds of an individual’s recorded speech, to produce a convincing voice clone.

So how can one stay protected against such deepfake voice calls?

“The simplest way to identify a deepfake scam is to create a form of two or multi-factor authentication between individuals. Engaging with machines through two-factor authentication has become the norm. This norm is likely to extend to engagements between humans,” Mr. Muzammil Patel told the Global Business Outlook.

“Until video interaction systems and video calling tools incorporate person-to-person authentication protocols via multi-factor secret words, PINs and other similar mechanisms, it is important for a similar verbal confirmation protocol to be established for critical conversations and financial transactions. It is equally important for people on video calls to look for tell-tale signs like unnatural movements, lack of eye blinking, poor video resolution and character inconsistencies,” he added further.
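Patel’s person-to-person authentication idea can be approximated with a textbook challenge-response protocol over a pre-shared secret. The sketch below, using only Python’s standard library, is a minimal illustration of that principle under the assumption that the two parties exchanged a secret in person beforehand; it is not a production protocol, and the secret value shown is a hypothetical placeholder.

```python
# Minimal sketch of person-to-person challenge-response authentication,
# assuming the two parties agreed on a shared secret out-of-band (e.g. in
# person) before any video call. The caller proves knowledge of the secret
# without ever saying it aloud, so a face-swapped impostor who is merely
# watching the call cannot answer correctly.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"exchanged-in-person-beforehand"  # hypothetical placeholder

def make_challenge() -> str:
    """The called party generates a fresh random challenge per conversation."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes) -> str:
    """The caller answers with an HMAC of the challenge under the secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(respond(challenge, secret), response)

# Usage: before acting on a money request, the callee issues a challenge...
challenge = make_challenge()
# ...the real friend computes the response on their own device...
answer = respond(challenge, SHARED_SECRET)
# ...and the callee verifies it. A deepfake caller fails this check.
print("caller verified" if verify(challenge, answer, SHARED_SECRET)
      else "do not trust this caller")
```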

Is deepfake fuelling more blackmail?

As per the FBI, the number of reported sextortion cases in the United States increased by 322% between February 2022 and February 2023, as the probe agency noted the significant rise in the crimes involving AI-doctored images.

Selfies and videos uploaded on social media sites can be twisted into sexually explicit, AI-generated images that are almost “true-to-life” and nearly impossible to distinguish from genuine ones, the FBI stated further.

The investigators even warned against predators targeting juveniles online to coerce money out of them or their families. In some cases, these elements blackmail the kids into sending sexually graphic images.

Yes, sextortion is real and now, with the AI’s help, the cancer is taking a monstrous shape.

“Enforcement agencies need to focus on the following areas: creating awareness among victims, along with a step-by-step response mechanism that gives them confidence that law enforcement is in control of the situation and can remedy it,” said Mr. Muzammil Patel, while adding, “Rapid response matters even for small scams. With limited law enforcement resources, it becomes harder to attend to every scam. However, when small scams are not nipped in the bud, it emboldens the scammers and makes resources available for them to become more sophisticated over time.”

“Cross-border inter-agency co-operation: given that scammers can originate in any country and the victims can be in any country, inter-agency co-operation to eliminate scammer networks locally, before they become larger global networks, is critical,” he stated further.

“Law enforcement needs to scour through the web and identify deepfakes and their sources and start tracking back the source of creation to eliminate scammers before they can take on active roles. This requires significant investment in AI and deep learning technology by law enforcement to enable quick identification and response,” the expert opined.

Also, these criminals are not stopping at blackmail; they are using the tech to execute romance scams and con people.

Cybercriminals are utilizing AI to create online profiles so realistic and compelling that the images appear genuine and attractive to potential victims. AI algorithms generate near-realistic profile pictures and engaging bios, and the technology even simulates human-like conversation patterns. AI has become the scammers’ best friend for establishing trust and emotional connections with their unsuspecting targets, setting the stage for heinous crimes.

We all know that thanks to generative AI, chatbots have now been programmed to simulate human responses and emotions, even to the extent of mimicking flirtatious behaviour. And this sophisticated tech is now being abused by scammers, in order to manipulate emotions, build rapport, and exploit vulnerabilities to gain the trust and affection of their potential targets.

AI algorithms can now analyze vast amounts of data from social media platforms to gather personal information about an individual. Aided by this information, cybercriminals are then crafting targeted messages that exploit specific interests, hobbies, or life experiences of their potential victims. These scammers are playing on the elements of human emotions, familiarity and trust. Emotional manipulation of the victims has become the key weapon here.

“Social media companies have a fine balance to work with between their ability to protect potential victims and their legal rights to act on information without investigation by law enforcement agencies. Social media companies consider themselves a medium for publishing rather than being the publisher and in general consider themselves absolved of the responsibility associated with content and engagement on their platforms,” commented Mr. Muzammil Patel, while interacting with the Global Business Outlook.

“However, scams on a platform erode trust in the platform and, in the long run, impact the social media company’s ability to conduct business. For social media giants to deal with miscreants, the first step is to create teams focused on identifying deepfakes. Focusing efforts on pre-emptively identifying modified content and having a clear and consistent mechanism for distinguishing between satire and malevolent intent is critical to nipping problems in the bud,” he added further.

Is there any solution?

A recent report from Forbes has spoken about a potent solution against the deepfake menace.

“Using blockchain technology, digital IDs can provide verified proof of identity, adding trust and safety to interactions online. Decentralized identities, made with tools from companies like Polygon, Nuggets and Unstoppable Domains, offer verified credentials for a user’s digital persona. Blockchain-verified profiles can offer a type of ‘proof of humanity,’ a place to store digital assets and identifiers, and a single point of access into the Web3 ecosystem — all while keeping the user’s privacy under their control. Meanwhile, AI can work to monitor and secure these profiles, keeping people from becoming the next victims of AI-wielding criminals,” the report, titled ‘How AI And Web3 ID Tech Can Defeat Deepfake Frauds’, commented.
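Stripped of the Web3 branding, such verified credentials rest on one primitive: an issuer signs a claim about an identity, and anyone holding the issuer’s public key can check it. The sketch below, a simplified illustration using the cryptography library’s Ed25519 support, shows only that signing and verification step; it does not reflect the actual APIs of Polygon, Nuggets or Unstoppable Domains, and the credential format is a made-up example.

```python
# Simplified illustration of the primitive behind "verified credentials":
# an issuer signs a claim about an identity, and anyone with the issuer's
# public key can verify it. Real decentralized-ID systems layer on-chain
# registries and standard credential formats on top of this; the sketch
# shows only the signature step. Requires `pip install cryptography`.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical issuer keypair; in practice the public key would be anchored
# in a blockchain or other tamper-evident registry.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

# A minimal made-up "credential": a claim about who controls a profile.
credential = json.dumps(
    {"subject": "did:example:alice", "claim": "verified human profile"},
    sort_keys=True,
).encode()

signature = issuer_key.sign(credential)

# Verification: a platform (or a cautious call recipient) checks the claim.
try:
    issuer_public.verify(signature, credential)
    print("credential verified: profile is backed by the issuer")
except InvalidSignature:
    print("credential rejected: possible impostor")
```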

In 2019, a study by cybersecurity company Deeptrace found that over 96% of deepfake videos online were pornographic in nature, with the lion’s share of the victims being women.

In another study, conducted by Euler Hermes in 2021, two-thirds of the surveyed companies said they had been victims of fraud attempts in the preceding twelve months.

“Deepfakes can cause serious issues to the global economy and markets which are attuned to responding to statements and news. News-based algorithms amplify the effect of market news and execute significant portions of transactions today. Where social media noise is amplified by deepfakes, it can cause serious volatility in markets and can create chaos in economic systems,” Mr. Muzammil Patel said, while adding, “It could also be used to cause social disorder. Dealing with deepfakes is no longer about protecting individual victims or dealing with small-scale miscreants. It is only a matter of time before coordinated and well-thought-out attacks on economic systems will originate from deepfakes and amplification of news caused by them.”

Since 2020, Microsoft has been working on its ‘Video Authenticator’ tool, Facebook is developing a reverse-engineering method that detects the fingerprints left by an AI model, and Swiss cybersecurity company Quantum Integrity is marketing an AI-based offering that tries to determine whether images or videos have been manipulated.
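These commercial detectors are proprietary, but one family of published detection techniques looks for statistical artifacts that generative models leave in an image’s frequency spectrum. The NumPy snippet below is a toy heuristic in that spirit, with an arbitrary threshold chosen purely for illustration; it is nowhere near a real detector, and it does not represent how any of the products named above actually work.

```python
# Toy illustration of one published detection idea: generated images often
# show an unusual distribution of energy in the high-frequency band of their
# Fourier spectrum. The threshold here is an arbitrary placeholder; real
# detectors learn this boundary from large labelled datasets.
import numpy as np

def high_freq_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy lying outside a low-frequency disc."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = min(h, w) // 8
    low_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    total = spectrum.sum()
    return float(spectrum[~low_mask].sum() / total) if total > 0 else 0.0

def looks_synthetic(gray_image: np.ndarray, threshold: float = 0.5) -> bool:
    """Arbitrary cut-off for this sketch only."""
    return high_freq_ratio(gray_image) > threshold

# Usage with a random stand-in frame; a real pipeline would decode video
# frames and convert them to grayscale first.
frame = np.random.rand(256, 256)
print(f"high-frequency energy ratio: {high_freq_ratio(frame):.3f}")
print("flagged as possibly synthetic" if looks_synthetic(frame)
      else "no flag raised")
```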

Will AI researchers, law enforcement agencies and tech industry stakeholders come together and deploy a quick fix against these ‘intelligent’ cybercriminals as soon as possible?

Only time will answer that question. Until then, we need to practise the virtue called ‘caution’. Yes, technology has become part and parcel of life in the 21st century, but the same technology has been given a demonic form by threat actors. Staying constantly updated about this demonic form and practising caution in our digital lives will be our best defence, till the tech sector stakeholders come up with a potent weapon against the deepfake menace.
