
AI-driven account takeover: sophisticated or just convenient?

Learn what has changed in the fraud world since account takeover and AI have become a powerful combo. But have they, truly?


Account takeover - a fraud type that has been around for decades, ever since people started transacting online - is still one of the most common threats facing online businesses. Even without AI, account takeover has been a fruitful tactic for fraudsters. What's different now is that with the arrival of generative AI and other easy-to-use automation tools, cybercriminals can commit fraud faster and more easily than before.

More importantly, these new tools, which aren't necessarily cutting-edge, can offer fraudsters a bigger payoff and maximize their ROI. That’s why we shouldn't ignore how powerful they can become. 

We're going to dive into what account takeover looked like before AI got involved, what's happening now that AI is part of the picture, and what the future might hold for fraud and AI. You'll also learn that account takeover (ATO) hasn't become high-tech overnight. Yet, driven by ambitious fraudsters, experts and newbies alike, and by tools anyone can get their hands on, it can become harder to fight.

Account takeover before AI 

The methods and tools that fraudsters had been using before generative AI came into play are effective enough: year after year, this type of crime has been growing. Here are the methods fraudsters have been using so far to gain unauthorized access to accounts:

Credential stuffing. They'd take usernames and passwords that had already been leaked or stolen and try them on various accounts, hoping that some people used the same credentials more than once.

Social engineering. They'd send fake emails, texts, or phone calls to trick people into giving away their login details or into clicking on malicious links that could steal sensitive information.

Keylogging. They'd install malware on a user's device to capture keystrokes and send the data back to the attacker, revealing login credentials and other sensitive information.

Brute-force attacks. Using automated software, fraudsters would just keep guessing passwords, often aiming for ones that are simple or common.

These methods work, and fraudsters get away with them, when they also manage to spoof the device fingerprint using dedicated software, most often acquired from the Darknet, such as antidetect browsers or dedicated privacy extensions.
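On the defender's side, the simplest of these methods, credential stuffing and brute force, are typically countered with velocity checks on failed logins. Here is a minimal sketch of such a check; the class name, thresholds, and the choice of source IP as the tracking key are illustrative assumptions, not a description of any particular product:

```python
from collections import defaultdict, deque

# Sliding-window velocity check: flag a source (e.g. an IP address)
# that produces too many failed logins in a short period -- a common
# first-line defense against credential stuffing and brute force.
class FailedLoginTracker:
    def __init__(self, max_failures=5, window_seconds=60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # source -> failure timestamps

    def record_failure(self, source, now):
        """Record one failed login; return True if the source should
        now be throttled or challenged (e.g. with a CAPTCHA)."""
        q = self.failures[source]
        q.append(now)
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures

tracker = FailedLoginTracker(max_failures=5, window_seconds=60.0)
for second in range(6):
    suspicious = tracker.record_failure("203.0.113.7", now=float(second))
print(suspicious)  # sixth failure inside 60 s exceeds the threshold: True
```

Real deployments combine several such signals (device fingerprint, geolocation, per-account counters) rather than relying on a single IP-based counter, which antidetect tools and residential proxies are specifically designed to defeat.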

What resources and tools do fraudsters use from the Darknet?

The Darknet is the richest marketplace for stolen data and malicious software - the place where fraudsters can buy what they need to do their jobs. Here, they can find valuable data such as credentials, card numbers, gift cards, and device information. On top of that, they can get their hands on tools to perform their work efficiently, such as keyloggers, phishing kits, antidetect browsers, residential proxies, VPNs, automatic account creation tools, and more. None of these tools are AI-powered, and yet they've been doing their job for many years.

Account takeover in the AI era

Generative AI has changed the game, but not in the way most would expect. While AI can now improve any tactic, good or bad, what it has actually brought fraudsters is not a sophisticated approach designed to trick the most advanced fraud detection software. It has brought speed, convenience, and an improved ROI.

Fraud is a costly business. All the previously mentioned techniques come with hefty price tags, demanding both time and money.


Experienced fraudsters often prefer to handle things themselves, taking a DIY approach, while others turn to crime-as-a-service, which tends to be more expensive than using their own skills and resources. But with the rise of large language models (LLMs), committing fraud becomes simpler, cheaper, and easier to run at scale.

The evil cousins of ChatGPT

With a bit of know-how and creativity, it's possible to manipulate ChatGPT to carry out unethical tasks. However, why go through all that trouble when there are specialized versions of LLMs available on the darker corners of the internet?

Models like WormGPT and FraudGPT are used to create phishing campaigns and malware. However, the quality of what they produce is minor-league, especially in the case of malware, because they are based on older, less capable models than something more advanced like GPT-4.

Open-source models that are easy to train

More open-source models are becoming available. They're not harmful on their own, nor designed to help anyone commit fraud. However, because of their unrestricted, uncensored nature, fraudsters can adapt them to their advantage and use them for various bad purposes, such as launching phishing attacks or creating malware. Since there are no built-in limits to prevent this misuse, there is no need to jailbreak the AI, unlike commercial models such as ChatGPT or Bard, which do try to prevent it. So, fraudsters are likely to use these models for account takeover and other types of fraud.

Future trends 

Fraud might soon be more about quantity than quality: using AI to commit fraud is significantly cheaper, leading to a higher number of attempts, which in turn is likely to show a positive ROI. Ironically, AI, an innovative technology of this century, is more affordable to use for fraud than traditional (yet advanced) tools.

The potential profits are shared on the sly within crime-as-a-service communities on the Darknet, so it's hard to know for sure the true gains of AI-driven fraud. Experienced fraudsters might continue to build their schemes themselves, while the less savvy ones may continue to modify open-source large language models (LLMs).

In such an environment, it's crucial not to become the low-hanging fruit. As fraudsters' methods become simpler, attacks on your business and customers might increase.

We can't share all the details here because we don't want to help fraudsters with our knowledge. If you want to learn how to protect your business from the latest fraudulent tactics and how our fraud detection powered by AI and darknet insights can help, please contact us.