How to detect and avoid fraud in the age of AI
Rapid advances in technology have made it easier for scammers to research and deceive people and organizations.
Artificial intelligence, or AI, is a catch-all term for technology and tools that are streamlining and automating many business and personal tasks. While experts debate just how much “intelligence” these tools contain, there’s no denying their effects as businesses and individuals put them to use.
Like any technology, AI can serve malicious intentions. Common scams and financial fraud attempts, such as business email compromise and imposter scams, can become much more convincing and easier to deploy with AI tools.
While financial institutions and businesses are alert to this threat, families must also protect themselves. “For years, banks have developed ways to authenticate a customer, like asking for data only you would know before we provide a service,” says Sue Ross, Head of Financial Crimes Prevention at Regions Bank. “As AI-assisted fraud emerges, people will need to take extra steps to authenticate their bank or anyone they do business with.”
Education is still the best fraud defense, whether you’re protecting your own assets and identity or those of your company.
How can AI enable fraud?
Most types of fraud depend on winning a person’s trust. Through manipulative tactics that coax people into providing sensitive personal or financial information (a.k.a. “social engineering”), fraudsters convince their targets that they are communicating with someone they know or someone who works for a legitimate organization.
AI has the potential to make these scams more effective and harder to spot. “We see fraudsters using large language models to craft written and verbal communications that look much more authentic than they did even a few years ago,” says Ross. “Where we used to advise people to look out for phishing emails with spelling mistakes and odd phrasing, now scammers can sound just as polished as a legitimate professional or friend.”
AI tools can also help fraudsters manipulate video feeds or photos to create so-called deepfakes, fabricated images or videos that convincingly impersonate real people.
Generative AI has also led to breakthroughs in speech synthesis, or “voice cloning.” With just a few seconds of a voice recording, text-to-speech algorithms can produce a believable reproduction of an actual person’s voice. These tools could produce messages that sound like a boss, a financial advisor or a relative to make a request for money or sensitive information seem legitimate.
Ross notes that fraudsters can harness AI tools to collect information about a person or organization within seconds, which they can use to make scam communications more convincing. They can also make it much easier to build sham websites or generate emails that bear marks and logos of known institutions.
“AI can make fraudsters more dangerous,” says Ross. “They don’t need to be experts in building fake websites or apps or harvesting personal information to build synthetic identities. Generative tools can do a lot of heavy lifting.”
How to watch for AI-assisted fraud
The advice to “trust but verify” still applies in the age of AI. But everyone will need to slow down and question more communications than before, especially when they involve money or sensitive information.
“Fraudsters have tools that create fake security alerts and outreach messages. Consumers and employees will need to adjust and take extra steps to verify it’s not a scammer who has collected enough information—and is using good enough tools—to sound like someone we know and trust,” says Ross.
Other ways to defend yourself include:
- Being careful about what personal information you share online. “AI tools can help fraudsters harvest more information faster than ever before,” says Ross. “It’s even easier for them to pretend to be you or someone who knows a lot about you.”
- Storing contact information for your bank and other organizations that can access your financial or personal information, and regularly updating it. This precaution can help you spot fake fraud alerts or other interactions in which you might actually be talking to a scammer.
- Partnering with financial institutions and businesses with strong security controls. Fraud prevention is increasingly a two-way effort. Wherever possible, work with institutions that have strong anti-fraud controls and identity protection mechanisms. Take the time to learn how these organizations will communicate with you and through what channels.
Every person who uses digital technology should remember that fraud is a universal threat. “The biggest risk remains thinking it can’t happen to you,” says Ross. “Everyone is a potential target. AI just increases the chance that if we’re not cautious and act too fast, we’ll believe a fake message or website is safe.”
Talk to your Regions Wealth Advisor about:
- Providing you with our latest thinking about fraud.
- Connecting you with our team of fraud specialists.