AI Scams: How Fraudsters Are Using Artificial Intelligence to Target Consumers
In 2023, the Federal Trade Commission (FTC) received 2.6 million fraud reports, with losses totaling more than $10 billion, the highest annual loss ever reported. The largest share of those reports were imposter scams, in which a fraudster poses as a bank’s fraud department, the government, a business, a relative, a love interest, or a technical support representative. As technology advances, scammers are finding new ways to exploit unsuspecting individuals, with artificial intelligence (AI) playing a key role in their tactics.
As AI becomes more accessible and sophisticated, scammers are leveraging it to gain access to individuals’ accounts and drain them of money, points, and miles. The FTC is working to thwart AI-enabled impersonation: it has proposed a rule prohibiting the impersonation of individuals, an extension of its existing rule against impersonating businesses or government officials. The proposal takes aim at deepfakes: images, audio, or video digitally manipulated with AI so that a person appears to say or do something that never happened.
One common way scammers use AI is to impersonate loved ones. By cloning a relative’s voice or generating fake images, scammers can build a convincing narrative that tricks individuals into sending money. For example, a scammer could clone a child’s voice and claim the child is in distress and needs immediate financial help. Scammers can also spoof email addresses and even mimic an individual’s writing style to make their messages more convincing. A few basic header checks, sketched below, can help expose a spoofed sender.
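For readers comfortable with a little code, here is a minimal Python sketch of those header checks. It assumes you have saved a suspicious message as a raw .eml file (the filename suspicious_message.eml is illustrative); it compares the visible From domain against the Return-Path and scans the receiving server’s Authentication-Results header for failed SPF, DKIM, or DMARC checks. Mail providers do this far more rigorously, so treat this as a rough screen, not a verdict.

# Rough screen for a spoofed sender in a saved raw email (.eml file).
# The filename below is illustrative; save the suspicious message first.
from email import policy
from email.parser import BytesParser
from email.utils import parseaddr

with open("suspicious_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

_, from_addr = parseaddr(msg.get("From", ""))
_, return_addr = parseaddr(msg.get("Return-Path", ""))
from_domain = from_addr.rpartition("@")[2].lower()
return_domain = return_addr.rpartition("@")[2].lower()

# A Return-Path domain that differs from the visible From domain is a
# common (though not conclusive) sign of spoofing.
if return_domain and return_domain != from_domain:
    print(f"Warning: From domain {from_domain!r} differs from "
          f"Return-Path domain {return_domain!r}")

# Scan the receiving server's recorded authentication verdicts, if any.
for header in msg.get_all("Authentication-Results") or []:
    verdicts = str(header).lower()
    if any(f"{check}=fail" in verdicts for check in ("spf", "dkim", "dmarc")):
        print("Warning: a sender-authentication check failed:", header)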
Another prevalent tactic is credential stuffing, in which attackers use automated scripts to try usernames and passwords leaked from one site against accounts on many others. The method is particularly effective against loyalty accounts, which often receive less scrutiny than bank accounts, letting scammers quietly drain points or miles.
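Credential stuffing only works when a password has already leaked, so checking whether yours has is a useful defense. Below is a minimal Python sketch using the public Pwned Passwords range API from Have I Been Pwned; thanks to its k-anonymity design, only the first five characters of the password’s SHA-1 hash are ever sent over the network. The example password is illustrative.

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    per the Pwned Passwords range API. Only the first five characters
    of the SHA-1 hash leave this machine (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<35-char hash suffix>:<count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("password123")  # illustrative example only
    if hits:
        print(f"Seen in {hits} breaches; stop using this password anywhere.")
    else:
        print("Not in known breaches, but a unique password is still best.")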
As AI technology evolves, there is growing concern that scammers will turn to deepfake video. The ability to impersonate someone else on video could further deceive individuals and institutions, posing a significant threat to online security. As the technology progresses, verifying the authenticity of digital interactions may become increasingly difficult, threatening to erode trust on the internet.
To protect themselves from AI scams, individuals can take several precautions. Establishing a family verification method, such as a secret password or a question only a real relative could answer, helps confirm a loved one’s identity and defeats voice-cloning impersonation. Treating urgent requests with suspicion, and taking the time to verify information before acting, also goes a long way.
Reporting fraud and attempted fraud is essential to combating scammers. Blocking suspicious phone numbers or email addresses and flagging them as spam or phishing helps keep scammers from targeting others. If fraud does occur, contacting the bank or loyalty program’s customer service and filing a report with the FTC can aid in recovering lost funds and stopping repeat attacks.
Regularly monitoring account activity, setting up account notifications, and keeping security settings up to date round out the defense. By staying informed and proactive about online security, individuals can avoid becoming victims of AI-assisted fraud. As the technology advances, remaining vigilant and taking these precautions is the best way to safeguard personal information and financial assets from scammers wielding AI.