How Hacks Happen
Hacks, scams, cyber crimes, and other shenanigans explored and explained. Presented by your friendly neighborhood cybersecurity gal Michele Bousquet.
2025: The Year of AI Scams
AI is undoubtedly the technology of the year for hacks and scams. While scammers are busy using AI to make scams more convincing, scam baiters are fighting back with ingenious grandma-style AIs that keep scammers busy for hours on end. There have also been some spectacular busts of scam centers. What’s next, AI scammers chatting with AI scam baiters? Now, that would be entertaining.
Resources
- Meet Daisy, the AI 'granny' chatbot that wastes the time of phone scammers
- SpaceX disables thousands of Starlink devices being used by Myanmar scam centers
- China sentences 5 to death for building, running criminal gang fraud centers in Myanmar's lawless borderlands
- Myanmar has declared a 'zero tolerance' policy for cyberscams, but the fraud goes on
- Content creators, banks use AI to waste scammers' time
- Kitboga on YouTube: I Built a Bot Army that Scams Scammers
Join our Patreon to listen ad-free!
Happy end of 2025 from How Hacks Happen. This is Michele Bousquet, and I wanted to end the year on a positive note with some good news about hacks and scams.
Well, I didn't have to look far to find the technology of the year when it comes to hacks and scams, for both making them worse and for fighting them. Can you guess what it is? That technology is artificial intelligence, otherwise known as AI. This year we saw the use of AI for hacks and scams skyrocket in a bunch of different ways.
We also saw AI being used to combat hacks and scams. This new technology, something we only dreamed about just a few years ago, is having a huge impact on entire industries for both the good guys and the bad.
So let's take a look at the year in AI from the perspective of hackers and scammers and also from the other side. Those who combat these criminals.
Before we dive in, let's do a refresher on what artificial intelligence actually is. AI is a technology that functions like a robot researcher with access to millions upon millions of pieces of information.
But unlike a simple search tool, like what Google was a few years ago, one that matches on just a single piece of data, AI also looks at patterns and relationships, so it can make new connections that follow those same patterns.
Like for a text-based AI program, you can ask it to write a poem about laundry detergent in the style of William Shakespeare.
“O fragrant knight, thou arm’st my shirts with peace,
So linen lies in lavendered repose.
The tyrant odor flees before thy charge…”
And it will combine what it knows about the patterns in Shakespeare’s style and the known and advertised merits of laundry detergent…
“No thread is torn beneath thy measured might,
No color slain, though stains are put to death…”
…and the AI will come up with something in seconds. Now, will this poem be a work of art? Probably not, but it will rhyme and it would probably be pretty funny.
“For in this swirling stage of soap and spin,
Thou prov’st: clean garments are a noble end.”
A program that uses AI is called an AI model because it's modeled, or shaped, in a certain way, and all the AI models are a bit different. You've probably heard of some of these: there's ChatGPT, DeepSeek, Gemini, a whole long list of them these days.
But before one of these AI models can be put to use, it goes through a training phase where it looks through zillions of existing pieces of data to learn patterns, and it practices giving answers, and those answers are then evaluated by humans. Humans say, “This answer's good. This one, not so much,” and send that evaluation back into the AI model. So it eventually learns what's right and wrong, which can be subjective, but it's all part of the training.
Then the AI model goes back for more training and again and again. And then, when the AI model seems to be doing things correctly, the training is done and the AI model is ready for use. That next phase is called inference because the AI model's job is to infer answers based on what it's learned. So there's training, then inference.
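To make the two phases concrete, here's a deliberately tiny Python sketch of the idea: a "model" that learns from human-rated examples (training), then answers new questions from what it learned (inference). This is a toy illustration of the training-then-inference cycle, not how a real AI model works internally.

```python
def train(examples):
    """Training: keep only the answers that humans rated as good."""
    knowledge = {}
    for question, answer, human_rating in examples:
        if human_rating == "good":
            knowledge[question] = answer
    return knowledge

def infer(knowledge, question):
    """Inference: answer new requests from what was learned in training."""
    return knowledge.get(question, "I don't know yet.")

# Human evaluators labeled these practice answers during training.
feedback = [
    ("capital of France?", "Paris", "good"),
    ("capital of France?", "Lyon", "bad"),
    ("2 + 2?", "4", "good"),
]

model = train(feedback)
print(infer(model, "capital of France?"))  # Paris
print(infer(model, "capital of Spain?"))   # I don't know yet.
```

A real model doesn't store a lookup table, of course; it adjusts millions of internal parameters. But the rhythm is the same: learn from rated examples first, then answer from that learning.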
Whenever you see an AI model that is open to the public for use, like ChatGPT, that model is being used for inference. But it also still does a bit of training. Like if it gives you a wrong answer and you say, “Hey, that answer's wrong,” the model will take in that information and add it to its store of training.
If you haven't played around with any AI models yet, I highly recommend that you do so, if only for 15 minutes, so you can get a feel for what we'll all be up against from the scammer community in the years to come.
You can test out AI for free in a bunch of different places. The easiest one is called ChatGPT. Just go to chatgpt.com, open a new chat and type a question. It can be a real question:
"How much does a hotel room cost in Paris?"
or a ridiculous one:
"What kind of diapers should I use in zero gravity?"
ChatGPT doesn't care, and it's gonna try and answer no matter what you ask.
"Is it possible to travel faster than the speed of light?"
Once you get started, see how long it takes you to stop. I bet you that 15 minutes turns into two hours real quick.
Okay, back to our regularly scheduled programming. This is Michele Bousquet of How Hacks Happen, and we're talking about the 2025 technology of the year, artificial intelligence, and how it can be used for good and for bad.
Remember when just a few short years ago, you could spot a scam email or text a mile away because the English was so bad? The grammar, the spelling, the strange wording, it was all off, a dead giveaway that the scammer's first language wasn't English. Well, those days are over, because a scammer can get ChatGPT to fix their written English in about two seconds.
“Please pay to the fee of the Facebook for the one million dollars prize.”
But nowadays, a scammer can just put text into an AI model and the AI will fix it up and make all the grammar correct.
“Pay the entry fee to enter Facebook’s one million dollar sweepstakes.”
This is making it a lot harder to spot these types of scam texts.
So you gotta use your noggin in some different ways. Like if you get an email saying that your credit card account has been charged some money for something you didn't buy, and the English all looks fine, it doesn't mean it's legit. If there's a phone number there in a big font, it's probably a scam.
If you think you might actually have been charged, try contacting your credit card company instead and see if that charge is actually there. Chances are, there's no charge at all.
For romance scams, these are also getting harder to spot because scammers, again, are using AI before they send their text messages.
Same with crypto scams, where some stranger contacts you and tries to make friends, the two of you exchange a few messages. You're getting along great. Next thing you know, they start telling you about this great investment opportunity, one that's made it possible for them to go on vacation, buy a new car, buy a house.
You used to be able to spot these because the English was so bad, but now it's not.
And another way you used to be able to spot a scammer was if they refused to do a voice call or video chat with you. But scammers are using AI to make their voices sound like they have an American accent rather than their real accent. So they can hop on a call with you, and it sounds like, yeah, they really are from Philadelphia or San Francisco or wherever it is they say they're from, not their actual country.
Of course, even with these voice changers, you can sometimes spot the scammer by the type of wording they use. For example, a really common tell is when your so-called boyfriend or girlfriend says, “Have you eaten today?” That's a common expression of caring in Africa, not necessarily the United States or other countries, but a lot of romance scam victims report falling for this because isn't it nice to have somebody who cares enough about you to ask if you've eaten dinner?
Another way that scammers are using AI is to clone voices. I did an earlier episode where I cloned my own voice and played it for you, and it sounded a lot like the real thing. And we hear about voice cloning being used for things like the grandparent scam, where a grandparent will get a phone call that says it's from their grandchild and they're in jail and they need help.
“Grandma, I’m in Mexico and I got arrested. Please don’t tell Mom and Dad.”
It is just a cloned voice of their grandchild, but a lot of people still fall for it, unfortunately. The thing to do if you get a call like this is hang up and try to call your actual grandchild or their parents.
And then there's AI generated video where you can take a photo and make it move around and talk. Like a photo of a celebrity like Johnny Depp or Jennifer Aniston, or even just a photo that they got off of Instagram.
You couple that with a cloned voice or with an altered voice that sounds American, or whatever nationality the person is supposed to be, and you have a fake talking video that can easily fool someone who doesn't know this technology is possible.
“Hi Rachel, it’s your boyfriend Elon. I love you and I'm ready to take you away to my big mansion. I just need $300 in gift cards so I can rent a car.”
These videos can be very convincing. You can kind of tell they're fake if you really take a good look, like there's something wrong with the eyes or with the mouth, or their ear keeps appearing and disappearing. But if you don't know what to look for, you can get fooled.
Now, this voice-changing technology and AI video technology can work in real time, as the person is speaking. But right now there's a delay between when the scammer talks and when the changed voice comes out, or between when you ask a question and when the video starts to move. The tools are going to get better, though, and that delay will get shorter and shorter.
What we're seeing more of now is the scammer making a video ahead of time and then sending it over. Say the victim asks, “Hey, send me a video of you talking about our upcoming vacation together,” and the video shows up 15 minutes later. That can be pretty convincing, but the thing is, the scammer needed those 15 minutes to go make the video.
And the tools to generate this text and clone voices and make the AI generated video, a lot of these tools can be found for free or for a subscription price of just a few dollars a month.
So that's the bad news. Scammers can use AI to do bad things. But then on the good side, we have people using AI for good.
This is one of my favorite stories of the year, UK Telecommunications company Virgin Media O2 recently developed their own AI bot named Daisy to help combat scammers.
Daisy, whose voice was sampled from a real live elderly woman, keeps scammers busy on the phone with chitchat about her hobbies like knitting, and she pretends to play along, but she never quite gets around to giving up things like her credit card number.
Daisy loses track of where her purse is. She forgets what she's looking for. She can't find her phone, she can't find her glasses, and then she goes on a long monologue about birds or something. Daisy is wonderful.
She was designed to waste scammers' time, and she does it brilliantly. If she can waste half an hour of a scammer's time, that's half an hour when the scammer can't be scamming someone else.
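Virgin Media O2 hasn't published how Daisy works, but the core trick, never reveal anything, always deflect with chitchat, is easy to sketch. Here's a bare-bones Python toy in that spirit; the keywords and replies are all invented for illustration, and a real system would use an actual language model instead of canned lines.

```python
import itertools

# Words that suggest the scammer is fishing for sensitive details.
SENSITIVE = ("card", "account", "password", "pin", "bank")

# Grandma-style deflections, rotated endlessly.
DEFLECTIONS = itertools.cycle([
    "Oh dear, where did I put my purse? Hold on, love.",
    "Did I tell you about the lovely robin in my garden?",
    "My glasses... I had them a moment ago. One second, dear.",
])

def daisy_reply(scammer_message):
    """Never reveal data: deflect sensitive requests, chat otherwise."""
    if any(word in scammer_message.lower() for word in SENSITIVE):
        return next(DEFLECTIONS)
    return "That's nice, dear. I was just doing some knitting."

print(daisy_reply("Read me your card number now!"))
# → "Oh dear, where did I put my purse? Hold on, love."
```

The design point is that the bot has no sensitive data to leak in the first place, so the worst outcome of any conversation is wasted scammer time.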
Daisy actually went public in November of 2024, so technically late last year, but that's close enough that I'm counting her for 2025.
The use of bots in this way for the good side does bring up another sort of bad but good story.
You may have heard about these scam compounds in Myanmar, or Burma, as it used to be called. Myanmar has been involved in civil unrest for the past five years, so things are a little chaotic there. But over the past couple of years, there have been multiple reports of Malaysians being tricked into working at scam centers in Myanmar.
It starts when they are promised a customer service job in Thailand and they're hired because of their good English. But then they are driven for like two days, far from their home, and they end up in Myanmar with their passport taken away and living in this compound and told they can't leave until they pay off what it cost to transport them there.
And they're forced to live and work in this guarded compound and run romance scams and investment scams, sometimes up to 14 hours a day under threat of physical punishment.
This has gone on for years, but there was little the Malaysian government could do to bring people home. The podcast Scam Inc, by The Economist, has an entire series about it, with interviews with people who actually did get out and what they went through.
It's basically human trafficking with usually Chinese scammers running the entire operation, and the Myanmar border being really hard for Thai law enforcement to penetrate without the risk of creating some sort of international incident.
But in the past few months, we've had some good news. First, in October, Starlink satellite communications were cut off in Myanmar, which crippled many of these scam offices' ability to get on the internet.
And the Myanmar military raided one of the largest scam compounds, KK Park, which is right near the border with Thailand, and freed more than 2,000 people who were in forced labor there. Myanmar really doesn't want to be known as Scam Central, and they're really cracking down.
Then in November, China sentenced several of their own citizens to death for their part in these scam rings.
I've been following these stories of human trafficking for months and was really happy to hear that these scam rings are being broken up, not just for those being scammed, but for the people who were basically kidnapped and forced into the life.
While it sucks for a scam victim to get scammed out of a few thousand dollars, getting physically beaten or deprived of food for not scamming enough people, and having no hope of ever getting out of there, yeah, that would be worse.
So it's good that these scam rings are being broken up and there's hope that the kidnapping will end altogether, because scammers are reportedly saying that pretty soon they won't need real people to do their scams anymore because they can just set up AI bots to chat with people and do the scamming. So that's kind of good news, I guess.
Speaking of bots, one of my favorite YouTubers, Kitboga, came up this year with his own army of bots, sort of like Daisy, except that Kitboga has more than a dozen variations. These bots serve the same purpose as Daisy: keeping scammers busy so they can't call real people and try to scam them.
He also has a fake Bitcoin claim website where he sends scammers to collect fictitious Bitcoin that one of Kitboga's many characters has pretended to deposit into a Bitcoin ATM. The ATM prints out a little receipt, Kitboga sends the scammer a picture of the receipt, and then the scammer goes to the website to try to redeem the Bitcoin.
It has this verification process that makes scammers draw different animals or take a selfie with a shoe on their head, and no matter how hard they try, the scammers will never get the Bitcoin because it doesn't exist.
Some of these scammers have spent hours, even days navigating this endless maze trying to get at the Bitcoin, and every minute that they're spending in the maze is a minute when they're not scamming somebody.
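The maze works because it's bottomless: completing a step never gets you closer to a payout, it just spawns another step. Here's a tiny Python sketch of that structure; the specific verification tasks are made up for illustration, not taken from Kitboga's actual site.

```python
import random

# Invented examples of the kind of absurd verification tasks described.
TASKS = [
    "Draw a giraffe and upload a photo of it.",
    "Take a selfie with a shoe on your head.",
    "Recite the alphabet backwards on video.",
]

def next_verification_step(steps_completed):
    """No matter how many steps are done, there is always one more."""
    return random.choice(TASKS)

def is_payout_unlocked(steps_completed):
    """The Bitcoin does not exist, so this can never return True."""
    return False

print(next_verification_step(0))  # a random task from TASKS
print(is_payout_unlocked(1000))   # False, always
```

Every minute a scammer spends drawing giraffes for a payout function that is hardwired to say no is a minute they're not spending on a real victim.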
So that's the How Hacks Happen roundup of AI being used for hacks and scams, both good and bad. The best way to protect yourself is to know how AI works and what it can do. So get in there, play around with it. Clone your voice. Clone your friend's voice. Make a video of your friend talking like Johnny Depp. Have a good time with it and you'll be safer just because you know. Plus, it's really fun.
As for 2026 coming up soon, I have a prediction. We'll see AI scam bots calling people, but we'll also have AI bots answering those calls. So I bet we'll see our first recorded instances of an AI scam bot trying to scam an AI grandma. And the call will go on for hours and hours with the scam bot never giving up, and the grandma never giving out any information either.
This is Michele Bousquet from How Hacks Happen wishing you a prosperous and scam-free new year. Bye-bye.