I want to start by saying that I don’t have a specific person in mind when writing this. I use “him” as a placeholder.
Picture this:
It’s a Tuesday night in Stockholm. A student finishes a late shift at the bar, hops on the night bus home, and stops by a kiosk for a korv med bröd and a cheap energy drink. The card gets swiped three times in under fifteen minutes: 25 SEK, 15 SEK, 30 SEK.
Back in the student corridor, the Wi-Fi is useless (again), so he logs in through a VPN to pay his rent. He updates the delivery address on his food app because his corridor has rotating mail slots, and things tend to disappear.
Nothing strange about this life, except to a fraud detection system.

To the machine, those small, repeated payments look like someone testing stolen cards. A VPN? Must be hiding something. Multiple delivery addresses? Suspicious. Odd-hour shopping? Definitely a red flag.
Within minutes, the system locks him out.
The energy drink never arrives.
That’s the hidden cost of fraud detection: it doesn’t just catch crime. It sometimes catches survival.
Fraud Models 101: Why They Flag the Wrong People
Fraud detection is everywhere, from CashApp to Klarna to your bank card. These systems are meant to spot “unusual” behavior and shut it down fast.
Traditionally, they’ve been rule-based. A few classic red flags (sketched in code after the list):
- More than three small transactions in a short time = possible card testing.
- Logging in from a new location = risk.
- VPN or proxy = hiding intent.
- Multiple addresses on file = identity theft.
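Here is a minimal sketch of what such hard-coded rules can look like in practice. The thresholds, field names, and the `Transaction` record are illustrative assumptions, not taken from any real fraud engine:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative transaction record; a real system tracks far more fields.
@dataclass
class Transaction:
    amount_sek: float
    timestamp: datetime
    country: str
    uses_vpn: bool

def rule_based_flags(history: list[Transaction], addresses_on_file: int) -> list[str]:
    """Classic hard-coded rules. Every threshold here is a guess baked in
    by a developer, not something learned from the individual customer."""
    flags = []
    if not history:
        return flags
    latest = history[-1].timestamp

    # "More than three small transactions in a short time = possible card testing."
    recent_small = [t for t in history
                    if t.amount_sek < 50 and t.timestamp > latest - timedelta(minutes=15)]
    if len(recent_small) > 3:
        flags.append("possible card testing")

    # "Logging in from a new location = risk."
    if len({t.country for t in history}) > 1:
        flags.append("new location")

    # "VPN or proxy = hiding intent."
    if history[-1].uses_vpn:
        flags.append("vpn or proxy")

    # "Multiple addresses on file = identity theft."
    if addresses_on_file > 2:
        flags.append("multiple addresses")

    return flags
```

Notice that the student from the intro trips three of these four rules on a completely ordinary Tuesday.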
And yes, these rules once worked decently against straightforward fraud. But rules don’t age well. Life has moved faster than the models built to protect it.
Machine learning promised a smarter solution: algorithms that adapt and learn from data. And they’re better at catching fraud, no doubt. But there’s a catch: If the dataset reflects only “majority” behavior, everything else gets treated as an anomaly.
In most of the developed world, that “majority” often means stable income, permanent address, consistent online habits. Anything else? Red flag.
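To make that concrete, here is a hedged sketch using scikit-learn’s IsolationForest: train an anomaly detector on synthetic “majority” behavior and the student’s perfectly ordinary evening comes back as an outlier. The features and numbers are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, amount_sek, txns_in_last_15_min, uses_vpn]
# Synthetic "majority" behavior: daytime shopping, mid-sized amounts,
# one transaction at a time, no VPN. Invented purely for illustration.
rng = np.random.default_rng(0)
majority = np.column_stack([
    rng.normal(14, 3, 5000),     # purchases clustered around midday
    rng.normal(300, 120, 5000),  # moderate amounts in SEK
    np.ones(5000),               # a single transaction per window
    np.zeros(5000),              # no VPN
])

model = IsolationForest(contamination=0.01, random_state=0).fit(majority)

# The student from the intro: three tiny purchases just after midnight,
# paid for over a VPN because the corridor Wi-Fi is down.
student = np.array([[0.5, 25.0, 3.0, 1.0]])
print(model.predict(student))  # almost certainly [-1], i.e. "anomaly"
```

The model isn’t malicious. It has simply never seen a life that looks like his.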
When “Unusual” Becomes “Suspicious”
Here’s the problem: fraud detection doesn’t operate in theory. It collides with reality.
Take a few examples closer to home:
- Students making repeated tiny purchases with their Mecenat (student benefits) discounts because CSN (student money) hasn’t dropped yet.
- Shift workers tapping their cards after midnight on the way home.
- Young renters moving from sublet to sublet, constantly updating their address.
- Migrants sending remittances from public Wi-Fi or borrowed laptops.
- Frequent travelers whose logins jump from Arlanda to Berlin to Copenhagen to Barcelona.
None of these are crimes. But fraud systems often treat them like one.
And the consequences sting: frozen accounts, blocked CashApp payments, online shopping carts abandoned because the bank thinks you’re “suspicious.”
It’s not just a technical hiccup. For the person living paycheck to paycheck, every block hurts. A missed payment isn’t just inconvenient; it can mean a late rent fee, a hungry evening, a lost gig shift.
The Bias Baked Into Data
Here’s an uncomfortable truth: fraud models reflect the assumptions of their creators.
If the training data is filled with middle-class, salaried, steady consumers, the kind with stable housing, clean credit histories, and 9–5 jobs, then the system learns that as “normal.”
So what happens when a Swedish student logs in from a VPN because the corridor Wi-Fi is trash? Or when someone juggles three addresses in one year because of Stockholm’s brutal rental market?
The model doesn’t see reality. It sees risk.
And unlike a human bank clerk who might nod and say “ah, student life,” the algorithm doesn’t care. It blocks with no nuance, no empathy, and rarely a second chance.
The Human Cost of Being Flagged
Let’s play out some everyday scenarios:
– A 22-year-old student can’t pay for a night bus ticket because his card has been “temporarily suspended.” He ends up walking home across town.
– A gig worker renting second-hand apartments keeps getting flagged for “unusual activity” every time she updates her delivery address. She starts avoiding certain platforms altogether.
– A young immigrant sending money to family abroad gets rejected at the worst possible moment. His family waits an extra week for support.
These aren’t fringe cases. They’re everyday life for people who don’t fit neatly into the safe, stable patterns a fraud model was trained to trust.
The message people hear isn’t “we’re keeping you safe.” It’s “your life looks suspicious.”
Guardian or Gatekeeper?
Fraud detection is positioned as a guardian of trust. And it is. Without it, the system collapses under scams and stolen identities.
But there’s another side. These same systems also act as “gatekeepers of access.” They don’t just decide what gets blocked; they decide WHO gets blocked.
When a Swede with a steady job and a mortgage buys something online, the system waves it through. When a broke student with three roommates does the same, the system hesitates.
That’s not just a tech problem. That’s a design problem.
Are these systems protecting consumers? Or are they silently reinforcing who gets to participate fully in the digital economy?
Towards Inclusive AI in Fraud Detection
So what would it take to fix this? It’s not about going soft on fraud. It’s about smarter, fairer systems. A few principles:
1. Contextual risk scoring
Don’t block just because one signal looks odd. Consider the bigger picture: does this actually resemble fraud, or just irregular but explainable life? (A small code sketch of this idea, combined with principle 2, follows this list.)
2. Transparency in decisions
If a payment is blocked, explain why. “Suspicious activity” isn’t enough. People deserve clarity, not mystery.
3. Appeal channels
Let flagged users challenge decisions. Students, shift workers, migrants: all need a way to prove “I’m me, not a fraudster.”
4. Diverse training data
Train models on behaviors that reflect reality, not just the majority. That means late-night spending, VPN logins, frequent address changes: the very stuff life actually looks like for millions.
5. Shift the design philosophy
Stop building fraud detection as a digital wall. Build it as a filter that separates real risk from everyday struggle.
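Putting principles 1 and 2 together, a contextual scorer might look roughly like the sketch below: no single signal blocks on its own, mitigating context zeroes a signal out, and every decision ships with a readable explanation. The `Signal` structure, weights, and threshold are assumptions for illustration, not a production scoring model.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str        # e.g. "vpn", "odd_hour", "repeated_small_txns"
    weight: float    # how strongly this signal correlates with fraud
    mitigated: bool  # is there benign context that explains it?

def contextual_risk(signals: list[Signal], block_threshold: float = 0.8):
    """Principle 1: score the whole picture, not one flag.
    Principle 2: return an explanation alongside the decision."""
    score = 0.0
    reasons = []
    for s in signals:
        if s.mitigated:
            reasons.append(f"{s.name}: explained by context, not counted")
        else:
            score += s.weight
            reasons.append(f"{s.name}: +{s.weight:.2f}")
    decision = "block" if score >= block_threshold else "allow"
    return decision, round(score, 2), reasons

# The student's Tuesday night: every signal looks odd in isolation,
# but each one has benign context attached, so nothing blocks.
night_out = [
    Signal("vpn", 0.4, mitigated=True),                  # corridor Wi-Fi is unreliable
    Signal("odd_hour", 0.3, mitigated=True),             # works late shifts
    Signal("repeated_small_txns", 0.5, mitigated=True),  # kiosk snacks
]
print(contextual_risk(night_out))  # ('allow', 0.0, [...reasons...])
```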
Because here’s the truth: safety and fairness aren’t opposites. They’re two halves of the same trust.
The Future: AI Agents in Fraud Detection
Today’s fraud systems are mostly static: they either block or allow, based on signals they’ve been taught to trust. But the future of fraud detection will look different.
With working, autonomous AI agents said to be just around the corner, we’ll see fraud detection systems that can do more than follow rules. They’ll act more like digital investigators:
- Instead of automatically blocking a transaction, an AI agent could check additional context in real time: “This student has always bought food around this area, even at odd hours.”
- Instead of treating a VPN as an instant red flag, the agent could reason: “This corridor’s Wi-Fi has a history of outages. A VPN here makes sense.”
- Instead of freezing accounts for every address change, an AI agent could look for patterns: “Multiple moves, yes, but consistent identity across payments and devices.”
These systems could adapt dynamically to individual behavior, rather than forcing everyone into the same rigid mold.
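A hedged sketch of what that agent-style review could look like: gather extra context before deciding, let that context explain away individual anomalies, and escalate to step-up verification or a human instead of freezing outright. Every field in the `Context` record is a hypothetical placeholder for data a real agent would fetch from internal systems.

```python
from dataclasses import dataclass

# Hypothetical context an agent might assemble in real time; none of these
# fields come from a real fraud API.
@dataclass
class Context:
    hour: int                      # local hour of the transaction
    uses_vpn: bool
    shops_here_at_odd_hours: bool  # purchase history in this area
    wifi_known_unreliable: bool    # connectivity history for the home network
    address_changes_this_year: int
    devices_consistent: bool       # same devices across all those moves

def agent_review(ctx: Context) -> str:
    """An anomaly only becomes a concern if the surrounding context
    fails to explain it."""
    concerns = []

    # "This student has always bought food around this area, even at odd hours."
    if ctx.hour < 6 and not ctx.shops_here_at_odd_hours:
        concerns.append("odd-hour purchase with no history here")

    # "This corridor's Wi-Fi has a history of outages. A VPN here makes sense."
    if ctx.uses_vpn and not ctx.wifi_known_unreliable:
        concerns.append("VPN with no known connectivity issues")

    # "Multiple moves, yes, but consistent identity across payments and devices."
    if ctx.address_changes_this_year > 2 and not ctx.devices_consistent:
        concerns.append("frequent moves on unfamiliar devices")

    if not concerns:
        return "allow"
    if len(concerns) == 1:
        return "ask for step-up verification"  # talk to the user, don't freeze them
    return "hold for human review"

# The student again: everything looks "odd", and everything is explained.
print(agent_review(Context(hour=1, uses_vpn=True, shops_here_at_odd_hours=True,
                           wifi_known_unreliable=True,
                           address_changes_this_year=3, devices_consistent=True)))
# -> allow
```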
The dream isn’t a softer fraud model — it’s a smarter one. One that uses context, not just correlation. One that learns that “different” doesn’t equal “dangerous.”
But there’s a catch: AI agents can also scale bias if they’re poorly trained. If we don’t design them with inclusion in mind, we’ll just be automating exclusion faster.
So the real challenge for the future is this: when we unleash autonomous AI agents into fraud detection, will they be guardians that understand, or gatekeepers that exclude?
Closing: The Student and the Korv
Let’s circle back.
That Swedish student wasn’t committing fraud when he bought a korv med bröd and an energy drink after a shift. He was just tired, broke, and trying to get through another week before CSN arrived. His “crime” was living outside the pattern the model expected.
Fraud detection will always be necessary. But if we let bias harden into code, we’re not just catching fraudsters; we’re also punishing students, workers, and anyone whose life looks different.
Not everything unusual is fraud. Sometimes it’s just life.
And if AI is to be truly inclusive, it must learn the difference.
