The Case for Real-Time Global Fraud Intelligence

Why policy must catch up with how scams actually operate

Written by Dali Kaafar – CEO of Apate.ai

Executive Summary

  • Fraud operates as a modular, cross-border supply chain, but intelligence still moves within national silos.
  • Loss reporting is structurally late and cannot drive prevention on its own.
  • Payments, crypto, and mule intelligence are strong starting points, but effective disruption depends on sharing a full fraud schema in real time, with evidence-based, accurate signals across networks, campaigns, and individual artefacts.
  • Governments, banks, telcos, and financial services institutions (FSIs) need real-time, structured, and interoperable intelligence exchange models.
  • We are not short on analysis. We are short on systems that can move intelligence fast enough to change outcomes.

 

Fraud is global. Our response is not.

Fraud now scales in ways most systems were not designed to handle. And the gap between how organised fraud operates and how we respond to it is widening – not closing.

Europol, INTERPOL, and UNODC all describe the same pattern. Operations spread across jurisdictions, roles split across borders, and infrastructure that shifts as soon as pressure appears [Europol, 2021–2024; INTERPOL, 2023–2024; UNODC, 2023–2024].

At Apate.ai, we see this from the inside. Our AI systems interact directly with fraud actors across voice and messaging channels – engaging scammers in real time, observing how their scripts evolve, tracking how their infrastructure shifts. What we observe operationally maps precisely onto what Europol, INTERPOL, and UNODC describe in their reporting: modular, distributed, and deliberately designed to absorb disruption.

Australia’s Scamwatch and the US FTC both report billions of dollars in annual fraud losses, while also noting that under-reporting remains significant [ACCC Scamwatch, 2023–2024; FTC, 2023–2024]. The Global Anti-Scam Alliance reported that in 2024, scammers stole approximately USD 1.03 trillion from victims worldwide.

For comparison’s sake, that is roughly equivalent to:

  • the combined annual revenue of every airline in the world
  • the GDP of Taiwan
  • the combined annual revenue of every telecommunications provider globally

Those figures tell you how large the problem is but say very little about how it actually operates.

What really matters is how that scale is produced, and how little of that system is visible end-to-end.

 

The real problem is timing, not awareness

Most fraud reporting systems are structurally reactive. They are designed to capture what happened, not to interrupt what is happening. A victim files a report after persuasion has succeeded, after payment instructions have been delivered, and often after the first fund transfer has cleared. By that point, the scam operation has already moved on.

The average time between a scam payment and victim reporting is measured in days to weeks. Scam infrastructure – phone numbers, domains, payment accounts – typically rotates within hours of detection. Europol’s operational reporting confirms this: infrastructure lifetimes are shrinking, with some disposable SIM-based calling infrastructure lasting under 48 hours before being cycled out [Europol, 2022–2024]. By the time a pattern is identified and shared, the underlying accounts and channels have already been replaced.

The UK National Audit Office and equivalent bodies in Australia and the US have each flagged the same structural issue: fraud response responsibilities are split across agencies with no shared data architecture, and reporting timelines are built around evidence preservation rather than real-time interdiction [UK NAO, 2023–2024]. The result is systems that are well-designed for prosecution but poorly designed for prevention.

There is also a structural interoperability problem that rarely gets discussed plainly. The majority of cross-agency fraud intelligence is still exchanged as free-text incident reports, PDF attachments, or spreadsheet exports. These formats are not machine-readable at scale. They cannot be automatically enriched, correlated against live threat feeds, or used to trigger real-time controls at payment gateways or network layers. They require human triage at every step – which means the bottleneck is not data volume, it is processing latency. Even where structured formats like STIX/TAXII have been adopted for cyber threat intelligence, equivalent standards for scam-specific artefacts (phone numbers, mule account clusters, scam script taxonomies, crypto wallet identifiers) remain inconsistent or absent across jurisdictions.  

The practical consequence: a bank in Australia and a telco in the UK may both have signals pointing to the same mule network, but neither can act on the other’s data without manual intervention. By the time that intervention happens – if it happens – the window for disruption has closed.

 

Fraud now runs as a modular supply chain

To understand why intelligence fragmentation is so damaging, it helps to understand how modern scam operations are actually structured. They do not operate as unified organisations. They operate as loosely coupled service ecosystems – more analogous to a cloud-native software architecture than a traditional criminal hierarchy. Each functional component is provided by a different actor, often in a different jurisdiction, and components are swapped in and out based on availability, cost, and exposure risk.

Lead generation might be handled by a group running malvertising campaigns or bulk SMS spoofing in Southeast Asia. Victim engagement is handed off to call centre operators – sometimes running from compounds in Myanmar, Cambodia, or the UAE – who work from scripted playbooks tailored to specific scam typologies (investment fraud, romance scams, impersonation of government agencies). Payment collection uses a separate layer of recruited money mules or purpose-built shell accounts. Crypto off-ramps and cross-border wire transfers handle final laundering. Those roles are not fixed. They shift as pressure is applied.

This is what makes the ecosystem resilient in a technical sense. There is no single point of failure. Disrupting a call centre operation does not stop the lead generation pipeline feeding it. Seizing a mule account cluster does not interrupt the scripting infrastructure being used for victim engagement. Each disruption is local; the system absorbs it and reroutes. This is not an accident – it is a deliberate design principle that has emerged from years of operational exposure to law enforcement takedowns.

The architectural parallel to distributed software systems is not superficial. Like a microservices deployment with redundant nodes, these operations are built so that no single takedown produces system-wide failure. INTERPOL’s Operation First Light series and Europol’s EMMA operations have each demonstrated this: significant arrests and infrastructure seizures produce temporary degradation, not collapse. Within weeks, reconstituted operations are running again, often with improved operational security based on lessons learned from the disruption [INTERPOL, 2023–2024; Europol, 2021–2024].

Public reporting reflects this pattern consistently:

  • Acquisition through ads, impersonation, and compromised accounts
  • Engagement through call centres and messaging channels
  • Monetisation through payments, cards, and crypto
  • Laundering through mule networks and cross-border movement

[Europol, 2021–2024; INTERPOL, 2023–2024; FATF, 2020–2023]

This structure explains why scams scale as effectively as they do. Each layer is specialised, loosely connected, and replaceable without affecting the rest of the system.

No single actor needs to control the full chain, and no single point of disruption is enough to stop it.

Now compare that architecture to how detection and response is currently organised. Telcos can observe traffic anomalies – unusual call volumes from specific number ranges, bulk SMS origination patterns – but have no visibility into the financial outcomes those calls produce. Banks can see anomalous transaction patterns and authorised push payment (APP) fraud indicators, but cannot see the social engineering that preceded the payment. Platforms can identify impersonation content and scam lures at the acquisition stage, but lose all visibility the moment a victim moves to a separate communication channel. Government agencies are expected to synthesise all of this into a coherent response – typically with delayed, incomplete, and non-interoperable data inputs.

The result is a fundamental visibility asymmetry. The threat operates end-to-end. The defence operates in silos. One way to close that gap is to go further than passive detection – to actively engage the fraud network itself, observe how it operates in real time, and use those interactions to generate intelligence that passive monitoring cannot produce. That is the model Apate.ai is built around.

No one sees the system end to end.

That is the mismatch.

A modular, cross-border system on one side, and fragmented, sector-based visibility on the other.

Those gaps create the conditions where organised fraud can operate and scale.

 

Payments intelligence is the cross-border join key

Every scam, regardless of typology, ends the same way. The money has to move. And that movement – from victim account to first-hop mule, through layering transactions, to final extraction – creates a chain of artefacts that, if captured and shared in near real-time, provides the most reliable basis for cross-border linkage and network attribution.

FATF has repeatedly identified mule account networks as the connective tissue of cross-border fraud laundering – the mechanism through which proceeds are layered across jurisdictions before extraction [FATF, 2020–2023]. Europol and INTERPOL financial intelligence teams focus on payment flows specifically because they lead upward through the network toward organisers and financiers, not just the operators executing individual scams [Europol, 2021–2024; INTERPOL, 2023–2024]. A seized call centre reveals operators. Following the money reveals the infrastructure that will be used to spin up the next one.

In data architecture terms, payments intelligence functions as the foreign key that links otherwise disconnected fraud datasets. A beneficiary account number seen in an Australian APP fraud case may match an account flagged by a UK institution in an investment scam, a number cluster flagged by a Singaporean telco for unusual outbound call volumes, and a wallet address appearing in a crypto fraud complaint filed in Canada. Without a shared, structured mechanism to resolve these references against each other in near real-time, each institution responds to a fragment. With one, the network becomes visible.
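The foreign-key analogy above can be sketched directly: group artefact observations from independent reporters by identifier, and treat any identifier seen by two or more institutions as a candidate cross-border link. All institution names, identifiers, and records below are invented for illustration, not drawn from real data:

```python
from collections import defaultdict

# Hypothetical artefact observations from four independent reporters.
# Field names and values are illustrative only.
observations = [
    {"reporter": "AU-bank",   "artefact": "acct:AU-000123",    "context": "APP fraud"},
    {"reporter": "UK-bank",   "artefact": "acct:AU-000123",    "context": "investment scam"},
    {"reporter": "SG-telco",  "artefact": "tel:+65-5550-0001", "context": "bulk outbound calls"},
    {"reporter": "CA-agency", "artefact": "wallet:bc1qexample","context": "crypto fraud complaint"},
    {"reporter": "SG-telco",  "artefact": "acct:AU-000123",    "context": "linked beneficiary"},
]

# Group by artefact: any identifier reported by 2+ independent
# institutions is a candidate network link worth escalating.
by_artefact = defaultdict(set)
for obs in observations:
    by_artefact[obs["artefact"]].add(obs["reporter"])

cross_border_links = {a: sorted(r) for a, r in by_artefact.items() if len(r) >= 2}
print(cross_border_links)
# {'acct:AU-000123': ['AU-bank', 'SG-telco', 'UK-bank']}
```

Each institution holds only its own rows; the network only becomes visible once the rows are resolved against each other on the shared identifier.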

The signals that matter most for cross-border linkage are not exotic.

They are the artefacts that every financial institution already generates, but rarely shares in structured form at speed:

  • Beneficiary account identifiers and receiving institution BICs – the first-hop destination of fraud proceeds
  • Behavioural indicators of mule activity – rapid pass-through velocity, unusual account age-to-activity ratios, structured withdrawal patterns
  • Recurring account clusters appearing independently across multiple institutions – the fingerprint of shared mule infrastructure
  • Crypto wallet addresses and fiat off-ramp touchpoints – particularly exchange deposit addresses and OTC desk identifiers used for final extraction
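One of the behavioural indicators listed above, rapid pass-through velocity, can be sketched as a simple rule over an account's inflows and outflows. The thresholds here (a six-hour window, an 80% pass-through ratio) are invented for demonstration, not industry standards:

```python
from datetime import datetime, timedelta

# Illustrative pass-through check: flag an account when most inbound
# value exits again within a short window. Thresholds are hypothetical.
PASS_THROUGH_WINDOW = timedelta(hours=6)
PASS_THROUGH_RATIO = 0.80

def is_rapid_pass_through(inflows, outflows):
    """inflows/outflows: lists of (timestamp, amount) tuples."""
    total_in = sum(amount for _, amount in inflows)
    if total_in == 0:
        return False
    passed_through = 0.0
    for t_in, amount in inflows:
        # Value that leaves the account within the window after this inflow.
        exited = sum(a for t_out, a in outflows
                     if t_in <= t_out <= t_in + PASS_THROUGH_WINDOW)
        passed_through += min(amount, exited)
    return passed_through / total_in >= PASS_THROUGH_RATIO

t0 = datetime(2024, 1, 1, 9, 0)
inflows = [(t0, 9500.0)]
outflows = [(t0 + timedelta(minutes=45), 4700.0), (t0 + timedelta(hours=2), 4600.0)]
print(is_rapid_pass_through(inflows, outflows))  # True: ~98% left within 6 hours
```

A production detector would be far richer, but even this toy rule shows why the signal is cheap to compute locally yet only decisive when correlated with the same pattern seen elsewhere.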

The constraint is not data availability. Every major financial institution is generating these signals continuously. The constraint is signal isolation – the absence of structured, privacy-preserving, legally governed mechanisms to resolve these artefacts across institutional and national boundaries at the speed required to matter.

In practice, this means institutions operating in the same network – seeing the same mule accounts, the same originating phone numbers, the same scam script patterns – each respond in isolation. Freeze decisions are made without knowledge of parallel action elsewhere. Warnings are not propagated. The network absorbs the partial disruption and continues. Structured, lawful exchange is what turns isolated observations into network-level disruption.

What “good” global intelligence sharing looks like

Policy is moving in the right direction. The UK’s Online Safety Act, Australia’s Scams Prevention Framework, and the EU’s Payment Services Directive 3 each introduce elements of cross-sector data sharing obligation. The UN Global Fraud Summit produced commitments to accelerate interagency coordination. But regulatory intent and operational capability are not the same thing. Threat actors operate on a timeline measured in hours. Policy implementation cycles are measured in years. The gap between them is where fraud scales.  

A workable model relies on clarity and discipline, not perfect alignment.

Five principles matter.

1. Share artefacts, not narratives

Intelligence exchange fails when it defaults to narrative. Case summaries, incident reports, and situation briefings are useful for institutional awareness. They are not useful for automated triage or real-time control. A workable exchange model centres on structured artefacts: account identifiers, phone numbers, domain names, wallet addresses, scam script fingerprints – each with mandatory metadata fields including confidence score, source category, first-seen and last-seen timestamps, and jurisdiction of origin. Artefacts without provenance cannot be trusted. Artefacts without confidence scores cannot be weighted. Both are required for downstream automation.
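The artefact-plus-metadata shape described above can be made concrete. No existing standard defines this exact schema; the field names below are illustrative assumptions:

```python
from dataclasses import dataclass, asdict
import json

# Sketch of a structured fraud artefact with the mandatory metadata
# fields named in the text. Field names are hypothetical.
@dataclass
class FraudArtefact:
    artefact_type: str    # e.g. "account", "phone", "domain", "wallet"
    value: str
    confidence: float     # 0.0-1.0, required for downstream weighting
    source_category: str  # e.g. "bank", "telco", "platform"
    first_seen: str       # ISO 8601 timestamps
    last_seen: str
    jurisdiction: str     # ISO 3166-1 alpha-2 country code

    def validate(self):
        # Artefacts without confidence cannot be weighted;
        # artefacts without provenance cannot be trusted.
        assert 0.0 <= self.confidence <= 1.0, "confidence score required"
        assert self.source_category and self.jurisdiction, "provenance required"

artefact = FraudArtefact("phone", "+61255501234", 0.92, "telco",
                         "2024-05-01T03:12:00Z", "2024-05-01T09:40:00Z", "AU")
artefact.validate()
print(json.dumps(asdict(artefact)))  # machine-readable, ready for automated ingestion
```

The point is not this particular layout but the contract: every artefact carries enough structure that a receiving system can weight it, trace it, and act on it without human triage.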

2. Separate prevention from evidence

One of the most persistent blockers to intelligence sharing is the conflation of prevention data with evidence. Law enforcement agencies and financial intelligence units often apply evidence-chain standards – admissibility, chain of custody, source protection – to data that is being requested for prevention purposes, not prosecution. These are legitimate concerns for evidence. They are actively counterproductive for prevention. A separate, clearly scoped prevention data tier – designed for speed and interoperability, with appropriate use restrictions – removes this bottleneck without compromising evidentiary integrity.

3. Design for privacy and sovereignty

Centralised pooling of fraud intelligence across borders creates immediate legal and political friction – data sovereignty concerns, GDPR and equivalent frameworks, varying national definitions of what constitutes personal data in a financial context. Federated exchange architectures sidestep much of this by keeping data resident within national or institutional boundaries, and exchanging derived signals or hashed identifiers rather than raw records [EDPB, 2023–2024]. Privacy-preserving record linkage (PPRL) techniques – where account identifiers are hashed before sharing, allowing cross-institutional matching without exposing raw PII – are mature enough for production deployment and provide a practical path to legally compliant cross-border correlation.
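The hashed-identifier matching described above can be illustrated with a keyed HMAC: each institution blinds its identifiers with a network-agreed secret and exchanges only the digests, so matches surface without raw PII ever leaving either party. Real PPRL deployments use stronger protocols (salting strategies, Bloom-filter encodings, trusted linkage units); this is a minimal sketch under those simplifying assumptions:

```python
import hmac
import hashlib

# Hypothetical shared secret, distributed out of band to participants.
SHARED_KEY = b"network-agreed-secret"

def blind(identifier: str) -> str:
    # Keyed hash: without SHARED_KEY, digests cannot be brute-forced
    # from the (low-entropy) account-number space by an outsider.
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Each institution blinds its own records locally...
bank_a = {blind(a) for a in ["AU-000123", "AU-000456"]}
bank_b = {blind(a) for a in ["AU-000123", "GB-777000"]}

# ...and only the digest sets are exchanged and intersected.
shared = bank_a & bank_b
print(len(shared))  # 1: one mule account is common to both institutions
```

Matching digests reveal that both institutions have seen the same artefact; nothing else about either dataset is exposed.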

4. Standardise formats

The cyber threat intelligence community converged on STIX/TAXII as a shared schema for structured threat indicator exchange. Fraud intelligence lacks an equivalent standard with comparable adoption. ACAMS, FinCEN’s 314(b) programme, and various bilateral sharing arrangements each use different data models, different field definitions, and different taxonomies for scam typology classification. A minimum viable schema for scam artefact exchange – covering identifier type, confidence tier, scam typology tag, geographic scope, and provenance fields – would allow automated ingestion and correlation across platforms. Without it, every bilateral sharing agreement requires bespoke integration work that slows adoption and limits scale.

5. Measure time-to-warn

Time-to-warn – the elapsed time between a fraud signal being generated at one institution and a downstream warning or control action being triggered at another – is currently not systematically measured anywhere. This is a critical gap. If the average mule account lifetime is 72 hours and the average time-to-warn across a sharing network is 96 hours, the network is structurally incapable of preventing harm. Treating time-to-warn as a first-class operational metric, with SLAs attached, changes the incentive structure for participating institutions and creates accountability for the speed of the overall system, not just the quality of individual contributions.
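The time-to-warn check described above is trivially computable once the two timestamps are captured, which is precisely why its absence is a governance failure rather than a technical one. A minimal sketch, using the illustrative 72-hour lifetime and 96-hour figures from the text:

```python
from datetime import datetime

# Time-to-warn: elapsed time from signal generation at the detecting
# institution to a control action downstream. Timestamps are invented.
events = {
    "signal_generated": datetime(2024, 3, 1, 10, 0),
    "control_triggered": datetime(2024, 3, 5, 10, 0),
}
mule_account_lifetime_hours = 72  # illustrative average from the text

ttw_hours = (events["control_triggered"]
             - events["signal_generated"]).total_seconds() / 3600
print(f"time-to-warn: {ttw_hours:.0f}h")  # 96h
print("structurally too slow" if ttw_hours > mule_account_lifetime_hours
      else "within disruption window")
```

Attaching an SLA to this number per sharing network is what converts "we shared the signal" into "we shared the signal in time to matter".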

This is the shift. From informal sharing to engineered intelligence exchange. From reactive reporting to active disruption. From observing the network to engaging it directly – which is precisely where Apate.ai operates.

The system we need is taking shape, but intelligence sharing must accelerate

The UN Global Fraud Summit, INTERPOL’s operational programmes, and FATF’s typologies work all point in the same direction. The policy consensus exists. The operational will is building. What is lagging is the technical and governance infrastructure to convert that consensus into systems that actually change outcomes at scale.

The analysis is thorough. The typologies are well-documented. The regulatory frameworks are improving. What is missing is the layer between policy intent and operational reality: the APIs, the shared schemas, the federated matching infrastructure, the governance frameworks that allow institutions to act on each other’s signals in near real-time without violating privacy law or sovereignty constraints. That is an engineering and governance problem as much as a policy problem, and it requires the same rigour applied to both.

A credible response requires:

  • Sub-minute signal propagation from point of detection to downstream control action – not hours, not days
  • Structured, machine-readable formats with standardised schemas that allow automated ingestion and correlation across institutional and jurisdictional boundaries
  • Legal governance frameworks that separate prevention-tier data sharing from evidence-tier obligations, making fast exchange legally viable
  • Cross-sector coordination architecture that mirrors the modular structure of fraud networks – connecting telco, financial, platform, and government signals into a unified operational picture

The UN Global Fraud Summit was an important moment. The commitments made there matter. But commitments age. The infrastructure to operationalise them does not build itself.  

Until it exists – until there are systems that can move a fraud signal from detection at one institution to a control action at another in under a minute, across borders, at scale – the response will remain structurally slower than the threat. At Apate.ai, closing that gap is not a policy aspiration. It is an engineering problem we are actively solving – through AI systems that engage fraud actors directly, generate verified indicators at scale, and feed that intelligence into the workflows where it can actually stop money moving. Our pledges at the summit – 500,000 minutes of scammer engagement, $10 million in fraud prevented, 10,000 fraud indicators identified – are measurable commitments to that mission. We are on it.

Follow Apate.ai on LinkedIn for more insights on organised fraud and scam disruption.  
