
Payment Processor Fraud in 2024: What the Data from 1 Billion Transactions Shows


Every year the attack patterns shift. Some changes are incremental — card testing operations getting more distributed, fraud ring infrastructure migrating to different residential proxy providers. Some are structural — new attack categories that weren't material the year before. 2024 had both types, and the data from the transactions InferX processed through the year makes the patterns clear enough to be worth documenting while the memory is fresh.

This is not a comprehensive industry report — we're describing patterns in transactions InferX scored, which represents a specific slice of the US fintech payment processor market. The patterns will differ for processors with different merchant mixes or geographic concentrations. With that caveat stated: here's what we saw.

Attack Type Distribution in 2024

Card testing remained the dominant fraud category by transaction volume, representing approximately 34% of fraud attempts in our 2024 data. This is consistent with the prior two years. What changed is the sophistication distribution within card testing: the percentage of card-testing operations using distributed residential proxy infrastructure (as opposed to simpler datacenter IP rotation) increased from roughly 45% in 2023 to approximately 68% in 2024. The simple attacks are being screened out before they reach our customers' endpoints; what arrives is predominantly the harder distributed variant.

CNP fraud on stolen card credentials (distinct from card testing — actual fraudulent purchases using validated stolen card data) represented 28% of fraud attempts. This category showed a 15% volume increase year-over-year, driven primarily by increased card data availability from several major data breach incidents in 2023 that produced large card dumps entering the fraud market in early-to-mid 2024.

Account takeover fraud (compromised cardholder credentials used to access existing legitimate accounts) represented 21% of fraud attempts, up from 17% in 2023. The increase reflects the improving quality of phishing infrastructure and the growing market for credential stuffing as a service. Account takeover is the fastest-growing fraud category in our network by percentage growth year-over-year.

Friendly fraud chargebacks (disputes filed on legitimate transactions) represented approximately 12% of total chargebacks, consistent with prior years but increasingly concentrated in specific merchant categories — subscription billing and digital goods accounted for 73% of the friendly fraud volume.

Synthetic identity and first-party fraud schemes represented the remaining roughly 5% of flagged fraud events by volume but disproportionate dollar value — the average loss per synthetic identity bust-out event is dramatically higher than a single card-fraud transaction. This category requires different detection methodology than the others and is tracked separately.

The November–December Spike: Sharper Than Expected

Seasonal fraud patterns are well-documented: fraud attempts increase during peak retail periods as transaction volume provides cover for fraudulent authorizations. In 2024, the November–December spike was sharper than our 2022 and 2023 data suggested. The peak fraud attempt rate (daily attempts as a percentage of total daily transactions) was 38% above the January–October 2024 baseline, compared to 22% above baseline in November–December 2023.

The primary driver was card-testing volume. October and November saw coordinated testing campaigns that appeared to be using fresh card dumps from late-Q3 2024 breach events — the BIN concentration patterns were consistent with specific issuer BIN ranges that also appeared in public breach notifications. The attacks were clearly timed for the high-transaction-volume holiday period, when fraudulent transactions are harder to distinguish from the general traffic surge and velocity-based rules have elevated false positive risk if thresholds aren't dynamically adjusted for volume.

Processors whose fraud thresholds are set as static rules rather than volume-normalized metrics were most exposed to this pattern. A threshold of "decline if more than 10 transactions per device per hour" that's appropriate for a Monday in February is much too permissive during peak holiday traffic, when 10 transactions per hour from the same device represents a small fraction of the overall elevated volume. 2024's holiday season reinforced the case for dynamic threshold adjustment tied to real-time transaction volume.
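One way to read that adjustment, following the post's observation that a static per-device count becomes effectively more permissive at peak volume, is to tighten the limit as network volume rises above baseline. Everything below (function name, inverse-linear scaling, the example volumes) is an illustrative assumption, not InferX's implementation:

```python
def volume_adjusted_threshold(base: int, baseline_vol: int, current_vol: int) -> int:
    """Tighten a per-device velocity limit as network volume rises,
    since a fixed count becomes an ever-smaller (and harder to spot)
    share of peak traffic.  Inverse-linear scaling is illustrative;
    a production rule would also cap how far the limit tightens to
    control false positives on legitimate heavy shoppers.
    """
    ratio = current_vol / baseline_vol
    return max(1, round(base / ratio))

# A limit of 10/device/hour at a 100k-tx/hour baseline tightens to 4
# when holiday traffic runs at 250k tx/hour.
holiday_limit = volume_adjusted_threshold(10, 100_000, 250_000)
```

The scaling policy itself is a knob: linear, square-root, or stepwise schedules all appear in practice, and the right choice depends on how legitimate per-device behavior shifts during peaks.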

Account Takeover: The Credential Stuffing Industrialization

The account takeover data from 2024 revealed something about the industrialization of credential stuffing that's worth flagging. In 2022 and 2023, credential stuffing attacks were distinguishable by their originating IP concentration — a stuffing run would produce a cluster of login attempts from a recognizable block of datacenter or VPN IPs. The pattern was detectable with IP reputation scoring.

In 2024, roughly 55% of the account takeover attempts we analyzed showed IP characteristics consistent with residential proxies rather than datacenter IPs. The credential stuffing operations have fully migrated to the same residential proxy infrastructure as card testing, making IP-based detection significantly less effective for both attack types simultaneously. The operational crossover — where the same infrastructure and operator networks are being used for both card testing and account takeover — is a relatively new pattern.

The behavioral biometrics data on these account takeover attempts is instructive. Automated credential stuffing sessions showed interaction patterns statistically distinguishable from legitimate login sessions even when device fingerprints and IP signals were masked. Keystroke timing regularity (coefficient of variation below 0.08) and form completion time variance (far lower than in human populations) remained detectable signals even as the other signals degraded. The behavioral layer is where detection held up when everything else was bypassed.
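As a concrete illustration of the keystroke-regularity signal: the coefficient of variation is just the standard deviation of inter-keystroke intervals divided by their mean, and the 0.08 cutoff below comes from the figure quoted above. This is a sketch of the statistic, not the production feature pipeline:

```python
import statistics

BOT_CV_THRESHOLD = 0.08  # cutoff quoted in the post's 2024 data

def keystroke_cv(intervals_ms: list[float]) -> float:
    """Coefficient of variation of inter-keystroke intervals.
    Scripted input tends toward near-uniform timing (CV close to 0);
    human typing is far more variable."""
    return statistics.stdev(intervals_ms) / statistics.mean(intervals_ms)

def looks_automated(intervals_ms: list[float],
                    threshold: float = BOT_CV_THRESHOLD) -> bool:
    # Flag sessions whose typing rhythm is implausibly regular.
    return keystroke_cv(intervals_ms) < threshold
```

In practice this would be one feature among many, since sophisticated bots can inject artificial jitter; the post's point is that in 2024 most stuffing tooling still did not.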

Geographic Distribution Shifts

The originating geography of fraud attempts (based on IP geolocation of the initiating session) showed a notable shift in 2024. Southeast Asian origin IPs increased from approximately 18% of fraud origins in 2023 to 27% in 2024. This is partly a real shift in where fraud operations are running and partly an artifact of the residential proxy migration — the residential proxy pools available in 2024 are disproportionately concentrated in specific Southeast Asian markets.

The implication for processors: using originating IP geography as a fraud feature requires recalibration when residential proxy infrastructure shifts the apparent geographic distribution of fraud attempts. A model trained on 2023 data where Southeast Asian IPs had certain fraud rate characteristics will have drifted by 2024 because the same IP ranges now represent different fraud and legitimate populations than they did when the model was trained. This is a concrete example of the adversarial concept drift that makes regular retraining necessary.
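One standard way to quantify this kind of feature drift is the population stability index (PSI) between the training-period and current distributions of a feature. The sketch below applies it to the post's Southeast-Asia origin shares as a coarse two-bin illustration; real monitoring would use finer geographic bins and track fraud and legitimate populations separately:

```python
import math

def psi(expected: dict, actual: dict, eps: float = 1e-6) -> float:
    """Population Stability Index over categorical bins of shares.
    Common rules of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major shift warranting retraining."""
    total = 0.0
    for b in set(expected) | set(actual):
        e = max(expected.get(b, 0.0), eps)  # floor avoids log(0)
        a = max(actual.get(b, 0.0), eps)
        total += (a - e) * math.log(a / e)
    return total

# Two-bin illustration using the post's fraud-origin shares:
# Southeast Asia 18% of origins in 2023 -> 27% in 2024.
drift = psi({"sea": 0.18, "other": 0.82},
            {"sea": 0.27, "other": 0.73})
```

A two-bin PSI understates the shift; with per-country bins and separate fraud/legitimate base rates, the same underlying migration shows up much more sharply, which is why the post argues for regular retraining rather than a one-off recalibration.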

Within the US, the fraud attempt distribution by state was fairly stable year-over-year, with California, Texas, Florida, and New York representing disproportionate shares of both transaction volume and fraud attempt volume — which is expected given their share of the US consumer population. The fraud attempt rate (attempts per 1,000 transactions) was highest in Florida and Nevada, consistent with patterns from prior years.

The Fastest-Growing Attack Vectors

By percentage growth year-over-year rather than absolute volume, the three fastest-growing fraud vectors in our 2024 data were: push payment fraud (authorized push payment scams where victims are socially engineered into initiating transfers), AI-generated synthetic identity applications (a new category we began tracking in Q2 2024), and real-time payment fraud (fraud on faster payment rails that don't allow the normal 1–3 day clearing window for fraud detection).

Push payment fraud grew 41% year-over-year by dollar value in our network. This fraud type is structurally difficult for payment processors to detect because the victim initiates the transaction voluntarily — from the processor's perspective, it looks like a legitimate authorized payment. Detection requires behavioral signals at the session level (was the customer under unusual time pressure? did they change a payment destination immediately before submitting?) and account-level monitoring (is this the first time this account has sent funds to this recipient? to this bank?). Pure transaction-level scoring misses most push payment fraud entirely.
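The account-level questions in the paragraph above (first recipient? first bank? anomalous amount?) can be sketched as a small per-account history structure. Field names and the anomaly multiple here are illustrative assumptions, not InferX's actual feature set:

```python
from dataclasses import dataclass, field

@dataclass
class AccountPayeeHistory:
    """Tracks which recipients and banks an account has paid before,
    to surface the new-payee signals that pure transaction-level
    scoring misses for authorized push payment fraud."""
    seen_recipients: set = field(default_factory=set)
    seen_banks: set = field(default_factory=set)

    def signals(self, recipient_id: str, bank_id: str,
                amount: float, typical_amount: float) -> dict:
        sig = {
            "new_recipient": recipient_id not in self.seen_recipients,
            "new_bank": bank_id not in self.seen_banks,
            # The 5x multiple is an illustrative placeholder.
            "amount_anomaly": amount > 5 * typical_amount,
        }
        self.seen_recipients.add(recipient_id)
        self.seen_banks.add(bank_id)
        return sig
```

These signals would feed a scorer alongside session-level features (time pressure, last-second destination changes); none of them is decisive alone, which is the structural difficulty the paragraph describes.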

AI-generated synthetic identity applications are a category we began seeing in Q2 2024. The identities use generated photos that pass standard liveness checks, SSN-name-DOB combinations that are internally consistent and pass bureau lookups, and fabricated document imagery that passes automated document authentication at rates that suggest purpose-built generation tools. The volume is still small relative to traditional synthetic identities, but the growth rate is rapid. This category will require new detection methods that go beyond document authentication and SSN verification as the generation tooling improves.

What the Detection Data Shows About ML vs. Rules

Processors in our network that ran InferX ML scoring alongside legacy rule-based systems (as a parallel track for comparison) showed an aggregate ML detection rate 23 percentage points higher than rule-based detection for card-testing fraud and 18 percentage points higher for CNP fraud on stolen cards. False positive rates for the ML system were 61% lower on the same transaction populations.

The cases where rule-based systems outperformed ML scoring were in specific, well-understood attack patterns where a single signal was definitively predictive: a specific known-bad IP block appearing in a card-testing run, or a specific BIN range from a known compromised issuer. Rules execute these exact-match lookups faster and with more precision than a general ML model. The architecture that maximizes detection efficiency is both: ML for the complex, distributed attack patterns and rules for the high-confidence exact-match signals. Neither alone is optimal.
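The two-layer architecture described above reduces to a simple dispatch: exact-match rules fire first on the high-confidence signals, and the ML score handles everything else. All names, thresholds, and the model interface below are illustrative assumptions:

```python
def score_transaction(txn: dict,
                      known_bad_ips: set,
                      compromised_bins: set,
                      ml_model) -> tuple[str, float]:
    """Hybrid rules-plus-ML dispatch.  Exact-match lookups are O(1)
    set membership tests and carry full confidence; the ML model
    (any callable returning P(fraud)) covers the distributed attack
    patterns rules can't express."""
    if txn["ip"] in known_bad_ips:
        return ("decline", 1.0)  # known-bad IP block from a testing run
    if txn["card_bin"] in compromised_bins:
        return ("decline", 1.0)  # BIN range from a known compromised issuer
    score = ml_model(txn)
    return ("decline" if score >= 0.9 else "approve", score)
```

The ordering matters for latency as well as precision: the set lookups cost nanoseconds, so the model is only invoked for traffic the rules can't settle.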

Projections for 2025

Based on the 2024 trend data, the attack vectors we're watching most closely in 2025 are account takeover through AI-assisted phishing (more convincing phishing infrastructure producing higher credential theft rates), real-time payment fraud as faster payment rails expand merchant and consumer adoption, and the continued evolution of AI-generated synthetic identities. All three categories favor attackers who have access to improving AI tooling and create detection challenges for systems built around static behavioral and biometric baselines.

The defensive investments with the highest expected return for 2025 are behavioral biometrics for account takeover prevention (the signal that held up most consistently against 2024's attack evolution), real-time payment monitoring with account relationship history (new payee detection, amount anomaly detection for authorized push payment fraud), and improved synthetic identity detection at account opening that goes beyond document authentication.

The trend from 2024 that is most concerning for the industry overall is the industrialization of residential proxy infrastructure for fraud operations. When the same infrastructure enables card testing, account takeover, and credential stuffing simultaneously, the traditional approach of building separate detection rules for each attack type becomes less effective. The defenders who will fare better in 2025 are those with detection infrastructure that can surface infrastructure reuse across attack types — the network graph approach applied not just to entity relationships within a single fraud type but across attack categories that share operational infrastructure.
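A minimal version of that cross-category view is an inverted index from infrastructure identifiers (proxy IPs, device hashes) to the attack types observed using them; a production system would be a full entity-relationship graph, but the core query is the same. The event shape and names here are illustrative:

```python
from collections import defaultdict

def shared_infrastructure(events):
    """Given (attack_type, infra_id) pairs, return the infrastructure
    nodes reused across more than one attack category -- the reuse
    signal that per-attack-type rules never surface."""
    by_infra = defaultdict(set)
    for attack_type, infra_id in events:
        by_infra[infra_id].add(attack_type)
    return {infra: types for infra, types in by_infra.items()
            if len(types) > 1}

events = [
    ("card_testing", "proxy-ip-1"),
    ("account_takeover", "proxy-ip-1"),  # same node, different attack
    ("card_testing", "proxy-ip-2"),
]
crossover = shared_infrastructure(events)
```

Here `proxy-ip-1` would be flagged because it appears in both card-testing and account-takeover events, exactly the operational crossover the 2024 data showed.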