
User Behavior Sequence Based Transaction Anti‑Fraud Detection

This talk explains how modeling user behavior sequences with supervised and unsupervised deep learning (both end-to-end and two-stage architectures) improves transaction fraud detection by surfacing the distinct patterns of account-takeover and stolen-card activity, and it outlines the engineering pipeline for deploying these models.

DataFunTalk

The talk introduces transaction anti-fraud detection by first reviewing traditional fraud pipelines, which rely on structured data, manual feature engineering, and classic models such as logistic regression (LR) and gradient-boosted decision trees (GBDT).

It then highlights the limitations of structured data alone and motivates the use of user behavior sequences: high-dimensional, unstructured signals collected from browsing, searching, and purchasing actions.

Two main fraud types are described: (1) Account takeover, where attackers quickly search, view, and purchase high‑value items; (2) Stolen financial cards, where thieves bind stolen cards and make expensive purchases after brief browsing.

Statistical analysis shows that normal users browse more pages, spend longer per session, and have higher view‑item counts than fraudulent users, demonstrating the discriminative power of behavior data.

The modeling part covers:

Supervised models: an end‑to‑end Deep & Wide architecture that embeds each behavior sequence, processes them with LSTM, CNN, and attention, then concatenates with traditional wide features for classification; a two‑stage approach that first learns behavior embeddings with deep models and feeds them into downstream models.

Unsupervised models: bi‑LSTM encoders for event and time sequences, attention‑fused embeddings, next‑event prediction to learn behavior vectors, followed by HDBSCAN clustering to identify risky clusters, with rule‑based explanations derived via decision‑tree analysis.
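The Deep & Wide fusion described in the supervised item above can be sketched minimally in numpy: attention-pool the per-event behavior embeddings into one deep vector, concatenate it with the wide (structured) features, and apply a logistic output. All shapes, weights, and function names here are illustrative assumptions, not the talk's actual implementation (which uses LSTM/CNN encoders before the attention step).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(seq_emb, w_attn):
    """Score each behavior embedding against an attention vector,
    then return the weighted sum: shape (T, D) -> (D,)."""
    weights = softmax(seq_emb @ w_attn)   # (T,) attention weights, sum to 1
    return weights @ seq_emb              # (D,) pooled deep representation

def deep_and_wide_score(seq_emb, wide_feats, w_attn, w_out, b_out):
    """Concatenate pooled deep embedding with wide features; logistic output."""
    deep = attention_pool(seq_emb, w_attn)
    fused = np.concatenate([deep, wide_feats])
    return 1.0 / (1.0 + np.exp(-(fused @ w_out + b_out)))

rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 16))    # 8 behavior events, 16-dim embeddings
wide = rng.normal(size=4)         # 4 structured (wide) features
score = deep_and_wide_score(seq, wide, rng.normal(size=16),
                            rng.normal(size=20), 0.0)
```

In the real architecture the sequence would first pass through LSTM/CNN layers and the whole network would be trained end to end; this sketch only shows the fusion and scoring step.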
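The risky-cluster identification in the unsupervised item above can be sketched as follows: given behavior vectors already assigned to clusters (HDBSCAN in the talk; precomputed labels here, with -1 marking noise as HDBSCAN does) and a small set of confirmed fraud labels, flag clusters whose fraud concentration exceeds a threshold. The function name, thresholds, and toy data are illustrative assumptions.

```python
import numpy as np

def risky_clusters(cluster_ids, is_fraud, min_rate=0.5, min_size=3):
    """Flag clusters whose confirmed-fraud rate exceeds min_rate.
    cluster_ids: per-sample cluster label (-1 = noise, HDBSCAN convention).
    is_fraud:    per-sample boolean, confirmed fraud cases."""
    risky = set()
    for c in set(cluster_ids.tolist()):
        if c == -1:                       # skip noise points
            continue
        mask = cluster_ids == c
        if mask.sum() >= min_size and is_fraud[mask].mean() >= min_rate:
            risky.add(c)
    return risky

cluster_ids = np.array([0, 0, 0, 1, 1, 1, 1, -1])
is_fraud    = np.array([1, 1, 0, 0, 0, 0, 1, 1], dtype=bool)
print(risky_clusters(cluster_ids, is_fraud))   # cluster 0: 2/3 fraud -> risky
```

The rule-based explanations mentioned in the talk would then be derived per risky cluster, e.g. by fitting a shallow decision tree that separates the cluster's members from the rest.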

Training uses a sliding window (e.g., 30 days for training, day 31 for testing) and GPU‑accelerated FAISS for efficient nearest‑neighbor search.
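That sliding-window split can be sketched as a small date utility, assuming daily-partitioned data; the function name and window length default are illustrative:

```python
from datetime import date, timedelta

def sliding_window_split(start, train_days=30):
    """Return (train_dates, test_date): train on `train_days` consecutive
    days, test on the day immediately after the window."""
    train = [start + timedelta(days=i) for i in range(train_days)]
    test = start + timedelta(days=train_days)
    return train, test

train, test = sliding_window_split(date(2023, 1, 1))
print(len(train), test)   # 30 training days, tested on day 31 (2023-01-31)
```

Sliding the `start` forward by one day each time yields the next train/test pair, so the model is always evaluated on data strictly after its training window.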

Deployment consists of an offline module that extracts behavior embeddings and populates risky clusters, and an online pipeline that encodes real‑time sequences, computes similarity to cluster centroids, and applies rule‑engine decisions to flag transactions.
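The online scoring step could be sketched like this: take the encoded real-time behavior vector (precomputed here), find the nearest risky-cluster centroid by cosine similarity (GPU FAISS in production; brute-force numpy here), and flag the transaction if the best match clears a rule-engine threshold. The threshold, shapes, and names are illustrative assumptions.

```python
import numpy as np

def flag_transaction(behavior_vec, centroids, threshold=0.9):
    """Cosine similarity of the live behavior vector to each risky-cluster
    centroid; flag if the best match meets the rule threshold.
    Returns (flagged, best_cluster_index, best_similarity)."""
    v = behavior_vec / np.linalg.norm(behavior_vec)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = c @ v                       # cosine similarity to each centroid
    best = int(np.argmax(sims))
    return bool(sims[best] >= threshold), best, float(sims[best])

centroids = np.array([[1.0, 0.0],      # toy 2-D risky-cluster centroids
                      [0.0, 1.0]])
flagged, idx, sim = flag_transaction(np.array([0.95, 0.1]), centroids)
```

FAISS would replace the brute-force `c @ v` with an approximate index when the number of centroids is large; the rule-engine decision would typically combine this similarity with other signals rather than acting on it alone.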

The session concludes with a Q&A covering heterogeneous user behavior handling, embedding generation, differences between supervised and unsupervised pipelines, feature granularity, and practical advice for early‑stage fraud systems.

Tags: fraud detection, user behavior, deep learning, embedding, supervised learning, unsupervised clustering
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
