Generative Pretraining at Scale: Transformer-Based Encoding of Transactional Behavior for Fraud Detection

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Ze Yu Zhao, Zheng Zhu, Guilin Li, Wenhan Wang, Bo Wang

| Summary:

In this work, we introduce an autoregressive model based on the Generative
Pre-trained Transformer (GPT) architecture, tailored for fraud detection in
payment systems. Our approach addresses token explosion and reconstructs
behavioral sequences, providing a nuanced understanding of transactional
behavior through temporal and contextual analysis. Through unsupervised
pretraining, the model learns strong feature representations without the need
for labeled data. In addition, we integrate a differential convolutional
approach to enhance anomaly detection, strengthening the security and efficacy
of one of the largest online payment merchants in China. The model's
scalability and adaptability promise broad applicability across transactional
contexts.
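The summary describes the approach only at a high level; as a rough illustration of the core idea, the sketch below shows a GPT-style autoregressive model pretrained to predict the next token in a sequence of discretized transaction events, so no fraud labels are needed at this stage. All class names, vocabulary sizes, and dimensions are illustrative assumptions, not the authors' implementation, and the differential convolutional component is omitted.

```python
# Minimal sketch (assumed, not the paper's code): a causal transformer over
# tokenized transaction events, pretrained with a next-token objective.
import torch
import torch.nn as nn

class TransactionGPT(nn.Module):
    """Autoregressive transformer over discretized transaction tokens."""
    def __init__(self, vocab_size=4096, d_model=256, n_heads=8,
                 n_layers=4, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) ids for bucketed transaction fields
        # (e.g., amount range, merchant category, time gap) -- an assumed scheme.
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: each event attends only to earlier events.
        mask = torch.triu(
            torch.ones(t, t, device=tokens.device, dtype=torch.bool), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # next-token logits

# Unsupervised pretraining step on a toy batch: shift inputs by one position
# and minimize cross-entropy against the actual next token.
model = TransactionGPT()
tokens = torch.randint(0, 4096, (8, 64))
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
```

After pretraining along these lines, the transformer's hidden states could serve as behavioral feature representations for a downstream fraud classifier or anomaly detector.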

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3