Anti‑Fraud Strategies and Practices for the Jimu Social App
In this article, Xu Ming, head of risk control at Jimu, shares insights and practical experience in combating black-gray-market fraud on the Jimu app, covering the platform's main risk points, common challenges, the overall anti-fraud strategy, detailed operational tactics, and reflections on future improvements.
Guest: Xu Ming, Head of Risk Control at Jimu
Editor: Frank
Platform: DataFunTalk
01
About Jimu
Jimu is a youth‑focused interest‑based social platform where Gen‑Z users can quickly find like‑minded friends for activities such as rap music, bungee jumping, skydiving, and skiing.
The platform’s high‑value user base and diverse interests attract malicious actors.
Major fraud categories inside the app:
Naked‑chat extortion
"Pig‑butchering" scams – investment‑type and emotional‑type
Red‑packet begging – fabricated stories (e.g., needing a ride, medicine, food delivery) and direct requests for red packets
02
Common Problems and Overall Anti‑Fraud Thinking
1. Common Problems
Key business questions to consider:
How many types of scams exist in the business and what are their proportions?
What are the main malicious tactics used by the black market?
What are the costs and revenues for the black‑market operators?
These questions help align risk control with business weak points and understand attacker incentives.
Summarized issues:
Degree of importance – because fraud often plays out on third-party chat tools rather than on the platform itself, the attention the business pays to it varies; the risk-control team must negotiate the right level of involvement with the product team.
External threat intelligence – many platforms lack real‑time intel, leading to delayed response.
Prioritization – decide which nodes to attack and which punitive measures minimize business impact.
Post‑ban appeals – large numbers of banned accounts appeal repeatedly, creating workload challenges.
2. Overall Anti‑Fraud Thinking and Results
The method can be summarized in four phrases: "pre-knowledge attack, post-thinking defense, resource strike, strong education." Effort is allocated roughly 40% to the pre-knowledge attack, 25% to resource strikes, and 25% to strong education, with the remainder going to post-thinking defense.
Recent monitoring data show a marked drop in complaint volume at every stage, supporting the effectiveness of the strategy.
03
Practical Experience
1. Pre‑knowledge Attack
Two main supply‑chain models for fraudulent accounts:
Account studios or crowdsourced sellers obtain accounts and sell them to middlemen or overseas gangs.
Overseas gangs create accounts themselves, eliminating the middle‑man step.
Account creation methods include protocol accounts, soft‑modifications, hard‑modifications, and crowdsourced accounts. Each method varies in cost and difficulty to block.
After creation, accounts may be kept, sold, or used for direct scams (direct login, secondary verification code capture, data‑packet buying/selling, or "running fans" where contact info is exchanged between scammers and fan merchants).
2. Post‑thinking Defense
Risk perception: build alarm mechanisms and an effective intelligence system. Alarms enable minute‑level response to sudden attacks, reducing user exposure.
Risk blocking: target core resources (device fingerprints, IP, network environment, user‑generated content such as avatars, signatures, posts) and implement pre‑emptive blocks during registration, login, or matching stages.
Key considerations for alarms: avoid overly granular or excessive alerts that overwhelm staff, and keep rules simple for maintainability.
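As one concrete shape such an alarm could take, here is a minimal sketch (not Jimu's actual system) of a sliding-window counter that fires when event volume within a time window crosses a threshold; the class name, window, and threshold are illustrative assumptions.

```python
from collections import deque


class SpikeAlarm:
    """Fire when the event count inside a sliding time window crosses a threshold.

    Deliberately simple: one window, one threshold, no per-dimension state,
    in line with the advice to keep alarm rules easy to maintain.
    """

    def __init__(self, window_seconds, threshold):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps of recent events

    def record(self, now):
        """Record one event at timestamp `now`; return True if the alarm fires."""
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold
```

Fed one call per monitored event (e.g., a registration attempt), a 60-second window with a threshold tuned to several times the normal baseline would give the minute-level burst detection described above.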
3. Resource Strike and Pre‑emptive Blocking
Monitoring points include registration pre‑checks, login, matching, device type changes, and network environment variations. Abnormal query spikes often indicate black‑market probing.
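To make the pre-emptive blocking idea concrete, the sketch below shows a hypothetical rule check run during registration. The blocklist contents, IP prefixes, and keyword list are placeholders for illustration, not data from the talk.

```python
# Hypothetical pre-emptive check at registration: "block" hard-stops the
# request, "review" routes it to manual inspection, "pass" lets it through.

BANNED_DEVICE_FPS = {"fp-a1b2c3"}        # device fingerprints tied to prior fraud
HIGH_RISK_IP_PREFIXES = ("203.0.113.",)  # example prefix; a real system uses intel feeds
SCAM_KEYWORDS = ("guaranteed returns", "add me on another app")  # placeholder phrases


def registration_risk_check(device_fp, ip, profile_text):
    """Return a disposition for one registration attempt."""
    if device_fp in BANNED_DEVICE_FPS:
        return "block"    # known-bad device: stop before the account exists
    if ip.startswith(HIGH_RISK_IP_PREFIXES):
        return "review"   # risky network environment: send to human review
    if any(kw in profile_text.lower() for kw in SCAM_KEYWORDS):
        return "review"   # scam-flavored user-generated content
    return "pass"
```

The design choice mirrors the text: hard blocks are reserved for high-confidence signals (device fingerprints), while softer signals (network environment, content) degrade to review so legitimate users are not turned away outright.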
Intelligence gathered from community groups (QQ, Telegram, forums) reveals attacker tactics, account pricing, and tool usage, informing strategy adjustments.
Handling user complaints: the team built a relationship-network query system that masks key data and grants query rights to senior support staff, reducing monthly ticket volume by 75%.
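The masking step could look like the following sketch; the field names and mask shape are assumptions for illustration, not the platform's actual schema.

```python
def mask_value(value, keep=3):
    """Keep the first and last `keep` characters, star out the middle."""
    if len(value) <= 2 * keep:
        return "*" * len(value)
    return value[:keep] + "*" * (len(value) - 2 * keep) + value[-keep:]


def mask_record(record, sensitive=("phone", "id_card", "device_fp")):
    """Copy a relationship-network record, masking sensitive fields so
    support staff can resolve tickets without seeing raw identifiers."""
    return {k: mask_value(v) if k in sensitive and isinstance(v, str) else v
            for k, v in record.items()}
```

Masking at read time rather than at storage time lets the same underlying data still power automated risk models while limiting what human reviewers see.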
4. Strong Education
Beyond technical measures, educate users: push weekly ecosystem reports, system messages about penalties, and embed anti‑fraud guides in onboarding. Quick complaint handling (average 10 minutes) improves user trust.
Surveys show high user recognition of these efforts and willingness to participate in ecosystem governance.
5. Reflection
Future considerations include cross‑platform data sharing of black‑market core data, pre‑warning mechanisms that respect privacy, and post‑incident support such as police‑enterprise cooperation and dedicated 1‑on‑1 customer service.
Thank you for listening.
About Us
DataFunTalk focuses on big‑data and AI technology applications, hosting over 100 offline and online events since 2017, with nearly 1,000 experts and scholars, 400+ original articles, and a large, engaged audience.