Low‑Power ADAS on Didi’s JueShi Devices Reduces Traffic Accidents
This article describes how Didi’s vehicle‑vision team built an ultra‑low‑power ADAS solution on the JueShi dash‑cam platform, using lightweight detection models, temporal fusion, camera‑calibration techniques and data‑driven optimization to cut rear‑end collision rates by over 11% and improve overall traffic safety.
Road traffic accidents cause massive loss of life and property each year. As a deep‑tech player in transportation, Didi continuously explores ways to reduce these incidents. This paper explains how the vehicle‑vision team applied ultra‑low‑power ADAS technology on JueShi devices to lower accident rates and protect drivers and passengers.
Application Background – Analysis of historical accidents shows that rear‑end collisions account for 60% of serious incidents, with 80% of those caused by following too closely. Studies from AXA and the US IIHS indicate that forward‑collision warning (FCW) can reduce crashes by up to 69%. The JueShi ADAS fuses front/rear cameras, an IMU, and GPS, and runs driver‑monitoring (DMS), collision‑detection, and driver‑behavior algorithms to identify risky situations and issue real‑time alerts.
Effectiveness after Deployment – An AB test over one month with hundreds of thousands of devices covering billions of kilometers demonstrated an 11.4% reduction in rear‑end accident rate and a 9.1% overall accident reduction, with a 16.7% drop during peak hours.
Solution Overview – JueShi devices use an MTK8665 processor (quad‑core Cortex‑A53, up to 1.5 GHz), with only 5% of the compute budget allocated to ADAS. To meet real‑time constraints, the team designed a lightweight detection pipeline and optimized it with the in‑house IFX acceleration framework, keeping CPU usage below the 5% target.
1. Ultra‑Low‑Power Front‑Car Detection – Instead of heavyweight cloud models (e.g., TridentNet on a GPU), the team adopted a single‑stage lightweight backbone (ShuffleNetV2 + SSD) and introduced a custom anchor‑regression detector called ZoomNet. Anchors are placed every 120 px in a 960 × 960 crop; the model predicts offsets (Δ) relative to these anchors, and the boxes decoded from nearby anchors are averaged to obtain the final bounding box. This approach achieves real‑time detection on the A53 cores.
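The anchor‑grid decoding described above can be sketched as follows. This is a minimal illustration, not Didi's ZoomNet: the offset parameterization (center deltas plus log‑scale width/height) and the score‑threshold fusion are assumptions, and `build_anchors`, `decode`, and `fuse` are hypothetical helper names.

```python
import numpy as np

def build_anchors(img_size=960, stride=120):
    """Grid of anchor centers every `stride` px in an img_size x img_size crop."""
    coords = np.arange(stride // 2, img_size, stride, dtype=np.float32)
    cx, cy = np.meshgrid(coords, coords)
    return np.stack([cx.ravel(), cy.ravel()], axis=1)  # shape (N, 2)

def decode(anchors, offsets, base=120.0):
    """Each anchor predicts (dx, dy, log_w, log_h); boxes are anchor + offset.
    Returns (N, 4) boxes as [x1, y1, x2, y2]."""
    cx = anchors[:, 0] + offsets[:, 0] * base
    cy = anchors[:, 1] + offsets[:, 1] * base
    w = np.exp(offsets[:, 2]) * base
    h = np.exp(offsets[:, 3]) * base
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

def fuse(boxes, scores, thresh=0.5):
    """Average the decoded boxes of confident anchors into one final box."""
    keep = scores > thresh
    return boxes[keep].mean(axis=0) if keep.any() else None
```

Averaging several anchor predictions, rather than picking a single best one, trades a little localization sharpness for robustness on a weak backbone.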
2. Stability Enhancement – Lightweight models suffer from missed small targets, bounding‑box jitter, and false detections. The solution adds a temporal‑fusion module: two deep networks produce coarse and fine detections, while a Kalman filter predicts the vehicle's position in the next frame. This reduces bounding‑box jitter by 23.3% and mitigates target loss.
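A constant‑velocity Kalman filter over the box center is one standard way to smooth jitter and bridge frames where the detector loses the target. The sketch below assumes that formulation; the article does not specify the state vector or noise settings, so the 4‑state model and the `q`/`r` values are illustrative.

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over a bounding-box center (x, y).
    State: [x, y, vx, vy], one step per frame (dt = 1)."""
    def __init__(self, x, y, q=1.0, r=10.0):
        self.s = np.array([x, y, 0.0, 0.0])   # initial state
        self.P = np.eye(4) * 100.0            # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0     # position += velocity each frame
        self.H = np.eye(2, 4)                 # we only observe the center
        self.Q = np.eye(4) * q                # process noise
        self.R = np.eye(2) * r                # measurement noise

    def predict(self):
        """Propagate one frame; the prediction stands in when detection fails."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, x, y):
        """Fuse a detected center into the track, returning the smoothed center."""
        z = np.array([x, y])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ (z - self.H @ self.s)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

When the detector misses a frame, `predict()` alone keeps the track alive; when a detection arrives, `update()` pulls the track toward it without passing the raw jitter through.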
3. Camera‑Installation Calibration – Varying mounting angles make distance estimation difficult. Two methods are proposed: (a) a deep‑learning model regresses the vanishing point directly from images, enabling pitch‑angle calibration; (b) statistical estimation of the horizontal vanishing point by aggregating long‑term model outputs, providing yaw‑angle references.
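Once the vanishing point is known, recovering the mounting angles under a pinhole camera model is a one‑line trigonometric step. The sign conventions below (a vanishing point above the principal point means the camera pitches down) and the function name are assumptions for illustration.

```python
import math

def pitch_yaw_from_vp(vp_x, vp_y, cx, cy, fx, fy):
    """Camera pitch/yaw (radians) from the road vanishing point.
    For a camera looking down a straight road, the lane-line vanishing point
    sits on the horizon; its offset from the principal point (cx, cy),
    scaled by the focal lengths fx, fy (in pixels), gives the mounting angles."""
    pitch = math.atan2(cy - vp_y, fy)  # VP above image center => pitched down
    yaw = math.atan2(vp_x - cx, fx)    # VP right of center => yawed right
    return pitch, yaw
```

A perfectly aligned camera puts the vanishing point exactly at the principal point, yielding zero pitch and yaw; any residual offset is the calibration correction applied before distance estimation.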
4. Alert Timeliness – Time‑to‑Collision (TTC) is computed as distance divided by relative speed, with a typical threshold of 2.7 s. To improve TTC alerts, a brake‑light classification model detects sudden braking of the lead vehicle, triggering the BLW (Brake‑Light Warning) function, especially effective at high speeds.
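The TTC rule as stated reduces to a small amount of arithmetic. A sketch, using the 2.7 s threshold from the text; the function names and the guard for a non‑closing gap are assumptions.

```python
def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps):
    """TTC = gap distance / closing speed. Returns None when not closing."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return None  # lead vehicle is pulling away or matching speed
    return distance_m / closing

def fcw_should_alert(distance_m, ego_speed_mps, lead_speed_mps, threshold_s=2.7):
    """Forward-collision warning fires when TTC drops to the threshold."""
    ttc = time_to_collision(distance_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc <= threshold_s
```

For example, a 20 m gap closing at 10 m/s gives a TTC of 2.0 s, which is under the 2.7 s threshold and triggers an alert. The brake‑light model complements this: hard braking ahead shrinks TTC faster than the distance estimate alone reveals, so BLW can fire earlier at high speed.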
5. Long‑Tail Data Mining – The AIoT platform streams collision, driver‑behavior, and DMS data back to the cloud. By mining pre‑alert and post‑alert scenarios, the team identifies missed detections and false alarms, continuously retraining models via OTA updates. An automated pipeline expands the training set with hard examples and evaluates impact on safety metrics.
Conclusion – The JueShi ADAS system demonstrates that ultra‑low‑power edge AI can substantially reduce traffic accidents. Future work includes expanding model coverage for long‑tail scenarios, adding pedestrian‑collision warning (PCW), and further optimizing the detection pipeline.
References: [1] National Transportation Safety Board. Special Investigation Report: Highway Vehicle‑ and Infrastructure‑Based Technology for the Prevention of Rear‑End Collisions. 2001. [2] Farmer CM. Crash Avoidance Potential of Five Vehicle Technologies. IIHS, 2008. [3] Li Y, et al. Scale‑Aware Trident Networks for Object Detection. ICCV 2019. [4] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. arXiv 2018. [5] Liu W, et al. SSD: Single Shot MultiBox Detector. ECCV 2016. [6] Ma N, et al. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. ECCV 2018.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.