White‑Box Cost Governance in Big Data: Engine, Warehouse, and Tool Optimizations
This article summarizes a year‑long white‑box cost governance practice at Kuaishou, which dissects the big‑data platform into engine, data‑warehouse, and tooling layers to achieve deep, measurable cost reductions. It covers the data governance framework, engine auto‑tuning (HBO), compression‑algorithm migration, operator analysis, data‑warehouse metrics, duplicate‑computation reduction, chain‑depth simplification, automated routine governance, and the resulting performance and cost benefits.
Data Governance System: The governance framework rests on four pillars—efficiency (development and consumption), security (production and consumption), quality (prevention, detection, fault handling, post‑mortem), and cost (storage, compute, traffic).
Engine White‑Boxing: The "Engine White‑Box" project implements three optimizations: HBO automatic parameter tuning replaces manual tuning, eliminating its difficulty, instability, and cost; a compression‑algorithm migration from GZIP to ZSTD improves compression ratio by 3–12% without sacrificing speed; and operator analysis examines Spark execution plans, resource usage, and UDF impact to pinpoint bottlenecks.
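Evaluating a codec migration like GZIP → ZSTD comes down to measuring compression ratio against compression time on representative data. ZSTD typically requires the third‑party `zstandard` package, so this sketch uses the stdlib `gzip` and `lzma` codecs purely as stand‑ins to show the measurement methodology; the payload and codec choices are illustrative assumptions, not Kuaishou's benchmark.

```python
import gzip
import lzma
import time

# Synthetic log-like payload standing in for columnar warehouse data.
sample = b"user_id=12345\tevent=click\tts=1690000000\tregion=cn-north\n" * 5000

def measure(name, compress_fn):
    """Time one codec and compute its compression ratio on `sample`."""
    start = time.perf_counter()
    compressed = compress_fn(sample)
    elapsed = time.perf_counter() - start
    ratio = len(sample) / len(compressed)
    return name, ratio, elapsed

# Two stdlib codecs as stand-ins for the GZIP-vs-ZSTD comparison.
results = [
    measure("gzip-6", lambda d: gzip.compress(d, compresslevel=6)),
    measure("lzma-1", lambda d: lzma.compress(d, preset=1)),
]
for name, ratio, elapsed in results:
    print(f"{name}: ratio={ratio:.1f}x time={elapsed * 1000:.1f}ms")
```

In production, the same harness would run over sampled table files per format and compression level, and the migration decision would weigh ratio gains against CPU cost on read and write paths.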
HBO Tuning Process: Four steps—profile construction, coarse tuning, parameter dispatch, and fine‑tuning—automatically adjust CPU/memory allocation, task sharding, and execution parameters, yielding better performance and lower resource consumption.
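The four‑step loop above can be sketched as follows. The profile fields, the p95‑plus‑headroom heuristic, and the parameter name `spark.executor.memory` are illustrative assumptions, not Kuaishou's actual implementation.

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class JobProfile:
    """Step 1: profile constructed from a job's historical runs (hypothetical fields)."""
    peak_memory_mb: list    # peak executor memory observed per historical run
    input_gb: float         # typical input size

def coarse_tune(profile: JobProfile) -> dict:
    """Step 2: coarse tuning — pick a memory budget from the p95 of
    historical peaks, plus 20% headroom (illustrative heuristic)."""
    p95 = quantiles(profile.peak_memory_mb, n=20)[-1]
    return {"spark.executor.memory": f"{int(p95 * 1.2)}m"}

def dispatch(params: dict) -> dict:
    """Step 3: parameter dispatch — in production these would be injected
    into the job's submit config; here we just pass them through."""
    return dict(params)

def fine_tune(params: dict, last_run_peak_mb: int) -> dict:
    """Step 4: fine-tuning — shrink the budget if actual usage stayed low."""
    budget = int(params["spark.executor.memory"].rstrip("m"))
    if last_run_peak_mb < budget * 0.6:
        params["spark.executor.memory"] = f"{int(budget * 0.8)}m"
    return params

profile = JobProfile(peak_memory_mb=[3000, 3200, 2800, 3100, 2900], input_gb=50.0)
params = fine_tune(dispatch(coarse_tune(profile)), last_run_peak_mb=2000)
print(params)
```

The same feedback structure extends naturally to CPU cores and shard counts: each parameter gets a coarse initial value from the profile and is nudged after every run based on observed utilization.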
Data‑Warehouse White‑Boxing: Defines quantitative metrics for completeness, reuse, and compliance; outlines a six‑step workflow that detects and merges duplicate operators via signature collision; reduces chain depth by building operator‑level lineage and automating remediation; and introduces routine governance automation backed by a five‑step safety framework.
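A minimal sketch of the signature‑collision idea: normalize each operator's logic (here represented as a SQL fragment) so cosmetic differences disappear, hash the result, and group jobs whose signatures collide as merge candidates. The normalization rules and job names are illustrative assumptions; a production system would fingerprint canonicalized execution‑plan subtrees rather than raw SQL text.

```python
import hashlib
import re
from collections import defaultdict

def signature(sql: str) -> str:
    """Build a collision signature: lowercase, collapse whitespace, and
    strip trailing semicolons — cosmetic-only normalization (illustrative)."""
    normalized = re.sub(r"\s+", " ", sql.strip().lower()).rstrip(";").strip()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

# Hypothetical scheduled jobs; two compute the same DAU aggregate.
jobs = {
    "daily_dau":  "SELECT dt, COUNT(DISTINCT uid) FROM events GROUP BY dt",
    "dau_report": "select dt,  count(distinct uid)\nfrom events group by dt;",
    "gmv_rollup": "SELECT dt, SUM(gmv) FROM orders GROUP BY dt",
}

# Group jobs by signature; any group larger than one is a merge candidate.
groups = defaultdict(list)
for name, sql in jobs.items():
    groups[signature(sql)].append(name)

duplicates = [names for names in groups.values() if len(names) > 1]
print(duplicates)  # the two DAU jobs collide; gmv_rollup stands alone
```

Once collisions are found, the remaining steps of the workflow—verifying output equivalence, picking a canonical job, and rewiring downstream consumers—require the operator‑level lineage the section describes.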
Benefit Analysis: The combined efforts deliver roughly 5% better storage compression, 16% compute resource savings, and a 14% reduction in job runtime, along with lower failure rates, GC time, and OOM incidents.
Future Plans: Continue improving storage compression (dynamic compression, encoding optimization), deepen data‑warehouse architecture enhancements, extend engine white‑boxing, and explore next‑generation technologies to break through current efficiency and cost ceilings.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.