How DataTester’s Architecture Upgrade Uses DDD to Tame Code Complexity
DataTester’s A/B testing platform underwent a comprehensive architectural overhaul that applied domain‑driven design, modular refactoring, automated validation, and dependency inversion. The goal was to reduce change amplification, cognitive load, and unknown unknowns, ultimately improving code readability, maintainability, scalability, and development efficiency across the system’s lifecycle.
Why Code Quality Matters
Good code must first satisfy product requirements, but its quality is also judged by architecture, readability, maintainability, and extensibility.
System Evolution Stages
Early: Rapid iteration, one‑week feature delivery, orderly code.
Mid: Growing feature coupling, technical debt, increased cognitive load, slower development.
Late: Frequent small changes ripple across many places, high on‑call burden, system chaos.
Final: Near‑zero productivity; only maintenance remains possible.
Complexity Sources
Following John Ousterhout’s definition, software complexity has three observable aspects:
Change amplification: Simple changes require modifications in many places.
Cognitive load: High effort needed to understand the system.
Unknown unknowns: Uncertainty about which parts need change and the impact of those changes.
Complexity stems from excessive dependencies and ambiguity, which create a feedback loop that accelerates system decay.
Domain‑Driven Design (DDD) as a Remedy
DDD helps answer “where to put code” by structuring the system into bounded contexts, entities, value objects, and services. Applying DDD to DataTester’s experiment management reduces coupling and improves testability.
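To make those building blocks concrete, here is a minimal Go sketch of the difference between an entity and a value object. The names (Experiment, TrafficSplit) are illustrative, not taken from DataTester’s actual codebase.

```go
package main

import "fmt"

// TrafficSplit is a value object: immutable in use, compared by value.
type TrafficSplit struct {
	ControlPct   int
	TreatmentPct int
}

// Experiment is an entity: its identity (ID) persists across state changes.
type Experiment struct {
	ID     int64
	Name   string
	Split  TrafficSplit
	Status string
}

// Rename changes state without changing identity.
func (e *Experiment) Rename(name string) { e.Name = name }

func main() {
	a := TrafficSplit{ControlPct: 50, TreatmentPct: 50}
	b := TrafficSplit{ControlPct: 50, TreatmentPct: 50}
	fmt.Println(a == b) // value objects with equal fields are interchangeable

	exp := Experiment{ID: 42, Name: "checkout-color", Split: a, Status: "draft"}
	exp.Rename("checkout-color-v2") // same entity, new state
	fmt.Println(exp.ID, exp.Name)
}
```

Putting such objects inside a bounded context for experiment management keeps the rules about experiments in one place, which is what reduces coupling.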
Module Overview
The experiment domain is split into four main modules: Log, Experiment, Experiment‑Layer Management, and Workflow. The refactor focuses on the Log and Experiment core modules.
Log Domain: Provides operation‑log APIs, change tracking, and log‑file generation.
Experiment Domain: Consists of BaseExperiment, ExperimentExtension, and ExperimentPlugin to balance reuse and extensibility.
Version Management: Handles experiment versions, white‑list logic, and special scenarios.
Target Audience: Encapsulated in a TargetRule entity for routing and filtering.
Business Process Flow
Every experiment operation can be abstracted into three steps: validation, business processing, and persistence. This separation enables plug‑in‑style extensions and step‑independent workflows.
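The three‑step flow can be sketched in Go as a pipeline of independent steps. Operation and runOperation are hypothetical names for illustration, not DataTester’s actual API.

```go
package main

import (
	"errors"
	"fmt"
)

type Experiment struct {
	ID   int64
	Name string
}

// Operation bundles the three steps; each can vary independently.
type Operation struct {
	Validate func(*Experiment) error
	Process  func(*Experiment) error
	Persist  func(*Experiment) error
}

// runOperation executes validation, business processing, then persistence,
// stopping at the first failure.
func runOperation(op Operation, exp *Experiment) error {
	for _, step := range []func(*Experiment) error{op.Validate, op.Process, op.Persist} {
		if err := step(exp); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	start := Operation{
		Validate: func(e *Experiment) error {
			if e.Name == "" {
				return errors.New("name required")
			}
			return nil
		},
		Process: func(e *Experiment) error { e.Name += ":started"; return nil },
		Persist: func(e *Experiment) error { fmt.Println("saved", e.ID, e.Name); return nil },
	}
	exp := &Experiment{ID: 1, Name: "demo"}
	if err := runOperation(start, exp); err != nil {
		fmt.Println("error:", err)
	}
}
```

Because each step is a separate function value, an extension can swap in its own validation or processing without touching the other steps.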
Automated Validation Mechanism
Validation is divided into field, dependency, functional, and logical checks. A Validator object registers the needed checks based on which fields are present (all fields are optional except the ID), enabling on‑demand validation without coupling between steps.
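A minimal sketch of such field‑driven registration, assuming a request type with optional pointer fields; UpdateRequest, NewValidator, and the individual checks are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// UpdateRequest models a request where every field except ID is optional:
// nil means "not supplied", so no check is registered for it.
type UpdateRequest struct {
	ID      int64
	Name    *string
	Traffic *int // percentage 0..100
}

type Validator struct {
	checks []func() error
}

func (v *Validator) register(c func() error) { v.checks = append(v.checks, c) }

// Run executes only the checks that were registered for this request.
func (v *Validator) Run() error {
	for _, c := range v.checks {
		if err := c(); err != nil {
			return err
		}
	}
	return nil
}

// NewValidator registers checks on demand, based on which fields are present.
func NewValidator(req UpdateRequest) *Validator {
	v := &Validator{}
	if req.Name != nil {
		v.register(func() error {
			if *req.Name == "" {
				return errors.New("name must not be empty")
			}
			return nil
		})
	}
	if req.Traffic != nil {
		v.register(func() error {
			if *req.Traffic < 0 || *req.Traffic > 100 {
				return errors.New("traffic out of range")
			}
			return nil
		})
	}
	return v
}

func main() {
	traffic := 120
	req := UpdateRequest{ID: 7, Traffic: &traffic} // Name absent: no name check runs
	fmt.Println(NewValidator(req).Run())
}
```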
<code>
func GetModuleConfigMapForExpStart() map[string]interface{} {
    return map[string]interface{}{
        "Application": map[string]interface{}{
            "ModuleList": []string{"ApplicationInfo"},
        },
        "Experiment": map[string]interface{}{
            "ModuleList": []string{"BaseExperimentEntity", "Version", "RunningExpListAtSameLayer"},
        },
        "Extension": map[string]interface{}{
            "ParentChild": map[string]interface{}{
                "ParentExperiment": map[string]interface{}{"ModuleList": []string{"ParentBaseExperiment"}},
            },
            "Intelligent": map[string]interface{}{"ModuleList": []string{"TrafficMap"}},
        },
        "Plugin": map[string]interface{}{
            "ExperimentWorkflow": map[string]interface{}{"ModuleList": []string{"ExperimentWorkflow"}},
            "Rollout": map[string]interface{}{"ModuleList": []string{"Rollout"}},
        },
    }
}
</code>
On‑Demand Aggregate Construction
Different API calls require different aggregates. A JSON configuration describes which modules to build for a given scenario, reducing unnecessary data loading.
<code>
{
    "Application": {"ModuleList": ["ApplicationInfo"]},
    "Experiment": {"ModuleList": ["BaseExperimentEntity", "Version", "RunningExpListAtSameLayer"]},
    "Extension": {
        "Intelligent": {"ModuleList": ["TrafficMap"]},
        "ParentChild": {"ParentExperiment": {"ModuleList": ["ParentBaseExperiment"]}}
    },
    "Plugin": {
        "ExperimentWorkflow": {"ModuleList": ["ExperimentWorkflow"]},
        "Rollout": {"ModuleList": ["Rollout"]}
    }
}
</code>
Business Logic Distribution
Logic is split among the three experiment modules:
BaseExperiment: Simple core logic, can be extended as needed.
ExperimentExtension: Handles type‑specific extensions.
ExperimentPlugin: Provides advanced features via a chain‑of‑responsibility pattern.
This design isolates responsibilities, making each function easier to test and reuse.
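The plugin layer’s chain‑of‑responsibility pattern can be sketched in Go as follows; the plugin names and the buildChain helper are illustrative, not DataTester’s actual implementation.

```go
package main

import "fmt"

type Experiment struct {
	ID    int64
	Type  string
	Notes []string
}

// Plugin handles an experiment and passes control to the next plugin.
type Plugin interface {
	Handle(exp *Experiment, next func(*Experiment))
}

type RolloutPlugin struct{}

func (RolloutPlugin) Handle(exp *Experiment, next func(*Experiment)) {
	if exp.Type == "rollout" { // only applies to rollout experiments
		exp.Notes = append(exp.Notes, "rollout applied")
	}
	next(exp)
}

type WorkflowPlugin struct{}

func (WorkflowPlugin) Handle(exp *Experiment, next func(*Experiment)) {
	exp.Notes = append(exp.Notes, "workflow checked")
	next(exp)
}

// buildChain folds the plugins into a single callable pipeline,
// from last to first so each wraps the next.
func buildChain(plugins []Plugin) func(*Experiment) {
	chain := func(*Experiment) {} // terminal no-op
	for i := len(plugins) - 1; i >= 0; i-- {
		p, next := plugins[i], chain
		chain = func(e *Experiment) { p.Handle(e, next) }
	}
	return chain
}

func main() {
	run := buildChain([]Plugin{WorkflowPlugin{}, RolloutPlugin{}})
	exp := &Experiment{ID: 3, Type: "rollout"}
	run(exp)
	fmt.Println(exp.Notes)
}
```

Adding an advanced feature then means adding one plugin to the chain, leaving BaseExperiment untouched.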
External Service Integration
After business processing, external services (e.g., message queues, heat‑map generators) are invoked through a unified service‑call layer, allowing type‑specific handling without polluting core logic.
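One way such a unified service‑call layer might look, with the domain depending only on an interface while concrete adapters are chosen per experiment type; all names here (ExternalService, MQService, HeatMapService, servicesFor) are hypothetical.

```go
package main

import "fmt"

type Experiment struct {
	ID   int64
	Type string
}

// ExternalService abstracts any post-processing integration, so core
// logic never imports concrete clients (dependency inversion).
type ExternalService interface {
	AfterProcess(exp *Experiment) string
}

type MQService struct{}

func (MQService) AfterProcess(e *Experiment) string {
	return fmt.Sprintf("mq: published event for experiment %d", e.ID)
}

type HeatMapService struct{}

func (HeatMapService) AfterProcess(e *Experiment) string {
	return fmt.Sprintf("heatmap: scheduled generation for experiment %d", e.ID)
}

// servicesFor selects the integrations for a given experiment type,
// keeping type-specific wiring out of the domain code.
func servicesFor(expType string) []ExternalService {
	base := []ExternalService{MQService{}}
	if expType == "visual" {
		base = append(base, HeatMapService{})
	}
	return base
}

func main() {
	exp := &Experiment{ID: 9, Type: "visual"}
	for _, svc := range servicesFor(exp.Type) {
		fmt.Println(svc.AfterProcess(exp))
	}
}
```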
Refactoring Outcomes
Development efficiency increased by ~30%.
Performance improved by ~50%.
The refactor has been deployed to all environments, supporting dozens of experiment types and laying groundwork for future plugin‑based extensions.
ByteDance Data Platform
The ByteDance Data Platform team empowers all ByteDance business lines by lowering data‑application barriers, aiming to build data‑driven intelligent enterprises, enable digital transformation across industries, and create greater social value. Internally it supports most ByteDance units; externally it delivers data‑intelligence products under the Volcano Engine brand to enterprise customers.