Artificial Intelligence

Meta‑Knowledge Transfer for Automated Machine Learning: System Architecture and Methodology

This article proposes a meta‑knowledge transfer framework for AutoML systems, detailing a four‑layer architecture, methods for collecting and updating structured model meta‑knowledge, and strategies that use this knowledge to guide hyper‑parameter search and early‑stop training, thereby improving efficiency and reducing resource consumption.

JD Tech Talk

Just as human learning builds on prior knowledge, the authors explore applying an analogous transfer mechanism to machine learning by introducing meta-knowledge that captures experience from previous model-training tasks.

Current AutoML systems rely heavily on expert intuition, grid/random search, or generic hyper‑parameter optimization algorithms, which are resource‑intensive and do not exploit historical task knowledge.

The proposed solution consists of a four‑layer system architecture:

1. Infrastructure layer: CPU/GPU/FPGA clusters, storage, and networking.
2. Engine and scheduling layer: ML/DL frameworks, the AutoML engine, the training/evaluation engine, and an early‑stop controller.
3. Meta‑knowledge management layer: collection, migration, and update modules, plus the knowledge repository and the search‑space and model warehouses.
4. Application layer: a visual UI for task design, strategy configuration, and monitoring.
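The layer breakdown above can be captured as a simple component map. The layer and component names follow the article; the dictionary representation itself is only an illustrative sketch, not part of the system.

```python
# Illustrative component map of the four-layer architecture described above.
ARCHITECTURE = {
    "infrastructure": [
        "CPU/GPU/FPGA clusters", "storage", "networking",
    ],
    "engine_and_scheduling": [
        "ML/DL frameworks", "AutoML engine",
        "training/evaluation engine", "early-stop controller",
    ],
    "meta_knowledge_management": [
        "collection module", "migration module", "update module",
        "knowledge repository", "search-space warehouse", "model warehouse",
    ],
    "application": [
        "task design UI", "strategy configuration", "monitoring",
    ],
}
```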

Meta‑knowledge is collected in two stages: (a) extracting dataset and task attributes using a predefined feature metric set; (b) capturing model‑level features such as loss curves, accuracy trajectories, and final performance during training. These features are stored as structured vectors in a central repository.
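The two collection stages can be sketched as feature extractors that emit fixed-length vectors for the repository. The specific metrics below (sample count, dimensionality, class statistics, loss-curve summaries) are illustrative assumptions, not the paper's exact feature metric set.

```python
import numpy as np

def dataset_meta_features(X, y):
    """Stage (a): dataset/task attributes from a predefined feature metric set.
    The chosen metrics here are assumed examples."""
    n, d = X.shape
    classes, counts = np.unique(y, return_counts=True)
    return np.array([
        n,                            # number of samples
        d,                            # number of features
        len(classes),                 # number of classes
        counts.max() / counts.min(),  # class imbalance ratio
    ], dtype=float)

def model_meta_features(loss_curve, acc_curve):
    """Stage (b): model-level features captured during training
    (loss/accuracy trajectories and final performance)."""
    return np.array([
        loss_curve[-1],                   # final loss
        acc_curve[-1],                    # final accuracy
        len(loss_curve),                  # epochs trained
        loss_curve[0] - loss_curve[-1],   # total loss reduction
    ], dtype=float)
```

Concatenating the two vectors yields the structured entry stored in the central repository.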

Meta‑knowledge migration employs similarity measures (cosine, Minkowski, VDM, KL‑divergence) to retrieve relevant prior knowledge, which then guides hyper‑parameter initialization (addressing the cold‑start problem) and informs early‑stop decisions by comparing live training curves with historic ones.
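A minimal sketch of the migration step, using cosine similarity (one of the measures listed) to retrieve the most relevant prior task, plus a curve-comparison early-stop check. The function names, repository shape, and the `margin` threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two non-zero meta-feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_prior_knowledge(query_vec, repository, k=1):
    """Rank stored entries by similarity to the new task's meta-feature
    vector and return the top-k; their hyper-parameters can then seed
    the search (mitigating cold start)."""
    ranked = sorted(repository,
                    key=lambda e: cosine_similarity(query_vec, e["meta_vec"]),
                    reverse=True)
    return ranked[:k]

def should_early_stop(live_losses, historic_losses, margin=0.1):
    """Stop if the live loss curve trails the retrieved historical curve
    by more than `margin` at the same epoch (assumed heuristic)."""
    t = len(live_losses) - 1
    return t < len(historic_losses) and live_losses[t] > historic_losses[t] + margin
```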

The system also defines update procedures that assess the diversity contribution of new meta‑knowledge, adding valuable entries while discarding redundant or low‑quality data, thus maintaining a healthy knowledge base.
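One way to realize such an update rule is a quality floor plus a minimum-distance diversity test: a candidate entry is kept only if it is both good enough and sufficiently different from what the repository already holds. The Euclidean distance measure and both thresholds are illustrative assumptions, not the article's exact criteria.

```python
import numpy as np

def diversity_contribution(candidate, repository):
    """Minimum Euclidean distance from the candidate meta-feature vector to
    existing entries; larger means a less-explored region of the space."""
    if not repository:
        return float("inf")
    return min(np.linalg.norm(candidate - e) for e in repository)

def update_repository(repository, candidate, quality, q_min=0.5, d_min=1.0):
    """Add the candidate only if it clears a quality floor and contributes
    enough diversity; otherwise discard it as redundant or low-quality.
    Returns True if the entry was added."""
    if quality >= q_min and diversity_contribution(candidate, repository) >= d_min:
        repository.append(candidate)
        return True
    return False
```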

In conclusion, the authors present a comprehensive AutoML enhancement that integrates meta‑knowledge collection, storage, and transfer, enabling more efficient hyper‑parameter search and reduced training waste, and they provide detailed architectural diagrams and reference literature supporting the approach.

Tags: System Architecture, Machine Learning, Model Training, AutoML, Hyperparameter Optimization, Meta-Knowledge
Written by JD Tech Talk, the official JD Tech public account delivering best practices and technology innovation.