
Comprehensive Survey of Pre-trained Models for Natural Language Processing

This article provides a detailed survey of pre‑trained models (PTMs) for natural language processing, classifying them into shallow embeddings and contextual encoders, discussing training paradigms such as knowledge integration and model compression, and offering guidance on transfer learning and future challenges.


Pre‑trained models (PTMs) have revolutionized NLP by leveraging large amounts of unlabeled data to learn general language representations that can be fine‑tuned for downstream tasks. The article begins with a motivation for pre‑training, highlighting benefits such as richer language understanding, better initialization, and regularization.

PTMs are divided into two major paradigms: shallow (non‑contextual) word embeddings such as word2vec, GloVe, and NNLM, and deep contextual encoders such as ELMo, GPT, BERT, and XLNet. Shallow embeddings are static, assigning one vector per word type, and therefore suffer from out‑of‑vocabulary (OOV) and polysemy issues; contextual encoders instead produce token representations that depend on the surrounding context.
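The static-versus-contextual distinction can be made concrete with a toy sketch. Nothing below is a real model: the embedding table, the vectors, and the context-averaging "encoder" are all illustrative stand-ins that merely show why a static lookup cannot disambiguate a polysemous word like "bank", while any context-dependent function can.

```python
# Toy illustration only: a static table assigns one vector per word type,
# so "bank" looks identical in every sentence; the crude "contextual"
# function below mixes in neighbouring words, so the same token gets
# different vectors in different sentences.

STATIC = {"bank": [1.0, 0.0], "river": [0.0, 1.0], "money": [0.5, 0.5]}

def static_embed(token):
    """Static lookup: the surrounding context is ignored entirely."""
    return STATIC[token]

def contextual_embed(tokens, i):
    """Stand-in for a contextual encoder: average the token's own vector
    with the mean of its neighbours' vectors."""
    base = STATIC[tokens[i]]
    neighbours = [STATIC[t] for j, t in enumerate(tokens) if j != i]
    mean = [sum(vals) / len(neighbours) for vals in zip(*neighbours)]
    return [0.5 * b + 0.5 * m for b, m in zip(base, mean)]

# "bank" is static regardless of context...
assert static_embed("bank") == static_embed("bank")
# ...but contextual vectors differ between "river bank" and "money bank".
assert contextual_embed(["river", "bank"], 1) != contextual_embed(["money", "bank"], 1)
```

Real contextual encoders replace the averaging step with deep self-attention or recurrent layers, but the observable property is the same: the representation of a token is a function of its whole sentence.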

Within contextual encoders, three families of language‑modeling objectives are described: (1) Autoregressive language models (LM) that predict the next token sequentially, (2) Denoising auto‑encoders (DAE) that mask tokens and predict them bidirectionally (e.g., BERT, RoBERTa), and (3) Permuted language models (PLM) that randomize the factorization order (e.g., XLNet). Additional contrastive‑based objectives such as Deep InfoMax, Replaced Token Detection, and Sentence Order Prediction are also covered.
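The denoising objective can be sketched in a few lines. This is a simplified, hypothetical version of BERT-style input corruption: the real recipe also leaves some selected tokens unchanged or swaps them for random tokens, which is omitted here for brevity.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Simplified BERT-style corruption: randomly replace tokens with
    [MASK] and record the originals as prediction targets, which the
    model must recover using bidirectional context."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = tok
        else:
            corrupted.append(tok)
    return corrupted, targets
```

An autoregressive LM, by contrast, needs no corruption: its target at position *i* is simply token *i*+1, so it only ever conditions on the left context. The permuted LM keeps full-token prediction but samples a random factorization order instead of masking.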

The survey outlines several extensions of PTMs: incorporation of external knowledge (ERNIE‑THU, LIBERT, SenseBERT, etc.), model compression techniques (pruning, quantization, parameter sharing, knowledge distillation), multimodal pre‑training (VideoBERT, ViLBERT, etc.), domain‑specific pre‑training (BioBERT, SciBERT, ClinicalBERT), and multilingual or language‑specific models (mBERT, XLM, CamemBERT, etc.).
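Of the compression techniques listed, knowledge distillation is the easiest to show in isolation. The sketch below implements the core soft-target loss in the style of Hinton-et-al. distillation: cross-entropy between the temperature-softened teacher distribution and the student distribution. The logits and temperature are illustrative values, not from any particular model.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens the
    distribution, exposing the teacher's 'dark knowledge' about wrong classes."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions;
    the student is trained to mimic the teacher's full output distribution,
    not just its argmax label."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# The loss is minimized when the student reproduces the teacher exactly.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
assert mismatched > matched
```

In practice this soft-target term is combined with the ordinary hard-label loss on the downstream task, weighted by a mixing coefficient.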

For transfer learning, the article discusses strategies such as selecting appropriate pre‑training tasks, model architectures, and data, choosing which layers to transfer (embedding, top‑layer, or all layers), and deciding between feature extraction and fine‑tuning. Advanced fine‑tuning methods include multi‑stage training, multi‑task learning, adapter modules, and layer‑wise freezing.
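The feature-extraction-versus-fine-tuning choice reduces to which parameters receive gradients. The hypothetical `Layer` class and layer names below are illustrative only; in a real framework the same decision is made by toggling each parameter's gradient flag.

```python
# Hypothetical minimal model: a named stack of layers, each with a
# trainable flag, to illustrate the transfer choices described above.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # full fine-tuning: everything starts trainable

def freeze_all_but_top(layers, n_top=1):
    """Feature extraction: freeze the pre-trained stack and train only the
    top n_top task-specific layers. Layer-wise (gradual) unfreezing would
    flip these flags back to True stage by stage during training."""
    for layer in layers[:-n_top]:
        layer.trainable = False
    return [layer.name for layer in layers if layer.trainable]

stack = [Layer("embeddings"), Layer("encoder_1"), Layer("encoder_2"), Layer("task_head")]
assert freeze_all_but_top(stack, n_top=1) == ["task_head"]
```

Adapter modules take a middle road: the pre-trained weights stay frozen, and only small bottleneck layers inserted between them are trained.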

Finally, the article identifies open challenges: scaling PTMs further, task‑oriented pre‑training and compression, improving transformer efficiency for longer sequences, enhancing knowledge transfer during fine‑tuning, and increasing interpretability and reliability of PTMs.

Tags: model compression, Natural Language Processing, transfer learning, pretrained models, knowledge integration
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
