Tag: Pretraining


Tencent Technical Engineering
May 12, 2025 · Artificial Intelligence

Comprehensive Summary and Expansion of Andrej Karpathy’s 7‑Hour LLM Lecture

This article provides a detailed Chinese‑to‑English summary of Andrej Karpathy’s 7‑hour LLM tutorial, covering chat process analysis, tokenization, pre‑training data pipelines, model architecture, training strategies, post‑training fine‑tuning, reinforcement learning, chain‑of‑thought reasoning, and current industry applications.

AI, LLM, Model Architecture
25 min read
Alimama Tech
Apr 23, 2025 · Artificial Intelligence

Distribution-aware Graph Prompt Tuning (DAGPrompT) for Heterophilic Graphs

Distribution‑aware Graph Prompt Tuning (DAGPrompT) tackles the pre‑training/downstream mismatch on heterophilic graphs by jointly applying low‑rank GLoRA adaptation and hop‑specific prompts that recast tasks as link‑prediction, yielding up to 4.79% accuracy gains and an average 2.43% improvement in few‑shot node classification.

Graph Neural Networks, Pretraining, Prompt Tuning
9 min read
Cognitive Technology Team
Mar 22, 2025 · Artificial Intelligence

Three Stages of Developing Large Language Models and Practical Guidance

The article outlines the three development phases of large language models—building, pre‑training, and fine‑tuning—describes usage options, highlights key factors such as data scale, architecture, training processes, and evaluation, and offers practical advice for cost‑effective development.

Artificial Intelligence, Fine-tuning, LLM
3 min read
JD Tech Talk
Mar 5, 2025 · Artificial Intelligence

GLM: General Language Model Pretraining with Autoregressive Blank Infilling

GLM introduces a unified pretraining framework that combines autoregressive blank infilling with 2D positional encoding and span shuffling, outperforming BERT, T5, and GPT on a range of NLU and generation tasks such as SuperGLUE, text infilling, and language modeling.

2D positional encoding, NLU, Pretraining
27 min read
Architect
Feb 11, 2025 · Artificial Intelligence

DeepSeek: Training Process, Working Principles, and Recent Innovations

The article explains DeepSeek's two‑stage training pipeline—including massive pre‑training on trillions of tokens and post‑training via instruction tuning and reinforcement learning from human feedback—describes the differences between its V3 instruction model and R1 reasoning model, and highlights performance optimizations and emerging research directions.

AI, DeepSeek, Pretraining
8 min read
DataFunSummit
Feb 5, 2025 · Artificial Intelligence

Exploration and Practice of Large‑Model Data Construction

This presentation details engineering‑focused approaches to building, mixing, and filtering data for large language models, covering data preparation, pre‑training mix strategies such as DoReMi, DoGE and online sampling, post‑training data quality selection methods, and practical Q&A on scaling laws and PDF processing.

AI, Data Engineering, Large Language Models
15 min read
DevOps
Dec 8, 2024 · Artificial Intelligence

Understanding Fine-Tuning in Machine Learning: Concepts, Importance, Steps, and Applications

This article explains fine‑tuning in machine learning, covering its definition, why it matters, the role of pre‑trained models, detailed step‑by‑step procedures, advantages, and diverse applications such as NLP, computer vision, speech and finance, with practical examples like face recognition and object detection.

AI applications, Fine-tuning, Pretraining
16 min read
Bilibili Tech
Nov 5, 2024 · Artificial Intelligence

Bilibili's In-House Role-Playing Large Language Model: Architecture, Training Stages, Evaluation, and Demonstrations

Bilibili’s in‑house role‑playing large language model, built on the Index architecture and refined through pre‑training, supervised fine‑tuning, and preference optimization (PPO and DPO), achieved top scores on the Chinese CharacterEval benchmark, surpassing rivals while incorporating safety alignment and showcasing consistent, personality‑driven dialogue examples.

Evaluation Benchmark, Pretraining, Supervised Fine-tuning
13 min read
Bilibili Tech
Sep 18, 2024 · Artificial Intelligence

Index-1.9B-32K: A 2% GPT-Size Model with Powerful Long-Context Capabilities

Index-1.9B-32K is a 1.9B-parameter model with a 32K-token context window, trained via long-context pre-training and supervised fine-tuning. It achieves strong long-text performance comparable to much larger models at roughly 2% of GPT-4's size, with a trade-off of reduced short-context ability.

AI, Fine-tuning, Open-source
12 min read
DataFunSummit
Sep 1, 2024 · Artificial Intelligence

Data Management in Large Language Model Training: Overview, Pre‑training, SFT, and Future Challenges

This article surveys data management for large language model training, covering an overview, pre‑training data composition, scaling‑law‑driven quantity control, quality filtering, deduplication, harmful‑content removal, instruction fine‑tuning strategies, dynamic data selection, and emerging research challenges such as bias mitigation, multimodal data handling, and synthetic‑data filtering.

Artificial Intelligence, Instruction Fine-Tuning, Large Language Models
18 min read
DataFunTalk
Aug 5, 2024 · Artificial Intelligence

Enhancing Taobao Display Advertising with Multimodal Representations: Challenges, Approaches, and Insights

This article presents a comprehensive study on integrating multimodal image‑text representations into large‑scale e‑commerce advertising CTR models, introducing a semantic‑aware contrastive pre‑training (SCL) method and two application algorithms (SimTier and MAKE) that together achieve over 1% GAUC improvement and significant online gains.

CTR prediction, Pretraining, Recommendation systems
21 min read
Bilibili Tech
Jun 14, 2024 · Artificial Intelligence

Technical Report on the Index-1.9B Series: Model Variants, Pre‑training Optimizations, and Alignment Experiments

The report presents the open‑source Index‑1.9B family (base, pure, chat, and character variants), detailing benchmark results, pre‑training optimizations such as a normalized LM‑Head and a deeper‑but‑slimmer architecture, the finding that a modest amount of instruction data suffices, alignment via SFT/DPO, role‑play enhancements with RAG, and remaining safety and factual limitations.

LLM, Pretraining, Alignment
15 min read
DataFunTalk
May 15, 2024 · Artificial Intelligence

Advances in Video Multimodal Retrieval: Video‑Text Semantic Search and Video‑Video Same‑Source Search

This article presents Ant Group's multimodal research on video retrieval, detailing video‑text semantic search and video‑video same‑source search, introducing a large Chinese pre‑training dataset, novel pre‑training, hard‑sample mining, fine‑grained modeling techniques, and an efficient end‑to‑end copyright detection framework.

Pretraining, copyright detection, fine-grained modeling
38 min read
DataFunSummit
Apr 24, 2024 · Artificial Intelligence

Multimodal Content Understanding in Baidu Commercial Systems: The ViCAN Model and Its Applications

This article presents Baidu's exploration of multimodal content understanding for commercial advertising, detailing the ViCAN pre‑training model, its contrastive and mask‑language learning tasks, integration across recall, ranking and risk‑control pipelines, quantization with MMDict, and future AIGC‑driven generation, all backed by extensive experiments and Q&A.

AI, AIGC, Large-Scale Models
27 min read
DataFunSummit
Mar 27, 2024 · Artificial Intelligence

Generative Multimodal Pretraining (OFA) and Representational Multimodal Pretraining (ONE-PEACE): Research Overview and Findings

This article reviews Tongyi Lab's work on the OFA framework for generative multimodal pretraining and the ONE-PEACE model for unified multimodal representation learning, detailing their architectures, training strategies, experimental results across vision‑language and audio tasks, and future research directions.

Large Models, OFA, ONE-PEACE
15 min read
Rare Earth Juejin Tech Community
Jan 21, 2024 · Artificial Intelligence

Understanding Pretraining and Fine‑Tuning of Large Language Models: Methods, Resources, and Practical Applications

This article explains the concepts of pretraining and fine‑tuning for large language models, compares full‑parameter, LoRA, and QLoRA approaches, discusses resource consumption, introduces the ModelScope SWIFT framework with code examples, and shows how fine‑tuning can improve data‑visualization tasks while reducing token usage.

Data Visualization, Fine-tuning, LLM
24 min read
Rare Earth Juejin Tech Community
Dec 13, 2023 · Artificial Intelligence

Comprehensive Overview of BERT: Architecture, Pre‑training Tasks, and Applications

This article provides a detailed introduction to BERT, covering its bidirectional transformer encoder design, pre‑training objectives such as Masked Language Modeling and Next Sentence Prediction, model configurations, differences from GPT/ELMo, and a wide range of downstream NLP applications.

BERT, Masked Language Model, NLP
17 min read
Rare Earth Juejin Tech Community
Dec 4, 2023 · Artificial Intelligence

An Overview of BERT: Architecture, Pre‑training Tasks, Comparisons, and Applications

This article provides a comprehensive overview of BERT, covering its original paper, model architecture, pre‑training objectives (Masked Language Model and Next Sentence Prediction), differences from ELMo, GPT, and vanilla Transformers, parameter counts, main contributions, and a range of NLP application scenarios such as text classification, sentiment analysis, NER, and machine translation.

BERT, Masked Language Model, NLP
16 min read
Rare Earth Juejin Tech Community
Nov 26, 2023 · Artificial Intelligence

Overview of T5 (Text-to-Text Transfer Transformer): Architecture, Variants, Experiments, and Applications

This article provides a comprehensive overview of Google's T5 model, detailing its unified text‑to‑text formulation, encoder‑decoder architecture, three model variants, attention mask designs, training strategies, model sizes, experimental results, and key contributions to natural language processing.

Artificial Intelligence, NLP, Pretraining
14 min read
AntTech
Nov 7, 2023 · Artificial Intelligence

Multi‑Scale Stochastic Distribution Prediction for User Behavior Representation Learning

The paper proposes a multi‑scale stochastic distribution prediction (MSDP) framework that learns robust user behavior representations by predicting behavior distributions over random time windows, incorporates contrastive regularization, and demonstrates superior performance on both proprietary financial risk data and a public e‑commerce dataset compared with existing masked and next‑behavior pre‑training methods.

AI, Pretraining, distribution prediction
13 min read