Tag

prompt learning

9 articles collected under this tag.

DataFunSummit
Jun 12, 2024 · Artificial Intelligence

Large Language Model (LLM) Powered Recommendation Systems: Overview, Techniques, Challenges, and Future Directions

This article reviews how large language models are transforming recommendation systems, covering the fundamentals of LLMs; recent LLM‑enabled methods for representation, learning, and generalization; challenges such as scalability, bias, and privacy; and future research directions, including personalized prompts and robust model integration.

Bias Mitigation · LLM · Model Generalization
0 likes · 19 min read
DataFunSummit
Mar 29, 2024 · Artificial Intelligence

Large Language Model (LLM) Revolution in Recommendation Systems: Overview, Techniques, and Future Directions

This article reviews how the rapid rise of large language models, exemplified by ChatGPT, is transforming recommendation systems: it examines how LLMs address traditional ID‑centric limitations, introduces prompt‑based and ID‑free representations, and surveys recent research advances, practical challenges, and future research directions.
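The ID‑free, prompt‑based representation mentioned above can be made concrete with a small sketch: instead of feeding a recommender opaque item IDs, the user's history and the candidate items are rendered as natural‑language text that an LLM can reason over. The `build_rec_prompt` helper and all item names below are illustrative assumptions, not from the article.

```python
def build_rec_prompt(history, candidates, top_k=2):
    """Render a user's interaction history and candidate items as an LLM prompt.

    This is a minimal sketch of the prompt-based, ID-free idea: every item is
    described in text rather than referenced by an opaque ID.
    """
    history_text = "; ".join(history)
    candidate_text = "\n".join(f"- {c}" for c in candidates)
    return (
        f"A user recently interacted with: {history_text}.\n"
        f"From the candidates below, pick the {top_k} most relevant items "
        f"and briefly explain why:\n{candidate_text}"
    )

prompt = build_rec_prompt(
    history=["wireless earbuds", "running shoes"],
    candidates=["fitness tracker", "coffee grinder", "sports water bottle"],
)
print(prompt)
```

The resulting string would be sent to an LLM; because items are text, the same prompt format generalizes to items the recommender has never seen, which is the core appeal of the ID‑free approach.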

AI · LLM · large models
0 likes · 18 min read
DataFunTalk
Feb 26, 2024 · Artificial Intelligence

Large Language Model Empowered Recommendation Systems: Overview, Techniques, and Future Directions

With the rapid rise of ChatGPT and large language models, recommendation systems are undergoing a transformative shift, moving beyond traditional behavior‑based methods to leverage LLMs for improved generalization, representation, and prompt‑based learning, while addressing challenges such as scalability, interpretability, bias, and deployment costs.

AI · Generalization · LLM
0 likes · 19 min read
DataFunTalk
Aug 13, 2023 · Artificial Intelligence

Applying Large Language Models to Search Advertising Satisfaction: From DNN to ERNIE and Prompt Learning

This article details how Baidu's Fengchao team applies large language models to search advertising satisfaction modeling: it traces the transition from DNN embeddings to ERNIE, introduces multi‑level tokenization and discrete core‑word inputs, and uses prompt learning and AIGC techniques to improve satisfaction prediction and industry‑specific relevance modeling.

AIGC · Baidu · Search Advertising
0 likes · 22 min read
DataFunTalk
May 18, 2023 · Artificial Intelligence

Query Intent Recognition in Enterprise Search: Knowledge‑Enhanced and Pretrained Model Approaches

This article explains how Alibaba's enterprise search system tackles query intent recognition by combining knowledge‑enhanced techniques, short‑text classification, and pretrained language models such as StructBERT and prompt‑learning, and it shares two real‑world case studies, experimental results, and future research directions.

NLP · enterprise search · knowledge enhancement
0 likes · 19 min read
Sohu Tech Products
Mar 22, 2023 · Artificial Intelligence

An Overview of Prompt Learning in Natural Language Processing

This article reviews the evolution of NLP training paradigms, explains why prompt learning is needed, defines its core concepts, and surveys major hard‑template and soft‑template methods such as PET, LM‑BFF, P‑tuning, and Prefix‑tuning, highlighting their advantages for few‑shot and zero‑shot scenarios.
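The hard‑template idea behind methods like PET, mentioned in the summary, can be sketched briefly: a classification task is recast as filling a [MASK] slot in a hand‑written template, so a masked language model can score candidate label words. The template and verbalizer below are illustrative assumptions, not taken from any of the surveyed papers.

```python
# Hard-template (cloze-style) prompting sketch: the task is rephrased so a
# masked language model predicts a word at the [MASK] position, and a
# "verbalizer" maps predicted words back to class labels.
TEMPLATE = "{text} Overall, it was [MASK]."
VERBALIZER = {"positive": "great", "negative": "terrible"}

def to_cloze(text):
    """Wrap an input sentence in the hand-written hard template."""
    return TEMPLATE.format(text=text)

def label_from_filled_word(word):
    """Map the model's predicted fill word back to a class label."""
    for label, w in VERBALIZER.items():
        if w == word:
            return label
    return None

cloze = to_cloze("The plot was gripping from start to finish.")
print(cloze)
print(label_from_filled_word("great"))
```

Because the template and verbalizer carry task knowledge, this setup needs far fewer labeled examples than head‑based fine‑tuning, which is why the surveyed methods target few‑shot and zero‑shot scenarios; soft‑template methods such as P‑tuning replace the hand‑written words with learned continuous vectors.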

NLP · few-shot · pretrained models
0 likes · 10 min read
Top Architect
Mar 10, 2023 · Artificial Intelligence

Understanding InstructGPT and ChatGPT: Architecture, Training Pipeline, and Performance Analysis

This article provides a comprehensive overview of the GPT series, explains the differences between prompt learning and instruction learning, details the three‑stage training pipeline of InstructGPT/ChatGPT—including supervised fine‑tuning, reward‑model training, and PPO‑based reinforcement learning—examines their strengths, weaknesses, and future research directions, and discusses the broader impact of these models on AI development.
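The three‑stage pipeline described above can be summarized as control flow. Every function in this sketch is a placeholder standing in for large‑scale training; only the ordering of the stages mirrors the recipe the article describes.

```python
# Simplified sketch of the InstructGPT/ChatGPT training pipeline.
# All functions are stand-ins; real training operates on model weights.

def supervised_fine_tune(base_model, demonstrations):
    # Stage 1: fine-tune on human-written (prompt, response) demonstrations.
    return {"base": base_model, "sft_examples": len(demonstrations)}

def train_reward_model(policy, ranked_comparisons):
    # Stage 2: fit a reward model on human rankings of sampled responses.
    return {"comparisons": len(ranked_comparisons)}

def ppo_optimize(policy, reward_model, prompts):
    # Stage 3: optimize the policy against the reward model with PPO,
    # typically with a KL penalty keeping it close to the SFT policy.
    policy["ppo_steps"] = len(prompts)
    return policy

policy = supervised_fine_tune("gpt-3", demonstrations=["demo1", "demo2"])
rm = train_reward_model(policy, ranked_comparisons=["cmp1", "cmp2", "cmp3"])
final = ppo_optimize(policy, rm, prompts=["p1", "p2"])
print(final["ppo_steps"])
```

The key design point the article emphasizes is the separation of concerns: humans provide demonstrations and rankings rather than reward functions, and the learned reward model makes preference feedback cheap enough to drive reinforcement learning at scale.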

AI · ChatGPT · InstructGPT
0 likes · 22 min read
DataFunTalk
Dec 1, 2022 · Artificial Intelligence

Advances and Challenges in Controllable Text Generation with Pretrained Language Models

This report reviews the background, recent research progress, practical applications, and future directions of controllable text generation using transformer‑based pretrained language models, highlighting methods such as decoding strategies, prompt learning, memory networks, continual learning, contrastive training, and knowledge integration.

continual learning · contrastive training · controllable text generation
0 likes · 13 min read
DataFunTalk
Nov 23, 2022 · Artificial Intelligence

Lightweight Adaptation Techniques for Multimodal Large Models

This article presents a comprehensive overview of lightweight adaptation methods—including language, domain, and optimization‑goal adapters and structured prompts—to overcome language mismatch, low domain fit, and objective differences when deploying open‑source multimodal large models in real‑world AI applications.

AI · Adapter · domain adaptation
0 likes · 14 min read