Tag: hallucination


Architecture and Beyond
May 16, 2025 · Artificial Intelligence

Understanding AI Hallucinations: The Fictional Reality of Large Language Models

This essay explores why AI systems hallucinate by framing their reality as a vast fictional narrative built from human language data. It argues that a model's knowledge is bounded by the corpus it ingests, and reflects on the philosophical limits of language and truth.

AI · hallucination · knowledge limits
11 min read
Model Perspective
Mar 21, 2025 · Artificial Intelligence

How DeepSeek’s Tree‑Based Reasoning Transforms AI Interaction

DeepSeek’s R1 inference mode replaces linear chain‑of‑thought with a transparent, multi‑path tree reasoning system. The article covers its layered analysis, intent understanding, memory management, emotion detection, and hallucination mitigation, illustrated through a practical example of buying authentic cigarettes and detailed technical breakdowns.

Artificial Intelligence · hallucination · large language models
16 min read
AntTech
Mar 10, 2025 · Artificial Intelligence

Ant Insurance and Zhejiang University’s AAAI 2025 Papers Tackle Hallucination in Large Vision‑Language and Video Models

Two collaborative papers by Ant Insurance and Zhejiang University were accepted at AAAI 2025. One introduces the MoLE decoding framework to reduce hallucination in large vision‑language models; the other contributes the MHBench benchmark and Motion Contrastive Decoding to address motion hallucination in video large language models, advancing reliable AI‑driven insurance claim processing.

AAAI 2025 · AI research · hallucination
6 min read
Code Mala Tang
Mar 1, 2025 · Artificial Intelligence

Why Do Large Language Models Hallucinate and How Can We Fix It?

This article explains why large language models produce plausible‑looking but false information, traces the problem to the supervised fine‑tuning stage, and outlines mitigation techniques such as knowledge interrogation, RLHF, and tool‑augmented search to reduce hallucinations.

LLM · RLHF · Training
12 min read
Cognitive Technology Team
Feb 18, 2025 · Artificial Intelligence

Two Major Bottlenecks in Deploying Large Language Models: Machine Deception and Hallucination

Deploying large language models faces two critical challenges: machine deception, where AI generates plausible yet false content, and machine hallucination, where outputs are logically coherent but factually inaccurate. Both undermine trust, and the article outlines their causes and impacts along with technical, ethical, and regulatory mitigation strategies.

Artificial Intelligence · Machine Deception · Trustworthiness
6 min read
AntTech
Aug 6, 2024 · Artificial Intelligence

Trustworthy Alignment of Retrieval‑Augmented Large Language Models via Reinforcement Learning

The article explains how recent research tackles large language model hallucinations by combining retrieval‑augmented generation with reinforcement learning, achieving significant accuracy and reliability gains and paving the way for safe AI deployment in critical sectors such as finance and healthcare.

ICML 2024 · Retrieval-Augmented Generation · hallucination
5 min read
JD Tech
Jul 22, 2024 · Artificial Intelligence

Task‑Aware Decoding (TaD): A Plug‑and‑Play Method to Mitigate Hallucinations in Large Language Models

This article presents Task‑aware Decoding (TaD), a plug‑and‑play technique from JD Tech and Tsinghua University accepted at IJCAI 2024. TaD reduces intrinsic hallucinations in large language models by contrasting pre‑ and post‑fine‑tuning outputs, and the article demonstrates its effectiveness, combined with Retrieval‑Augmented Generation, across a range of tasks.

Artificial Intelligence · Fine-tuning · LLM
18 min read
JD Tech Talk
Jul 16, 2024 · Artificial Intelligence

Task‑Aware Decoding (TaD): A Plug‑and‑Play Method to Mitigate Hallucinations in Large Language Models

TaD, a task‑aware decoding technique jointly developed by JD.com and Tsinghua University and presented at IJCAI 2024, leverages differences between pre‑ and post‑fine‑tuned LLM outputs to construct knowledge vectors, significantly reducing hallucinations across various models, tasks, and data‑scarce scenarios, especially when combined with RAG.

AI · LLM · RAG
18 min read
DataFunSummit
Apr 13, 2024 · Artificial Intelligence

Understanding and Mitigating Hallucinations in Large Language Model Industry Q&A with Knowledge Graphs

This article examines why large language models often hallucinate in industry question answering. It defines the phenomenon, traces its data and training origins, proposes evaluation metrics, and presents practical mitigation strategies: high‑quality fine‑tuning data, honest refusal mechanisms, advanced decoding methods, and external knowledge‑graph augmentation.

AI evaluation · Prompt Engineering · Retrieval-Augmented Generation
21 min read
DataFunTalk
Feb 10, 2024 · Artificial Intelligence

Mitigating Hallucinations in Large Language Model Applications with Knowledge Graphs

This article examines the challenges of applying large language models to industry Q&A. It defines hallucination phenomena, analyzes their causes and impact, and proposes strategies to reduce hallucinations and improve answer reliability: high‑quality fine‑tuning data, honest alignment, advanced decoding, and external knowledge‑graph augmentation.

Prompt Engineering · Retrieval-Augmented Generation · hallucination
21 min read
Tencent Tech
Sep 20, 2023 · Artificial Intelligence

Why Do Large Language Models Hallucinate and How to Reduce It?

The article explains why large language models hallucinate, attributing the problem to data errors, training conflicts, and inference uncertainty. It then outlines mitigations: data cleaning, model‑level feedback, knowledge augmentation, constraint techniques, and post‑processing methods such as the “Truth‑seeking” algorithm.

AI safety · data quality · hallucination
8 min read
ByteFE
Jun 15, 2023 · Artificial Intelligence

Effective Prompt Engineering: Techniques, Prompt Injection Prevention, Hallucination Mitigation, and Advanced Prompting Strategies

This article explains how to craft efficient prompts by combining clear instructions and questions, discusses prompt injection risks and mitigation with delimiters, addresses hallucinations, and introduces zero‑shot, few‑shot, and chain‑of‑thought prompting techniques for large language models.

Chain-of-Thought · LLM · Prompt Engineering
16 min read
Rare Earth Juejin Tech Community
May 5, 2023 · Artificial Intelligence

Limitations of Generative Pre‑trained Transformers: Hallucinations, Memory, Planning, and Architectural Proposals

The article critically examines GPT‑4 and similar transformer models, highlighting persistent hallucinations, outdated knowledge, insufficient domain coverage, and the lack of planning and memory. It proposes architectural extensions inspired by fast‑slow thinking and differentiable modules to overcome these fundamental constraints.

AI limitations · GPT-4 · Model Architecture
24 min read