Tag: ad optimization


JD Cloud Developers
Mar 24, 2025 · Artificial Intelligence

How Multi-Agent Reinforcement Learning Boosts Ad Computation Allocation

This article presents MaRCA, a multi‑agent reinforcement‑learning framework that allocates computation resources across the full ad‑serving chain, modeling user value, compute cost, and action rewards to maximize ad revenue while keeping system load stable under fluctuating traffic.
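
The core idea, spending a compute budget where it buys the most expected value, can be sketched as a simple greedy allocator. This is an illustrative toy only: MaRCA learns these decisions with multi-agent RL, and the stage names and value curves below are hypothetical.

```python
# Toy sketch: greedily fund the upgrade with the best marginal value per
# unit of compute cost, across ad-serving stages, until the budget runs out.

def allocate_compute(stages, budget):
    """Assign quota levels to stages, always funding the best
    marginal-value-per-cost step that still fits the budget."""
    levels = {name: 0 for name, _ in stages}
    options = dict(stages)  # name -> list of (extra_value, extra_cost) steps
    spent = 0.0
    while True:
        best = None
        for name, steps in options.items():
            lvl = levels[name]
            if lvl >= len(steps):
                continue  # stage already at its highest quota
            value, cost = steps[lvl]
            if spent + cost > budget:
                continue  # this upgrade no longer fits
            ratio = value / cost
            if best is None or ratio > best[0]:
                best = (ratio, name, cost)
        if best is None:
            return levels, spent
        _, name, cost = best
        levels[name] += 1
        spent += cost

# Hypothetical stages: each step is (extra value, extra cost) for
# retrieving or scoring more candidates at that stage.
stages = [
    ("recall",  [(10.0, 2.0), (6.0, 2.0), (3.0, 2.0)]),
    ("ranking", [(8.0, 1.0), (4.0, 1.0)]),
    ("rerank",  [(5.0, 1.5)]),
]
levels, spent = allocate_compute(stages, budget=6.0)
```

Under load spikes the budget shrinks and low-ratio upgrades are dropped first, which is the same stabilizing effect the article attributes to the learned policy.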

AI · Multi-Agent · ad optimization
16 min read
JD Cloud Developers
Mar 19, 2025 · Artificial Intelligence

How AIGC Boosts Ad Creative Quality: Trustworthy Image Generation & Selection

In 2024, the advertising team achieved major breakthroughs in AI-generated ad creatives: it introduced a multimodal reliable feedback network to improve image usability, released a large human-annotated dataset, and leveraged multimodal large language models for richer representations and more effective online and offline creative selection.

AIGC · ad optimization · image generation
10 min read
Xiaohongshu Tech REDtech
Feb 2, 2023 · Operations

Optimizing Xiaohongshu Splash Screen Ads: Flow Selection and Dynamic Decision Mechanisms

Xiaohongshu’s new “traffic‑optimal + dynamic decision” framework models splash‑screen ad allocation as a linear‑programming problem with volume guarantees, continuously adjusts weights via feedback, and pre‑computes cached decisions to preserve fast app startup, thereby boosting click‑through rates while meeting delivery commitments.
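
The flow-selection step can be written as a small linear program. Below is a minimal sketch with made-up numbers, assuming two user segments and two guaranteed-delivery splash ads; the production objective and constraints are of course richer.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 2 user segments x 2 splash-screen ads.
# x[s, a] = impressions of ad a shown to segment s (flattened to 4 vars).
ctr = np.array([[0.05, 0.02],
                [0.03, 0.04]])       # predicted CTR per (segment, ad)
supply = [100, 100]                  # app-startup impressions per segment
guarantee = [60, 50]                 # contracted delivery volume per ad

c = -ctr.flatten()                   # linprog minimizes, so negate clicks
A_ub = [
    [1, 1, 0, 0],                    # segment-0 supply cap
    [0, 0, 1, 1],                    # segment-1 supply cap
    [-1, 0, -1, 0],                  # ad-0 volume guarantee (>= as <=)
    [0, -1, 0, -1],                  # ad-1 volume guarantee
]
b_ub = [supply[0], supply[1], -guarantee[0], -guarantee[1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
plan = res.x.reshape(2, 2)           # optimal impression plan
expected_clicks = -res.fun
```

The "dynamic decision" part of the framework then adjusts the weights behind such a plan from delivery feedback and caches the per-user decision ahead of time, so app startup never waits on the solver.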

CTR improvement · Splash Screen · ad optimization
14 min read
Alimama Tech
Nov 23, 2022 · Artificial Intelligence

Enhancing Recommendation System Consistency via Feedback Cascading and LTR Models

The paper proposes a teacher‑student architecture that uses feedback cascading and learning‑to‑rank models with ΔnDCG‑based loss and bid‑aware optimizations to align coarse and fine sorting stages, address sparse data, and improve recommendation consistency, achieving a 4.8% RPM lift in A/B tests.
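
A ΔnDCG-weighted pairwise loss of the kind the summary mentions can be sketched as follows. This is a self-contained, LambdaRank-style toy (list position stands in for the current ranking), not the paper's actual training code.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a list in its current order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def delta_ndcg(rels, i, j):
    """|nDCG change| from swapping the items at positions i and j."""
    ideal = dcg(sorted(rels, reverse=True))
    if ideal == 0:
        return 0.0
    swapped = rels[:]
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return abs(dcg(swapped) - dcg(rels)) / ideal

def pairwise_lambda_loss(scores, rels):
    """Logistic pairwise loss over mis-orderable pairs, each pair
    weighted by the |ΔnDCG| its swap would cause."""
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if rels[i] > rels[j]:
                weight = delta_ndcg(rels, i, j)
                loss += weight * math.log1p(math.exp(-(scores[i] - scores[j])))
    return loss
```

Because each pair is weighted by its ΔnDCG, mistakes near the top of the list dominate the loss, which is what pushes the coarse-ranking (student) stage to agree with the fine-ranking (teacher) stage where it matters most.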

Feedback Cascading · LTR Models · System Consistency
14 min read