Tag

low-resource


DataFunSummit
Jan 6, 2025 · Artificial Intelligence

Efficient Large‑Model Training with LLaMA‑Factory: Overview, Techniques, and Applications

This article explains how to train large language models efficiently with LLaMA‑Factory, covering the challenges of low‑resource training; memory‑saving optimizations for parameters, gradients, and activations; framework features; quick‑start guidance; performance tuning; real‑world case studies; and a detailed Q&A.

AI · DeepSpeed · LLaMA-Factory
10 min read
DataFunTalk
Jun 21, 2023 · Artificial Intelligence

Low‑Resource NLP Pretraining: Methodology, Experiments, and Zero‑Shot Applications

This article presents a low‑resource NLP pretraining approach that combines transformer‑based language modeling with contrastive vector learning, details the unsupervised sample‑pair construction, introduces a camel‑shaped masking distribution, and demonstrates through extensive experiments that the resulting model achieves strong zero‑shot NLU, NLG, and retrieval performance while requiring minimal compute and data.

NLP · Zero-shot · contrastive learning
10 min read
DataFunTalk
Nov 5, 2019 · Artificial Intelligence

Low-Resource Text-to-Speech: FastSpeech, LightTTS, and LightBERT Overview

This article reviews recent advances in low‑resource text‑to‑speech synthesis, covering the background of TTS, challenges in data‑ and compute‑limited scenarios, and detailed descriptions of FastSpeech, LightTTS, LightBERT, and related lightweight vocoder techniques, along with experimental results and future research directions.

Artificial Intelligence · FastSpeech · LightTTS
20 min read
DataFunTalk
Oct 9, 2019 · Artificial Intelligence

Multilingual Content Understanding in UC International Feed Recommendation

This article presents a comprehensive overview of the challenges, requirements, and technical solutions for multilingual content understanding in UC's international feed recommendation system, covering structured signal construction, low‑resource NLP techniques, transfer learning, quality modeling, and image‑based signal integration.

NLP · Recommendation systems · content understanding
14 min read