Tag

loss function


Cognitive Technology Team
Apr 9, 2025 · Artificial Intelligence

How Neural Networks Learn: Gradient Descent and Loss Functions

This article explains how neural networks learn by using labeled training data, describing the role of weights, biases, activation functions, and how gradient descent iteratively adjusts parameters to minimize loss, illustrated with the MNIST digit‑recognition example.
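The loss-minimization loop this summary describes can be sketched in a few lines (an illustrative toy, not the article's own code): gradient descent repeatedly steps a parameter opposite the gradient of the loss. The one-parameter quadratic loss below is an assumption chosen so the gradient is easy to write by hand.

```python
# Toy gradient descent on the loss L(w) = (w - 3)^2, whose minimum is w = 3.
def gradient_descent(w, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)  # dL/dw at the current parameter value
        w -= lr * grad      # step opposite the gradient to reduce the loss
    return w

print(gradient_descent(w=0.0))  # converges toward the minimum at w = 3
```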

MNIST · deep learning · gradient descent
16 min read
IT Services Circle
Dec 31, 2024 · Artificial Intelligence

Understanding Linear Regression, Loss Functions, and Gradient Descent: A Conversational Guide

This article uses a dialogue format to introduce the fundamentals of linear regression, explain how loss functions such as mean squared error quantify prediction errors, and describe gradient descent as an iterative optimization technique for finding the best model parameters, illustrated with simple numeric examples and visual aids.
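The ingredients this dialogue walks through (a linear hypothesis, mean squared error, iterative gradient steps) fit in a short sketch; the data and hyperparameters below are made up for illustration, not taken from the article:

```python
# Fit y = w*x + b by gradient descent on the mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated from y = 2x + 1

def fit(w=0.0, b=0.0, lr=0.05, steps=5000):
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the MSE with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = fit()
print(round(w, 3), round(b, 3))  # approaches w = 2, b = 1
```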

AI Basics · gradient descent · linear regression
13 min read
Model Perspective
Sep 10, 2024 · Artificial Intelligence

Why Cross-Entropy Is the Key Loss Function for Classification Models

This article explains how loss functions evaluate model performance, contrasts regression’s mean squared error with classification’s cross‑entropy, describes one‑hot encoding and softmax outputs, and shows why higher predicted probabilities for the correct class yield lower loss, highlighting applications in image, language, and speech tasks.
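The softmax-plus-cross-entropy relationship this summary describes can be checked numerically (a minimal sketch, not the article's code): the more probability mass the model puts on the correct class, the lower the loss.

```python
import math

def softmax(logits):
    exps = [math.exp(z - max(logits)) for z in logits]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_index):
    # One-hot target: only the true class contributes, as -log(p_true).
    return -math.log(probs[true_index])

confident = softmax([4.0, 1.0, 0.0])  # most mass on class 0
uncertain = softmax([1.2, 1.0, 1.0])  # nearly uniform
print(cross_entropy(confident, 0) < cross_entropy(uncertain, 0))  # True
```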

classification · cross-entropy · loss function
5 min read
Bilibili Tech
Mar 1, 2024 · Artificial Intelligence

Bilibili's Self-Developed Video Super-Resolution Algorithm: Background, Optimization Directions, and Implementation Details

Bilibili’s self-developed video super-resolution system upgrades low-resolution streams to 4K using three parallel degradation-branch networks (texture-enhancing, line-recovering, and noise-removing) tailored to anime, game, and real-world content, delivering sharper edges, finer textures, and measurable quality gains across its online playback pipeline.

AI · Bilibili · Model Architecture
16 min read
DataFunTalk
Aug 27, 2019 · Artificial Intelligence

How Machines Learn: From Newton’s Second Law to the Core Steps of Supervised Learning

This article illustrates how a machine can rediscover Newton’s second law by treating force and acceleration data as a simple linear regression problem, detailing the three fundamental steps of hypothesis space definition, loss function design, and optimization through calculus or gradient methods.
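The three steps this summary names can be sketched on invented force/acceleration data: pick the hypothesis space a = k·F, use squared error as the loss, and optimize in closed form by setting the derivative to zero (the data and mass below are assumptions for illustration):

```python
forces = [1.0, 2.0, 3.0, 4.0]  # applied force (N), made-up measurements
accels = [0.5, 1.0, 1.5, 2.0]  # observed acceleration (m/s^2) for m = 2 kg

# For the one-parameter hypothesis a = k * F with squared-error loss,
# setting dL/dk = 0 gives the least-squares solution k = sum(F*a) / sum(F^2).
k = sum(f * a for f, a in zip(forces, accels)) / sum(f * f for f in forces)
mass = 1 / k  # Newton's second law rearranged: a = F / m, so m = 1 / k
print(mass)   # recovers m = 2.0
```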

Newton's law · Optimization · hypothesis space
15 min read
DataFunTalk
Apr 25, 2019 · Artificial Intelligence

Comparison of Classification and Ranking Models in Recommendation Systems

This article examines the differences and similarities between classification (pointwise) and ranking (pairwise) models for recommendation systems, covering their probabilistic foundations, loss functions, parameter updates, and practical implications such as sensitivity to statistical features and robustness.
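The pointwise-versus-pairwise contrast can be made concrete with two toy scoring losses (a sketch under assumed sigmoid/logistic forms, not the article's exact formulation): the pointwise loss scores each item against its own label, while the pairwise loss scores only the gap between a positive and a negative item.

```python
import math

def pointwise_loss(score, label):
    # Classification view: log loss on the sigmoid click probability.
    p = 1 / (1 + math.exp(-score))
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def pairwise_loss(score_pos, score_neg):
    # Ranking view: logistic loss on the margin; small when the positive
    # item outranks the negative one by a wide gap.
    return math.log(1 + math.exp(-(score_pos - score_neg)))

s_pos, s_neg = 2.0, 0.5
print(pointwise_loss(s_pos, 1) + pointwise_loss(s_neg, 0))
print(pairwise_loss(s_pos, s_neg))
```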

Recommendation systems · classification model · loss function
10 min read