
Biological Neurons and Their Simple Mathematical Representation in Neural Networks

This article explains how biological neurons inspire artificial neural networks, describing neuron concepts, threshold firing, weighted inputs, bias, activation functions such as the step and sigmoid functions, and shows how these ideas are expressed mathematically and visualized with diagrams.

Python Programming Learning Circle

1 Biological Neuron

1.1 Concept of Neuron

The idea of neural networks is inspired by biological neurons. In biology, a neuron reacts according to the following process:

1. Neurons form networks.

2. If the sum of signals arriving from other neurons does not exceed a fixed threshold, the neuron does not react.

3. If the sum exceeds the threshold, the neuron fires and transmits a fixed-strength signal to other neurons.

In steps 2 and 3, each incoming signal carries a different weight.
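The three-step process above can be sketched as a toy Python model. The function name, signal values, and threshold here are illustrative choices, not from the article:

```python
def neuron_reacts(incoming_signals, threshold):
    """Toy model of steps 2 and 3: the neuron stays silent while the
    summed incoming signals do not exceed the threshold, and fires
    (returns True) once the sum exceeds it."""
    total = sum(incoming_signals)
    return total > threshold

# A weak stimulus is ignored; a strong one triggers firing.
print(neuron_reacts([0.1, 0.2], threshold=1.0))  # weak input -> False
print(neuron_reacts([0.6, 0.7], threshold=1.0))  # strong input -> True
```

Note that weighting of the individual signals is deliberately left out here; it is introduced in Section 2.3.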

1.2 Neuron Operation

Threshold: the intrinsic sensitivity value of a neuron.

Firing: when the weighted sum of inputs exceeds the threshold, the neuron reacts.

For living organisms, it is important that a neuron ignores weak input signals.

Conversely, if a neuron became excited by any tiny signal, the nervous system would become "emotionally unstable".

2 Simple Mathematical Representation of a Neuron

After describing that a neuron fires when the sum of inputs exceeds a threshold, what is the output after firing?

The firing output is also a signal: regardless of how strong the stimulus is, the neuron outputs a fixed-size signal, typically represented as 0 or 1.

The sum of signals from multiple neurons becomes the neuron's input.

If the input exceeds the neuron's intrinsic threshold, it fires.

The firing output can be represented by the binary signal 0 or 1; all output ports share the same value.

Below we express this mathematically:

Figure 1-1 shows a schematic of this process.

2.1 Input

We set the neuron's reference level for inputs at 0: an input ≤ 0 counts as "no signal" and an input > 0 counts as "signal present".

2.2 Output

We set the firing output to the two values 0 and 1:

y = 0 (the neuron does not fire)
y = 1 (the neuron fires)

2.3 Weight

The firing decision is based on a weighted sum rather than a simple sum: each input carries its own importance coefficient, called a weight. For example, visual signals may carry a higher weight than auditory signals.

w_1x_1 + w_2x_2 + w_3x_3

Here w_1 , w_2 , w_3 are the weights corresponding to inputs x_1 , x_2 , x_3 .

2.4 Firing Condition

When the weighted sum of inputs reaches the threshold θ, the neuron fires:

y = 0 if w_1x_1 + w_2x_2 + w_3x_3 < θ
y = 1 if w_1x_1 + w_2x_2 + w_3x_3 ≥ θ

Example 1 – Two input neurons with signals x_1 , x_2 , weights w_1 , w_2 , and threshold θ . With w_1=5 , w_2=3 , θ=4 , we examine the sum w_1x_1 + w_2x_2 and the output y .
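A short script can enumerate the four binary input combinations of Example 1 (w_1=5, w_2=3, θ=4):

```python
def fires(x1, x2, w1=5, w2=3, theta=4):
    """Two-input neuron of Example 1: y = 1 if the weighted sum
    w1*x1 + w2*x2 reaches the threshold theta, else y = 0."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# Enumerate all four binary input combinations.
for x1 in (0, 1):
    for x2 in (0, 1):
        s = 5 * x1 + 3 * x2
        print(f"x1={x1} x2={x2} sum={s} y={fires(x1, x2)}")
```

The sums are 0, 3, 5, and 8, so the neuron fires (y=1) only when x_1=1, because w_1=5 alone already reaches the threshold while w_2=3 alone does not.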

2.5 Graphical Representation of Firing Condition (Step Function)

The step function plots the sum of inputs on the horizontal axis and the output y on the vertical axis: when the sum is less than θ , y=0 ; otherwise y=1 .

The function can be expressed using the unit step function u(z):

u(z) = 0 if z < 0
u(z) = 1 if z ≥ 0

The graph of u(z) is flat at 0 for negative arguments and jumps to 1 at z = 0.

Using the step function, we substitute the neuron's weighted sum and the threshold to derive the firing function:

y = u(w_1x_1 + w_2x_2 + w_3x_3 − θ)

The argument z of the step function is called the weighted input:

z = w_1x_1 + w_2x_2 + w_3x_3 − θ
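The step-function firing rule can be written directly in code. The function names below are illustrative:

```python
def unit_step(z):
    """u(z): 0 for z < 0, 1 for z >= 0."""
    return 1 if z >= 0 else 0

def neuron_output(xs, ws, theta):
    """y = u(w1*x1 + w2*x2 + w3*x3 - theta)."""
    z = sum(w * x for w, x in zip(ws, xs)) - theta  # weighted input z
    return unit_step(z)

print(neuron_output([1, 0, 1], [5, 3, 2], theta=4))  # z = 7 - 4 = 3 -> 1
print(neuron_output([0, 1, 0], [5, 3, 2], theta=4))  # z = 3 - 4 = -1 -> 0
```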

2.6 Activation Function

Below is a simplified diagram of a neuron:

Biological neurons output only 0 or 1, but an artificial model need not keep this restriction: we replace the binary firing (step) function with a more general activation function a(z), chosen by the modeler.

In this function, y can take any value and represents the neuron's excitability, responsiveness, or activity level.

Comparing the two neuron types: a step-function neuron outputs only the binary values 0 and 1, while an activation-function neuron outputs a continuous value representing its degree of activity.

Sigmoid Function

The sigmoid function σ(z) is a representative activation function, defined as:

σ(z) = 1 / (1 + e^(−z))

It maps any input to the interval (0, 1) and is continuous, smooth, and differentiable, making it easy to work with.

While the step function’s output (0 or 1) indicates firing, the sigmoid’s output (a value between 0 and 1) represents the neuron’s degree of excitation—the closer to 1, the more excited.
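A minimal implementation of σ(z) needs only the standard library:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Large positive z -> close to 1 (strongly excited);
# large negative z -> close to 0 (barely excited); z = 0 -> exactly 0.5.
print(sigmoid(0))   # 0.5
print(sigmoid(5))   # close to 1
print(sigmoid(-5))  # close to 0
```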

2.7 Bias

We revisit the activation function diagram:

Here θ is the threshold, representing the neuron's sensitivity: a larger θ makes the neuron less excitable, a smaller θ makes it more sensitive.

Mathematicians often replace −θ with a bias term b (that is, b = −θ), yielding two equivalent equations that are easier to handle.

The two fundamental equations of neural networks become:

z = w_1x_1 + w_2x_2 + w_3x_3 + b
y = a(z)

The term b is called the bias. In biology, weights and thresholds are non-negative, but in the generalized model they may be negative.
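The two fundamental equations combine into a tiny neuron model. Here the sigmoid serves as the activation function a, and the specific weights and bias are illustrative values chosen for this sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(xs, ws, b):
    """z = w1*x1 + w2*x2 + w3*x3 + b, then y = a(z) with a = sigmoid."""
    z = sum(w * x for w, x in zip(ws, xs)) + b
    return sigmoid(z)

y = neuron([1, 0, 1], ws=[2.0, -1.0, 0.5], b=-1.0)  # z = 2 + 0.5 - 1 = 1.5
print(round(y, 3))  # sigmoid(1.5), roughly 0.818
```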

Linear algebra provides a convenient way to compute the weighted sum z using the inner product of vectors.

We treat x_1 , x_2 , x_3 as a variable vector and the weights w_1 , w_2 , w_3 together with the bias b as a parameter vector; the inner product of the two yields z . Because the variable vector has length 3 while the parameter vector has length 4, we prepend a constant 1 to the variable vector to account for the bias.

This vector inner‑product method is less intuitive for humans but highly efficient for computers when implementing neural‑network code.
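The prepend-a-1 trick can be shown with a plain inner product. Pure Python is used here for self-containedness; in practice one would call NumPy's dot. The numeric values are illustrative:

```python
def inner(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

w1, w2, w3, b = 5.0, 3.0, 2.0, -4.0
x1, x2, x3 = 1, 0, 1

params = [b, w1, w2, w3]     # parameter vector, bias first
variables = [1, x1, x2, x3]  # variable vector with a constant 1 prepended

z = inner(params, variables)  # same as w1*x1 + w2*x2 + w3*x3 + b
print(z)  # 3.0
```

Placing the bias first in the parameter vector mirrors the prepended 1 in the variable vector, so the single inner product reproduces the weighted input z exactly.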

- END -

Tags: artificial intelligence, machine learning, activation function, bias, threshold, neuron, weight
Written by
Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
