
Understanding Memory and Process Interaction: Virtual Memory, Paging, and Allocation in Linux

This article explains how memory works as a temporary storage stage for processes, describes the fundamentals of physical and virtual memory, details paging, page tables, multi‑level paging, allocation mechanisms such as brk() and mmap(), and outlines Linux memory‑management techniques including caching, swapping, and OOM handling.

Deepin Linux

1. Overview of Memory

Memory, also called main memory or RAM, is the computer's temporary storage that holds program code and data while the CPU executes them. Its speed and capacity directly affect overall system performance and user experience.

Memory acts as a bridge between the CPU and slower storage devices; the CPU fetches data from memory far faster than from a hard disk, enabling efficient program execution.

Memory can be classified into three main types: ROM (read‑only memory), RAM (random‑access memory), and cache (high‑speed buffer between CPU and RAM).

ROM stores firmware such as BIOS and cannot be modified during normal operation.

RAM provides read‑write storage for active programs; it loses its contents when power is removed.

Cache (L1, L2, L3) stores frequently accessed data to reduce latency between CPU and RAM.

2. Virtual Memory Technology

Virtual memory allows a process to use a contiguous address space that may be larger than the physical RAM, by mapping unused portions to disk storage.

2.1 Why Virtual Memory Is Needed

Physical memory is limited, while each process expects a large address space (e.g., 3 GB user space + 1 GB kernel space on a 32‑bit system). Virtual memory provides isolation, mitigates fragmentation of physical memory, and enables processes to run even when physical RAM is insufficient.

2.2 How Virtual Memory Works

Each process has its own virtual address space. The operating system maintains page tables that translate virtual addresses to physical frames. When a page is not present in RAM, a page‑fault occurs and the required page is loaded from the swap area on disk.

Example: a 32‑bit processor can address 4 GB of virtual memory, though the actual RAM may be far smaller.

Page tables reside in main memory; the MMU walks them to translate addresses, and the OS updates them during page‑fault handling.

2.3 Virtual Address Space Layout

Linux gives each process a contiguous virtual address space divided into kernel space (high addresses) and user space (low addresses). On 32‑bit systems, the top 1 GB is kernel space and the lower 3 GB is user space; on 64‑bit x86 systems with 48‑bit addressing, user space and kernel space are each 128 TB.

Only when a process switches to kernel mode can it access kernel space, which is shared among all processes.

Physical memory is allocated in page‑sized units on demand; the OS uses a page‑mapping mechanism to associate virtual pages with physical frames.

The page‑table entries are cached in the Translation Lookaside Buffer (TLB) for fast translation.

To reduce the size of page tables, Linux uses a four‑level hierarchical page table (five levels on hardware that supports 57‑bit virtual addresses).

Large pages (HugePages) of 2 MB or 1 GB are used for memory‑intensive applications.

2.4 Virtual Memory Usage

Processes obtain virtual memory via system calls such as mmap, brk, and sbrk. Small allocations (below glibc's default 128 KB mmap threshold) grow the heap via brk, while larger allocations use mmap in the file‑mapping region.

The C library’s malloc chooses between these two mechanisms, each with its own trade‑offs regarding caching and page‑fault frequency.

Memory is only actually backed by physical RAM when a page is first accessed (demand paging).

2.5 Memory Reclamation

When memory pressure arises, the kernel may reclaim pages using LRU caching, swap out rarely used pages to disk, or invoke the OOM killer to terminate high‑memory‑consuming processes.

Swap extends usable memory but is much slower than RAM.

The OOM killer scores processes based on memory usage (higher score = more likely to be killed) and can be tuned via /proc/&lt;pid&gt;/oom_adj (deprecated on modern kernels in favor of /proc/&lt;pid&gt;/oom_score_adj).

echo -16 > /proc/$(pidof sshd)/oom_adj

3. Process‑Memory Interaction

3.1 Memory Loading at Process Startup

When a program is launched, the OS reads the executable header, allocates a virtual address space, and maps code, data, and required shared libraries into that space.

3.2 Runtime Memory Allocation

During execution, functions like malloc request memory from the OS. For small requests, the heap top is moved via brk; for large requests, mmap creates a new mapping.

Copy‑on‑Write (COW) allows parent and child processes created by fork to share pages until one writes to them.

3.3 Memory Reclamation and Management

When a process exits, the kernel frees all its virtual pages. Under memory pressure, the kernel may swap out pages or use replacement algorithms such as LRU, FIFO, or Clock to decide which pages to evict.

4. Low‑Level Memory Management

4.1 MMU and Address Translation

The Memory Management Unit translates virtual addresses to physical addresses using page tables and a TLB cache for fast look‑ups.

4.2 Page Tables and Multi‑Level Paging

Simple linear page tables would consume excessive memory; multi‑level tables store only the portions of the address space that are actually used, dramatically reducing memory overhead.

4.3 Caching and Locality

CPU caches, the TLB, and other buffers exploit temporal and spatial locality to keep frequently accessed data close to the processor, improving overall performance.

Tags: Memory Management, Linux, Virtual Memory, malloc, OOM, MMU, Paging
Written by

Deepin Linux

Research areas: Windows & Linux platforms, C/C++ backend development, embedded systems and Linux kernel, etc.
