
Understanding the Linux Network Packet Reception Process

This article provides a comprehensive, step‑by‑step explanation of how Linux receives network packets—from hardware DMA and interrupt handling through soft‑interrupt processing, kernel initialization, driver registration, and protocol‑stack traversal—culminating in delivery to the application layer via sockets.


1. Introduction

In the digital age, networks are integral to computer systems, and understanding how Linux receives network packets is essential for performance tuning and troubleshooting.

When a video call freezes or a large data transfer is slow, the root cause often lies in the packet‑reception path, which spans from the NIC hardware to the kernel protocol stack and interrupt mechanisms.

2. Linux Network Fundamentals

2.1 Network Protocol Stack

The Linux kernel implements the link layer via NIC drivers, while the network and transport layers reside in the kernel protocol stack, which exposes socket interfaces to user space in keeping with the classic TCP/IP layering.

Link‑layer frames contain source/destination MAC addresses and type fields; ARP resolves IP to MAC addresses. The network layer (IP) handles routing, fragmentation, and reassembly, while the transport layer provides TCP (reliable) and UDP (unreliable) services.
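As a concrete illustration of the link-layer framing described above, the sketch below parses the 14-byte Ethernet header from a raw frame in user space: 6-byte destination MAC, 6-byte source MAC, then a 2-byte EtherType in network (big-endian) byte order. This is a minimal toy, not kernel code; the constants mirror the standard ETH_P_IP/ETH_P_ARP values.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard EtherType values (same numeric values as the kernel's
 * ETH_P_IP and ETH_P_ARP). */
#define ETHERTYPE_IPV4 0x0800
#define ETHERTYPE_ARP  0x0806

/* Extract the EtherType from a raw Ethernet frame.  The header is
 * dst MAC (bytes 0-5), src MAC (bytes 6-11), EtherType (bytes 12-13,
 * big-endian).  Returns 0 for runt frames shorter than a header. */
static uint16_t frame_ethertype(const uint8_t *frame, size_t len)
{
    if (len < 14)
        return 0;
    return (uint16_t)((frame[12] << 8) | frame[13]);
}
```

The returned value is what the kernel matches against the handlers registered in ptype_base to pick the next layer (IPv4, ARP, and so on).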

2.2 Interrupt Mechanism

Linux uses both hard and soft interrupts for network processing. A hard interrupt is generated by the NIC when a packet arrives, prompting the CPU to copy data from the NIC buffer to memory. Soft interrupts defer heavier processing, such as protocol‑stack handling, to later execution.
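The hard/soft split can be modeled in a few lines of ordinary C: the "hard interrupt" handler only raises a pending bit and returns, and a later "soft interrupt" pass drains the pending work. This is a toy model of the idea, not kernel code; note how repeated raises before the soft pass coalesce into one run, just as repeated raise_softirq calls set the same pending bit.

```c
#include <assert.h>
#include <stdint.h>

#define NET_RX 0               /* toy stand-in for NET_RX_SOFTIRQ */

static uint32_t pending;       /* bitmask of raised soft interrupts */
static int rx_passes;          /* how many times heavy RX work ran  */

/* "Top half": must be fast, so it only marks the softirq pending. */
static void hard_irq(void)
{
    pending |= 1u << NET_RX;   /* repeated raises coalesce here */
}

/* "Bottom half": runs later, outside hard-interrupt context, and
 * does the heavyweight protocol processing.  Returns how many
 * softirqs it serviced. */
static int do_softirqs(void)
{
    int ran = 0;
    if (pending & (1u << NET_RX)) {
        pending &= ~(1u << NET_RX);
        rx_passes++;           /* protocol-stack work would go here */
        ran++;
    }
    return ran;
}
```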

The kernel registers handlers for TCP and UDP via inet_add_protocol and stores them in inet_protos. The IP packet type is registered in ptype_base through dev_add_pack.

//net/ipv4/af_inet.c
static struct packet_type ip_packet_type __read_mostly = {
    .type = cpu_to_be16(ETH_P_IP),
    .func = ip_rcv,
    .list_func = ip_list_rcv,
};

static const struct net_protocol tcp_protocol = {
    .handler = tcp_v4_rcv,
    .err_handler = tcp_v4_err,
    .no_policy = 1,
    .icmp_strict_tag_validation = 1,
};

static const struct net_protocol udp_protocol = {
    .handler = udp_rcv,
    .err_handler = udp_err,
    .no_policy = 1,
};

static int __init inet_init(void){
    // ...
    if (inet_add_protocol(&udp_protocol, IPPROTO_UDP) < 0)
        pr_crit("%s: Cannot add UDP protocol\n", __func__);
    if (inet_add_protocol(&tcp_protocol, IPPROTO_TCP) < 0)
        pr_crit("%s: Cannot add TCP protocol\n", __func__);
    // ...
    dev_add_pack(&ip_packet_type);
}

3. Preparations Before Reception

3.1 Network Subsystem Initialization

During start_kernel, net_dev_init creates per-CPU softnet_data structures, initializes packet queues, and registers the soft-IRQs NET_TX_SOFTIRQ and NET_RX_SOFTIRQ with handlers net_tx_action and net_rx_action.

static int __init net_dev_init(void){
    for_each_possible_cpu(i) {
        struct softnet_data *sd = &per_cpu(softnet_data, i);
        memset(sd, 0, sizeof(*sd));
        skb_queue_head_init(&sd->input_pkt_queue);
        skb_queue_head_init(&sd->process_queue);
        // ...
    }
    open_softirq(NET_TX_SOFTIRQ, net_tx_action);
    open_softirq(NET_RX_SOFTIRQ, net_rx_action);
}

3.2 NIC Driver Initialization

PCI-attached NICs are discovered via the PCI subsystem, and their drivers register with pci_register_driver (the igb driver, for example), while embedded MACs such as Freescale's FEC register as platform drivers instead. In either case, the driver's probe function allocates resources, sets up DMA, and registers a net_device with operations such as ndo_open, ndo_start_xmit, and ethtool callbacks.

//drivers/net/ethernet/freescale/fec_main.c
static struct platform_driver fec_driver = {
    .driver = {
        .name = DRIVER_NAME,
        .pm = &fec_pm_ops,
        .of_match_table = fec_dt_ids,
    },
    .id_table = fec_devtype,
    .probe = fec_probe,
    .remove = fec_drv_remove,
};

3.3 ksoftirqd Kernel Thread Creation

One ksoftirqd thread is spawned per CPU during early boot (via spawn_ksoftirqd); it wakes later to handle pending soft-IRQs, including NET_RX_SOFTIRQ, when soft-IRQ load is too heavy to process inline.

4. Packet Reception Flow

4.1 NIC Receives Data

The NIC uses DMA to place incoming frames into a ring buffer, minimizing CPU involvement.
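The ring buffer can be sketched as a fixed-size circular queue with a producer index advanced by the hardware and a consumer index advanced by the driver. The toy below models only the indexing (real rings hold DMA descriptors pointing at packet buffers, and slot counts like RING_SIZE here are arbitrary for illustration); note that when the ring is full, new frames are dropped, which is exactly what happens on a real NIC when the driver falls behind.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8u              /* power of two, so masking wraps */

/* A toy receive ring: each slot stands in for a DMA descriptor and
 * just records a frame length. */
struct rx_ring {
    uint32_t head;                /* next slot "hardware" fills     */
    uint32_t tail;                /* next slot the "driver" reads   */
    uint16_t slot[RING_SIZE];
};

/* Hardware side: place a frame in the ring, or drop it if full. */
static int ring_produce(struct rx_ring *r, uint16_t len)
{
    if (r->head - r->tail == RING_SIZE)
        return -1;                         /* ring full: frame dropped */
    r->slot[r->head & (RING_SIZE - 1)] = len;
    r->head++;
    return 0;
}

/* Driver side (NAPI poll): take the oldest frame, FIFO order. */
static int ring_consume(struct rx_ring *r, uint16_t *len)
{
    if (r->head == r->tail)
        return -1;                         /* ring empty */
    *len = r->slot[r->tail & (RING_SIZE - 1)];
    r->tail++;
    return 0;
}
```

Because head and tail only ever increase and are masked on access, full and empty are distinguished without wasting a slot.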

4.2 Hardware Interrupt Handling

Upon DMA completion, the NIC raises a hard interrupt. Because DMA has already placed the frame in memory, the handler does minimal work: it acknowledges the event, masks further NIC interrupts, and schedules the NET_RX_SOFTIRQ soft-IRQ through NAPI.

//drivers/net/ethernet/freescale/fec_main.c
static irqreturn_t fec_enet_interrupt(int irq, void *dev_id)
{
    struct net_device *ndev = dev_id;
    struct fec_enet_private *fep = netdev_priv(ndev);
    irqreturn_t ret = IRQ_NONE;

    if (fec_enet_collect_events(fep) && fep->link) {
        ret = IRQ_HANDLED;
        if (napi_schedule_prep(&fep->napi)) {
            writel(0, fep->hwp + FEC_IMASK);
            __napi_schedule(&fep->napi);
        }
    }
    return ret;
}

4.3 Soft‑Interrupt Processing

The soft-IRQ is normally processed on return from the hard interrupt; under sustained load it is deferred to ksoftirqd instead. Either way, net_rx_action runs, drains the per-CPU poll list, and invokes the NIC driver's NAPI poll function (e.g., fec_enet_rx_napi).

static __latent_entropy void net_rx_action(struct softirq_action *h)
{
    struct softnet_data *sd = this_cpu_ptr(&softnet_data);
    int budget = READ_ONCE(netdev_budget);  /* limits work per pass */
    LIST_HEAD(list);
    LIST_HEAD(repoll);

    local_irq_disable();
    list_splice_init(&sd->poll_list, &list);
    local_irq_enable();

    while (!list_empty(&list)) {
        struct napi_struct *n;

        n = list_first_entry(&list, struct napi_struct, poll_list);
        budget -= napi_poll(n, &repoll);
        // ...
    }
}

The NAPI poll routine pulls packets from the ring buffer, performs GRO (generic receive offload) aggregation, and finally calls netif_receive_skb_list_internal to hand packets to the IP layer.

4.4 Protocol‑Stack Traversal

At the network interface layer, Ethernet headers are stripped and the EtherType determines the next protocol (IP, ARP, etc.). The IP layer routes the packet, checks the protocol field, and dispatches to the TCP (tcp_v4_rcv) or UDP (udp_rcv) handler.
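The dispatch on the IP header's protocol field can be sketched as a 256-entry table of handler pointers, mirroring the role inet_protos plays in the kernel. This is a toy user-space model (the handlers and return values are placeholders), but the shape is the same: registration claims a slot once, and reception indexes the table by the protocol byte at offset 9 of the IPv4 header.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PROTO_TCP 6   /* IPv4 protocol numbers, as in IPPROTO_TCP */
#define PROTO_UDP 17  /* and IPPROTO_UDP                          */

typedef int (*proto_handler_t)(const uint8_t *pkt, size_t len);

/* Table indexed by the IP header's 8-bit protocol field. */
static proto_handler_t proto_table[256];

/* Placeholder handlers: each just reports which protocol ran. */
static int toy_tcp_rcv(const uint8_t *p, size_t n) { (void)p; (void)n; return PROTO_TCP; }
static int toy_udp_rcv(const uint8_t *p, size_t n) { (void)p; (void)n; return PROTO_UDP; }

/* Like inet_add_protocol: fails if the slot is already claimed. */
static int register_proto(uint8_t proto, proto_handler_t h)
{
    if (proto_table[proto])
        return -1;
    proto_table[proto] = h;
    return 0;
}

/* Like the IP layer's local delivery step: read the protocol field
 * (byte offset 9 of the IPv4 header) and call the handler. */
static int dispatch(const uint8_t *ip_pkt, size_t len)
{
    if (len < 20)
        return -1;               /* shorter than a minimal IPv4 header */
    uint8_t proto = ip_pkt[9];
    if (!proto_table[proto])
        return -1;               /* no handler: kernel sends ICMP
                                    "protocol unreachable" here */
    return proto_table[proto](ip_pkt, len);
}
```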

4.5 Transport‑Layer Delivery

TCP matches the packet’s 4‑tuple (src/dst IP and ports) to a socket, copies data into the socket’s receive buffer, and wakes the waiting application. UDP performs a similar lookup, keyed chiefly on the destination port (plus addresses for connected sockets).
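The 4-tuple match can be sketched as a search over a socket table, comparing all four key fields. The real kernel hashes the tuple into ehash rather than scanning linearly (and the lookup also considers the network namespace and interface); the struct names here are illustrative only.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The connection 4-tuple used to find an established TCP socket. */
struct four_tuple {
    uint32_t saddr, daddr;   /* source / destination IPv4 address */
    uint16_t sport, dport;   /* source / destination port         */
};

/* Toy stand-in for a socket: just a key and an identifier. */
struct toy_sock {
    struct four_tuple key;
    int id;
};

/* Linear-scan lookup; the kernel uses a hash table keyed on the
 * same fields.  Returns NULL when no socket matches (TCP would
 * answer such a segment with an RST). */
static const struct toy_sock *lookup(const struct toy_sock *tbl, size_t n,
                                     const struct four_tuple *key)
{
    for (size_t i = 0; i < n; i++) {
        const struct four_tuple *k = &tbl[i].key;
        if (k->saddr == key->saddr && k->daddr == key->daddr &&
            k->sport == key->sport && k->dport == key->dport)
            return &tbl[i];
    }
    return NULL;
}
```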

5. Arrival at the Application Layer

Applications read data via socket system calls such as recv or recvfrom . For example, a chat program receives a message, parses it, and displays it to the user. HTTP servers similarly parse incoming requests after the packet has traversed the entire stack.
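The final hand-off can be demonstrated end to end in user space. The sketch below uses a connected AF_UNIX datagram pair (via socketpair) so it runs without network setup; a UDP socket bound to a port would use the same recv call at the end of the reception path described above.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Send one datagram from one end of a socket pair and read it back
 * from the other with recv(), the same system call an application
 * uses to drain its socket receive buffer.  Returns the number of
 * bytes received, or -1 on error. */
static ssize_t echo_once(const char *msg, char *buf, size_t buflen)
{
    int fds[2];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0)
        return -1;

    send(fds[0], msg, strlen(msg), 0);   /* "sender" side           */
    n = recv(fds[1], buf, buflen, 0);    /* application's read side */

    close(fds[0]);
    close(fds[1]);
    return n;
}
```

With SOCK_DGRAM, each recv returns exactly one datagram, matching the message-per-call behavior a UDP receiver sees.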

Tags: Kernel, Network, Linux, Operating System, Networking, Packet Reception

Written by Deepin Linux

Research areas: Windows & Linux platforms, C/C++ backend development, embedded systems and Linux kernel, etc.
