Master the TCP/IP Model: From Layers to Real-World Protocols
This comprehensive guide explains the TCP/IP protocol suite: it details each of its four layers, covers key protocols such as IP, TCP, UDP, ICMP, ARP, and DNS, and illustrates essential mechanisms like encapsulation, the three‑way handshake, flow control, and congestion control, along with diagnostic tools such as ping and traceroute.
1. TCP/IP Model
TCP/IP (Transmission Control Protocol/Internet Protocol) is the core suite of protocols that underpins the Internet.
The reference model divides the suite into four layers: link, network, transport, and application. The diagram below shows the correspondence between the TCP/IP model and the OSI model.
The topmost application layer includes familiar protocols such as HTTP and FTP. The transport layer hosts TCP and UDP. The network layer contains the IP protocol, which adds IP addresses to packets. The data‑link layer adds Ethernet headers and CRC encoding before transmission.
Data transmission follows a layered process: the sender adds a header at each layer on the way down (encapsulation), while the receiver strips the headers off in reverse order on the way up (decapsulation) to recover the original data.
An HTTP request provides a concrete example of this encapsulation process.
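As a rough sketch, the per-layer encapsulation can be modeled in Python, with placeholder byte strings standing in for the real TCP, IP, and Ethernet headers:

```python
# Illustrative only: each layer prepends its own (placeholder) header to the
# payload handed down from the layer above.
def encapsulate(http_payload: bytes) -> bytes:
    tcp_segment = b"[TCP hdr]" + http_payload          # transport layer
    ip_packet = b"[IP hdr]" + tcp_segment              # network layer
    frame = b"[ETH hdr]" + ip_packet + b"[CRC]"        # link layer
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The receiver strips headers in the reverse order.
    ip_packet = frame[len(b"[ETH hdr]"):-len(b"[CRC]")]
    tcp_segment = ip_packet[len(b"[IP hdr]"):]
    return tcp_segment[len(b"[TCP hdr]"):]

frame = encapsulate(b"GET / HTTP/1.1\r\n")
assert decapsulate(frame) == b"GET / HTTP/1.1\r\n"
```

The round trip mirrors the diagram: what the application hands to TCP on one host is exactly what pops out of TCP on the other.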
2. Data Link Layer
The physical layer converts the 0/1 bit stream into electrical signals or light pulses. The data‑link layer groups bits into frames, transfers them between neighboring nodes, and uses MAC addresses to uniquely identify each node.
Framing: encapsulate network‑layer datagrams into frames with source and destination MAC addresses.
Transparent transmission: bit stuffing and escape characters.
Reliable transmission: rarely used on low‑error links, but employed on wireless links (WLAN).
Error detection (CRC): receiver discards frames with detected errors.
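A minimal illustration of CRC-based error detection, using Python's `zlib.crc32` (Ethernet's FCS uses the same CRC‑32 polynomial, though with different bit ordering on the wire):

```python
import zlib

payload = b"hello, link layer"
fcs = zlib.crc32(payload)                      # sender computes the checksum
frame = payload + fcs.to_bytes(4, "big")       # FCS appended as a trailer

# Receiver recomputes the CRC over the data and compares it to the trailer.
data, received_fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
assert zlib.crc32(data) == received_fcs        # frame accepted

# A single flipped bit is detected, and the frame would be discarded.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert zlib.crc32(corrupted[:-4]) != int.from_bytes(corrupted[-4:], "big")
```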
3. Network Layer
1. IP Protocol
The IP protocol is the core of the TCP/IP suite; TCP, UDP, ICMP, and IGMP data are all carried in IP packets. IP itself is unreliable and does not guarantee delivery—reliability, when needed, is provided by an upper‑layer protocol such as TCP.
1.1 IP Address
While the data‑link layer uses MAC addresses, the IP layer uses 32‑bit IP addresses, divided into network and host portions to reduce routing table size.
Class A: 0.0.0.0–127.255.255.255
Class B: 128.0.0.0–191.255.255.255
Class C: 192.0.0.0–223.255.255.255
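A small helper (illustrative only) that classifies a dotted-quad address by its first octet, per the ranges above:

```python
def ip_class(address: str) -> str:
    # Classful addressing is determined entirely by the first octet.
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    return "D/E"  # multicast and reserved ranges

assert ip_class("10.0.0.1") == "A"
assert ip_class("172.16.0.1") == "B"
assert ip_class("192.168.1.1") == "C"
```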
1.2 IP Header
The diagram below shows the IP header; only the 8‑bit TTL field is described here.
TTL (Time‑to‑Live) specifies how many routers a packet may traverse before being discarded. Each router decrements TTL by one; when TTL reaches zero, the packet is dropped. The maximum possible TTL is 255, though many systems use a smaller initial value such as 32 or 64.
2. ARP and RARP
ARP resolves an IP address to a MAC address. When a host needs to send an IP packet, it first checks its ARP cache; if the mapping is missing, it broadcasts an ARP request. The host owning the IP replies with its MAC address, and the requester updates its ARP cache.
RARP performs the opposite operation (address resolution from MAC to IP).
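The ARP lookup sequence can be modeled as a toy simulation, with a dict standing in for the broadcast domain and another for the ARP cache (all addresses are made up for illustration):

```python
# Toy ARP resolver: consult the cache first, "broadcast" a request on a miss.
network = {  # stands in for hosts reachable on the local broadcast domain
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.20": "aa:bb:cc:dd:ee:02",
}
arp_cache = {}

def resolve(ip: str) -> str:
    if ip in arp_cache:          # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = network[ip]            # models the ARP request/reply exchange
    arp_cache[ip] = mac          # requester updates its cache
    return mac

resolve("192.168.1.10")
assert arp_cache == {"192.168.1.10": "aa:bb:cc:dd:ee:01"}
```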
3. ICMP Protocol
ICMP (Internet Control Message Protocol) operates at the IP layer to report errors such as host or network unreachable. It encapsulates error information and returns it to the originating host, enabling higher‑level protocols to handle failures.
4. Ping
Ping is the most famous ICMP application. It sends an ICMP echo request and waits for an echo reply, allowing users to verify network connectivity and diagnose faults.
Typical ping output includes round‑trip time and packet loss statistics.
Ping’s name derives from sonar “ping” because it probes another host’s reachability using ICMP packets.
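For illustration, an ICMP echo request can be assembled by hand with Python's `struct` module; the checksum routine below follows the standard Internet one's-complement algorithm. Actually sending it would require a raw socket and elevated privileges, so this sketch only constructs and verifies the packet (the identifier and payload are arbitrary):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # Internet checksum: one's-complement sum of 16-bit words, carries folded.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (echo request), code 0; checksum covers the whole ICMP message.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    chks = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, chks, ident, seq) + payload

pkt = build_echo_request(0x1234, 1, b"ping")
# A correctly checksummed ICMP message sums to zero on re-verification.
assert icmp_checksum(pkt) == 0
```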
5. Traceroute
Traceroute discovers the path packets take to a destination. It sends UDP packets with an initial TTL of 1; each router decrements TTL, and when TTL reaches zero the router returns an ICMP “time exceeded” message. The process repeats with increasing TTL values (2, 3, …) until the destination is reached, revealing each hop’s IP address.
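The TTL trick can be modeled without raw sockets. In this toy simulation the hop addresses are invented; each probe "expires" one router deeper than the last, so increasing TTLs reveal the path hop by hop:

```python
# Toy model of traceroute. In reality the intermediate routers return ICMP
# "time exceeded" and the destination returns "port unreachable".
path = ["10.0.0.1", "10.1.1.1", "10.2.2.1", "93.184.216.34"]  # last = destination

def probe(ttl: int) -> str:
    for hop in path:
        ttl -= 1                       # each router decrements TTL
        if ttl == 0 or hop == path[-1]:
            return hop                 # replies with "time exceeded" (or arrives)
    return path[-1]

discovered = [probe(ttl) for ttl in range(1, len(path) + 1)]
assert discovered == path              # one new hop per increasing TTL
```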
6. TCP/UDP
Both TCP and UDP reside in the transport layer but have different characteristics and use cases.
Message‑oriented (UDP) : The application specifies the size of each datagram; UDP sends each datagram as a single packet. Large datagrams may be fragmented by IP, reducing efficiency.
Byte‑stream‑oriented (TCP) : TCP treats the data as a continuous stream of bytes, segmenting it as needed. It provides flow control, congestion control, and reliable delivery.
Typical TCP applications include HTTP, HTTPS, FTP, POP, and SMTP, where reliability is essential. UDP is chosen for latency‑sensitive scenarios where occasional loss is acceptable.
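A minimal loopback demo of UDP's message orientation: each `sendto()` becomes one datagram whose boundary is preserved at the receiver, whereas TCP would deliver a single merged byte stream:

```python
import socket

# Bind to port 0 so the OS picks a free port; loopback only, no network needed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first", addr)
sender.sendto(b"second", addr)

# Each recvfrom() returns exactly one datagram — boundaries are preserved.
msgs = [receiver.recvfrom(1024)[0] for _ in range(2)]
assert msgs == [b"first", b"second"]
sender.close()
receiver.close()
```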
7. DNS
DNS (Domain Name System) maps human‑readable domain names to IP addresses. It operates primarily over UDP on port 53 (falling back to TCP for large responses and zone transfers) and provides a distributed database for name resolution.
8. TCP Connection Establishment and Termination
1. Three‑Way Handshake
TCP is connection‑oriented; before data exchange, a three‑step handshake synchronizes sequence numbers and window sizes.
First handshake : Client sends SYN with sequence number x.
Second handshake : Server replies with SYN‑ACK (ack = x+1, seq = y).
Third handshake : Client sends ACK (ack = y+1); both sides enter ESTABLISHED state.
Why three‑way handshake?
It prevents old, delayed connection requests from being mistakenly accepted, ensuring both sides agree on the connection parameters.
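The three steps above can be traced with illustrative sequence numbers (x = 100 and y = 300 are arbitrary values chosen for the sketch):

```python
# Toy walk-through of the three-way handshake.
x, y = 100, 300  # each side picks its own initial sequence number

syn     = {"flags": "SYN",     "seq": x}
syn_ack = {"flags": "SYN-ACK", "seq": y, "ack": syn["seq"] + 1}
ack     = {"flags": "ACK",     "ack": syn_ack["seq"] + 1}

assert syn_ack["ack"] == x + 1   # server acknowledges the client's SYN
assert ack["ack"] == y + 1       # client acknowledges the server's SYN
# Both sides now enter ESTABLISHED.
```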
2. Four‑Way Termination
After data transfer, TCP closes the connection using a four‑step termination.
First : Host 1 sends FIN and enters FIN_WAIT_1.
Second : Host 2 acknowledges with ACK and enters CLOSE_WAIT; on receiving this ACK, Host 1 enters FIN_WAIT_2.
Third : Host 2 sends its own FIN and enters LAST_ACK.
Fourth : Host 1 ACKs the FIN and enters TIME_WAIT; after 2 MSL the connection is fully closed.
Why wait 2 MSL?
Waiting 2 MSL ensures that delayed packets from the old connection expire in the network, and that if the final ACK is lost, Host 2's retransmitted FIN can still be answered. This prevents resource leaks and accidental reuse of the same port pair by a new connection.
9. TCP Flow Control
Flow control prevents the sender from overwhelming the receiver. The receiver advertises a window size (rwnd); the sender must not transmit more bytes than rwnd.
The receiver can shrink its window to zero, forcing the sender to pause until the window reopens.
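A toy calculation of how much the sender may transmit under an advertised window (the function and parameter names are invented for illustration):

```python
# How many more bytes the sender may put in flight, given the receiver's
# advertised window (rwnd) and what is already unacknowledged.
def sendable(total: int, acked: int, in_flight: int, rwnd: int) -> int:
    window_left = rwnd - in_flight          # room the receiver advertised
    remaining = total - acked - in_flight   # data not yet sent
    return min(window_left, remaining)

assert sendable(total=1000, acked=0, in_flight=0, rwnd=400) == 400
assert sendable(total=1000, acked=0, in_flight=400, rwnd=400) == 0  # must pause
assert sendable(total=1000, acked=400, in_flight=0, rwnd=0) == 0    # zero window
```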
10. TCP Congestion Control
Congestion control adjusts the congestion window (cwnd) based on network conditions.
Slow Start : cwnd starts at one MSS and doubles each round‑trip time (RTT) until a loss is detected or cwnd reaches the slow‑start threshold (ssthresh).
Congestion Avoidance : Once cwnd > ssthresh, cwnd grows linearly (cwnd += 1 MSS per RTT) instead of exponentially.
If loss occurs, ssthresh is set to half of the current cwnd, cwnd is reset to 1, and slow start restarts.
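The two growth phases, in units of MSS, can be simulated in a few lines; this models the classic Tahoe-style behavior described above, with one update per RTT:

```python
# cwnd doubles per RTT during slow start, grows by 1 MSS per RTT afterwards.
def next_cwnd(cwnd: int, ssthresh: int) -> int:
    return cwnd * 2 if cwnd < ssthresh else cwnd + 1

cwnd, ssthresh, trace = 1, 8, []
for _ in range(6):
    trace.append(cwnd)
    cwnd = next_cwnd(cwnd, ssthresh)
assert trace == [1, 2, 4, 8, 9, 10]   # exponential, then linear past ssthresh

# On loss: ssthresh is set to half the current cwnd, cwnd restarts from 1.
ssthresh, cwnd = cwnd // 2, 1
assert (ssthresh, cwnd) == (5, 1)
```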
Fast Retransmit and Fast Recovery
Fast Retransmit
The receiver sends a duplicate ACK for each out‑of‑order segment it receives. Upon receiving three duplicate ACKs, the sender immediately retransmits the missing segment without waiting for the retransmission timer.
Fast retransmit can improve throughput by about 20%.
Fast Recovery
After fast retransmit, the sender halves ssthresh, sets cwnd to ssthresh, and continues with congestion avoidance (additive increase) instead of returning to slow start.
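The trigger condition can be sketched as a toy duplicate-ACK counter (illustrative only; a real TCP tracks this state per connection alongside its timers):

```python
# Fast retransmit fires once three duplicate ACKs for the same byte arrive.
def should_fast_retransmit(acks) -> bool:
    dup_count, last = 0, None
    for ack in acks:
        dup_count = dup_count + 1 if ack == last else 0
        last = ack
        if dup_count == 3:
            return True
    return False

# Segment after byte 2000 was lost: every later arrival re-ACKs byte 2000.
assert should_fast_retransmit([1000, 2000, 2000, 2000, 2000])
assert not should_fast_retransmit([1000, 2000, 3000])
```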
Efficient Ops
This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.