Revision History:
2026.02.25: Initial Writing for Introduction, Application Layer and Transport Layer
Introduction
Network edge: hosts(Client and Server), access network(Wired/Wireless links)
Network core: packet/circuit switching(Routers), internet structure
Performance: loss, delay, throughput
Protocols: Format + Order + Action
packet transmission delay = L(packet length) / R(link transmission rate)
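A quick worked example of this formula; the packet size and link rate are illustrative values, not from the notes:

```python
# Transmission delay d = L / R: time to push all L bits of a packet
# onto a link of transmission rate R (bits per second).
def transmission_delay(packet_bits: int, rate_bps: float) -> float:
    """Seconds needed to transmit a packet of L bits over an R bps link."""
    return packet_bits / rate_bps

# 1500-byte packet over a 10 Mbps link:
d = transmission_delay(1500 * 8, 10e6)
print(d)  # 0.0012 s, i.e. 1.2 ms
```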
Guided Media: Solid media(Coaxial Cable, Fiber Optic Cable)
Unguided Media: Radio, propagate freely
Packet Switching: store-and-forward with queueing; if the arrival rate exceeds the output link rate, packets wait in a queue (and are dropped if the buffer fills)
Circuit Switching: FDM and TDM:
Frequency Division Multiplexing (FDM): spectrum divided into frequency bands; each call is allocated its own band and transmits at the max rate of that narrow band
Time Division Multiplexing (TDM): time divided into slots, each call has a slot for transmission
bottleneck link: the link with the smallest transmission rate on an end-to-end path; it limits the achievable throughput
Why layering? 1. modularization eases maintenance. 2. explicit structure allows identification of the system's pieces and their relationships
Application, Transport, Network, Link, Physical.
Application Layer
Client-server paradigm:
Server always on, permanent IP
Clients intermittently connected, do not communicate directly with each other.
Peer to Peer architecture:
No always on server,
self-scalability: new peers bring new service capacity, as well as new service demands
process in different hosts communicate by exchanging messages
Socket
a door between the application process and the transport-layer protocol
identifier includes both IP address and port numbers
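A minimal sketch of the (IP address, port) identifier using Python's standard socket module over loopback; the payload and addresses are made up for illustration:

```python
import socket

# A socket is the "door" between a process and the transport layer;
# the OS identifies it by an (IP address, port) pair.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
ip, port = server.getsockname()      # the (IP, port) identifier of this door

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (ip, port))  # data goes in through the server's door
data, addr = server.recvfrom(1024)
print(data, addr)                    # b'hello' plus the client's own (IP, port)
client.close()
server.close()
```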
What transport service does an app need? Data integrity, throughput, timing, security
web page consists of objects addressable by a Uniform Resource Locator (URL)
HTTP: hypertext transfer protocol(Client Server)
Stateless
- Client initiates TCP connection to server, port 80
- Server accepts TCP connection from client
- HTTP message exchange
- Close
Non-persistent HTTP: at most one object sent over TCP connection
Persistent HTTP: multiple objects can be sent over single TCP
POST: form input; HEAD: request header only; GET: include user data in url; PUT: upload new file to server
200: OK; 301: Moved permanently; 400 Bad request; 505 HTTP version not supported
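The request/response exchange above can be sketched as raw HTTP text without touching the network; the host `example.com` and the sample response are placeholders:

```python
# A non-persistent HTTP/1.1 exchange written out as raw message text.
request = (
    "GET /index.html HTTP/1.1\r\n"  # method, URL path, version
    "Host: example.com\r\n"
    "Connection: close\r\n"         # non-persistent: close after one object
    "\r\n"                          # blank line ends the header
)

# A sample server response with a 200 status line:
response = "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
status_line, rest = response.split("\r\n", 1)
version, code, phrase = status_line.split(" ", 2)
print(code, phrase)  # 200 OK
```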
DNS
Why not centralize DNS? - single point of failure; - traffic volume; - maintenance; - distance;
Top-Level Domain (TLD) servers: .edu; .com; ...
authoritative DNS servers: organization’s own DNS server(s), providing authoritative hostname to IP mappings for organization’s named hosts
Local DNS server: host makes DNS query, sent to local DNS server; it replies from its local cache if possible, otherwise forwards the request into the DNS hierarchy for resolution
Iterated query: contacted server replies with name of server to contact
“I don’t know this name, but ask this server”
recursive query: puts burden of name resolution on contacted name server
“I hired you to get me the result 😡” “OK, OK, will do”
Caching DNS with TTL
DNS resource record
RR format: (name, value, type, ttl)
type=A: name is hostname; value is IP address
(relay1.bar.foo.com, 145.37.93.126, A) Canonical name -> IP
type=CNAME: name is an alias for some “canonical” (real) name, e.g. www.ibm.com is really servereast.backup2.ibm.com; value is the canonical name
(foo.com, relay1.bar.foo.com, CNAME) Alias -> Canonical Name
type=NS: name is domain; value is hostname of authoritative name server for this domain
(foo.com, dns.foo.com, NS) Domain -> DNS
type=MX: value is name of SMTP mail server associated with name
(foo.com, mail.bar.foo.com, MX) Domain -> Mail Server
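A toy lookup over the RR tuples above (TTL omitted for brevity), showing how a CNAME chain is chased down to an A record; `resolve` is a hypothetical helper, not a real resolver:

```python
# RRs in the (name, value, type) shape used in the notes.
RRS = [
    ("foo.com", "relay1.bar.foo.com", "CNAME"),    # alias -> canonical name
    ("relay1.bar.foo.com", "145.37.93.126", "A"),  # canonical name -> IP
]

def resolve(name: str, rrs) -> str:
    """Follow CNAME records until an A record yields an IP address."""
    for rr_name, value, rr_type in rrs:
        if rr_name == name:
            if rr_type == "A":
                return value
            if rr_type == "CNAME":
                return resolve(value, rrs)  # chase the canonical name
    raise KeyError(name)

print(resolve("foo.com", RRS))  # 145.37.93.126
```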
Transport Layer
provide logical communication between application processes running on different hosts
sender: breaks application messages into segments, passes to network layer
receiver: reassembles segments into messages, passes to application layer
Multiplexing/demultiplexing
handle data from multiple sockets, add transport header
use header info to deliver received segments to correct socket
host uses IP addresses & port numbers to direct segment to appropriate socket
Connectionless demultiplexing (UDP)
same dest port, different source IP/port ==> same socket at receiver
Connection-oriented demultiplexing (TCP)
same dest port, different source IP/port ==> different sockets at receiver (demultiplexed by the full 4-tuple)
Source process <==(source IP, source port, dest IP, dest port) ==> Dest process
Multiplexing/demultiplexing happen at all layers
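The difference between the two demultiplexing rules can be sketched as dictionary keys; the addresses below are made up:

```python
# UDP demultiplexes on the destination (IP, port) only; TCP uses the full
# 4-tuple, so two connections to the same server port hit distinct sockets.
def udp_key(seg):
    return (seg["dst_ip"], seg["dst_port"])

def tcp_key(seg):
    return (seg["src_ip"], seg["src_port"], seg["dst_ip"], seg["dst_port"])

a = {"src_ip": "10.0.0.1", "src_port": 5001, "dst_ip": "1.2.3.4", "dst_port": 80}
b = {"src_ip": "10.0.0.2", "src_port": 6002, "dst_ip": "1.2.3.4", "dst_port": 80}

print(udp_key(a) == udp_key(b))  # True  -> same UDP socket
print(tcp_key(a) == tcp_key(b))  # False -> different TCP sockets
```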
UDP: User Datagram Protocol
best effort, may be lost / out of order
Connectionless
simple
small header size
no congestion control
TCP:
point-to-point
reliable, in-order byte stream
connection-oriented
cumulative ACKs
flow controlled
Sequence numbers: byte stream “number” of first byte in segment’s data
Acknowledgements: seq # of next byte expected from other side, cumulative ACK
event: timeout: retransmit segment that caused timeout. restart timer
event: ACK received update what is known to be ACKed. start timer if there are still unACKed segments
- In order, all data already ACKed: delay ACK up to 500 ms for next segment; if none arrives, send ACK.
- In order, one unACKed: cumulative ACK immediately.
- Out of order, higher(gap detected): Send duplicate ACK, indicating next expected seq.
- Fill gap (partially or completely): immediately send ACK, provided the segment starts at the lower end of the gap.
TCP fast retransmit: on 3 duplicate ACKs (3 additional ACKs for the same data), resend the unACKed segment with the smallest seq #
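A small simulation of the cumulative-ACK rules above; `ack_for` is a hypothetical helper that processes (seq, length) segment arrivals at a receiver:

```python
# Each ACK carries the seq # of the next byte expected (cumulative ACK);
# out-of-order segments are buffered and trigger duplicate ACKs.
def ack_for(segments, expected=0):
    """Process arriving (seq, length) segments; return the ACKs generated."""
    acks, buffered = [], {}
    for seq, length in segments:
        if seq == expected:              # in-order arrival
            expected += length
            while expected in buffered:  # fill any buffered gap
                expected += buffered.pop(expected)
        elif seq > expected:             # gap detected: buffer it
            buffered[seq] = length
        acks.append(expected)            # cumulative ACK: next byte expected
    return acks

# Segment 100-199 arrives late: duplicate ACKs for 100, then a jump to 400.
print(ack_for([(0, 100), (200, 100), (300, 100), (100, 100)]))
# [100, 100, 100, 400]
```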
TCP flow control:
free buffer space in rwnd field
RcvBuffer size set via socket options (typical default is 4096 bytes)
TCP 3 way handshake
- Connection request (SYN)
- C: Connect with ISN(SEQ) 8000, SYN
- S: ISN(SEQ) 15000, Receiver buffer 5000 bytes so rwnd = 5000, ACK 8001, SYN, ACK
- C: SEQ 8001, Receiver buffer 1000 bytes so rwnd = 1000, ACK 15001, ACK
- When closing:
- send TCP segment with FIN bit = 1
- respond to received FIN with ACK
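The seq/ack arithmetic of the handshake trace above, as a sketch; the SYN consumes one sequence number, so each ACK is the peer's ISN + 1:

```python
# ISNs taken from the trace above: client 8000, server 15000.
client_isn, server_isn = 8000, 15000

syn     = {"seq": client_isn, "flags": {"SYN"}}
syn_ack = {"seq": server_isn, "ack": syn["seq"] + 1, "flags": {"SYN", "ACK"}}
ack     = {"seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1, "flags": {"ACK"}}

print(syn_ack["ack"], ack["seq"], ack["ack"])  # 8001 8001 15001
```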
TCP congestion control
AIMD: Additive Increase Multiplicative Decrease
Cut in half on loss detected by triple duplicate ACK (TCP Reno)
Cut to 1 MSS (maximum segment size) when loss detected by timeout (TCP Tahoe)
TCP slow start
exponential increase: cwnd starts at 1 MSS and doubles every RTT (incremented by 1 MSS for every ACK received) until it reaches ssthresh
loss indicated by timeout: cwnd set to 1 MSS; slow start
loss indicated by 3 duplicate ACKs: cwnd is cut in half, then grows linearly (TCP Reno)
TCP Tahoe always sets cwnd to 1 (timeout or 3 duplicate acks)
avg TCP throughput: (3/4) W / RTT, where W is the window size (bytes) when loss occurs
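Plugging illustrative numbers into the formula (the W and RTT values are assumptions, not from the notes):

```python
# Average TCP throughput ~ (3/4) * W / RTT, with W the window size in bytes
# when loss occurs and RTT in seconds.
def avg_tcp_throughput(w_bytes: float, rtt_s: float) -> float:
    return 0.75 * w_bytes / rtt_s  # bytes per second

# W = 100 kB window, RTT = 100 ms:
print(avg_tcp_throughput(100_000, 0.1))  # ~750000 bytes/s, i.e. ~6 Mbps
```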
TCP CUBIC
window grows as a cubic function of time since the last loss event, with inflection point at Wmax, replacing AIMD's linear (additive) increase
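A sketch of the CUBIC window function in the RFC 8312 form W(t) = C(t - K)^3 + Wmax; the constants C = 0.4 and beta = 0.7 follow that RFC, and the Wmax value is illustrative:

```python
# K is the time the cubic curve takes to climb back to Wmax after a loss,
# starting from beta * Wmax; the curve's inflection point sits at Wmax.
def w_cubic(t: float, wmax: float, c: float = 0.4, beta: float = 0.7) -> float:
    k = ((wmax * (1 - beta)) / c) ** (1 / 3)  # time to return to Wmax
    return c * (t - k) ** 3 + wmax

wmax = 100.0
k = ((wmax * 0.3) / 0.4) ** (1 / 3)
print(round(w_cubic(0, wmax), 1))  # 70.0  -> starts at beta * Wmax
print(round(w_cubic(k, wmax), 1))  # 100.0 -> flattens out at Wmax
```

The flat region around Wmax is the point: CUBIC probes cautiously near the rate that last caused loss, then accelerates again if no loss occurs.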