06 - Socket Programming
Problem 1: Short Answer Questions
-
(a) Describe the service model exported by UDP. Write down the name of 2 application layer protocols that make use of UDP.
Solution
In short: UDP exports a connectionless, unreliable datagram service with no ordering guarantees or flow control. Each message is independent and may be lost, duplicated, or arrive out of order. DNS and TFTP are two protocols that use UDP.
Elaboration:
UDP Service Model:
Characteristics:
- Connectionless: no setup/teardown; send immediately
- Unreliable: packets may be lost
- Unordered: messages may arrive out of order
- No flow control: sender is not told if the receiver is overwhelmed
- No congestion control: sender ignores network conditions
- Low overhead: minimal header (8 bytes)
- Low latency: no waiting for connection establishment
- Datagram-oriented: each send() = one complete message

Message Delivery Guarantees:
The application sends 3 datagrams: Message 1, Message 2, Message 3.

Possible outcomes:
1. Receive: 1, 2, 3 (all arrive in order)
2. Receive: 1, 3 (message 2 lost)
3. Receive: 2, 1, 3 (out of order)
4. Receive: 1, 1, 3 (message 1 duplicated)
5. Receive: nothing (all lost)
6. Receive: 3 (only the last one)

All of these are valid UDP behaviors, and the application must handle every case.

Two UDP Protocols:
1. DNS (Domain Name System)
Why UDP?
- Simple query/response protocol
- A single request expects a single response
- If no response arrives: time out and retry, possibly with a different server
- Speed matters: DNS queries are frequent
- Low bandwidth: queries and responses are small (typically ~200 bytes)
- Cannot afford TCP's connection overhead (the 3-way handshake adds a full round trip before any data flows)

DNS query flow:
1. Client sends a DNS query (UDP, port 53)
2. Server responds (UDP)
3. Done

No connection setup means fast lookups; loss is tolerated via the client's retry mechanism.

2. TFTP (Trivial File Transfer Protocol)
Why UDP?
- Simple file transfer protocol
- Designed for embedded systems with limited resources
- Minimal overhead
- Client retransmits on timeout
- Each block (512 bytes) is independent

TFTP flow:
1. Client sends a READ/WRITE request (UDP, port 69)
2. Server responds from a fresh ephemeral port
3. Data blocks are exchanged with per-block ACKs
4. If a block is lost, it is retransmitted

Loss is handled at the application layer, so TFTP can run over very simple network stacks.

Other Common UDP Protocols:
- NTP: Network Time Protocol (network synchronization)
- SNMP: Simple Network Management Protocol (monitoring)
- DHCP: Dynamic Host Configuration Protocol (IP assignment)
- VoIP: Skype, Zoom (real-time audio/video)
- Online Games: Quick, low-latency communication
- Streaming: Video/audio streaming (loss tolerance)
Conclusion:
UDP provides a connectionless, unreliable datagram service with minimal overhead. Applications using UDP must handle packet loss, duplication, and reordering. DNS and TFTP are classic examples that benefit from UDP’s low latency and simple operation, accepting unreliability as a trade-off for speed.
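The datagram semantics above can be sketched with Python's socket module. This is a minimal loopback demo (the port choice and message contents are illustrative):

```python
import socket

# "Server": a UDP socket bound to an ephemeral loopback port
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5.0)
server_addr = recv_sock.getsockname()

# "Client": no connection setup; each sendto() is one independent datagram
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    send_sock.sendto(f"Message {i + 1}".encode(), server_addr)

# On loopback, datagrams are effectively never dropped, so all three
# arrive; over a real network, any subset could be lost or reordered.
received = [recv_sock.recvfrom(512)[0].decode() for _ in range(3)]
print(received)

send_sock.close()
recv_sock.close()
```

Note that no connect/accept step appears anywhere, which is exactly the connectionless service model described above.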
-
(b) Write down the service model exported by TCP. Write down the name of 2 application layer protocols that make use of TCP and why they use TCP instead of UDP.
Solution
In short: TCP exports a connection-oriented, reliable, ordered byte-stream service with flow control and congestion control. HTTP and SMTP are two protocols requiring TCP’s reliability guarantees because they handle important data (web pages, emails) that must arrive completely and in order without loss.
Elaboration:
TCP Service Model:
Characteristics:
- Connection-oriented: 3-way handshake to set up, graceful close
- Reliable: all data delivered exactly once (no loss, no duplication)
- Ordered: bytes arrive in the same order they were sent
- Flow control: receiver advertises its buffer space to the sender
- Congestion control: sender adapts to network conditions
- Byte-stream oriented: the application writes bytes and reads bytes
- Higher overhead: 20-byte header, plus a handshake that costs an extra round trip before data flows
- Error detection: checksums verify data integrity

Guaranteed Properties:
The application sends: "Hello World"

TCP guarantees:
1. ALL bytes arrive (no loss)
2. No duplicates (each byte delivered once)
3. In order (H-e-l-l-o-space-W-o-r-l-d)
4. The application never sees partial data or corruption

If TCP detects:
- Loss: retransmit
- Out of order: buffer and reorder
- Corruption: drop the segment, which is then retransmitted

The application is unaware of all of these recovery actions.

Two TCP Protocols:
1. HTTP (HyperText Transfer Protocol)
Why TCP instead of UDP?

Reason 1: Data integrity is critical
- A web page is downloaded together with its images
- Loss of even one byte corrupts the data; an image file missing bytes is unrenderable
- A partial web page is useless, and HTTP has no mechanism of its own to retry individual lost bytes

Reason 2: Large, variable-sized messages
- A web page can range from a few KB to many MB
- UDP has a ~64 KB limit per datagram
- The application would have to fragment data itself
- TCP handles segmentation transparently

Reason 3: User expectation
- "Click link → page loads completely"
- No data loss is tolerated
- HTTP relies on TCP's reliability

Example: a client downloads a 500 KB web page.
- Over UDP: 500+ separate datagrams would be needed, and losing even one would force a retry. Inefficient.
- Over TCP: a single stream; all 500 KB arrive reliably, with lost packets retransmitted invisibly.

2. SMTP (Simple Mail Transfer Protocol)
Why TCP instead of UDP?

Reason 1: Message integrity is essential
- Losing email is unacceptable; pressing "Send" implies reliable delivery
- Recipients expect complete messages and intact attachments

Reason 2: Guaranteed delivery semantics
- SMTP tracks delivery status: "250 OK" means the message was accepted and the server has stored it for relay
- UDP offers no such guarantee; the client could never tell whether the email reached the server

Reason 3: Error detection and recovery
- TCP automatically retransmits lost packets
- SMTP can detect failures at the application layer and retry with a different server if needed

Example: a client sends a 5 MB email with attachments.
- Over UDP: the message would be split into many datagrams, and losing one would force a retry of the whole transfer. Unacceptable for email.
- Over TCP: transparent, reliable transmission; receiving "250 OK" means the message is safely with the server.

Comparison:
| Aspect | UDP | TCP |
| --- | --- | --- |
| Connection | Connectionless | Connection-oriented |
| Reliability | Unreliable | Reliable |
| Order | Unordered | Ordered |
| Flow control | None | Yes (window) |
| Congestion control | None | Yes (AIMD) |
| Error handling | Application layer | TCP layer |
| Handshake | None | 3-way |
| Data unit | Datagram | Byte stream |

More TCP Protocols:
- FTP: File transfer (reliability critical)
- Telnet: Remote login (ordered interaction)
- SSH: Secure shell (data integrity required)
- POP3/IMAP: Email retrieval (data loss intolerable)
Conclusion:
TCP provides connection-oriented, reliable, ordered byte-stream delivery with flow and congestion control. HTTP uses TCP because web page integrity is critical—pages can be large, multi-part, and losing even one byte breaks the page. SMTP uses TCP because email delivery must be guaranteed and errors must be detectable. Both protocols require reliability that UDP cannot provide.
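The byte-stream property described above can be seen in a short loopback sketch: two separate send() calls arrive as one continuous stream with no message boundaries (socket names and message contents here are illustrative):

```python
import socket
import threading

# Listener on an ephemeral loopback port
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
addr = srv.getsockname()

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(addr)  # three-way handshake happens here
    # Two separate sends: TCP treats this as ONE byte stream,
    # not two messages; the receiver sees no record boundaries.
    c.sendall(b"Hello ")
    c.sendall(b"World")
    c.close()

t = threading.Thread(target=client)
t.start()

conn, _ = srv.accept()
data = b""
while True:
    chunk = conn.recv(64)
    if not chunk:
        break  # peer closed: end of stream
    data += chunk
t.join()
conn.close()
srv.close()
print(data.decode())
```

All bytes arrive, in order, regardless of how the sender chunked its writes; the receiver simply drains the stream until the connection closes.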
-
(c) What’s the maximum size of user data that can be sent over TCP with a single send() operation? Justify your answer.
Solution
In short: There is no fixed maximum enforced by TCP for a single send() call. The send() operation can be called with megabytes of data, and TCP will split it into appropriately sized segments. The actual limit depends on available memory and system buffer sizes, not on the TCP protocol itself.
Elaboration:
No Protocol-Level Limit:
send() accepts an arbitrary amount of data:

```c
send(socket_fd, buffer, 1000000, 0);    /* 1 MB   */
send(socket_fd, buffer, 100000000, 0);  /* 100 MB */
```

Both calls are valid and will work; TCP does not reject a send() based on size.

How TCP Handles Large Sends:
The application calls:

```c
send(sock, large_buffer, 1000000, 0);
```

TCP then:
1. Accepts the 1 MB request
2. Splits it into MSS-sized segments
   - MSS (Maximum Segment Size): typically 1460 bytes
   - 1,000,000 / 1460 ≈ 685 segments
3. Transmits segments as the network allows:
   - respects the congestion window
   - respects the receiver's advertised window
   - paces packets according to congestion control
4. Returns to the application once the data is queued in the TCP send buffer

The application does not wait for all 685 segments to be delivered; it only waits for buffer space.

Practical Limits:
Limit 1: TCP send buffer size
- Default: 64 KB to a few MB, OS dependent
- Can be increased with setsockopt()
- send() copies data into this kernel buffer, so a single call cannot queue more than the buffer holds

Limit 2: Available memory
- The system has finite RAM and cannot allocate unbounded buffers
- A very large send() may fail with ENOMEM

Limit 3: Receiver's advertised window
- The receiver tells the sender how much buffer space it has
- The sender will not transmit beyond this, but send() does not fail; transmission simply waits for the receiver to read

None of these limits comes from the TCP protocol itself.

Why No Protocol Limit?
The IP header's 16-bit Total Length field caps each IP packet at 65,535 bytes, and the TCP payload per packet is further limited by the link MTU:

```
Ethernet MTU:  1500 bytes
IP header:       20 bytes
TCP header:      20 bytes
TCP payload:   1500 - 20 - 20 = 1460 bytes (MSS)
```

So each packet carries about 1460 bytes of user data, but a send() call is not limited to one packet: TCP spreads the data across as many segments as needed. Therefore there is no limit on the size of a send() call, only on the size of each individual segment (the MSS).

Example:
```c
/* Send 10 MB of data (heap-allocated; 10 MB on the stack would overflow it) */
char *buffer = malloc(10 * 1024 * 1024);
/* ... fill buffer ... */
int bytes_sent = send(sock, buffer, 10 * 1024 * 1024, 0);
```

What happens:
1. TCP accepts the request
2. Buffers as much as fits in the send buffer
3. Returns with the number of bytes buffered
4. The application calls send() again for any remaining data
5. TCP segments the stream into roughly 7,200 segments of up to 1460 bytes each
6. Segments are transmitted subject to flow and congestion control

The total transfer time depends on the network, not on the send() call.

send() Return Value:
send() returns the number of bytes BUFFERED, not the number delivered.

Example:

```c
int n = send(sock, big_buffer, 1000000, 0);
/* n might be 65536: only 65536 bytes were queued in the TCP
   send buffer; the remaining 934464 bytes were not accepted
   and must be submitted again. */
```

So send() may accept fewer bytes than requested (particularly on non-blocking sockets, or when interrupted), and the application must loop:

```c
int total_sent = 0;
while (total_sent < data_size) {
    int n = send(sock, ptr + total_sent, data_size - total_sent, 0);
    if (n < 0)
        error();
    total_sent += n;
}
```

Conclusion:
TCP has no protocol-level limit on send() data size. The practical limits are the TCP send buffer (typically 64 KB-2 MB) and available system memory. TCP internally fragments large sends into segments of MSS size (~1460 bytes) and transmits them according to congestion control. Applications may need to call send() multiple times for very large data, as send() returns only the amount buffered, not the amount requested.
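The looping pattern above can be exercised end-to-end on loopback. Note that on a blocking socket a single send() often accepts the whole buffer in one call, so the loop is the robust pattern rather than a guaranteed multi-iteration one (names here are illustrative):

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
addr = srv.getsockname()

payload = b"x" * (1 << 20)  # 1 MiB, larger than a typical send buffer

received = bytearray()
def reader():
    conn, _ = srv.accept()
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        received.extend(chunk)
    conn.close()

t = threading.Thread(target=reader)
t.start()

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(addr)

# The send() loop from the text: keep calling send() until every
# byte has been handed to the kernel (send() may accept only part).
total = 0
while total < len(payload):
    n = c.send(payload[total:])
    total += n
c.close()
t.join()
srv.close()
print(total, len(received))
```

Python also provides `sock.sendall()`, which performs exactly this loop internally.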
-
(d) Assume that you create a UDP socket. Can you send DNS queries to two different DNS servers using this socket? Justify your answer. Assume now that you create a TCP socket. Can you send queries to two different DNS servers using this socket? Justify your answer.
Solution
In short: YES for UDP—a single UDP socket can send datagrams to multiple different servers by calling sendto() with different destination addresses. NO for TCP—a TCP socket connects to exactly one server, and must be closed and recreated to connect to a different server.
Elaboration:
UDP Socket with Multiple Servers:
UDP is connectionless: each sendto() specifies the destination.

```c
int socket_fd = socket(AF_INET, SOCK_DGRAM, 0);

/* Query Server 1 */
struct sockaddr_in server1;
server1.sin_family = AF_INET;
inet_pton(AF_INET, "8.8.8.8", &server1.sin_addr);
server1.sin_port = htons(53);
sendto(socket_fd, query, query_len, 0,
       (struct sockaddr *)&server1, sizeof(server1));

/* Query Server 2 -- same socket, different address */
struct sockaddr_in server2;
server2.sin_family = AF_INET;
inet_pton(AF_INET, "1.1.1.1", &server2.sin_addr);
server2.sin_port = htons(53);
sendto(socket_fd, query, query_len, 0,
       (struct sockaddr *)&server2, sizeof(server2));
```

Both sends work on the same socket.

Why UDP Allows This:
UDP characteristics:
- No connection state
- Each datagram is independent
- The destination is specified per send
- The socket is just an endpoint

So one socket can:
1. send to any address
2. receive from any address
3. change addresses with every packet
4. do all this with no setup or teardown

Example flow, all on the same socket:

```
Client → Server1: Query A
Client ← Server1: Response A
Client → Server2: Query B
Client ← Server2: Response B
```

DNS Example with UDP:
```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

dns_servers = [
    ("8.8.8.8", 53),         # Google DNS
    ("1.1.1.1", 53),         # Cloudflare DNS
    ("208.67.222.123", 53),  # OpenDNS
]

query = build_dns_query("example.com")

for server, port in dns_servers:
    # Send to a different server each time, same socket
    sock.sendto(query, (server, port))

# Receive responses
for _ in dns_servers:
    data, addr = sock.recvfrom(512)
    print(f"Response from {addr[0]}")

sock.close()
```

TCP Socket with Multiple Servers:
TCP is connection-oriented: a socket connects to exactly ONE server.

```c
int socket_fd = socket(AF_INET, SOCK_STREAM, 0);

/* Connect to Server 1 */
struct sockaddr_in server1;
server1.sin_family = AF_INET;
inet_pton(AF_INET, "8.8.8.8", &server1.sin_addr);
server1.sin_port = htons(53);
connect(socket_fd, (struct sockaddr *)&server1, sizeof(server1));
/* Now connected to 8.8.8.8 */

send(socket_fd, query, query_len, 0);
recv(socket_fd, response, response_len, 0);

/* To talk to Server 2: must close first! */
close(socket_fd);

/* Create a NEW socket */
socket_fd = socket(AF_INET, SOCK_STREAM, 0);

/* Connect to Server 2 */
struct sockaddr_in server2;
server2.sin_family = AF_INET;
inet_pton(AF_INET, "1.1.1.1", &server2.sin_addr);
server2.sin_port = htons(53);
connect(socket_fd, (struct sockaddr *)&server2, sizeof(server2));
/* Now connected to 1.1.1.1 */

send(socket_fd, query, query_len, 0);
recv(socket_fd, response, response_len, 0);
close(socket_fd);
```

Why TCP Can't:
A TCP connection is identified by a 4-tuple:

```
(Source IP, Source Port, Dest IP, Dest Port)
```

Once connected:
- the destination is fixed
- it cannot be changed mid-connection
- you use send(), not sendto()

To reach a different server you must close the connection, create a new socket, and run a new 3-way handshake. Multiple servers require multiple sockets.

TCP Example (Multiple Sockets):
```python
import socket

dns_servers = [
    ("8.8.8.8", 53),
    ("1.1.1.1", 53),
]

for server, port in dns_servers:
    # Create a NEW socket for each server
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Connect to this server
    sock.connect((server, port))

    # Send query
    query = build_dns_query("example.com")
    sock.send(query)

    # Receive response
    response = sock.recv(512)
    print(f"Response from {server}")

    # Close this connection; the socket cannot be
    # reused for a different server!
    sock.close()
```

Why DNS Uses UDP:
Part of the answer: DNS is a simple request/response protocol.

With UDP:
- one socket serves multiple servers
- no per-server setup overhead
- stateless

If DNS used TCP:
- it would need a socket per server
- each would require a 3-way handshake
- the setup overhead is overkill for a single query/response

This is why DNS uses UDP despite its unreliability: loss is handled with retries.

Conclusion:
UDP socket CAN send queries to multiple DNS servers using a single socket. The sendto() call specifies the destination each time, and the socket remains connectionless. TCP socket CANNOT send to multiple servers with a single socket because TCP is connection-oriented—one socket = one connection = one server. To query multiple servers with TCP requires creating new sockets and establishing new connections for each server, which is inefficient and impractical for DNS.
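The UDP half of this answer can be demonstrated locally. This sketch uses two loopback UDP sockets standing in for the two DNS servers (ports and payloads are illustrative):

```python
import socket

# Two "DNS servers": UDP sockets bound to different loopback ports
server1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server1.bind(("127.0.0.1", 0))
server1.settimeout(5.0)
server2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server2.bind(("127.0.0.1", 0))
server2.settimeout(5.0)

# ONE client socket sends to both: the destination is per-datagram
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"query", server1.getsockname())
client.sendto(b"query", server2.getsockname())

got1 = server1.recvfrom(64)[0]
got2 = server2.recvfrom(64)[0]
print(got1, got2)

for s in (client, server1, server2):
    s.close()
```

Trying the same thing with one SOCK_STREAM socket would fail: after the first connect(), a second connect() to a different address on the same socket returns an error.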
-
(e) What’s the maximum size of user data that can be sent over UDP? Justify your answer.
Solution
In short: UDP has a practical maximum of approximately 65,507 bytes per datagram. This limit comes from the 16-bit length field in the IP header (65,535 bytes total) minus the IP header (20 bytes) and UDP header (8 bytes). Larger messages must be fragmented at the application layer.
Elaboration:
UDP Datagram Size Limit:
UDP is datagram-oriented: each send() produces one complete datagram.

Maximum datagram payload:

```
= IP packet size - IP header - UDP header
= 65,535 - 20 - 8
= 65,507 bytes of user data
```

Why This Limit?
IPv4 header layout:

```
[Version: 4 bits][Header Len: 4 bits][Type of Service: 8 bits]
[Total Length: 16 bits]                    ← THIS FIELD
[Identification: 16 bits][Flags: 3 bits][Fragment Offset: 13 bits]
[TTL: 8 bits][Protocol: 8 bits][Checksum: 16 bits]
[Source IP: 32 bits]
[Destination IP: 32 bits]
```

The Total Length field is 16 bits, so its maximum value is 2^16 - 1 = 65,535, and it covers the entire IP packet (header + data).

Calculation:

```
IP Total Length      = 65,535 bytes
IP header (minimum)  =     20 bytes
IP payload           = 65,535 - 20 = 65,515 bytes
UDP header           =      8 bytes
User data            = 65,515 - 8  = 65,507 bytes
```

Practical Limit: Path MTU
The theoretical maximum is 65,507 bytes, but links have an MTU (Maximum Transmission Unit).

Typical MTU values:
- Ethernet: 1,500 bytes (most common)
- WiFi: 1,500 bytes
- PPP: 576 bytes
- Loopback: 65,535 bytes

If a UDP datagram exceeds the MTU:
- IP fragmentation occurs: one datagram becomes multiple IP fragments
- each fragment travels separately, and the receiver reassembles them

The problem with fragmentation:
- if one fragment is lost, the entire datagram is lost
- there is no per-fragment retransmission
- the receiver's reassembly timer eventually discards the partial datagram

Example with Fragmentation:
Sending a 3000-byte UDP datagram over Ethernet (MTU 1500): the IP payload is 8 (UDP header) + 3000 = 3008 bytes, split into fragments carrying at most 1480 bytes each:

```
Fragment 1: [IP header: 20][payload: 1480 bytes][offset: 0]
Fragment 2: [IP header: 20][payload: 1480 bytes][offset: 1480]
Fragment 3: [IP header: 20][payload:   48 bytes][offset: 2960]
```

On the network:
- Fragment 1 → arrives
- Fragment 2 → lost
- Fragment 3 → arrives

The receiver holds the first and last pieces, is missing the middle, and when its reassembly timer expires it discards everything. The whole datagram is lost, the application receives nothing, and UDP reports no error.

Why Not Just Fragment at the Transport Layer?
UDP does not fragment data for you: it sends what you give it, or fails. If the data exceeds 65,507 bytes, sendto() returns an error (EMSGSIZE on most systems).

To move larger messages over UDP, the application must:
1. split the data into smaller messages
2. add sequence numbers
3. reassemble on the receiver
4. detect loss

This is why TCP is preferred for large data: it handles segmentation transparently.

Best Practices:
Safe UDP datagram sizes:
- IPv4 absolute maximum: 65,507 bytes
- Over typical networks (MTU 1500): limit payloads to 1,472 bytes to avoid fragmentation (1500 - 20 IP - 8 UDP = 1472)
- Classic conservative choice: 512 bytes

DNS example:
- Queries: ~50-200 bytes (always safe)
- Responses: ~200-512 bytes (usually safe)
- TCP fallback for responses larger than 512 bytes

Checking Datagram Size:
```c
/* Try to send an oversized datagram */
char data[70000];
int n = sendto(sock, data, sizeof(data), 0,
               (struct sockaddr *)&addr, addr_len);
```

Possible outcomes:
1. Fails: returns -1 with errno = EMSGSIZE ("message too long"). This is the usual case here, since 70,000 bytes exceeds the IPv4 limit.
2. For datagrams that fit the protocol limit but exceed the MTU, the send may succeed and IP will fragment. Risky: any lost fragment loses the whole datagram.

On Linux, path MTU discovery (e.g. the IP_MTU_DISCOVER socket option) can help choose a size that avoids fragmentation.

Conclusion:
UDP maximum datagram size is 65,507 bytes of user data, determined by the 16-bit length field in the IP header (65,535 bytes total minus 20-byte IP header and 8-byte UDP header). However, practical limits are much smaller due to network MTU (typically 1,500 bytes). Datagrams larger than MTU are fragmented by IP, and loss of any fragment causes loss of the entire datagram. Applications should limit UDP messages to avoid fragmentation, typically to 512 bytes or the estimated path MTU minus headers.
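The 65,507-byte limit can be checked from Python. This sketch verifies the arithmetic and then shows the kernel rejecting a one-byte-too-large datagram (behavior as on a typical Linux stack; some systems reject even smaller datagrams, as noted above, and port 9 is just an arbitrary unused destination):

```python
import socket

# The 16-bit IP Total Length field caps the whole IPv4 packet at
# 65,535 bytes; subtract the 20-byte IP header and 8-byte UDP header.
MAX_IP_PACKET = 65535
IPV4_HEADER = 20
UDP_HEADER = 8
MAX_UDP_PAYLOAD = MAX_IP_PACKET - IPV4_HEADER - UDP_HEADER
print(MAX_UDP_PAYLOAD)  # 65507

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # One byte over the limit: the kernel rejects the datagram
    # outright ("Message too long") rather than fragmenting it.
    sock.sendto(b"x" * (MAX_UDP_PAYLOAD + 1), ("127.0.0.1", 9))
    oversize_ok = True
except OSError:
    oversize_ok = False
sock.close()
print(oversize_ok)  # False
```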
-
(f) Consider two hosts A and B attached to the same link (with no intervening router). An application needs to send 1000 messages from A to B over UDP. A programmer implements this operation as follows: they write a for loop and simply dump 1000 packets onto the network as fast as possible. The receiver application reports that at least 20% of the messages have not arrived. Describe what might be happening here.
Solution
In short: The sender is flooding the network faster than the receiver can process packets, causing the receiver’s buffer to overflow and packets to be dropped. Additionally, the sender’s buffer may overflow, the network interface may have limits, and the receiver’s kernel may not keep up with processing the incoming packet stream.
Elaboration:
Root Cause: Receiver Buffer Overflow:
Sender behavior:

```c
for (int i = 0; i < 1000; i++) {
    sendto(sock, data, len, 0, (struct sockaddr *)&addr, addr_len);
}
```

This sends 1000 packets IMMEDIATELY:
- no delay between packets
- as fast as the socket allows
- no waiting for any response from the receiver

On the receiver side, each packet:
- arrives at the NIC (network interface card)
- is buffered in the kernel's receive buffer
- waits for the application to read it at its own pace

The problem: if packets arrive faster than the application reads them, the kernel buffer fills up and newly arriving packets are dropped. Because this is UDP, the application gets no error report.

Detailed Packet Flow:
```
T = 0:        sender transmits packets 1-1000 back to back
              (microseconds apart)
T ≈ 0-10 ms:  all 1000 packets arrive at the receiver's NIC
              (on a 1 Gbps link)
```

Receiver kernel:
- buffer size: typically 128-256 KB
- each UDP packet: ~100-1000 bytes
- so the buffer holds on the order of 128-256 packets

What happens:
- roughly the first 128 packets are buffered in the kernel
- once the buffer is full, every further arrival is DROPPED

Meanwhile the receiver application reads at only 10 packets/second, so it drains essentially nothing during the 10 ms burst. It eventually receives only the ~128 buffered packets: roughly 128/1000 ≈ 13% delivered, ~87% dropped.

Issue 1: Kernel Receive Buffer Size
Linux socket receive buffers:
- default: on the order of 128 KB (varies by system and sysctl settings)
- can be raised substantially with setsockopt()
- with ~1 KB datagrams, the default holds roughly 128 datagrams before overflowing

If 1000 packets arrive in 10 ms (100 datagrams/ms) and the kernel can only drain at the application's read rate, everything beyond the first bufferful is lost.

Issue 2: Receiver Application Processing Delay
Application loop:

```c
while (1) {
    recvfrom(sock, buf, size, 0, (struct sockaddr *)&addr, &addr_len);
    /* process message: print, database write, ...
       perhaps 1-10 ms per packet */
}
```

If processing takes 1 ms per packet, the receiver handles 1,000 packets/second. The sender pushes 1000 packets in 10 ms, i.e. 100,000 packets/second: two orders of magnitude faster, so nearly everything beyond the buffered backlog is dropped. Even with zero processing time, the kernel buffer limit alone causes heavy loss.

Issue 3: NIC Hardware Buffering
Before a packet even reaches the kernel:
- it lands in the NIC's own small buffer
- if the NIC cannot hand packets off to the kernel fast enough, it drops them itself

So drops can happen at the NIC first, and then again at the kernel socket buffer.

Issue 4: Interrupt Handling
Each arriving packet normally triggers an interrupt: the NIC signals "packet arrived", the kernel handles the interrupt, copies the packet into the buffer, and possibly wakes the application. If packets arrive too fast:
- interrupts are coalesced into groups
- the kernel may not keep up
- packets arriving while interrupts are being serviced may be dropped by the NIC

Issue 5: Lack of Flow Control
UDP is "fire and forget":

```c
while (packets_to_send--) {
    sendto(...);
}
```

There is no feedback to the sender:
- the sender does not know the receiver is struggling
- the sender never slows down
- there is no congestion control and no flow control

TCP, by contrast, would wait for ACKs, honor the receiver's advertised window, slow down automatically, and retransmit anything lost, so all data would arrive.

Demonstration with Numbers:
Scenario: 1000 messages of 100 bytes each, a sender loop with no delays, and a receiver application processing 10 packets/second.

```
T = 0 ms:    sender starts; all 1000 packets are handed to the
             UDP socket and begin arriving at the receiver
T = 10 ms:   all 1000 packets have arrived at the receiver's NIC;
             the kernel has buffered roughly 128 of them;
             the other ~870 are DROPPED
afterwards:  the application reads at 10 packets/second and
             eventually drains only the ~128 buffered packets
```

Success rate ≈ 128/1000 ≈ 13%, i.e. ~87% loss. This is consistent with (in fact worse than) the reported "at least 20% did not arrive".

Solutions:
1. Increase the receive buffer: `setsockopt(sock, SOL_SOCKET, SO_RCVBUF, ...)`. Helps, but is not a complete solution.
2. Add delays in the sender so the receiver can keep up:

```c
for (int i = 0; i < 1000; i++) {
    sendto(...);
    usleep(100);  /* 100 microsecond pause */
}
```

3. Rate-limit the sender to match the receiver's processing capacity.
4. Use TCP instead: automatic flow control and guaranteed delivery, so nothing is silently dropped.
5. Implement application-level ACKs: the receiver acknowledges each packet and the sender waits for the ACK before sending the next (but at that point you are reimplementing TCP).

Why This Matters:
UDP is "best effort" delivery:
- no guarantee that packets arrive
- no feedback to the sender
- if the network or receiver is overwhelmed, packets are silently lost

Developers must understand that you cannot just "dump" data and expect it to arrive: match the sender's rate to the receiver's, enlarge buffers, add flow control, or switch to TCP.

Conclusion:
The 20% packet loss is likely caused by receiver buffer overflow. When the sender floods 1000 packets in milliseconds and the receiver application processes them slowly (or with delays), the kernel’s receive buffer (typically 128 KB ≈ 128 packets) fills up and subsequent packets are silently dropped by the kernel or NIC. UDP provides no flow control or feedback, so the sender is unaware and continues sending. Solutions include increasing buffer size, adding delays in the sender, or using TCP for reliable delivery with automatic flow control.
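Solution 2 (pacing the sender) can be sketched on loopback. With only 50 small datagrams the kernel buffer would cope even without the pause, so the sleep here illustrates the pattern rather than rescuing this particular run:

```python
import socket
import time

# Receiver: a UDP socket bound to an ephemeral loopback port
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(1.0)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

N = 50
for i in range(N):
    send_sock.sendto(str(i).encode(), addr)
    time.sleep(0.001)  # pace the sender: ~1 ms between datagrams

# Drain whatever actually arrived
received = 0
try:
    for _ in range(N):
        recv_sock.recvfrom(64)
        received += 1
except socket.timeout:
    pass

print(f"{received} of {N} datagrams received")
send_sock.close()
recv_sock.close()
```

In the assignment's scenario, the pause would be tuned to the receiver's measured processing rate, or replaced with application-level ACKs as described above.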
-
(g) Is it possible to implement a multicasting application over TCP? Why or why not?
Solution
In short: NO. TCP cannot be used for multicasting because TCP is a point-to-point, connection-oriented protocol that establishes connections between exactly one sender and one receiver. Multicasting requires one sender to reach multiple receivers simultaneously, which TCP’s architecture fundamentally does not support.
Elaboration:
TCP’s Point-to-Point Nature:
A TCP connection is one sender ↔ one receiver, identified by a 5-tuple:

```
(Source IP, Source Port, Dest IP, Dest Port, Protocol)
```

Example: (192.168.1.1, 5000, 10.0.0.2, 80, TCP) connects to ONE specific destination, 10.0.0.2. It cannot address multiple destinations.

What Multicasting Is:
Multicast model: one sender → many receivers.

Example: a video stream from one server to 1000 clients. The server sends once; the network duplicates packets as needed; all 1000 clients receive the same stream. It is like a TV broadcast versus a phone call.

Why TCP Can't Do This:
Reason 1: Connection Semantics
TCP requires:
1. a three-way handshake (SYN, SYN-ACK, ACK)
2. a connection with exactly ONE peer
3. data sent through that connection
4. ACKs from that ONE peer

In a "multicast" scenario, the server would connect to Receiver 1, Receiver 2, ..., Receiver 1000. The result is 1000 separate TCP connections: 1000 handshakes of overhead, 1000 windows to manage, 1000 sets of retransmissions. That is not multicasting anymore; it is unicasting 1000 times, which defeats the purpose.

Reason 2: Unicast vs Multicast Addresses
Unicast addresses (what TCP uses):
- 192.168.1.1 (a specific host)
- 10.0.0.2 (a specific host)
- each address uniquely identifies one device

Multicast addresses (usable with UDP):
- 224.0.0.0 to 239.255.255.255 (Class D)
- 224.0.0.1 (all hosts on the subnet)
- 239.255.255.255 (site-local scope)
- one address represents a group of hosts

TCP requires unicast addressing; it cannot connect to a multicast group.

Reason 3: ACK and Ordering Requirements
TCP guarantees in-order, reliable delivery, acknowledging every byte. Now imagine a server "multicasting" to 1000 clients: clients 1-999 ACK immediately, but client 1000 lost packets to congestion. What should happen?
- Retransmit just for client 1000? With a true one-to-many send, the retransmission would reach all 1000 clients, 999 of which already have the data.
- Track and retransmit per client? That requires per-client state, which is exactly a separate connection per client.

Reliability-with-ACKs and one-to-many delivery simply do not compose; this violates multicast semantics.

Attempted Workarounds (All Bad):
Workaround 1: Multiple TCP connections. The server opens a TCP connection to each recipient.
Problems:
- 1000 recipients = 1000 connections and at least 3000 handshake packets
- each connection carries its own overhead, exhausting server resources
- it is not true multicast, just unicast 1000 times

Workaround 2: Central relay. The server sends to a relay via TCP; the relay broadcasts to all via UDP.
Problems:
- the relay becomes a bottleneck and adds latency
- the actual one-to-many step still uses UDP
- this defeats the purpose of using TCP at all

Workaround 3: Application-layer multicast. The application duplicates data in software.
Problems:
- you end up reinventing multicast, inefficiently and with added complexity
- the network gives no assistance (no multicast support is used)
- packet duplication happens at the application layer

Why UDP is Used for Multicast:
UDP characteristics enable multicast:
- no connection state; stateless
- no ACKs to coordinate
- no per-destination flow control
- the sender transmits one packet and lets the network duplicate it

With multicast addresses:
- receivers join a multicast group (the kernel handles group membership)
- they then receive all packets addressed to that group
- the sender sends once; the network replicates as needed

Example: `sendto(sock, data, len, 0, &multicast_addr, ...)` with destination 239.255.255.1. The network sees one packet addressed to the group and delivers a copy to every subscriber: one transmission, many receivers.

Multicast Example (UDP):
```python
import socket
import struct

# Multicast group address
MCAST_GRP = '239.255.255.1'
MCAST_PORT = 5007

# Sender
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(b'Hello Multicast', (MCAST_GRP, MCAST_PORT))

# Receivers (any number of them)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', MCAST_PORT))
group_bin = socket.inet_aton(MCAST_GRP)
mreq = struct.pack('4sL', group_bin, socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, addr = sock.recvfrom(1024)
# All receivers get the same packet
```

When You Might Wrongly Think TCP:
Scenario: you need to send the same data to multiple clients.

Common (wrong) solution: open a TCP connection to each client and send to all. With 1000 clients that means 1000 connections, the server transmits 1000 times, and the network carries 1000 copies. Nothing is network-assisted, so this is not multicast.

Correct multicast approach: use a UDP multicast address, have clients join the group, send once, and let the network duplicate. Much more efficient.

Conclusion:
NO. TCP cannot implement multicasting because it is strictly point-to-point—a TCP connection exists between exactly one sender and exactly one receiver. Multicast requires one sender to reach many receivers, which would require either multiple TCP connections (defeating multicast efficiency) or reinventing the protocol. UDP, being connectionless and stateless, is the correct choice for multicast applications, allowing the network to efficiently replicate packets to all group members.
-
(h) In your DNS client project you used blocking UDP sockets and assumed that a reply from the server always comes back. We know that UDP is not reliable, and packets sent over UDP might get lost. Describe how you would have changed your code to implement the following: Your client sends the request and waits for a reply from the server for 5 seconds. If a reply arrives within 5 seconds, you print it on the screen. If no reply arrives for 5 seconds, your client wakes up and prints an error message.
Solution
In short: Set a socket timeout using setsockopt() with SO_RCVTIMEO (receive timeout), or use select()/poll() to monitor the socket with a timeout. When recvfrom() returns EAGAIN/EWOULDBLOCK (timeout), print an error. Alternatively, use non-blocking sockets with select() to monitor multiple sockets.
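In Python, `socket.settimeout()` provides the same receive-timeout behavior. A small loopback sketch (the shortened 0.5 s timeout and unused port 9 are illustrative stand-ins for the 5-second DNS case):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# settimeout() makes blocking calls raise socket.timeout after the
# given interval; 0.5 s here so the demo finishes quickly
# (the assignment uses 5 s).
sock.settimeout(0.5)

# Send a "query" to a loopback port with no listener, so no reply
# will ever come back.
sock.sendto(b"query", ("127.0.0.1", 9))

timed_out = False
try:
    data, addr = sock.recvfrom(512)
    print("Received:", data)
except socket.timeout:
    timed_out = True
    print("Error: no reply from server within the timeout")
finally:
    sock.close()
```

The C approaches that follow implement the same logic at the system-call level.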
Elaboration:
Approach 1: Socket Timeout (Simplest)
```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main() {
    int sock;
    struct sockaddr_in server_addr, from_addr;
    struct timeval timeout;
    char buffer[512];
    int n;

    /* Create UDP socket */
    sock = socket(AF_INET, SOCK_DGRAM, 0);

    /* Set a 5-second receive timeout */
    timeout.tv_sec = 5;   /* 5 seconds      */
    timeout.tv_usec = 0;  /* 0 microseconds */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
               (const char *)&timeout, sizeof(timeout));

    /* Send DNS query (query/query_len as built in your project) */
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server_addr.sin_addr);
    sendto(sock, query, query_len, 0,
           (struct sockaddr *)&server_addr, sizeof(server_addr));

    /* Try to receive; times out after 5 seconds */
    socklen_t addr_len = sizeof(from_addr);
    n = recvfrom(sock, buffer, sizeof(buffer), 0,
                 (struct sockaddr *)&from_addr, &addr_len);

    if (n < 0) {
        /* Check the error type */
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            printf("Error: No reply from server after 5 seconds\n");
        } else {
            perror("recvfrom");
        }
    } else {
        printf("Received response (%d bytes)\n", n);
    }

    close(sock);
    return 0;
}
```

How Socket Timeout Works:
```
Without timeout:
recvfrom() call
  ↓
Blocks indefinitely waiting for packet
  ↓
Packet arrives or never arrives

With SO_RCVTIMEO:
recvfrom() call
  ↓
Waits for up to 5 seconds
  ↓
Packet arrives: return data (before timeout)
  ↓
OR 5 seconds elapse: return -1, errno = EAGAIN
```
Approach 2: Using select() (More Control)
```c
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int sock;
    struct sockaddr_in server_addr;
    struct timeval timeout;
    fd_set readfds;
    char buffer[512];
    int n;

    /* Create socket */
    sock = socket(AF_INET, SOCK_DGRAM, 0);

    /* Send query (query/query_len are built elsewhere) */
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(53);
    inet_pton(AF_INET, "8.8.8.8", &server_addr.sin_addr);
    sendto(sock, query, query_len, 0,
           (struct sockaddr*)&server_addr, sizeof(server_addr));

    /* Set up select() timeout */
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    /* Monitor socket for readability */
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    /* Wait for socket to be readable or timeout */
    int activity = select(sock + 1, &readfds, NULL, NULL, &timeout);

    if (activity < 0) {
        perror("select");
    } else if (activity == 0) {
        /* Timeout occurred */
        printf("Error: No reply from server after 5 seconds\n");
    } else if (FD_ISSET(sock, &readfds)) {
        /* Socket is readable */
        struct sockaddr_in client_addr;
        socklen_t addr_len = sizeof(client_addr);
        n = recvfrom(sock, buffer, sizeof(buffer), 0,
                     (struct sockaddr*)&client_addr, &addr_len);
        printf("Received %d bytes of response\n", n);
    }

    close(sock);
    return 0;
}
```
How select() Works:
```
select(nfds, readfds, writefds, exceptfds, timeout)

Returns:
- Positive: number of file descriptors ready
- 0: timeout occurred, no descriptors ready
- -1: error

Usage:
FD_ZERO(&readfds);        // Clear set
FD_SET(sock, &readfds);   // Add socket to set
timeout.tv_sec = 5;       // 5 seconds
select(sock + 1, &readfds, NULL, NULL, &timeout);
// timeout elapsed -> returns 0
// data ready      -> returns 1
```
Approach 3: poll() (Modern Alternative)
```c
#include <poll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int sock;
    struct pollfd fds[1];
    int poll_timeout = 5000;   /* milliseconds */
    int poll_result;

    sock = socket(AF_INET, SOCK_DGRAM, 0);

    /* Send query */
    /* ... */

    /* Set up poll */
    fds[0].fd = sock;
    fds[0].events = POLLIN;    /* Interested in readable events */

    /* Wait for 5 seconds (5000 milliseconds) */
    poll_result = poll(fds, 1, poll_timeout);

    if (poll_result < 0) {
        perror("poll");
    } else if (poll_result == 0) {
        /* Timeout */
        printf("Error: No reply from server after 5 seconds\n");
    } else if (fds[0].revents & POLLIN) {
        /* Socket readable */
        char buffer[512];
        struct sockaddr_in client_addr;
        socklen_t addr_len = sizeof(client_addr);
        int n = recvfrom(sock, buffer, sizeof(buffer), 0,
                         (struct sockaddr*)&client_addr, &addr_len);
        printf("Received %d bytes of response\n", n);
    }

    close(sock);
    return 0;
}
```
Comparison of Approaches:
| Approach    | Simplicity  | Control | Use Case                                     |
|-------------|-------------|---------|----------------------------------------------|
| SO_RCVTIMEO | Very simple | Limited | Single socket, simple timeout                |
| select()    | Medium      | Good    | Multiple sockets, complex logic              |
| poll()      | Medium      | Good    | Like select(), but no fixed FD_SETSIZE limit |

With Retry Logic:
```c
/* Try up to 3 times with a 5-second timeout each */
int max_retries = 3;
int retry_count = 0;

while (retry_count < max_retries) {
    /* Set timeout */
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout, sizeof(timeout));

    /* Send query */
    sendto(sock, query, query_len, 0,
           (struct sockaddr*)&server_addr, sizeof(server_addr));

    /* Try to receive */
    struct sockaddr_in client_addr;
    socklen_t addr_len = sizeof(client_addr);
    int n = recvfrom(sock, buffer, sizeof(buffer), 0,
                     (struct sockaddr*)&client_addr, &addr_len);

    if (n > 0) {
        /* Success */
        printf("Received %d bytes of response\n", n);
        break;
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* Timeout */
        retry_count++;
        if (retry_count < max_retries) {
            printf("Timeout, retrying (%d/%d)...\n",
                   retry_count, max_retries);
        }
    } else {
        perror("recvfrom");
        break;
    }
}

if (retry_count >= max_retries) {
    printf("Error: No reply from server after %d retries\n",
           max_retries);
}
```
Python Implementation:
```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Set timeout to 5 seconds
sock.settimeout(5.0)

server_addr = ("8.8.8.8", 53)

try:
    # Send DNS query
    sock.sendto(query, server_addr)

    # Receive with timeout
    data, addr = sock.recvfrom(512)
    print(f"Received response: {data}")
except socket.timeout:
    print("Error: No reply from server after 5 seconds")
except Exception as e:
    print(f"Error: {e}")
finally:
    sock.close()
```
What Changes from Original Code:
```
Original (blocking, no timeout):
    recvfrom(sock, buffer, sizeof(buffer), 0, ...);
    // Blocks forever waiting for packet

Modified (with 5-second timeout):
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
    int n = recvfrom(sock, buffer, sizeof(buffer), 0, ...);
    if (n < 0 && errno == EAGAIN) {
        // 5 seconds passed, no response
    }
```
Key Points:
1. SO_RCVTIMEO: sets a receive timeout on the socket
2. select()/poll(): more flexible, can monitor multiple sockets
3. Returns immediately if data arrives before the timeout
4. Raises an error/timeout if 5 seconds pass without data
5. The application detects the timeout and handles it
6. Typical pattern: retry with a different server on timeout

Conclusion:
To implement a 5-second timeout for DNS queries, use setsockopt() with SO_RCVTIMEO to set the receive timeout, then check for EAGAIN/EWOULDBLOCK errors when recvfrom() returns. Alternatively, use select() or poll() for more control. Python’s socket.settimeout() provides similar functionality. This allows detecting when the server doesn’t respond and either retrying or printing an error message, making the client robust to packet loss inherent in UDP.
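As a self-contained check of this behaviour, the sketch below (ours, not the project code) points a UDP client with a receive timeout at a local socket that never replies, so `recvfrom()` must time out; the timeout is shortened from 5 s to 1 s only to keep the demo quick.

```python
import socket

# Silent "server": a bound socket we never read from or reply on
silent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
silent.bind(('127.0.0.1', 0))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                 # 5.0 in the real DNS client
client.sendto(b'query', silent.getsockname())

timed_out = False
try:
    data, addr = client.recvfrom(512)  # blocks for at most 1 second
except socket.timeout:
    timed_out = True
    print("Error: No reply from server")

client.close()
silent.close()
```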
-
(i) One way to make a Web server unavailable is to send it a lot of TCP SYN packets with an invalid source IP address, called SYN flooding. Describe why this crashes the Web server?
Solution
In short: SYN flooding exploits TCP’s 3-way handshake. The attacker sends many SYN packets with spoofed source IPs. The server responds with SYN-ACK but the attacker never completes the handshake. The server keeps half-open connections in memory, exhausting the connection queue and preventing legitimate users from connecting.
Elaboration:
Normal TCP Connection (3-Way Handshake):
```
Client (legitimate)                  Server
  |                                    |
  | ------------ SYN ----------------> |
  |              seq=x                 |
  |                                    |
  | <---------- SYN-ACK -------------- |
  |              seq=y, ack=x+1        |
  |                                    |
  | ------------ ACK ----------------> |
  |              seq=x+1, ack=y+1      |
  |                                    |
  | <--- Connection Established        |

Time: ~1 RTT
Server state: ESTABLISHED
Connection added to accept queue
Application can read from connection
```
SYN Flooding Attack:
```
Attacker sends SYN with a spoofed IP

Attacker                 Server
  |                        |
  | ------- SYN ---------> |
  |     (src: fake)        |
  |                        |
  | <----- SYN-ACK ------- |   (goes to the fake IP, never delivered)
  |
  Attacker never sends ACK!

Server state:
  SYN received, waiting for ACK
  Connection in HALF-OPEN state
  Memory allocated for the connection
  Waits for ACK or timeout
```
What Happens on Server:
```
Server's TCP/IP stack receives SYN from client IP X:
1. Creates connection state (TCB - Transmission Control Block)
2. Allocates memory for connection info
3. Sends SYN-ACK back to IP X
4. Moves to SYN-RECEIVED state
5. Waits for ACK to complete the handshake

No ACK arrives (source IP is fake):
- Connection sits in half-open state
- Waits for timeout (30-60 seconds typically)
- Memory still allocated
- Connection queue slot still occupied
```
Resource Exhaustion:
```
Attacker sends 1000 SYN packets/second,
each with a different spoofed source IP.

Server's half-open connection queue:
Typical limit: 128-256 half-open connections

Timeline:
T=0 ms:   Queue has 0 half-open connections
T=100 ms: Queue has ~100 half-open connections
T=150 ms: Queue FULL (128 or 256 limit reached)
T=200 ms: New legitimate client tries to connect
          Sends SYN
          Server receives it, but the queue is FULL
          Server drops the SYN, never sends SYN-ACK
          Legitimate client's handshake fails
          Legitimate client gives up
          Web server appears unavailable
```
Why Server Resources Exhaust:
```
Each half-open connection requires:
1. TCB (Transmission Control Block)
   - State variables
   - Sequence numbers
   - Window information
   - ~200-500 bytes per connection
2. A slot in the listen (backlog) queue
3. Kernel data structures

With a 1000 SYN/second attack:
Half-open connections accumulate:
  Arrival rate: 1000/sec
  Decay rate:   ~30/sec (timeouts)
  Queue growth: ~970/sec

Queue limit: 128-256
Queue becomes full within a fraction of a second
New connections rejected
Legitimate users cannot connect
```
Timeline of Attack:
```
T=0 seconds:
  Attacker starts flooding SYN packets
  Normal server state

T=0-1 second:
  Thousands of SYN packets arrive
  Half-open queue fills rapidly

T=1-2 seconds:
  Queue full
  New SYN packets dropped

T=2-10 seconds:
  Legitimate users try to connect
  Server drops their SYN packets
  No SYN-ACK responses
  Connection attempts time out after ~20-60 seconds
  Users see "Connection refused" or a timeout

T=30-60 seconds:
  First spoofed connections time out
  Half-open queue has space
  But the attack continues
  Queue refills immediately

Duration:
  As long as the attack continues,
  the server remains unavailable
```
Why It’s Effective:
```
Asymmetry of work:

Attacker sends:
- Simple SYN packets (very cheap)
- Spoofed IPs (no return traffic)
- Bandwidth: a few kbps

Server work:
- Receives SYN (stores state)
- Sends SYN-ACK (wastes bandwidth)
- Allocates memory per connection
- Waits for timeout (~30 seconds)
- CPU cycles for management

Attacker cost: minimal
Server cost: maximal

Amplification:
One SYN packet makes the server hold ~100 bytes of state
for ~30 seconds — about 3000 byte-seconds of server memory
per attacker packet.
```
Modern Defenses:
```
1. SYN Cookies
   Don't allocate a full TCB until the ACK is received.
   The server stores no state for half-open connections;
   the TCB is created only when the ACK completes the handshake.

2. Per-Source SYN Limit
   Limit half-open connections per source IP.
   Stops a single attacker, but a distributed attack
   (botnet) still works.

3. Rate Limiting
   Drop excessive SYN packets from the same source.

4. SYN Proxy / Firewall Filtering
   A firewall validates the handshake (or drops spoofed packets)
   before traffic reaches the server; filtering spoofed source
   addresses at the network edge requires ISP support.

5. Larger Queue
   Allows more half-open connections,
   but uses more memory and is eventually still exhausted.

6. Shorter Timeout
   Closes half-open connections faster,
   but hurts legitimate slow clients.
```
SYN Cookies Explanation:
```
Traditional:
  SYN received -> allocate TCB immediately
  Problem: many allocations -> memory exhaustion

SYN Cookies:
  SYN received -> don't allocate a TCB
  Send SYN-ACK with a "cookie" in the sequence number

  Cookie encodes:
  - Server port
  - Client port
  - Client IP
  - Timestamp

  Client ACK arrives:
  - Server decodes the cookie
  - Verifies it is a legitimate client (correct cookie)
  - Only then allocates a TCB

Result:
- Server can absorb many SYN packets
- Only allocates resources for completed handshakes
- Spoofed requests die naturally
```
Example Attack & Defense:
```
Attack:
  Attacker: 10,000 SYN/second (spoofed IPs)

Without SYN Cookies:
  Half-open limit: 128
  Server: FULL in ~0.01 seconds
  Legitimate users: BLOCKED

With SYN Cookies:
  Server receives 10,000 SYN/second
  Doesn't allocate memory
  Responds with SYN-ACK (stateless)
  Spoofed clients: never send ACK
  Legitimate clients: send ACK
  Server: only allocates for completed handshakes

Result: can handle 10,000+ SYN/second
```
Conclusion:
SYN flooding makes the web server unavailable (it need not literally crash) by exploiting the 3-way handshake. The attacker sends many SYN packets with spoofed source IPs. The server responds with SYN-ACK, but the attacker (or the spoofed client) never completes the handshake by sending ACK. The server keeps half-open connections in memory waiting for the ACK, eventually exhausting the connection queue. Legitimate users’ connection attempts are then dropped or delayed. Modern defenses like SYN cookies avoid allocating resources until the handshake is complete, making the server resistant to SYN flooding attacks.
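The SYN-cookie idea can be illustrated with a toy sketch. This is not the real kernel construction (Linux uses a different encoding); it just shows the stateless principle: the "cookie" sent as the SYN-ACK sequence number is a keyed hash of the connection 4-tuple plus a coarse timestamp, so the server can validate a returning ACK without having stored anything. All names below are ours.

```python
# Toy SYN-cookie sketch (NOT the real kernel algorithm).
import hmac, hashlib, struct

SECRET = b"server-local-secret"   # hypothetical per-boot secret key

def make_cookie(src_ip, src_port, dst_port, t_coarse):
    # Keyed hash over (client IP, client port, server port, time slot)
    msg = struct.pack("!4sHHI", src_ip, src_port, dst_port, t_coarse)
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    # Use 32 bits of the MAC as the "initial sequence number"
    return struct.unpack("!I", digest[:4])[0]

def check_cookie(cookie, src_ip, src_port, dst_port, t_coarse):
    # Accept cookies from the current or previous time slot
    return any(cookie == make_cookie(src_ip, src_port, dst_port, t)
               for t in (t_coarse, t_coarse - 1))

# A legitimate client echoes the cookie back (as ack - 1):
c = make_cookie(b"\x0a\x0a\x64\x01", 40000, 80, t_coarse=12345)
assert check_cookie(c, b"\x0a\x0a\x64\x01", 40000, 80, 12345)
# A spoofed/guessed cookie fails validation:
assert not check_cookie(c ^ 1, b"\x0a\x0a\x64\x01", 40000, 80, 12345)
```

Because validation is pure computation, the server allocates a TCB only for clients whose ACK carries a cookie it could itself have generated.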
-
(j) Consider an idle TCP connection, i.e., a connection where no data is flowing at the time. If one end of the connection crashes without issuing a close call, is it possible for the other end of the connection to be aware of this? Why or why not?
Solution
In short: NO, not automatically. If the crashed end doesn’t send a FIN or RST packet, the other end cannot detect the crash unless it tries to send data (which generates an RST on timeout) or uses TCP keep-alive packets to detect the broken connection. An idle connection has no mechanism to detect the remote crash.
Elaboration:
Why Idle Connections Can’t Detect Crashes:
```
TCP connection states:

Normal close:
┌──────────┐             ┌──────────┐
│  Host A  │ <-------->  │  Host B  │
│          │    CLOSE    │          │
│ FIN sent │             │ FIN recv │
└──────────┘             └──────────┘
Result: both sides agree the connection is closed

Crash (no close):
┌──────────┐             ┌──────────┐
│  Host A  │ <-------->  │  Host B  │
│ CRASHED  │   ???????   │   IDLE   │
│  No FIN  │             │ Doesn't  │
│          │             │  know!   │
└──────────┘             └──────────┘
Host B has no way to know A crashed
```
Why TCP Can’t Detect Idle Crashes:
```
Once a connection is established, TCP generates
no traffic on its own:
- No heartbeat or keep-alive by default
- Connection state is remembered, but never verified

Idle connection:
├─ Last data sent: 10 minutes ago
├─ Last data received: 10 minutes ago
├─ No activity since then
├─ No way to know if the other end is alive
└─ No way to know if the other end crashed
```
Scenario: One Side Crashes
```
Time T0:
  Connection established
  Both sides in ESTABLISHED state

Time T100:
  Host A and Host B exchange data
  Connection working

Time T200:
  Both sides idle
  No data sent or received

Time T300:
  HOST A CRASHES (power failure, network cable pulled)
  Host A doesn't send FIN/RST
  Connection state just disappears
  Host B's TCP still thinks the connection is open

Time T400:
  Host B still idle
  Still unaware of the crash
  Would wait forever if no activity occurs
```
Why No Automatic Detection:
```
TCP operates on demand:
1. Application sends data
2. TCP sends a segment
3. ACK received
4. Connection confirmed good

If the application sends no data:
- No segments generated
- No ACKs checked
- No probes sent
- Nothing verifies the connection

The connection is ASSUMED to be alive;
there is no mechanism to verify an idle connection.
```
Detection Options:
Option 1: Send Data (Application Layer)
```
Host B sends a heartbeat/keepalive:
    send(sock, "ping", 4, 0);

What happens:
- If Host A is alive: it receives the data and replies
- If Host A has rebooted: its TCP answers with RST
  (no such connection)
- If Host A is unreachable: retransmissions eventually
  time out (tens of seconds to minutes)

Result: Host B learns the connection is dead.

But this requires the application to:
- Know to send a heartbeat
- Know when to send it (timeout interval)
- Handle the responses
```
Option 2: TCP Keep-Alive
```
Use the SO_KEEPALIVE socket option:
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &opt, sizeof(opt));

Behavior:
- After 2 hours of idle time (default)
- TCP sends a keep-alive probe
- If no response: considers the connection dead
- Closes the connection, notifies the application

Linux tuning (sysctls):
    tcp_keepalive_time   = 7200 (seconds = 2 hours)
    tcp_keepalive_intvl  = 75   (seconds between probes)
    tcp_keepalive_probes = 9    (number of probes)

Total time to detect: 2 hours + (75 * 9) seconds
A long time!
```
Option 3: Application Protocol Timeouts
```
The application implements its own keep-alive.

Example (HTTP/1.1 persistent connections):
    Connection: keep-alive
    Keep-Alive: timeout=5, max=100
The server closes the connection if no request arrives
within 5 seconds; the client must send a request or
the connection closes.

Example (chat application):
    App sends "typing..." messages.
    If the user isn't typing, it sends a heartbeat
    every 30 seconds.
    If the heartbeat times out: connection dead.
```
Option 4: Application Layer Heartbeat
```c
// Pseudo-code
while (1) {
    timeout = select(sock, ..., 30_seconds);
    if (timeout) {
        // 30 seconds with no activity: send heartbeat
        send(sock, "HEARTBEAT", 9, 0);

        // Set timeout for the response
        timeout = select(sock, ..., 5_seconds);
        if (timeout) {
            // No response to heartbeat
            printf("Connection lost, other end crashed\n");
            close(sock);
            break;
        }
    }
    // Activity detected (data or heartbeat response): continue
}
```
Timeline Examples:
Scenario 1: No Keep-Alive, Idle Connection
```
T=0:    Connect, exchange data
T=100:  Both sides idle
T=100:  Host A crashes
T=200:  Host B still idle, unaware
T=500:  Host B still idle, unaware
T=1000: Host B still idle, unaware

Host B never finds out A crashed
```
Scenario 2: Keep-Alive Enabled
```
T=0:        Connect
T=100:      Both sides idle
T=100:      Host A crashes
T=7300:     (2 hours of idle later)
            TCP sends a keep-alive probe to A
T=7300+75:  No response, send second probe
T=7300+675: No responses to 9 probes
            TCP: "Connection dead"
            Application notified
            Socket becomes unusable

Total detection time: a bit over 2 hours
```
Scenario 3: Application Heartbeat (30 second timeout)
```
T=0:   Connect
T=100: Both sides idle
T=100: Host A crashes
T=130: 30 seconds elapsed with no activity
       Host B sends a heartbeat and waits for a
       response (5 second timeout)
T=135: No response = connection dead
       Host B detects the crash

Total detection time: ~35 seconds
```
Why This Matters:
```
Common problem: zombie connections

Host A crashes while idle.
Host B doesn't know and keeps the socket open.

Resources tied up:
- TCP connection slot
- File descriptor
- Memory
- Potential application state

With many clients:
The server accumulates zombie connections,
eventually runs out of file descriptors,
cannot accept new connections,
and appears to hang.
```
Real-World Example: Web Server
```
Client makes an HTTP request:
    GET /index.html HTTP/1.1
    Connection: keep-alive

Server responds:
    HTTP/1.1 200 OK
    Connection: keep-alive

The connection remains open for the next request.
The client crashes without closing.

Without a keep-alive timeout:
    Server waits forever
    Connection never closes
    Slot remains occupied

With an application timeout:
    Server closes after ~5-30 seconds
    Frees resources
    Slot available for new clients
```
Conclusion:
NO. TCP cannot automatically detect if an idle connection’s remote end has crashed. TCP is a demand-driven protocol—it only verifies connections when data is transmitted. If one end crashes without sending FIN or RST, the other end has no way to know unless it tries to send data (which causes timeout) or uses TCP keep-alive (2-hour default, too slow) or implements application-level heartbeats (best for detecting quick crashes). Most applications implement their own keep-alive/heartbeat mechanism to detect broken connections quickly rather than relying on TCP’s passive approach.
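The keep-alive timers mentioned above can also be tuned per socket rather than system-wide. A Python sketch, assuming a Linux host (TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific options, hence the `hasattr` guard):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Enable TCP keep-alive on this socket
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):      # Linux-only constants
    # First probe after 30 s of idle, then every 10 s, give up after
    # 3 probes: a dead peer is noticed after ~30 + 3*10 = 60 s of
    # silence instead of the 2+ hour default.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)

enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(enabled != 0)    # True: keep-alive is on for this socket
sock.close()
```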
Problem 2: Dynamic Host Configuration Protocol
Section titled “Problem 2: Dynamic Host Configuration Protocol”
We discussed in class that a host’s IP address can either be configured manually, or by Dynamic Host Configuration Protocol (DHCP).
-
(a) Describe the advantages and disadvantages of each approach.
Solution
Manual (Static) Configuration
- Advantages:
- Stable, predictable IP addresses.
- Suitable for servers and infrastructure devices.
- No dependency on DHCP availability.
- Disadvantages:
- Error-prone and time-consuming.
- Poor scalability.
- Risk of IP address conflicts.
DHCP (Dynamic Configuration)
- Advantages:
- Plug-and-play for clients.
- Centralized network configuration.
- Avoids most IP conflicts.
- Disadvantages:
- Depends on DHCP server availability.
- IP addresses may change.
- Slight delay during address acquisition.
-
(b) Describe how a host gets an IP address using DHCP.
Solution
DHCP Process (DORA)
- DHCPDISCOVER – Client broadcasts request.
- DHCPOFFER – Server offers IP configuration.
- DHCPREQUEST – Client requests chosen offer.
- DHCPACK – Server confirms lease.
(Then later: lease renew with REQUEST/ACK before it expires; if server refuses, DHCPNAK)
DHCPNAK stands for DHCP Negative Acknowledgment. It is a message sent by a DHCP server to a client to tell the client that its requested IP configuration is invalid and cannot be used.
Problem 3: UDP Remote Calculator Server
Section titled “Problem 3: UDP Remote Calculator Server”
You are asked to design a UDP server that would run at 10.10.100.180, port 30000, and would be used as a remote calculator to perform addition, subtraction and multiplication on two 4 byte integers sent by clients. Your server needs to run in a loop, accept the next client request, perform the operation and send the result back to the client. Your client needs to run in a loop, ask the user for the type of operation and two numbers, put them into a message and send them to the server. When the client receives the reply, it prints the result on the screen. You are asked to design an application layer protocol and implement the client/server code. Take into consideration that the client and the server may have different endian representation of integers, i.e., the client may be little-endian while the server is big-endian and vice versa.
Application-Layer Protocol
- All integers use network byte order (big-endian).
Request (9 bytes):
- 1 byte: operation ('+', '-', '*')
- 4 bytes: integer A
- 4 bytes: integer B
Reply (4 bytes):
- 4 bytes: result
-
(a) Show the pseudocode for your UDP client.
Solution
```
client():
    sock = udp_socket()
    server = (10.10.100.180, 30000)
    loop:
        op = input_operation()
        a = input_int()
        b = input_int()
        msg = op + htonl(a) + htonl(b)
        sendto(sock, msg, server)
        reply = recvfrom(sock, 4)
        print(ntohl(reply))
```
-
(b) Show the pseudocode for your UDP server.
Solution
```
server():
    sock = udp_socket()
    bind(sock, (10.10.100.180, 30000))
    loop:
        msg, addr = recvfrom(sock, 9)
        op = msg[0]
        a = ntohl(msg[1:5])
        b = ntohl(msg[5:9])
        if op == '+': r = a + b
        if op == '-': r = a - b
        if op == '*': r = a * b
        sendto(sock, htonl(r), addr)
```
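As a sanity check of the 9-byte request / 4-byte reply protocol, here is a minimal Python sketch of the message handling. `struct`'s `'!'` prefix means network byte order (big-endian), which sidesteps endianness exactly as `htonl`/`ntohl` do; the helper names are ours, and the result is assumed to fit in a signed 32-bit field.

```python
import struct

OPS = {b'+': lambda a, b: a + b,
       b'-': lambda a, b: a - b,
       b'*': lambda a, b: a * b}

def pack_request(op: bytes, a: int, b: int) -> bytes:
    # '!cii' = 1-byte op + two signed 32-bit ints, network byte order
    return struct.pack('!cii', op, a, b)   # always 9 bytes

def handle_request(msg: bytes) -> bytes:
    op, a, b = struct.unpack('!cii', msg)
    result = OPS[op](a, b)
    # 4-byte reply; struct raises if the result overflows 32 bits
    return struct.pack('!i', result)

# Round trip without a network, just to show the framing:
reply = handle_request(pack_request(b'*', 6, 7))
print(struct.unpack('!i', reply)[0])   # 42
```

On the wire the same bytes would simply be passed to `sendto`/`recvfrom` as in the pseudocode above.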
Problem 4: Multi-Threaded TCP Remote Calculator
Section titled “Problem 4: Multi-Threaded TCP Remote Calculator”
You are asked to design a multi-threaded TCP server that would run at 10.10.100.180, port 30000, and would be used as a remote calculator to perform addition, subtraction and multiplication on two 4 byte integers sent by clients. Your server needs to run in a loop, accept the next client connection and create a new thread that would interact with the client. The service thread runs in a loop, receives the next request from the client, performs the requested operation and sends the result back to the client until the client closes the connection. Your client needs to run in a loop, ask the user for the type of operation and two numbers, put them into a message and send them to the server. When the client receives the reply, it prints the result on the screen. You are asked to design an application layer protocol and implement the client/server code. Take into consideration that the client and the server may have different endian representation of integers, i.e., the client may be little-endian while the server is big-endian and vice versa.
-
(a) Show the pseudocode for your TCP client.
Solution
```
client():
    sock = tcp_socket()
    connect(sock, (10.10.100.180, 30000))
    loop:
        op, a, b = user_input()
        write_all(sock, op + htonl(a) + htonl(b))
        reply = read_exact(sock, 4)
        print(ntohl(reply))
```
-
(b) Show the pseudocode for your TCP server.
Solution
```
server():
    listen_sock = tcp_socket()
    bind(listen_sock, (10.10.100.180, 30000))
    listen(listen_sock)
    loop:
        conn, addr = accept(listen_sock)
        create_thread(service_client, conn)

service_client(conn):
    loop:
        req = read_exact_or_eof(conn, 9)
        if EOF: break
        op, a, b = parse(req)
        compute result
        write_all(conn, htonl(result))
    close(conn)
```
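Because TCP is a byte stream, a single `recv()` may deliver fewer than the 9 bytes of a request, which is why the pseudocode relies on `read_exact`/`write_all` rather than plain reads. A possible Python version of `read_exact` (the helper name follows the pseudocode; the implementation is ours):

```python
import socket

def read_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or return b'' if the peer closed first."""
    chunks = []
    remaining = n
    while remaining > 0:
        data = conn.recv(remaining)
        if not data:               # EOF: peer closed the connection
            return b''
        chunks.append(data)
        remaining -= len(data)
    return b''.join(chunks)

# Demo over a local socket pair; sending the 9-byte request in two
# pieces shows why the loop is needed.
a, b = socket.socketpair()
a.sendall(b'+\x00\x00\x00\x01')
a.sendall(b'\x00\x00\x00\x02')
msg = read_exact(b, 9)
print(msg == b'+\x00\x00\x00\x01\x00\x00\x00\x02')   # True
a.close(); b.close()
```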
Problem 5: Multi-Socket UDP Server with select()
Section titled “Problem 5: Multi-Socket UDP Server with select()”
Assume you have a UDP server that will be listening to requests from 2 sockets: One listening to port 20000, one listening to port 30000. Assume both sockets are blocking sockets. Show the pseudocode for a generic single-threaded UDP server that would receive data from any of these sockets. Make sure that your server is not blocked waiting for a message on one socket, while there are messages ready for reading on the other. In other words, as soon as a message is ready on one of the sockets, your server must be able to read from it.
Solution
```
server():
    sock1 = udp_socket(port=20000)
    sock2 = udp_socket(port=30000)

    loop:
        ready = select({sock1, sock2})
        if sock1 ready:
            recvfrom(sock1)
        if sock2 ready:
            recvfrom(sock2)
```
Problem 6: UDP Echo Server
Section titled “Problem 6: UDP Echo Server”
Assume you would be designing an UDP Echo Server that would run at 10.10.100.180, port 30000. Your server would get a message from the UDP socket and simply echo (send) it back to the sender (client). Your echo client would run in a loop: Asks the user to enter the size of the message, sends it to the server, gets the reply back and prints the message size of the reply on the screen. Assume that a UDP client can potentially send a max. sized UDP packet.
-
(a) Show the pseudocode for a generic UDP Echo client.
Solution
```
client():
    sock = udp_socket()
    loop:
        n = input_size()
        msg = make_bytes(n)
        sendto(sock, msg, server)
        reply = recvfrom(sock, 65535)   # buffer must fit a max-sized datagram
        print(len(reply))
```
-
(b) Show the pseudocode for a generic UDP Echo server.
Solution
```
server():
    sock = udp_socket()
    bind(sock, (10.10.100.180, 30000))
    loop:
        msg, addr = recvfrom(sock, 65535)   # buffer must fit a max-sized datagram
        sendto(sock, msg, addr)
```
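A runnable Python version of the echo exchange above, exercised on the loopback address with an ephemeral port (instead of 10.10.100.180:30000) so it runs anywhere; the 65535-byte receive buffer covers the largest UDP datagram the problem allows.

```python
import socket, threading

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 0))          # ephemeral port for the demo

def echo_once(sock):
    msg, addr = sock.recvfrom(65535)   # blocks until a datagram arrives
    sock.sendto(msg, addr)             # echo it straight back

threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
client.sendto(b'x' * 1000, server.getsockname())
reply, _ = client.recvfrom(65535)
print(len(reply))                      # 1000

client.close()
server.close()
```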
Problem 7: Multi-Port UDP Echo Server with Threads
Section titled “Problem 7: Multi-Port UDP Echo Server with Threads”
Assume you would be designing a server that would run at 10.10.100.180 and listen to UDP ports 20000 and 30000 for client requests. Upon reception of a message from any of these ports, the server simply echoes the message back to the client.
-
(a) Show the pseudocode for this UDP server if you must implement a single-threaded server.
Solution
```
server():
    s1 = udp_socket(20000)
    s2 = udp_socket(30000)
    loop:
        ready = select({s1, s2})
        for s in ready:
            msg, addr = recvfrom(s)
            sendto(s, msg, addr)
```
-
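The single-threaded select() loop sketched above can be made concrete in Python; this runnable version uses ephemeral loopback ports in place of 20000/30000 so it works anywhere.

```python
import select, socket

s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.bind(('127.0.0.1', 0))              # stand-in for port 20000
s2.bind(('127.0.0.1', 0))              # stand-in for port 30000

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
client.sendto(b'hello', s2.getsockname())   # traffic on the second port only

# Block on BOTH sockets at once: select() returns whichever is
# readable, so the loop never waits on the "wrong" socket.
ready, _, _ = select.select([s1, s2], [], [], 5.0)
for s in ready:
    msg, addr = s.recvfrom(65535)
    s.sendto(msg, addr)                # echo back

reply, _ = client.recvfrom(65535)
print(reply)                           # b'hello'

for s in (s1, s2, client):
    s.close()
```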
(b) Show the pseudocode for this UDP server if you are asked to use 2 separate threads, one serving client requests at port 20000 and the other at port 30000.
Solution
```
server():
    create_thread(echo_loop, socket_20000)
    create_thread(echo_loop, socket_30000)

echo_loop(sock):
    loop:
        msg, addr = recvfrom(sock)
        sendto(sock, msg, addr)
```
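A runnable Python sketch of the two-thread design above, again with ephemeral loopback ports standing in for 20000 and 30000; the `b'quit'` shutdown hook is demo-only, not part of the echo protocol.

```python
import socket, threading

def echo_loop(sock):
    while True:
        msg, addr = sock.recvfrom(65535)
        if msg == b'quit':              # demo-only shutdown hook
            return
        sock.sendto(msg, addr)

socks = []
for _ in range(2):                      # one socket + one thread per port
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('127.0.0.1', 0))
    threading.Thread(target=echo_loop, args=(s,), daemon=True).start()
    socks.append(s)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5.0)
echoes = []
for s in socks:                         # each port echoes independently
    client.sendto(b'ping', s.getsockname())
    data, _ = client.recvfrom(65535)
    echoes.append(data)
print(echoes)                           # [b'ping', b'ping']

for s in socks:
    client.sendto(b'quit', s.getsockname())
client.close()
```

Each thread can simply block in `recvfrom()` on its own socket, which is why no select() is needed in this variant.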
Problem 8: TCP Server with Initial Message Exchange
Section titled “Problem 8: TCP Server with Initial Message Exchange”
Assume you would be designing a TCP client and a single-threaded TCP Server. Your server would run at 10.10.100.180, port 30000. Once a connection is established, your server will first send a 100 byte message. Your client must read this 100 byte message, and send it back to the server. The server must then read the message back, close the connection and go back to accept a new connection.
-
(a) Show the pseudocode for this TCP client.
Solution
```
client():
    sock = tcp_socket()
    connect(sock, (10.10.100.180, 30000))
    msg = read_exact(sock, 100)
    write_all(sock, msg)
    close(sock)
```
-
(b) Show the pseudocode for this TCP server.
Solution
```
server():
    listen_sock = tcp_socket()
    bind(listen_sock, (10.10.100.180, 30000))
    listen(listen_sock)
    loop:
        conn, addr = accept(listen_sock)
        write_all(conn, make_100_byte_msg())
        echoed = read_exact(conn, 100)
        close(conn)
```
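For completeness, a runnable Python sketch of this exchange on an ephemeral loopback port instead of 10.10.100.180:30000; `sendall` and the `read_exact` helper play the roles of `write_all`/`read_exact` in the pseudocode, and the server serves a single client for the demo.

```python
import socket, threading

def read_exact(conn, n):
    """Loop until exactly n bytes arrive (TCP is a byte stream)."""
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def server(listen_sock):
    conn, _ = listen_sock.accept()
    conn.sendall(b'A' * 100)             # 1) server sends 100 bytes first
    echoed = read_exact(conn, 100)       # 2) reads the echo back
    conn.close()                         # 3) closes, would accept the next

listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.bind(('127.0.0.1', 0))
listen_sock.listen(1)
threading.Thread(target=server, args=(listen_sock,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listen_sock.getsockname())
msg = read_exact(client, 100)            # client reads the 100-byte message
client.sendall(msg)                      # and sends it back
client.close()
listen_sock.close()
print(len(msg))                          # 100
```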