CSCI 340 Akinlar Lecture 3

Real-time operating systems are fundamentally different from general-purpose operating systems like Windows, Linux, or macOS. They provide an OS interface for a fixed set of tasks determined in advance. The number of tasks is immutable during execution—you cannot dynamically create or remove processes while the system runs.

  • Fixed task set: Tasks are predetermined before execution
  • Compiled binary: System is compiled into a single binary for deployment
  • Embedded focus: Typically used in IoT devices and industrial applications
  • Deterministic behavior: Predictable execution patterns

Zephyr is currently the most popular real-time operating system. It’s widely used for:

  • IoT devices
  • Industrial applications
  • Microcontroller deployments

FreeRTOS was historically popular but has been superseded by Zephyr.

If you’re interested in exploring real-time operating systems further, the instructor recommends looking into Zephyr, including demonstration boards and YouTube tutorials.

Historical Context: MS-DOS and Single-Tasking Systems

MS-DOS (Microsoft Disk Operating System) was a single-tasking operating system used primarily on personal computers in the 1980s and 1990s. In a single-tasking system, the CPU executes one program at a time.

The system presented a command-line interface (CLI) where users would:

  1. Type a command
  2. Press Enter
  3. The OS loads the executable into memory and executes it
  4. Upon completion, control returns to the shell

Common commands like DIR (directory listing) were MS-DOS commands.
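The load-run-return loop above can be sketched in miniature. This is a hedged Unix-style analogue (MS-DOS used its own program loader, not `fork()`/`exec()`), using Python's `os` wrappers around the POSIX calls; the command and arguments are made-up stand-ins for what a user would type:

```python
import os

def run_command(path, argv):
    """One pass of a single-tasking shell loop: load, run, wait, return."""
    pid = os.fork()                  # Unix analogue of the DOS loader
    if pid == 0:
        try:
            os.execv(path, argv)     # child: replace its image with the program
        finally:
            os._exit(127)            # only reached if execv fails
    _, status = os.waitpid(pid, 0)   # the "shell" blocks until the program exits
    return os.WEXITSTATUS(status)

exit_code = run_command("/bin/echo", ["echo", "done"])
print("control returns to the shell, exit code", exit_code)
```

Note that `waitpid()` is the whole story of single-tasking: the shell does nothing else while the program runs.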

The fundamental inefficiency of single-tasking systems is CPU idle time. During process execution, the CPU alternates between:

  • Computation cycles: Active processing
  • I/O cycles: Waiting for disk/device operations

When a process waits for I/O, the CPU sits idle—unable to execute other work even if available.

In the early 1980s, IBM was manufacturing personal computer hardware but lacked an operating system. Rather than developing one internally (which would be the logical choice for a large company), IBM decided that:

Software doesn’t make money. Hardware is the real deal. People buy what they can touch.

IBM decided to purchase an operating system from an external vendor.

Paul Allen and Bill Gates (Microsoft founders) approached IBM with a clever licensing strategy:

  • Traditional approach: IBM buys the OS outright
  • Microsoft’s approach: Charge a per-unit royalty (approximately $2 per machine sold)

IBM reasoned: “The machine costs far more than that. What’s $2 in royalties?”

However, Bill Gates and Paul Allen didn’t actually own an operating system at that time. They learned of a developer in Seattle who had written an operating system called QDOS (Quick and Dirty Operating System). Microsoft purchased it for approximately $50,000 and rebranded it as MS-DOS (Microsoft Disk Operating System).

Ten years later, IBM realized their mistake. As MS-DOS became ubiquitous and personal computer sales skyrocketed, Microsoft’s per-unit royalties grew substantially. IBM attempted to reclaim market share by developing OS/2, a superior operating system compared to MS-DOS.

However, OS/2 failed commercially despite being technically superior. The reason: established user inertia.

If something isn’t broken, don’t fix it.

Users had spent a decade with MS-DOS. Even when presented with a demonstrably better alternative, they resisted switching because they were accustomed to the familiar system.

This historical event demonstrates a critical lesson in software adoption: user familiarity and network effects often triumph over technical superiority.

While Microsoft maintained command-line interfaces, Apple Computer (co-founded by Steve Jobs) popularized the graphical user interface (GUI). To compete, Microsoft developed Windows 3.1, an early GUI-based version of Windows.

Both early Windows and early Mac OS were notoriously unstable—they crashed frequently. However, Apple made a critical decision:

The Unix Foundation and Modern Operating Systems

When Apple’s proprietary operating system proved unreliable and unmaintainable, the company abandoned development and adopted Unix—a proven, rock-solid operating system.

Unix is one of the first multitasking operating systems, designed at AT&T (specifically in Bell Labs in New Jersey). Key characteristics:

  • Rock-solid reliability: Rarely crashes, proven over decades
  • Multitasking capability: Can manage multiple concurrent processes
  • POSIX standard: Defines a common interface across Unix variants

BSD (Berkeley Software Distribution)

  • Unix clone developed at UC Berkeley
  • Enhanced with additional features
  • Led to Sun Microsystems (eventually acquired by Oracle)

Linux

  • Free, open-source Unix clone
  • Dominates servers and embedded systems
  • Used in Android phones

macOS

  • Apple’s modern OS, built on Darwin, which incorporates code from FreeBSD (a Unix derivative)
  • That’s why Mac users can access a Unix command line

All modern general-purpose operating systems (Windows, macOS, Linux, iOS, Android) incorporate multitasking concepts—even though they may not be Unix-based.

Multitasking systems fundamentally solve the CPU idle time problem by loading and executing multiple processes concurrently. Rather than waiting for one process to complete, the OS can switch to another process while the first is blocked on I/O.

┌─────────────────────┐
│ Operating System │
├─────────────────────┤
│ Process 1 │
├─────────────────────┤
│ Process 2 │
├─────────────────────┤
│ Process 3 │
├─────────────────────┤
│ ... │
└─────────────────────┘

The OS loads itself into memory (upper or lower region), then loads multiple processes.

The Core Principle: Overlapping I/O and Computation

Goal: Keep the CPU as busy as possible by overlapping the computation cycle of one process with the I/O cycle of another.

| Time     | State    | Process | Activity          |
| -------- | -------- | ------- | ----------------- |
| 0-5 ms   | Running  | P1      | Computation       |
| 5-15 ms  | I/O wait | P1      | I/O in progress   |
| 5-12 ms  | Running  | P2      | Computation       |
| 12-20 ms | Running  | P3      | Computation       |
| 20-25 ms | Running  | P1      | Resume (I/O done) |

By interleaving processes, the CPU remains productive instead of idle.
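The overlap in the table above can be demonstrated in miniature. In this sketch (mine, not lecture code), a thread's `sleep()` stands in for P1's I/O wait while the main thread performs P2's computation; sleeping releases the CPU, so the total wall-clock time is close to the I/O wait alone:

```python
import threading
import time

def io_task():
    """Stand-in for P1's I/O cycle: sleeping blocks without using the CPU."""
    time.sleep(0.2)

def compute_task():
    """Stand-in for P2's computation burst."""
    return sum(range(200_000))

start = time.perf_counter()
p1 = threading.Thread(target=io_task)
p1.start()                 # P1 begins its I/O wait
result = compute_task()    # the CPU runs P2 instead of sitting idle
p1.join()
elapsed = time.perf_counter() - start
# elapsed is close to the 0.2 s I/O wait, not 0.2 s plus the compute time,
# because the computation overlapped with the I/O.
```

Without the thread, the same work would take the I/O wait plus the computation time back to back.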

On a typical modern system, CPU utilization hovers around 1% because most applications are I/O-bound:

  • Web browsers: Waiting for network requests, user input
  • Text editors: Waiting for keyboard/mouse input
  • Email clients: Waiting for server responses

When computation-bound tasks run (matrix multiplication, data processing), CPU utilization jumps to 100%.

Every process follows a pattern:

  • Computation burst: Active CPU usage
  • I/O wait: Blocked on external resource

Example: Web Browser

  1. User types a URL
  2. Browser performs I/O: downloads page content from server
  3. Browser performs computation: renders HTML/CSS
  4. Browser waits: blocked on user interaction (click, scroll, type)

The I/O-bound nature of most applications means that, without multitasking, the system would be unacceptably slow.

Programmers don’t interact with hardware directly. Instead, they use the system call interface—a set of functions exported by the operating system:

Common system calls:

  • open(): Open a file
  • read(): Read from file/device
  • write(): Write to file/device
  • close(): Close a file
  • fork(): Create a new process

Modern operating systems export approximately 300 system calls.
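The four file-related calls listed above can be exercised directly from Python, whose `os` module is a thin wrapper over the POSIX system call interface (the file path here is a throwaway temp file, not anything from the lecture):

```python
import os
import tempfile

# A scratch file to operate on.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # open()
os.write(fd, b"hello, system calls\n")                            # write()
os.close(fd)                                                      # close()

fd = os.open(path, os.O_RDONLY)                                   # open()
data = os.read(fd, 64)                                            # read()
os.close(fd)                                                      # close()
print(data)
```

Each call here traps into the kernel; the file descriptor `fd` is the small integer handle the OS gives back.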

Pure (non-preemptive) multitasking systems have a critical flaw: processes must voluntarily release the CPU. If a process has a long computation burst (e.g., 1 second for matrix multiplication), it will monopolize the CPU.

Impact on I/O-bound processes:

  • User presses a key
  • Must wait for current computation-bound process to voluntarily release CPU
  • Result: System becomes unresponsive and frustrating

Timeline with no preemption:

P1 (computation): ═══════════════════════════════════════════════ (1 second)
P2 (editor):      [waiting...waiting...waiting... response too slow!]

Time-sharing systems address responsiveness by forcefully taking the CPU away from processes after a fixed time quantum.

  1. Time quantum: Fixed time slice (typically 10-100 milliseconds)
  2. Timer interrupt: Hardware interrupt fires periodically
  3. Preemption: OS forcibly removes CPU from process if quantum expires
  4. Round-robin scheduling: Cycles through all ready processes

Time Quantum = 10 ms

P1: [10ms computation] → P2: [10ms computation] → P3: [8ms, I/O release]
         ↓ (forced)                                    ↓ (voluntary)
P1: [continue 5ms] → P2: [continue 7ms] → P3: [waiting...]

Key insight: Even if a process’s computation burst is 1 second:

  • Without time-sharing: Process uses CPU for entire second
  • With time-sharing: Process gets 10ms, is preempted, and will be rescheduled in ~30ms (if 3 other processes)
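The arithmetic above can be checked with a small round-robin simulation (my sketch, not lecture code; the process names and burst lengths are made up). One long 1-second job shares the CPU with three short 15 ms jobs under a 10 ms quantum:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts maps process name -> remaining CPU need (ms).
    Returns each process's completion time (ms)."""
    ready = deque(bursts.items())
    clock = 0
    done = {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)   # run for one quantum at most
        clock += run
        remaining -= run
        if remaining:
            ready.append((name, remaining))  # preempted: back of the queue
        else:
            done[name] = clock               # finished: record completion time
    return done

done = round_robin({"P1": 1000, "P2": 15, "P3": 15, "P4": 15}, quantum=10)
print(done)
```

The short interactive jobs all finish within about 65 ms even though P1 needs a full second; without preemption they would have waited 1000 ms behind it.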

With round-robin scheduling, each process appears to have its own dedicated CPU, even though processes are context switching rapidly:

Everybody thinks that they have their own CPU. Everybody is happy.

This is why:

  • Multiple browser tabs appear to run simultaneously
  • Spotify can stream music while you work
  • Word remains responsive while other applications run

| Aspect            | Time-Sharing           | Batch Multitasking         |
| ----------------- | ---------------------- | -------------------------- |
| Context switching | Forced (preemptive)    | Voluntary                  |
| Responsiveness    | High                   | Low                        |
| Overhead          | Higher (more switches) | Lower (fewer switches)     |
| Best for          | Interactive apps       | Computation-heavy jobs     |
| User experience   | Responsive             | Sluggish (if much I/O wait)|

Batch multitasking is suitable for non-interactive workloads:

  • Bank end-of-day interest calculations
  • Overnight data processing
  • Scientific simulations

Context switching carries overhead—saving and restoring process state consumes CPU cycles. For computation-bound jobs without user interaction, this overhead is unnecessary.

All modern general-purpose operating systems are time-sharing systems:

  • Windows
  • macOS
  • Linux
  • iOS
  • Android

They must support interactive applications that require immediate responsiveness.

  • Single-Tasking (MS-DOS)
  • Multitasking (Unix)
  • Time-Sharing (Windows, Linux, macOS)

Note: Time-sharing systems are inherently multitasking systems with the additional feature of preemptive scheduling.

Concurrency vs. Parallelism

  • Concurrency (1 CPU): OS rapidly context-switches between processes. Processes appear concurrent but execute serially.
  • Parallelism (Multiple CPUs): Multiple processes execute simultaneously on different CPU cores.

Multi-core systems (6-8 cores typical in laptops):

  • Support true parallelism: Different processes run on different cores
  • Also support concurrency: More processes than cores via context switching

Operating systems fundamentally consist of a set of exported functions—the system call interface. Just as Java developers use the Java API, system programmers use the OS’s system calls:

Examples:

  • read(), write(), open(), close()
  • Process management: fork(), exec(), wait()
  • Memory management: mmap(), brk() (note: malloc() and free() are C library functions layered on top of these system calls)

The OS abstracts away hardware complexity, providing a consistent interface for applications.

  1. Real-time OS: Fixed task set, typically for embedded/IoT
  2. Single-tasking (MS-DOS): One process at a time; CPU often idle
  3. Multitasking (Unix): Multiple processes; improved CPU utilization
  4. Time-sharing (Modern OS): Preemptive multitasking with fixed time quanta for responsiveness

The evolution from MS-DOS → Multitasking → Time-sharing reflects the progression toward more responsive and efficient systems. Modern operating systems must support interactive applications, making time-sharing essential.