Operating System Notes

Soham_Bhutkar
Synopsis

Prepare for exams; don't just read and listen to novels and fanfics.

Chapter 1 - All Important Notes for Operating Systems

Introduction to Operating Systems

An Operating System (OS) is software that manages computer hardware and software resources and provides common services for computer programs. Its primary purpose is to create an environment where a user can execute programs conveniently and efficiently. The OS acts as an intermediary between the user and the computer hardware.

Types of Operating Systems

Batch OS: Processes are executed in batches without any user interaction. The system operator collects jobs and runs them in sequence. This type of OS is suitable for tasks that do not require user input, like payroll systems or reports.

Time-sharing OS: Allows multiple users to share a single computer simultaneously. The CPU is rapidly switched between processes, giving each user the illusion of exclusive access to the system. This is done through time slicing.

Distributed OS: Uses multiple computers connected via a network to appear as a single, coherent system. These systems offer increased computational power, reliability, and access to shared resources.

Real-time OS (RTOS): Designed to serve real-time applications where a task must be completed within a strict deadline. RTOSes are used in systems that need to respond quickly and reliably, such as industrial control systems and robotics.

 

System Components and Structure

The OS is made up of several key components:

Process Management: Handles the creation, scheduling, and termination of processes.

Memory Management: Manages primary and secondary memory, keeping track of memory usage and allocating/deallocating space.

File Management: Organizes, stores, and retrieves files and directories.

I/O Systems Management: Manages device drivers and I/O hardware.

Security and Protection: Protects system resources and user data.

System Calls are the interface between a process and the OS. They are requests from a program to the kernel for a service, such as file access, process creation, or I/O operations.
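In Python, the os module exposes thin wrappers around common system calls, which makes this boundary easy to see (a minimal sketch using only standard-library calls):

```python
import os

# os.getpid() wraps the getpid() system call: the process asks the
# kernel for its own process ID.
pid = os.getpid()

# os.write() wraps the write() system call: file descriptor 1 is
# standard output, and the kernel performs the actual I/O.
os.write(1, f"running as pid {pid}\n".encode())
```

Every call crosses from user mode into kernel mode and back, which is why frequent small system calls are more expensive than batched ones.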

 

Process Management

A process is an instance of a computer program that is being executed. It's a fundamental unit of work in a modern OS.

Process States

A process can be in one of five states during its lifetime:

New: The process is being created.

Ready: The process is waiting to be assigned to a processor.

Running: The process is currently being executed by the CPU.

Waiting/Blocked: The process is waiting for some event to occur (e.g., I/O completion or a signal).

Terminated: The process has finished execution.

A Process Control Block (PCB) is a data structure in the OS kernel that contains all the information about a process. This includes its state, program counter, CPU registers, and memory management information.

Threads and Multithreading

A thread is a lightweight process and the basic unit of CPU utilization. A single process can have multiple threads of execution.

Benefits: Increased responsiveness, resource sharing, and better utilization of multi-core processors.

User vs. Kernel Threads: User-level threads are managed by a user-level library without kernel support. They are fast, but a single blocking system call can block the entire process. Kernel-level threads are supported and managed by the OS kernel. They are slower but allow true parallelism on multi-core systems.

Process Scheduling

The goal of CPU scheduling is to maximize CPU utilization by switching the CPU among processes.

Scheduling Criteria:

CPU Utilization: Keep the CPU as busy as possible.

Throughput: Number of processes completed per unit of time.

Turnaround Time: Total time to execute a particular process (from submission to completion).

Waiting Time: Total time a process spends in the ready queue.

Response Time: Time from submission of a request until the first response is produced.

Scheduling Algorithms

First-Come, First-Served (FCFS): The simplest algorithm. The process that requests the CPU first gets it first. It is non-preemptive. Drawback: the "convoy effect," where a single long-running process holds up all subsequent processes.

Problem: Find the average waiting time for the following processes using FCFS.

| Process | Burst Time |

| :--- | :--- |

| P1 | 24 |

| P2 | 3 |

| P3 | 3 |

Solution:

Gantt chart: [P1(0–24)] [P2(24–27)] [P3(27–30)]

P1 wait time = 0, P2 wait time = 24, P3 wait time = 27

Average waiting time = (0 + 24 + 27) / 3 = 17

Shortest Job First (SJF): Associates with each process the length of its next CPU burst and schedules the process with the shortest burst. It can be preemptive (Shortest-Remaining-Time-First) or non-preemptive. Advantage: gives the minimum average waiting time. Drawback: the length of the next CPU burst is difficult to predict.

Problem: Find the average waiting time for the following processes using non-preemptive SJF.

| Process | Burst Time |

| :--- | :--- |

| P1 | 6 |

| P2 | 8 |

| P3 | 7 |

| P4 | 3 |

Solution:

Order of execution: P4, P1, P3, P2

Gantt chart: [P4(0–3)] [P1(3–9)] [P3(9–16)] [P2(16–24)]

P4 wait time = 0, P1 wait time = 3, P3 wait time = 9, P2 wait time = 16

Average waiting time = (0 + 3 + 9 + 16) / 4 = 7

Priority Scheduling: A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Drawback: starvation, since a low-priority process might never execute. Remedy: aging, which gradually increases the priority of long-waiting processes.

Round-Robin (RR): Designed for time-sharing systems. Each process gets a small unit of CPU time, called a time quantum. When the quantum expires, the process is preempted and put at the end of the ready queue. Advantage: fair scheduling and good response time. Drawback: performance depends heavily on the size of the time quantum. A very small quantum leads to frequent context switches, and a very large one makes it behave like FCFS.
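Both worked examples above can be cross-checked with a short helper (a sketch; all processes are assumed to arrive at time 0, as in the problems):

```python
def avg_waiting_time(bursts, order):
    """Average waiting time for a non-preemptive schedule: each process
    waits for the combined burst time of everything scheduled before it
    (all processes arrive at time 0)."""
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock          # time spent in the ready queue
        clock += bursts[p]        # the process then runs to completion
    return sum(waits.values()) / len(waits)

bursts_fcfs = {"P1": 24, "P2": 3, "P3": 3}
print(avg_waiting_time(bursts_fcfs, ["P1", "P2", "P3"]))   # 17.0

bursts_sjf = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
sjf_order = sorted(bursts_sjf, key=bursts_sjf.get)          # shortest first
print(avg_waiting_time(bursts_sjf, sjf_order))              # 7.0
```

Note how SJF falls out of the same helper once the order is sorted by burst length.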

Context Switching is the process of saving the state of a running process and loading the state of the next process to be executed. This is a pure overhead as no useful work is done during this time.

 

Synchronization and Deadlocks

Interprocess Communication (IPC) mechanisms allow processes to exchange data.

Shared Memory: Processes share a region of memory, providing fast communication.

Message Passing: Processes communicate by exchanging messages. Slower, but more flexible for distributed systems.

Synchronization Mechanisms

The critical section problem arises when multiple processes access a shared resource. The goal is to design a protocol that ensures only one process can execute its critical section at a time.

Semaphores: An integer variable accessed only through two atomic operations, wait() and signal(). wait() decrements the semaphore value and blocks the process if the value becomes negative; signal() increments the value and unblocks a waiting process if any are blocked.

Mutexes: A simpler form of semaphore used for mutual exclusion. A process acquires the mutex before entering the critical section and releases it upon exit.

Monitors: A high-level synchronization primitive that provides a lock and condition variables, simplifying the critical section problem by hiding the implementation details.
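A minimal sketch of a binary semaphore guarding a critical section, using Python's standard threading.Semaphore:

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore used for mutual exclusion

def worker():
    global counter
    for _ in range(10_000):
        sem.acquire()          # wait(): blocks if another thread holds it
        counter += 1           # critical section: shared counter update
        sem.release()          # signal(): lets a waiting thread proceed

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000; without the semaphore, updates could be lost
```

A threading.Lock would serve equally well here; the semaphore form is shown because it matches the wait()/signal() notation above.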

Classic Problems of Synchronization

The Bounded-Buffer Problem: A producer process produces items and a consumer process consumes them; they share a fixed-size buffer. The problem is to ensure the producer doesn't add to a full buffer and the consumer doesn't take from an empty one.

The Reader-Writer Problem: Readers can read a shared resource simultaneously, but a writer must have exclusive access.

The Dining-Philosophers Problem: A classic resource allocation problem in which five philosophers alternate between thinking and eating. To eat, a philosopher needs two chopsticks, which are shared resources. The problem is to avoid a deadlock where all philosophers pick up one chopstick and wait indefinitely for the other.
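The bounded-buffer problem maps directly onto Python's standard queue.Queue, which blocks the producer on a full buffer and the consumer on an empty one (a sketch with one producer and one consumer):

```python
import threading
import queue

buffer = queue.Queue(maxsize=5)   # fixed-size bounded buffer

def producer():
    for item in range(20):
        buffer.put(item)          # blocks while the buffer is full

consumed = []

def consumer():
    for _ in range(20):
        consumed.append(buffer.get())   # blocks while the buffer is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))   # True: every item arrives, in order
```

Internally, queue.Queue implements exactly the semaphore/condition-variable protocol described above, which is why no explicit synchronization appears in the user code.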

Deadlocks

A deadlock is a state where two or more processes are blocked forever, waiting for each other to release a resource.

Conditions for Deadlock:

Mutual Exclusion: A resource can only be used by one process at a time.

Hold and Wait: A process holds at least one resource and is waiting to acquire another.

No Preemption: A resource cannot be forcibly taken from a process.

Circular Wait: A circular chain of processes exists in which each process holds a resource needed by the next.

Deadlock Handling

Deadlock Prevention: Ensure at least one of the four conditions cannot hold. For example, disallow "hold and wait" by forcing processes to request all needed resources at once.

Deadlock Avoidance: The OS has prior knowledge of the resources a process will request. The Banker's Algorithm is a classic avoidance algorithm.

Banker's Algorithm Problem: A system has 12 tape drives. P0, P1, and P2 need a maximum of 10, 4, and 9 drives respectively. At time T0, P0 holds 5, P1 holds 2, and P2 holds 2 drives. Is the system in a safe state?

Solution: Total drives = 12. Allocated drives = 5 + 2 + 2 = 9. Available drives = 12 − 9 = 3. Remaining needs: P0 = 5, P1 = 2, P2 = 7. Available (3) is not enough for P0 (5) or P2 (7), but it is for P1 (2), so run P1. P1 finishes and releases its 2 drives; Available becomes 3 + 2 = 5, which is enough for P0 (5). P0 finishes and releases its drives, leaving enough for P2 (7). Since all processes can finish in the sequence P1, P0, P2, the system is in a safe state.

Deadlock Detection: The OS allows deadlocks to occur and then uses a detection algorithm (e.g., a resource-allocation graph) to find them.

Deadlock Recovery: Once detected, the OS can recover by terminating processes or preempting resources.
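The tape-drive example can be checked with a single-resource version of the safety algorithm (a sketch; the process and drive counts are taken from the problem above):

```python
def safe_sequence(total, allocated, maximum):
    """Single-resource Banker's safety check: repeatedly pick a process
    whose remaining need fits in the available pool, let it finish, and
    reclaim its allocation. Returns a safe sequence, or None if unsafe."""
    available = total - sum(allocated.values())
    need = {p: maximum[p] - allocated[p] for p in allocated}
    sequence, remaining = [], set(allocated)
    while remaining:
        runnable = [p for p in remaining if need[p] <= available]
        if not runnable:
            return None                 # no process can finish: unsafe
        p = min(runnable, key=lambda q: need[q])
        available += allocated[p]       # p finishes and releases its drives
        sequence.append(p)
        remaining.remove(p)
    return sequence

# 12 tape drives; maximum needs 10/4/9, current holdings 5/2/2
print(safe_sequence(12, {"P0": 5, "P1": 2, "P2": 2},
                        {"P0": 10, "P1": 4, "P2": 9}))
# ['P1', 'P0', 'P2'] — the system is in a safe state
```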

Memory Management

Memory management is the OS function that handles the main memory.

Logical vs. Physical Address Space: A logical address is generated by the CPU; a physical address is the address seen by the memory unit. The Memory-Management Unit (MMU) translates logical addresses to physical addresses.

Dynamic Loading/Linking: A routine is loaded into memory only when it is called. This is useful for large programs where parts of the code might never be needed.

Swapping: A process can be temporarily swapped out of main memory to a backing store (disk) and later swapped back in.

Contiguous Memory Allocation

Main memory is divided into partitions: one for the OS and the rest for user processes, with each process occupying a single contiguous block.

Fragmentation:

External Fragmentation: Total free memory is enough, but it is scattered in small, non-contiguous chunks.

Internal Fragmentation: Allocated memory is slightly larger than requested memory; the unused space within a partition is wasted.

Paging

Paging is a non-contiguous memory allocation scheme that solves the external fragmentation problem.

Main memory is divided into fixed-size blocks called frames; logical memory is divided into same-sized blocks called pages. The OS maintains a page table for each process to map its pages to available frames.

Problem: Given a logical address of 250 and a page size of 100 bytes, what are the page number and offset?

Solution: Page number = floor(250 / 100) = 2; offset = 250 mod 100 = 50.
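The page-number/offset split is a single integer division; in Python, divmod computes both parts at once:

```python
def page_and_offset(logical_address, page_size):
    # divmod returns the quotient (page number) and remainder (offset)
    return divmod(logical_address, page_size)

print(page_and_offset(250, 100))   # (2, 50)
```

In real hardware the page size is a power of two, so the split is a shift and a mask rather than a division.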

Segmentation

Segmentation is a memory management technique where the logical address space is a collection of variable-sized segments. Each segment has a name and a length. It provides a user-oriented view of memory.

Virtual Memory Management

Virtual memory is a technique that allows a program to execute even if it's not entirely in memory. It separates the user's logical memory from physical memory, creating the illusion of a very large memory.

Demand Paging: Pages are loaded into memory only when they are needed (on demand). This improves system performance.

Page Replacement: When a page fault occurs and memory is full, a page must be selected for replacement.

FIFO (First-In, First-Out): Replaces the oldest page. Simple, but it can evict a frequently used page.

Optimal Page Replacement: Replaces the page that will not be used for the longest period of time. It gives the fewest faults but is impossible to implement in practice, as it requires future knowledge.

LRU (Least Recently Used): Replaces the page that has not been used for the longest time. A good approximation of Optimal.

MFU (Most Frequently Used): Replaces the page that has been referenced most often.

 

I/O Systems

The I/O subsystem manages communication between the computer and its I/O devices.

Device Drivers: Software modules that provide an interface between the OS and the I/O hardware.

Interrupt Handling: An interrupt is a signal from a device to the CPU notifying it of a state change (e.g., I/O completion). The OS's interrupt handler processes this signal.

DMA (Direct Memory Access): Allows I/O devices to transfer data directly to and from main memory without involving the CPU, reducing CPU overhead.

Disk Scheduling

The OS is responsible for using the disk hardware efficiently by minimizing disk head movement.

FCFS (First-Come, First-Served): Services requests in the order they arrive. Fair, but it does not minimize seek time.

SSTF (Shortest Seek Time First): Services the request with the shortest seek time from the current head position. Efficient, but it can lead to starvation of some requests.

SCAN (Elevator Algorithm): The disk arm starts at one end and moves towards the other, servicing all requests in its path; when it reaches the end, it reverses direction.

C-SCAN (Circular SCAN): Similar to SCAN, but when the head reaches the end, it quickly returns to the beginning without servicing requests on the return trip.

 

Security Concepts

Security in an OS refers to the protection of system resources against unauthorized access or modification.

Threats and Attacks: Malware (viruses, worms), phishing, denial-of-service attacks, and unauthorized access.

User Authentication: Mechanisms to verify a user's identity, such as passwords, biometric data, or multi-factor authentication.

Protection

Protection is a mechanism for controlling the access of processes to resources.

Access Control Lists (ACLs): A list associated with each file or resource that defines which users or processes have specific access rights.

Capabilities: A token or key held by a user that grants specific access rights to a resource.

Cryptography in Operating Systems

Cryptography is used to secure data stored on the system.

Disk Encryption: Encrypting entire disk partitions to protect data from physical theft.

Secure Boot: Using digital signatures to ensure that the OS kernel and other system components have not been tampered with.

A. CPU Scheduling Algorithms – Practice Problems

 

1. FCFS (First-Come, First-Served)

Problem

Four processes arrive at time 0 with burst times:

P1 = 6, P2 = 8, P3 = 7, P4 = 3

Solution

Order: P1 → P2 → P3 → P4. Gantt chart:

| P1 | P2 | P3 | P4 |

0 → 6 → 14 → 21 → 24

Completion times: P1 = 6, P2 = 14, P3 = 21, P4 = 24

Turnaround Time (TAT) = Completion − Arrival (all arrived at 0): P1 = 6, P2 = 14, P3 = 21, P4 = 24 → Avg TAT = 16.25

Waiting Time (WT) = TAT − Burst: P1 = 0, P2 = 6, P3 = 14, P4 = 21 → Avg WT = 10.25

2. SJF – Non-Preemptive (Shortest Job First)

Problem

Processes (Arrival, Burst):

P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5)

Solution

At 0 → only P1 has arrived; it runs 0–8.
At 8 → ready: P2, P3, P4 → pick the shortest → P2 runs 8–12.
Then P4 (12–17), then P3 (17–26).

Gantt chart:

| P1 | P2 | P4 | P3 |

0 → 8 → 12 → 17 → 26

Turnaround Times: P1 = 8, P2 = 11, P3 = 24, P4 = 14 → Avg = 14.25

Waiting Times: P1 = 0, P2 = 7, P3 = 15, P4 = 9 → Avg = 7.75

3. SRT (Shortest Remaining Time First) – Preemptive SJF

Problem

Same processes and arrival times as in problem 2.

Walkthrough:

P1 runs 0–1 (remaining = 7).
At 1 → P2 arrives (burst = 4 < 7) → preempt P1 → P2 runs 1–5 → finishes.
At 2 → P3 arrives (9); at 3 → P4 arrives (5).
After P2 → remaining: P1 (7), P3 (9), P4 (5) → pick P4 → runs 5–10 → finishes.
Then P1 runs 10–17 → finishes.
Then P3 runs 17–26 → finishes.

Gantt chart:

| P1 | P2 | P4 | P1 | P3 |

0–1, 1–5, 5–10, 10–17, 17–26

TAT: P1 = 17, P2 = 4, P3 = 24, P4 = 7 → Avg = 13

WT: P1 = 9, P2 = 0, P3 = 15, P4 = 2 → Avg = 6.5

4. Round-Robin (RR) with Time Quantum

Problem

Processes (Arrival, Burst):

P1 (0, 10), P2 (0, 4), P3 (0, 6) ; quantum = 3

Solution

Execution order:

P1 (0–3, rem = 7)
P2 (3–6, rem = 1; the quantum is 3, so P2 is preempted after 3 of its 4 units)
P3 (6–9, rem = 3)
P1 (9–12, rem = 4)
P2 (12–13, rem = 0, finishes)
P3 (13–16, rem = 0, finishes)
P1 (16–20, rem = 0, finishes; after its quantum expires at 19 the ready queue is empty, so P1 simply continues)

Gantt chart:

| P1 | P2 | P3 | P1 | P2 | P3 | P1 |

0–3, 3–6, 6–9, 9–12, 12–13, 13–16, 16–20

Completion Times: P1 = 20, P2 = 13, P3 = 16

TAT (all arrived at 0): P1 = 20, P2 = 13, P3 = 16 → Avg ≈ 16.33

WT = TAT − Burst: P1 = 10, P2 = 9, P3 = 10 → Avg ≈ 9.67

5. Priority Scheduling (Non-Preemptive)

Problem

Processes (Arrival, Burst, Priority (lower = higher)):

P1 (0, 10, priority 3)
P2 (0, 1, priority 1)
P3 (0, 2, priority 4)
P4 (0, 1, priority 2)

Solution

Order by priority: P2 → P4 → P1 → P3

Gantt chart:

| P2 | P4 | P1 | P3 |

0–1, 1–2, 2–12, 12–14

Completion Times: P2 = 1, P4 = 2, P1 = 12, P3 = 14

TAT: P2 = 1, P4 = 2, P1 = 12, P3 = 14 → Avg = 7.25

WT: P2 = 0, P4 = 1, P1 = 2, P3 = 12 → Avg = 3.75
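Round-Robin traces are easy to get wrong by hand: with a quantum of 3, a process with burst 4 is preempted after 3 units, and the total schedule length must equal the sum of the bursts. A small simulator for problem 4 (a sketch; all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round-Robin for processes that all arrive at time 0.
    Returns {process: completion_time}."""
    ready = deque(bursts)                  # FIFO ready queue
    remaining = dict(bursts)
    clock, completion = 0, {}
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])   # run one quantum, or finish
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock
        else:
            ready.append(p)                # preempted: back of the queue
    return completion

ct = round_robin({"P1": 10, "P2": 4, "P3": 6}, quantum=3)
print(ct)   # P2 finishes at 13, P3 at 16, P1 at 20
```

The completion times sum-check: the last process finishes exactly at 10 + 4 + 6 = 20 total units of CPU time.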

 

B. Memory Management Algorithms – Practice Problems

 

1. Page Replacement: FIFO

Problem

Frames = 3; Reference string:

7, 0, 1, 2, 0, 3, 0, 4

Solution

Start empty. Load 7, 0, 1 → 3 faults.
2 replaces 7 → fault #4.
0 → hit.
3 replaces 0 → fault #5.
0 replaces 1 → fault #6.
4 replaces 2 → fault #7.

Total Page Faults = 7
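The FIFO count can be verified with a short simulation (a sketch; a set holds the resident pages and a deque remembers load order):

```python
from collections import deque

def fifo_faults(reference, n_frames):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in reference:
        if page in frames:
            continue                        # hit: nothing changes
        faults += 1
        if len(frames) == n_frames:
            frames.remove(order.popleft())  # evict the oldest-loaded page
        frames.add(page)
        order.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 7
```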

 

2. Page Replacement: LRU

Problem

Same parameters (3 frames; same string).

Solution

Track recency:

Load 7, 0, 1 → 3 faults.
2 replaces LRU (7) → fault #4.
0 → hit.
3 replaces LRU (1) → fault #5.
0 → hit.
4 replaces LRU (2) → fault #6.

Total Faults = 6
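An LRU sketch using collections.OrderedDict to track recency (oldest use at the front, newest at the back):

```python
from collections import OrderedDict

def lru_faults(reference, n_frames):
    """Count page faults under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in reference:
        if page in frames:
            frames.move_to_end(page)    # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)  # evict the least recently used
        frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6
```

The one fault saved over FIFO comes from the second reference to 0: LRU keeps 0 resident because the hit refreshed its recency, while FIFO evicted it by load order.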

 

3. Optimal Page Replacement

Problem

Same frames and reference string.

Solution

Choose the page not needed for the longest time ahead:

After the initial loads (7, 0, 1):
2 replaces 7 (7 is never referenced again) → fault #4.
0 → hit.
3 replaces 1 (1 is never referenced again, while 0 is) → fault #5.
0 → hit.
4 replaces any resident page, since none of them is referenced again → fault #6.

Total Faults = 6

(Optimal = theoretical lower bound.)
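Although Optimal cannot run online, it can be simulated offline because the whole reference string is known in advance (a sketch):

```python
def optimal_faults(reference, n_frames):
    """Count page faults under Optimal (Belady's) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(reference):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            # Evict the resident page whose next use is farthest away;
            # pages never used again count as infinitely far.
            def next_use(p):
                future = reference[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6
```

On this short string LRU happens to match Optimal's 6 faults; on longer strings Optimal is a strict lower bound.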

 

4. Contiguous Allocation (Best Fit)

Problem

Process sizes: P1=212KB, P2=417KB, P3=112KB, P4=426KB

Free holes: 100KB, 500KB, 200KB, 300KB, 600KB

Allocate with Best Fit.

Solution

P1 (212 KB) → best fit = 300 KB hole
P2 (417 KB) → best fit = 500 KB hole
P3 (112 KB) → best fit = 200 KB hole
P4 (426 KB) → best fit = 600 KB hole
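A best-fit sketch (simplified: each hole is consumed whole rather than split into an allocation plus a leftover, which does not change the answer for this instance):

```python
def best_fit(holes, requests):
    """Place each request into the smallest free hole that still fits.
    Returns {process: size of the hole used}."""
    free = list(holes)
    placement = {}
    for name, size in requests:
        candidates = [h for h in free if h >= size]
        if candidates:
            best = min(candidates)   # smallest hole that is big enough
            placement[name] = best
            free.remove(best)
    return placement

print(best_fit([100, 500, 200, 300, 600],
               [("P1", 212), ("P2", 417), ("P3", 112), ("P4", 426)]))
# {'P1': 300, 'P2': 500, 'P3': 200, 'P4': 600}
```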

 

5. Paging: Logical-to-Physical Address Translation

Problem

Page size = 4 KB. Logical address = 12,345. Page table entry for page 3 = frame 18.

Solution

Page number = 12,345 ÷ 4096 = 3 (integer division)
Offset = 12,345 mod 4096 = 57
Physical frame = 18 → physical address = (18 × 4096) + 57 = 73,785
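The same translation in code (the page size and the single page-table entry are taken from the problem):

```python
PAGE_SIZE = 4096   # 4 KB

def translate(logical, page_table):
    """Split a logical address into (page, offset) and rebase the offset
    onto the frame given by the page table."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(12_345, {3: 18}))   # 73785 (frame 18, offset 57)
```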

 

1. Deadlock Handling Techniques – Problems with Solutions

Problem 1: Deadlock Prevention

Problem:

A system has 3 resources: R1, R2, and R3. Three processes P1, P2, and P3 are currently holding and requesting resources as follows:

| Process | Holding | Requesting |
| :--- | :--- | :--- |
| P1 | R1 | R2 |
| P2 | R2 | R3 |
| P3 | R3 | R1 |

Question: Is there a deadlock? How can it be prevented?

Solution:

All 4 Coffman conditions hold:

Mutual exclusion – Yes
Hold and wait – Yes
No preemption – Yes
Circular wait – Yes (P1 → P2 → P3 → P1)

✅ Deadlock exists.

Prevention technique: Break circular wait by imposing a global resource ordering (e.g., R1 < R2 < R3) and forcing processes to request resources in that order.

Problem 2: Deadlock Avoidance – Banker's Algorithm

System:

3 processes: P0, P1, P2
3 resource types: A (10 instances), B (5 instances), C (7 instances)

Allocation and Maximum needs:

| Process | Allocation (A B C) | Max Need (A B C) |
| :--- | :--- | :--- |
| P0 | 0 1 0 | 7 5 3 |
| P1 | 2 0 0 | 3 2 2 |
| P2 | 3 0 2 | 9 0 2 |

Available = Total − Allocated: A = 10 − 5 = 5, B = 5 − 1 = 4, C = 7 − 2 = 5

Question: Is the system in a safe state?

Solution:

Calculate Need = Max − Allocation:

| Process | Need (A B C) |
| :--- | :--- |
| P0 | 7 4 3 |
| P1 | 1 2 2 |
| P2 | 6 0 0 |

Check for a safe sequence, starting with Available = 5 4 5:

P1's need (1 2 2) ≤ Available → run P1; it releases its allocation (2 0 0) → Available = 7 4 5.
P2's need (6 0 0) ≤ Available → run P2; it releases (3 0 2) → Available = 10 4 7.
P0's need (7 4 3) ≤ Available → run P0; it releases (0 1 0) → all resources are free.

✅ Safe sequence: P1 → P2 → P0
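A sketch of the multi-resource safety algorithm. Available is derived from the stated totals (A=10, B=5, C=7) minus the allocation columns, i.e. (5, 4, 5). The tie-break when several processes are runnable is arbitrary, so this version finds P1 → P0 → P2, which is equally safe:

```python
def bankers_safe(total, alloc, max_need):
    """Multi-resource Banker's safety check. All quantities are
    per-resource tuples; returns a safe sequence or None."""
    n = len(total)
    available = [total[j] - sum(alloc[p][j] for p in alloc) for j in range(n)]
    need = {p: [max_need[p][j] - alloc[p][j] for j in range(n)] for p in alloc}
    sequence, remaining = [], set(alloc)
    while remaining:
        runnable = [p for p in remaining
                    if all(need[p][j] <= available[j] for j in range(n))]
        if not runnable:
            return None                  # no process can finish: unsafe
        p = sorted(runnable)[0]          # deterministic (but arbitrary) pick
        for j in range(n):
            available[j] += alloc[p][j]  # p finishes and releases everything
        sequence.append(p)
        remaining.remove(p)
    return sequence

seq = bankers_safe(
    total=(10, 5, 7),
    alloc={"P0": (0, 1, 0), "P1": (2, 0, 0), "P2": (3, 0, 2)},
    max_need={"P0": (7, 5, 3), "P1": (3, 2, 2), "P2": (9, 0, 2)})
print(seq)   # a safe sequence exists, so the state is safe
```

Any ordering in which every process's remaining need fits into the running Available total is a valid safe sequence; the algorithm only has to find one.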

 Problem 3: Deadlock Detection

Given:

Processes: P1, P2, P3
Resources: R1, R2

Allocation & Request:

| Process | Allocated | Requesting |
| :--- | :--- | :--- |
| P1 | R1 | R2 |
| P2 | R2 | R1 |
| P3 | - | - |

Question: Can you detect a deadlock?

Solution:

Construct resource allocation graph:

P1 → R2, R1 → P1
P2 → R1, R2 → P2

=> Cycle exists: P1 ↔ P2

✅ Deadlock detected between P1 and P2

 

 2. Disk Scheduling Algorithms – Problems with Solutions

 

Problem 1: FCFS (First-Come, First-Served)

Request queue: 98, 183, 37, 122, 14, 124, 65, 67

Initial head: 53

Solution:

Total head movement:

| From | To | Movement |
| :--- | :--- | :--- |
| 53 | 98 | 45 |
| 98 | 183 | 85 |
| 183 | 37 | 146 |
| 37 | 122 | 85 |
| 122 | 14 | 108 |
| 14 | 124 | 110 |
| 124 | 65 | 59 |
| 65 | 67 | 2 |

✅ Total movement = 640 tracks
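FCFS head movement is just the sum of absolute differences along the request queue:

```python
def seek_total(start, order):
    """Total head movement when requests are serviced in the given order."""
    total, pos = 0, start
    for track in order:
        total += abs(track - pos)   # distance for this seek
        pos = track
    return total

request_queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(seek_total(53, request_queue))   # 640
```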

 

 Problem 2: SSTF (Shortest Seek Time First)

Same queue and head (53)

Step-by-step:

Nearest to 53 → 65 (12)
65 → 67 (2)
67 → 37 (30)
37 → 14 (23)
14 → 98 (84)
98 → 122 (24)
122 → 124 (2)
124 → 183 (59)

✅ Total movement = 236 tracks
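An SSTF sketch that greedily picks the nearest pending track at each step:

```python
def sstf_order(start, requests):
    """Repeatedly service the pending request closest to the head."""
    pending, pos, order = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        order.append(nearest)
        pending.remove(nearest)
        pos = nearest
    return order

request_queue = [98, 183, 37, 122, 14, 124, 65, 67]
order = sstf_order(53, request_queue)
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
total = sum(abs(b - a) for a, b in zip([53] + order, order))
print(total)   # 236
```

The greedy choice is what causes SSTF's starvation risk: requests far from the current cluster keep losing to nearer arrivals.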

 

 Problem 3: SCAN Algorithm

Initial head: 53, moving up

Queue: 14, 37, 65, 67, 98, 122, 124, 183

Sort ascending: 14, 37, 65, 67, 98, 122, 124, 183
Go up from 53 → 65, 67, 98, 122, 124, 183
Then reverse and go to 37, 14

Movement:

53 → 183 = 130
183 → 14 = 169

✅ Total = 299 tracks

 

Problem 4: C-SCAN Algorithm

Same as above, but no reverse movement

Go up from 53 → 65, 67, 98, 122, 124, 183
Go to the end of the disk (track 199), then jump to 0 (the jump itself is not counted)
Then service 14, 37

Movement:

53 → 183 = 130
183 → 199 = 16
Jump to 0 (not counted)
0 → 14 = 14
14 → 37 = 23

✅ Total = 130 + 16 + 14 + 23 = 183 tracks
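Both sweep totals can be reproduced in a few lines. Conventions vary by textbook on whether the trip to the disk edge and the return jump are counted; this sketch follows the counting used above, with max_track = 199 assumed for C-SCAN:

```python
def scan_total(start, requests):
    """SCAN: sweep up through the requests, then reverse for the rest.
    The head reverses at the highest request (it need not reach the edge)."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    total = 0
    if up:
        total += up[-1] - start                        # sweep up
    if down:
        total += (up[-1] if up else start) - down[-1]  # reverse down
    return total

def cscan_total(start, requests, max_track=199):
    """C-SCAN: sweep up to the end of the disk, jump back to track 0
    (the jump is not counted here), then sweep up to the last low request."""
    low = [t for t in requests if t < start]
    total = max_track - start          # up to the end of the disk
    if low:
        total += max(low)              # from track 0 up to the last low request
    return total

request_queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_total(53, request_queue))    # 299
print(cscan_total(53, request_queue))   # 183
```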