
Biyani's Think Tank

Concept based notes

Operating System
MCA

Poonam Sharma
Dept. of IT
Biyani Girls College, Jaipur

Published by :
Think Tanks
Biyani Group of Colleges

Concept & Copyright :


Biyani Shikshan Samiti
Sector-3, Vidhyadhar Nagar,
Jaipur-302 023 (Rajasthan)
Ph : 0141-2338371, 2338591-95 Fax : 0141-2338007
E-mail : acad@biyanicolleges.org
Website :www.gurukpo.com; www.biyanicolleges.org

Edition : 2012

While every effort has been taken to avoid errors or omissions in this publication, any
mistake or omission that may have crept in is not intentional. Please note that neither the
publisher nor the author will be responsible for any damage or loss of any kind arising to
anyone in any manner on account of such errors and omissions.

Laser Typeset by :


Biyani College Printing Department

Preface
I am glad to present this book, especially designed to serve the needs of the
students. The book has been written keeping in mind the general weakness in
understanding the fundamental concepts of the topics. The book is self-explanatory and
adopts the Teach Yourself style. It is based on a question-answer pattern. The language
of the book is easy and understandable, and its approach is scientific.
Any further improvement in the contents of the book through corrections,
omissions and inclusions will be gladly made on the basis of suggestions from the
readers, for which the author shall be obliged.
I acknowledge special thanks to Mr. Rajeev Biyani, Chairman, and Dr. Sanjay Biyani,
Director (Acad.), Biyani Group of Colleges, who are the backbone and main concept
providers and who have been a constant source of motivation throughout this endeavour.
They played an active role in coordinating the various stages of this endeavour and
spearheaded the publishing work.
I look forward to receiving valuable suggestions from professors of various
educational institutions, other faculty members and students for improvement of the
quality of the book. Readers may feel free to send their comments and suggestions
to the undermentioned address.
Author

Syllabus
M.C.A. Sem.-II
OPERATING SYSTEM (MCA-204)
Introduction: Definition and types of operating systems, Batch Systems, multi
programming, timesharing parallel, distributed and real-time systems, Operating
system structure, Operating system components and services, System calls, system
programs, Virtual machines.
Process Management: Process concept, Process scheduling, Cooperating processes,
Threads, Inter-process communication, CPU scheduling criteria, Scheduling algorithms,
Multiple processor scheduling, Real-time scheduling and Algorithm evaluation.
Process Synchronization and Deadlocks: The Critical-Section problem, synchronization
hardware, Semaphores, Classical problems of synchronization, Critical regions,
Monitors, Deadlocks-System model, Characterization, Deadlock prevention, Avoidance
and Detection, Recovery from deadlock, Combined approach to deadlock handling.
Storage management: Memory Management-Logical and Physical Address Space,
Swapping, Contiguous Allocation, Paging, Segmentation with paging, Virtual Memory,
Demand paging and its performance, Page replacement algorithms, Allocation of
frames, Thrashing, Page Size and other considerations, Demand segmentation,
File systems, Secondary storage structure, File concept, access methods, directory
implementation, Efficiency and performance, recovery, Disk structure, Disk scheduling
methods, Disk management, Swap-space management, Disk reliability.
Protection and Security-Goals of protection, Domain of protection, Access matrix,
Implementation of access Matrix, Revocation of Access Rights, language based
protection, The Security problem, Authentication, One Time passwords, Program
threats, System threats, Threat Monitoring, Encryption.
Case study: Windows NT-Design principles, System components, Environmental
subsystems, File system, Networking and program interface.



Content
S.No. Name of Topic
1. Introduction to Operating System
1.1 Definition of Operating system
1.2 Types of operating system
1.3 Operating system structure
1.4 Operating system components and services
1.5 System calls, system programs
1.6 Virtual machines.

2. Process Management
2.1 Process concept, Process scheduling
2.2 Cooperating processes
2.3 Threads
2.4 Inter-process communication
2.5 CPU scheduling criteria
2.6 Scheduling algorithms & Algorithm evaluation.
2.7 Multiple processor scheduling
2.8 Real-time scheduling

3. Process synchronization and deadlocks


3.1 The Critical-Section problem, Critical regions
3.2 synchronization hardware
3.3 Semaphores
3.4 Classical problems of synchronization
3.5 Monitors
3.6 Introduction to Deadlocks, System model
3.7 Necessary conditions for deadlock
3.8 Deadlock prevention, Avoidance

3.9 Detection, Recovery from deadlock


3.10 Combined approach to deadlock handling



4. Primary memory management
4.1 Logical and Physical Address Space, Swapping
4.2 Contiguous Allocation
4.3 Paging, Segmentation with paging
4.4 Virtual Memory, Demand paging and its
performance,
4.5 Page replacement algorithms, Allocation of frames
4.6 Thrashing, Page Size and other considerations
4.7 Demand segmentation

5. Secondary storage management


5.1 File systems
5.2 secondary Storage Structure
5.3 File concept, access methods
5.4 directory implementation, Efficiency and
performance, recovery
5.5 Disk structure
5.6 Disk scheduling methods
5.7 Disk management, Recovery, Disk reliability
5.8 Swap-Space management.

6. Protection and security


6.1 Goals of protection,
6.2 Access matrix, Implementation of access Matrix

6.3 Revocation of Access Rights, Language-based protection
6.4 The Security problem
6.5 Authentication, One Time passwords
6.6 Program threats, System threats, Threat Monitoring
6.7 Encryption.

Chapter-1
Introduction to Operating System
Q1 What do you mean by Operating System?
Ans: Operating System:- An operating system is system software that acts as an
intermediary between a user of a computer and the computer hardware. The
purpose of an operating system is to provide an environment in which a user can
execute programs. An operating system is an important part of almost every
computer system. It is basically a control program that controls the execution of
user programs to prevent errors and improper use of the computer. A computer
system can be divided roughly into four components: the hardware,
the operating system, the application programs, and the users.

Q2 Write about various types of Operating systems.


Ans Evolution of Operating Systems:- The various types of operating systems that
evolved can be briefly described as follows:-
1- Batch System:- This type of operating system was used in the early days of
computing. To speed up processing, jobs with similar needs were batched together
and were run through the computer as a group. The defining feature of a batch
system is the lack of interaction between the user and the job while that job is
executing. In this execution environment, the CPU is often idle.

Memory layout for a batch system

In this execution environment, the CPU is often idle, because the speeds of
the mechanical I/O devices are intrinsically slower than those of electronic
devices. The introduction of disk technology allowed the operating system to
keep all jobs on a disk, rather than in a serial card reader. With direct access to
several jobs, the operating system could perform job scheduling, to use resources
and perform tasks efficiently.
2- Multiprogramming Systems:- In this type of operating system, more than one
program resides in main memory. The operating system picks and begins
to execute one of the jobs in memory. Eventually, the job may have to wait
for some task; the operating system then simply switches to and executes another
job. When the first job finishes waiting, it gets the CPU back. As long as there is
always some job to execute, the CPU will never be idle.

Memory layout for a multiprogramming system

3 Time Sharing Systems:- A time-shared operating system allows many users to
share the computer simultaneously. A time-shared operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of a
time-shared computer. Time sharing (or multitasking) is a logical extension of
multiprogramming.
The CPU executes multiple jobs by switching among them, but the switches
occur so frequently that the users can interact with each program while it is
running. As the system switches rapidly from one user to the next, each user is
given the impression that the entire computer system is dedicated to her use,
even though it is being shared among many users.

4 Real Time Operating System:- A real-time operating system is a special-purpose
operating system, used when there are rigid time requirements on the operation
of a processor or the flow of data. Thus, it is often used as a control device in a
dedicated application. Sensors bring data to the computer. The computer must
analyze the data and possibly adjust controls to modify the sensor inputs.
Systems that control scientific experiments, medical imaging systems,
industrial control systems, and certain display systems are real-time systems.

5 Parallel System:- Such systems have more than one processor in close
communication, sharing the computer bus, the clock, and sometimes memory
and peripheral devices. Hence parallel systems are also called tightly coupled
systems. There is a large amount of data to be shared by the different processors.
These systems have three main advantages: increased throughput, economy of
scale, and increased reliability.

6 Distributed System:- A distributed system is an interconnection of two or more
nodes, but the processors do not share memory. These systems are also called
loosely coupled systems. Distributed systems depend on networking for their
functionality. By being able to communicate, distributed systems are able to
share computational tasks, and provide a rich set of features to users.

Q3 Define various components of operating system in brief.


Ans Components of operating system:- The various components of an operating
system are:-
1- Process Management:- A program does nothing unless its instructions are
executed by a CPU. A process can be thought of as a program in execution. The
operating system is responsible for the following activities in connection with
process management:

Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling

2- Memory Management:- The main memory is central to the operation of a
computer system. The operating system is responsible for the following activities
in connection with memory management:
Keeping track of which parts of memory are currently being used and by
whom
Deciding which processes are to be loaded into memory when memory
space becomes available
Allocating and deallocating memory space as needed

3- File Management:- File management is one of the most visible components of an
operating system. Computers can store information on several different types of
physical media. The operating system is responsible for the following activities
in connection with file management:
Creating and deleting files
Creating and deleting directories
Supporting primitives for manipulating files and directories
Mapping files onto secondary storage
Backing up files on stable (nonvolatile) storage media

4- I/O-System Management:- One of the purposes of an operating system is to hide
the peculiarities of specific hardware devices from the user. The I/O subsystem
consists of:
A memory-management component that includes buffering, caching, and
spooling
A general device-driver interface
Drivers for specific hardware devices

5- Secondary-Storage Management:- The operating system is responsible for the
following activities in connection with disk management:
Free-space management
Storage allocation
Disk scheduling

Q4 Define operating system services and system programs in brief.


Ans Operating-System Services:- An operating system provides an environment for
the execution of programs. These operating-system services are provided for the
convenience of the programmer, to make the programming task easier.

Program execution: The system must be able to load a program into memory and
to run that program. The program must be able to end its execution, either
normally or abnormally.

I/O operations: A running program may require I/O. This I/O may involve a
file or an I/O device. For specific devices, special functions may be desired. For
efficiency and protection, the operating system must provide a means to do I/O.
File-system manipulation: Programs need to read and write files. Programs also
need to create and delete files by name.
Communications: In many situations one process needs to exchange information
with another process. Communications may be implemented via shared
memory, or by the technique of message passing, in which packets of
information are moved between processes by the operating system.

Error detection: The operating system constantly needs to be aware of possible
errors. For each type of error, the operating system should take the appropriate
action to ensure correct and consistent computing.

In addition, another set of operating-system functions exists not for helping the
user, but for ensuring the efficient operation of the system itself. Systems with
multiple users can gain efficiency by sharing the computer resources among the
users. These functions include resource allocation, accounting, protection, etc.

Q5 What do you mean by a system call?



Ans System Calls:- System calls provide the interface between a process and the
operating system. These calls are generally available as assembly-language
instructions. Certain systems allow system calls to be made directly from a
higher-level language program, in which case the calls normally resemble
predefined function or subroutine calls. They may generate a call to a special
run-time routine that makes the system call, or the system call may be generated
directly.
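
The notes do not include code, so the following is only an illustrative sketch, assuming a POSIX system such as Linux: getpid() and write() are C library wrappers that trap into the kernel, which is how a system call is typically made from a higher-level language program.

    /* Illustrative sketch (assumes a POSIX system such as Linux). */
    #include <stdio.h>      /* snprintf()        */
    #include <unistd.h>     /* write(), getpid() */

    int main(void)
    {
        char buf[64];
        /* getpid() traps into the kernel and returns this process's ID. */
        int len = snprintf(buf, sizeof(buf), "My process ID is %d\n", (int)getpid());

        /* write() transfers bytes to file descriptor 1 (standard output). */
        write(1, buf, (size_t)len);
        return 0;
    }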

Q6 What do you understand by system programs?


Ans system programs:- System programs provide a convenient environment for
program development and execution. Some of them are simply user interfaces to
system calls; others are considerably more complex.
They can be divided into these categories:

File management: These programs create, delete, copy, rename, print, list, and
generally manipulate files and directories.
Status information: Some programs simply ask the system for the date, time,
amount of available memory or disk space, number of users, or similar status
information.

File modification: Several text editors may be available to create and modify the
content of files stored on disk or tape.
Programming-language support: Compilers, assemblers, and interpreters for
common programming languages are often provided to the user with the
operating system.

Program loading and execution: Once a program is assembled or compiled, it must
be loaded into memory to be executed. The system may provide absolute
loaders, linkage editors, etc. Debugging systems are also needed.

Communications: These programs provide the mechanism for creating virtual
connections among processes, users, and different computer systems. They allow
users to send messages, to browse web pages, to send electronic-mail messages,
to log in remotely, or to transfer files from one machine to another.

Most operating systems are supplied with programs that solve common
problems or perform common operations. Such programs include web browsers,
word processors, database systems, compilers, etc. These programs are known as
system utilities or application programs.

Q7 What is a virtual machine?


Ans Virtual machine:- A computer system is made up of layers. The hardware is the
lowest level in all such systems. The kernel running at the next level uses the
hardware instructions to create a set of system calls for use by outer layers. The
system programs above the kernel are therefore able to use either system calls or
hardware instructions, and in some ways these programs do not differentiate
between the two. By using CPU scheduling and virtual-memory techniques, an
operating system can create the illusion that a process has its own processor with
its own (virtual) memory. The virtual-machine approach does not provide any
additional functionality, but rather provides an interface that is identical to the
underlying bare hardware. Each process is provided with a (virtual) copy of the
underlying computer. The physical computer shares its resources to create the
virtual machines.

Chapter-2
Process Management
Q1 Write the definition of process.
Ans Process Definition:- In general, a process is a program in execution. The
execution of a process progresses in a sequential fashion. A program is a passive
entity, such as the contents of a file stored on disk, whereas a process is an active
entity, with a program counter specifying the next instruction to execute and a
set of associated resources. A process is more than the program code; it also
includes the current activity, as represented by the value of the program counter
and the contents of the processor's registers. In addition, a process generally
includes the process stack, which contains temporary data, and a data section,
which contains global variables.

Q2 Explain the different states of a process with an example, or
describe the life cycle of a process.
Ans Process States:- The various stages through which a process passes are called its
life cycle, and this life cycle can be divided into several stages called process
states. As a process executes, it moves from one state to another. Each process
may be in one of the following states:-
1- New: The process is being created.

2- Ready: The process is waiting to be assigned to a processor.

3- Running: Instructions are being executed.

4- Waiting: The process is waiting for some event to occur (such as an I/O
completion)

5- Terminated: The process has finished execution.



Q3 Explain the Process Control Block (PCB) with diagram.


Ans Process Control Block:- To control the various processes, the operating system
maintains a table called the process table. The process table contains one entry
per process. These entries are referred to as process control blocks (PCBs). A PCB
contains many pieces of information related to a specific process, including the:
Process Number, Process State, Program Counter, CPU Registers, CPU Scheduling
Information, Memory Management Information, Accounting Information, and
I/O Status Information.

Process control block (PCB).
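
As an illustration only (the field names below are hypothetical, since every operating system defines its own PCB layout), a process control block can be sketched as a C structure:

    /* Hypothetical PCB layout for illustration; real kernels differ. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;              /* process number                    */
        proc_state_t   state;            /* current process state             */
        unsigned long  program_counter;  /* address of the next instruction   */
        unsigned long  registers[16];    /* saved CPU registers               */
        int            priority;         /* CPU-scheduling information        */
        unsigned long  base, limit;      /* memory-management information     */
        unsigned long  cpu_time_used;    /* accounting information            */
        int            open_files[32];   /* I/O status information            */
        struct pcb    *next;             /* link for the ready/device queues  */
    } pcb_t;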

Q4 What is the requirement of CPU scheduling? Explain.


Ans CPU Scheduling:- CPU scheduling is the basis of multiprogrammed operating
systems. By switching the CPU among processes, the operating system can
make the computer more productive. With multiprogramming, several processes
are kept in memory at one time. When one process has to wait, the operating
system takes the CPU away from that process and gives the CPU to another
process. The main objectives of scheduling are to increase CPU utilization and
throughput.

Q5 What is the difference between CPU bound and I/O bound processes?
Ans I/O bound processes:- An I/O-bound process is one that spends more of its time
doing I/O than doing computation.
CPU bound processes:- A CPU-bound process needs very little I/O but requires
heavy computation; it spends most of its time doing computation.

For a balanced system, the scheduler should select a good mix of I/O-bound
and CPU-bound processes. If all processes are I/O bound, the ready queue will
almost always be empty, and the short-term scheduler will have little to do.
If all processes are CPU bound, the I/O waiting queue will almost always be
empty, devices will go unused, and again the system will be unbalanced. The
system with the best performance will have a combination of CPU-bound and
I/O-bound processes.

Q6 What are the different types of scheduling queues?


Ans The scheduling queues in the system are:-
1- Job queue:- As processes enter the system, they are put into a job queue. This
queue consists of all processes in the system.
2- Ready queue:- The processes that are residing in main memory and are
ready and waiting to execute are kept in the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to
the first and final PCBs in the list.

3- Device queue:- This queue contains the list of processes waiting for a particular
I/O device. Each device has its own device queue.

Queuing-diagram

A new process is initially put in the ready queue. It waits in the ready queue
until it is selected for execution (or dispatched). Once the process is assigned to
the CPU and is executing, one of several events could occur:
The process could issue an I/O request, and then be placed in an I/O queue.
The process could create a new sub-process and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.

Q7 Write about the different types of schedulers.


Ans Types of schedulers:- There are basically three types of schedulers:-

Long Term Scheduler:- This scheduler determines which jobs will be admitted to
the system for processing. It selects jobs from the job pool and loads them into
memory. The long-term scheduler executes much less frequently; it may need to
be invoked only when a process leaves the system.
Because of the longer interval between executions, the long-term scheduler can
afford to take more time to select a process for execution. The long-term
scheduler controls the degree of multiprogramming (the number of processes in
memory).

Short Term Scheduler:- It is also called the CPU scheduler. It allocates processes
from the ready queue to the CPU for immediate processing. The short-term
scheduler must select a new process for the CPU frequently. A process may
execute for only a few milliseconds before waiting for an I/O request. Often, the
short-term scheduler executes at least once every 100 milliseconds. Because of the
brief time between executions, the short-term scheduler must be fast.

Medium Term Scheduler:- It is also called the memory scheduler. The medium-term
scheduler removes processes from memory and thus reduces the degree of
multiprogramming. At some later time, the process can be reintroduced into
memory and its execution can be continued where it left off. This scheme is
called swapping. During execution, processes are swapped out and swapped in
by the medium-term scheduler.

Q8 What are the different criteria for measuring the performance of scheduling
mechanism?
Ans Performance criteria for scheduling mechanism:-
The various criteria for measuring the scheduler performance are
1- CPU Utilization:- We want to keep the CPU as busy as possible. CPU
utilization may range from 0 to 100 percent.

2- Throughput:- The measure of work is the number of processes completed per


time unit, called throughput.

3- Turnaround Time:- The interval from the time of submission of a process to
the time of completion is the turnaround time. Turnaround time is the sum of the
periods spent waiting to get into memory, waiting in the ready queue, executing
on the CPU, and doing I/O.

4- Waiting time:- Waiting time is the sum of the periods spent waiting in the
ready queue.

5- Response Time:- The time from the submission of a request until the first
response is produced. This measure, called response time, is the amount of time
it takes to start responding, but not the time that it takes to output that response.

Q9 Differentiate between non-preemptive and preemptive scheduling
mechanisms.
Ans Non-preemptive scheduling mechanism:-

A non-preemptive scheduling algorithm selects a process to run and just lets it
run until it blocks or until it voluntarily releases the CPU. Hence, under non-
preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases it either by terminating or by switching to
the waiting state. This scheduling method was used by Microsoft Windows 3.1
and by the Apple Macintosh operating systems.

Preemptive scheduling mechanism:- In this category, a running process may be
suspended (preempted) before it finishes, so that the CPU can be given to another
process, for example one of higher priority or one whose time slice has expired.

Q10 Explain various non-preemptive scheduling mechanisms.


Ans Non-preemptive scheduling mechanisms:- Non-preemptive scheduling
mechanisms are:-
1- First Come First Served (FCFS) Scheduling:- With this scheme, the process that
requests the CPU first is allocated the CPU first. The implementation of the FCFS
policy is easily managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then
removed from the queue. The code for FCFS scheduling is simple to write and
understand.
The average waiting time under the FCFS policy, however, is often quite long.

For example:- Consider the following set of processes that arrive at time 0, with
the length of the CPU-burst time given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order,
the waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2,
and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds.

If the processes arrive in the order P2, P3, P1, however,
the average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction
is substantial. Thus, the average waiting time under a FCFS policy is generally
not minimal, and may vary substantially if the process CPU-burst times vary
greatly.

2- Shortest-Job First (SJF) Scheduling:- In this scheme, the job requiring the minimal
CPU time is selected first for CPU allocation.
For example:- Consider the following set of processes that arrive at time 0, with
the length of the CPU-burst time given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3

If the processes are executed in the order P2, P3, P1, according to minimum CPU-burst
time, the average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction
is substantial. Hence SJF provides the minimum average waiting time compared to
FCFS. (A short program that checks these averages appears after this list.)

3- Priority Scheduling:- In this scheme a priority is associated with each process


and the CPU is allocated to the process with the highest priority. Equal
priority processes are scheduled in FCFS order.

4- Deadline Scheduling:- With this scheduling algorithm the process with earliest
deadline is given the highest priority for execution.
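
The averages quoted in the FCFS and SJF examples above can be checked with a short program. This is only a sketch (assuming all processes arrive at time 0, as in the examples): it sums, for each process, the bursts of the processes that run before it.

    #include <stdio.h>

    /* Average waiting time when jobs run in the given order (all arrive at time 0). */
    static double avg_wait(const int burst[], int n)
    {
        int total_wait = 0, elapsed = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;     /* this job waits for all earlier jobs */
            elapsed    += burst[i];
        }
        return (double)total_wait / n;
    }

    int main(void)
    {
        int fcfs_order[] = {24, 3, 3}; /* P1, P2, P3 as in the example      */
        int sjf_order[]  = {3, 3, 24}; /* the same jobs sorted by CPU burst */

        printf("FCFS average waiting time: %.1f ms\n", avg_wait(fcfs_order, 3)); /* 17.0 */
        printf("SJF  average waiting time: %.1f ms\n", avg_wait(sjf_order, 3));  /*  3.0 */
        return 0;
    }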

Q11 Explain various preemptive scheduling mechanisms.


Ans Preemptive scheduling mechanisms:- Preemptive scheduling mechanisms are:-

1- Round-Robin Scheduling:- In this algorithm a small time slice (time quantum) is
assigned to each process. The CPU scheduler goes around the ready queue,
allocating the CPU to each process for a time interval of one time quantum.

2- Two Queue Scheduling:- In this approach, the processes are classified into two
different groups. One queue is allocated to CPU-bound processes and the other is
allocated to I/O-bound processes.

3- Multilevel Queue Scheduling:- A multilevel queue scheduling algorithm
partitions the ready queue into separate queues, and each queue has its own
scheduling algorithm.

Q12 What do you understand by a thread?


Ans Thread:- A thread can be defined as an asynchronous code path within a
process. Hence, in an OS which supports multithreading, a process can consist of
multiple threads, which can run simultaneously in the same way that a multi-
user OS supports multiple processes at the same time.
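
As an illustrative sketch only (assuming a POSIX system with the pthreads library, which the notes do not mention), two threads of one process can run the same function concurrently while sharing the process's address space:

    /* Minimal pthreads sketch; compile with: cc threads.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        /* Both threads share the address space but each has its own stack. */
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);    /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }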

Chapter-3
Process synchronization and deadlocks
Q1 Explain the concept of Synchronization.
Ans Synchronization:- The concept of synchronization is concerned with co-
operating processes that share some resources. Co-operating processes must
synchronize with each other when they use shared resources. Thus we can
view synchronization as a set of constraints on the ordering of events.

Q2 What do you mean by co-operating processes?


Ans Co-operating processes:- The concurrent processes executing in the Operating
system may be either independent processes or co-operating processes. A
process is co-operating if it can affect or be affected by the other processes
executing in the system. That means any process that shares data with other
processes is a co-operating process.

Q3 Write about semaphores and their usage.


Ans Semaphores:- A semaphore is a protected variable whose value can be accessed
and altered only by the operations P and V. A semaphore mechanism basically
consists of the two primitive operations WAIT (P) and SIGNAL (V). The semaphore
variable can assume only non-negative integer values. The wait and signal
operations on a semaphore must be executed indivisibly; that is, when one
process modifies the semaphore value, no other process can simultaneously
modify that same semaphore value.
Usage
1- Semaphores can be used to deal with the n-process critical-section problem.
2- Semaphores can also be used to solve various synchronization problems.
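
A minimal sketch of semaphore usage, written with POSIX semaphores as an assumption (the notes describe semaphores only abstractly): sem_wait plays the role of P/WAIT, sem_post plays the role of V/SIGNAL, and a semaphore initialised to 1 protects a critical section.

    /* Mutual exclusion with a binary semaphore; compile with -pthread (Linux). */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;          /* binary semaphore guarding the critical section */
    static long  counter = 0;    /* shared data                                    */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    /* P / WAIT  : decrement, block if value is 0 */
            counter++;           /* critical section                           */
            sem_post(&mutex);    /* V / SIGNAL: increment, wake up a waiter    */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);  /* initial value 1, so it behaves as a mutex */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        sem_destroy(&mutex);
        return 0;
    }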

Q4 Explain the concept of Inter-process communication.


Ans Inter-process communication:- Cooperating processes can communicate in a
shared-memory environment. Cooperating processes communicate with each
other via an inter-process-communication (IPC) facility. IPC provides a
mechanism to allow processes to communicate and to synchronize their actions.


Inter-process-communication is best provided by a message system. Message
systems can be defined in many different ways. An IPC facility provides at least
the two operations:
send(message) and receive(message).

Q5- What is the difference between direct and indirect inter-process
communication?
Ans 1- Direct Communication:- In direct communication, each process that wants
to communicate must explicitly name the recipient or sender of the
communication. In this scheme, the send and receive primitives are defined as
follows:
send(P, message): Send a message to process P.
receive(Q, message): Receive a message from process Q.

2- Indirect Communication:- With indirect communication, the messages are
sent to and received from mailboxes. A mailbox can be viewed abstractly as an
object into which messages can be placed by processes and from which messages
can be removed. The send and receive primitives are defined as follows:
send(A, message): Send a message to mailbox A.
receive(A, message): Receive a message from mailbox A.
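
For illustration only (the notes stay abstract, so the use of a POSIX pipe here is an assumption), message passing between a parent and a child process can be sketched as follows, with write() acting as send and read() as receive:

    /* Parent sends one message to its child over a pipe (POSIX sketch). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int  fd[2];                    /* fd[0]: read end, fd[1]: write end */
        char buf[64];

        pipe(fd);
        if (fork() == 0) {             /* child: the receiver */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive(message) */
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }
        close(fd[0]);                  /* parent: the sender */
        write(fd[1], "hello", 5);                            /* send(message)    */
        close(fd[1]);
        wait(NULL);
        return 0;
    }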

Q6 What is the deadlock situation? Explain.


Ans Deadlock:- In a multiprogramming environment, several processes may
compete for a finite number of resources. A process requests resources; if the
resources are not available at that time, the process enters a wait state. It may
happen that waiting processes will never again change state, because the
resources they have requested are held by other waiting processes. This
situation is called a deadlock.

Q7 What are the necessary conditions to produce a deadlock?


Ans Necessary conditions for deadlock:- A deadlock situation can arise if the
following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode;
that is, only one process at a time can use the resource.
2. Hold and wait: There must exist a process that is holding at least one resource
and is waiting to acquire additional resources that are currently being held by
other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is
held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting
for a resource that is held by P0.

Q8 What are the methods of handling a deadlock?


Ans Methods of handling deadlocks:- There are three different methods for dealing
with the deadlock problem:
We can use a protocol to ensure that the system will never enter a deadlock state.
We can allow the system to enter a deadlock state and then recover.
We can ignore the problem altogether, and pretend that deadlocks never occur
in the system.

Q9 Write the methods to prevent a deadlock situation.


Ans Methods to prevent a deadlock situation:- For a deadlock to occur, each of the
four necessary conditions must hold. By ensuring that at least one of these
conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. Sharable
resources, on the other hand, do not require mutually exclusive access, and thus
cannot be involved in a deadlock.

Hold and Wait
To ensure that the hold-and-wait condition never holds:
1. One protocol requires each process to request and be allocated all of its resources
before it begins execution.
2. An alternative protocol allows a process to request resources only when the
process is holding none.

No Preemption
If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are
released implicitly (preempted). The preempted resources are added to the list of
resources for which the process is waiting.

Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total
ordering of all resource types, and to require that each process requests resources
in an increasing order of enumeration.
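
As a small illustration (not taken from the notes), the resource-ordering idea can be shown with two pthread mutexes that every thread must acquire in the same fixed order; because no thread ever holds the second resource while waiting for the first, a circular wait cannot form:

    /* Deadlock prevention by resource ordering (pthreads sketch). */
    #include <pthread.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

    /* Every thread requests resources in increasing order: r1 before r2. */
    static void use_both_resources(void)
    {
        pthread_mutex_lock(&r1);
        pthread_mutex_lock(&r2);
        /* ... use both resources ... */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
    }

    int main(void)
    {
        use_both_resources();
        return 0;
    }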

Q10 How can a deadlock situation be avoided?


Ans Deadlock Avoidance:- Deadlock avoidance requires additional information
about how resources are to be requested. A deadlock-avoidance algorithm
dynamically examines the resource-allocation state to ensure that there can never
be a circular-wait condition. The resource-allocation state is defined by the
number of available and allocated resources, and the maximum demands of the
processes. There are various methods used for the purpose of deadlock
avoidance:-

Safe State
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. More formally, a system is
in a safe state only if there exists a safe sequence. If no such sequence exists, then
the system state is said to be unsafe.

Resource-Allocation Graph Algorithm
Suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not
result in the formation of a cycle in the resource-allocation graph.

Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a resource-allocation
system with multiple instances of each resource type. The deadlock-avoidance
algorithm that does handle multiple instances is commonly known as the banker's
algorithm.
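
The following is an illustrative sketch of the safety check at the heart of the banker's algorithm; the notes do not give pseudocode, so the matrices and numbers below are a made-up (textbook-style) example state:

    /* Safety algorithm of the banker's algorithm (illustrative sketch). */
    #include <stdbool.h>
    #include <stdio.h>

    #define P 5   /* number of processes      */
    #define R 3   /* number of resource types */

    static bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
    {
        int  work[R];
        bool finish[P] = { false };

        for (int j = 0; j < R; j++) work[j] = avail[j];

        for (int done = 0; done < P; ) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finish[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {                  /* Pi can finish and release its allocation */
                    for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;      /* no process can finish: state is unsafe */
        }
        return true;                            /* a safe sequence exists */
    }

    int main(void)
    {
        int avail[R]    = {3, 3, 2};            /* hypothetical example state */
        int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int need[P][R]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};

        printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
        return 0;
    }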

Q11 Write the methods to detect a deadlock.


Ans Deadlock Detection:- If a system does not employ either a deadlock-prevention
or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this
environment, the system must provide:
An algorithm that examines the state of the system to determine whether a
deadlock has occurred
An algorithm to recover from the deadlock

Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for
graph. We obtain this graph from the resource-allocation graph by removing the
nodes of type resource and collapsing the appropriate edges.

Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.
The data structures used are:
Available
Allocation
Request

Q12 Write the methods of recovery from deadlock.


Ans Recovery from Deadlock:- When a detection algorithm determines that a
deadlock exists, several alternatives exist for recovery. One possibility is to
inform the operator that a deadlock has occurred, and to let the operator deal
with the deadlock manually. The other possibility is to let the system recover
from the deadlock automatically. There are two options for breaking a deadlock.
One solution is simply to abort one or more processes to break the circular wait.
The second option is to preempt some resources from one or more of the
deadlocked processes.

Chapter-4
Primary Memory Management
Q1 What is difference between logical and physical addresses?
Ans Logical versus Physical Address Space:- An address generated by the CPU is
commonly referred to as a logical address, whereas an address seen by the
memory unit is commonly referred to as a physical address. The set of all logical
addresses generated by a program is referred to as a logical address space; the set
of all physical addresses corresponding to these logical addresses is referred to as
a physical address space. The user program deals with logical addresses. The
memory-mapping hardware converts logical addresses into physical addresses.

Q2 Explain various memory allocation techniques in brief.


Ans Memory allocation techniques:- There are various techniques used for memory
allocation, such as:-
1- Single-program Partition technique:- In single-user systems, the memory is
usually divided into two partitions, one for the resident operating system and
one for the user processes. The operating system normally resides in low
memory, and the user processes execute in high memory.
2- Fixed-Sized Partition Allocation:- One of the simplest schemes for memory
allocation is to divide memory into a number of fixed-sized partitions. Each
partition may contain exactly one process. When a partition is free, a process is
selected from the input queue and is loaded into the free partition. When the
process terminates, the partition becomes available for another process.
The operating system keeps a table indicating which parts of memory are
available and which are occupied.
3- Variable-Sized Partition technique:- This technique is more efficient than the
fixed-sized partition technique. In this technique, when a job is loaded into memory,
it is allocated exactly the memory space it requires and no more. Hence the
partitions are created dynamically.

Q3 Explain the difference between Internal and External fragmentation.


Ans External and Internal Fragmentation:- As processes are loaded and removed
from memory, the free memory space is broken into little pieces. External
fragmentation exists when enough total memory space exists to satisfy a
request, but it is not contiguous; storage is fragmented into a large number of
small holes. Internal fragmentation, on the other hand, is memory that is internal
to a partition but is not being used.

Q4 What do you mean by swapping?


Ans Swapping:- In the case of round-robin CPU scheduling or priority-based
scheduling, a process may need to be swapped temporarily out of
memory to a backing store, and then brought back into memory for continued
execution. This technique is called swapping. A process that is swapped out will be
swapped back into the same memory space that it occupied previously.
Swapping requires a backing store, which is commonly a fast disk.

Q5 Explain the paging technique in brief.


Ans Paging:- External fragmentation is avoided by using paging. In this scheme,
physical memory is broken into fixed-sized blocks called frames, and logical memory
is broken into blocks of the same size called pages. When a process is to
be executed, its pages are loaded into any available memory frames. Every
address generated by the CPU is divided into two parts: a page number (p)
and a page offset (d). The page number is used as an index into a page table. The
page table contains the base address of each page in physical memory. The page
size is defined by the hardware. Paging is a form of dynamic relocation:
every logical address is bound by the paging hardware to some physical
address.
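
As an arithmetic illustration (the 4 KB page size and the frame number below are made-up values, not something fixed by the notes), splitting a logical address into page number and offset is just a division and a remainder, or equivalently a shift and a mask:

    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* hypothetical 4 KB pages: the offset is the low 12 bits */

    int main(void)
    {
        unsigned int logical = 20000;                 /* example logical address */
        unsigned int page    = logical / PAGE_SIZE;   /* same as logical >> 12   */
        unsigned int offset  = logical % PAGE_SIZE;   /* same as logical & 0xFFF */

        /* The page number indexes the page table; the frame base address plus
           the offset gives the physical address.  Frame 7 is made up here.     */
        unsigned int frame    = 7;
        unsigned int physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> page %u, offset %u, physical %u\n",
               logical, page, offset, physical);
        return 0;
    }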

Q6 Explain the concept of virtual memory.


Ans Virtual Memory:- Virtual memory is a technique that allows the execution
of processes that may not be completely in memory. The main visible advantage of
this scheme is that programs can be larger than physical memory.
Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available.

Q7 What do you understand by demand paging?


Ans Demand Paging:- A demand-paging system is similar to a paging system with
swapping. When we want to execute a process, we swap it into memory; rather
than swapping the entire process in, however, we use a lazy approach. When a
process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager
brings only those necessary pages into memory. Thus, it avoids reading into
memory pages that will not be used anyway, decreasing the swap time and the
amount of physical memory needed.

Q8 What are the various methods of page replacement?


Ans Page Replacement Algorithm: - There are many different page replacement
algorithms. Some are:-
1- FIFO Algorithm: - The simplest page-replacement algorithm is a FIFO
algorithm. A FIFO replacement algorithm associates with each page the time
when that page was brought into memory. When a page must be replaced, the
oldest page is chosen.
2- Optimal Algorithm:- An optimal page-replacement algorithm has the lowest
page-fault rate of all algorithms. It simply replaces the page that will not be
used for the longest period of time.
3. LRU Algorithm: - The LRU replaces the page that has not been used for the
longest period of time. LRU replacement associates with each page the time
of that page's last use. When a page must be replaced, LRU chooses that page
that has not been used for the longest period of time.
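
A small sketch (the reference string and the three-frame memory are made-up values) that counts page faults under FIFO replacement; replacing the frames in round-robin order is equivalent to always evicting the oldest page:

    /* FIFO page-replacement fault counter (illustrative sketch). */
    #include <stdio.h>

    #define FRAMES 3

    int main(void)
    {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* hypothetical reference string */
        int nrefs  = (int)(sizeof(refs) / sizeof(refs[0]));
        int frame[FRAMES];
        int used = 0, victim = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frame[j] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (used < FRAMES) {
                    frame[used++] = refs[i];           /* a free frame is available  */
                } else {
                    frame[victim] = refs[i];           /* replace the oldest page    */
                    victim = (victim + 1) % FRAMES;    /* next oldest becomes victim */
                }
            }
        }
        printf("page faults: %d\n", faults);
        return 0;
    }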

Chapter-5
Secondary storage Management
Q1 Explain the file system in brief.
Ans File system:- A file is a named collection of related information that is recorded
on secondary storage. It is the smallest allotment of logical secondary storage.
Commonly, files represent programs and data. In general, a file is a sequence of
bits, bytes, lines, or records. A file has a certain defined structure, which depends
on its type.
A file typically consists of these attributes: Name, Identifier, Type, Location, Size,
Protection, Time, date, and user identification. The information about all files is
kept in the directory structure. The operating system can provide system calls to
create, write, read, reposition, delete, and truncate files.

Q2- Define the strategies of contiguous, linked and indexed allocation in file
system.
Ans (i) Contiguous Allocation: The contiguous allocation method requires each file to
occupy a set of contiguous blocks on the disk. Disk addresses define a linear
ordering on the disk. The directory entry for each file indicates the address of the
starting block and the length of the area allocated for this file. Accessing a file that
has been allocated contiguously is easy.
(ii) Linked Allocation
With linked allocation, each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk, and any free block on the free-space list can be
used to satisfy a request. There is no need to declare the size of a file when that
file is created; a file can continue to grow as long as there are free blocks.
(iii) Indexed Allocation
The problem with linked allocation is that the pointers to the blocks are
scattered with the blocks themselves all over the disk and need to be retrieved in
order, while in indexed allocation all the pointers are brought together into one
location: the index block. This type of allocation supports direct access.
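
As a rough illustration of linked allocation (the block numbers and the in-memory array below are a toy model, not an actual on-disk layout), each block stores the number of the next block, so reading a file sequentially is a pointer chase from the starting block recorded in the directory:

    /* Toy model of linked allocation: each block stores the index of the next one. */
    #include <stdio.h>

    #define NBLOCKS 16
    #define END     (-1)

    int main(void)
    {
        int next[NBLOCKS];                   /* the "pointer" stored in each disk block */
        for (int i = 0; i < NBLOCKS; i++) next[i] = END;

        /* A file occupying blocks 9 -> 2 -> 11, scattered anywhere on the "disk". */
        int file_start = 9;
        next[9] = 2;  next[2] = 11;  next[11] = END;

        /* Sequential access: follow the chain from the starting block. */
        for (int b = file_start; b != END; b = next[b])
            printf("reading block %d\n", b);
        return 0;
    }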

Q3 What is the use of directories?


Ans Directories:- Directories are treated as files which keep track of all other
files. A directory contains information about the files, such as the location and
owner of each file. The directory is itself a file, owned by the operating system
and accessible by various file-management routines.

Q4 Explain the various types of directory systems.


Ans Types of directories:- Directories can be organized in the following ways:-
1- Single-Level Directory:- In a single-level directory system, all the files are
placed in one directory. This is very common on single-user operating systems.
Even with a single user, as the number of files increases, it becomes difficult to
remember the names of all the files and to create only files with unique names.

2- Two-Level Directory:- In the two-level directory system, the system maintains a
master block that has one entry for each user. This master block contains the
addresses of the directories of the users.

3- Tree-Structured Directories:- In the tree-structured directory, the directories
themselves are files. This leads to the possibility of having sub-directories that
can contain files and sub-subdirectories.
4- Acyclic-Graph Directories:- The acyclic directory structure is an extension of
the tree-structured directory structure. In the acyclic structure, a directory or
file under a directory can be owned by several users. Thus an acyclic-graph
structure allows directories to have shared subdirectories and files.

5- General Graph Directory:- One problem with using an acyclic-graph structure
is ensuring that there are no cycles. If cycles are allowed to
exist in the directory, we generally want to avoid searching any component
twice, for reasons of correctness and performance.

Chapter-6
Protection and Security
Q1 Define various security threats in context of computer system.
Ans Security Threats:- The major security threats perceived by users and providers of
computer-based systems are:-
1- Unauthorized disclosure of information
2- Unauthorized alteration or destruction of information
3- Unauthorized use of service
4- Denial of service to legitimate users.

Q2 Write the difference between viruses and worms.


Ans Viruses and Worms:- A computer virus is code that attaches itself to other
programs in order to alter their behaviour in a harmful way. Viruses normally
cause direct harm to the system. A computer worm, on the other hand, is a program
written in such a way that it spreads to other computers over a network, and it
also consumes network resources to a very large extent, but it does not directly
harm any other computer program or data.

Q3 What is the goal of authentication?


Ans Authentication:- Authentication is a process of verifying whether a person is a
legitimate user or not. The primary goal of authentication is to allow access to
legitimate system users and to deny access to unauthorized parties. The
password is the most commonly used authentication scheme, and it is also easy
to implement.

Q4 Define various protection mechanisms used for protecting files.


Ans Protection Mechanisms:- There are a number of mechanisms that are employed
to protect system resources. For various objects, the operating system allows
different access rights for different subjects. An Access Control Matrix (ACM)
has to be stored by the operating system in order to decide which access rights to
grant each user for which file. Basically, there are three methods used for
storing the ACM:-

1- Access Control List
2- Capability List
3- Combined scheme

Q5 What do you mean by Encryption?


Ans Encryption:- In technical terms, the process of encoding messages is called
encryption. The original text is called plain text, but when it is encrypted,
it is called cipher text. The recipient decodes the
message to extract the correct meaning out of it. This process is called decryption.
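
As a toy illustration only (a deliberately weak scheme chosen for brevity, not a real cipher; production systems use algorithms such as AES), the idea of turning plain text into cipher text and back can be sketched with a Caesar-style shift:

    /* Toy Caesar-shift "encryption" illustrating plain text vs. cipher text. */
    #include <stdio.h>

    static void shift(char *s, int k)        /* shift every letter by k positions */
    {
        for (int i = 0; s[i] != '\0'; i++) {
            char c = s[i];
            if (c >= 'a' && c <= 'z') s[i] = (char)('a' + (c - 'a' + k + 26) % 26);
            if (c >= 'A' && c <= 'Z') s[i] = (char)('A' + (c - 'A' + k + 26) % 26);
        }
    }

    int main(void)
    {
        char msg[] = "Attack at dawn";
        shift(msg, 3);                        /* encryption: plain text -> cipher text */
        printf("cipher text: %s\n", msg);
        shift(msg, -3);                       /* decryption: cipher text -> plain text */
        printf("plain text : %s\n", msg);
        return 0;
    }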
