
Prepared By: Yogesh Misra, FET, Mody University of Science & Technology

Course Title: Embedded System Design


Topics Covered in these Notes
(1) Operating System Basics
(2) Kernel
(3) Kernel Space and User Space
(4) Monolithic Kernel and Microkernel
(5) Important Features of RTOS
(6) Mutual Exclusion
(7) Semaphore
(8) Binary Semaphore (Mutex)
(9) Counting Semaphore
(10) Multiprocessing and Multitasking
OPERATING SYSTEM BASICS
The operating system acts as a bridge between the user applications/tasks and the underlying system resources through a set of system services.
Fig. 1 : The Operating System Architecture - user applications invoke kernel services (memory management, process management, time management, file system management, and I/O system management), which in turn control the underlying hardware.
Kernel: The kernel is the core of an operating system; it communicates between system resources and user applications and contains a set of system services. The various services of the kernel are as follows:
Process Management: It deals with managing processes/tasks. Process management includes:
(i) setting up the memory space for the process
(ii) loading the process's code into the memory space
(iii) allocating system resources
(iv) scheduling and managing the execution of the process
Primary Memory Management: It deals with:
(i) Keeping track of which part of the memory area is currently used by which process
(ii) Allocating and De-allocating memory space on a need basis
I/O System Management: It is responsible for routing the I/O requests coming from
different user applications to the appropriate I/O devices of the system.
The kernel maintains a list of all the I/O devices of the system. This list may be available in advance, at the time of building the kernel. Some kernels dynamically update the list of available devices as and when a new device is installed (e.g. the Windows XP kernel keeps the list updated when a new plug-and-play USB device is attached to the system).
The service Device Manager of the kernel is responsible for handling all I/O device
related operations.
Protection System Management: It deals with implementing security policies to restrict access to system resources by different applications, processes, or users (e.g. Windows XP has user levels such as Administrator, Standard, and Restricted).
Kernel Space and User Space:
The application services are classified into two categories, namely: user applications and
kernel applications.
The program code corresponding to the kernel applications/services is kept in primary memory and protected from unauthorized access by user programs/applications. The memory space in which the kernel code is placed is known as kernel space. Similarly, all user applications are placed in a specific area of primary memory, and this memory area is known as user space.
Some operating systems implement this kind of partitioning and protection, whereas others do not segregate the kernel and user application code into separate areas.
Monolithic Kernel and Microkernel: Based on the kernel design, kernels can be classified into monolithic kernels and microkernels.
Monolithic Kernel:
In this design all kernel services run in the kernel space. The major drawback of a monolithic kernel is that an error or failure in any one of the kernel services leads to the crashing of the entire kernel. LINUX, SOLARIS, and MS-DOS kernels are examples of monolithic kernels.
Fig. 2 : The Monolithic Kernel Model - applications run in user space above a monolithic kernel with all OS services running in kernel space.
Microkernel: The microkernel design incorporates only the essential set of operating system services into the kernel. The rest of the operating system services are implemented in programs known as servers, which run in user space. Memory management, process management, the timer system, and interrupt handlers are essential services, and they form part of the microkernel.
Benefit of a microkernel: If a problem arises in any of the services running as a server application, that service can be reconfigured and restarted without restarting the entire OS.
Fig. 3 : The Microkernel Model - applications and servers (kernel services running in user space) sit above a microkernel containing only essential services such as memory management, process management, the time system, and interrupt handlers.
REAL-TIME OPERATING SYSTEM (RTOS)
Important Features of an RTOS: An RTOS consumes only a known and expected amount of time regardless of the number of services it is offering. The RTOS decides which applications should run, in which order, and for how much time. An RTOS designed for an embedded system should have as low a memory requirement as possible. The real-time kernel is highly specialized and contains only the minimal set of services required to run the user applications/tasks.
Some of the important features and functions required of an RTOS are as follows:
1 Task/Process Management - It deals with setting up memory space for the tasks, loading the task's code into the memory space, allocating system resources, task/process termination/deletion, and setting up a Task Control Block (TCB). The task/process management service of an RTOS creates a TCB when a task is created and deletes the TCB when the task is terminated.
A TCB contains the following information:
Task ID - the task identification number.
Task State - the current state of the task (e.g. READY state, RUNNING state, BLOCKED/WAITING state, etc.)
Task Type - the type of task (e.g. hard real-time task, soft real-time task, etc.)
Task Priority - the priority assigned to the task.
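The TCB fields above can be sketched as a small data structure. This is only an illustrative model in Python; a real RTOS would define the TCB as a C struct, and the field names here are assumptions, not any particular kernel's layout.

```python
from dataclasses import dataclass
from enum import Enum

class TaskState(Enum):
    READY = "READY"
    RUNNING = "RUNNING"
    BLOCKED = "BLOCKED/WAITING"

class TaskType(Enum):
    HARD_REAL_TIME = "HARD"
    SOFT_REAL_TIME = "SOFT"

@dataclass
class TCB:
    task_id: int          # Task ID: unique identification number
    state: TaskState      # current state of the task
    task_type: TaskType   # hard or soft real-time task
    priority: int         # programmer-assigned priority

# The kernel creates a TCB when a task is created and deletes it on termination.
tcb = TCB(task_id=1, state=TaskState.READY,
          task_type=TaskType.HARD_REAL_TIME, priority=5)
```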
2 Task/Process Scheduling - A kernel service called the Scheduler handles task scheduling. The scheduler is an algorithm that deals with the efficient sharing of the CPU among various tasks/processes.
3 Task/Process Synchronisation - In a multitasking system, multiple processes run concurrently (in pseudo-parallelism) and share system resources. Task synchronization provides a mechanism for accessing resources that are shared among various tasks/processes. Without task synchronization, problems such as racing and deadlock arise.
4 Error/Exception Handling - It deals with handling the errors raised during the execution of tasks. Exceptions can be in the form of insufficient memory, timeouts, deadlocks, missed deadlines, etc. A watchdog timer is a mechanism for handling task timeouts.
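The watchdog idea can be sketched in a few lines. This is a minimal illustration using Python's `threading.Timer` rather than a hardware watchdog: a healthy task "kicks" the watchdog periodically, and if no kick arrives within the timeout, the handler fires. The class and names here are hypothetical, not an RTOS API.

```python
import threading

class Watchdog:
    """Fires on_timeout unless kick() is called again within the timeout."""
    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self.timer = None

    def kick(self):
        # Restart the countdown; called periodically by a healthy task.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout, self.on_timeout)
        self.timer.daemon = True
        self.timer.start()

    def stop(self):
        if self.timer is not None:
            self.timer.cancel()

timed_out = threading.Event()
wd = Watchdog(timeout=0.05, on_timeout=timed_out.set)
wd.kick()                 # task starts; watchdog armed
# The task "hangs": no further kicks, so the watchdog handler fires.
timed_out.wait(1.0)
wd.stop()
```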
5 Clock and Timer Support - Clock and timer services with adequate resolution are among the most important parts of an RTOS. Hard real-time applications require timer services with a resolution of the order of a few microseconds. Traditional operating systems, on the other hand, normally do not provide timer services with sufficiently high resolution.
6 Real-Time Priority Levels - An RTOS must support static priority levels. A priority level supported by an operating system is called static when, once the programmer assigns a priority value to a task, the operating system does not change it by itself. Static priority levels are also called real-time priority levels.
A traditional operating system dynamically changes the priority levels of tasks from the programmer-assigned values to maximize system throughput.
7 Fast Task Preemption - For the successful operation of a real-time application, whenever a high-priority critical task arrives, the currently executing low-priority task must release the CPU to the high-priority task. The time for which a higher-priority task waits before it is allowed to execute is quantitatively expressed as the task preemption time. An RTOS should have task preemption times of the order of a few microseconds, whereas the task preemption times of traditional operating systems are of the order of seconds.
8 Fast Interrupt Latency - Interrupt latency is defined as the time delay between the occurrence of an interrupt and the start of the corresponding Interrupt Service Routine (ISR). In an RTOS, the interrupt latency should be no more than a few microseconds.
9 Preemptive Kernel - The kernel of an RTOS must be of the preemptive type. A preemptive kernel ensures that every task/process gets a chance to execute. When and for how much time a process executes depends on the implementation of the preemptive scheduling; the currently running task is preempted to give other tasks/processes a chance to execute.
In a non-preemptive kernel, on the other hand, the currently running task runs until it terminates or enters the wait state, waiting for an I/O or system resource.
Multiprocessing and Multitasking
1 Multiprocessing - In the operating system context, multiprocessing describes the ability to execute multiple processes simultaneously. Systems capable of multiprocessing are known as multiprocessor systems; they possess multiple CPUs and execute multiple processes simultaneously.
2 Multitasking - In a uniprocessor system, it is not possible to execute multiple processes simultaneously. However, a uniprocessor system can achieve some degree of pseudo-parallelism in the execution of multiple processes by switching execution among the different processes. The ability of an operating system to hold multiple processes in memory and switch the processor from executing one process to another is known as multitasking. Multitasking creates the illusion of multiple processes executing in parallel.
Whenever CPU switching happens, the context of the currently running process must be saved so that it can be retrieved later, when that process is again executed by the CPU. Context saving and retrieval are essential for resuming a process from exactly the point where it was interrupted by the CPU switch. The act of switching the CPU among processes, i.e. changing the current execution context, is known as Context Switching.
Multitasking involves switching execution among multiple tasks. Depending upon how the switching is implemented, multitasking can be classified as follows:
Co-operative Multitasking - A task/process gets the chance to execute only when the currently executing task/process voluntarily releases the CPU. In this method, any task can hold the CPU for as much time as it wants, so the co-operation of every task/process is required for fair utilization of the CPU.
Preemptive Multitasking - Every task/process gets a chance to execute. The currently running task/process is preempted to give other tasks/processes a chance. When and for how much time a task/process executes depends on the preemptive scheduling algorithm.
Non-preemptive Multitasking - The task/process currently in execution is allowed to execute until it terminates (enters the Completed state) or enters the Blocked/Wait state, waiting for an I/O or system resource.
Co-operative and non-preemptive multitasking behave differently with respect to the Blocked/Wait state. In co-operative multitasking, the currently executing task/process need not release the CPU when it enters the Blocked/Wait state, waiting for an I/O, a shared resource, or an event to occur, whereas in non-preemptive multitasking the currently executing task/process releases the CPU when it enters the Blocked/Wait state.
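Co-operative multitasking can be illustrated with Python generators: each task runs until it voluntarily yields, and a simple round-robin loop switches between them. This is a toy sketch of the idea, not how an RTOS scheduler is implemented; the task and scheduler names are invented for illustration.

```python
from collections import deque

def task(name, steps):
    # A co-operative task voluntarily yields the CPU after each step.
    for i in range(steps):
        yield f"{name}:{i}"

def cooperative_scheduler(tasks):
    # Round-robin over tasks; each runs until it voluntarily yields.
    log, queue = [], deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            log.append(next(t))   # run the task until its next yield
            queue.append(t)       # re-queue it for another turn
        except StopIteration:
            pass                  # task terminated; drop it
    return log

log = cooperative_scheduler([task("A", 2), task("B", 2)])
# Execution interleaves: A:0, B:0, A:1, B:1
```

If a task never yields (an infinite loop with no `yield`), no other task ever runs, which is exactly the weakness of co-operative multitasking described above.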
Fig. 4: Process States and State Transitions
Process - A process is a program, or part of one, in execution. A process is also known as an instance of a program in execution; multiple instances of the same program can execute simultaneously. A process requires various system resources: a CPU for executing the process, a stack for holding the values of local variables associated with the process, I/O devices for information exchange, etc.
Process States and State Transitions - The creation of a process and its termination are not single-step operations. The transition of a task/process through various states, from the newly created state to the completed state, is called the Process Life Cycle.
Created State - The state in which a task/process is created is called the Created State. The operating system recognizes the task, but no resources are allotted to it in the Created State.
Ready State - In the Ready State the task has been allocated memory and waits for the processor so that its execution can begin. During this stage, the task is listed in the queue of Ready State tasks maintained by the operating system.
Running State - In this state the execution of the task's code takes place on the processor.
Blocked/Wait State - In this state the task is temporarily suspended from execution. A task enters this state for reasons such as:
the task is waiting to get access to a shared resource
the task is waiting for user input, such as keyboard input
Completed State - The state in which the process has completed its execution is known as the Completed State.
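The process life cycle above can be captured as a small state machine. This is an assumed, simplified transition table for illustration (real kernels add more states and transitions); it encodes which state changes the life cycle permits.

```python
from enum import Enum

class State(Enum):
    CREATED = "Created"
    READY = "Ready"
    RUNNING = "Running"
    BLOCKED = "Blocked/Wait"
    COMPLETED = "Completed"

# Legal transitions in the simplified process life cycle.
TRANSITIONS = {
    State.CREATED:   {State.READY},                # memory allocated, queued
    State.READY:     {State.RUNNING},              # dispatched to the CPU
    State.RUNNING:   {State.READY,                 # preempted
                      State.BLOCKED,               # waits for I/O or resource
                      State.COMPLETED},            # finishes execution
    State.BLOCKED:   {State.READY},                # I/O or resource available
    State.COMPLETED: set(),                        # terminal state
}

def move(current, nxt):
    # Reject any transition the life cycle does not allow.
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# Walk one task through a full life cycle, including an I/O wait.
s = State.CREATED
for nxt in (State.READY, State.RUNNING, State.BLOCKED,
            State.READY, State.RUNNING, State.COMPLETED):
    s = move(s, nxt)
```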
Task Scheduling
Determining which task/process is to be executed at a given point in time is known as task/process scheduling. A task scheduler is an algorithm which is run by the kernel as a service.
The selection of a scheduling algorithm should consider the following factors:
Throughput - This gives an indication of the number of tasks/processes executed per unit of time. A good scheduling algorithm keeps throughput high.
Turnaround Time - The amount of time taken by a process for complete execution. It includes the time spent by the process in the ready queue, the time spent waiting for main memory, the time spent completing I/O operations, and the time spent in execution. A good scheduling algorithm keeps turnaround time low.
Waiting Time - The time spent by a process in the ready queue waiting to get CPU time for execution. A good scheduling algorithm keeps waiting time low.
The various queues maintained by the OS in association with CPU scheduling are:
Job Queue - It contains all the processes in the system.
Ready Queue - It contains all the processes which are ready for execution and are waiting for the CPU to get their turn.
Device Queue - It contains the set of processes waiting for an I/O device.
Non-Preemptive Scheduling - Non-preemptive scheduling is employed in systems which implement the non-preemptive multitasking model. In this scheduling the current process is allowed to execute until it terminates or enters the Blocked/Wait State, waiting for an I/O or system resource. The various types of non-preemptive scheduling are as follows:
First-Come-First-Served Scheduling
Shortest Job First Scheduling
Priority Based Scheduling
First-Come-First-Served (FCFS) Scheduling - This scheduling algorithm allocates CPU time to the processes in the order in which they enter the Ready queue.
Example 1: Three processes P1, P2, and P3 with estimated completion times of 10ms, 5ms, and 7ms respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and turnaround time (TAT) for each process and the average waiting time and average TAT (assuming there is no I/O waiting for any process).
Ready queue order: P1 (10ms), P2 (5ms), P3 (7ms)
Solution - Assuming the CPU is available at the time of arrival of P1, P1 starts executing without any waiting in the Ready queue. Hence the waiting time for P1 is 0. The waiting times for the processes are as follows:
Waiting Time for P1 = 0ms
Waiting Time for P2 = 10ms (P2 starts executing after P1 completes)
Waiting Time for P3 = 15ms (P3 starts executing after P1 and P2 complete)
Average Waiting Time = (sum of waiting times) / (No. of processes)
= (0ms + 10ms + 15ms) / 3 = 8.33ms
TAT for P1 = 10ms (time spent in Ready queue + execution time)
TAT for P2 = 15ms (time spent in Ready queue + execution time)
TAT for P3 = 22ms (time spent in Ready queue + execution time)
Average TAT = (sum of TATs) / (No. of processes)
= (10ms + 15ms + 22ms) / 3 = 15.66ms
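The FCFS calculation above can be checked with a few lines of code. This is an illustrative sketch that assumes, like the example, that all processes arrive together at t = 0 with no I/O waits.

```python
def fcfs(burst_times):
    """Waiting and turnaround times under FCFS.
    Assumes all processes arrive together at t = 0 and never wait for I/O."""
    waiting, turnaround, elapsed = [], [], 0
    for burst in burst_times:
        waiting.append(elapsed)        # time spent in the Ready queue
        elapsed += burst
        turnaround.append(elapsed)     # waiting time + execution time
    return waiting, turnaround

# Example 1: P1, P2, P3 with bursts 10ms, 5ms, 7ms, arriving in that order.
wait, tat = fcfs([10, 5, 7])
avg_wait = sum(wait) / len(wait)       # (0 + 10 + 15) / 3 = 8.33ms
avg_tat = sum(tat) / len(tat)          # (10 + 15 + 22) / 3 = 15.66ms
```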
Shortest Job First (SJF) Scheduling - This scheduling algorithm allocates CPU time first to the process with the shortest estimated completion time, followed by the next shortest, and so on.
Example 2: Three processes P1, P2, and P3 with estimated completion times of 10ms, 5ms, and 7ms respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and turnaround time (TAT) for each process and the average waiting time and average TAT for SJF scheduling (assuming there is no I/O waiting for any process).
Execution order under SJF: P2 (5ms), P3 (7ms), P1 (10ms)
Solution - The waiting times for the processes are as follows:
Waiting Time for P2 = 0ms
Waiting Time for P3 = 5ms (P3 starts executing after P2 completes)
Waiting Time for P1 = 12ms (P1 starts executing after P2 and P3 complete)
Average Waiting Time = (sum of waiting times) / (No. of processes)
= (0ms + 5ms + 12ms) / 3 = 5.66ms
TAT for P2 = 5ms (time spent in Ready queue + execution time)
TAT for P3 = 12ms (time spent in Ready queue + execution time)
TAT for P1 = 22ms (time spent in Ready queue + execution time)
Average TAT = (sum of TATs) / (No. of processes)
= (5ms + 12ms + 22ms) / 3 = 13ms
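SJF with simultaneous arrivals reduces to sorting by burst time and then scheduling first-come-first-served. The sketch below checks Example 2 under the same assumptions (all processes arrive together at t = 0, no I/O waits).

```python
def sjf(burst_times):
    """Waiting and turnaround times under SJF: sort by burst time, then run
    in that order. Assumes all processes arrive together at t = 0."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waiting = [0] * len(burst_times)
    turnaround = [0] * len(burst_times)
    elapsed = 0
    for i in order:
        waiting[i] = elapsed           # time spent in the Ready queue
        elapsed += burst_times[i]
        turnaround[i] = elapsed        # waiting time + execution time
    return waiting, turnaround

# Example 2: P1, P2, P3 with bursts 10ms, 5ms, 7ms; SJF runs P2, P3, P1.
wait, tat = sjf([10, 5, 7])            # index 0 is P1, 1 is P2, 2 is P3
avg_wait = sum(wait) / len(wait)       # (12 + 0 + 5) / 3 = 5.66ms
avg_tat = sum(tat) / len(tat)          # (22 + 5 + 12) / 3 = 13ms
```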
Task Synchronization
In a multitasking system, multiple processes run concurrently (in pseudo-parallelism) and share system resources. Task synchronization provides a mechanism for accessing resources that are shared among various tasks/processes. Without task synchronization, problems such as racing and deadlock arise.
Racing - Racing is the situation in which multiple processes try to access and manipulate shared variables/data concurrently. In a race condition the final value of the shared data depends on the process which accessed it last, so a race condition can produce incorrect results.
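The classic lost-update race can be shown with a deterministic interleaving of the read-modify-write steps. This is a hand-interleaved illustration, not real concurrency: the "context switch" between the two reads is simulated explicitly.

```python
# Two processes each try to increment a shared counter once.
shared = 0

# Step 1: both processes read the shared value before either writes
# (a context switch occurs between the read and the write-back).
read_by_p1 = shared
read_by_p2 = shared

# Step 2: each writes back its own incremented copy.
shared = read_by_p1 + 1    # P1 writes 1
shared = read_by_p2 + 1    # P2 also writes 1, overwriting P1's update

# Expected 2 increments, but one update is lost: the final value is 1,
# and it depends on which process happened to write last.
```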
Deadlock - Deadlock is a situation in which each process waits for a resource held by another, so none of the processes is able to make any progress in its execution.
Fig. 5: Deadlock Condition
Mutual Exclusion: Mutual exclusion prevents race conditions by ensuring that only one process accesses a shared resource at a time. The two mechanisms used to implement mutual exclusion are as follows:
(i) Mutual Exclusion through the Busy Waiting Mechanism - In this mechanism the shared resource currently being used by a process is locked. If another process requires that resource, it keeps the CPU busy by repeatedly checking the lock. This wastes CPU time and results in higher power consumption.
(ii) Mutual Exclusion through the Sleep & Wakeup Mechanism - In this mechanism, when a new process is not allowed to access a shared resource that is currently locked by another process, it goes to sleep and enters the Blocked state. The process blocked while waiting for the shared resource is awakened by the process currently using that resource, just before it releases the resource.
Semaphore: A semaphore is an implementation of sleep-&-wakeup-based mutual exclusion for shared resources.
A shared resource may be usable by only one process at a particular instant (e.g. the display device of an embedded system), or it may be shareable by multiple processes at a particular instant (e.g. different sectors of secondary memory can be used by different processes at the same time). Based on this limitation on sharing, semaphores are classified as binary semaphores and counting semaphores.
Binary Semaphore or Mutex: A binary semaphore provides exclusive access to a shared resource by allocating it to a single process at a time and not allowing other processes to access it while it is in use. In some RTOS kernels the binary semaphore is also referred to as a mutex.
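A binary semaphore in action can be sketched with Python's `threading.Lock`, which behaves as a sleep-and-wakeup mutex: a thread that finds the lock held is blocked until the holder releases it. Protecting the read-modify-write with the mutex eliminates the lost-update race.

```python
import threading

counter = 0
mutex = threading.Lock()    # binary semaphore: one holder at a time

def worker(increments):
    global counter
    for _ in range(increments):
        with mutex:         # enter the critical section (blocks if held)
            counter += 1    # read-modify-write is now effectively atomic

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the mutex, no updates are lost: counter == 4 * 10_000
```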
Counting Semaphore: A counting semaphore limits concurrent access to a shared resource to a fixed number of processes.
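The counting behaviour can be sketched with Python's `threading.Semaphore`: initialised with a count of 2, it admits at most two concurrent holders and blocks the rest until a holder releases. The `active`/`peak` bookkeeping below is only instrumentation added to observe the limit.

```python
import threading
import time

sem = threading.Semaphore(2)    # at most 2 concurrent holders
active = 0                      # instrumentation: current holders
peak = 0                        # instrumentation: most holders seen at once
stats_lock = threading.Lock()

def use_resource():
    global active, peak
    with sem:                   # blocks if 2 processes already hold it
        with stats_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # simulate using the shared resource
        with stats_lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore's initial count of 2
```

A binary semaphore is simply the special case where the count is 1.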
