
BCA-402 DIGITAL ELECTRONICS, COMPUTER SYSTEM ARCHITECTURE,

AND ORGANISATION

UNIT-1 INTRODUCTION TO DIGITAL CIRCUITS

THE BASIC COMPUTER:


The main components in a typical computer system are the processor, memory, input/output
devices and the communication channels that connect them.
A processor is the logic circuitry that responds to and processes the basic instructions that
drive a computer. The four primary functions of a processor are fetch, decode, execute and
write back. A processor performs arithmetical, logical, input/output (I/O) and other basic
instructions that are passed from an operating system (OS). Most other processes are
dependent on the operations of a processor.
A processor includes an arithmetic logic unit (ALU) and a control unit (CU). Its capability is
measured in terms of the following:

 Ability to process instructions at a given time


 Maximum number of bits/instructions
 Relative clock speed

Memory is a passive component that simply stores information until it is requested by another
part of the system. During normal operation, it feeds instructions and data to the processor,
and at other times it is the source or destination of data transferred by I/O devices.
Information in a memory is accessed by its address.

I/O devices transfer information without altering it between the external world and one or
more internal components. I/O devices can be secondary memories, for example- disks and
tapes, or devices used to communicate directly with users, such as video displays, keyboard
and mouse.

The communication channels that tie the systems together can either be simple links that
connect two devices or more complex switches that interconnect several components and
allow any two of them to communicate at a given point in time. When a switch is configured
to allow two devices to exchange information, all other devices that rely on the switch are
blocked, i.e, they must wait until the switch can be reconfigured.

THE VON-NEUMANN ARCHITECTURE:


Von Neumann architecture was first published by John von Neumann in 1945.

His computer architecture design consists of a Control Unit, Arithmetic and Logic
Unit (ALU), Memory Unit, Registers and Inputs/Outputs.

Von Neumann architecture is based on the stored-program computer concept, where

program instructions and data are stored in the same memory. This design is still used in
most computers produced today.

Central Processing Unit (CPU):-

The Central Processing Unit (CPU) is the electronic circuit responsible for executing the
instructions of a computer program.

It is sometimes referred to as the microprocessor or processor.

The CPU contains the ALU, CU and a variety of registers.

Registers:-

Registers are high speed storage areas in the CPU. All data must be stored in a register
before it can be processed.
MAR  Memory Address Register         Holds the memory location of data that needs to be accessed

MDR  Memory Data Register            Holds data that is being transferred to or from memory

AC   Accumulator                     Where intermediate arithmetic and logic results are stored

PC   Program Counter                 Contains the address of the next instruction to be executed

CIR  Current Instruction Register    Contains the current instruction during processing

Arithmetic and Logic Unit (ALU):-

The ALU allows arithmetic (add, subtract etc) and logic (AND, OR, NOT etc) operations to
be carried out.

Control Unit (CU):-

The control unit controls the operation of the computer’s ALU, memory and input/output
devices, telling them how to respond to the program instructions it has just read and
interpreted from the memory unit. The control unit also provides the timing and control
signals required by other computer components.

Buses:-

Buses are the means by which data is transmitted from one part of a computer to another,
connecting all major internal components to the CPU and memory.

A standard CPU system bus is comprised of a control bus, data bus and address bus.
Address Bus   Carries the addresses of data (but not the data) between the processor and memory

Data Bus      Carries data between the processor, the memory unit and the input/output devices

Control Bus   Carries control signals/commands from the CPU (and status signals from other
              devices) in order to control and coordinate all the activities within the computer

Memory Unit:-

The memory unit consists of RAM, sometimes referred to as primary or main


memory. Unlike a hard drive (secondary memory), this memory is fast and also directly
accessible by the CPU.

RAM is split into partitions. Each partition consists of an address and its contents (both in
binary form).

The address will uniquely identify every location in the memory.

Loading data from permanent memory (hard drive), into the faster and directly accessible
temporary memory (RAM), allows the CPU to operate much quicker.

INSTRUCTION EXECUTION:
An instruction is a command given to the computer. Execution is the process by
which the computer carries out an instruction. A program to be executed by a processor
consists of a set of instructions stored in memory.
Terminologies:-
 Program Counter (PC) is a register in a computer processor that contains the address of the
next instruction which will be executed.
 Memory Address Register (MAR) holds the memory location of data that needs to be
accessed.
 Instruction Register (IR) is a part of the CPU control unit that stores the instruction currently
being executed or decoded.
 Memory Buffer Register (MBR) stores the data being transferred to and from the immediate
access store; it is also known as the Memory Data Register (MDR).
 Control Unit (CU) decodes the program instruction in the IR, selecting machine
resources such as a data source register and a particular arithmetic operation.
 Arithmetic Logic Unit (ALU) performs mathematical and logical operations.
 Accumulator (AC) is the processor's single data register, in which intermediate results are held.
INSTRUCTION EXECUTION CYCLE:-
The instruction execution cycle is the time period during which one instruction is fetched from
memory and executed when the computer is given an instruction in machine language.
Each instruction is further divided into a sequence of phases.
After execution, the program counter is incremented to point to the next instruction.
Process:
I. The processor reads the instruction from memory.
II. The processor decodes the instruction.
III. The processor executes the instruction.

INSTRUCTION CYCLE:

It consists of following steps:


1. Fetch Cycle:-
In this cycle, the next instruction to be executed is brought from memory into the CPU.
a) Transfer the address of the next instruction from the program counter to the memory address register.
MAR ← PC
b) Place the address held in the MAR on the memory address lines together with a read signal
(generated by the control unit). The word so obtained is placed on the data bus, through which it
is stored in the data register.
DR ← BUS
c) The PC value is incremented by 1; this happens in parallel with the second step.
PC ← PC + 1
d) The instruction so obtained is transferred to the IR register.
IR ← DR
2. Indirect Cycle:-
Transfer the address field of the instruction to the MAR.
MAR ← IR(address)
Once again, the steps performed in the fetch cycle are repeated, and the desired address
of the operand is placed into the DR register.
DR ← BUS
Transfer this address to the IR.
IR ← DR(address)
3. Execute Cycle:-
After the fetch and indirect cycles, the execute cycle is performed. During this cycle, the
instruction is actually executed.
First, send the address of the desired operand from the IR to the MAR.
MAR ← IR(address)
With a read signal, the value of the operand is transferred to the DR.
DR ← BUS
Transfer the value of the DR to the AC register, and fetch the value of the next operand into the DR register.
The values of the DR and AC registers are passed to the ALU to perform the required operation,
and the result is stored in the AC register.
AC ← AC + DR
4. Interrupt Cycle:-
After completion of the execute cycle, the machine checks for interrupts. If an interrupt has
occurred, the interrupt cycle is executed.
Transfer the contents of the PC to the DR, as this is the return address.
DR ← PC
Place in the MAR the address of the memory location where the return address is to be saved.
MAR ← address of the save location

Now the interrupt is handled by the interrupt handler. After the interrupt has been handled, the
saved return address is read back from memory and loaded into the program counter so that
execution of the interrupted program can resume.
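
As an illustration of the fetch-decode-execute flow described above, the short Python sketch below simulates a tiny processor. The instruction set (LOAD, ADD, STORE, HALT) and the memory layout are hypothetical, chosen only to show how the PC, MAR, IR and AC cooperate during each cycle.

# Minimal fetch-decode-execute loop (illustrative sketch only).
# Instruction format: (opcode, operand_address); the instruction set is hypothetical.

memory = {
    0: ("LOAD", 10),   # AC <- M[10]
    1: ("ADD", 11),    # AC <- AC + M[11]
    2: ("STORE", 12),  # M[12] <- AC
    3: ("HALT", None),
    10: 7, 11: 4, 12: 0,  # data
}

pc, ac = 0, 0
while True:
    mar = pc                 # fetch: MAR <- PC
    ir = memory[mar]         # IR <- M[MAR]
    pc += 1                  # PC <- PC + 1
    opcode, addr = ir        # decode
    if opcode == "LOAD":     # execute
        ac = memory[addr]
    elif opcode == "ADD":
        ac = ac + memory[addr]
    elif opcode == "STORE":
        memory[addr] = ac
    elif opcode == "HALT":
        break

print(memory[12])            # prints 11 (7 + 4)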

Instruction Cycle State Diagram:

 Instruction address calculation: Determine the address of the next instruction to be


executed.
 Instruction fetch: Read instruction from its memory location into the processor.
 Instruction operation decoding: Analyse instruction to determine type of operation to
be performed and the operand(s) to be used.
 Operand address calculation: If the operation involves reference to an operand in
memory or available via I/O, then determine the address of the operand.
 Operand fetch: Fetch the operand from memory or read it in from I/O,
 Data operation: Perform the operation indicated in the instruction.
 Operand store: Write the result into memory or out to I/O

HISTORY OF COMPUTER:

ABACUS:- Alternatively referred to as the counting frame, an abacus is a mechanical device


used to assist a person in performing mathematical calculations and counting. It was the first
computing machine, developed by the Chinese nearly 3000 years ago. It can perform
simple addition and subtraction.
PASCALINE:- The first mechanical calculating machine was made in 1642 by the great
French mathematician Blaise Pascal. It was used for simple calculations, i.e., addition and
subtraction. Pascal built this machine to assist his father in tax calculations.

JACQUARD CARD:- In the early 19th century, Joseph Marie Jacquard built a loom controlled by
punched cards. It was the first step towards the concept of programming.

PUNCHED CARD:- Dr. Herman Hollerith developed a tabulating machine to read and
compile data from punched cards.

ANALYTICAL ENGINE:- The first person to use the concept of programming in a
computing machine was Charles Babbage, a professor at Cambridge University in England.
Babbage designed a new machine, the Analytical Engine, which performed calculations
according to an instruction code.

MARK-1:- Howard Aiken and Grace Hopper of Harvard University, in collaboration with the
American multinational company IBM, developed an electromechanical computer that was
completed in 1944. The computer was called the Mark-1 and it had a large number of components.

ENIAC:- It stands for "Electronic Numerical Integrator and Computer". It was a huge
machine with about 18,000 vacuum tubes, 8 feet high and 80 feet long; it weighed about 30 tons
and consumed about 174,000 watts of power. The ENIAC could perform a mathematical calculation
that would have required 40 hours for one person to complete.

UNIVAC-1:- It stands for "Universal Automatic Computer". It was the first commercially
produced electronic computer, developed by J. Presper Eckert and John Mauchly in 1951.

GENERATIONS OF COMPUTER:

FIRST GENERATION (1940-1956):-

 The first generation computers were developed by using vacuum tube or thermionic
valve machine.
 The input of this system was based on punched cards and paper tape; however, the
output was displayed on printouts.
 The first generation computers worked on binary-coded concept (i.e., language of 0-
1). Examples: ENIAC, EDVAC, etc.

SECOND GENERATION (1956-1963):-

 The second generation computers were developed by using transistor technology.


 In comparison to the first generation, the size of second generation was smaller.
 In comparison to computers of the first generation, the computing time taken by the
computers of the second generation was less.

THIRD GENERATION (1964-1971):-


 The third generation computers were developed by using the Integrated Circuit (IC)
technology.
 In comparison to the computers of the second generation, the size of the computers of
the third generation was smaller.
 In comparison to the computers of the second generation, the computing time taken by
the computers of the third generation was less.
 The third generation computer consumed less power and also generated less heat.
 The maintenance cost of the computers in the third generation was also low.
 The computer system of the computers of the third generation was easier for commercial
use.

FOURTH GENERATION (1971-Present):-

 The fourth generation computers were developed by using microprocessor technology.


 With the fourth generation, computers became very small in size and portable.
 The machines of the fourth generation generated a very low amount of heat.
 They were much faster, and their accuracy became more reliable.
 The production cost was reduced to very low in comparison to the previous generations.
 Computers became available for the common people as well.

FIFTH GENERATION (Present & Beyond):-

 Until this time, computer generations were categorized on the basis of hardware only, but
fifth generation technology also includes software.
 The computers of the fifth generation had high capability and large memory capacity.
 Working with computers of this generation was fast and multiple tasks could be
performed simultaneously.
 Some of the popular advanced technologies of the fifth generation include Artificial
intelligence, Quantum computation, Nanotechnology, Parallel processing, etc.

***

UNIT-2 THE DATA REPRESENTATION

DATA REPRESENTATION:
Data and instructions cannot be entered and processed directly into computers using human
language. Any type of data be it numbers, letters, special symbols, sound or pictures must
first be converted into machine-readable form i.e. binary form. Due to this reason, it is
important to understand how a computer together with its peripheral devices handles data in
its electronic circuits, on magnetic media and in optical devices.

Data representation in digital circuits:-

Electronic components, such as microprocessor, are made up of millions of electronic


circuits. The availability of a high voltage (on) in these circuits is interpreted as '1' while a low
voltage (off) is interpreted as '0'. This concept can be compared to switching an electric circuit
on and off. When the switch is closed, the high voltage in the circuit causes the bulb to
light ('1' state). On the other hand, when the switch is open, the bulb goes off ('0' state). This
forms a basis for describing data representation in digital computers using the binary number
system.

Data representation on magnetic media:-

The presence of a magnetic field in one direction on magnetic media is interpreted as '1', while a
field in the opposite direction is interpreted as '0'. Magnetic technology is mostly used on storage
devices that are coated with special magnetic materials such as iron oxide. Data is written on
the media by arranging the magnetic dipoles of some iron oxide particles to face in the same
direction and some others in the opposite direction.

Data representation on optical media:-

In optical devices, the presence of light is interpreted as '1' while its absence is interpreted as
'0'. Optical devices use this technology to read or store data. Take the example of a CD-ROM: if
the shiny surface is placed under a powerful microscope, the surface is observed to have very
tiny holes called pits. The areas that do not have pits are called land. A laser beam reflected from
the land is interpreted as 1; a laser beam entering a pit is not reflected, and this is interpreted
as 0. The reflected pattern of light from the rotating disc falls on a photoelectric detector that
transforms the pattern into digital form.

NUMBER SYSTEMS:
If the base or radix of a number system is 'r', then the digits used in that number system
range from zero to r-1. The total number of digits in that number system is 'r'. So, we
will get various number systems by choosing values of radix greater than or equal to
two.
The following number systems are the most commonly used.

 Decimal Number system


 Binary Number system
 Octal Number system
 Hexadecimal Number system
Decimal Number System:-
The base or radix of Decimal number system is 10. So, the numbers ranging from 0 to 9 are
used in this number system. The part of the number that lies to the left of the decimal
point is known as integer part. Similarly, the part of the number that lies to the right of the
decimal point is known as fractional part.
In this number system, the successive positions to the left of the decimal point have
weights of 10^0, 10^1, 10^2, 10^3 and so on. Similarly, the successive positions to the right of the
decimal point have weights of 10^-1, 10^-2, 10^-3 and so on. That means each position has a
specific weight, which is a power of the base 10.
Example:-
Consider the decimal number 1358.246. The integer part of this number is 1358 and the fractional
part is 0.246. The digits 8, 5, 3 and 1 have weights of 10^0, 10^1, 10^2 and
10^3 respectively. Similarly, the digits 2, 4 and 6 have weights of 10^-1, 10^-2 and 10^-3
respectively.
Mathematically, we can write it as
1358.246 = (1 × 10^3) + (3 × 10^2) + (5 × 10^1) + (8 × 10^0) + (2 × 10^-1) +
(4 × 10^-2) + (6 × 10^-3)
After simplifying the right-hand side terms, we will get the decimal number, which is on the
left-hand side.
Binary Number System:-
All digital circuits and systems use this binary number system. The base or radix of this
number system is 2. So, the numbers 0 and 1 are used in this number system.
The part of the number, which lies to the left of the binary point is known as integer part.
Similarly, the part of the number, which lies to the right of the binary point is known as
fractional part.
In this number system, the successive positions to the left of the binary point have weights
of 2^0, 2^1, 2^2, 2^3 and so on. Similarly, the successive positions to the right of the binary point
have weights of 2^-1, 2^-2, 2^-3 and so on. That means each position has a specific weight,
which is a power of the base 2.
Example:-
Consider the binary number 1101.011. The integer part of this number is 1101 and the fractional
part is 0.011. The digits 1, 0, 1 and 1 of the integer part have weights of 2^0, 2^1, 2^2,
2^3 respectively. Similarly, the digits 0, 1 and 1 of the fractional part have weights of 2^-1, 2^-2,
2^-3 respectively.
Mathematically, we can write it as
1101.011 = (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) + (0 × 2^-1) +
(1 × 2^-2) + (1 × 2^-3)
After simplifying the right-hand side terms, we will get a decimal number, which is the
equivalent of the binary number on the left-hand side.
Octal Number System:-
The base or radix of octal number system is 8. So, the numbers ranging from 0 to 7 are used
in this number system. The part of the number that lies to the left of the octal point is known
as integer part. Similarly, the part of the number that lies to the right of the octal point is
known as fractional part.
In this number system, the successive positions to the left of the octal point have weights
of 8^0, 8^1, 8^2, 8^3 and so on. Similarly, the successive positions to the right of the octal point
have weights of 8^-1, 8^-2, 8^-3 and so on. That means each position has a specific weight,
which is a power of the base 8.
Example:-
Consider the octal number 1457.236. The integer part of this number is 1457 and the fractional part
is 0.236. The digits 7, 5, 4 and 1 have weights of 8^0, 8^1, 8^2 and 8^3 respectively.
Similarly, the digits 2, 3 and 6 have weights of 8^-1, 8^-2, 8^-3 respectively.
Mathematically, we can write it as
1457.236 = (1 × 8^3) + (4 × 8^2) + (5 × 8^1) + (7 × 8^0) + (2 × 8^-1) +
(3 × 8^-2) + (6 × 8^-3)
After simplifying the right-hand side terms, we will get a decimal number, which is the
equivalent of the octal number on the left-hand side.
Hexadecimal Number System:-
The base or radix of Hexa-decimal number system is 16. So, the numbers ranging from 0 to
9 and the letters from A to F are used in this number system. The decimal equivalent of
Hexa-decimal digits from A to F are 10 to 15.
The part of the number, which lies to the left of the hexadecimal point is known as integer
part. Similarly, the part of the number, which lies to the right of the Hexa-decimal point is
known as fractional part.
In this number system, the successive positions to the left of the Hexa-decimal point have
weights of 16^0, 16^1, 16^2, 16^3 and so on. Similarly, the successive positions to the right of the
Hexa-decimal point have weights of 16^-1, 16^-2, 16^-3 and so on. That means each position
has a specific weight, which is a power of the base 16.
Example:-
Consider the Hexa-decimal number 1A05.2C4. The integer part of this number is 1A05 and the
fractional part is 0.2C4. The digits 5, 0, A and 1 have weights of 16^0, 16^1,
16^2 and 16^3 respectively. Similarly, the digits 2, C and 4 have weights of 16^-1, 16^-2 and 16^-3
respectively.
Mathematically, we can write it as
1A05.2C4 = (1 × 16^3) + (10 × 16^2) + (0 × 16^1) + (5 × 16^0) + (2 × 16^-1) +
(12 × 16^-2) + (4 × 16^-3)
After simplifying the right-hand side terms, we will get a decimal number, which is the
equivalent of the Hexa-decimal number on the left-hand side.
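
All four number systems above follow the same positional-weight rule. The short Python sketch below (the function name positional_value is illustrative only) evaluates a number from its digit values and base, reproducing the 1358.246 and 1A05.2C4 examples.

# Evaluate a number from its digit values and base using positional weights.
# int_digits and frac_digits hold digit *values* (e.g. A in hexadecimal -> 10).

def positional_value(int_digits, frac_digits, base):
    value = 0.0
    for i, d in enumerate(reversed(int_digits)):   # weights base^0, base^1, ...
        value += d * base ** i
    for i, d in enumerate(frac_digits, start=1):   # weights base^-1, base^-2, ...
        value += d * base ** -i
    return value

print(positional_value([1, 3, 5, 8], [2, 4, 6], 10))    # 1358.246 (within floating-point rounding)
print(positional_value([1, 10, 0, 5], [2, 12, 4], 16))  # 6661.1728515625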

BASE CONVERSION:
Decimal Number to other Bases Conversion:-
If the decimal number contains both integer part and fractional part, then convert both the
parts of decimal number into other base individually. Follow these steps for converting the
decimal number into its equivalent number of any base ‘r’.
 Do division of integer part of decimal number and successive quotients with base ‘r’ and
note down the remainders till the quotient is zero. Consider the remainders in reverse
order to get the integer part of equivalent number of base ‘r’. That means, first and last
remainders denote the least significant digit and most significant digit respectively.
 Do multiplication of fractional part of decimal number and successive fractions with
base ‘r’ and note down the carry till the result is zero or the desired number of equivalent
digits is obtained. Consider the normal sequence of carry in order to get the fractional part
of equivalent number of base ‘r’.
Decimal to Binary Conversion:-
The following two types of operations take place, while converting decimal number into its
equivalent binary number.

 Division of integer part and successive quotients with base 2.


 Multiplication of fractional part and successive fractions with base 2.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and fractional part is 0.25.
Step 1 − Division of 58 and successive quotients with base 2.

Operation Quotient Remainder

58/2 29 0 (LSB)

29/2 14 1

14/2 7 0

7/2 3 1

3/2 1 1

½ 0 1(MSB)

⇒(58)10 = (111010)2
Therefore, the integer part of equivalent binary number is 111010.
Step 2 − Multiplication of 0.25 and successive fractions with base 2.
Operation Result Carry

0.25 x 2 0.5 0

0.5 x 2 1.0 1

- 0.0 -

⇒(.25)10 = (.01)2
Therefore, the fractional part of equivalent binary number is .01
⇒(58.25)10 = (111010.01)2
Therefore, the binary equivalent of decimal number 58.25 is 111010.01.
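
The repeated-division and repeated-multiplication procedure can be written in a few lines of code. The following Python sketch (the function name to_base is illustrative) reproduces the 58.25 example for base 2 and also works for base 8; digits above 9 would need letter mapping for base 16.

# Convert a non-negative decimal number to base r (r = 2 reproduces the 58.25 example).

def to_base(number, base, frac_digits=8):
    integer, fraction = int(number), number - int(number)
    # Integer part: repeated division, remainders read in reverse order.
    int_part = ""
    while integer > 0:
        int_part = str(integer % base) + int_part
        integer //= base
    # Fractional part: repeated multiplication, carries read in normal order.
    frac_part = ""
    for _ in range(frac_digits):
        if fraction == 0:
            break
        fraction *= base
        frac_part += str(int(fraction))
        fraction -= int(fraction)
    return (int_part or "0") + ("." + frac_part if frac_part else "")

print(to_base(58.25, 2))   # 111010.01
print(to_base(58.25, 8))   # 72.2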
Decimal to Octal Conversion:-
The following two types of operations take place, while converting decimal number into its
equivalent octal number.
 Division of integer part and successive quotients with base 8.
 Multiplication of fractional part and successive fractions with base 8.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and fractional part is 0.25.
Step 1 − Division of 58 and successive quotients with base 8.

Operation Quotient Remainder

58/8 7 2

7/8 0 7

⇒(58)10 = (72)8
Therefore, the integer part of equivalent octal number is 72.
Step 2 − Multiplication of 0.25 and successive fractions with base 8.

Operation Result Carry

0.25 x 8 2.00 2
- 0.00 -

⇒ (.25)10 = (.2)8
Therefore, the fractional part of equivalent octal number is .2
⇒ (58.25)10 = (72.2)8
Therefore, the octal equivalent of decimal number 58.25 is 72.2.

Decimal to Hexa-Decimal Conversion:-


The following two types of operations take place, while converting decimal number into its
equivalent hexa-decimal number.

 Division of integer part and successive quotients with base 16.


 Multiplication of fractional part and successive fractions with base 16.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and decimal part is 0.25.
Step 1 − Division of 58 and successive quotients with base 16.

Operation Quotient Remainder

58/16 3 10=A

3/16 0 3

⇒ (58)10 = (3A)16
Therefore, the integer part of equivalent Hexa-decimal number is 3A.
Step 2 − Multiplication of 0.25 and successive fractions with base 16.

Operation Result Carry

0.25 x 16 4.00 4

- 0.00 -

⇒(.25)10 = (.4)16
Therefore, the fractional part of equivalent Hexa-decimal number is .4.
⇒(58.25)10 = (3A.4)16
Therefore, the Hexa-decimal equivalent of decimal number 58.25 is 3A.4.
Binary Number to other Bases Conversion:-
The process of converting a number from binary to decimal is different to the process of
converting a binary number to other bases. Now, let us discuss about the conversion of a
binary number to decimal, octal and Hexa-decimal number systems one by one.
Binary to Decimal Conversion:-
For converting a binary number into its equivalent decimal number, first multiply the bits of
binary number with the respective positional weights and then add all those products.
Example
Consider the binary number 1101.11.
Mathematically, we can write it as
(1101.11)2 = (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) + (1 × 2^-1) +
(1 × 2^-2)
⇒ (1101.11)2 = 8 + 4 + 0 + 1 + 0.5 + 0.25 = 13.75
⇒ (1101.11)2 = (13.75)10
Therefore, the decimal equivalent of binary number 1101.11 is 13.75.
Binary to Octal Conversion:-
We know that the bases of the binary and octal number systems are 2 and 8 respectively. Three
bits of a binary number are equivalent to one octal digit, since 2^3 = 8.
Follow these two steps for converting a binary number into its equivalent octal number.
 Start from the binary point and make the groups of 3 bits on both sides of binary point. If
one or two bits are less while making the group of 3 bits, then include required number of
zeros on extreme sides.
 Write the octal digits corresponding to each group of 3 bits.
Example
Consider the binary number 101110.01101.
Step 1 − Make the groups of 3 bits on both sides of binary point.
101 110.011 01
Here, on right side of binary point, the last group is having only 2 bits. So, include one zero
on extreme side in order to make it as group of 3 bits.
⇒ 101 110.011 010
Step 2 − Write the octal digits corresponding to each group of 3 bits.
⇒ (101 110.011 010)2 = (56.32)8
Therefore, the octal equivalent of binary number 101110.01101 is 56.32.
Binary to Hexa-Decimal Conversion:-
We know that the bases of the binary and Hexa-decimal number systems are 2 and 16
respectively. Four bits of a binary number are equivalent to one Hexa-decimal digit, since 2^4 =
16.
Follow these two steps for converting a binary number into its equivalent Hexa-decimal
number.
 Start from the binary point and make the groups of 4 bits on both sides of binary point. If
some bits are less while making the group of 4 bits, then include required number of zeros
on extreme sides.
 Write the Hexa-decimal digits corresponding to each group of 4 bits.
Example
Consider the binary number 101110.01101
Step 1 − Make the groups of 4 bits on both sides of binary point.
10 1110.0110 1
Here, the first group is having only 2 bits. So, include two zeros on extreme side in order to
make it as group of 4 bits. Similarly, include three zeros on extreme side in order to make the
last group also as group of 4 bits.
⇒ 0010 1110.0110 1000
Step 2 − Write the Hexa-decimal digits corresponding to each group of 4 bits.
⇒ (0010 1110.0110 1000)2 = (2E.68)16
Therefore, the Hexa-decimal equivalent of binary number 101110.01101 is (2E.68).
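
Both group-wise conversions can be automated. Below is a short Python sketch (the helper name binary_to_base is illustrative) that pads the integer and fractional parts with zeros on the extreme sides and maps each group of bits to one digit.

# Convert a binary string to octal or hexadecimal by grouping bits about the binary point.

def binary_to_base(bits, group):  # group = 3 for octal, 4 for hexadecimal
    int_bits, _, frac_bits = bits.partition(".")
    # Pad with zeros on the extreme sides so every group is complete.
    int_bits = int_bits.zfill(-(-len(int_bits) // group) * group)
    frac_bits = frac_bits.ljust(-(-len(frac_bits) // group) * group, "0") if frac_bits else ""
    digits = "0123456789ABCDEF"
    to_digit = lambda g: digits[int(g, 2)]
    result = "".join(to_digit(int_bits[i:i + group]) for i in range(0, len(int_bits), group))
    if frac_bits:
        result += "." + "".join(to_digit(frac_bits[i:i + group]) for i in range(0, len(frac_bits), group))
    return result

print(binary_to_base("101110.01101", 3))  # 56.32
print(binary_to_base("101110.01101", 4))  # 2E.68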
Octal Number to other Bases Conversion:-
The process of converting a number from octal to decimal is different to the process of
converting an octal number to other bases. Now, let us discuss about the conversion of an
octal number to decimal, binary and Hexa-decimal number systems one by one.
Octal to Decimal Conversion:-
For converting an octal number into its equivalent decimal number, first multiply the digits
of octal number with the respective positional weights and then add all those products.
Example
Consider the octal number 145.23.
Mathematically, we can write it as
(145.23)8 = (1 × 8^2) + (4 × 8^1) + (5 × 8^0) + (2 × 8^-1) + (3 × 8^-2)
⇒ (145.23)8 = 64 + 32 + 5 + 0.25 + 0.046875 = 101.296875
⇒ (145.23)8 = (101.296875)10
Therefore, the decimal equivalent of the octal number 145.23 is approximately 101.3.
Octal to Binary Conversion:-
The process of converting an octal number to an equivalent binary number is just opposite to
that of binary to octal conversion. By representing each octal digit with 3 bits, we will get the
equivalent binary number.
Example
Consider the octal number 145.23.
Represent each octal digit with 3 bits.
(145.23)8 = (001 100 101.010 011)2
The value doesn’t change by removing the zeros, which are on the extreme side.
⇒ (145.23)8 = (1100101.010011)2
Therefore, the binary equivalent of octal number 145.23 is 1100101.010011.
Octal to Hexa-Decimal Conversion:-
Follow these two steps for converting an octal number into its equivalent Hexa-decimal
number.

 Convert octal number into its equivalent binary number.


 Convert the above binary number into its equivalent Hexa-decimal number.
Example
Consider the octal number 145.23
In previous example, we got the binary equivalent of octal number 145.23 as
1100101.010011.
By following the procedure of binary to Hexa-decimal conversion, we will get
(1100101.010011)2 = (65.4C)16
⇒(145.23)8 = (65.4C)16
Therefore, the Hexa-decimal equivalent of octal number 145.23 is 65.4C.

Hexa-Decimal Number to other Bases Conversion:-


The process of converting a number from Hexa-decimal to decimal is different to the process
of converting Hexa-decimal number into other bases. Now, let us discuss about the
conversion of Hexa-decimal number to decimal, binary and octal number systems one by
one.
Hexa-Decimal to Decimal Conversion:-
For converting Hexa-decimal number into its equivalent decimal number, first multiply the
digits of Hexa-decimal number with the respective positional weights and then add all those
products.
Example
Consider the Hexa-decimal number 1A5.2
Mathematically, we can write it as
(1A5.2)16 = (1 × 16^2) + (10 × 16^1) + (5 × 16^0) + (2 × 16^-1)
⇒ (1A5.2)16 = 256 + 160 + 5 + 0.125 = 421.125
⇒ (1A5.2)16 = (421.125)10
Therefore, the decimal equivalent of Hexa-decimal number 1A5.2 is 421.125.
Hexa-Decimal to Binary Conversion:-
The process of converting Hexa-decimal number into its equivalent binary number is just
opposite to that of binary to Hexa-decimal conversion. By representing each Hexa-decimal
digit with 4 bits, we will get the equivalent binary number.
Example
Consider the Hexa-decimal number 65.4C
Represent each Hexa-decimal digit with 4 bits.
(65.4C)16 = (0110 0101.0100 1100)2
The value doesn’t change by removing the zeros, which are at two extreme sides.
⇒ (65.4C)16 = (1100101.010011)2
Therefore, the binary equivalent of Hexa-decimal number 65.4C is 1100101.010011.
Hexa-Decimal to Octal Conversion:-
Follow these two steps for converting Hexa-decimal number into its equivalent octal number.

 Convert Hexa-decimal number into its equivalent binary number.


 Convert the above binary number into its equivalent octal number.
Example
Consider the Hexa-decimal number 65.4C
In previous example, we got the binary equivalent of Hexa-decimal number 65.4C as
1100101.010011.
By following the procedure of binary to octal conversion, we will get
(1100101.010011)2 = (145.23)8
⇒(65.4C)16 = (145.23)𝟖
Therefore, the octal equivalent of Hexa-decimal number 65.4C is 145.23.
BINARY NUMBERS REPRESENTATION:-
We can divide binary numbers into the following two groups − Unsigned
numbers and Signed numbers.
Unsigned Numbers:-
Unsigned numbers contain only magnitude of the number. They don’t have any sign. That
means all unsigned binary numbers are positive. As in decimal number system, the placing
of positive sign in front of the number is optional for representing positive numbers.
Therefore, all positive numbers including zero can be treated as unsigned numbers if positive
sign is not assigned in front of the number.
Signed Numbers:-
Signed numbers contain both sign and magnitude of the number. Generally, the sign is
placed in front of number. So, we have to consider the positive sign for positive numbers and
negative sign for negative numbers. Therefore, all numbers can be treated as signed numbers
if the corresponding sign is assigned in front of the number.
If the sign bit is zero, the binary number is positive. Similarly, if the sign bit is one, the
binary number is negative.
Representation of Un-Signed Binary Numbers:-
The bits present in the un-signed binary number holds the magnitude of a number. That
means, if the un-signed binary number contains ‘N’ bits, then all N bits represent the
magnitude of the number, since it doesn’t have any sign bit.
Example
Consider the decimal number 108. The binary equivalent of this number is 1101100. This is
the representation of unsigned binary number.
(108)10 = (1101100)2
It has 7 bits. These 7 bits represent the magnitude of the number 108.
Representation of Signed Binary Numbers:-
The Most Significant Bit (MSB) of signed binary numbers is used to indicate the sign of the
numbers. Hence, it is also called as sign bit. The positive sign is represented by placing ‘0’
in the sign bit. Similarly, the negative sign is represented by placing ‘1’ in the sign bit.
If the signed binary number contains ‘N’ bits, then (N-1) bits only represent the magnitude of
the number since one bit (MSB) is reserved for representing sign of the number.
There are three types of representations for signed binary numbers

 Sign-Magnitude form
 1’s complement form
 2’s complement form
Representation of a positive number in all these 3 forms is same. But, only the representation
of negative number will differ in each form.
Example
Consider the positive decimal number +108. The binary equivalent of magnitude of this
number is 1101100. These 7 bits represent the magnitude of the number 108. Since it is
positive number, consider the sign bit as zero, which is placed on left most side of
magnitude.
(+108)10 = (01101100)2
Therefore, the signed binary representation of the positive decimal number +108 is
01101100. The same representation is valid in sign-magnitude form, 1's complement
form and 2's complement form for the positive decimal number +108.
Sign-Magnitude form:-
In sign-magnitude form, the MSB is used for representing sign of the number and the
remaining bits represent the magnitude of the number. So, just include sign bit at the left
most side of unsigned binary number. This representation is similar to the signed decimal
numbers representation.
Example
Consider the negative decimal number -108. The magnitude of this number is 108. We
know the unsigned binary representation of 108 is 1101100. It has 7 bits. All these bits
represent the magnitude.
Since the given number is negative, consider the sign bit as one, which is placed on left most
side of magnitude.
(−108)10 = (11101100)2
Therefore, the sign-magnitude representation of -108 is 11101100.
1’s complement form:-
The 1’s complement of a number is obtained by complementing all the bits of signed binary
number. So, 1’s complement of positive number gives a negative number. Similarly, 1’s
complement of negative number gives a positive number.
That means, if you perform two times 1’s complement of a binary number including sign bit,
then you will get the original signed binary number.
Example
Consider the negative decimal number -108. The magnitude of this number is 108. We
know the signed binary representation of 108 is 01101100.
It has 8 bits. The MSB of this number is zero, which indicates a positive number.
Complement of zero is one and vice-versa. So, replace zeros by ones and ones by zeros in
order to get the negative number.
(−108)10 = (10010011)2
Therefore, the 1’s complement of (108)10 is (10010011)2.
2’s complement form:-
The 2’s complement of a binary number is obtained by adding one to the 1’s
complement of signed binary number. So, 2’s complement of positive number gives a
negative number. Similarly, 2’s complement of negative number gives a positive number.
That means, if you perform two times 2’s complement of a binary number including sign bit,
then you will get the original signed binary number.
Example
Consider the negative decimal number -108.
We know the 1’s complement of (108)10 is (10010011)2
2's complement of (108)10 = 1's complement of (108)10 + 1.
= 10010011 + 1
= 10010100
Therefore, the 2’s complement of (108)10 is (10010100)2.
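
A small Python sketch of these two operations on a fixed 8-bit width (the function names are illustrative); it reproduces the -108 example above.

# 1's and 2's complement of a magnitude, on a fixed width of 8 bits (as in the -108 example).

WIDTH = 8

def ones_complement(bits):
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    # Add 1 to the 1's complement, keeping only WIDTH bits.
    value = (int(ones_complement(bits), 2) + 1) % (1 << WIDTH)
    return format(value, "0{}b".format(WIDTH))

plus_108 = format(108, "08b")        # 01101100
print(ones_complement(plus_108))     # 10010011  -> 1's complement form of -108
print(twos_complement(plus_108))     # 10010100  -> 2's complement form of -108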
SIGNED BINARY ARITHMETIC:
Addition of two Signed Binary Numbers:-
Consider the two signed binary numbers A & B, which are represented in 2’s complement
form. We can perform the addition of these two numbers, which is similar to the addition of
two unsigned binary numbers. But, if the resultant sum contains carry out from sign bit, then
discard (ignore) it in order to get the correct value.
If resultant sum is positive, you can find the magnitude of it directly. But, if the resultant sum
is negative, then take 2’s complement of it in order to get the magnitude.
Example 1
Let us perform the addition of two decimal numbers +7 and +4 using 2’s complement
method.
The 2’s complement representations of +7 and +4 with 5 bits each are shown below.
(+7)10 = (00111)2
(+4)10 = (00100)2
The addition of these two numbers is
(+7)10 +(+4)10 = (00111)2+(00100)2
⇒(+7)10 +(+4)10 = (01011)2.
The resultant sum contains 5 bits. So, there is no carry out from sign bit. The sign bit ‘0’
indicates that the resultant sum is positive. So, the magnitude of sum is 11 in decimal number
system. Therefore, addition of two positive numbers will give another positive number.
Example 2
Let us perform the addition of two decimal numbers -7 and -4 using 2’s complement method.
The 2’s complement representation of -7 and -4 with 5 bits each are shown below.
(−7)10 = (11001)2
(−4)10 = (11100)2
The addition of these two numbers is
(−7)10 + (−4)10 = (11001)2 + (11100)2
⇒(−7)10 + (−4)10 = (110101)2.
The resultant sum contains 6 bits. In this case, a carry is obtained from the sign bit. So, we can
remove it.
Resultant sum after removing carry is (−7)10 + (−4)10 = (10101)2.
The sign bit ‘1’ indicates that the resultant sum is negative. So, by taking 2’s complement of
it we will get the magnitude of resultant sum as 11 in decimal number system. Therefore,
addition of two negative numbers will give another negative number.
Subtraction of two Signed Binary Numbers:-
Consider the two signed binary numbers A & B, which are represented in 2’s complement
form. We know that 2’s complement of positive number gives a negative number. So,
whenever we have to subtract a number B from number A, then take 2’s complement of B and
add it to A. So, mathematically we can write it as
A - B = A + (2's complement of B)
Similarly, if we have to subtract the number A from number B, then take 2’s complement of
A and add it to B. So, mathematically we can write it as
B - A = B + (2's complement of A)
So, the subtraction of two signed binary numbers is similar to the addition of two signed
binary numbers. But, we have to take 2’s complement of the number, which is supposed to be
subtracted. This is the advantage of 2’s complement technique. Follow, the same rules of
addition of two signed binary numbers.
Example 3
Let us perform the subtraction of two decimal numbers +7 and +4 using 2’s complement
method.
The subtraction of these two numbers is
(+7)10 − (+4)10 = (+7)10 + (−4)10.
The 2’s complement representation of +7 and -4 with 5 bits each are shown below.
(+7)10 = (00111)2
(−4)10 = (11100)2
⇒(+7)10 + (−4)10 = (00111)2 + (11100)2 = (00011)2
Here, a carry is obtained from the sign bit. So, we can remove it. The resultant sum after
removing the carry is
(+7)10 + (−4)10 = (00011)2
The sign bit ‘0’ indicates that the resultant sum is positive. So, the magnitude of it is 3 in
decimal number system. Therefore, subtraction of two decimal numbers +7 and +4 is +3.
Example 4
Let us perform the subtraction of two decimal numbers +4 and +7 using 2’s complement
method.
The subtraction of these two numbers is
(+4)10 − (+7)10 = (+4)10 + (−7)10.
The 2’s complement representation of +4 and -7 with 5 bits each are shown below.
(+4)10 = (00100)2
(-7)10 = (11001)2
⇒(+4)10 + (-7)10 = (00100)2 + (11001)2 = (11101)2
Here, carry is not obtained from sign bit. The sign bit ‘1’ indicates that the resultant sum
is negative. So, by taking 2’s complement of it we will get the magnitude of resultant sum as
3 in decimal number system. Therefore, subtraction of two decimal numbers +4 and +7 is -3.
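
The four examples above can be reproduced with a short Python sketch that works on a fixed 5-bit width and discards any carry out of the sign bit (the helper names encode, decode, add and subtract are illustrative).

# 5-bit 2's complement addition/subtraction, discarding any carry out of the sign bit.

BITS = 5
MASK = (1 << BITS) - 1          # 0b11111

def encode(n):                  # decimal -> 5-bit 2's complement pattern
    return n & MASK

def decode(pattern):            # 5-bit pattern -> signed decimal
    return pattern - (1 << BITS) if pattern & (1 << (BITS - 1)) else pattern

def add(a, b):                  # A + B; carry out of the MSB is discarded by the mask
    return decode((encode(a) + encode(b)) & MASK)

def subtract(a, b):             # A - B = A + (2's complement of B)
    return add(a, -b)

print(add(7, 4))        # 11
print(add(-7, -4))      # -11
print(subtract(7, 4))   # 3
print(subtract(4, 7))   # -3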
CODES:
In coding, when numbers or letters are represented by a specific group of symbols, that
number or letter is said to be encoded. The group of symbols is called a code. Digital
data is represented, stored and transmitted as groups of bits. This group of bits is also
called binary code.
Binary codes can be classified into two types.

 Weighted codes
 Unweighted codes
If the code has positional weights, then it is said to be weighted code. Otherwise, it is an
unweighted code. Weighted codes can be further classified as positively weighted codes and
negatively weighted codes.
Binary Codes for Decimal digits:-
The following table shows the various binary codes for decimal digits 0 to 9.

Decimal Digit 8421 Code 2421 Code 84-2-1 Code Excess 3 Code

0 0000 0000 0000 0011

1 0001 0001 0111 0100

2 0010 0010 0110 0101

3 0011 0011 0101 0110

4 0100 0100 0100 0111

5 0101 1011 1011 1000

6 0110 1100 1010 1001

7 0111 1101 1001 1010

8 1000 1110 1000 1011

9 1001 1111 1111 1100

We have 10 digits in decimal number system. To represent these 10 digits in binary, we


require minimum of 4 bits. But, with 4 bits there will be 16 unique combinations of zeros and
ones. Since, we have only 10 decimal digits, the other 6 combinations of zeros and ones are
not required.
8 4 2 1 code:-
 The weights of this code are 8, 4, 2 and 1.

 This code has all positive weights. So, it is a positively weighted code.
 This code is also called as natural BCD (Binary Coded Decimal) code.
Example
Let us find the BCD equivalent of the decimal number 786. This number has 3 decimal digits
7, 8 and 6. From the table, we can write the BCD (8421) codes of 7, 8 and 6 are 0111, 1000
and 0110 respectively.
∴ (786)10 = (011110000110)BCD
There are 12 bits in BCD representation, since each BCD code of decimal digit has 4 bits.
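
A one-line Python sketch of this digit-by-digit encoding (the function name to_bcd is illustrative):

# Decimal to natural BCD (8421): encode each decimal digit separately with 4 bits.

def to_bcd(number):
    return "".join(format(int(d), "04b") for d in str(number))

print(to_bcd(786))   # 011110000110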
2 4 2 1 code:-
 The weights of this code are 2, 4, 2 and 1.

 This code has all positive weights. So, it is a positively weighted code.
 It is an unnatural BCD code. Sum of weights of unnatural BCD codes is equal to 9.
 It is a self-complementing code. Self-complementing codes provide the 9’s complement
of a decimal number, just by interchanging 1’s and 0’s in its equivalent 2421
representation.
Example
Let us find the 2421 equivalent of the decimal number 786. This number has 3 decimal digits
7, 8 and 6. From the table, we can write the 2421 codes of 7, 8 and 6 are 1101, 1110 and 1100
respectively.
Therefore, the 2421 equivalent of the decimal number 786 is 110111101100.
8 4 -2 -1 code:-
 The weights of this code are 8, 4, -2 and -1.

 This code has negative weights along with positive weights. So, it is a negatively
weighted code.
 It is an unnatural BCD code.
 It is a self-complementing code.
Example
Let us find the 8 4-2-1 equivalent of the decimal number 786. This number has 3 decimal
digits 7, 8 and 6. From the table, we can write the 8 4 -2 -1 codes of 7, 8 and 6 are 1001, 1000
and 1010 respectively.
Therefore, the 8 4 -2 -1 equivalent of the decimal number 786 is 100110001010.
Excess 3 code:-
 This code doesn't have any weights. So, it is an un-weighted code.

 We will get the Excess 3 code of a decimal number by adding three (0011) to the 8421 (BCD)
code of each decimal digit. Hence, it is called the Excess 3 code.
 It is a self-complementing code.
Example
Let us find the Excess 3 equivalent of the decimal number 786. This number has 3 decimal
digits 7, 8 and 6. From the table, we can write the Excess 3 codes of 7, 8 and 6 are 1010, 1011
and 1001 respectively.
Therefore, the Excess 3 equivalent of the decimal number 786 is 101010111001
Gray Code:-
The following table shows the 4-bit Gray codes corresponding to each 4-bit binary code.

Decimal Number Binary Code Gray Code

0 0000 0000

1 0001 0001

2 0010 0011

3 0011 0010

4 0100 0110

5 0101 0111

6 0110 0101

7 0111 0100

8 1000 1100

9 1001 1101

10 1010 1111

11 1011 1110

12 1100 1010

13 1101 1011
14 1110 1001

15 1111 1000

 This code doesn’t have any weights. So, it is an un-weighted code.


 In the above table, successive Gray codes differ in one bit position only. Hence,
this code is called a unit distance code.
Binary code to Gray Code Conversion:-
Follow these steps for converting a binary code into its equivalent Gray code.
 Consider the given binary code and place a zero to the left of MSB.
 Compare the successive two bits starting from zero. If the 2 bits are same, then the output
is zero. Otherwise, output is one.
 Repeat the above step till the LSB of Gray code is obtained.
Example
From the table, we know that the Gray code corresponding to binary code 1000 is 1100. Now,
let us verify it by using the above procedure.
Given, binary code is 1000.
Step 1 − By placing zero to the left of MSB, the binary code will be 01000.
Step 2 − By comparing successive two bits of new binary code, we will get the gray code
as 1100.
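
The same procedure can be written as a short Python sketch (the function name binary_to_gray is illustrative); it reproduces the 1000 → 1100 example.

# Binary code to Gray code: compare successive bits after placing a 0 to the left of the MSB
# (equivalently, each Gray bit is the XOR of adjacent binary bits).

def binary_to_gray(bits):
    prev = "0"
    gray = ""
    for b in bits:
        gray += "0" if b == prev else "1"   # same bits -> 0, different bits -> 1
        prev = b
    return gray

print(binary_to_gray("1000"))   # 1100
print(binary_to_gray("0111"))   # 0100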
ERROR DETECTION & CORRECTION CODES:
We know that the bits 0 and 1 correspond to two different ranges of analog voltages. So,
during transmission of binary data from one system to another, noise may be added. Due to
this, there may be errors in the data received at the other system.
That means a bit 0 may change to 1 or a bit 1 may change to 0. We can’t avoid the
interference of noise. But, we can get back the original data first by detecting whether any
error(s) present and then correcting those errors. For this purpose, we can use the following
codes.

 Error detection codes


 Error correction codes
Error detection codes − are used to detect the error(s) present in the received data (bit
stream). These codes contain some bit(s), which are included (appended) to the original bit
stream. These codes detect the error if it occurred during transmission of the original data
(bit stream). Example − Parity code, Hamming code.
Error correction codes − are used to correct the error(s) present in the received data (bit
stream) so that, we will get the original data. Error correction codes also use the similar
strategy of error detection codes. Example − Hamming code.
Therefore, to detect and correct the errors, additional bit(s) are appended to the data bits at
the time of transmission.
Parity Code:-
It is easy to include (append) one parity bit either to the left of MSB or to the right of LSB of
original bit stream. There are two types of parity codes, namely even parity code and odd
parity code based on the type of parity being chosen.
Even Parity Code
The value of the even parity bit should be zero if an even number of ones is present in the binary
code. Otherwise, it should be one, so that an even number of ones is present in the even parity code.
The even parity code contains the data bits and the even parity bit.
The following table shows the even parity codes corresponding to each 3-bit binary code.
Here, the even parity bit is included to the right of LSB of binary code.

Binary Code Even Parity bit Even Parity Code

000 0 0000

001 1 0011

010 1 0101

011 0 0110

100 1 1001

101 0 1010

110 0 1100

111 1 1111

Here, the number of bits present in the even parity codes is 4. So, the possible even number
of ones in these even parity codes are 0, 2 & 4.
 If the other system receives one of these even parity codes, then there is no error in the
received data. The bits other than even parity bit are same as that of binary code.
 If the other system receives other than even parity codes, then there will be an error(s) in
the received data. In this case, we can’t predict the original binary code because we don’t
know the bit position(s) of error.
Therefore, even parity bit is useful only for detection of error in the received parity code.
But, it is not sufficient to correct the error.
Odd Parity Code
The value of the odd parity bit should be zero if an odd number of ones is present in the binary code.
Otherwise, it should be one, so that an odd number of ones is present in the odd parity code. The odd
parity code contains the data bits and the odd parity bit.
The following table shows the odd parity codes corresponding to each 3-bit binary code.
Here, the odd parity bit is included to the right of LSB of binary code.

Binary Code Odd Parity bit Odd Parity Code

000 1 0001

001 0 0010

010 0 0100

011 1 0111

100 0 1000

101 1 1011

110 1 1101

111 0 1110

Here, the number of bits present in the odd parity codes is 4. So, the possible odd number of
ones in these odd parity codes are 1 & 3.
 If the other system receives one of these odd parity codes, then there is no error in the
received data. The bits other than odd parity bit are same as that of binary code.
 If the other system receives other than odd parity codes, then there is an error(s) in the
received data. In this case, we can’t predict the original binary code because we don’t
know the bit position(s) of error.
Therefore, odd parity bit is useful only for detection of error in the received parity code. But,
it is not sufficient to correct the error.
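
Both parity codes can be generated with a short Python sketch (the function name parity_bit is illustrative); it reproduces the two tables above for every 3-bit code.

# Even and odd parity bits appended to the right of the LSB of a 3-bit code.

def parity_bit(bits, even=True):
    ones = bits.count("1")
    bit = ones % 2 if even else (ones + 1) % 2   # even: make total count of ones even
    return str(bit)

for code in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(code, code + parity_bit(code, even=True), code + parity_bit(code, even=False))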
Hamming Code:-
Hamming code is useful for both detection and correction of error present in the received
data. This code uses multiple parity bits and we have to place these parity bits in the
positions of powers of 2.
The minimum value of 'k' for which the following relation is valid is the required number
of parity bits.
2^k ≥ n + k + 1
Where,
‘n’ is the number of bits in the binary code (information)
‘k’ is the number of parity bits
Therefore, the number of bits in the Hamming code is equal to n + k.
Let the Hamming code be b(n+k) b(n+k-1) ..... b3 b2 b1 and the parity bits be pk, p(k-1), ....., p1. We
can place the 'k' parity bits in powers-of-2 positions only. In the remaining bit positions, we can
place the 'n' bits of the binary code.
Based on requirement, we can use either even parity or odd parity while forming a Hamming
code. But, the same parity technique should be used in order to find whether any error
present in the received data.
Follow this procedure for finding parity bits.
 Find the value of p1, based on the number of ones present in bit positions b3, b5, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have '1' in the place value
of 2^0.
 Find the value of p2, based on the number of ones present in bit positions b3, b6, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have '1' in the place value
of 2^1.
 Find the value of p3, based on the number of ones present in bit positions b5, b6, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have '1' in the place value
of 2^2.
 Similarly, find other values of parity bits.
Follow this procedure for finding check bits.
 Find the value of c1, based on the number of ones present in bit positions b1, b3, b5, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have '1' in the place
value of 2^0.
 Find the value of c2, based on the number of ones present in bit positions b2, b3, b6, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have '1' in the place
value of 2^1.
 Find the value of c3, based on the number of ones present in bit positions b4, b5, b6, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have '1' in the place
value of 2^2.
 Similarly, find other values of check bits.
The decimal equivalent of the check bits in the received data gives the value of bit position,
where the error is present. Just complement the value present in that bit position. Therefore,
we will get the original binary code after removing parity bits.
Example 1
Let us find the Hamming code for binary code, d4d3d2d1 = 1000. Consider even parity bits.
The number of bits in the given binary code is n=4.
We can find the required number of parity bits by using the following mathematical relation.
2^k ≥ n + k + 1
Substitute n = 4 in the above mathematical relation.
⇒ 2^k ≥ 4 + k + 1

⇒ 2^k ≥ 5 + k
The minimum value of k that satisfies the above relation is 3. Hence, we require 3 parity bits
p1, p2, and p3. Therefore, the number of bits in the Hamming code will be 7, since there are 4 bits
in the binary code and 3 parity bits. We have to place the parity bits and the bits of the binary code
in the Hamming code as shown below.
The 7-bit Hamming code is b7 b6 b5 b4 b3 b2 b1 = d4 d3 d2 p3 d1 p2 p1.
By substituting the bits of the binary code, the Hamming code will
be b7 b6 b5 b4 b3 b2 b1 = 1 0 0 p3 0 p2 p1. Now, let us find the parity bits.
p1 = b7 ⊕ b5 ⊕ b3 = 1 ⊕ 0 ⊕ 0 = 1

p2 = b7 ⊕ b6 ⊕ b3 = 1 ⊕ 0 ⊕ 0 = 1

p3 = b7 ⊕ b6 ⊕ b5 = 1 ⊕ 0 ⊕ 0 = 1
By substituting these parity bits, the Hamming code will
be b7 b6 b5 b4 b3 b2 b1 = 1001011.
Example 2
In the above example, we got the Hamming code as b7 b6 b5 b4 b3 b2 b1 = 1001011. Now, let us
find the error position when the code received is b7 b6 b5 b4 b3 b2 b1 = 1001111.
Now, let us find the check bits.
c1 = b7 ⊕ b5 ⊕ b3 ⊕ b1 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1

c2 = b7 ⊕ b6 ⊕ b3 ⊕ b2 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1

c3 = b7 ⊕ b6 ⊕ b5 ⊕ b4 = 1 ⊕ 0 ⊕ 0 ⊕ 1 = 0
The decimal value of the check bits gives the position of the error in the received Hamming code.
c3 c2 c1 = (011)2 = (3)10
Therefore, the error is present in the third bit (b3) of the Hamming code. Just complement the value
present in that bit and remove the parity bits in order to get the original binary code.
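
The two examples above can be checked with a short Python sketch of the (7,4) Hamming code using even parity (the helper names encode and error_position are illustrative).

# Hamming (7,4) with even parity: encode d4 d3 d2 d1 and locate a single-bit error.
# Bit positions follow the text: b7 b6 b5 b4 b3 b2 b1 = d4 d3 d2 p3 d1 p2 p1.

def encode(d4, d3, d2, d1):
    p1 = d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    return [d4, d3, d2, p3, d1, p2, p1]        # b7 ... b1

def error_position(code):      # code = [b7, b6, b5, b4, b3, b2, b1]
    b7, b6, b5, b4, b3, b2, b1 = code
    c1 = b1 ^ b3 ^ b5 ^ b7
    c2 = b2 ^ b3 ^ b6 ^ b7
    c3 = b4 ^ b5 ^ b6 ^ b7
    return c3 * 4 + c2 * 2 + c1                # 0 means no single-bit error detected

sent = encode(1, 0, 0, 0)                      # [1, 0, 0, 1, 0, 1, 1] -> 1001011
received = [1, 0, 0, 1, 1, 1, 1]               # 1001111, with b3 flipped
print(sent, error_position(received))          # reports error at bit position 3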

***
UNIT-3 PRINCIPLES OF LOGIC CIRCUITS-I

LOGIC GATES:
Logic gates are the basic building blocks of any digital system. It is an
electronic circuit having one or more than one input and only one output. The relationship
between the input and the output is based on certain logic. Based on this, logic gates are
named as AND gate, OR gate, NOT gate etc.
A gate can be represented in three ways:
I. Graphical Symbols
II. Algebraic Notation
III. Truth Table
Fundamental Gates:-
1. AND Gate:
The AND gate is a digital logic gate with 'n' i/ps and one o/p, which performs logical conjunction
on its inputs. The output of this gate is true only when all the
inputs are true. When one or more of the AND gate's i/ps are false, the
output of the AND gate is false.

Logic diagram:-

Truth Table:-

2. OR Gate:
The OR gate is a digital logic gate with 'n' i/ps and one o/p, that performs logical
disjunction on its inputs. The output of the OR gate is true
when one or more inputs are true. Only if all the i/ps of the gate are false is the output of
the OR gate false.

Logic diagram:-

Truth Table:-

3. NOT Gate:
The NOT gate is a digital logic gate with one input and one output that performs an inversion
of the input. The output of the NOT gate is the reverse of the input. When the input
of the NOT gate is true, the output will be false, and vice versa.

Logic diagram:-

Truth Table:-

Universal/Derived Gates:-
These two gates (NAND and NOR) are called universal gates because we can derive the
function of any other gate by using only NAND gates or only NOR gates.
1. NAND Gate:
The NAND gate is a digital logic gate with 'n' i/ps and one o/p, that performs the operation
of the AND gate followed by the operation of the NOT gate. The NAND gate is designed by
combining the AND and NOT gates. If all the inputs of the NAND gate are high, then the output of
the gate will be low; otherwise the output is high.

Logic diagram:-

Truth Table:-

2. NOR Gate:
The NOR gate is a digital logic gate with n inputs and one output, that performs the
operation of the OR gate followed by the NOT gate. NOR gate is designed by combining the
OR and NOT gate. When any one of the i/ps of the NOR gate is true, then the output of the
NOR gate will be false.

Logic diagram:-

Truth Table:-
Special Gates:-
1. XOR Gate:
The Exclusive-OR gate is a digital logic gate with two inputs and one output. The short form
of this gate is Ex-OR. It performs based on the operation of OR gate. If any one of the inputs
of this gate is high, then the output of the EX-OR gate will be high.

Logic diagram:-

Truth Table:-

2. XNOR Gate:
The Exclusive-NOR gate is a digital logic gate with two inputs and one output. The short
form of this gate is Ex-NOR. It is based on the operation of the NOR gate. When both the
inputs of this gate are equal, the output of the EX-NOR gate will be high. But, if any one
of the inputs is high (but not both), then the output will be low.
Logic diagram:-

Truth Table:-
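Since the gate figures are not reproduced here, the following short Python sketch (an added illustration, not part of the original circuit diagrams) generates the truth tables of the two-input gates described above:

# Minimal sketch: truth tables for the basic, universal and special gates.
from itertools import product

gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: 1 - (a ^ b),
}

print("A B  " + " ".join(f"{name:>4}" for name in gates))
for a, b in product((0, 1), repeat=2):
    outputs = " ".join(f"{fn(a, b):>4}" for fn in gates.values())
    print(f"{a} {b}  {outputs}")
print("NOT: 0 -> 1, 1 -> 0")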

BOOLEAN ALGEBRA:
Boolean Algebra is an algebra which deals with binary numbers & binary variables. Hence,
it is also called Binary Algebra or Logical Algebra. The mathematician George Boole
developed this algebra in 1854. The variables used in this algebra are also called
Boolean variables.
The range of voltages corresponding to Logic ‘High’ is represented with ‘1’ and the range of
voltages corresponding to logic ‘Low’ is represented with ‘0’.
Boolean Postulates:-
Consider the binary numbers 0 and 1, a Boolean variable (x) and its complement (x’). Either
the Boolean variable or its complement is known as a literal. The four possible logical
OR operations among these literals and binary numbers are shown below.
x+0=x
x+1=1
x+x=x
x + x’ = 1
Similarly, the four possible logical AND operations among those literals and binary numbers
are shown below.
x.1 = x
x.0 = 0
x.x = x
x.x’ = 0
These are the simple Boolean postulates. We can verify these postulates easily, by
substituting the Boolean variable with ‘0’ or ‘1’.
Note − The complement of the complement of any Boolean variable is equal to the variable
itself, i.e., (x’)’ = x.
Basic Laws of Boolean Algebra:-
Following are the three basic laws of Boolean Algebra.

 Commutative law
 Associative law
 Distributive law
Commutative Law:-
If a logical operation of two Boolean variables gives the same result irrespective of the
order of those two variables, then that logical operation is said to be Commutative. The
logical OR & logical AND operations of two Boolean variables x & y are shown below
x+y=y+x
x.y = y.x
The symbol ‘+’ indicates the logical OR operation. Similarly, the symbol ‘.’ indicates the logical
AND operation, and it is optional to write. The commutative law holds for both logical OR &
logical AND operations.
Associative Law:-
If performing a logical operation on any two Boolean variables first, and then performing the same
operation with the remaining variable, gives the same result irrespective of the grouping, then that logical
operation is said to be Associative. The logical OR & logical AND operations of three
Boolean variables x, y & z are shown below.
x + (y + z) = (x + y) + z
x.(y.z) = (x.y).z
The associative law holds for both logical OR & logical AND operations.
Distributive Law:-
If any logical operation can be distributed to all the terms present in the Boolean function,
then that logical operation is said to be Distributive. The distribution of logical OR &
logical AND operations of three Boolean variables x, y & z are shown below.
x.(y + z) = x.y + x.z
x + (y.z) = (x + y).(x + z)
The distributive law holds for both logical OR and logical AND operations.
These are the Basic laws of Boolean algebra. We can verify these laws easily, by substituting
the Boolean variables with ‘0’ or ‘1’.
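As an added illustration, the substitution check mentioned above can be automated; the following Python sketch brute-forces the two distributive-law identities over every 0/1 assignment:

# Minimal sketch: verify both distributive-law identities by substitution.
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    assert x & (y | z) == (x & y) | (x & z)       # x.(y + z) = x.y + x.z
    assert x | (y & z) == (x | y) & (x | z)       # x + (y.z) = (x + y).(x + z)
print("Distributive law verified for all 8 input combinations")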
Theorems of Boolean Algebra:-
The following two theorems are used in Boolean algebra.

 Duality theorem
 DeMorgan’s theorem
Duality Theorem:-
This theorem states that the dual of the Boolean function is obtained by interchanging the
logical AND operator with logical OR operator and zeros with ones. For every Boolean
function, there will be a corresponding Dual function.
Let us make the Boolean equations (relations) that we discussed in the section of Boolean
postulates and basic laws into two groups. The following table shows these two groups.

Group1 Group2

x+0=x x.1 = x

x+1=1 x.0 = 0

x+x=x x.x = x

x + x’ = 1 x.x’ = 0

x+y=y+x x.y = y.x

x + (y + z) = (x + y) + z x.(y.z) = (x.y).z

x.(y + z) = x.y + x.z x + (y.z) = (x + y).(x + z)

In each row, there are two Boolean equations and they are dual to each other. We can verify
all these Boolean equations of Group1 and Group2 by using duality theorem.
DeMorgan’s Theorem:-
This theorem is useful in finding the complement of a Boolean function. It states that the
complement of logical OR of at least two Boolean variables is equal to the logical AND of
each complemented variable.
DeMorgan’s theorem with 2 Boolean variables x and y can be represented as
(x + y)’ = x’.y’
The dual of the above Boolean function is
(x.y)’ = x’ + y’
Therefore, the complement of logical AND of two Boolean variables is equal to the logical
OR of each complemented variable. Similarly, we can apply DeMorgan’s theorem for more
than 2 Boolean variables also.
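The same substitution idea verifies DeMorgan’s theorem; the short Python sketch below (added for illustration) checks both forms for every pair of input values:

# Minimal sketch: verify both forms of DeMorgan's theorem for all 0/1 inputs.
from itertools import product

NOT = lambda v: 1 - v
for x, y in product((0, 1), repeat=2):
    assert NOT(x | y) == NOT(x) & NOT(y)   # (x + y)' = x'.y'
    assert NOT(x & y) == NOT(x) | NOT(y)   # (x.y)'  = x' + y'
print("DeMorgan's theorem verified for all 4 input combinations")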
LOGIC CIRCUITS:
Logic circuits are circuits that simulate human mental processes. Digital circuits are
logical circuits. Logic circuits use two different values of a physical quantity, usually voltage,
to represent the Boolean values true (or 1) and false (or 0). Logic circuits can have inputs and
they have one or more outputs that are, at least partially, dependent on their inputs. In logic
circuit diagrams, connections from one circuit’s output to another circuit’s input are often
shown with an arrowhead at the input end.
In terms of their behaviour, logic circuits are much like programming language functions or
methods. Their inputs are analogous to function parameters and their outputs are analogous
to function returned values. However, a logic circuit can have multiple outputs.
Types of logic circuits:-
1. Combinational logic circuits:-
 Output depends only on its current inputs.
 A combinational circuit may contain an arbitrary number of logic gates and inverters
but no feedback loops.
 A feedback loop is a connection that propagates the output of a gate back into the
input of that same gate.
 The function of a combinational circuit represented by a logic diagram is formally
described using logic expressions and truth tables.
2. Sequential logic circuits:-
 Output depends not only on the current inputs but also on the past sequences of inputs.
 Sequential logic circuits contain combinational logic in addition to memory elements
formed with feedback loops.
 The behaviour of sequential circuits is formally described with state transition tables
and diagrams.
COMBINATIONAL CIRCUITS:
A combinational circuit is a circuit in which we combine different gates, for
example an encoder, decoder, multiplexer or demultiplexer. Some of the characteristics of
combinational circuits are as follows −
 The output of combinational circuit at any instant of time, depends only on the levels
present at input terminals.
 The combinational circuit does not use any memory. The previous state of the input does not
have any effect on the present state of the circuit.
 A combinational circuit can have n inputs and m outputs.

Half Adder:-
A half adder is a combinational logic circuit with two inputs and two outputs. The half adder
circuit is designed to add two single-bit binary numbers A and B. It is the basic building
block for the addition of two single-bit numbers. This circuit has two outputs: carry and sum.
Truth Table:-

Circuit Diagram:-

Full Adder:-
The full adder is developed to overcome the drawback of the half adder circuit. It can add two one-
bit numbers A and B, and a carry c. The full adder is a three-input and two-output
combinational circuit.

Truth Table:-
Circuit Diagram:-
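As an added illustration, both adders can be written directly from their truth tables; the Python sketch below uses XOR for the sum output and AND/OR for the carry output:

# Minimal sketch: half adder and full adder as Boolean functions of single bits.
def half_adder(a, b):
    s = a ^ b            # sum bit
    carry = a & b        # carry bit
    return s, carry

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

print(half_adder(1, 1))      # (0, 1)
print(full_adder(1, 1, 1))   # (1, 1)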

N-Bit Parallel Adder:-


The full adder is capable of adding only two single-bit binary numbers along with a carry
input. But in practice we need to add binary numbers which are much longer than just one
bit. To add two n-bit binary numbers we need to use the n-bit parallel adder. It uses a number
of full adders in cascade. The carry output of the previous full adder is connected to the carry
input of the next full adder.
4 Bit Parallel Adder:-
In the block diagram, A0 and B0 represent the LSBs of the four-bit words A and B. Hence Full
Adder-0 is the lowest stage, and its Cin has been permanently made 0. The rest of the
connections are exactly the same as those of the n-bit parallel adder shown in the figure. The four-bit
parallel adder is a very common logic circuit.
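For illustration, a 4-bit ripple-carry adder can be modelled by cascading the full-adder equations; in the Python sketch below the bit lists are assumed to be LSB first:

# Minimal sketch: 4-bit ripple-carry (parallel) adder. Bit lists are LSB first.
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def ripple_adder(A, B, cin=0):
    sum_bits, carry = [], cin
    for a, b in zip(A, B):               # the carry of each stage feeds the next stage
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 0111 (7) + 0011 (3) = 1010 (10), LSB first
print(ripple_adder([1, 1, 1, 0], [1, 1, 0, 0]))   # ([0, 1, 0, 1], 0)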
N-Bit Parallel Subtractor:-
The subtraction can be carried out by taking the 1's or 2's complement of the number to be
subtracted. For example we can perform the subtraction (A-B) by adding either 1's or 2's
complement of B to A. That means we can use a binary adder to perform the binary
subtraction.
4 Bit Parallel Subtractor:-
The number to be subtracted (B) is first passed through inverters to obtain its 1's
complement. The 4-bit adder then adds A and the 2's complement of B (the 1's complement plus an
input carry of 1) to produce the subtraction. S3 S2 S1 S0 represents the result of the binary
subtraction (A-B) and the carry output Cout represents the polarity of the result. If A ≥ B then
Cout = 1 and the result is in true binary form; if A < B then Cout = 0 and the result is in the 2's complement form.
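Assuming the carry input of the lowest stage is tied to 1 (which turns the 1's complement of B into its 2's complement), the parallel subtractor can be sketched on top of the ripple adder as follows (an added illustration):

# Minimal sketch: 4-bit parallel subtractor A - B using 2's complement addition.
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def ripple_subtractor(A, B):
    diff, carry = [], 1                          # Cin = 1 turns the 1's complement into the 2's complement
    for a, b in zip(A, B):
        s, carry = full_adder(a, 1 - b, carry)   # 1 - b models the inverter on each B bit
        diff.append(s)
    return diff, carry                           # carry = 1 -> A >= B (true binary result)

# 0110 (6) - 0011 (3) = 0011 (3), LSB first, Cout = 1
print(ripple_subtractor([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 1, 0, 0], 1)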

Half Subtractors:-
A half subtractor is a combinational circuit with two inputs and two outputs (difference and
borrow). It produces the difference between the two binary bits at the input and also
produces an output (borrow) to indicate if a 1 has been borrowed. In the subtraction (A-B),
A is called the minuend bit and B is called the subtrahend bit.
Truth Table:-
Circuit Diagram:-

Full Subtractors:-
The disadvantage of a half subtractor is overcome by the full subtractor. The full subtractor is a
combinational circuit with three inputs A, B, C and two outputs D and C'. A is the 'minuend', B
is the 'subtrahend', C is the 'borrow' produced by the previous stage, D is the difference output
and C' is the borrow output.
Truth Table:-

Circuit Diagram:-
Multiplexers:-
A multiplexer is a special type of combinational circuit. There are n data inputs, one output and
m select inputs with 2^m = n. It is a digital circuit which selects one of the n data inputs and
routes it to the output. The selection of one of the n inputs is done by the select inputs.
Depending on the digital code applied at the select inputs, one out of the n data sources is
selected and transmitted to the single output Y. E is called the strobe or enable input, which is
useful for cascading. It is generally an active-low terminal, which means it will perform the
required operation when it is low.

Multiplexers come in multiple variations

 2 : 1 multiplexer
 4 : 1 multiplexer
 16 : 1 multiplexer
 32 : 1 multiplexer
Block Diagram:-

Truth Table:-
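For illustration, the selection behaviour of a 4 : 1 multiplexer with select inputs S1 S0 and an active-low enable E (names assumed here) can be modelled as:

# Minimal sketch: 4:1 multiplexer with an active-low enable E.
def mux4to1(data, s1, s0, e=0):
    # data is a list [D0, D1, D2, D3]; output is forced to 0 when E is high (disabled)
    if e == 1:
        return 0
    return data[(s1 << 1) | s0]

print(mux4to1([0, 1, 0, 1], s1=1, s0=1))   # selects D3 -> 1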

Demultiplexers:-
A demultiplexer performs the reverse operation of a multiplexer, i.e. it receives one input and
distributes it over several outputs. It has only one input, n outputs and m select inputs. At a time
only one output line is selected by the select lines and the input is transmitted to the selected
output line. A demultiplexer is equivalent to a single-pole multiple-way switch as shown in
fig.
Demultiplexers comes in multiple variations.

 1 : 2 demultiplexer
 1 : 4 demultiplexer
 1 : 16 demultiplexer
 1 : 32 demultiplexer
Block diagram:-
Truth Table:-
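A 1 : 4 demultiplexer can be sketched in the same style (again an added illustration with assumed signal names):

# Minimal sketch: 1:4 demultiplexer - route the single input to one of four outputs.
def demux1to4(din, s1, s0):
    outputs = [0, 0, 0, 0]
    outputs[(s1 << 1) | s0] = din
    return outputs

print(demux1to4(1, s1=1, s0=0))   # input appears on output Y2 -> [0, 0, 1, 0]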

Decoder:-
A decoder is a combinational circuit. It has n inputs and a maximum of m = 2^n outputs. A
decoder is identical to a demultiplexer without any data input. It performs operations which
are exactly opposite to those of an encoder.

Examples of Decoders are following.

 Code converters
 BCD to seven segment decoders
2 to 4 Line Decoder:-
The block diagram of a 2 to 4 line decoder is shown in the fig. A and B are the two inputs
while D0 through D3 are the four outputs. The truth table explains the operation of the decoder:
each output is 1 for only a specific combination of inputs.

Truth Table:-
Logic Circuit:-
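A 2 to 4 line decoder drives exactly one output high for each input combination; a short Python sketch of this behaviour (added for illustration, with A taken as the MSB of the select code) is:

# Minimal sketch: 2-to-4 line decoder. Exactly one output is 1 per input combination.
def decoder2to4(a, b):
    outputs = [0, 0, 0, 0]
    outputs[(a << 1) | b] = 1
    return outputs

for a in (0, 1):
    for b in (0, 1):
        print(a, b, decoder2to4(a, b))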

Encoder:-
Encoder is a combinational circuit which is designed to perform the inverse operation of the
decoder. An encoder has n number of input lines and m number of output lines. An encoder
produces an m bit binary code corresponding to the digital input number. The encoder
accepts an n input digital word and converts it into an m bit another digital word.

Examples of Encoders are following.

 Priority encoders
 Decimal to BCD encoder
 Octal to binary encoder
 Hexadecimal to binary encoder
Priority Encoder:-
This is a special type of encoder. Priority is given to the input lines. If two or more input lines
are 1 at the same time, then the input line with the highest priority will be considered. There are
four inputs D0, D1, D2, D3 and two outputs Y0, Y1. Out of the four inputs, D3 has the highest
priority and D0 has the lowest priority. That means if D3 = 1 then Y1 Y0 = 11 irrespective of
the other inputs. Similarly if D3 = 0 and D2 = 1 then Y1 Y0 = 10 irrespective of the other
inputs.

Truth Table:-

Logic Circuit:-
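The priority behaviour described above (D3 highest, D0 lowest) can be sketched as follows, purely as an illustration:

# Minimal sketch: 4-to-2 priority encoder, D3 has the highest priority.
def priority_encoder(d0, d1, d2, d3):
    if d3: return (1, 1)      # Y1 Y0
    if d2: return (1, 0)
    if d1: return (0, 1)
    return (0, 0)             # D0 active (or no input) -> 00

print(priority_encoder(1, 1, 1, 0))   # D2 wins over D1 and D0 -> (1, 0)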
CANONICAL AND STANDARD FORMS:
We will get four Boolean product terms by combining two variables x and y with logical
AND operation. These Boolean product terms are called as min terms or standard product
terms. The min terms are x’y’, x’y, xy’ and xy.
Similarly, we will get four Boolean sum terms by combining two variables x and y with
logical OR operation. These Boolean sum terms are called as Max terms or standard sum
terms. The Max terms are x + y, x + y’, x’ + y and x’ + y’.
The following table shows the representation of min terms and MAX terms for 2 variables.

X Y Min terms Max terms

0 0 m0=x’y’ M0=x + y

0 1 m1=x’y M1=x + y’

1 0 m2=xy’ M2=x’ + y

1 1 m3=xy M3=x’ + y’

If the binary variable is ‘0’, then it is represented as complement of variable in min term and
as the variable itself in Max term. Similarly, if the binary variable is ‘1’, then it is
represented as complement of variable in Max term and as the variable itself in min term.
From the above table, we can easily notice that min terms and Max terms are complements of
each other. If there are ‘n’ Boolean variables, then there will be 2^n min terms and 2^n Max
terms.
Canonical SoP and PoS forms:-
A truth table consists of a set of inputs and output(s). If there are ‘n’ input variables, then
there will be 2^n possible combinations with zeros and ones. So the value of each output
variable depends on the combination of input variables. So, each output variable will have
‘1’ for some combination of input variables and ‘0’ for some other combination of input
variables.
Therefore, we can express each output variable in following two ways.

 Canonical SoP form


 Canonical PoS form
Canonical SoP form:-
Canonical SoP form means Canonical Sum of Products form. In this form, each product
term contains all literals. So, these product terms are nothing but the min terms. Hence,
canonical SoP form is also called as sum of minterms form.
First, identify the min terms for which, the output variable is one and then do the logical OR
of those min terms in order to get the Boolean expression (function) corresponding to that
output variable. This Boolean function will be in the form of sum of min terms.
Follow the same procedure for other output variables also, if there is more than one output
variable.
Example
Consider the following truth table.

Inputs Output

p q r f

0 0 0 0

0 0 1 0

0 1 0 0

0 1 1 1

1 0 0 0

1 0 1 1
1 1 0 1

1 1 1 1

Here, the output (f) is ‘1’ for four combinations of inputs. The corresponding min terms are
p’qr, pq’r, pqr’, pqr. By doing logical OR of these four min terms, we will get the Boolean
function of output (f).
Therefore, the Boolean function of output is, f = p’qr + pq’r + pqr’ + pqr. This is
the canonical SoP form of output, f. We can also represent this function in following two
notations.

f=m3+m5+m6+m7

f=∑m(3,5,6,7)
In one equation, we represented the function as sum of respective min terms. In other
equation, we used the symbol for summation of those min terms.
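As an added illustration, a sum-of-min-terms function can be evaluated directly from its min term list; the sketch below reproduces the truth table of f = ∑m(3, 5, 6, 7):

# Minimal sketch: evaluate a canonical SoP function given as a set of min terms.
from itertools import product

def sop(minterms, p, q, r):
    index = (p << 2) | (q << 1) | r        # row number of the truth table
    return 1 if index in minterms else 0

for p, q, r in product((0, 1), repeat=3):
    print(p, q, r, sop({3, 5, 6, 7}, p, q, r))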
Canonical PoS form:-
Canonical PoS form means Canonical Product of Sums form. In this form, each sum term
contains all literals. So, these sum terms are nothing but the Max terms. Hence, canonical
PoS form is also called as product of Max terms form.
First, identify the Max terms for which, the output variable is zero and then do the logical
AND of those Max terms in order to get the Boolean expression (function) corresponding to
that output variable. This Boolean function will be in the form of product of Max terms.
Follow the same procedure for other output variables also, if there is more than one output
variable.
Example
Consider the same truth table of previous example. Here, the output (f) is ‘0’ for four
combinations of inputs. The corresponding Max terms are p + q + r, p + q + r’, p + q’ + r, p’
+ q + r. By doing logical AND of these four Max terms, we will get the Boolean function of
output (f).
Therefore, the Boolean function of output is, f = (p + q + r).(p + q + r’).(p + q’ + r).(p’ + q +
r). This is the canonical PoS form of output, f. We can also represent this function in
following two notations.

f=M0.M1.M2.M4
f=∏M(0,1,2,4)
In one equation, we represented the function as product of respective Max terms. In other
equation, we used the symbol for multiplication of those Max terms.
The Boolean function, f = (p + q + r).(p + q + r’).(p + q’ + r).(p’ + q + r) is the dual of the
Boolean function, f = p’qr + pq’r + pqr’ + pqr.
Therefore, both canonical SoP and canonical PoS forms are Dual to each other.
Functionally, these two forms are same. Based on the requirement, we can use one of these
two forms.
Standard SoP and PoS forms:-
We discussed two canonical forms of representing the Boolean output(s). Similarly, there
are two standard forms of representing the Boolean output(s). These are the simplified
version of canonical forms.

 Standard SoP form


 Standard PoS form
The main advantage of standard forms
is that the number of inputs applied to logic gates can be minimized. Sometimes, there will
also be a reduction in the total number of logic gates required.
Standard SoP form:-
Standard SoP form means Standard Sum of Products form. In this form, each product term
need not contain all literals. So, the product terms may or may not be the min terms.
Therefore, the Standard SoP form is the simplified form of canonical SoP form.
We will get Standard SoP form of output variable in two steps.

 Get the canonical SoP form of output variable


 Simplify the above Boolean function, which is in canonical SoP form.
Follow the same procedure for other output variables also, if there is more than one output
variable. Sometimes, it may not be possible to simplify the canonical SoP form. In that case,
both canonical and standard SoP forms are the same.
Example
Convert the following Boolean function into Standard SoP form.
f = p’qr + pq’r + pqr’ + pqr
The given Boolean function is in canonical SoP form. Now, we have to simplify this
Boolean function in order to get standard SoP form.
Step 1 − Use the Boolean postulate, x + x = x. That means, the Logical OR operation with
any Boolean variable ‘n’ times will be equal to the same variable. So, we can write the last
term pqr two more times.
⇒ f = p’qr + pq’r + pqr’ + pqr + pqr + pqr
Step 2 − Use the Distributive law for the 1st and 4th terms, 2nd and 5th terms, and 3rd and 6th terms.
⇒ f = qr(p’ + p) + pr(q’ + q) + pq(r’ + r)
Step 3 − Use Boolean postulate, x + x’ = 1 for simplifying the terms present in each
parenthesis.
⇒ f = qr(1) + pr(1) + pq(1)
Step 4 − Use Boolean postulate, x.1 = x for simplifying above three terms.
⇒ f = qr + pr + pq
⇒ f = pq + qr + pr
This is the simplified Boolean function. Therefore, the standard SoP form corresponding to
given canonical SoP form is f = pq + qr + pr
Standard PoS form:-
Standard PoS form means Standard Product of Sums form. In this form, each sum term
need not contain all literals. So, the sum terms may or may not be the Max terms. Therefore,
the Standard PoS form is the simplified form of canonical PoS form.
We will get Standard PoS form of output variable in two steps.

 Get the canonical PoS form of output variable


 Simplify the above Boolean function, which is in canonical PoS form.
Follow the same procedure for other output variables also, if there is more than one output
variable. Sometimes, it may not be possible to simplify the canonical PoS form. In that case,
both canonical and standard PoS forms are the same.
Example
Convert the following Boolean function into Standard PoS form.
f = (p + q + r).(p + q + r’).(p + q’ + r).(p’ + q + r)
The given Boolean function is in canonical PoS form. Now, we have to simplify this
Boolean function in order to get standard PoS form.
Step 1 − Use the Boolean postulate, x.x = x. That means, the Logical AND operation with
any Boolean variable ‘n’ times will be equal to the same variable. So, we can write the first
term p+q+r two more times.
⇒ f = (p + q + r).(p + q + r).(p + q + r).(p + q + r’).(p +q’ + r).(p’ + q + r)
Step 2 − Use the Distributive law, x + (y.z) = (x + y).(x + z), for the 1st and 4th parentheses, 2nd and
5th parentheses, and 3rd and 6th parentheses.
⇒ f = (p + q + rr’).(p + r + qq’).(q + r + pp’)
Step 3 − Use Boolean postulate, x.x’=0 for simplifying the terms present in each
parenthesis.
⇒ f = (p + q + 0).(p + r + 0).(q + r + 0)
Step 4 − Use Boolean postulate, x + 0 = x for simplifying the terms present in each
parenthesis
⇒ f = (p + q).(p + r).(q + r)
⇒ f = (p + q).(q + r).(p + r)
This is the simplified Boolean function. Therefore, the standard PoS form corresponding to
given canonical PoS form is f = (p + q).(q + r).(p + r). This is the dual of the Boolean
function, f = pq + qr + pr.
Therefore, both Standard SoP and Standard PoS forms are Dual to each other.

MINIMIZATION OF GATES
We can simplify the Boolean functions using Boolean postulates and theorems. It is a time
consuming process and we have to re-write the simplified expressions after each step.
To overcome this difficulty, Karnaugh introduced a method for simplification of Boolean
functions in an easy way. This method is known as Karnaugh map method or K-map
method. It is a graphical method, which consists of 2^n cells for ‘n’ variables. Adjacent
cells differ in only a single bit position.
K-Maps for 2 to 5 Variables:-
K-Map method is most suitable for minimizing Boolean functions of 2 variables to 5
variables. Now, let us discuss about the K-Maps for 2 to 5 variables one by one.
2 Variable K-Map:-
The number of cells in 2 variable K-map is four, since the number of variables is two. The
following figure shows 2 variable K-Map.

 There is only one possibility of grouping 4 adjacent min terms.


 The possible combinations of grouping 2 adjacent min terms are {(m0, m1), (m2, m3),
(m0, m2) and (m1, m3)}.
3 Variable K-Map:-
The number of cells in 3 variable K-map is eight, since the number of variables is three. The
following figure shows 3 variable K-Map.

 There is only one possibility of grouping 8 adjacent min terms.


 The possible combinations of grouping 4 adjacent min terms are {(m0, m1, m3, m2), (m4,
m5, m7, m6), (m0, m1, m4, m5), (m1, m3, m5, m7), (m3, m2, m7, m6) and (m2, m0, m6, m4)}.
 The possible combinations of grouping 2 adjacent min terms are {(m0, m1), (m1, m3), (m3,
m2), (m2, m0), (m4, m5), (m5, m7), (m7, m6), (m6, m4), (m0, m4), (m1, m5), (m3, m7) and (m2,
m6)}.
 If x=0, then 3 variable K-map becomes 2 variable K-map.
4 Variable K-Map:-
The number of cells in 4 variable K-map is sixteen, since the number of variables is four.
The following figure shows 4 variable K-Map.

 There is only one possibility of grouping 16 adjacent min terms.


 Let R1, R2, R3 and R4 represent the min terms of the first row, second row, third row and
fourth row respectively. Similarly, C1, C2, C3 and C4 represent the min terms of the first
column, second column, third column and fourth column respectively. The possible
combinations of grouping 8 adjacent min terms are {(R1, R2), (R2, R3), (R3, R4), (R4, R1),
(C1, C2), (C2, C3), (C3, C4), (C4, C1)}.
 If w=0, then 4 variable K-map becomes 3 variable K-map.
5 Variable K-Map:-
The number of cells in 5 variable K-map is thirty-two, since the number of variables is 5.
The following figure shows 5 variable K-Map.

 There is only one possibility of grouping 32 adjacent min terms.


 There are two possibilities of grouping 16 adjacent min terms. i.e., grouping of min terms
from m0 to m15 and m16 to m31.
 If v=0, then 5 variable K-map becomes 4 variable K-map.
In all the above K-maps, we used exclusively the min terms notation. Similarly, you can use
exclusively the Max terms notation.
Minimization of Boolean Functions using K-Maps:-
If we consider the combinations of inputs for which the Boolean function is ‘1’, then we will
get the Boolean function, which is in standard sum of products form after simplifying the
K-map.
Similarly, if we consider the combination of inputs for which the Boolean function is ‘0’,
then we will get the Boolean function, which is in standard product of sums form after
simplifying the K-map.
Follow these rules for simplifying K-maps in order to get standard sum of products form.
 Select the respective K-map based on the number of variables present in the Boolean
function.
 If the Boolean function is given as sum of min terms form, then place the ones at
respective min term cells in the K-map. If the Boolean function is given as sum of
products form, then place the ones in all possible cells of K-map for which the given
product terms are valid.
 Check for the possibilities of grouping the maximum number of adjacent ones. The group sizes should be
powers of two. Start from the highest power of two and go down to the least power of two. The highest
power is equal to the number of variables considered in the K-map and the least power is zero.
 Each grouping will give either a literal or one product term. It is known as a prime
implicant. A prime implicant is said to be an essential prime implicant if at least one
‘1’ is covered only by that grouping and by no other grouping.
 Note down all the prime implicants and essential prime implicants. The simplified
Boolean function contains all essential prime implicants and only the required prime
implicants.
Note 1 − If outputs are not defined for some combination of inputs, then those output values
will be represented with don’t care symbol ‘x’. That means, we can consider them as either
‘0’ or ‘1’.
Note 2 − If don’t care terms also present, then place don’t cares ‘x’ in the respective cells of
K-map. Consider only the don’t cares ‘x’ that are helpful for grouping maximum number of
adjacent ones. In those cases, treat the don’t care value as ‘1’.
Example
Let us simplify the following Boolean function, f(W, X, Y, Z)= WX’Y’ + WY +
W’YZ’ using K-map.
The given Boolean function is in sum of products form. It is having 4 variables W, X, Y &
Z. So, we require 4 variable K-map. The 4 variable K-map with ones corresponding to the
given product terms is shown in the following figure.
Here, 1s are placed in the following cells of K-map.
 The cells, which are common to the intersection of Row 4 and columns 1 & 2 are
corresponding to the product term, WX’Y’.
 The cells, which are common to the intersection of Rows 3 & 4 and columns 3 & 4 are
corresponding to the product term, WY.
 The cells, which are common to the intersection of Rows 1 & 2 and column 4 are
corresponding to the product term, W’YZ’.
There are no possibilities of grouping either 16 adjacent ones or 8 adjacent ones. There are
three possibilities of grouping 4 adjacent ones. After these three groupings, there is no single
one left ungrouped. So, we need not check for grouping of 2 adjacent ones. The 4
variable K-map with these three groupings is shown in the following figure.

Here, we got three prime implicants WX’, WY & YZ’. All these prime implicants
are essential because of following reasons.
 Two ones (m8 & m9) of fourth row grouping are not covered by any other groupings.
Only fourth row grouping covers those two ones.
 Single one (m15) of square shape grouping is not covered by any other groupings. Only
the square shape grouping covers that one.
 Two ones (m2 & m6) of fourth column grouping are not covered by any other groupings.
Only fourth column grouping covers those two ones.
Therefore, the simplified Boolean function is
f = WX’ + WY + YZ’
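As an added check (not part of the original notes), the simplified expression can be compared with the original sum of products for all sixteen input combinations:

# Minimal sketch: confirm that the K-map result equals the original function
# f(W, X, Y, Z) = WX'Y' + WY + W'YZ' for every input combination.
from itertools import product

for w, x, y, z in product((0, 1), repeat=4):
    original   = (w & (1 - x) & (1 - y)) | (w & y) | ((1 - w) & y & (1 - z))
    simplified = (w & (1 - x)) | (w & y) | (y & (1 - z))
    assert original == simplified
print("f = WX' + WY + YZ' matches the original function")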
Follow these rules for simplifying K-maps in order to get standard product of sums form.
 Select the respective K-map based on the number of variables present in the Boolean
function.
 If the Boolean function is given as product of Max terms form, then place the zeroes at
respective Max term cells in the K-map. If the Boolean function is given as product of
sums form, then place the zeroes in all possible cells of K-map for which the given sum
terms are valid.
 Check for the possibilities of grouping maximum number of adjacent zeroes. It should be
powers of two. Start from highest power of two and up to least power of two. Highest
power is equal to the number of variables considered in K-map and least power is zero.
 Each grouping will give either a literal or one sum term. It is known as a prime implicant.
A prime implicant is said to be an essential prime implicant if at least one ‘0’ is
covered only by that grouping and by no other grouping.
 Note down all the prime implicants and essential prime implicants. The simplified
Boolean function contains all essential prime implicants and only the required prime
implicants.
Note − If don’t care terms also present, then place don’t cares ‘x’ in the respective cells of
K-map. Consider only the don’t cares ‘x’ that are helpful for grouping maximum number of
adjacent zeroes. In those cases, treat the don’t care value as ‘0’.
Example:-
Let us simplify the following Boolean
function, f(X, Y, Z) = ∏M(0, 1, 2, 4), using K-map.
The given Boolean function is in product of Max terms form. It is having 3 variables X, Y &
Z. So, we require 3 variable K-map. The given Max terms are M0, M1, M2 & M4. The
3 variable K-map with zeroes corresponding to the given Max terms is shown in the
following figure.

There are no possibilities of grouping either 8 adjacent zeroes or 4 adjacent zeroes. There are
three possibilities of grouping 2 adjacent zeroes. After these three groupings, there is no
single zero left as ungrouped. The 3 variable K-map with these three groupings is shown
in the following figure.
Here, we got three prime implicants X + Y, Y + Z & Z + X. All these prime implicants
are essential because one zero in each grouping is not covered by any other groupings
except with their individual groupings.
Therefore, the simplified Boolean function is
f = (X + Y).(Y + Z).(Z + X)
In this way, we can easily simplify the Boolean functions up to 5 variables using K-map
method. For more than 5 variables, it is difficult to simplify the functions using K-Maps.
This is because the number of cells in the K-map doubles with each new variable.
Due to this, checking and grouping of adjacent ones (min terms) or adjacent zeros (Max
terms) becomes complicated. We will discuss the Tabular method in the next chapter to overcome
the difficulties of the K-map method.

PROGRAMMABLE LOGIC DEVICE


Programmable Logic Devices (PLDs) are integrated circuits. They contain an array of
AND gates & another array of OR gates. There are three kinds of PLDs based on the type of
array(s) which has the programmable feature.

 Programmable Read Only Memory

 Programmable Array Logic

 Programmable Logic Array
The process of entering the information into these devices is known as programming.
Basically, users can program these devices or ICs electrically in order to implement the
Boolean functions based on the requirement. Here, the term programming refers to hardware
programming but not software programming.
Programmable Read Only Memory (PROM):-
Read Only Memory (ROM) is a memory device, which stores the binary information
permanently. That means, we can’t change that stored information by any means later. If the
ROM has programmable feature, then it is called as Programmable ROM (PROM). The
user has the flexibility to program the binary information electrically once by using PROM
programmer.
PROM is a programmable logic device that has fixed AND array & Programmable OR
array. The block diagram of PROM is shown in the following figure.
Here, the inputs of the AND gates are not of the programmable type. So, we have to generate
2^n product terms by using 2^n AND gates having n inputs each. We can implement these
product terms by using an n x 2^n decoder. So, this decoder generates all 2^n min terms.
Here, the inputs of OR gates are programmable. That means, we can program any number of
required product terms, since all the outputs of AND gates are applied as inputs to each OR
gate. Therefore, the outputs of PROM will be in the form of sum of min terms.
Example:-
Let us implement the following Boolean functions using PROM.
A(X, Y, Z) = ∑m(5, 6, 7)
B(X, Y, Z) = ∑m(3, 5, 6, 7)
The given two functions are in sum of min terms form and each function is having three
variables X, Y & Z. So, we require a 3 to 8 decoder and two programmable OR gates for
producing these two functions. The corresponding PROM is shown in the following figure.

Here, 3 to 8 decoder generates eight min terms. The two programmable OR gates have the
access of all these min terms. But, only the required min terms are programmed in order to
produce the respective Boolean functions by each OR gate. The symbol ‘X’ is used for
programmable connections.
Programmable Logic Array (PLA):-
PLA is a programmable logic device that has both Programmable AND array &
Programmable OR array. Hence, it is the most flexible PLD. The block diagram of PLA is
shown in the following figure.

Here, the inputs of AND gates are programmable. That means each AND gate has both
normal and complemented inputs of variables. So, based on the requirement, we can
program any of those inputs. So, we can generate only the required product terms by using
these AND gates.
Here, the inputs of the OR gates are also programmable. So, we can program any number of
required product terms, since all the outputs of the AND gates are applied as inputs to each OR
gate. Therefore, the outputs of the PLA will be in sum of products form.
Example:-
Let us implement the following Boolean functions using PLA.
A = XY + XZ′
B = XY′ + YZ + XZ′
The given two functions are in sum of products form. The number of product terms present
in the given Boolean functions A & B are two and three respectively. One product
term, XZ′, is common to each function.
So, we require four programmable AND gates & two programmable OR gates for producing
those two functions. The corresponding PLA is shown in the following figure.
The programmable AND gates have access to both the normal and complemented inputs
of the variables. In the above figure, the inputs X, X′, Y, Y′, Z & Z′ are available at the
inputs of each AND gate. So, program only the required literals in order to generate one
product term by each AND gate.
product term by each AND gate.
All these product terms are available at the inputs of each programmable OR gate. But,
only program the required product terms in order to produce the respective Boolean
functions by each OR gate. The symbol ‘X’ is used for programmable connections.

***

UNIT-4 PRINCIPLE OF LOGIC CIRCUITS-II


SEQUENTIAL LOGIC CIRCUIT

The combinational circuit does not use any memory. Hence the previous state of the input does
not have any effect on the present state of the circuit. But a sequential circuit has memory, so its
output can vary based on the input history. This type of circuit uses previous inputs, outputs, a clock and a
memory element.

Flip Flop:-
Flip flop is a sequential circuit which generally samples its inputs and changes its outputs
only at particular instants of time and not continuously. Flip flop is said to be edge sensitive
or edge triggered rather than being level triggered like latches.

S-R Flip-Flop :-
It is basically an S-R latch using NAND gates with an additional enable input. It is also called
a level-triggered SR-FF. For this circuit, a change in output will take place if and only if the enable
input (E) is made active.

In short this circuit will operate as an S-R latch if E = 1 but there is no change in the output if
E = 0.

Circuit Diagram:-

Truth Table:-
Operation:-

S.N. Condition Operation

1 S = R = 0 : No change If S = R = 0 then output of NAND gates 3 and 4 are forced to


become 1.

Hence R' and S' both will be equal to 1. Since S' and R' are the
input of the basic S-R latch using NAND gates, there will be
no change in the state of outputs.

2 S = 0, R = 1, E = 1 Since S = 0, output of NAND-3 i.e. R' = 1 and E = 1 the output


of NAND-4 i.e. S' = 0.

Hence Qn+1 = 0 and Qn+1 bar = 1. This is reset condition.

3 S = 1, R = 0, E = 1 Output of NAND-3 i.e. R' = 0 and output of NAND-4 i.e. S' =


1.

Hence the output of the S-R NAND latch is Qn+1 = 1 and Qn+1 bar = 0.


This is the set condition.

4 S = 1, R = 1, E = 1 As S = 1, R = 1 and E = 1, the output of NAND gates 3 and 4


both are 0 i.e. S' = R' = 0. Hence the Race condition will occur
in the basic NAND latch.

Master Slave JK Flip Flop :-


Master slave JK FF is a cascade of two S-R FF with feedback from the output of second to
input of first. Master is a positive level triggered. But due to the presence of the inverter in
the clock line, the slave will respond to the negative level. Hence when the clock = 1
(positive level) the master is active and the slave is inactive. Whereas when clock = 0 (low
level) the slave is active and master is inactive.

Circuit Diagram:-
Truth Table:-

Operation:-

S.N. Condition Operation

1 J = K = 0 (No change) When clock = 0, the slave becomes active and master is
inactive. But since the S and R inputs have not changed, the
slave outputs will also remain unchanged. Therefore outputs
will not change if J = K =0.

2 J = 0 and K = 1 (Reset) Clock = 1 − Master active, slave inactive. Therefore outputs


of the master become Q1 = 0 and Q1 bar = 1. That means S =
0 and R =1.

Clock = 0 − Slave active, master inactive. Therefore outputs


of the slave become Q = 0 and Q bar = 1.

Again clock = 1 − Master active, slave inactive. Therefore


even with the changed outputs Q = 0 and Q bar = 1 fed back
to master, its output will be Q1 = 0 and Q1 bar = 1. That
means S = 0 and R = 1.

Hence with clock = 0 and slave becoming active the outputs


of slave will remain Q = 0 and Q bar = 1. Thus we get a
stable output from the Master slave.

3 J = 1 and K = 0 (Set) Clock = 1 − Master active, slave inactive. Therefore outputs


of the master become Q1 = 1 and Q1 bar = 0. That means S =
1 and R =0.
Clock = 0 − Slave active, master inactive. Therefore outputs
of the slave become Q = 1 and Q bar = 0.

Again clock = 1 − then it can be shown that the outputs of the


slave are stabilized to Q = 1 and Q bar = 0.

4 J = K = 1 (Toggle) Clock = 1 − Master active, slave inactive. Outputs of master


will toggle. So S and R also will be inverted.

Clock = 0 − Slave active, master inactive. Outputs of slave


will toggle.

These changed output are returned back to the master inputs.


But since clock = 0, the master is still inactive. So it does not
respond to these changed outputs. This avoids the multiple
toggling which leads to the race around condition. The
master slave flip flop will avoid the race around condition.

Delay Flip Flop / D Flip Flop :-


Delay Flip Flop or D Flip Flop is the simple gated S-R latch with a NAND inverter
connected between the S and R inputs. It has only one input. The input data appears at the
output after some time. Due to this data delay between i/p and o/p, it is called a delay flip flop.
S and R will always be complements of each other due to the NAND inverter. Hence the input
conditions S = R = 0 and S = R = 1 can never appear, which avoids the problems of the SR = 00 and
SR = 11 conditions.

Circuit Diagram:-

Truth Table:-
Operation:-

S.N. Condition Operation

1 E=0 Latch is disabled. Hence no change in output.

2 E = 1 and D = 0 If E = 1 and D = 0 then S = 0 and R = 1. Hence irrespective of the


present state, the next state is Qn+1 = 0 and Qn+1 bar = 1. This is the
reset condition.

3 E = 1 and D = 1 If E = 1 and D = 1, then S = 1 and R = 0. This will set the latch and
Qn+1 = 1 and Qn+1 bar = 0 irrespective of the present state.

Toggle Flip Flop / T Flip Flop :-


The toggle flip flop is basically a JK flip flop with the J and K terminals permanently connected
together. It has only one input, denoted by T, as shown in the Symbol Diagram. The symbol for a
positive edge-triggered T flip flop is shown in the Block Diagram.

Symbol Diagram:-

Block Diagram:-

Truth Table:-
Operation:-

S.N. Condition Operation

1 T = 0, J = K = 0 The output Q and Q bar won't change

2 T = 1, J = K = 1 Output will toggle corresponding to every leading edge of clock


signal.

Edge-Triggered Flip-Flop:-
An edge-triggered flip-flop changes states either at the positive edge (rising edge) or at the
negative edge (falling edge) of the clock pulse on the control input. The three basic types are
introduced here: S-R, J-K and D.

In the logic symbol of each type, notice the
small triangle on the clock input, called the dynamic input indicator, which is
used to identify an edge-triggered flip-flop.

Positive edge-triggered (without bubble at Clock input):


S-R, J-K, and D.

Negative edge-triggered (with bubble at Clock input):


S-R, J-K, and D.
The S-R, J-K and D inputs are called synchronous inputs because data on these inputs are
transferred to the flip-flop's output only on the triggering edge of the clock pulse. On the
other hand, the direct set (SET) and clear (CLR) inputs are called asynchronous inputs, as
they are inputs that affect the state of the flip-flop independent of the clock. For the
synchronous operations to work properly, these asynchronous inputs must both be kept LOW.

Edge-triggered S-R flip-flop:-

The basic operation is illustrated below, along with the truth table for this type of flip-flop.
The operation and truth table for a negative edge-triggered flip-flop are the same as those for
a positive except that the falling edge of the clock pulse is the triggering edge.
As S = 1, R = 0. Flip-flop SETS
on the rising clock edge.

Note that the S and R inputs can be changed at any time when the clock input is LOW or
HIGH (except for a very short interval around the triggering transition of the clock) without
affecting the output. This is illustrated in the timing diagram below:

Edge-triggered J-K flip-flop:-

The J-K flip-flop works very similarly to the S-R flip-flop. The only difference is that this flip-flop
has NO invalid state. The outputs toggle (change to the opposite state) when both the J and K
inputs are HIGH. The truth table is shown below.

Edge-triggered D flip-flop:-

The operation of a D flip-flop is much simpler. It has only one input in addition to the
clock. It is very useful when a single data bit (0 or 1) is to be stored. If there is a HIGH on
the D input when a clock pulse is applied, the flip-flop SETs and stores a 1. If there is a
LOW on the D input when a clock pulse is applied, the flip-flop RESETs and stores a 0. The
truth table below summarizes the operations of the positive edge-triggered D flip-flop. As
before, the negative edge-triggered flip-flop works the same except that the falling edge of
the clock pulse is the triggering edge.
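As an added illustration, the edge-triggered D and J-K behaviour can be modelled with a small simulation that updates the state only on the rising clock edge (positive edge triggering is assumed here):

# Minimal sketch: positive edge-triggered D and J-K flip-flops as state machines.
class DFlipFlop:
    def __init__(self):
        self.q, self.prev_clk = 0, 0
    def tick(self, d, clk):
        if self.prev_clk == 0 and clk == 1:   # act on the rising edge only
            self.q = d
        self.prev_clk = clk
        return self.q

class JKFlipFlop:
    def __init__(self):
        self.q, self.prev_clk = 0, 0
    def tick(self, j, k, clk):
        if self.prev_clk == 0 and clk == 1:
            if j and k:       self.q = 1 - self.q   # toggle
            elif j and not k: self.q = 1            # set
            elif k and not j: self.q = 0            # reset
        self.prev_clk = clk
        return self.q

jk = JKFlipFlop()
for clk in (0, 1, 0, 1, 0, 1):                # J = K = 1: output toggles on each rising edge
    print(jk.tick(1, 1, clk), end=" ")        # prints: 0 1 1 0 0 1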
EXCITATION TABLE
The truth table of a flip-flop specifies the next state when the inputs and the present state are known. During the
design of sequential circuits, we know the required transition from the present state to the next state and need to find
the FF input conditions that will cause the required transition. For this reason we need a table
that lists the required input combinations for a given change of state. Such a table is called a
flip-flop excitation table.

Excitation Table

DIGITAL REGISTERS
A flip-flop is a 1-bit memory cell which can be used for storing digital data. To increase the
storage capacity in terms of number of bits, we have to use a group of flip-flops. Such a group
of flip-flops is known as a Register. An n-bit register consists of n flip-flops
and is capable of storing an n-bit word.

The binary data in a register can be moved within the register from one flip-flop to another.
The registers that allow such data transfers are called as shift registers. There are four
modes of operations of a shift register.

 Serial Input Serial Output


 Serial Input Parallel Output
 Parallel Input Serial Output
 Parallel Input Parallel Output
Serial Input Serial Output :-
Let all the flip-flops be initially in the reset condition, i.e. Q3 = Q2 = Q1 = Q0 = 0. If a
four-bit binary number 1 1 1 1 is entered into the register, this number should be applied
to the Din bit with the LSB applied first. The D input of FF-3, i.e. D3, is connected to the serial
data input Din. The output of FF-3, i.e. Q3, is connected to the input of the next flip-flop, i.e.
D2, and so on.

Block Diagram:-

Operation:-

Before application of clock signal, let Q3 Q2 Q1 Q0 = 0000 and apply LSB bit of the number
to be entered to Din. So Din = D3 = 1. Apply the clock. On the first falling edge of clock, the
FF-3 is set, and stored word in the register is Q3 Q2 Q1Q0 = 1000.

Apply the next bit to Din. So Din = 1. As soon as the next negative edge of the clock hits, FF-
2 will set and the stored word change to Q3 Q2 Q1 Q0 = 1100.

Apply the next bit to be stored i.e. 1 to Din. Apply the clock pulse. As soon as the third
negative clock edge hits, FF-1 will be set and output will be modified to Q3 Q2 Q1 Q0 = 1110.
Similarly with Din = 1 and with the fourth negative clock edge arriving, the stored word in
the register is Q3 Q2 Q1 Q0 = 1111.
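The bit-by-bit loading traced above can be reproduced with a small Python sketch (added for illustration; the register contents are modelled as a list Q3 Q2 Q1 Q0):

# Minimal sketch: 4-bit serial-in serial-out shift register, clocked four times.
def shift_in(register, din):
    # the new bit enters at Q3; every other bit moves one stage to the right
    return [din] + register[:-1]

q = [0, 0, 0, 0]                 # Q3 Q2 Q1 Q0, initially reset
for bit in (1, 1, 1, 1):         # LSB of the word 1111 applied first
    q = shift_in(q, bit)
    print(q)                     # 1000 -> 1100 -> 1110 -> 1111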

Truth Table:-

Waveforms:-

Serial Input Parallel Output :-


 In such types of operations, the data is entered serially and taken out in parallel fashion.
 Data is loaded bit by bit. The outputs are disabled as long as the data is loading.

 As soon as the data loading gets completed, all the flip-flops contain their required data,
the outputs are enabled so that all the loaded data is made available over all the output
lines at the same time.

 4 clock cycles are required to load a four bit word. Hence the speed of operation of SIPO
mode is same as that of SISO mode.

Block Diagram:-

Parallel Input Serial Output (PISO) :-


 Data bits are entered in parallel fashion.

 The circuit shown below is a four bit parallel input serial output register.

 Output of previous Flip Flop is connected to the input of the next one via a combinational
circuit.

 The binary input word B0, B1, B2, B3 is applied though the same combinational circuit.

 There are two modes in which this circuit can work namely - shift mode or load mode.

Load mode:-

When the shift/load bar line is low (0), the AND gates 2, 4 and 6 become active and they will pass
the B1, B2, B3 bits to the corresponding flip-flops. On the low-going edge of the clock, the binary
input B0, B1, B2, B3 will get loaded into the corresponding flip-flops. Thus parallel loading
takes place.

Shift mode:-

When the shift/load bar line is high (1), the AND gates 2, 4 and 6 become inactive. Hence
parallel loading of the data becomes impossible. But the AND gates 1, 3 and 5 become active.
Therefore the data is shifted from left to right bit by bit on application of clock pulses. Thus
the parallel-in serial-out operation takes place.

Block Diagram:-
Parallel Input Parallel Output (PIPO) :-
In this mode, the 4 bit binary input B0, B1, B2, B3 is applied to the data inputs D0, D1, D2,
D3 respectively of the four flip-flops. As soon as a negative clock edge is applied, the input
binary bits will be loaded into the flip-flops simultaneously. The loaded bits will appear
simultaneously at the output side. Only a single clock pulse is essential to load all the bits.

Block Diagram:-

Bidirectional Shift Register:-


 If a binary number is shifted left by one position then it is equivalent to multiplying the
original number by 2. Similarly if a binary number is shifted right by one position then it
is equivalent to dividing the original number by 2.

 Hence if we want to use the shift register to multiply and divide the given binary number,
then we should be able to move the data in either left or right direction.

 Such a register is called a bi-directional register. A four-bit bi-directional shift register is shown in the fig.

 There are two serial inputs namely the serial right shift data input DR, and the serial left
shift data input DL along with a mode select input (M).

Block Diagram:-
Operation:-

S.N. Condition Operation

1 With M = 1 − Shift right operation If M = 1, then the AND gates 1, 3, 5 and 7 are
enabled whereas the remaining AND gates 2, 4,
6 and 8 will be disabled.

The data at DR is shifted to right bit by bit from


FF-3 to FF-0 on the application of clock pulses.
Thus with M = 1 we get the serial right shift
operation.

2 With M = 0 − Shift left operation When the mode control M is connected to 0 then
the AND gates 2, 4, 6 and 8 are enabled while 1,
3, 5 and 7 are disabled.

The data at DL is shifted left bit by bit from FF-0


to FF-3 on the application of clock pulses. Thus
with M = 0 we get the serial left shift operation.

Universal Shift Register :-


A shift register which can shift the data in only one direction is called a uni-directional shift
register. A shift register which can shift the data in both directions is called a bi-directional
shift register. Applying the same logic, a shift register which can shift the data in both
directions as well as load it parallely, is known as a universal shift register. The shift register
is capable of performing the following operation −

 Parallel loading
 Left shifting
 Right shifting
The mode control input is connected to logic 1 for parallel loading operation whereas it is
connected to 0 for serial shifting. With mode control pin connected to ground, the universal
shift register acts as a bi-directional register. For serial left operation, the input is applied to
the serial input which goes to AND gate-1 shown in figure. Whereas for the shift right
operation, the serial input is applied to D input.

Block Diagram:-

DIGITAL COUNTERS
A counter is a sequential circuit. A digital circuit which is used for counting pulses is known as a
counter. Counters are the widest application of flip-flops. A counter is a group of flip-flops with a clock
signal applied. Counters are of two types.

 Asynchronous or ripple counters.


 Synchronous counters.
Asynchronous or ripple counters :-
The logic diagram of a 2-bit ripple up counter is shown in the figure. Toggle (T) flip-flops are
being used. But we can use JK flip-flops as well, with J and K connected permanently to logic
1. External clock is applied to the clock input of flip-flop A and QA output is applied to the
clock input of the next flip-flop i.e. FF-B.

Logical Diagram:-

Operation:-
S.N. Condition Operation

1 Initially let both the FFs be in the reset state QBQA = 00 initially

2 After 1st negative clock edge As soon as the first negative clock
edge is applied, FF-A will toggle and
QA will be equal to 1.

QA is connected to clock input of FF-


B. Since QA has changed from 0 to 1,
it is treated as the positive clock edge
by FF-B. There is no change in
QBbecause FF-B is a negative edge
triggered FF.

QBQA = 01 after the first clock pulse.

3 After 2nd negative clock edge On the arrival of second negative


clock edge, FF-A toggles again and
QA = 0.

The change in QA acts as a negative


clock edge for FF-B. So it will also
toggle, and QBwill be 1.

QBQA = 10 after the second clock


pulse.

4 After 3rd negative clock edge On the arrival of the 3rd negative clock
edge, FF-A toggles again and
QA becomes 1 from 0.

Since this is a positive going change,


FF-B does not respond to it and
remains inactive. So QB does not
change and continues to be equal to 1.

QBQA = 11 after the third clock pulse.

5 After 4th negative clock edge On the arrival of the 4th negative clock
edge, FF-A toggles again and
QA becomes 0 from 1.

This negative change in QAacts as


clock pulse for FF-B. Hence it toggles
to change QBfrom 1 to 0.

QBQA = 00 after the fourth clock


pulse.
Truth Table:-
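The count sequence traced above can be reproduced with a short simulation of two toggle flip-flops (added here purely as an illustration; each flip-flop is modelled as toggling on the falling edge of its own clock input):

# Minimal sketch: 2-bit ripple up counter from negative edge-triggered T flip-flops.
class TFlipFlop:
    def __init__(self):
        self.q, self.prev_clk = 0, 0
    def clock(self, clk):
        if self.prev_clk == 1 and clk == 0:   # toggle on the falling edge
            self.q = 1 - self.q
        self.prev_clk = clk
        return self.q

ff_a, ff_b = TFlipFlop(), TFlipFlop()
for pulse in range(4):
    for clk in (1, 0):                        # one external clock pulse (high, then low)
        qa = ff_a.clock(clk)
        qb = ff_b.clock(qa)                   # QA drives the clock input of FF-B
    print(f"QB QA = {qb}{qa}")                # 01, 10, 11, 00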

Synchronous counters :-
If the "clock" pulses are applied to all the flip-flops in a counter simultaneously, then such a
counter is called as synchronous counter.

2-bit Synchronous up counter :-

The JA and KA inputs of FF-A are tied to logic 1. So FF-A will work as a toggle flip-flop.
The JB and KB inputs are connected to QA.

Logical Diagram:-

Operation:-

S.N. Condition Operation

1 Initially let both the FFs be in the reset state QBQA = 00 initially.

2 After 1st negative clock edge As soon as the first negative clock
edge is applied, FF-A will toggle and
QA will change from 0 to 1.

But at the instant of application of


the negative clock edge, QA = 0, hence JB = KB = 0.
Hence FF-B will not change its state.
So QB will remain 0.

QBQA = 01 after the first clock pulse.


3 After 2nd negative clock edge On the arrival of second negative
clock edge, FF-A toggles again and
QA changes from 1 to 0.

But at this instant QA was 1. So JB =


KB= 1 and FF-B will toggle. Hence
QB changes from 0 to 1.

QBQA = 10 after the second clock


pulse.

4 After 3rd negative clock edge On application of the third falling


clock edge, FF-A will toggle from 0 to
1 but there is no change of state for
FF-B.

QBQA = 11 after the third clock pulse.

5 After 4th negative clock edge On application of the next clock pulse,
QA will change from 1 to 0 and QB will
also change from 1 to 0.

QBQA = 00 after the fourth clock


pulse.

Classification of counters:-
Depending on the way in which the counting progresses, the synchronous or asynchronous
counters are classified as follows −

 Up counters
 Down counters
 Up/Down counters

UP/DOWN Counter
Up counter and down counter is combined together to obtain an UP/DOWN counter. A mode
control (M) input is also provided to select either up or down mode. A combinational circuit
is required to be designed and used between each pair of flip-flop in order to achieve the
up/down operation.

Type of up/down counters:-

 UP/DOWN ripple counters


 UP/DOWN synchronous counter

1.UP/DOWN Ripple Counters


In the UP/DOWN ripple counter all the FFs operate in the toggle mode. So either T flip-flops
or JK flip-flops are to be used. The LSB flip-flop receives the clock directly. But the clock to
every other FF is obtained from the Q (or Q bar) output of the previous FF.

 UP counting mode (M=0) − The Q output of the preceding FF is connected to the clock of
the next stage if up counting is to be achieved. For this mode, the mode select input M is at
logic 0 (M=0).

 DOWN counting mode (M=1) − If M = 1, then the Q bar output of the preceding FF is
connected to the next FF. This will operate the counter in the DOWN counting mode.

Example:

3-bit binary up/down ripple counter.

 3-bit − hence three FFs are required.

 UP/DOWN − so a mode control input is essential.

 For a ripple up counter, the Q output of preceding FF is connected to the clock input of the
next one.


 For a ripple down counter, the Q bar output of preceding FF is connected to the clock input
of the next one.

 Let the selection of Q and Q bar output of the preceding FF be controlled by the mode
control input M such that, If M = 0, UP counting. So connect Q to CLK. If M = 1, DOWN
counting. So connect Q bar to CLK.

Block Diagram

Truth Table
Operation

S.N. Condition Operation

1 Case 1 − With M = 0 (Up counting mode) If M = 0 and M bar = 1, then the AND
gates 1 and 3 in fig. will be enabled
whereas the AND gates 2 and 4 will be
disabled.

Hence QA gets connected to the clock


input of FF-B and QBgets connected to the
clock input of FF-C.

These connections are same as those for


the normal up counter. Thus with M = 0
the circuit work as an up counter.

2 Case 2: With M = 1 (Down counting mode) If M = 1, then AND gates 2 and 4 in fig.
are enabled whereas the AND gates 1 and
3 are disabled.

Hence QA bar gets connected to the clock


input of FF-B and QB bar gets connected
to the clock input of FF-C.

These connections will produce a down


counter. Thus with M = 1 the circuit works
as a down counter.

Modulus Counter (MOD-N Counter) :-


The 2-bit ripple counter is called a MOD-4 counter and the 3-bit ripple counter is called a
MOD-8 counter. So in general, an n-bit ripple counter is called a modulo-N counter, where
the MOD number = 2^n.

Type of modulus

 2-bit up or down (MOD-4)
 3-bit up or down (MOD-8)


 4-bit up or down (MOD-16)
Application of counters

 Frequency counters
 Digital clock
 Time measurement
 A to D converter
 Frequency divider circuits
 Digital triangular wave generator.

***

UNIT-5 BASIC COMPUTER ORGANISATION

THE MEMORY SYSTEM


MEMORY HIERARCHY
A memory is just like a human brain. It is used to store data and instructions. Computer
memory is the storage space in the computer, where data is to be processed and instructions
required for processing are stored. The memory is divided into a large number of small parts
called cells. Each location or cell has a unique address, which varies from zero to the memory
size minus one. For example, if the computer has 64k words, then this memory unit has 64 *
1024 = 65536 memory locations. The address of these locations varies from 0 to 65535.
Memory is primarily of two types :
 Internal Memory − cache memory and primary/main memory

 External Memory − magnetic disk / optical disk etc.

Characteristics of the Memory Hierarchy as we go from top to bottom are the following:

 Capacity in terms of storage increases.


 Cost per bit of storage decreases.
 Frequency of access of the memory by the CPU decreases.
 Access time by the CPU increases.

Properties of good memory :


 Fast
 Large
 Inexpensive
 All three together are not possible in a single memory, which is why a memory hierarchy is used.
PRIMARY MEMORY (MAIN MEMORY)
Primary memory holds only those data and instructions on which the computer is currently
working. It has a limited capacity and data is lost when power is switched off. It is generally
made up of semiconductor devices. These memories are not as fast as registers. The data and
instruction required to be processed resides in the main memory. It is divided into two
subcategories RAM and ROM.

Characteristics of Main Memory :

 These are semiconductor memories.


 It is known as the main memory.
 Usually volatile memory.
 Data is lost in case power is switched off.
 It is the working memory of the computer.
 Faster than secondary memories.
 A computer cannot run without the primary memory.

RANDOM ACCESS MEMORY (RAM)


RAM (Random Access Memory) is the internal memory of the CPU for storing data,
program, and program results. It is a read/write memory which stores data as long as the machine
is working. As soon as the machine is switched off, the data is erased.

Access time in RAM is independent of the address, that is, each storage location inside the
memory is as easy to reach as other locations and takes the same amount of time. Data in the
RAM can be accessed randomly but it is very expensive.

RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a
power failure. Hence, a backup Uninterruptible Power System (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can
hold.

RAM is of two types −

 Static RAM (SRAM)


 Dynamic RAM (DRAM)

Static RAM (SRAM):-


The word static indicates that the memory retains its contents as long as power is being
supplied; being volatile, however, its data is lost when the power is removed. SRAM
chips use a matrix of six transistors per cell and no capacitors. The transistors do not leak
charge, so SRAM need not be refreshed on a regular basis.
There is extra space in the matrix, hence SRAM uses more chips than DRAM for the same
amount of storage space, making the manufacturing costs higher. SRAM is thus used as
cache memory and has very fast access.

Characteristic of Static RAM :

 Long life
 No need to refresh
 Faster
 Used as cache memory
 Large size
 Expensive
 High power consumption
Dynamic RAM (DRAM)
DRAM, unlike SRAM, must be continually refreshed in order to maintain the data. This is
done by placing the memory on a refresh circuit that rewrites the data several hundred times
per second. DRAM is used for most system memory as it is cheap and small. All DRAMs are
made up of memory cells, which are composed of one capacitor and one transistor.

Characteristics of Dynamic RAM

 Short data lifetime


 Needs to be refreshed continuously
 Slower as compared to SRAM
 Used as RAM
 Smaller in size
 Less expensive
 Less power consumption

READ ONLY MEMORY (ROM)


ROM stands for Read Only Memory. It is the memory from which we can only read but cannot
write. This type of memory is non-volatile. The information is stored permanently in
such memories during manufacture. A ROM stores such instructions that are required to start
a computer. This operation is referred to as bootstrap. ROM chips are not only used in the
computer but also in other electronic items like washing machine and microwave oven.

The various types of ROMs and their characteristics are:

MROM (Masked ROM)


The very first ROMs were hard-wired devices that contained a pre-programmed set of data
or instructions. These kind of ROMs are known as masked ROMs, which are inexpensive.

PROM (Programmable Read Only Memory)


PROM is read-only memory that can be modified only once by a user. The user buys a blank
PROM and enters the desired contents using a PROM program. Inside the PROM chip, there
are small fuses which are burnt open during programming. It can be programmed only once
and is not erasable.

EPROM (Erasable and Programmable Read Only Memory)


EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes.
Usually, an EPROM eraser achieves this function. During programming, an electrical charge
is trapped in an insulated gate region. The charge is retained for more than 10 years because
the charge has no leakage path. For erasing this charge, ultra-violet light is passed through a
quartz crystal window (lid). This exposure to ultra-violet light dissipates the charge. During
normal use, the quartz lid is sealed with a sticker.

EEPROM (Electrically Erasable and Programmable Read Only Memory)


EEPROM is programmed and erased electrically. It can be erased and reprogrammed about
ten thousand times. Both erasing and programming take about 4 to 10 ms (millisecond). In
EEPROM, any location can be selectively erased and programmed. EEPROMs can be erased
one byte at a time, rather than erasing the entire chip. Hence, the process of reprogramming
is flexible but slow.

Advantages of ROM
The advantages of ROM are as follows −

 Non-volatile in nature
 Cannot be accidentally changed
 Cheaper than RAMs
 Easy to test
 More reliable than RAMs
 Static and do not require refreshing
 Contents are always known and can be verified

SECONDARY MEMORY

This type of memory is also known as external memory or non-volatile. It is slower than the
main memory. These are used for storing data/information permanently. CPU directly does
not access these memories, instead they are accessed via input-output routines. The contents
of secondary memories are first transferred to the main memory, and then the CPU can access
it. For example, disk, CD-ROM, DVD, etc.

Characteristics of Secondary Memory

 These are magnetic and optical memories.


 It is known as the backup memory.
 It is a non-volatile memory.
 Data is permanently stored even if power is switched off.
 It is used for storage of data in a computer.
 Computer may run without the secondary memory.
 Slower than primary memories.
FLASH MEMORY

Flash memory is a form of semiconductor memory that is widely used for many electronic data
storage applications.

Although first developed in the 1980s, the use of flash memory has grown rapidly in recent
years as it forms the basis of many memory products.

Flash memory can be seen in many forms today, including USB memory sticks and digital camera
memory cards such as CompactFlash and Secure Digital (SD) cards. In addition, flash memory
storage is used in many other items, from MP3 players to mobile phones, and in many other
applications.

There are also different flash memory types and these different types are each suited to their
own applications.

What Is Flash memory?

Flash memory storage is a form of non-volatile memory that was born out of a combination
of the traditional EPROM and E2PROM.

In essence it uses the same method of programming as the standard EPROM and the erasure
method of the E2PROM.

One of the main advantages that flash memory has when compared to EPROM is its ability to
be erased electrically. However it is not possible to erase each cell in a flash memory
individually unless a large amount of additional circuitry is added into the chip. This would
add significantly to the cost and accordingly most manufacturers dropped this approach in
favour of a system whereby the whole chip, or a large part of it is block or flash erased -
hence the name.

Today most flash memory chips have selective erasure, allowing parts or sectors of the flash
memory to be erased. However any erasure still means that a significant section of the chip
has to be erased.

Flash memory advantages & disadvantages

As with any technology there are various advantages and disadvantages. It is necessary to
consider all of these when determining the optimum type of memory to be used.

Flash Memory Advantages:
 Non-volatile memory
 Easily portable (e.g. USB memory sticks)
 Mechanically robust

Flash Memory Disadvantages:
 Higher cost per bit than hard drives
 Slower than other forms of memory
 Limited number of write / erase cycles
 Data must be erased before new data can be written
 Data typically erased and written in blocks

Flash memory types

There are two basic types of Flash memory. Although they use the same basic technology,
the way they are addressed for reading and writing is slightly different. The two flash
memory types are:

1. NAND Flash memory: NAND Flash memories have a different structure to NOR
memories. This type of flash memory is accessed much like block devices such as hard
disks. When NAND Flash memories are to be read, the contents must first be paged into
memory-mapped RAM. This makes the presence of a memory management unit essential.
2. NOR Flash memory: NOR Flash memory is able to read individual flash memory cells,
and as such it behaves like a traditional ROM in this mode. For the erase and write
functions, commands are written to the first page of the mapped memory, as defined in
"common flash interface" created by Intel.

NAND / NOR tradeoff: NAND Flash memories and NOR Flash memories can be used for
different applications. However some systems will use a combination of both types of Flash
memory. The NOR memory type is used as ROM and the NAND memory is partitioned with
a file system and used as a random access storage area.

HARD DISK DRIVE

The hard disk drive is the main, and usually largest, data storage hardware device in a
computer.

The operating system, software titles, and most other files are stored in the hard disk drive.

The Hard Disk Drive is Also Known As

HDD (abbreviation), hard drive, hard disk, fixed drive, fixed disk, fixed disk drive

Important Hard Disk Drive Facts

The hard drive is sometimes referred to as the "C drive" due to the fact that Microsoft
Windows designates the "C" drive letter to the primary partition on the primary hard drive in
a computer by default.
While this is not a technically correct term to use, it is still common. For example, some
computers have multiple drive letters (e.g. C, D, E) representing areas across one or more
hard drives.

Popular Hard Disk Drive Manufacturers

Seagate, Western Digital, Hitachi

Hard Disk Drive Description

A hard drive is usually the size of a paperback book but much heavier.

The sides of the hard drive have pre-drilled, threaded holes for easy mounting in the 3.5-inch
drive bay in the computer case. Mounting is also possible in a larger 5.25-inch drive bay with
an adapter. The hard drive is mounted so the end with the connections faces inside the
computer.

The back end of the hard drive contains a port for a cable that connects to the motherboard.
The type of cable used will depend on the type of drive but is almost always included with a
hard drive purchase. Also here is a connection for power from the power supply.

Most hard drives also have jumper settings on the back end that define how the motherboard
is to recognize the drive when more than one is present. These settings vary from drive to
drive so check with your hard drive manufacturer for details.

Common Hard Disk Drive Tasks

Some common things we might do that involve a hard disk drive:

 Test a Hard Drive


 Replace a Hard Drive
 Format a Hard Drive
 Partition a Hard Drive
 Change a Hard Drive's Letter

OPTICAL MEMORIES

In Optical Memory, data is stored on an optical medium (i.e., CD-ROM or DVD), and read
with a laser beam. While not currently practical for use in computer processing, optical
memory is an ideal solution for storing large quantities of data very inexpensively, and more
importantly, transporting that data between computer devices.

I. CD:-
 circular discs
 4.75 in (12 cm) in diameter
 developed by Philips and Sony in 1980
 Initially for audio
 1985 CD-ROM (Compact Disc Read Only Memory)
 can hold 720 MB(80 min audio)= 500 floppy disks or 200,000 pages of text.
Advantages of CD-ROM:
o Large Storage Capacity
o Portability
o Sturdiness

Disadvantages of CD-ROM:
o cannot be updated
o access time longer
II.DVD:-
 Digital Versatile Disk (Formerly Digital Video Disk)
 More capacity than CDs while having the same dimensions.
 developed by Philips, Sony, Toshiba, and Panasonic in 1995.
 An extremely high capacity compact disc capable of storing from 4.7 GB to 17
GB
III.Blue Ray Disk:-
 Blu-ray Disc (official abbreviation BD) is an optical disc storage medium
designed to replace the DVD format.
 The standard physical medium is a 12 cm plastic optical disc, the same size as
DVDs and CDs.
 Blu-Ray Discs contain 25 GB per layer, with dual layer discs (50 GB) the
norm for feature-length video discs and additional layers possible later.
CCDs

Stands for "Charged Coupled Device." CCDs are sensors used in digital cameras and video
cameras to record still and moving images. The CCD captures light and converts it to digital
data that is recorded by the camera. For this reason, a CCD is often considered the digital
version of film.

The quality of an image captured by a CCD depends on the resolution of the sensor. In digital
cameras, the resolution is measured in megapixels (millions of pixels). Therefore, an
8MP digital camera can capture twice as much information as a 4MP camera. The result is a
larger photo with more detail.

CCDs in video cameras are usually measured by physical size. For example, most consumer
digital cameras use a CCD around 1/6 or 1/5 of an inch in size. More expensive cameras may
have CCDs 1/3 of an inch in size or larger. The larger the sensor, the more light it can
capture, meaning it will produce better video in low light settings. Professional digital video
cameras often have three sensors, referred to as "3CCD," which use separate CCDs for
capturing red, green, and blue hues.

BUBBLE MEMORY

Bubble memory is a type of non-volatile computer memory that uses a thin film of a
magnetic material to hold small magnetized areas, known as bubbles or domains, each storing
one bit of data. Andrew Bobeck invented the Bubble Memory in 1970. His development of
the magnetic core memory and the development of the twistor memory put him in a good
position for the development of Bubble Memory.

It is conceptually a stationary disk with spinning bits. The unit, only a couple of square inches
in size, contains a thin film magnetic recording layer. Globular-shaped bubbles (bits) are
electromagnetically generated in circular strings inside this layer. In order to read or write the
bubbles, they are rotated past the equivalent of a read/write head.

One of the limitations of bubble memory was its slow access. A large bubble memory would
require large loops, so accessing a bit required cycling through a huge number of other bits
first.

RAID AND ITS LEVEL

 Stands for Redundant Array of Independent Disks.


 It’s a technology that enables greater levels of performance, reliability and/or large
volumes when dealing with data. This is possible by concurrent use of two or more
‘hard disk drives’.
 A set of disk drives treated as one logical drive
 Data are distributed over the drives
 Redundant capacity is used for parity, allowing for data repair.

Levels of RAID:

o 6 levels of RAID (0-5) have been accepted by industry


o Other kinds have been proposed in literature
o Level 2 and 4 are not commercially available, they are included for clarity

Mean time between failures (MTBF):- the average operating time before a failure occurs; it is
the reliability measure referred to in the analysis of the RAID levels below.

RAID 0:- It splits data among two or more disks.

It provides good performance.

Lack of data redundancy means there is no fail over support with this configuration.

In the diagram, the odd blocks are written to disk 0 and the even blocks to disk 1 such that
A1, A2, A3, A4, … would be the order of blocks read if read sequentially from the
beginning.

Used in read only NFS systems and gaming systems.

RAID 0 analysis:-

Failure Rate:

The failure rate of RAID 0 grows roughly in proportion to the number of disks in the array
(equivalently, its MTBF falls as disks are added).

If Pr(disk fail) = 5%, then

Pr(at least one fails) = 1 - Pr(none fails) = 1 - (1 - 0.05)^2 = 9.75%

Performance:

The fragments are written to their respective disks simultaneously on the same sector.
This allows smaller sections of the entire chunk of data to be read off the drive in
parallel, hence good performance.

RAID 1:-

RAID1 is ‘data mirroring’.

Two copies of the data are held on two physical disks, and the data is always identical.

Twice as many disks are required to store the same data when compared to RAID 0.

Array continues to operate so long as at least one drive is functioning.


RAID 1 analysis:-

Failure Rate:

If Pr(disk fail) = 5%, then the probability of both the drives failing in a 2 disk array is
P(both fail) = (0.05)^2 = 0.25%.

Performance:

If we use independent disk controllers for each disk, then we can increase the read or
write speeds by doing operations in parallel.
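The two failure-rate figures above can be reproduced with a short calculation. The sketch below
assumes independent disk failures and a two-disk array, matching the numbers quoted for RAID 0
and RAID 1:

# Sketch reproducing the RAID 0 and RAID 1 failure probabilities quoted above,
# assuming independent failures and Pr(single disk fails) = 5%.
p_disk_fail = 0.05
n_disks = 2

# RAID 0: the array is lost if at least one disk fails.
p_raid0 = 1 - (1 - p_disk_fail) ** n_disks
print(f"RAID 0: {p_raid0:.2%}")    # 9.75%

# RAID 1: the array is lost only if both mirrored disks fail.
p_raid1 = p_disk_fail ** n_disks
print(f"RAID 1: {p_raid1:.2%}")    # 0.25%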

RAID 5:-

RAID 5 is an ideal combination of good performance, good fault tolerance and high capacity
and storage efficiency.

It uses an arrangement of parity and CRC information to help rebuild drive data in case of disk failures.

“Distributed Parity” is the key word here.

RAID 5 analysis :-

MTBF is better than RAID 0, because the failure of a single disk does not by itself cause data
loss; data is lost only if two or more disks fail.
Performance is also as good as RAID 0, if not better, since parallel blocks of data can be
read and written.
One of the drawbacks is that every write involves parity calculations by the RAID
controller, so write operations are slower compared to RAID 0.
RAID 5 is useful for general purpose workloads where reads are more frequent than writes.
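The distributed-parity idea can be illustrated with XOR arithmetic. The sketch below (block
values chosen only for illustration) shows how a lost data block is rebuilt from the surviving
blocks and the parity block:

# Sketch of the XOR parity used by RAID 5: the parity block is the XOR of the
# data blocks, so any single missing block can be rebuilt from the others.
d0, d1, d2 = 0b10110010, 0b01101100, 0b11000011   # data blocks (one byte each)
parity = d0 ^ d1 ^ d2                              # parity block

# Suppose the disk holding d1 fails; rebuild it from the survivors and parity.
rebuilt = d0 ^ d2 ^ parity
assert rebuilt == d1
print(f"Rebuilt block: {rebuilt:08b}")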

RAID 10:-
Combines RAID 1 and RAID 0.

Which means having the pleasure of both - good performance and good failover handling.

Also called ‘Nested RAID’.

RAID 6:-

It is seen as the best way to guarantee data integrity as it uses double parity.

Better MTBF (mean time to data loss) than RAID 5, since double parity allows it to survive two simultaneous disk failures.

It has a drawback though of longer write time.

The expanded use of RAID-6 and other dual-parity schemes is a virtual certainty.

RAID vendors are expected to support "fast rebuild" features that can restore hundreds of
gigabytes in just an hour or so.

Striping (of data) would extend across RAID groups -- not just across drives within a
group.

Improved disk diagnostic features should offer more reliable predictions of impending
drive failures, allowing the rebuild process to begin before an actual fault occurs.

Hot Spares!!

IMPLEMENTATIONS

Software based RAID:


 Software implementations are provided by many Operating Systems.
 A software layer sits above the disk device drivers and provides an abstraction layer
between the logical drives(RAIDs) and physical drives.
 Server's processor is used to run the RAID software.
 Used for simpler configurations like RAID0 and RAID1.

Hardware based RAID:

 A hardware implementation of RAID requires at least a special-purpose RAID controller.


 On a desktop system this may be built into the motherboard.
 Processor is not used for RAID calculations, as a separate controller is present.

CACHE MEMORY

The cache is a very high speed, expensive piece of memory, which is used to speed up the
memory retrieval process. Due to its higher cost, the CPU comes with a relatively small
amount of cache compared with the main memory. Without cache memory, every time the
CPU requested data, it would send the request to the main memory, and the data would then be
sent back across the system bus to the CPU. This is a slow process. The idea of introducing
cache is that this extremely fast memory would store data that is frequently accessed and if
possible, the data that is around it. This is to achieve the quickest possible response time to
the CPU.

In early PCs, the various components had one thing in common: they were all really slow.
Now processors run much faster than everything else in the computer. This means that one of
the key goals in modern system design is to ensure that to whatever extent possible, the
processor is not slowed down by the storage devices it works with. Slowdowns mean wasted
processor cycles, where the CPU can't do anything because it is sitting and waiting for
information it needs.

TYPES OF CACHE MEMORY

• Memory Cache: A memory cache, sometimes called a cache store or RAM cache, is a
portion of memory made of high-speed static RAM (SRAM) instead of the slower and
cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective
because most programs access the same data or instructions over and over. By keeping as
much of this information as possible in SRAM, the computer avoids accessing the slower
DRAM.
• Disk Cache: Disk caching works under the same principle as memory caching, but instead
of using high-speed SRAM, a disk cache uses conventional main memory. The most
recently accessed data from the disk (as well as adjacent sectors) is stored in a memory
buffer. When a program needs to access data from the disk, it first checks the disk cache to
see if the data is there. Disk caching can dramatically improve the performance of
applications, because accessing a byte of data in RAM can be thousands of times faster
than accessing a byte on a hard disk.

LEVELS OF CACHE:

Cache memory is categorized in levels based on its closeness and accessibility to the
microprocessor. There are three levels of a cache.
 Level 1(L1) Cache: This cache is built into the processor and is made of SRAM (Static
RAM). Each time the processor requests information from memory, the cache controller on
the chip uses special circuitry to first check if the memory data is already in the cache. If it
is present, then the system is spared from time consuming access to the main memory. In a
typical CPU, primary cache ranges in size from 8 to 64 KB, with larger amounts on the
newer processors. This type of Cache Memory is very fast because it runs at the speed of
the processor since it is integrated into it.
 Level 2(L2) Cache: The L2 cache is larger but slower than the L1 cache. It holds recent
accesses that are not captured by the L1 cache and is usually 64 KB to 2 MB in size. An L2
cache is also found on the CPU. If L1 and L2 cache are used together, then the missing
information that is not present in L1 cache can be retrieved quickly from the L2 cache.
Like L1 caches, L2 caches are composed of SRAM but they are much larger. L2 is usually
a separate static RAM (SRAM) chip and it is placed between the CPU & DRAM(Main
Memory)
 Level 3(L3) Cache: L3 Cache memory is an enhanced form of memory present on the
motherboard of the computer. It is an extra cache built into the motherboard between the
processor and main memory to speed up processing operations. It reduces the time gap between
a request and the retrieval of data and instructions compared with fetching them from main
memory. L3 caches are used with processors nowadays, typically holding more than 3 MB of
storage.

Diagram showing different types of cache and their position in the computer system

PRINCIPLE BEHIND CACHE MEMORY

Cache is really an amazing technology. A 512 KB level 2 cache, caching 64 MB of system


memory, can supply the information that the processor requests 90-95% of the time. The
level 2 cache is less than 1% of the size of the memory it is caching, but it is able to register a
hit on over 90% of requests. That's pretty efficient, and is the reason why caching is so
important.

The reason that this happens is due to a computer science principle called locality of
reference. It states basically that even within very large programs with several megabytes of
instructions, only small portions of this code generally get used at once. Programs tend to
spend large periods of time working in one small area of the code, often performing the same
work many times over and over with slightly different data, and then move to another area.
This occurs because of "loops", which are what programs use to do work many times in rapid
succession.
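Locality of reference is easy to demonstrate with a toy cache model. The following Python sketch
(the cache size and addresses are illustrative, not taken from any real processor) compares the
hit rate of a loop that re-uses a few addresses with that of a random access pattern:

import random

# Toy direct-mapped cache: each address maps to line (address mod CACHE_LINES).
CACHE_LINES = 8

def hit_rate(trace):
    cache = [None] * CACHE_LINES
    hits = 0
    for address in trace:
        line = address % CACHE_LINES
        if cache[line] == address:
            hits += 1                 # found in the cache
        else:
            cache[line] = address     # miss: fetch from memory and store
    return hits / len(trace)

loop_trace = [0, 1, 2, 3] * 1000                                 # good locality
random_trace = [random.randrange(10_000) for _ in range(4000)]   # poor locality
print(f"loop:   {hit_rate(loop_trace):.1%}")    # close to 100%
print(f"random: {hit_rate(random_trace):.1%}")  # very low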

IMPORTANCE OF CACHE

Cache is responsible for a great deal of the system performance improvement of today's PCs.
The cache is a buffer of sorts between the very fast processor and the relatively slow memory
that serves it. The presence of the cache allows the processor to do its work while waiting for
memory far less often than it otherwise would. Without cache the computer would be very slow
and all our work would get delayed. So cache is a very important part of our computer system.

MEMORY INTERLEAVING

 Memory interleaving is a technique used to increase the memory throughput. The core idea is to
split the memory system into independent banks, which can answer read or write requests
independently and in parallel.

Fig: 4-Way Interleaved Memory

 Usually, this is done by interleaving the address space: consecutive cells in the address
space are assigned to different memory banks. An example of four-way interleaved
memory, and the mapping of consecutive data cells to banks, is shown in the figure above.

There are two address formats for interleaving the address space:

Low order interleaving

Low order interleaving spreads contiguous memory locations across the modules horizontally.
This implies that the low order bits of the memory address are used to identify the memory
module, while the high order bits give the word address (displacement) within each module.

High order interleaving

High order interleaving uses the high order bits as the module address and the low order bits
as the word address within each module.
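The two address formats can be written as simple arithmetic on the address. The sketch below
assumes four banks and, for high-order interleaving, an illustrative bank size of 1024 words:

# Sketch of the two interleaving address formats, assuming 4 memory banks.
NUM_BANKS = 4
WORDS_PER_BANK = 1024    # illustrative bank size for high-order interleaving

def low_order(address):
    # low bits pick the bank, high bits give the word within the bank
    return address % NUM_BANKS, address // NUM_BANKS

def high_order(address):
    # high bits pick the bank, low bits give the word within the bank
    return address // WORDS_PER_BANK, address % WORDS_PER_BANK

# With low-order interleaving, consecutive addresses fall in different banks,
# so they can be accessed in parallel.
for addr in range(8):
    print(addr, "-> bank", low_order(addr)[0])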

IMPLEMENTATION: Memory interleaving is implemented on main memory, which is slow compared
to cache and has less bandwidth.
Memory Bandwidth :- Memory bandwidth is the rate at which data can be read from or written to
a memory by a processor. Memory bandwidth is usually expressed in units of bytes/sec.

INTERLEAVED MEMORY ORGANIZATION

 Various organizations of physical memory are included in this section, in order to close
the speed gap between cache and main memory. An interleaving technique is presented to
allow pipelined access of the parallel memory modules.
 The memory design goal (interleaving goal) is to broaden the effective memory bandwidth
so that more memory words can be accessed per unit time.
 The ultimate purpose is to match the memory bandwidth with the bus bandwidth and with
the processor bandwidth.

ASSOCIATIVE MEMORY

• A memory unit accessed by contents is called an associative memory or content


addressable memory(CAM).
• This type of memory is accessed simultaneously and in parallel on the basis of data
content rather than by specific address or location.

READ/WRITE OPERATION IN CAM

Write operation:

• When a word is written in an associative memory, no address is given.


• The memory is capable of finding an unused location to store the word.

Read operation:

• When a word is to be read from an associative memory, the contents of the word, or a part
of the word is specified.
• The memory locates all the words which match the specified content and marks them for
reading.

HARDWARE ORGANISATION

Argument register(A): It contains the word to be searched. It has n bits(one for each bit of
the word).

Key Register(K): It provides mask for choosing a particular field or key in the argument
word. It also has n bits.

Associative memory array: It contains the words which are to be compared with the
argument word.

Match Register(M):It has m bits, one bit corresponding to each word in the memory array .
After the matching process, the bits corresponding to matching words in match register are
set to 1.
MATCHING PROCESS

• The entire argument word is compared with each memory word, if the key register
contains all 1’s. Otherwise, only those bits in the argument that have 1’s in their
corresponding position of the key register are compared.
• Thus the key provides a mask or identifying piece of information which specifies how the
reference to memory is made.
• To illustrate with a numerical example, suppose that the argument register A and the key
register K have the bit configuration as shown below.
• Only the three left most bits of A are compared with the memory words because K has 1’s
in these three positions only.
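The masked comparison above can be expressed in a few lines. In the sketch below (word widths
and contents are illustrative), only the argument bits that have 1's in the key register take
part in the match, and the result plays the role of the match register M:

# Sketch of the CAM matching process: a word matches when its masked bits
# agree with the masked bits of the argument register.
def cam_match(argument, key, memory_words):
    """Return the match register: one True/False bit per stored word."""
    return [((word ^ argument) & key) == 0 for word in memory_words]

A = 0b101_000000          # argument register (9-bit words for illustration)
K = 0b111_000000          # key register: compare only the three left-most bits
words = [0b100_111100, 0b101_000001, 0b101_000101, 0b110_000000]

print(cam_match(A, K, words))   # [False, True, True, False]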

DISADVANTAGES

• An associative memory is more expensive than a random access memory because each
cell must have an extra storage capability as well as logic circuits for matching its content
with an external argument.
• For this reason, associative memories are used in applications where the search time is
very critical and must be very short.

***

UNIT-6 THE INPUT/OUTPUT SYSTEM


PERIPHERAL DEVICES
Input or output devices that are connected to computer are called peripheral devices. These
devices are designed to read information into or out of the memory unit upon command from
the CPU and are considered to be the part of computer system. These devices are also
called peripherals.
For example: Keyboards, display units and printers are common peripheral devices.
There are three types of peripherals:

1. Input peripherals : Allows user input, from the outside world to the computer. Example:
Keyboard, Mouse etc.
2. Output peripherals: Allows information output, from the computer to the outside world.
Example: Printer, Monitor etc
3. Input-Output peripherals: Allows both input (from the outside world to the computer) as well
as output (from the computer to the outside world). Example: Touch screen etc.

INTERFACES
Interface is a shared boundary between two separate components of the computer system
which can be used to attach two or more components to the system for communication
purposes.
There are two types of interface:

1. CPU Interface
2. I/O Interface

Input-Output Interface:-
Peripherals connected to a computer need special communication links for interfacing with
CPU. In computer system, there are special hardware components between the CPU and
peripherals to control or manage the input-output transfers. These components are
called input-output interface units because they provide communication links between
processor bus and peripherals. They provide a method for transferring information between
internal system and input-output devices.
Modes of I/O Data Transfer
Data transfer between the central unit and I/O devices can be handled in generally three types
of modes which are given below:

1. Programmed I/O
2. Interrupt Initiated I/O
3. Direct Memory Access

Programmed I/O:-
Programmed I/O transfers are the result of I/O instructions written in the computer program.
Each data item transfer is initiated by an instruction in the program.
Usually the program controls data transfer to and from CPU and peripheral. Transferring data
under programmed I/O requires constant monitoring of the peripherals by the CPU.
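The constant monitoring is usually a busy-wait (polling) loop. The sketch below only illustrates
the idea; the device object and its ready()/read() methods are hypothetical, not a real driver
interface:

# Sketch of programmed I/O: the CPU polls the device status flag before every
# single data transfer. The device object and its methods are hypothetical.
def programmed_io_read(device, count):
    data = []
    for _ in range(count):
        while not device.ready():    # busy-wait: CPU cannot do useful work here
            pass
        data.append(device.read())   # transfer one data item
    return data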
Interrupt Initiated I/O:-
In the programmed I/O method the CPU stays in a program loop until the I/O unit indicates
that it is ready for data transfer. This is a time consuming process because it keeps the
processor busy needlessly.
This problem can be overcome by using interrupt initiated I/O. In this, when the interface
determines that the peripheral is ready for data transfer, it generates an interrupt. After
receiving the interrupt signal, the CPU stops the task it is processing, services the I/O
transfer and then returns to its previous processing task.
Direct Memory Access:-
Removing the CPU from the path and letting the peripheral device manage the memory buses
directly would improve the speed of transfer. This technique is known as DMA.
In this, the interface transfers data to and from the memory through the memory bus. A DMA
controller manages the transfer of data between peripherals and the memory unit.
Many hardware systems use DMA such as disk drive controllers, graphic cards, network
cards and sound cards etc. It is also used for intra chip data transfer in multicore processors.
In DMA, CPU would initiate the transfer, do other operations while the transfer is in progress
and receive an interrupt from the DMA controller when the transfer has been completed.

Above figure shows block diagram of DMA


INPUT/OUTPUT PROCESSOR
An input-output processor (IOP) is a processor with direct memory access capability. In this,
the computer system is divided into a memory unit and a number of processors.
Each IOP controls and manages the input-output tasks. The IOP is similar to the CPU except that
it handles only the details of I/O processing. The IOP can fetch and execute its own
instructions. These IOP instructions are designed to manage I/O transfers only.
Block Diagram Of I/O Processor:-
Below is a block diagram of a computer along with various I/O Processors. The memory unit
occupies the central position and can communicate with each processor.
The CPU processes the data required for solving the computational tasks. The IOP provides a
path for transfer of data between peripherals and memory. The CPU assigns the task of
initiating the I/O program.
The IOP operates independently of the CPU and transfers data between peripherals and memory.

The communication between the IOP and the devices is similar to the program control
method of transfer. And the communication with the memory is similar to the direct memory
access method.
In large scale computers, each processor is independent of other processors and any processor
can initiate the operation.
The CPU acts as master and the IOP acts as slave processor. The CPU assigns the task of
initiating operations, but it is the IOP, not the CPU, that executes the I/O instructions. CPU
instructions provide operations to start an I/O transfer. The IOP requests the attention of the
CPU through an interrupt.
Instructions that are read from memory by an IOP are also called commands to distinguish
them from instructions that are read by CPU. Commands are prepared by programmers and
are stored in memory. Command words make the program for IOP. CPU informs the IOP
where to find the commands in memory.
INTERRUPTS
Data transfer between the CPU and the peripherals is initiated by the CPU. But the CPU
cannot start the transfer unless the peripheral is ready to communicate with the CPU. When a
device is ready to communicate with the CPU, it generates an interrupt signal. A number of
input-output devices are attached to the computer and each device is able to generate an
interrupt request.
The main job of the interrupt system is to identify the source of the interrupt. There is also a
possibility that several devices will request simultaneously for CPU communication. Then,
the interrupt system has to decide which device is to be serviced first.
Priority Interrupt:-
A priority interrupt is a system which decides the order in which various devices, which
generate interrupt signals at the same time, will be serviced by the CPU. The system has
authority to decide which conditions are allowed to interrupt the CPU, while some other
interrupt is being serviced. Generally, devices with high speed transfer such as magnetic
disks are given high priority and slow devices such as keyboards are given low priority.
When two or more devices interrupt the computer simultaneously, the computer services the
device with the higher priority first.
Types of Interrupts:-
Following are some different types of interrupts:
Hardware Interrupts
When the signal for the processor is from an external device or hardware then this interrupt
is known as a hardware interrupt.
Let us consider an example: when we press any key on our keyboard to do some action, then
this pressing of the key will generate an interrupt signal for the processor to perform certain
action. Such an interrupt can be of two types:

 Maskable Interrupt

The hardware interrupts which can be delayed when a higher priority interrupt has
occurred at the same time.

 Non Maskable Interrupt

The hardware interrupts which cannot be delayed and should be processed by the
processor immediately.
Software Interrupts
The interrupt that is caused by any internal system of the computer system is known as
a software interrupt. It can also be of two types:

 Normal Interrupt

The interrupts that are caused by software instructions are called normal software
interrupts.

 Exception

Unplanned interrupts which are produced during the execution of some program are
called exceptions, such as division by zero.

DAISY CHAINING PRIORITY


This way of deciding the interrupt priority consists of a serial connection of all the devices
which generate an interrupt signal. The device with the highest priority is placed at the first
position followed by lower priority devices and the device which has lowest priority among
all is placed at the last in the chain.
In daisy chaining system all the devices are connected in a serial form. The interrupt line
request is common to all devices. If any device has interrupt signal in low level state then
interrupt line goes to low level state and enables the interrupt input in the CPU. When there is
no interrupt the interrupt line stays in the high level state. The CPU responds to the interrupt by
enabling the interrupt acknowledge line. This signal is received by device 1 at its PI input.
The acknowledge signal passes to next device through PO output only if device 1 is not
requesting an interrupt.
The following figure shows the block diagram for daisy chaining priority system.
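The priority resolution itself amounts to finding the first requesting device along the chain.
The sketch below models this in Python; list positions stand for the physical positions of the
devices, with index 0 closest to the CPU:

# Sketch of daisy-chain priority: the acknowledge signal travels down the chain
# and stops at the first (highest-priority) device that is requesting service.
def daisy_chain_acknowledge(requests):
    """requests[0] is the device closest to the CPU (highest priority)."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position          # this device blocks PO and is serviced
    return None                      # no device is interrupting

# Devices 1 and 3 request at the same time; device 1 wins because it is
# earlier in the chain.
print(daisy_chain_acknowledge([False, True, False, True]))   # 1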

***

UNIT-7 THE CENTRAL PROCESSING UNIT

INSTRUCTION SET:
The instruction set, also called ISA (instruction set architecture) is part of a computer that
pertains to programming, which is basically machine language. The instruction set provides
commands to the processor, to tell it what it needs to do. The instruction set consists of
addressing modes, instructions, native data types, registers, memory architecture, interrupt,
and exception handling, and external I/O.
An example of an instruction set is the x86 instruction set, which is common to find on
computers today. Different computer processors can use almost the same instruction set while
still having very different internal design. Both the Intel Pentium and AMD Athlon
processors use nearly the same x86 instruction set. An instruction set can be built into the
hardware of the processor, or it can be emulated in software, using an interpreter. The
hardware design is more efficient and faster for running programs than the emulated software
version.
Examples of instruction set:-

 ADD - Add two numbers together.


 COMPARE - Compare numbers.
 IN - Input information from a device, e.g., keyboard.
 JUMP - Jump to designated RAM address.
 JUMP IF - Conditional statement that jumps to a designated RAM address.
 LOAD - Load information from RAM to the CPU.
 OUT - Output information to device, e.g., monitor.
 STORE - Store information to RAM.

INSTRUCTION CODES
A program, as we all know, is a set of instructions that specify the operations,
operands, and the sequence by which processing has to occur. An instruction code is a group
of bits that tells the computer to perform a specific operation.
Instruction Code: Operation Code
The operation code of an instruction is a group of bits that define operations such as add,
subtract, multiply, shift and complement. The number of bits required for the operation code
depends upon the total number of operations available on the computer. The operation code
must consist of at least n bits for a given 2^n operations. The operation part of an instruction
code specifies the operation to be performed.

Instruction Code: Register Part


The operation must be performed on the data stored in registers. An instruction code
therefore specifies not only operations to be performed but also the registers where the
operands(data) will be found as well as the registers where the result has to be stored.

Stored Program Organisation


The simplest way to organize a computer is to have one processor register and an instruction code
with two parts. The first part specifies the operation to be performed and the second specifies an
address. The memory address tells where the operand in memory will be found.
Instructions are stored in one section of memory and data in another.
In a computer with a single processor register, that register is known as the Accumulator (AC).
The operation is performed with the memory operand and the content of the AC.
Common Bus System
The basic computer has 8 registers, a memory unit and a control unit. Paths must be provided
to transfer data from one register to another. An efficient method for transferring data in a
system is to use a Common Bus System. The output of registers and memory are connected
to the common bus.
Load(LD)
The lines from the common bus are connected to the inputs of each register and data inputs of
memory. The particular register whose LD input is enabled receives the data from the bus
during the next clock pulse transition.
Before studying about instruction formats lets first study about the operand address parts.
When the 2nd part of an instruction code specifies the operand, the instruction is said to
have immediate operand. And when the 2nd part of the instruction code specifies the
address of an operand, the instruction is said to have a direct address. And in indirect
address, the 2nd part of instruction code, specifies the address of a memory word in which
the address of the operand is found.
COMPUTER INSTRUCTIONS
The basic computer has three instruction code formats. The Operation code (opcode) part of
the instruction contains 3 bits and remaining 13 bits depends upon the operation code
encountered.
There are three types of formats:
1. Memory Reference Instruction
It uses 12 bits to specify the address and 1 bit to specify the addressing mode (I). I is equal
to 0 for a direct address and 1 for an indirect address.
2. Register Reference Instruction
These instructions are recognized by the opcode 111 with a 0 in the left most bit of
instruction. The other 12 bits specify the operation to be executed.
3. Input-Output Instruction
These instructions are recognized by the operation code 111 with a 1 in the left most bit of
instruction. The remaining 12 bits are used to specify the input-output operation.
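These formats can be pictured as bit fields of a 16-bit word: bit 15 holds the mode bit I, bits
14-12 hold the opcode, and bits 11-0 hold the address or operation bits. The following sketch
(field values chosen only for illustration) decodes such a word:

# Sketch of decoding a 16-bit instruction word of the basic computer:
# bit 15 = I (addressing mode), bits 14-12 = opcode, bits 11-0 = address.
def decode(word):
    i_bit   = (word >> 15) & 0x1
    opcode  = (word >> 12) & 0x7
    address = word & 0xFFF
    return i_bit, opcode, address

# Example: an indirect memory-reference instruction with opcode 010 and
# address 0x234 (illustrative values).
word = (1 << 15) | (0b010 << 12) | 0x234
print(decode(word))   # (1, 2, 564)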
Format of Instruction:-
The format of an instruction is depicted in a rectangular box symbolizing the bits of an
instruction. Basic fields of an instruction format are given below:

1. An operation code field that specifies the operation to be performed.


2. An address field that designates the memory address or register.
3. A mode field that specifies the way the operand of effective address is determined.

Computers may have instructions of different lengths containing varying number of


addresses. The number of address field in the instruction format depends upon the internal
organization of its registers.
ADDRESSING MODES AND INSTRUCTION CYCLE
The operation field of an instruction specifies the operation to be performed. This operation
will be executed on some data which is stored in computer registers or the main memory. The
way any operand is selected during the program execution is dependent on the addressing
mode of the instruction. The purpose of using addressing modes is as follows:

1. To give the programming versatility to the user.


2. To reduce the number of bits in addressing field of instruction.

Types of Addressing Modes:-


Below we have discussed different types of addressing modes one by one:
Immediate Mode
In this mode, the operand is specified in the instruction itself. An immediate mode instruction
has an operand field rather than the address field.
For example: ADD 7, which says Add 7 to contents of accumulator. 7 is the operand here.
Register Mode
In this mode the operand is stored in the register and this register is present in CPU. The
instruction has the address of the Register where the operand is stored.
Advantages

 Shorter instructions and faster instruction fetch.


 Faster memory access to the operand(s)

Disadvantages

 Very limited address space


 Using multiple registers helps performance but it complicates the instructions.

Register Indirect Mode


In this mode, the instruction specifies the register whose contents give us the address of
operand which is in memory. Thus, the register contains the address of operand rather than
the operand itself.

Auto Increment/Decrement Mode


In this the register is incremented or decremented after or before its value is used.
Direct Addressing Mode
In this mode, effective address of operand is present in instruction itself.
 Single memory reference to access data.
 No additional calculations to find the effective address of the operand.

For Example: ADD R1, 4000 - In this the 4000 is effective address of operand.
NOTE: Effective Address is the location where operand is present.
Indirect Addressing Mode
In this, the address field of instruction gives the address where the effective address is stored
in memory. This slows down the execution, as this includes multiple memory lookups to find
the operand.

Displacement Addressing Mode


In this the contents of the indexed register is added to the Address part of the instruction, to
obtain the effective address of operand.
EA = A + (R), In this the address field holds two values, A(which is the base value) and
R(that holds the displacement), or vice versa.
Relative Addressing Mode
It is a version of Displacement addressing mode.
In this the contents of PC(Program Counter) is added to address part of instruction to obtain
the effective address.
EA = A + (PC), where EA is effective address and PC is program counter.
The operand is A cells away from the current cell (the one pointed to by PC)
Base Register Addressing Mode
It is again a version of Displacement addressing mode. This can be defined as EA = A + (R),
where A is displacement and R holds pointer to base address.
Stack Addressing Mode
In this mode, operand is at the top of the stack. For example: ADD, this instruction
will POP top two items from the stack, add them, and will then PUSH the result to the top of
the stack.
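The differences between these modes come down to how the effective address is formed. The sketch
below uses small, made-up register and memory contents to show the calculation for a few of the
modes described above:

# Sketch of effective-address (EA) calculation for several addressing modes,
# using illustrative register and memory contents.
memory = {4000: 77, 5000: 4000}          # address -> contents
registers = {"R1": 4000, "PC": 100}

ea_direct    = 4000                       # Direct: EA given in the instruction
ea_indirect  = memory[5000]               # Indirect: memory word holds the EA
ea_reg_ind   = registers["R1"]            # Register indirect: register holds EA
ea_displaced = 3000 + registers["R1"]     # Displacement: EA = A + (R)
ea_relative  = 50 + registers["PC"]       # Relative: EA = A + (PC)

print(ea_direct, ea_indirect, ea_reg_ind, ea_displaced, ea_relative)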
INSTRUCTION CYCLE
An instruction cycle, also known as fetch-decode-execute cycle is the basic operational
process of a computer. This process is repeated continuously by CPU from boot up to shut
down of computer.
Following are the steps that occur during an instruction cycle:
1. Fetch the Instruction
The instruction is fetched from memory address that is stored in PC(Program Counter) and
stored in the instruction register IR. At the end of the fetch operation, PC is incremented by 1
and it then points to the next instruction to be executed.
2. Decode the Instruction
The instruction in the IR is decoded by the decoder.
3. Read the Effective Address
If the instruction has an indirect address, the effective address is read from the memory.
Otherwise operands are directly read in case of immediate operand instruction.
4. Execute the Instruction
The Control Unit passes the information in the form of control signals to the functional unit
of CPU. The result generated is stored in main memory or sent to an output device.
The cycle is then repeated by fetching the next instruction. Thus in this way the instruction
cycle is repeated continuously.
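The cycle can be sketched as a short loop over a toy machine. In the sketch below the
instruction names (LOAD, ADD, HALT) and the memory layout are purely illustrative:

# Minimal sketch of the fetch-decode-execute cycle on a toy machine.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
          10: 5, 11: 7}
pc, ac = 0, 0                         # program counter and accumulator

while True:
    opcode, address = memory[pc]      # 1. fetch the instruction PC points to
    pc += 1                           #    PC now points to the next instruction
    if opcode == "HALT":              # 2. decode, then 4. execute
        break
    elif opcode == "LOAD":
        ac = memory[address]          # 3. read the operand, then execute
    elif opcode == "ADD":
        ac += memory[address]

print(ac)   # 12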

INSTRUCTION REPRESENTATION:

Within the computer, each instruction is represented by a sequence of bits. The instruction is
divided into fields, corresponding to the constituent elements of the instruction. The
instruction format is highly machine specific and it mainly depends on the machine
architecture. It is assumed here that it is a 16-bit CPU. 4 bits are used to provide the operation
code, so we may have 16 (2^4 = 16) different instructions. With each instruction,
there are two operands. To specify each operand, 6 bits are used, so it is possible to refer to
64 (2^6 = 64) different operands for each operand reference.

It is difficult to deal with binary representation of machine instructions. Thus, it has become
common practice to use a symbolic representation of machine instructions.

Opcodes are represented by abbreviations, called mnemonics that indicate the operations.
Common examples include:

ADD: Add
SUB : Subtract
MULT: Multiply
DIV : Division
LOAD: Load data from memory to CPU
STORE: Store data to memory from CPU.

RISC and CISC Processors

RISC Processor
It is known as Reduced Instruction Set Computer. It is a type of microprocessor that has a
limited number of instructions. They can execute their instructions very fast because
instructions are very small and simple.
RISC chips require fewer transistors which make them cheaper to design and produce. In
RISC, the instruction set contains simple and basic instructions from which more complex
instruction can be produced. Most instructions complete in one cycle, which allows the
processor to handle many instructions at the same time.
In this instructions are register based and data transfer takes place from register to register.

CISC Processor

 It is known as Complex Instruction Set Computer.


 It was first developed by Intel.
 It contains large number of complex instructions.
 In this instructions are not register based.
 Instructions cannot be completed in one machine cycle.
 Data transfer is from memory to memory.
 Micro programmed control unit is found in CISC.
 Also they have variable instruction formats.

Difference Between CISC and RISC


For each architectural characteristic, CISC (Complex Instruction Set Computer) and RISC (Reduced
Instruction Set Computer) compare as follows:

Instruction size and format: CISC has a large set of instructions with variable formats
(16-64 bits per instruction); RISC has a small set of instructions with a fixed format (32 bit).

Data transfer: CISC is memory to memory; RISC is register to register.

CPU control: CISC is mostly micro coded using control memory (ROM), though modern CISC also uses
hardwired control; RISC is mostly hardwired, without control memory.

Instruction type: CISC instructions are not register based; RISC instructions are register based.

Memory access: CISC makes more memory accesses; RISC makes fewer memory accesses.

Clocks: CISC includes multi-clock instructions; RISC uses single-clock instructions.

Instruction nature: CISC instructions are complex; RISC instructions are reduced and simple.

***

UNIT-8 REGISTERS, MICRO-OPERATIONS AND INSTRUCTION EXECUTION

PROCESSOR ORGANIZATION
REQUIREMENTS PLACED ON THE PROCESSOR
 Fetch instruction: The processor reads an instruction from memory (register,cache,
main memory).
 Interpret instruction: The instruction is decoded to determine what action is
required.
 Fetch data: The execution of an instruction may require reading data from memory
or an I/O module.
 Process data: The execution of an instruction may require performing some
arithmetic or logical operation on data.
 Write data: The results of an execution may require writing data to memory on I/O
module.
SIMPLIFIED VIEW OF PROCESSOR

COMPONENTS OF PROCESSOR
 The major components of the processor are an arithmetic and logic unit (ALU) and
a control unit (CU).
 The ALU does the actual computation or processing of data.
 The control unit controls the movement of data and instructions into and out of the
processor and controls the operation of the ALU.
 Register consists of a set of storage locations.

INTERNAL STRUCTURE OF CPU


Explanation:-
 The data transfer and logic control paths are indicated, including an element labeled
internal processor bus.
 This element is needed to transfer data between the various registers and the ALU
because the ALU in fact operates only on data in the internal processor memory.

REGISTER ORGANIZATION
The register in the processor perform two roles:
1. User-visible register: Enable the machine- or assembly language programmer to
minimize main memory references by optimizing use of registers.
2. Control and status registers: Used by the control unit to control the operation of the
processor and by privileged, operating system programs to control the execution of
programs.

USER-VISIBLE REGISTERS

 General Purpose:-
 General Purpose Registers can be assigned to a variety of functions by the
programmer
 Mostly these registers contain the operand for any opcode.
 In some cases these are used for addressing purpose.
 Data Registers:-
 Data Register to hold data and cannot be employed in the calculation of an
operand address
 Eg. Accumulator.
 Address Registers:-
 Address Register they may be devoted to a particular addressing mode
 Segment pointers :a segment register holds the address of the base of the
segment
 Index registers :are used for indexed addressing and may be auto indexed.
 Stack Pointer: If there is user-visible stack addressing, then typically there is
a dedicated register that points to the top of the stack.
 Condition Codes:-
 Condition codes are bits set by the processor hardware as the result of
operations.

CONTROL AND STATUS REGISTERS


Four Essential Registers:
 Program counter (PC): Contains the address of an instruction to be fetched.
 Instruction register (IR): Contains the instruction most recently fetched.
 Memory address register (MAR): Contains the address of a location in memory.
 Memory buffer register (MBR): Contains a word of data to be written to memory or
the word most recently read.
Program Status Word:-
 Program status word (PSW) contains status information.
 The PSW typically contains condition codes plus other status information. (A short sketch of
how the main condition-code bits can be computed follows this list.)
 Sign: Contains the sign bit of the result of the last arithmetic operation.
 Zero: Set when the result is 0.
 Carry: Set if an operation resulted in a carry (addition) into or borrow
(subtraction)out of a high-order bit. Used for multiword arithmetic operations.
 Equal: Set if a logical compare result is equality.
 Overflow: Used to indicate arithmetic overflow.
 Interrupt Enable/Disable: Used to enable or disable interrupts.
 Supervisor: Indicates whether the processor is executing in supervisor or user
mode. Certain privileged instructions can be executed only in supervisor
mode, and certain areas of memory can be accessed only in supervisor mode
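The following Python sketch shows one way the Sign, Zero, Carry and Overflow bits could be
derived after an 8-bit addition; the word size and flag names are illustrative, not tied to any
particular processor:

# Sketch of PSW condition-code bits after an 8-bit addition (illustrative).
def add8_with_flags(a, b):
    result = a + b
    carry = result > 0xFF                      # carry out of the high-order bit
    result &= 0xFF                             # keep the low 8 bits
    sign = (result >> 7) & 1                   # sign bit of the result
    zero = result == 0
    # overflow: operands have the same sign but the result has the opposite sign
    overflow = ((a ^ result) & (b ^ result) & 0x80) != 0
    return result, {"S": sign, "Z": zero, "C": carry, "V": overflow}

print(add8_with_flags(0x7F, 0x01))   # 0x80: Sign and Overflow set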

MICRO-OPERATION

• Simple digital systems are frequently characterized in terms of


– the registers they contain, and
– the operations that they perform.
• The operations on the data in registers are called micro-operations.
• The functions built into registers are examples of micro-operations
– Shift
– Load
– Clear
– Increment ...etc.
• Computer system micro-operations are of four types:
– Register transfer micro-operations
– Arithmetic micro-operations
– Logic micro-operations
– Shift micro-operations

REGISTER TRANSFER MICRO-OPERATION


This micro-operation is used to transfer information from one register to another register,
i.e.,
R1 ← R2
where R2 is the source register and R1 is the destination register. The data are transferred
through the data bus, where size of the bus = number of multiplexers = number of bits in each register.
A 4-bit bus system for four bit register using 4 X 1 multiplexer is given below-

In the above circuit the multiplexers have two selection lines, S1 and S0, which define which
register's bits are selected onto the bus, i.e.

S1 S0 output bit selected


0 0 Ai
0 1 Bi
1 0 Ci
1 1 Di
Hence, when S1 S0 = 0 0, all the bits of register A are selected;
when S1 S0 = 0 1, all the bits of register B are selected, and so on.
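The same selection behaviour can be written as a tiny function. In the sketch below the four
registers are plain Python lists, and the pair (S1, S0) picks which one is gated onto the bus:

# Sketch of the 4 X 1 multiplexer selection: S1 S0 choose which register's
# bits are placed on the 4-bit common bus.
def bus_select(s1, s0, A, B, C, D):
    return [A, B, C, D][(s1 << 1) | s0]

A, B, C, D = [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1]
print(bus_select(0, 0, A, B, C, D))   # register A on the bus
print(bus_select(0, 1, A, B, C, D))   # register B on the bus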

ARITHMETIC MICRO-OPERATION
 The basic arithmetic micro-operations are
– Addition
– Subtraction
– Increment
– Decrement
 The additional arithmetic micro-operations are
– Add with carry
– Subtract with borrow
– Transfer/Load
etc. …
Summary of Typical Arithmetic Micro-Operations
R3  R1 + R2 Contents of R1 plus R2 transferred to R3
R3  R1 - R2 Contents of R1 minus R2 transferred to R3
R2  R2’ Complement the contents of R2
R2  R2’+ 1 2's complement the contents of R2 (negate)
R3  R1 + R2’+ 1 subtraction
R1  R1 + 1 Increment
R1  R1 - 1 Decrement

The 4-bit arithmetic micro-operation diagram is shown below:

In the above diagram we use four full-adder circuits where each adder has three inputs,
i.e., Cin, X and Y. The X input is fed directly to the full adder while the Y
input is fed to the full adder through a 4 X 1 multiplexer.

S1 S0 Cin   Y        Output           Micro-operation

0  0  0     B        D = A + B        Add
0  0  1     B        D = A + B + 1    Add with carry
0  1  0     B'       D = A + B'       Subtract with borrow
0  1  1     B'       D = A + B' + 1   Subtract
1  0  0     All 0's  D = A            Transfer A
1  0  1     All 0's  D = A + 1        Increment A
1  1  0     All 1's  D = A - 1        Decrement A
1  1  1     All 1's  D = A            Transfer A
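
The table above can be summarised as D = A + Y + Cin, where the multiplexer selects Y as B, B',
all 0's or all 1's. The following Python sketch mimics that behaviour on 4-bit values; it is an
informal model for illustration, not a gate-level description.

    # Illustrative sketch of the 4-bit arithmetic circuit: D = A + Y + Cin (mod 16),
    # where the multiplexer selects Y = B, B', 0000 or 1111 according to S1 S0.
    def arithmetic_unit(a, b, s1, s0, cin):
        mask = 0xF                              # 4-bit registers
        y = [b, (~b) & mask, 0x0, 0xF][(s1 << 1) | s0]
        return (a + y + cin) & mask

    a, b = 0b0110, 0b0011
    print(bin(arithmetic_unit(a, b, 0, 0, 0)))  # A + B       -> 0b1001 (add)
    print(bin(arithmetic_unit(a, b, 0, 1, 1)))  # A + B' + 1  -> 0b0011 (subtract: A - B)
    print(bin(arithmetic_unit(a, b, 1, 0, 1)))  # A + 1       -> 0b0111 (increment A)
    print(bin(arithmetic_unit(a, b, 1, 1, 0)))  # A + 1111    -> 0b0101 (decrement A)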

LOGICAL MICRO-OPERATION
 Logic micro-operations specify binary operations on strings of bits stored in registers.
 These operations consider each bit of the register separately and treat them as binary
variables. For example,
P: R1 ← R1 ⊕ R2
1010   Content of R1
1100   Content of R2
0110   Content of R1 after P = 1

TRUTH TABLE FOR 16 FUNCTION OF TWO VARIABLES


X Y F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15

0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

• SELECTIVE SET:
It sets to 1 the bits in register A where there are corresponding 1's in register B.
Example:
1010 A before
1100 B (logic operand)
1110 A after
(Selective set is performed with the OR micro-operation.)
• SELECTIVE COMPLEMENT:
It complements the bits in A where there are corresponding 1's in B.
Example:
1010 A before
1100 B
0110 A after
(Selective complement is performed with the exclusive-OR micro-operation.)

 SELECTIVE CLEAR:
It clears to 0 the bits in A where there are corresponding 1's in B. Example:
1010 A before
1100 B
0010 A after
(It can be obtained by the micro-operation A ← A AND B'.)

 MASKING:
It is similar to selective clear, except that the bits of A are cleared where there are
corresponding 0's in B (i.e., A ← A AND B). Example:
1010 A before
1100 B
1000 A after
 INSERT:
It inserts a new value into a group of bits.
This is done by first masking the bits to be replaced and then ORing with the new value. Example:
0110 1010 A before
0000 1111 B (mask)
0000 1010 A after masking
Then insert the new value:
0000 1010 A before
1001 0000 B (insert)
1001 1010 A after insertion
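
These applications map directly onto the basic logic micro-operations (OR, XOR, AND). A short
Python sketch, using 4-bit and 8-bit values chosen only for the example, illustrates them with
ordinary bitwise operators.

    # Illustrative sketch of the logic micro-operation applications on small register values.
    MASK4 = 0xF

    def selective_set(a, b):        return a | b                 # OR
    def selective_complement(a, b): return a ^ b                 # XOR
    def selective_clear(a, b):      return a & (~b & MASK4)      # A AND B'
    def mask(a, b):                 return a & b                 # AND
    def insert(a, m, value):        return (a & m) | value       # mask, then OR in the new bits

    a, b = 0b1010, 0b1100
    print(bin(selective_set(a, b)))         # 0b1110
    print(bin(selective_complement(a, b)))  # 0b110  (i.e. 0110)
    print(bin(selective_clear(a, b)))       # 0b10   (i.e. 0010)
    print(bin(mask(a, b)))                  # 0b1000
    print(bin(insert(0b01101010, 0b00001111, 0b10010000)))   # 0b10011010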

SHIFT MICRO-OPERATION
 Shift micro-operations are used for serial transfer of data. The contents of the register
can be shifted to the left or to the right. As the bits are shifted, the first flip-
flop receives its binary information from the serial input. The information transferred
through the serial input determines the type of shift.
 There are three types of shift:
I. Logical shift
II. Circular shift
III. Arithmetic shift
Logical shift:-
A logical shift micro-operation transfers a 0 (zero) through the serial input. For the logical
shift left micro-operation, the 0 enters at the right end of the data, and for the logical
shift right micro-operation, the 0 enters at the left end of the data, as shown in the figures
below.
Register Transfer Language (RTL) for the logical shift micro operations can be written as:
R ← shl R (shift left register (R)).
R ← shr R (shift right register (R)).
Below is the diagram showing logical shift left micro operation on the data in a register.

Below is the diagram showing the logical shift right micro operation:


Circular shift:-
A Circular Shift micro operation performs the shifting of bits from one end of the register to
the other end of the register. In Circular shift left operation, the left most bit in the register is
transferred to the right most end and in the circular shift right operation, the right most bit in
the register is transferred or shifted to the left most end of the register as shown in the figures
below:
Register Transfer Language for the Circular Shift micro operations can be written as:
R ← cil R (circular shift left register (R)).
R ← cir R (circular shift right register (R)).
Below is the diagram showing circular shift left micro operation

Below is the diagram showing circular shift right micro operation

Arithmetic shift:-
An arithmetic shift operation shifts signed (positive or negative) binary numbers to the left
or right, which multiplies or divides the value by 2. For the arithmetic shift left micro
operation, the value in the register is multiplied by 2, whereas for the arithmetic shift
right micro operation, the value in the register is divided by 2 and the sign bit is preserved.
In RTL (RTL stands for Register Transfer Language), we can represent this arithmetic shift
micro operations as
R ← ashl R (arithmetic shift left R (register))
R ← ashr R (arithmetic shift right R (register))
Diagram showing Arithmetic shift left operation is as follows:

Diagram showing Arithmetic shift right operation is as follows:

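A compact Python sketch of these shift micro-operations on a 4-bit register value is given
below; the 4-bit width and the sample value are assumptions made only for illustration.

    # Illustrative sketch of shift micro-operations on a 4-bit register value.
    N, MASK = 4, 0xF

    def shl(r):  return (r << 1) & MASK                     # logical shift left, 0 enters from the right
    def shr(r):  return r >> 1                              # logical shift right, 0 enters from the left
    def cil(r):  return ((r << 1) | (r >> (N - 1))) & MASK  # circular shift left
    def cir(r):  return ((r >> 1) | (r << (N - 1))) & MASK  # circular shift right
    def ashl(r): return (r << 1) & MASK                     # arithmetic shift left (multiply by 2)
    def ashr(r):                                            # arithmetic shift right (divide by 2),
        return (r >> 1) | (r & 0b1000)                      # the sign bit is replicated

    r = 0b1011
    print(bin(shl(r)), bin(shr(r)))    # 0b110  0b101
    print(bin(cil(r)), bin(cir(r)))    # 0b111  0b1101
    print(bin(ashr(r)))                # 0b1101 (sign bit kept)
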
INSTRUCTION EXECUTION
Instruction Execution Steps
1. Fetch the next instruction from memory into the instruction register
2. Change the program counter to point to the next instruction
3. Determine the type of instruction just fetched
4. If the instruction uses a word in memory, determine where it is and fetch it, if
needed, into a CPU register
5. Execute the instruction
6. Go to step 1 to begin executing the following instruction
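
These steps can be illustrated with a toy interpreter. The following Python sketch runs a
fetch-decode-execute loop over a small made-up instruction set (LOAD, ADD, STORE, HALT); the
instruction set and memory layout are assumptions for the example, not a real machine.

    # Illustrative sketch of the fetch-decode-execute cycle for a tiny made-up machine.
    def run(memory):
        """Repeatedly fetch, decode and execute until a HALT instruction is reached."""
        pc, acc = 0, 0
        while True:                            # step 6: loop back for the next instruction
            op, addr = memory[pc]              # step 1: fetch the next instruction
            pc += 1                            # step 2: advance the program counter
            if op == "HALT":                   # step 3: determine the instruction type
                return memory
            if op == "LOAD":                   # step 4: fetch the memory word if needed
                acc = memory[addr]
            elif op == "ADD":
                acc += memory[addr]            # step 5: execute the instruction
            elif op == "STORE":
                memory[addr] = acc

    program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
    print(run(program)[6])   # prints 5 (2 + 3 stored at address 6)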

Design Principles for Modern Computers


• All instructions directly executed by hardware
• Maximize rate at which instructions are issued
• Instructions should be easy to decode
• Only loads, stores should reference memory
• Provide plenty of registers

PIPELINING
Pipelining is the process of passing instructions through a pipeline inside the processor. It
allows instructions to be stored and executed in an orderly, overlapped manner. It is also
known as pipeline processing.
Pipelining is a technique where multiple instructions are overlapped during execution.
Pipeline is divided into stages and these stages are connected with one another to form a pipe
like structure. Instructions enter from one end and exit from another end.
Pipelining increases the overall instruction throughput.

In a pipeline system, each segment consists of an input register followed by a combinational
circuit. The register is used to hold the data and the combinational circuit performs
operations on it. The output of the combinational circuit is applied to the input register of
the next segment.
Pipeline system is like the modern day assembly line setup in factories. For example in a car
manufacturing industry, huge assembly lines are setup and at each point, there are robotic
arms to perform a certain task, and then the car moves on ahead to the next arm.
Types of Pipeline
It is divided into 2 categories:

1. Arithmetic Pipeline
2. Instruction Pipeline
Arithmetic Pipeline
Arithmetic pipelines are found in most computers. They are used for floating point
operations, multiplication of fixed point numbers, etc. For example, the input to a
floating point adder pipeline is:
X = A * 2^a
Y = B * 2^b
Here A and B are mantissas (the significant digits of the floating point numbers), while a and
b are the exponents.
The floating point addition and subtraction is done in 4 parts:

1. Compare the exponents.


2. Align the mantissas.
3. Add or subtract mantissas
4. Produce the result.

Registers are used for storing the intermediate results between the above operations.
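
The following Python sketch walks one operand pair through these four stages. The
(mantissa, exponent) representation and the absence of rounding are simplifications made for
the example.

    # Illustrative sketch of the four floating-point adder pipeline stages.
    # Numbers are kept as (mantissa, exponent) pairs with value = mantissa * 2**exponent.
    def stage1_compare(x, y):
        """Compare exponents; return both operands plus the exponent difference."""
        (a, ea), (b, eb) = x, y
        return (a, ea), (b, eb), ea - eb

    def stage2_align(x, y, diff):
        """Shift the mantissa belonging to the smaller exponent right to align the operands."""
        (a, ea), (b, eb) = x, y
        if diff >= 0:
            return a, b / (2 ** diff), ea
        return a / (2 ** -diff), b, eb

    def stage3_add(a, b):
        """Add (or subtract) the aligned mantissas."""
        return a + b

    def stage4_normalize(m, e):
        """Normalize so the mantissa lies in [1, 2)."""
        while m >= 2:
            m, e = m / 2, e + 1
        return m, e

    x, y = (1.5, 3), (1.25, 1)                      # 1.5*2^3 = 12 and 1.25*2^1 = 2.5
    _, _, diff = stage1_compare(x, y)
    a, b, e = stage2_align(x, y, diff)
    print(stage4_normalize(stage3_add(a, b), e))    # (1.8125, 3), i.e. 1.8125*2^3 = 14.5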
Instruction Pipeline
In an instruction pipeline, a stream of instructions is executed by overlapping the fetch,
decode and execute phases of the instruction cycle. This technique is used to increase the
throughput of the computer system.
An instruction pipeline reads instruction from the memory while previous instructions are
being executed in other segments of the pipeline. Thus we can execute multiple instructions
simultaneously. The pipeline will be more efficient if the instruction cycle is divided into
segments of equal duration.

Pipeline Conflicts
There are some factors that cause the pipeline to deviate from its normal performance. Some of
these factors are given below:
1. Timing Variations
All stages cannot take the same amount of time. This problem generally occurs in instruction
processing, where different instructions have different operand requirements and thus
different processing times.
2. Data Hazards
When several instructions are in partial execution and they reference the same data, a problem
arises. We must ensure that a later instruction does not attempt to access the data before the
current instruction has finished with it, because this would lead to incorrect results.

3. Branching
In order to fetch and execute the next instruction, we must know what that instruction is. If
the present instruction is a conditional branch, and its result will lead us to the next
instruction, then the next instruction may not be known until the current one is processed.
4. Interrupts
Interrupts insert unwanted instructions into the instruction stream and thus affect the
execution of instructions.
5. Data Dependency
It arises when an instruction depends upon the result of a previous instruction but this result is
not yet available.

Advantages of Pipelining

1. The cycle time of the processor is reduced.


2. It increases the throughput of the system
3. It makes the system reliable.

Disadvantages of Pipelining

1. The design of a pipelined processor is complex and costly to manufacture.


2. Instruction latency is higher.

Performance

 Performance = 1 / Execution time


 If machine X is n times faster than machine Y:
o Performance of X = n * Performance of Y
o Execution time of X = (1/n) * Execution_time of Y
 CPU Execution_time = (Number of CPU clock cycles required) * (Cycle time), OR
 CPU Execution_time = (Number of CPU clock cycles required) / (Clock rate)
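
A small Python sketch applying these relations; the clock rate, cycle count and comparison
machine are made-up numbers for the example.

    # Illustrative sketch of the performance relations above (numbers are made up).
    clock_rate = 2.0e9                      # 2 GHz -> cycle time = 0.5 ns
    cycles = 4.0e9                          # clock cycles needed by the program

    exec_time = cycles / clock_rate         # = cycles * cycle_time
    performance = 1 / exec_time

    exec_time_y = 6.0                       # a slower machine Y, for comparison
    n = exec_time_y / exec_time             # X is n times faster than Y
    print(exec_time, performance, n)        # 2.0 s, 0.5 (1/s), 3.0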

***

UNIT-9 ALU ORGANISATION


Arithmetic And Logic Unit (ALU):
An arithmetic logic unit (ALU) is a major component of the central processing unit of a
computer system. It does all processes related to arithmetic and logic operations that need to
be done on instruction words. In some microprocessor architectures, the ALU is divided into
the arithmetic unit (AU) and the logic unit (LU).
An ALU can be designed by engineers to calculate any operation. As the operations become
more complex, the ALU also becomes more expensive, takes up more space in the CPU and
dissipates more heat. That is why engineers make the ALU powerful enough to ensure that
the CPU is also powerful and fast, but not so complex as to become prohibitive in terms of
cost and other disadvantages.
The arithmetic logic unit is that part of the CPU that handles all the calculations the CPU may
need. Most of these operations are logical in nature. Depending on how the ALU is designed,
it can make the CPU more powerful, but it also consumes more energy and creates more heat.
Therefore, there must be a balance between how powerful and complex the ALU is and how
expensive the whole unit becomes. This is why faster CPUs are more expensive, consume
more power and dissipate more heat. In modern CPUs or microprocessors, there can be more
than one integrated ALU to speed up arithmetic and logical operations, for example an integer
unit and a floating point unit.
The main functions of the ALU are to do arithmetic and logic operations, including bit
shifting operations. These are essential processes that need to be done on almost any data that
is being processed by the CPU.

Organization of ALU:
Various circuits that process data and perform arithmetical operations are connected to the
microprocessor's ALU. The accumulator and the data buffer store data temporarily. These data
are processed according to control instructions to solve problems such as addition,
multiplication, etc.
Functions of ALU:
Functions of ALU or Arithmetic & Logic Unit can be categorized into following 3 categories:

1. Arithmetic Operations:
Additions, multiplications, etc. are examples of arithmetic operations. Finding whether one
number is greater than, smaller than or equal to another by using subtraction is also a form
of arithmetic operation.

2. Logical Operations:
Operations like AND, OR, NOR, NOT etc. using logical circuitry are examples of logical
operations.

3. Data Manipulations:

Operations such as flushing a register are examples of data manipulation. Shifting binary
numbers is also an example of data manipulation; it moves the bits a certain number of places
to the left or right, which corresponds to multiplying or dividing the value by a power of
two.
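
As a rough illustration of these three categories, the following Python sketch dispatches a
handful of 8-bit operations; the operation names and the 8-bit width are assumptions made for
the example, not a fixed instruction set.

    # Illustrative sketch of an 8-bit ALU covering the three categories of operations.
    MASK8 = 0xFF

    def alu(op, a, b=0):
        if op == "ADD":  return (a + b) & MASK8          # arithmetic
        if op == "SUB":  return (a - b) & MASK8
        if op == "AND":  return a & b                    # logical
        if op == "OR":   return a | b
        if op == "NOT":  return (~a) & MASK8
        if op == "SHL":  return (a << 1) & MASK8         # data manipulation (shift)
        if op == "SHR":  return a >> 1
        raise ValueError("unknown operation: " + op)

    print(alu("ADD", 200, 100))    # 44  (carry out of 8 bits is discarded)
    print(alu("SHL", 0b00010110))  # 44  (shift left doubles the value: 22 -> 44)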

***

UNIT-10 THE CONTROL UNIT


The three main elements of the control unit are as follows:

Decoder
This is used to decode the instructions that make up a program as they are being processed,
and to determine what actions must be taken in order to process them. These decisions are
normally made by looking at the opcode of the instruction, together with the addressing mode
used.

Timer or clock
The timer or clock ensures that all processes and instructions are carried out and completed at
the right time. Pulses are sent to the other areas of the CPU at regular intervals (related to the
processor clock speed), and actions only occur when a pulse is detected. This ensures that the
actions themselves also occur at these same regular intervals, meaning that the operations of
the CPU are synchronised.

Control logic circuits

The control logic circuits are used to create the control signals themselves, which are then
sent around the processor. These signals inform the arithmetic and logic unit and the register
array what actions and steps they should be performing, what data they should be using to
perform those actions, and what should be done with the results.

Functions of Control Unit:


Functions of control unit can be categorized into following 5 categories

1. Fetching instructions one by one from primary memory and gather required data and
operands to perform those instructions.
2. Sending instructions to ALU to perform additions, multiplication etc.
3. Receiving and sending results of operations of ALU to primary memory
4. Fetching programs from input and secondary memory and bringing them to primary
memory
5. Sending results from ALU stored in primary memory to output
Hardwired Control Unit
It is implemented with the help of gates, flip flops, decoders etc. in the hardware. The inputs
to control unit are the instruction register, flags, timing signals etc. This organization can be
very complicated if we have to make the control unit large.
If the design has to be modified or changed, all the combinational circuits have to be
modified which is a very difficult task.
Microprogrammed Control Unit
It is implemented by using a programming approach. A sequence of micro-operations is carried
out by executing a program consisting of micro-instructions. In this organization any
modifications or changes can be done by updating the micro program in the control memory
by the programmer.

Difference between Hardwired Control and Microprogrammed Control


Hardwired Control                              Microprogrammed Control

Technology is circuit based.                   Technology is software based.

It is implemented through flip-flops,          Microinstructions generate signals to control
gates, decoders etc.                           the execution of instructions.

Fixed instruction format.                      Variable instruction format (16-64 bits per
                                               instruction).

Instructions are register based.               Instructions are not register based.

ROM is not used.                               ROM is used.

It is used in RISC.                            It is used in CISC.

Faster decoding.                               Slower decoding.

Difficult to modify.                           Easily modified.

Chip area is less.                             Chip area is large.

Wilkes Control
• 1951
• Matrix partially filled with diodes
• During cycle, one row activated
— Generates signals where diode present
— First part of row generates control
— Second generates address for next cycle
Wilkes' control unit consists of a control memory address register (CMAR), a decoder, and a
control store. Data from the instruction register is entered into the CMAR, and its output is
fed to the decoder, which activates one row of the control store (each row is a collection of
control fields, condition bits and next-address lines).

MICRO-INSTRUCTION
Microinstruction: A single instruction in microcode. It is the most elementary instruction in
the computer, such as moving the contents of a register to the arithmetic logic unit (ALU). It
takes several microinstructions to carry out one complex machine instruction (CISC).
Information in a Microinstruction
- Control information
- Sequencing information
- Constant: information which is useful when fed into the system
This information needs to be organized in some way for
- Efficient use of the microinstruction bits
- Fast decoding
Field Encoding
- Encoding the microinstruction bits
- Encoding slows down the execution speed due to the decoding delay
- Encoding also reduces the flexibility due to the decoding hardware
Fig: Microinstruction encoding - direct encoding

Fig: Microinstruction encoding - indirect encoding

MICRO-INSTRUCTION TYPES
Vertical micro-programming: Each micro-instruction specifies single (or few) micro-
operations to be performed.
 Width is narrow
 n control signals encoded into log2 n bits
 Limited ability to express parallelism
 Considerable encoding of control information requires external memory word decoder
to identify the exact control line being manipulated
 Diagram:-

[Diagram: vertical micro-instruction format - function codes, jump condition, micro-instruction address]

Horizontal micro-programming: Each micro-instruction specifies many different micro-


operations to be performed in parallel.
 Wide memory word
 High degree of parallel operations possible
 Little encoding of control information
 Diagram:-
[Diagram: horizontal micro-instruction format - internal CPU control signals, system bus control signals, jump condition, micro-instruction address]
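
A one-line Python comparison of the control-field widths, assuming (purely for the sake of the
example) a machine with 64 distinct control signals and ignoring the jump-condition and
address fields:

    # Illustrative sketch of the width difference between horizontal and vertical formats.
    from math import ceil, log2

    n_signals = 64
    horizontal_bits = n_signals                 # one bit per control signal, fully parallel
    vertical_bits = ceil(log2(n_signals))       # encoded field, decoded outside the control memory

    print(horizontal_bits, vertical_bits)       # 64 6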

CONTROL MEMORY ORGANIZATION

In a microprogrammed data processing system, where the execution of a microinstruction
sequence may be interrupted at any time for the execution of a higher-priority
microinstruction sequence, the control memory can be organized in such a way as to provide
microinstructions of variable length.
The basic length of the microinstructions is defined by the parallelism of a first control
memory.
With regard to a first range of addresses of the first control memory, a second control
memory, read in parallel to the first one, provides a microinstruction field which is added to
the basic field of the first memory and increases the microinstruction length.
With regard to the remaining range of addresses of the first control memory, a first
microinstruction may load one of its bit fields into a register.
That bit field is then associated with the subsequent microinstruction, increasing its
length.
In this case, to prevent the execution of such a longer subsequent microinstruction from being
affected by a microprogram interruption occurring between the execution of the first
microinstruction and the reading out of the subsequent one, logic circuits defer, in case of
interruption, the association of the bit field with the subsequent microinstruction until the
return to the interrupted microprogram.

MICRO-INSTRUCTION FORMATS
The microinstruction format consists of 128 bits, broken down into 30 functional
fields. Each of these fields consists of one or more bits, and they are grouped into
five major categories:

1) Control of board

2) 8847 floating- point and integer processor or chip

3) 8832 registered ALU

4) 8818 micro-sequencer

5) WCS data field.

Control operations in the microinstruction include:

- Selecting condition codes for sequencer control. The first bit of field 1 indicates whether
the condition flag is to be set to 1 or 0, and the remaining 4 bits indicate which flag is to
be set.

- Sending an I/O request to the PC/AT.

- Enabling local data memory read/write operations.

- Determining the unit driving the system Y bus. One of the four devices attached to the bus
is selected.

For the control memory, the microinstruction format is given below:

Fig: Instruction Format

The three distinct fields are:


A 1-bit field for indirect addressing

A 4-bit operation code (opcode)

An 11-bit address field

THE EXECUTION OF MICRO-PROGRAM

Consider a general sequential circuit. Its combinational circuit can be implemented directly
as a truth table, although this is not efficient in terms of size. A read-only memory (ROM)
stores the truth table, and this ROM can be regarded as storing a "program".

The ROM can output a fixed sequence of control signals simply by cycling through the addresses
of the ROM. The content of this ROM is a microprogram. It is comparable to a straight-line
program (no transfer of control). Each entry in the ROM is called a microword. A microprogram
counter is used to cycle through the sequence of control words.

Conditionals are the bits used to determine the flow of the microprogram. The next address
determines the next microword to be executed.
A microprogram is executed as follows:

1. A microword at the location specified by the microprogram counter is read out;


control bits are latched at an output buffer which is connected to the data path.
2. If the conditional field is specified and the test of the condition is true, the next
address of the microprogram comes from the next-address field; otherwise the microprogram
counter is incremented (execute the next microword).
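
The following Python sketch imitates this execution rule for the one-address format; the
control store contents, condition names and control-bit labels are made up for the example.

    # Illustrative sketch of the one-address microword execution described above.
    # Each microword holds (control_bits, condition, next_address).
    def run_microprogram(control_store, flags, max_steps=10):
        upc, trace = 0, []                       # microprogram counter and executed control bits
        for _ in range(max_steps):
            control_bits, condition, next_address = control_store[upc]
            trace.append(control_bits)           # latch control bits to the data path
            if condition is not None and flags[condition]:
                upc = next_address               # conditional branch in the microprogram
            else:
                upc += 1                         # execute the next microword
            if upc >= len(control_store):
                break
        return trace

    control_store = [
        ("fetch",   None,   0),    # plain microword: fall through to the next one
        ("decode",  "zero", 3),    # if the zero flag is set, jump to microword 3
        ("execute", None,   0),
        ("skip",    None,   0),
    ]
    print(run_microprogram(control_store, {"zero": True}))   # ['fetch', 'decode', 'skip']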
What has been described so far is called a horizontal microprogram. The microword can have
other formats. There are several possibilities:

1. Single format, one address as just described above.


2. Single format, two addresses: contains two next-address fields, one used when the test
result is true and the other when the test result is false.
3. Multiple formats, such as, one format for the control bits without the next address
field and another format for jump on condition with the address field. The advantage
is that the microword can be shorter than the single format. The disadvantage is that
to “jump” will take one extra cycle.

a) One-address format:   | control bits | next address |

b) Two-address format:   | control bits | true next | false next |

c) Multiple format:      | 0 | control bits |
                         | 1 | next address |

Advantage and disadvantage of microprogramming.

Advantage

Making a change to a hardwired control unit implies a global change, that is, the circuit will
be almost totally redesigned. Hence it is costly and time consuming, although present-day CAD
tools do reduce most of the burden in this area. In contrast, for a microprogrammed control
unit, making a change means just changing the microprogram, i.e. the bit pattern in the
micromemory. There are tools to generate this bit content from a human-readable microprogram,
so changing the microprogram is similar to editing and recompiling a program. The circuit of
the control unit does not change. This makes adding new instructions, modifying addressing
modes, or updating the version of the control behavior easy to do.

Disadvantage

Microprogramming relies on fast, high-speed micromemory. In fact, the architects of an early
microprogrammed machine, the IBM S/360 family, depended on this crucial technology, which was
still in development at that time. The breakthrough in memory technology came, and the S/360
became one of the most successful families of computers. A hardwired control unit is much
faster. Microprogramming is inherently very low level, making it hard to get absolutely
correct. Microprogramming is by nature concurrent, with many events occurring at the same
time, so it is difficult to develop and debug. (For a good account of this process, read Tracy
Kidder's "The Soul of a New Machine".)

***
