Memory is a passive component that simply stores information until it is requested by another
part of the system. During normal operation, it feeds instructions and data to the processor,
and at other times it is the source or destination of data transferred by I/O devices.
Information in a memory is accessed by its address.
I/O devices transfer information, without altering it, between the external world and one or
more internal components. I/O devices can be secondary memories, for example disks and
tapes, or devices used to communicate directly with users, such as video displays, keyboards
and mice.
The communication channels that tie the system together can either be simple links that
connect two devices or more complex switches that interconnect several components and
allow any two of them to communicate at a given point in time. When a switch is configured
to allow two devices to exchange information, all other devices that rely on the switch are
blocked, i.e., they must wait until the switch can be reconfigured.
This computer architecture design, due to John von Neumann, consists of a Control Unit, an
Arithmetic and Logic Unit (ALU), a Memory Unit, Registers and Inputs/Outputs.
The Central Processing Unit (CPU) is the electronic circuit responsible for executing the
instructions of a computer program.
Registers:-
Registers are high speed storage areas in the CPU. All data must be stored in a register
before it can be processed.
MAR  Memory Address Register      Holds the memory location of data that needs to be accessed
MDR  Memory Data Register         Holds data that is being transferred to or from memory
CIR  Current Instruction Register Contains the current instruction during processing
The ALU allows arithmetic (add, subtract etc) and logic (AND, OR, NOT etc) operations to
be carried out.
The control unit controls the operation of the computer’s ALU, memory and input/output
devices, telling them how to respond to the program instructions it has just read and
interpreted from the memory unit. The control unit also provides the timing and control
signals required by other computer components.
Buses:-
Buses are the means by which data is transmitted from one part of a computer to another,
connecting all major internal components to the CPU and memory.
A standard CPU system bus is comprised of a control bus, data bus and address bus.
Address Bus  Carries the addresses of data (but not the data) between the processor and memory
Data Bus     Carries data between the processor, the memory unit and the input/output devices
Control Bus  Carries control signals/commands from the CPU (and status signals from other devices) in order to control and coordinate all the activities within the computer
Memory Unit:-
RAM is split into partitions. Each partition consists of an address and its contents (both in
binary form).
Loading data from permanent memory (the hard drive) into the faster and directly accessible
temporary memory (RAM) allows the CPU to operate much more quickly.
INSTRUCTION EXECUTION:
An instruction is a command given by the user to the computer. Execution is the process by
which a computer performs an instruction. In instruction execution, a program to be
executed by a processor consists of a set of instructions stored in memory.
Terminologies:-
Program Counter is a register in a computer processor that contains the address of the
next instruction which will be executed.
Memory Address Register (MAR) holds the Memory Location of data that needs to be
accessed.
Instruction Register (IR) is a part of CPU control unit that stores the instruction currently
being executed or decoded.
Memory Buffer Register (MBR) stores the data being transferred to and from immediate
access store also known as Memory Data Register (MDR).
Control Unit (CU) decodes the program instruction in the IR, selecting machine
resources such as a data source register and a particular arithmetic operation.
Arithmetic Logic Unit (ALU) performs mathematical and logical operations.
Accumulator (AC) is a single data register inside the processor used to store intermediate results.
INSTRUCTION EXECUTION CYCLE:-
The instruction execution cycle is the time period during which one instruction is fetched
from memory and executed, when the computer is given an instruction in machine language.
Each instruction is further divided into a sequence of phases.
After the execution of an instruction, the program counter is incremented to point to the next instruction.
Process:
I. The processor reads (fetches) the instruction from memory.
II. The processor decodes the instruction.
III. The processor executes the instruction.
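The three steps above can be sketched as a toy fetch-decode-execute loop in Python. This is only an illustration: the LOAD/ADD/STORE/HALT instruction set and the memory layout are invented here, not taken from the notes.

```python
# Toy fetch-decode-execute loop for a single-accumulator machine.
# Addresses 0-3 hold instructions; addresses 10-12 hold data.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 7, 11: 4, 12: 0}
pc, acc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]   # fetch: read the instruction at the PC
    pc += 1                        # increment PC to point to the next instruction
    if opcode == "LOAD":           # decode and execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])  # 11
```

Each loop iteration is one instruction cycle: fetch, PC increment, decode, execute.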
INSTRUCTION CYCLE:
If an interrupt occurs, it is handled by the interrupt handler; after the interrupt has been
handled, the processor resumes the cycle by reading the next instruction from the saved
memory address and restoring the saved context.
HISTORY OF COMPUTER:
JACQUARD CARD:- In the early 19th century, Joseph Marie Jacquard built a loom controlled by
punched cards. It was the first step towards the concept of programming.
PUNCHED CARD:- Dr. Herman Hollerith developed a tabulating machine to read and
compile data from punched cards.
ANALYTICAL ENGINE:- The first person to use the concept of programming in a
computing machine was Charles Babbage, a professor at Cambridge University in England.
Babbage designed a new machine, the Analytical Engine, which performed calculations
according to instruction codes.
MARK-1:- Howard Aiken and Grace Hopper of Harvard University, together with the
American multinational company IBM, developed the computer called Mark-1, completed
in 1944. It was built from a large number of components.
ENIAC:- It stands for “Electronic Numerical Integrator and Calculator”. It was a huge
machine with about 18,000 vacuum tubes, 8 feet high and 80 feet long, weighing 30 tons and
consuming 174,000 watts of power. The ENIAC could perform in seconds a mathematical
calculation that would have required 40 hours for one person to complete.
UNIVAC-1:- It stands for “UNIVersal Automatic Computer”. This was the first commercial
electronic computer, developed by John Mauchly and J. Presper Eckert in 1951.
GENERATIONS OF COMPUTER:
The first generation computers were developed by using vacuum tubes or thermionic
valves.
The input of this system was based on punched cards and paper tape; however, the
output was displayed on printouts.
The first generation computers worked on the binary-coded concept (i.e., the language of
0s and 1s). Examples: ENIAC, EDVAC, etc.
Until then, computer generations had been categorized on the basis of hardware
only, but fifth generation technology also included software.
The computers of the fifth generation had high capability and large memory capacity.
Working with computers of this generation was fast and multiple tasks could be
performed simultaneously.
Some of the popular advanced technologies of the fifth generation include Artificial
intelligence, Quantum computation, Nanotechnology, Parallel processing, etc.
***
DATA REPRESENTATION:
Data and instructions cannot be entered and processed directly into computers using human
language. Any type of data, be it numbers, letters, special symbols, sound or pictures, must
first be converted into machine-readable form, i.e. binary form. Due to this reason, it is
important to understand how a computer together with its peripheral devices handles data in
its electronic circuits, on magnetic media and in optical devices.
Magnetic technology is mostly used on storage devices that are coated with special magnetic
materials such as iron oxide. Data is written on the media by arranging the magnetic dipoles
of some iron oxide particles to face in one direction and some others in the opposite
direction. The presence of a magnetic field in one direction on magnetic media is interpreted
as 1, while a field in the opposite direction is interpreted as 0.
In optical devices, the presence of light is interpreted as ‘1’ while its absence is interpreted as
‘0’. Optical devices use this technology to read or store data. Take the example of a CD-ROM:
if the shiny surface is placed under a powerful microscope, the surface is observed to have
very tiny holes called pits. The areas that do not have pits are called land. The laser beam
reflected from the land is interpreted as 1. The laser entering a pit is not reflected; this is
interpreted as 0. The reflected pattern of light from the rotating disk falls on a receiving
photoelectric detector that transforms the patterns into digital form.
NUMBER SYSTEMS:
If the base or radix of a number system is ‘r’, then the digits in that number system range
from zero to r−1, so the total number of digits in that number system is ‘r’. We get various
number systems by choosing values of the radix greater than or equal to two.
The following number systems are the most commonly used: binary (base 2), octal (base 8),
decimal (base 10) and Hexa-decimal (base 16).
BASE CONVERSION:
Decimal Number to other Bases Conversion:-
If the decimal number contains both integer part and fractional part, then convert both the
parts of decimal number into other base individually. Follow these steps for converting the
decimal number into its equivalent number of any base ‘r’.
Do division of integer part of decimal number and successive quotients with base ‘r’ and
note down the remainders till the quotient is zero. Consider the remainders in reverse
order to get the integer part of equivalent number of base ‘r’. That means, first and last
remainders denote the least significant digit and most significant digit respectively.
Do multiplication of fractional part of decimal number and successive fractions with
base ‘r’ and note down the carry till the result is zero or the desired number of equivalent
digits is obtained. Consider the normal sequence of carry in order to get the fractional part
of equivalent number of base ‘r’.
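The two procedures above (repeated division for the integer part, repeated multiplication for the fractional part) can be sketched in Python. The function name and the digit limit are my own choices for illustration:

```python
def to_base_r(integer_part, fractional_part, r, max_frac_digits=8):
    """Convert a decimal number (given as integer and fractional parts)
    to base r: repeated division for the integer part, repeated
    multiplication for the fractional part."""
    digit_set = "0123456789ABCDEF"
    # Integer part: divide by r, collect remainders in reverse order.
    int_digits = ""
    q = integer_part
    while q > 0:
        q, rem = divmod(q, r)
        int_digits = digit_set[rem] + int_digits
    # Fractional part: multiply by r, collect carries in normal order.
    frac_digits = ""
    f = fractional_part
    while f > 0 and len(frac_digits) < max_frac_digits:
        f *= r
        carry = int(f)
        frac_digits += digit_set[carry]
        f -= carry
    return (int_digits or "0") + ("." + frac_digits if frac_digits else "")

print(to_base_r(58, 0.25, 2))   # 111010.01
print(to_base_r(58, 0.25, 8))   # 72.2
print(to_base_r(58, 0.25, 16))  # 3A.4
```

The same function reproduces the worked examples for bases 2, 8 and 16 that follow.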
Decimal to Binary Conversion:-
The following two types of operations take place, while converting a decimal number into its
equivalent binary number.
Division of integer part and successive quotients with base 2.
Multiplication of fractional part and successive fractions with base 2.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and fractional part is 0.25.
Step 1 − Division of 58 and successive quotients with base 2.
Operation Quotient Remainder
58/2 29 0 (LSB)
29/2 14 1
14/2 7 0
7/2 3 1
3/2 1 1
1/2 0 1 (MSB)
⇒(58)10 = (111010)2
Therefore, the integer part of equivalent binary number is 111010.
Step 2 − Multiplication of 0.25 and successive fractions with base 2.
Operation Result Carry
0.25 x 2 0.5 0
0.5 x 2 1.0 1
- 0.0 -
⇒(.25)10 = (.01)2
Therefore, the fractional part of equivalent binary number is .01
⇒(58.25)10 = (111010.01)2
Therefore, the binary equivalent of decimal number 58.25 is 111010.01.
Decimal to Octal Conversion:-
The following two types of operations take place, while converting decimal number into its
equivalent octal number.
Division of integer part and successive quotients with base 8.
Multiplication of fractional part and successive fractions with base 8.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and fractional part is 0.25.
Step 1 − Division of 58 and successive quotients with base 8.
58/8 7 2
7/8 0 7
⇒(58)10 = (72)8
Therefore, the integer part of equivalent octal number is 72.
Step 2 − Multiplication of 0.25 and successive fractions with base 8.
0.25 x 8 2.00 2
- 0.00 -
⇒ (.25)10 = (.2)8
Therefore, the fractional part of equivalent octal number is .2
⇒ (58.25)10 = (72.2)8
Therefore, the octal equivalent of decimal number 58.25 is 72.2.
Decimal to Hexa-Decimal Conversion:-
The same two types of operations take place, while converting a decimal number into its
equivalent Hexa-decimal number.
Division of integer part and successive quotients with base 16.
Multiplication of fractional part and successive fractions with base 16.
Example
Consider the decimal number 58.25. Here, the integer part is 58 and fractional part is 0.25.
Step 1 − Division of 58 and successive quotients with base 16.
58/16 3 10=A
3/16 0 3
⇒ (58)10 = (3A)16
Therefore, the integer part of equivalent Hexa-decimal number is 3A.
Step 2 − Multiplication of 0.25 and successive fractions with base 16.
0.25 x 16 4.00 4
- 0.00 -
⇒(.25)10 = (.4)16
Therefore, the fractional part of equivalent Hexa-decimal number is .4.
⇒(58.25)10 = (3A.4)16
Therefore, the Hexa-decimal equivalent of decimal number 58.25 is 3A.4.
Binary Number to other Bases Conversion:-
The process of converting a number from binary to decimal is different to the process of
converting a binary number to other bases. Now, let us discuss about the conversion of a
binary number to decimal, octal and Hexa-decimal number systems one by one.
Binary to Decimal Conversion:-
For converting a binary number into its equivalent decimal number, first multiply the bits of
binary number with the respective positional weights and then add all those products.
Example
Consider the binary number 1101.11.
Mathematically, we can write it as
(1101.11)2 = (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) + (1 × 2^-1) + (1 × 2^-2)
⇒ (1101.11)2 = 8 + 4 + 0 + 1 + 0.5 + 0.25 = 13.75
⇒ (1101.11)2 = (13.75)10
Therefore, the decimal equivalent of binary number 1101.11 is 13.75.
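The positional-weight method works the same way for any base, so it can be sketched as one Python function (the function name is my own):

```python
def base_to_decimal(number, base):
    """Multiply each digit by its positional weight and sum the products."""
    int_part, _, frac_part = number.partition(".")
    value = 0.0
    # Integer digits have weights base^0, base^1, ... from the right.
    for i, d in enumerate(reversed(int_part)):
        value += int(d, base) * base ** i
    # Fractional digits have weights base^-1, base^-2, ... from the left.
    for i, d in enumerate(frac_part, start=1):
        value += int(d, base) * base ** -i
    return value

print(base_to_decimal("1101.11", 2))  # 13.75
```

The same function also handles the octal example further below.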
Binary to Octal Conversion:-
We know that the bases of binary and octal number systems are 2 and 8 respectively. Three
bits of a binary number are equivalent to one octal digit, since 2^3 = 8.
Follow these two steps for converting a binary number into its equivalent octal number.
Start from the binary point and make groups of 3 bits on both sides of the binary point. If
the outermost group is short by one or two bits, include the required number of zeros on
the extreme sides.
Write the octal digits corresponding to each group of 3 bits.
Example
Consider the binary number 101110.01101.
Step 1 − Make the groups of 3 bits on both sides of binary point.
101 110.011 01
Here, on the right side of the binary point, the last group has only 2 bits. So, include one zero
on the extreme side in order to make it a group of 3 bits.
⇒ 101 110.011 010
Step 2 − Write the octal digits corresponding to each group of 3 bits.
⇒ (101 110.011 010)2 = (56.32)8
Therefore, the octal equivalent of binary number 101110.01101 is 56.32.
Binary to Hexa-Decimal Conversion:-
We know that the bases of binary and Hexa-decimal number systems are 2 and 16
respectively. Four bits of a binary number are equivalent to one Hexa-decimal digit, since
2^4 = 16.
Follow these two steps for converting a binary number into its equivalent Hexa-decimal
number.
Start from the binary point and make groups of 4 bits on both sides of the binary point. If
the outermost group is short of 4 bits, include the required number of zeros on the
extreme sides.
Write the Hexa-decimal digits corresponding to each group of 4 bits.
Example
Consider the binary number 101110.01101
Step 1 − Make the groups of 4 bits on both sides of binary point.
10 1110.0110 1
Here, the first group has only 2 bits. So, include two zeros on the extreme side in order to
make it a group of 4 bits. Similarly, include three zeros on the extreme side in order to make
the last group also a group of 4 bits.
⇒ 0010 1110.0110 1000
Step 2 − Write the Hexa-decimal digits corresponding to each group of 4 bits.
⇒ (0010 1110.0110 1000)2 = (2E.68)16
Therefore, the Hexa-decimal equivalent of binary number 101110.01101 is (2E.68).
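Both grouping conversions above can be sketched with one Python function, parameterized by the group size (3 for octal, 4 for Hexa-decimal); the function name is my own:

```python
def binary_to_base_2n(bits, group):
    """Group bits around the binary point and map each group to one
    digit of base 2**group (group=3 -> octal, group=4 -> hexadecimal)."""
    digit_set = "0123456789ABCDEF"
    int_part, _, frac_part = bits.partition(".")
    # Pad with zeros on the extreme sides so every group is complete.
    int_part = "0" * (-len(int_part) % group) + int_part
    frac_part = frac_part + "0" * (-len(frac_part) % group)
    convert = lambda part: "".join(digit_set[int(part[i:i + group], 2)]
                                   for i in range(0, len(part), group))
    return convert(int_part) + ("." + convert(frac_part) if frac_part else "")

print(binary_to_base_2n("101110.01101", 3))  # 56.32
print(binary_to_base_2n("101110.01101", 4))  # 2E.68
```

It also reproduces the octal-to-binary example below in reverse (1100101.010011 groups back to 145.23).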
Octal Number to other Bases Conversion:-
The process of converting a number from octal to decimal is different from the process of
converting an octal number to other bases. Now, let us discuss the conversion of an
octal number to decimal, binary and Hexa-decimal number systems one by one.
Octal to Decimal Conversion:-
For converting an octal number into its equivalent decimal number, first multiply the digits
of octal number with the respective positional weights and then add all those products.
Example
Consider the octal number 145.23.
Mathematically, we can write it as
(145.23)8 = (1 × 8^2) + (4 × 8^1) + (5 × 8^0) + (2 × 8^-1) + (3 × 8^-2)
⇒ (145.23)8 = 64 + 32 + 5 + 0.25 + 0.046875 = 101.296875
⇒ (145.23)8 = (101.296875)10
Therefore, the decimal equivalent of octal number 145.23 is 101.296875.
Octal to Binary Conversion:-
The process of converting an octal number to an equivalent binary number is just opposite to
that of binary to octal conversion. By representing each octal digit with 3 bits, we will get the
equivalent binary number.
Example
Consider the octal number 145.23.
Represent each octal digit with 3 bits.
(145.23)8 = (001 100 101.010 011)2
The value doesn’t change by removing the zeros, which are on the extreme side.
⇒ (145.23)8 = (1100101.010011)2
Therefore, the binary equivalent of octal number 145.23 is 1100101.010011.
Octal to Hexa-Decimal Conversion:-
Follow these two steps for converting an octal number into its equivalent Hexa-decimal
number.
Convert the octal number into its equivalent binary number by representing each octal
digit with 3 bits.
Convert that binary number into its equivalent Hexa-decimal number by making groups
of 4 bits and writing the Hexa-decimal digit of each group.
REPRESENTATION OF SIGNED BINARY NUMBERS:
A signed binary number can be represented in the following three forms.
Sign-Magnitude form
1’s complement form
2’s complement form
Representation of a positive number in all these 3 forms is the same. But, only the
representation of a negative number differs in each form.
Example
Consider the positive decimal number +108. The binary equivalent of magnitude of this
number is 1101100. These 7 bits represent the magnitude of the number 108. Since it is
positive number, consider the sign bit as zero, which is placed on left most side of
magnitude.
(+108)10 = (01101100)2
Therefore, the signed binary representation of positive decimal number +108 is
01101100. So, the same representation is valid in sign-magnitude form, 1’s complement
form and 2’s complement form for positive decimal number +108.
Sign-Magnitude form:-
In sign-magnitude form, the MSB is used for representing sign of the number and the
remaining bits represent the magnitude of the number. So, just include sign bit at the left
most side of unsigned binary number. This representation is similar to the signed decimal
numbers representation.
Example
Consider the negative decimal number -108. The magnitude of this number is 108. We
know the unsigned binary representation of 108 is 1101100. It has 7 bits. All these bits
represent the magnitude.
Since the given number is negative, consider the sign bit as one, which is placed on left most
side of magnitude.
(−108)10 = (11101100)2
Therefore, the sign-magnitude representation of -108 is 11101100.
1’s complement form:-
The 1’s complement of a number is obtained by complementing all the bits of signed binary
number. So, 1’s complement of positive number gives a negative number. Similarly, 1’s
complement of negative number gives a positive number.
That means, if you perform two times 1’s complement of a binary number including sign bit,
then you will get the original signed binary number.
Example
Consider the negative decimal number -108. The magnitude of this number is 108. We
know the signed binary representation of 108 is 01101100.
It has 8 bits. The MSB of this number is zero, which indicates a positive number.
Complement of zero is one and vice-versa. So, replace zeros by ones and ones by zeros in
order to get the negative number.
(−108)10 = (10010011)2
Therefore, the 1’s complement of (108)10 is (10010011)2.
2’s complement form:-
The 2’s complement of a binary number is obtained by adding one to the 1’s
complement of signed binary number. So, 2’s complement of positive number gives a
negative number. Similarly, 2’s complement of negative number gives a positive number.
That means, if you perform two times 2’s complement of a binary number including sign bit,
then you will get the original signed binary number.
Example
Consider the negative decimal number -108.
We know the 1’s complement of (108)10 is (10010011)2
2’s complement of (108)10 = 1’s complement of (108)10 + 1
= 10010011 + 1
= 10010100
Therefore, the 2’s complement of (108)10 is (10010100)2.
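The two-step construction above (take the 1's complement, then add one) can be sketched in Python; the function name and the default width are my own:

```python
def twos_complement(value, bits=8):
    """Represent a signed decimal number in 'bits'-bit 2's complement form:
    positive numbers keep their unsigned pattern; for negative numbers,
    invert all bits of the magnitude (1's complement) and add one."""
    if value >= 0:
        return format(value, f"0{bits}b")
    magnitude = format(-value, f"0{bits}b")
    ones = "".join("1" if b == "0" else "0" for b in magnitude)  # 1's complement
    return format(int(ones, 2) + 1, f"0{bits}b")                 # add one

print(twos_complement(-108))  # 10010100
print(twos_complement(108))   # 01101100
```

The `bits` parameter also covers the 5-bit examples used in the arithmetic section that follows.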
SIGNED BINARY ARITHMETIC:
Addition of two Signed Binary Numbers:-
Consider the two signed binary numbers A & B, which are represented in 2’s complement
form. We can perform the addition of these two numbers, which is similar to the addition of
two unsigned binary numbers. But, if the resultant sum contains carry out from sign bit, then
discard (ignore) it in order to get the correct value.
If resultant sum is positive, you can find the magnitude of it directly. But, if the resultant sum
is negative, then take 2’s complement of it in order to get the magnitude.
Example 1
Let us perform the addition of two decimal numbers +7 and +4 using 2’s complement
method.
The 2’s complement representations of +7 and +4 with 5 bits each are shown below.
(+7)10 = (00111)2
(+4)10 = (00100)2
The addition of these two numbers is
(+7)10 +(+4)10 = (00111)2+(00100)2
⇒(+7)10 +(+4)10 = (01011)2.
The resultant sum contains 5 bits. So, there is no carry out from sign bit. The sign bit ‘0’
indicates that the resultant sum is positive. So, the magnitude of sum is 11 in decimal number
system. Therefore, addition of two positive numbers will give another positive number.
Example 2
Let us perform the addition of two decimal numbers -7 and -4 using 2’s complement method.
The 2’s complement representation of -7 and -4 with 5 bits each are shown below.
(−7)10 = (11001)2
(−4)10 = (11100)2
The addition of these two numbers is
(−7)10 + (−4)10 = (11001)2 + (11100)2
⇒(−7)10 + (−4)10 = (110101)2.
The resultant sum contains 6 bits. In this case, carry is obtained from sign bit. So, we can
remove it
Resultant sum after removing carry is (−7)10 + (−4)10 = (10101)2.
The sign bit ‘1’ indicates that the resultant sum is negative. So, by taking 2’s complement of
it we will get the magnitude of resultant sum as 11 in decimal number system. Therefore,
addition of two negative numbers will give another negative number.
Subtraction of two Signed Binary Numbers:-
Consider the two signed binary numbers A & B, which are represented in 2’s complement
form. We know that 2’s complement of positive number gives a negative number. So,
whenever we have to subtract a number B from number A, then take 2’s complement of B and
add it to A. So, mathematically we can write it as
A - B = A + (2's complement of B)
Similarly, if we have to subtract the number A from number B, then take 2’s complement of
A and add it to B. So, mathematically we can write it as
B - A = B + (2's complement of A)
So, the subtraction of two signed binary numbers is similar to the addition of two signed
binary numbers. But, we have to take 2’s complement of the number, which is supposed to be
subtracted. This is the advantage of the 2’s complement technique. Follow the same rules of
addition of two signed binary numbers.
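The rule A − B = A + (2's complement of B), with the carry out of the sign bit discarded, can be sketched for the 5-bit examples used here (function names are my own):

```python
BITS = 5
MASK = 2 ** BITS - 1  # keeping only 5 bits discards the carry out of the sign bit

def encode(value):
    """5-bit 2's complement encoding of a small signed integer."""
    return value & MASK

def decode(word):
    """Interpret a 5-bit word as a signed value (MSB is the sign bit)."""
    return word - 2 ** BITS if word >= 2 ** (BITS - 1) else word

def subtract(a, b):
    """A - B = A + (2's complement of B), carry out of sign bit discarded."""
    neg_b = (~encode(b) + 1) & MASK        # 2's complement of B
    total = (encode(a) + neg_b) & MASK     # masking removes the carry
    return decode(total)

print(subtract(7, 4))  # 3
print(subtract(4, 7))  # -3
```

The two print lines correspond to Examples 3 and 4 below.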
Example 3
Let us perform the subtraction of two decimal numbers +7 and +4 using 2’s complement
method.
The subtraction of these two numbers is
(+7)10 − (+4)10 = (+7)10 + (−4)10.
The 2’s complement representations of +7 and -4 with 5 bits each are shown below.
(+7)10 = (00111)2
(−4)10 = (11100)2
⇒(+7)10 + (−4)10 = (00111)2 + (11100)2 = (00011)2
Here, a carry is obtained from the sign bit. So, we can remove it. The resultant sum after
removing the carry is
(+7)10 + (−4)10 = (00011)2
The sign bit ‘0’ indicates that the resultant sum is positive. So, the magnitude of it is 3 in
decimal number system. Therefore, subtraction of two decimal numbers +7 and +4 is +3.
Example 4
Let us perform the subtraction of two decimal numbers +4 and +7 using 2’s complement
method.
The subtraction of these two numbers is
(+4)10 − (+7)10 = (+4)10 + (−7)10.
The 2’s complement representation of +4 and -7 with 5 bits each are shown below.
(+4)10 = (00100)2
(-7)10 = (11001)2
⇒(+4)10 + (-7)10 = (00100)2 + (11001)2 = (11101)2
Here, carry is not obtained from sign bit. The sign bit ‘1’ indicates that the resultant sum
is negative. So, by taking 2’s complement of it we will get the magnitude of resultant sum as
3 in decimal number system. Therefore, subtraction of two decimal numbers +4 and +7 is -3.
CODES:
In coding, when numbers or letters are represented by a specific group of symbols, that
number or letter is said to be encoded. The group of symbols is called a code. Digital data
is represented, stored and transmitted as groups of bits. Such a group of bits is also
called a binary code.
Binary codes can be classified into two types.
Weighted codes
Unweighted codes
If the code has positional weights, then it is said to be a weighted code. Otherwise, it is an
unweighted code. Weighted codes can be further classified as positively weighted codes and
negatively weighted codes.
Binary Codes for Decimal digits:-
The following table shows the various binary codes for decimal digits 0 to 9.
Decimal Digit 8421 Code 2421 Code 84-2-1 Code Excess 3 Code
0             0000      0000      0000        0011
1             0001      0001      0111        0100
2             0010      0010      0110        0101
3             0011      0011      0101        0110
4             0100      0100      0100        0111
5             0101      1011      1011        1000
6             0110      1100      1010        1001
7             0111      1101      1001        1010
8             1000      1110      1000        1011
9             1001      1111      1111        1100
8 4 2 1 code:-
The weights of this code are 8, 4, 2 and 1.
This code has all positive weights. So, it is a positively weighted code.
This code is also called the natural BCD (Binary Coded Decimal) code.
Example
Let us find the BCD equivalent of the decimal number 786. This number has 3 decimal digits
7, 8 and 6. From the table, we can write the BCD (8421) codes of 7, 8 and 6 are 0111, 1000
and 0110 respectively.
∴ (786)10 = (011110000110)BCD
There are 12 bits in BCD representation, since each BCD code of decimal digit has 4 bits.
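The digit-by-digit BCD encoding can be sketched in one line of Python (the function name is my own):

```python
def to_bcd(number):
    """Encode each decimal digit with its 4-bit 8421 (natural BCD) code."""
    return "".join(format(int(d), "04b") for d in str(number))

print(to_bcd(786))  # 011110000110
```

Each decimal digit contributes exactly 4 bits, so an n-digit number always yields 4n bits.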
2 4 2 1 code:-
The weights of this code are 2, 4, 2 and 1.
This code has all positive weights. So, it is a positively weighted code.
It is an unnatural BCD code. Sum of weights of unnatural BCD codes is equal to 9.
It is a self-complementing code. Self-complementing codes provide the 9’s complement
of a decimal number, just by interchanging 1’s and 0’s in its equivalent 2421
representation.
Example
Let us find the 2421 equivalent of the decimal number 786. This number has 3 decimal digits
7, 8 and 6. From the table, we can write the 2421 codes of 7, 8 and 6 are 1101, 1110 and 1100
respectively.
Therefore, the 2421 equivalent of the decimal number 786 is 110111101100.
8 4 -2 -1 code:-
The weights of this code are 8, 4, -2 and -1.
This code has negative weights along with positive weights. So, it is a negatively
weighted code.
It is an unnatural BCD code.
It is a self-complementing code.
Example
Let us find the 8 4-2-1 equivalent of the decimal number 786. This number has 3 decimal
digits 7, 8 and 6. From the table, we can write the 8 4 -2 -1 codes of 7, 8 and 6 are 1001, 1000
and 1010 respectively.
Therefore, the 8 4 -2 -1 equivalent of the decimal number 786 is 100110001010.
Excess 3 code:-
This code doesn’t have any weights. So, it is an un-weighted code.
We will get the Excess 3 code of a decimal number by adding three (0011) to the 8421 code
of each of its decimal digits. Hence, it is called the Excess 3 code.
It is a self-complementing code.
Example
Let us find the Excess 3 equivalent of the decimal number 786. This number has 3 decimal
digits 7, 8 and 6. From the table, we can write the Excess 3 codes of 7, 8 and 6 are 1010, 1011
and 1001 respectively.
Therefore, the Excess 3 equivalent of the decimal number 786 is 101010111001
Gray Code:-
The following table shows the 4-bit Gray codes corresponding to each 4-bit binary code.
Decimal Binary Code Gray Code
0 0000 0000
1 0001 0001
2 0010 0011
3 0011 0010
4 0100 0110
5 0101 0111
6 0110 0101
7 0111 0100
8 1000 1100
9 1001 1101
10 1010 1111
11 1011 1110
12 1100 1010
13 1101 1011
14 1110 1001
15 1111 1000
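The standard construction behind the table above (not spelled out in the notes, but consistent with every row) is that the Gray code of n is n XOR (n shifted right by one bit):

```python
def binary_to_gray(n):
    """Gray code of n: each Gray bit is the XOR of adjacent binary bits,
    which for the whole word is simply n ^ (n >> 1)."""
    return n ^ (n >> 1)

# Reproduce the 4-bit table above.
for n in range(16):
    print(n, format(n, "04b"), format(binary_to_gray(n), "04b"))
```

Successive Gray codes differ in exactly one bit, which is the property that makes the code useful.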
Even Parity Code
The value of the even parity bit should be zero, if an even number of ones is present in the
binary code. Otherwise, it should be one, so that an even number of ones is present in the
even parity code. The even parity code contains the data bits and the even parity bit.
The following table shows the even parity codes corresponding to each 3-bit binary code.
Here, the even parity bit is included to the right of LSB of binary code.
Binary Code Even Parity Bit Even Parity Code
000 0 0000
001 1 0011
010 1 0101
011 0 0110
100 1 1001
101 0 1010
110 0 1100
111 1 1111
Here, the number of bits present in the even parity codes is 4. So, the possible even number
of ones in these even parity codes are 0, 2 & 4.
If the other system receives one of these even parity codes, then there is no error in the
received data. The bits other than even parity bit are same as that of binary code.
If the other system receives other than even parity codes, then there will be an error(s) in
the received data. In this case, we can’t predict the original binary code because we don’t
know the bit position(s) of error.
Therefore, even parity bit is useful only for detection of error in the received parity code.
But, it is not sufficient to correct the error.
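The even-parity scheme described above can be sketched in Python (function names are my own):

```python
def add_even_parity(bits):
    """Append an even parity bit so the total number of ones is even."""
    parity = bits.count("1") % 2      # 1 if the data has an odd number of ones
    return bits + str(parity)

def check_even_parity(code):
    """An error is detected when the received code has an odd number of ones."""
    return code.count("1") % 2 == 0

print(add_even_parity("011"))     # 0110
print(check_even_parity("0111"))  # False
```

Flipping any single bit of a valid code makes the ones-count odd, so the check detects it; flipping two bits does not, which is why parity detects but cannot correct errors.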
Odd Parity Code
The value of odd parity bit should be zero, if odd number of ones present in the binary code.
Otherwise, it should be one. So that, odd number of ones present in odd parity code. Odd
parity code contains the data bits and odd parity bit.
The following table shows the odd parity codes corresponding to each 3-bit binary code.
Here, the odd parity bit is included to the right of LSB of binary code.
Binary Code Odd Parity Bit Odd Parity Code
000 1 0001
001 0 0010
010 0 0100
011 1 0111
100 0 1000
101 1 1011
110 1 1101
111 0 1110
Here, the number of bits present in the odd parity codes is 4. So, the possible odd number of
ones in these odd parity codes are 1 & 3.
If the other system receives one of these odd parity codes, then there is no error in the
received data. The bits other than odd parity bit are same as that of binary code.
If the other system receives other than odd parity codes, then there is an error(s) in the
received data. In this case, we can’t predict the original binary code because we don’t
know the bit position(s) of error.
Therefore, odd parity bit is useful only for detection of error in the received parity code. But,
it is not sufficient to correct the error.
Hamming Code:-
Hamming code is useful for both detection and correction of error present in the received
data. This code uses multiple parity bits and we have to place these parity bits in the
positions of powers of 2.
The minimum value of 'k' for which the following relation is correct (valid) is nothing but
the required number of parity bits.
2^k ≥ n + k + 1
Where,
‘n’ is the number of bits in the binary code (information)
‘k’ is the number of parity bits
Therefore, the number of bits in the Hamming code is equal to n + k.
Let the Hamming code be b(n+k) b(n+k−1) … b3 b2 b1 and the parity bits be pk, p(k−1), …, p1.
We can place the ‘k’ parity bits in powers of 2 positions only. In the remaining bit positions,
we can place the ‘n’ bits of binary code.
Based on requirement, we can use either even parity or odd parity while forming a Hamming
code. But, the same parity technique should be used in order to find whether any error
present in the received data.
Follow this procedure for finding parity bits.
Find the value of p1, based on the number of ones present in bit positions b3, b5, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place value
of 2^0.
Find the value of p2, based on the number of ones present in bit positions b3, b6, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place value
of 2^1.
Find the value of p3, based on the number of ones present in bit positions b5, b6, b7 and so
on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place value
of 2^2.
Similarly, find other values of parity bits.
Follow this procedure for finding check bits.
Find the value of c1, based on the number of ones present in bit positions b1, b3, b5, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place
value of 2^0.
Find the value of c2, based on the number of ones present in bit positions b2, b3, b6, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place
value of 2^1.
Find the value of c3, based on the number of ones present in bit positions b4, b5, b6, b7 and
so on. All these bit positions (suffixes) in their equivalent binary have ‘1’ in the place
value of 2^2.
Similarly, find other values of check bits.
The decimal equivalent of the check bits in the received data gives the value of bit position,
where the error is present. Just complement the value present in that bit position. Therefore,
we will get the original binary code after removing parity bits.
Example 1
Let us find the Hamming code for binary code, d4d3d2d1 = 1000. Consider even parity bits.
The number of bits in the given binary code is n=4.
We can find the required number of parity bits by using the following mathematical relation.
2^k ≥ n + k + 1
Substitute n = 4 in the above mathematical relation.
⇒ 2^k ≥ 4 + k + 1
⇒ 2^k ≥ 5 + k
The minimum value of k that satisfies the above relation is 3. Hence, we require 3 parity bits
p1, p2, and p3. Therefore, the number of bits in the Hamming code will be 7, since there are 4
bits in the binary code and 3 parity bits. We have to place the parity bits and bits of binary
code in the Hamming code as shown below.
The 7-bit Hamming code is b7 b6 b5 b4 b3 b2 b1 = d4 d3 d2 p3 d1 p2 p1.
By substituting the bits of binary code, the Hamming code will
be b7 b6 b5 b4 b3 b2 b1 = 1 0 0 p3 0 p2 p1. Now, let us find the parity bits.
p1 = b7 ⊕ b5 ⊕ b3 = 1 ⊕ 0 ⊕ 0 = 1
p2 = b7 ⊕ b6 ⊕ b3 = 1 ⊕ 0 ⊕ 0 = 1
p3 = b7 ⊕ b6 ⊕ b5 = 1 ⊕ 0 ⊕ 0 = 1
By substituting these parity bits, the Hamming code will
be b7 b6 b5 b4 b3 b2 b1 = 1001011.
Example 2
In the above example, we got the Hamming code
as b7b6b5b4b3b2b1 = 1001011. Now, let us find the error position
when the received code is b7b6b5b4b3b2b1 = 1001111.
Now, let us find the check bits.
c1 = b7 ⊕ b5 ⊕ b3 ⊕ b1 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1
c2 = b7 ⊕ b6 ⊕ b3 ⊕ b2 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1
c3 = b7 ⊕ b6 ⊕ b5 ⊕ b4 = 1 ⊕ 0 ⊕ 0 ⊕ 1 = 0
The decimal value of check bits gives the position of error in received Hamming code.
c3c2c1 = (011)2 = (3)10
Therefore, the error is present in the third bit (b3) of the Hamming code. Just complement
the value present in that bit and remove the parity bits in order to get the original binary code.
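The encoding and checking procedure of both examples can be sketched in Python. The bit ordering b7…b1 and the parity/check equations follow the worked example; the function names are only illustrative.

```python
def hamming_encode(data):
    # data is (d4, d3, d2, d1); returns bits [b7, b6, b5, b4, b3, b2, b1]
    # with even parity: b7=d4, b6=d3, b5=d2, b4=p3, b3=d1, b2=p2, b1=p1
    d4, d3, d2, d1 = data
    p1 = d4 ^ d2 ^ d1   # covers positions 7, 5, 3, 1
    p2 = d4 ^ d3 ^ d1   # covers positions 7, 6, 3, 2
    p3 = d4 ^ d3 ^ d2   # covers positions 7, 6, 5, 4
    return [d4, d3, d2, p3, d1, p2, p1]

def hamming_check(code):
    # code is [b7, b6, b5, b4, b3, b2, b1]; returns the error position (0 = no error)
    b7, b6, b5, b4, b3, b2, b1 = code
    c1 = b7 ^ b5 ^ b3 ^ b1
    c2 = b7 ^ b6 ^ b3 ^ b2
    c3 = b7 ^ b6 ^ b5 ^ b4
    return c3 * 4 + c2 * 2 + c1   # decimal value of c3 c2 c1

code = hamming_encode((1, 0, 0, 0))
print(code)                        # [1, 0, 0, 1, 0, 1, 1]  ->  1001011
received = [1, 0, 0, 1, 1, 1, 1]   # b3 was flipped in transit
print(hamming_check(received))     # 3, so complement b3 to correct the error
```

Running the check on an undamaged codeword returns 0, which matches the rule that all check bits are zero when no error is present.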
***
UNIT-3 PRINCIPLES OF LOGIC CIRCUITS-I
LOGIC GATES:
Logic gates are the basic building blocks of any digital system. A logic gate is an
electronic circuit having one or more inputs and only one output. The relationship
between the input and the output is based on certain logic. Based on this logic, gates are
named AND gate, OR gate, NOT gate, etc.
A gate can be represented in three ways:
I. Graphical Symbols
II. Algebraic Notation
III. Truth Table
Fundamental Gates:-
1. AND Gate:
The AND gate is a digital logic gate with 'n' inputs and one output, which performs logical
conjunction on its inputs. The output of this gate is true only when all the
inputs are true. When one or more of the AND gate's inputs are false, the
output of the AND gate is false.
Logic diagram:-
Truth Table:-
2. OR Gate:
The OR gate is a digital logic gate with 'n' inputs and one output, which performs logical
disjunction on its inputs. The output of the OR gate is true
when one or more inputs are true. Only if all the inputs of the gate are false is the output of
the OR gate false.
Logic diagram:-
Truth Table:-
3. NOT Gate:
The NOT gate is a digital logic gate with one input and one output that performs an
inversion of the input. The output of the NOT gate is the reverse of the input: when the
input of the NOT gate is true, the output will be false, and vice versa.
Logic diagram:-
Truth Table:-
Universal/Derived Gates:-
These two gates (NAND and NOR) are called universal gates because the function of any
other gate can be derived by using only NAND gates or only NOR gates.
1. NAND Gate:
The NAND gate is a digital logic gate with 'n' inputs and one output, which performs the
operation of the AND gate followed by the operation of the NOT gate. A NAND gate is
designed by combining the AND and NOT gates. Only when all the inputs of the NAND gate
are high is the output of the gate low.
Logic diagram:-
Truth Table:-
2. NOR Gate:
The NOR gate is a digital logic gate with n inputs and one output, which performs the
operation of the OR gate followed by the NOT gate. A NOR gate is designed by combining
the OR and NOT gates. When any one of the inputs of the NOR gate is true, the output of
the NOR gate will be false.
Logic diagram:-
Truth Table:-
Special Gates:-
1. XOR Gate:
The Exclusive-OR gate is a digital logic gate with two inputs and one output. The short form
of this gate is Ex-OR. It is based on the operation of the OR gate, but its output is high only
when exactly one of its two inputs is high; when the inputs are equal, the output is low.
Logic diagram:-
Truth Table:-
2. XNOR Gate:
The Exclusive-NOR gate is a digital logic gate with two inputs and one output. The short
form of this gate is Ex-NOR. It is based on the operation of the NOR gate. When both
inputs of this gate are the same (both high or both low), the output of the EX-NOR gate will
be high. But if any one of the inputs is high (but not both), the output will be low.
Logic diagram:-
Truth Table:-
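The behaviour of all seven gates can be summarised in a small Python sketch that prints their combined two-input truth table (the function names are illustrative, and each gate is modeled on single-bit 0/1 inputs):

```python
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))   # AND followed by NOT
def NOR(a, b):  return NOT(OR(a, b))    # OR followed by NOT
def XOR(a, b):  return a ^ b            # high when the inputs differ
def XNOR(a, b): return NOT(XOR(a, b))   # high when the inputs are equal

# Combined truth table for the two-input gates
print("a b | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), OR(a, b), NAND(a, b),
              NOR(a, b), XOR(a, b), XNOR(a, b))
```

Note how the universal gates are literally built as AND/OR followed by NOT, mirroring their definitions in the text.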
BOOLEAN ALGEBRA:
Boolean Algebra is an algebra that deals with binary numbers and binary variables. Hence,
it is also called Binary Algebra or Logical Algebra. A mathematician named George Boole
developed this algebra in 1854. The variables used in this algebra are also called
Boolean variables.
The range of voltages corresponding to Logic ‘High’ is represented with ‘1’ and the range of
voltages corresponding to logic ‘Low’ is represented with ‘0’.
Boolean Postulates:-
Consider the binary numbers 0 and 1, a Boolean variable (x) and its complement (x'). A
Boolean variable or its complement is known as a literal. The four possible logical
OR operations among these literals and binary numbers are shown below.
x+0=x
x+1=1
x+x=x
x + x’ = 1
Similarly, the four possible logical AND operations among those literals and binary numbers
are shown below.
x.1 = x
x.0 = 0
x.x = x
x.x’ = 0
These are the simple Boolean postulates. We can verify these postulates easily, by
substituting the Boolean variable with ‘0’ or ‘1’.
Note− The complement of complement of any Boolean variable is equal to the variable
itself. i.e., (x’)’=x.
Basic Laws of Boolean Algebra:-
Following are the three basic laws of Boolean Algebra.
Commutative law
Associative law
Distributive law
Commutative Law:-
If a logical operation of two Boolean variables gives the same result irrespective of the
order of those two variables, then that logical operation is said to be commutative. The
logical OR & logical AND operations of two Boolean variables x & y are shown below.
x+y=y+x
x.y = y.x
The symbol '+' indicates the logical OR operation. Similarly, the symbol '.' indicates the
logical AND operation, and it is optional to write it. The commutative law holds for both
logical OR & logical AND operations.
Associative Law:-
If performing a logical operation on any two Boolean variables first and then performing the
same operation with the remaining variable gives the same result regardless of grouping,
then that logical operation is said to be associative. The logical OR & logical AND
operations of three Boolean variables x, y & z are shown below.
x + (y + z) = (x + y) + z
x.(y.z) = (x.y).z
The associative law holds for logical OR & logical AND operations.
Distributive Law:-
If any logical operation can be distributed to all the terms present in the Boolean function,
then that logical operation is said to be Distributive. The distribution of logical OR &
logical AND operations of three Boolean variables x, y & z are shown below.
x.(y + z) = x.y + x.z
x + (y.z) = (x + y).(x + z)
The distributive law holds for logical OR and logical AND operations.
These are the Basic laws of Boolean algebra. We can verify these laws easily, by substituting
the Boolean variables with ‘0’ or ‘1’.
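This substitution check can be automated: since each law involves only a handful of 0/1 assignments, all of them can be verified exhaustively. A minimal Python sketch (using `&` for the '.' operation and `|` for '+') might look like:

```python
# Exhaustively verify the basic laws over every 0/1 assignment of x, y, z.
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    assert (x | y) == (y | x) and (x & y) == (y & x)      # commutative law
    assert (x | (y | z)) == ((x | y) | z)                 # associative law (OR)
    assert (x & (y & z)) == ((x & y) & z)                 # associative law (AND)
    assert (x & (y | z)) == ((x & y) | (x & z))           # distributive law
    assert (x | (y & z)) == ((x | y) & (x | z))           # distributive law (dual)
print("all basic laws hold")
```

The second distributive law is the one that looks surprising from ordinary algebra, and the exhaustive check confirms it holds for all eight assignments.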
Theorems of Boolean Algebra:-
The following two theorems are used in Boolean algebra.
Duality theorem
DeMorgan’s theorem
Duality Theorem:-
This theorem states that the dual of a Boolean function is obtained by interchanging the
logical AND operator with the logical OR operator and zeros with ones. For every Boolean
function, there will be a corresponding dual function.
Let us make the Boolean equations (relations) that we discussed in the section of Boolean
postulates and basic laws into two groups. The following table shows these two groups.
Group1 Group2
x+0=x x.1 = x
x+1=1 x.0 = 0
x+x=x x.x = x
x + x’ = 1 x.x’ = 0
x + (y + z) = (x + y) + z x.(y.z) = (x.y).z
In each row, there are two Boolean equations and they are dual to each other. We can verify
all these Boolean equations of Group1 and Group2 by using duality theorem.
DeMorgan’s Theorem:-
This theorem is useful in finding the complement of a Boolean function. It states that the
complement of the logical OR of at least two Boolean variables is equal to the logical AND
of each complemented variable.
DeMorgan’s theorem with 2 Boolean variables x and y can be represented as
(x + y)’ = x’.y’
The dual of the above Boolean function is
(x.y)’ = x’ + y’
Therefore, the complement of logical AND of two Boolean variables is equal to the logical
OR of each complemented variable. Similarly, we can apply DeMorgan’s theorem for more
than 2 Boolean variables also.
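DeMorgan's theorem can likewise be confirmed by checking all four input pairs; a short Python sketch (complementing a bit x as 1 - x):

```python
# Verify both forms of DeMorgan's theorem for two variables.
for x in (0, 1):
    for y in (0, 1):
        x_, y_ = 1 - x, 1 - y                   # complements x' and y'
        assert (1 - (x | y)) == (x_ & y_)       # (x + y)' = x'.y'
        assert (1 - (x & y)) == (x_ | y_)       # (x.y)'  = x' + y'
print("DeMorgan's theorem holds")
```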
LOGIC CIRCUITS:
Logic circuits are circuits that simulate human mental processes; digital circuits are
logic circuits. Logic circuits use two different values of a physical quantity, usually voltage,
to represent the Boolean values true (or 1) and false (or 0). Logic circuits can have inputs and
they have one or more outputs that are, at least partially, dependent on their inputs. In logic
circuit diagrams, connections from one circuit's output to another circuit's input are often
shown with an arrowhead at the input end.
In terms of their behaviour, logic circuits are much like programming language functions or
methods. Their inputs are analogous to function parameters and their outputs are analogous
to function returned values. However, a logic circuit can have multiple outputs.
Types of logic circuits:-
1. Combinational logic circuits:-
Output depends only on its current inputs.
A combinational circuit may contain an arbitrary number of logic gates and inverters
but no feedback loops.
A feedback loop is a connection from the output of one gate back to the input of that
same gate.
The function of a combinational circuit, represented by a logic diagram, is formally
described using logic expressions and truth tables.
2. Sequential logic circuits:-
Output depends not only on the current inputs but also on the past sequences of inputs.
Sequential logic circuits contain combinational logic in addition to memory elements
formed with feedback loops.
The behaviour of sequential circuits is formally described with state transition tables
and diagrams.
COMBINATIONAL CIRCUITS:
Combinational circuit is a circuit in which we combine the different gates in the circuit, for
example encoder, decoder, multiplexer and demultiplexer. Some of the characteristics of
combinational circuits are following −
The output of a combinational circuit at any instant of time depends only on the levels
present at the input terminals.
A combinational circuit does not use any memory. The previous state of the input does not
have any effect on the present state of the circuit.
A combinational circuit can have n inputs and m outputs.
Half Adder:-
A half adder is a combinational logic circuit with two inputs and two outputs. The half adder
circuit is designed to add two single-bit binary numbers A and B. It is the basic building
block for the addition of two single-bit numbers. This circuit has two outputs: carry and sum.
Truth Table:-
Circuit Diagram:-
Full Adder:-
The full adder is developed to overcome the drawback of the half adder circuit. It can add two
one-bit numbers A and B, and a carry c. The full adder is a three-input, two-output
combinational circuit.
Truth Table:-
Circuit Diagram:-
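The standard half adder equations (Sum = A ⊕ B, Carry = A·B) and the usual construction of a full adder from two half adders and an OR gate can be sketched as:

```python
def half_adder(a, b):
    # sum = A XOR B, carry = A AND B
    return a ^ b, a & b

def full_adder(a, b, cin):
    # built from two half adders plus an OR gate on the two carries
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

print(half_adder(1, 1))      # (0, 1): sum 0, carry 1
print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 = 11 in binary
```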
Half Subtractors:-
A half subtractor is a combinational circuit with two inputs and two outputs (difference and
borrow). It produces the difference between the two binary bits at the input and also
produces an output (borrow) to indicate whether a 1 has been borrowed. In the subtraction
(A - B), A is called the minuend bit and B is called the subtrahend bit.
Truth Table:-
Circuit Diagram:-
Full Subtractors:-
The disadvantage of a half subtractor is overcome by the full subtractor. The full subtractor
is a combinational circuit with three inputs A, B, C and two outputs D and C'. A is the
minuend, B is the subtrahend, C is the borrow produced by the previous stage, D is the
difference output and C' is the borrow output.
Truth Table:-
Circuit Diagram:-
Multiplexers:-
A multiplexer is a special type of combinational circuit. There are n data inputs, one output
and m select inputs, with 2^m = n. It is a digital circuit which selects one of the n data inputs
and routes it to the output. The selection of one of the n inputs is done by the select inputs.
Depending on the digital code applied at the select inputs, one out of the n data sources is
selected and transmitted to the single output Y. E is called the strobe or enable input, which
is useful for cascading. It is generally an active-low terminal, meaning it will perform the
required operation when it is low.
2 : 1 multiplexer
4 : 1 multiplexer
16 : 1 multiplexer
32 : 1 multiplexer
Block Diagram:-
Truth Table:-
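A 4 : 1 multiplexer can be modelled in a few lines of Python. The strobe E is modelled active-low as in the text; the output when the device is disabled is assumed to be 0 here.

```python
def mux4(d, s1, s0, e=0):
    """4:1 multiplexer: routes one of d[0..3] to the output Y.
    e is the active-low enable/strobe input (0 = enabled)."""
    if e == 1:                 # strobe high: device disabled
        return 0               # assumed inactive output level
    return d[s1 * 2 + s0]      # select code s1 s0 picks the data line

data = [0, 1, 1, 0]            # d0, d1, d2, d3
print(mux4(data, 1, 0))        # select code 10 routes d2 -> 1
```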
Demultiplexers:-
A demultiplexer performs the reverse operation of a multiplexer, i.e. it receives one input and
distributes it over several outputs. It has only one input, n outputs and m select inputs. At a
time, only one output line is selected by the select lines and the input is transmitted to the
selected output line. A demultiplexer is equivalent to a single-pole multiple-way switch, as
shown in the fig.
Demultiplexers come in multiple variations.
1 : 2 demultiplexer
1 : 4 demultiplexer
1 : 16 demultiplexer
1 : 32 demultiplexer
Block diagram:-
Truth Table:-
Decoder:-
A decoder is a combinational circuit. It has n inputs and up to a maximum of m = 2^n
outputs. A decoder is identical to a demultiplexer without any data input. It performs
operations which are exactly opposite to those of an encoder.
Code converters
BCD to seven segment decoders
2 to 4 Line Decoder:-
The block diagram of the 2 to 4 line decoder is shown in the fig. A and B are the two inputs,
while D0 through D3 are the four outputs. The truth table explains the operation of the
decoder: it shows that each output is 1 for only a specific combination of inputs.
Truth Table:-
Logic Circuit:-
Encoder:-
An encoder is a combinational circuit which is designed to perform the inverse operation of
the decoder. An encoder has n input lines and m output lines. It produces an m-bit binary
code corresponding to the digital input number, i.e. it accepts an n-bit input digital word and
converts it into an m-bit digital word.
Priority encoders
Decimal to BCD encoder
Octal to binary encoder
Hexadecimal to binary encoder
Priority Encoder:-
This is a special type of encoder in which priority is given to the input lines. If two or more
input lines are 1 at the same time, the input line with the highest priority will be considered.
There are four inputs D0, D1, D2, D3 and two outputs Y0, Y1. Of the four inputs, D3 has the
highest priority and D0 has the lowest priority. That means if D3 = 1 then Y1 Y0 = 11,
irrespective of the other inputs. Similarly, if D3 = 0 and D2 = 1, then Y1 Y0 = 10,
irrespective of the other inputs.
Truth Table:-
Logic Circuit:-
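The priority behaviour can be sketched directly as a chain of tests, highest priority first. This is a behavioural sketch, not a gate-level model, and the function name is illustrative.

```python
def priority_encoder(d0, d1, d2, d3):
    # D3 has the highest priority, D0 the lowest; returns (Y1, Y0)
    if d3:
        return (1, 1)
    if d2:
        return (1, 0)
    if d1:
        return (0, 1)
    return (0, 0)          # only D0 active (or no input active)

print(priority_encoder(1, 1, 0, 1))   # D3 wins -> (1, 1)
print(priority_encoder(1, 0, 1, 0))   # D2 wins -> (1, 0)
```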
CANONICAL AND STANDARD FORMS:
We will get four Boolean product terms by combining two variables x and y with logical
AND operation. These Boolean product terms are called as min terms or standard product
terms. The min terms are x’y’, x’y, xy’ and xy.
Similarly, we will get four Boolean sum terms by combining two variables x and y with
logical OR operation. These Boolean sum terms are called as Max terms or standard sum
terms. The Max terms are x + y, x + y’, x’ + y and x’ + y’.
The following table shows the representation of min terms and Max terms for 2 variables.
x y Min term Max term
0 0 m0 = x'y' M0 = x + y
0 1 m1 = x'y M1 = x + y'
1 0 m2 = xy' M2 = x' + y
1 1 m3 = xy M3 = x' + y'
If the binary variable is '0', it is represented as the complement of the variable in the min
term and as the variable itself in the Max term. Similarly, if the binary variable is '1', it is
represented as the complement of the variable in the Max term and as the variable itself in
the min term. From the above table, we can easily notice that min terms and Max terms are
complements of each other. If there are 'n' Boolean variables, then there will be 2^n min
terms and 2^n Max terms.
Canonical SoP and PoS forms:-
A truth table consists of a set of inputs and output(s). If there are 'n' input variables, then
there will be 2^n possible combinations of zeros and ones, and the value of each output
variable depends on the combination of input variables. So, each output variable will have
'1' for some combinations of input variables and '0' for the others.
Therefore, we can express each output variable in following two ways.
Inputs Output
p q r f
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
Canonical SoP form:-
Here, the output (f) is '1' for four combinations of inputs. The corresponding min terms are
p'qr, pq'r, pqr' and pqr. By doing the logical OR of these four min terms, we will get the
Boolean function of the output (f).
Therefore, the Boolean function of output is, f = p’qr + pq’r + pqr’ + pqr. This is
the canonical SoP form of output, f. We can also represent this function in following two
notations.
f=m3+m5+m6+m7
f=∑m(3,5,6,7)
In one equation, we represented the function as sum of respective min terms. In other
equation, we used the symbol for summation of those min terms.
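The min-term notation can be checked programmatically: evaluating f over all 2^3 input rows and collecting the rows where it is 1 should reproduce Σm(3, 5, 6, 7). A Python sketch:

```python
from itertools import product

def f(p, q, r):
    # f = p'qr + pq'r + pqr' + pqr, from the truth table above
    return ((1 - p) & q & r) | (p & (1 - q) & r) | (p & q & (1 - r)) | (p & q & r)

# Row index i corresponds to the input combination (p, q, r) read as binary
minterms = [i for i, (p, q, r) in enumerate(product((0, 1), repeat=3)) if f(p, q, r)]
print(minterms)   # [3, 5, 6, 7], i.e. f = Σm(3, 5, 6, 7)
```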
Canonical PoS form:-
Canonical PoS form means Canonical Product of Sums form. In this form, each sum term
contains all literals. So, these sum terms are nothing but the Max terms. Hence, canonical
PoS form is also called as product of Max terms form.
First, identify the Max terms for which, the output variable is zero and then do the logical
AND of those Max terms in order to get the Boolean expression (function) corresponding to
that output variable. This Boolean function will be in the form of product of Max terms.
Follow the same procedure for other output variables also, if there is more than one output
variable.
Example
Consider the same truth table of previous example. Here, the output (f) is ‘0’ for four
combinations of inputs. The corresponding Max terms are p + q + r, p + q + r’, p + q’ + r, p’
+ q + r. By doing logical AND of these four Max terms, we will get the Boolean function of
output (f).
Therefore, the Boolean function of output is, f = (p + q + r).(p + q + r’).(p + q’ + r).(p’ + q +
r). This is the canonical PoS form of output, f. We can also represent this function in
following two notations.
f=M0.M1.M2.M4
f=∏M(0,1,2,4)
In one equation, we represented the function as product of respective Max terms. In other
equation, we used the symbol for multiplication of those Max terms.
The Boolean function, f = (p + q + r).(p + q + r’).(p + q’ + r).(p’ + q + r) is the dual of the
Boolean function, f = p’qr + pq’r + pqr’ + pqr.
Therefore, both canonical SoP and canonical PoS forms are Dual to each other.
Functionally, these two forms are same. Based on the requirement, we can use one of these
two forms.
Standard SoP and PoS forms:-
We discussed two canonical forms of representing the Boolean output(s). Similarly, there
are two standard forms of representing the Boolean output(s). These are the simplified
version of canonical forms.
MINIMIZATION OF GATES
We can simplify the Boolean functions using Boolean postulates and theorems. It is a time
consuming process and we have to re-write the simplified expressions after each step.
To overcome this difficulty, Karnaugh introduced a method for simplifying Boolean
functions in an easy way. This method is known as the Karnaugh map method or K-map
method. It is a graphical method, which consists of 2^n cells for 'n' variables. Adjacent
cells differ in only a single bit position.
K-Maps for 2 to 5 Variables:-
K-Map method is most suitable for minimizing Boolean functions of 2 variables to 5
variables. Now, let us discuss about the K-Maps for 2 to 5 variables one by one.
2 Variable K-Map:-
The number of cells in 2 variable K-map is four, since the number of variables is two. The
following figure shows 2 variable K-Map.
Here, we got three prime implicants: WX', WY & YZ'. All these prime implicants
are essential for the following reasons.
Two ones (m8 & m9) of fourth row grouping are not covered by any other groupings.
Only fourth row grouping covers those two ones.
Single one (m15) of square shape grouping is not covered by any other groupings. Only
the square shape grouping covers that one.
Two ones (m2 & m6) of fourth column grouping are not covered by any other groupings.
Only fourth column grouping covers those two ones.
Therefore, the simplified Boolean function is
f = WX’ + WY + YZ’
Follow these rules for simplifying K-maps in order to get standard product of sums form.
Select the respective K-map based on the number of variables present in the Boolean
function.
If the Boolean function is given as product of Max terms form, then place the zeroes at
respective Max term cells in the K-map. If the Boolean function is given as product of
sums form, then place the zeroes in all possible cells of K-map for which the given sum
terms are valid.
Check for the possibilities of grouping the maximum number of adjacent zeroes. The group
sizes must be powers of two. Start from the highest power of two and go down to the least
power of two. The highest power is equal to the number of variables considered in the
K-map and the least power is zero.
Each grouping will give either a literal or one sum term. It is known as a prime implicant.
A prime implicant is said to be an essential prime implicant if at least one '0' in it is
covered by that grouping alone and by no other grouping.
Note down all the prime implicants and essential prime implicants. The simplified
Boolean function contains all essential prime implicants and only the required prime
implicants.
Note − If don't care terms are also present, then place don't cares 'x' in the respective cells
of the K-map. Consider only the don't cares 'x' that are helpful for grouping the maximum
number of adjacent zeroes. In those cases, treat the don't care value as '0'.
Example:-
Let us simplify the following Boolean
function, f(X,Y,Z) = ∏M(0,1,2,4), using a K-map.
The given Boolean function is in product of Max terms form. It has 3 variables X, Y &
Z. So, we require a 3 variable K-map. The given Max terms are M0, M1, M2 & M4. The
3 variable K-map with zeroes corresponding to the given Max terms is shown in the
following figure.
There are no possibilities of grouping either 8 adjacent zeroes or 4 adjacent zeroes. There are
three possibilities of grouping 2 adjacent zeroes. After these three groupings, there is no
single zero left as ungrouped. The 3 variable K-map with these three groupings is shown
in the following figure.
Here, we got three prime implicants: X + Y, Y + Z & Z + X. All these prime implicants
are essential because one zero in each grouping is not covered by any other grouping
except its own.
Therefore, the simplified Boolean function is
f = (X + Y).(Y + Z).(Z + X)
In this way, we can easily simplify Boolean functions of up to 5 variables using the K-map
method. For more than 5 variables, it is difficult to simplify the functions using K-maps,
because the number of cells in the K-map doubles with each new variable.
Due to this, checking and grouping of adjacent ones (min terms) or adjacent zeros (Max
terms) becomes complicated. We will discuss the Tabular method in the next chapter to
overcome the difficulties of the K-map method.
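Although the grouping itself is graphical, the result of a K-map simplification can always be verified by brute force. This sketch confirms that (X + Y).(Y + Z).(Z + X) agrees with ∏M(0, 1, 2, 4) on all eight input rows:

```python
from itertools import product

def original(x, y, z):
    # f = ΠM(0, 1, 2, 4): the output is 0 exactly on rows 0, 1, 2 and 4
    return 0 if (x * 4 + y * 2 + z) in (0, 1, 2, 4) else 1

def simplified(x, y, z):
    # the three essential prime implicants found on the K-map
    return (x | y) & (y | z) & (z | x)

for x, y, z in product((0, 1), repeat=3):
    assert simplified(x, y, z) == original(x, y, z)
print("simplification verified on all 8 rows")
```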
Here, the 3 to 8 decoder generates eight min terms. The two programmable OR gates have
access to all these min terms. But only the required min terms are programmed in order to
produce the respective Boolean functions by each OR gate. The symbol 'X' is used for
programmable connections.
Programmable Logic Array (PLA):-
PLA is a programmable logic device that has both Programmable AND array &
Programmable OR array. Hence, it is the most flexible PLD. The block diagram of PLA is
shown in the following figure.
Here, the inputs of AND gates are programmable. That means each AND gate has both
normal and complemented inputs of variables. So, based on the requirement, we can
program any of those inputs. So, we can generate only the required product terms by using
these AND gates.
Here, the inputs of the OR gates are also programmable. So, we can program any number of
required product terms, since all the outputs of the AND gates are applied as inputs to each
OR gate. Therefore, the outputs of the PLA will be in sum of products form.
Example:-
Let us implement the following Boolean functions using PLA.
A = XY + XZ′
B = XY′ + YZ + XZ′
The given two functions are in sum of products form. The numbers of product terms present
in the given Boolean functions A & B are two and three respectively. One product
term, XZ′, is common to both functions.
So, we require four programmable AND gates & two programmable OR gates for producing
those two functions. The corresponding PLA is shown in the following figure.
The programmable AND gates have access to both the normal and complemented inputs
of the variables. In the above figure, the inputs X, X′, Y, Y′, Z & Z′ are available at the
inputs of each AND gate. So, program only the required literals in order to generate one
product term with each AND gate.
All these product terms are available at the inputs of each programmable OR gate. But,
only program the required product terms in order to produce the respective Boolean
functions by each OR gate. The symbol ‘X’ is used for programmable connections.
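The AND-plane/OR-plane structure of this PLA example can be sketched as ordinary Python expressions, with one line per programmed product term (the shared term XZ′ is computed once, just as the shared AND gate is):

```python
def pla(x, y, z):
    # programmable AND plane: the four product terms
    t1 = x & y            # XY
    t2 = x & (1 - z)      # XZ' (shared by A and B)
    t3 = x & (1 - y)      # XY'
    t4 = y & z            # YZ
    # programmable OR plane: sum only the required terms per output
    A = t1 | t2
    B = t3 | t4 | t2
    return A, B

print(pla(1, 1, 0))       # XY and XZ' both true -> (1, 1)
```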
***
The combinational circuit does not use any memory; hence the previous state of the input
does not have any effect on the present state of the circuit. But a sequential circuit has
memory, so its output can vary based on the input. This type of circuit uses previous inputs,
outputs, a clock and a memory element.
Flip Flop:-
A flip-flop is a sequential circuit which generally samples its inputs and changes its outputs
only at particular instants of time and not continuously. A flip-flop is said to be edge-sensitive
or edge-triggered, rather than level-triggered like latches.
S-R Flip-Flop :-
It is basically an S-R latch using NAND gates with an additional enable input. It is also
called a level-triggered SR flip-flop. For this circuit, a change in output will take place if
and only if the enable input (E) is made active.
In short, this circuit will operate as an S-R latch if E = 1, but there is no change in the output
if E = 0.
Circuit Diagram:-
Truth Table:-
Operation:-
Hence R' and S' both will be equal to 1. Since S' and R' are the
input of the basic S-R latch using NAND gates, there will be
no change in the state of outputs.
Circuit Diagram:-
Truth Table:-
Operation:-
1 J = K = 0 (No change) When clock = 0, the slave becomes active and master is
inactive. But since the S and R inputs have not changed, the
slave outputs will also remain unchanged. Therefore outputs
will not change if J = K =0.
Circuit Diagram:-
Truth Table:-
Operation:-
3 E = 1 and D = 1 If E = 1 and D = 1, then S = 1 and R = 0. This will set the latch and
Qn+1 = 1 and Qn+1 bar = 0 irrespective of the present state.
Symbol Diagram:-
Block Diagram:-
Truth Table:-
Operation:-
Edge-Triggered Flip-Flop:-
An edge-triggered flip-flop changes state either at the positive edge (rising edge) or at the
negative edge (falling edge) of the clock pulse on the control input. The three basic types are
introduced here: S-R, J-K and D.
The basic operation is illustrated below, along with the truth table for this type of flip-flop.
The operation and truth table for a negative edge-triggered flip-flop are the same as those for
a positive edge-triggered flip-flop, except that the falling edge of the clock pulse is the
triggering edge.
As S = 1, R = 0. Flip-flop SETS
on the rising clock edge.
Note that the S and R inputs can be changed at any time when the clock input is LOW or
HIGH (except for a very short interval around the triggering transition of the clock) without
affecting the output. This is illustrated in the timing diagram below:
The J-K flip-flop works very similarly to the S-R flip-flop. The only difference is that this
flip-flop has NO invalid state. The outputs toggle (change to the opposite state) when both
the J and K inputs are HIGH. The truth table is shown below.
Edge-triggered D flip-flop:-
The operation of a D flip-flop is much simpler. It has only one input in addition to the
clock. It is very useful when a single data bit (0 or 1) is to be stored. If there is a HIGH on
the D input when a clock pulse is applied, the flip-flop SETs and stores a 1. If there is a
LOW on the D input when a clock pulse is applied, the flip-flop RESETs and stores a 0. The
truth table below summarizes the operation of the positive edge-triggered D flip-flop. As
before, the negative edge-triggered flip-flop works the same, except that the falling edge of
the clock pulse is the triggering edge.
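The edge-triggered behaviour, sampling D only on a rising clock transition, can be modelled with a small Python class (a behavioural sketch, not a gate-level circuit; the class name is illustrative):

```python
class DFlipFlop:
    """Positive edge-triggered D flip-flop: Q samples D only on a
    0 -> 1 clock transition and holds its value otherwise."""
    def __init__(self):
        self.q = 0
        self._last_clk = 0

    def tick(self, clk, d):
        if self._last_clk == 0 and clk == 1:   # rising edge detected
            self.q = d
        self._last_clk = clk
        return self.q

ff = DFlipFlop()
print(ff.tick(1, 1))   # rising edge with D=1 -> Q SETs to 1
print(ff.tick(0, 0))   # no edge -> Q holds 1 even though D=0
print(ff.tick(1, 0))   # rising edge with D=0 -> Q RESETs to 0
```

For a negative edge-triggered model, only the edge test would change (1 -> 0 instead of 0 -> 1).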
EXCITATION TABLE
The truth table of a flip-flop specifies the next state when the input and the present state are
known. During the design of sequential circuits, we know the required transition from the
present state to the next state and need to find the flip-flop input conditions that will cause
the required transition. For this reason we need a table that lists the required input
combinations for a given change of state. Such a table is called a flip-flop excitation table.
Excitation Table
DIGITAL REGISTERS
A flip-flop is a 1-bit memory cell which can be used for storing digital data. To increase the
storage capacity in terms of the number of bits, we have to use a group of flip-flops. Such a
group of flip-flops is known as a register. An n-bit register consists of n flip-flops
and is capable of storing an n-bit word.
The binary data in a register can be moved within the register from one flip-flop to another.
The registers that allow such data transfers are called shift registers. There are four
modes of operation of a shift register.
Block Diagram:-
Operation:-
Before application of the clock signal, let Q3 Q2 Q1 Q0 = 0000 and apply the LSB of the
number to be entered to Din. So Din = D3 = 1. Apply the clock. On the first falling edge of
the clock, FF-3 is set, and the stored word in the register is Q3 Q2 Q1 Q0 = 1000.
Apply the next bit to Din. So Din = 1. As soon as the next negative edge of the clock hits,
FF-2 will set and the stored word changes to Q3 Q2 Q1 Q0 = 1100.
Apply the next bit to be stored, i.e. 1, to Din and apply the clock pulse. As soon as the third
negative clock edge hits, FF-1 will be set and the output will be modified to
Q3 Q2 Q1 Q0 = 1110.
Similarly, with Din = 1 and the fourth negative clock edge arriving, the stored word in
the register is Q3 Q2 Q1 Q0 = 1111.
Truth Table:-
Waveforms:-
As soon as the data loading is completed, all the flip-flops contain their required data and
the outputs are enabled, so that all the loaded data is made available on all the output
lines at the same time.
Four clock cycles are required to load a four-bit word; hence the speed of operation of SIPO
mode is the same as that of SISO mode.
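The serial-loading sequence above (1000 → 1100 → 1110 → 1111) can be reproduced with a one-line shift model in Python:

```python
def shift_in(register, din):
    # one negative clock edge: the new bit enters at Q3 and every
    # stored bit moves one place to the right (register is [Q3, Q2, Q1, Q0])
    return [din] + register[:-1]

reg = [0, 0, 0, 0]             # Q3 Q2 Q1 Q0 before any clock edge
for bit in (1, 1, 1, 1):       # serial data applied to Din, one bit per edge
    reg = shift_in(reg, bit)
    print(reg)
# prints [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]
```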
Block Diagram:-
The circuit shown below is a four bit parallel input serial output register.
The output of the previous flip-flop is connected to the input of the next one via a
combinational circuit.
The binary input word B0, B1, B2, B3 is applied through the same combinational circuit.
There are two modes in which this circuit can work, namely shift mode and load mode.
Load mode:-
When the shift/load bar line is low (0), AND gates 2, 4 and 6 become active and pass bits
B1, B2, B3 to the corresponding flip-flops. On the falling edge of the clock, the binary
input B0, B1, B2, B3 gets loaded into the corresponding flip-flops. Thus parallel loading
takes place.
Shift mode:-
When the shift/load bar line is high (1), AND gates 2, 4 and 6 become inactive. Hence
parallel loading of the data becomes impossible. But AND gates 1, 3 and 5 become active,
so the data is shifted from left to right bit by bit on application of clock pulses. Thus
the parallel-in serial-out operation takes place.
Block Diagram:-
Parallel Input Parallel Output (PIPO) :-
In this mode, the 4-bit binary input B0, B1, B2, B3 is applied to the data inputs D0, D1, D2,
D3 respectively of the four flip-flops. As soon as a negative clock edge is applied, the input
binary bits are loaded into the flip-flops simultaneously, and the loaded bits appear
simultaneously on the output side. Only one clock pulse is needed to load all the bits.
Block Diagram:-
Hence, if we want to use the shift register to multiply and divide the given binary number,
then we should be able to move the data in either the left or right direction.
Such a register is called a bi-directional register. A four-bit bi-directional shift register is
shown in the fig. There are two serial inputs, namely the serial right-shift data input DR and
the serial left-shift data input DL, along with a mode select input (M).
Block Diagram:-
Operation:-
1 With M = 1 − Shift right operation If M = 1, then the AND gates 1, 3, 5 and 7 are
enabled whereas the remaining AND gates 2, 4,
6 and 8 will be disabled.
2 With M = 0 − Shift left operation When the mode control M is connected to 0 then
the AND gates 2, 4, 6 and 8 are enabled while 1,
3, 5 and 7 are disabled.
Parallel loading
Left shifting
Right shifting
The mode control input is connected to logic 1 for parallel loading operation whereas it is
connected to 0 for serial shifting. With mode control pin connected to ground, the universal
shift register acts as a bi-directional register. For the serial left-shift operation, the input is
applied to the serial input which goes to AND gate-1 shown in the figure, whereas for the
shift-right operation, the serial input is applied to the D input.
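The mode-controlled shifting can be modelled in a few lines (a behavioural sketch; the name `bidir_step` and the list representation are illustrative assumptions):

```python
def bidir_step(q, m, dr=0, dl=0):
    """One clock edge of a 4-bit bi-directional register.

    m = 1: shift right -- serial input DR enters at the left end.
    m = 0: shift left  -- serial input DL enters at the right end.
    """
    if m == 1:
        return [dr] + q[:3]   # right shift: every bit moves one place right
    return q[1:] + [dl]       # left shift: every bit moves one place left

q = [1, 0, 0, 0]
q = bidir_step(q, m=1, dr=0)   # shift right
print(q)                       # [0, 1, 0, 0]
q = bidir_step(q, m=0, dl=1)   # shift left
print(q)                       # [1, 0, 0, 1]
```

Setting M simply selects which neighbour feeds each flip-flop, which is what the two banks of AND gates implement in hardware.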
Block Diagram:-
DIGITAL COUNTERS
A counter is a sequential circuit. A digital circuit used for counting pulses is known as a
counter. Counters are the most widely used application of flip-flops. A counter is a group of
flip-flops with a clock signal applied. Counters are of two types.
Logical Diagram:-
Operation:-
S.N. Condition Operation
1 Initially Let both the FFs be in the reset state, i.e. QBQA = 00.
2 After 1st negative clock edge As soon as the first negative clock
edge is applied, FF-A toggles and
QA becomes 1 (QBQA = 01).
3 After 2nd negative clock edge On the arrival of the 2nd negative clock
edge, QA toggles from 1 to 0; this falling
edge clocks FF-B, so QB becomes 1 (QBQA = 10).
4 After 3rd negative clock edge On the arrival of the 3rd negative clock
edge, FF-A toggles again and
QA becomes 1 from 0 (QBQA = 11).
5 After 4th negative clock edge On the arrival of the 4th negative clock
edge, QA toggles from 1 to 0, clocking FF-B
so that QB also becomes 0. The counter
returns to QBQA = 00.
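The toggling sequence of this 2-bit ripple counter can be simulated directly (a minimal sketch; `ripple_count` is an illustrative name):

```python
def ripple_count(n_pulses):
    """2-bit ripple counter built from two toggle flip-flops.

    FF-A toggles on every negative clock edge; FF-B is clocked by the
    negative (falling) edge of QA, so it toggles only when QA goes 1 -> 0.
    """
    qa = qb = 0
    states = []
    for _ in range(n_pulses):
        old_qa = qa
        qa ^= 1                       # FF-A toggles on every clock edge
        if old_qa == 1 and qa == 0:   # falling edge of QA clocks FF-B
            qb ^= 1
        states.append((qb, qa))
    return states

print(ripple_count(4))  # [(0, 1), (1, 0), (1, 1), (0, 0)]
```

The printed sequence QBQA = 01, 10, 11, 00 matches the operation table above: the counter counts 1, 2, 3 and then wraps to 0.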
Synchronous counters :-
If the "clock" pulses are applied to all the flip-flops in a counter simultaneously, then such a
counter is called as synchronous counter.
The JA and KA inputs of FF-A are tied to logic 1. So FF-A will work as a toggle flip-flop.
The JB and KB inputs are connected to QA.
Logical Diagram:-
Operation:-
1 Initially Let both the FFs be in the reset state, i.e. QBQA = 00.
2 After 1st negative clock edge As soon as the first negative clock
edge is applied, FF-A toggles and
QA changes from 0 to 1 (QBQA = 01).
3 After 2nd negative clock edge Since QA was 1, JB = KB = 1, so on this
edge QA changes from 1 to 0 and QB
changes from 0 to 1 (QBQA = 10).
4 After 3rd negative clock edge Since QA was 0, FF-B does not toggle;
only QA changes from 0 to 1 (QBQA = 11).
5 After 4th negative clock edge On application of the next clock pulse,
QA changes from 1 to 0 and QB also
changes from 1 to 0 (QBQA = 00).
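The synchronous version, where both flip-flops share one clock and JB = KB = QA as described above, can be sketched the same way (`sync_count` is an illustrative name):

```python
def sync_count(n_pulses):
    """2-bit synchronous counter: both JK flip-flops share the clock.

    JA = KA = 1, so FF-A toggles on every edge.
    JB = KB = QA, so FF-B toggles only when QA was 1 before the edge.
    """
    qa = qb = 0
    states = []
    for _ in range(n_pulses):
        toggle_b = (qa == 1)   # JB = KB = QA, sampled BEFORE the edge
        qa ^= 1
        if toggle_b:
            qb ^= 1
        states.append((qb, qa))
    return states

print(sync_count(4))  # [(0, 1), (1, 0), (1, 1), (0, 0)]
```

The count sequence is identical to the ripple counter's, but here both flip-flops change on the same clock edge instead of one triggering the other, which is the defining property of a synchronous counter.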
Classification of counters:-
Depending on the way in which the counting progresses, the synchronous or asynchronous
counters are classified as follows −
Up counters
Down counters
Up/Down counters
UP/DOWN Counter
Up counter and down counter is combined together to obtain an UP/DOWN counter. A mode
control (M) input is also provided to select either up or down mode. A combinational circuit
is required to be designed and used between each pair of flip-flop in order to achieve the
up/down operation.
UP counting mode (M=0) − The Q output of the preceding FF is connected to the clock of
the next stage if up counting is to be achieved. For this mode, the mode select input M is at
logic 0 (M=0).
DOWN counting mode (M=1) − If M = 1, then the Q bar output of the preceding FF is
connected to the next FF. This will operate the counter in the down counting mode.
Example:
For a ripple up counter, the Q output of preceding FF is connected to the clock input of the
next one.
For a ripple down counter, the Q bar output of preceding FF is connected to the clock input
of the next one.
Let the selection of Q and Q bar output of the preceding FF be controlled by the mode
control input M such that, If M = 0, UP counting. So connect Q to CLK. If M = 1, DOWN
counting. So connect Q bar to CLK.
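This Q / Q-bar selection by the mode input can be modelled for a 2-bit ripple UP/DOWN counter (an illustrative sketch, not a gate-level model):

```python
def updown_count(n_pulses, m):
    """2-bit ripple UP/DOWN counter.

    The mode input M selects which output of FF-A clocks FF-B:
    M = 0 -> Q (up counting), M = 1 -> Q-bar (down counting).
    FF-B toggles on the falling edge of the selected signal.
    """
    qa = qb = 0
    counts = []
    for _ in range(n_pulses):
        old = qa if m == 0 else 1 - qa   # selected clock source, before the edge
        qa ^= 1                          # FF-A toggles on every clock pulse
        new = qa if m == 0 else 1 - qa   # selected clock source, after the edge
        if old == 1 and new == 0:        # negative edge clocks FF-B
            qb ^= 1
        counts.append(qb * 2 + qa)       # counter value as a number
    return counts

print(updown_count(4, m=0))  # up:   [1, 2, 3, 0]
print(updown_count(4, m=1))  # down: [3, 2, 1, 0]
```

Swapping the clock source from Q to Q-bar is the only difference between the two modes, which is why a single combinational circuit between stages suffices.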
Block Diagram
Truth Table
Operation
1 Case 1 − With M = 0 (Up counting mode) If M = 0 and M bar = 1, then the AND
gates 1 and 3 in fig. will be enabled
whereas the AND gates 2 and 4 will be
disabled.
2 Case 2: With M = 1 (Down counting mode) If M = 1, then AND gates 2 and 4 in fig.
are enabled whereas the AND gates 1 and
3 are disabled.
Applications of counters:-
Frequency counters
Digital clock
Time measurement
A to D converter
Frequency divider circuits
Digital triangular wave generator.
***
Access time in RAM is independent of the address, that is, each storage location inside the
memory is as easy to reach as other locations and takes the same amount of time. Data in the
RAM can be accessed randomly but it is very expensive.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a
power failure. Hence, a backup Uninterruptible Power System (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can
hold.
Static RAM (SRAM)
Advantages of SRAM:
Long life
No need to refresh
Faster
Used as cache memory
Disadvantages of SRAM:
Large size
Expensive
High power consumption
Dynamic RAM (DRAM)
DRAM, unlike SRAM, must be continually refreshed in order to maintain the data. This is
done by placing the memory on a refresh circuit that rewrites the data several hundred times
per second. DRAM is used for most system memory as it is cheap and small. All DRAMs are
made up of memory cells, which are composed of one capacitor and one transistor.
Advantages of ROM
The advantages of ROM are as follows −
Non-volatile in nature
Cannot be accidentally changed
Cheaper than RAMs
Easy to test
More reliable than RAMs
Static and do not require refreshing
Contents are always known and can be verified
SECONDARY MEMORY
This type of memory is also known as external memory or non-volatile. It is slower than the
main memory. These are used for storing data/information permanently. CPU directly does
not access these memories, instead they are accessed via input-output routines. The contents
of secondary memories are first transferred to the main memory, and then the CPU can access
it. For example, disk, CD-ROM, DVD, etc.
Flash memory, a form of semiconductor memory, is widely used for many electronic data
storage applications.
Although first developed in the 1980s, the use of flash memory has grown rapidly in recent
years as it forms the basis of many memory products.
Flash memory can be seen in many forms today, including USB memory sticks and
digital camera memory cards in the form of CompactFlash or Secure Digital (SD) memory. In
addition, flash memory storage is used in many other items, from MP3 players to
mobile phones, and in many other applications.
There are also different flash memory types and these different types are each suited to their
own applications.
Flash memory storage is a form of non-volatile memory that was born out of a combination
of the traditional EPROM and E2PROM.
In essence it uses the same method of programming as the standard EPROM and the erasure
method of the E2PROM.
One of the main advantages that flash memory has when compared to EPROM is its ability to
be erased electrically. However it is not possible to erase each cell in a flash memory
individually unless a large amount of additional circuitry is added into the chip. This would
add significantly to the cost and accordingly most manufacturers dropped this approach in
favour of a system whereby the whole chip, or a large part of it is block or flash erased -
hence the name.
Today most flash memory chips have selective erasure, allowing parts or sectors of the flash
memory to be erased. However any erasure still means that a significant section of the chip
has to be erased.
As with any technology there are various advantages and disadvantages. It is necessary to
consider all of these when determining the optimum type of memory to be used.
There are two basic types of Flash memory. Although they use the same basic technology,
the way they are addressed for reading and writing is slightly different. The two flash
memory types are:
1. NAND Flash memory: NAND Flash memories have a different structure to NOR
memories. This type of flash memory is accessed much like block devices such as hard
disks. When NAND Flash memories are to be read, the contents must first be paged into
memory-mapped RAM. This makes the presence of a memory management unit essential.
2. NOR Flash memory: NOR Flash memory is able to read individual flash memory cells,
and as such it behaves like a traditional ROM in this mode. For the erase and write
functions, commands are written to the first page of the mapped memory, as defined in
"common flash interface" created by Intel.
NAND / NOR tradeoff: NAND Flash memories and NOR Flash memories can be used for
different applications. However some systems will use a combination of both types of Flash
memory. The NOR memory type is used as ROM and the NAND memory is partitioned with
a file system and used as a random access storage area.
The hard disk drive is the main, and usually largest, data storage hardware device in a
computer.
The operating system, software titles, and most other files are stored in the hard disk drive.
HDD (abbreviation), hard drive, hard disk, fixed drive, fixed disk, fixed disk drive
The hard drive is sometimes referred to as the "C drive" due to the fact that Microsoft
Windows designates the "C" drive letter to the primary partition on the primary hard drive in
a computer by default.
While this is not a technically correct term to use, it is still common. For example, some
computers have multiple drive letters (e.g. C, D, E) representing areas across one or more
hard drives.
A hard drive is usually the size of a paperback book but much heavier.
The sides of the hard drive have pre-drilled, threaded holes for easy mounting in the 3.5-inch
drive bay in the computer case. Mounting is also possible in a larger 5.25-inch drive bay with
an adapter. The hard drive is mounted so the end with the connections faces inside the
computer.
The back end of the hard drive contains a port for a cable that connects to the motherboard.
The type of cable used will depend on the type of drive but is almost always included with a
hard drive purchase. Also here is a connection for power from the power supply.
Most hard drives also have jumper settings on the back end that define how the motherboard
is to recognize the drive when more than one is present. These settings vary from drive to
drive so check with your hard drive manufacturer for details.
OPTICAL MEMORIES
In Optical Memory, data is stored on an optical medium (i.e., CD-ROM or DVD), and read
with a laser beam. While not currently practical for use in computer processing, optical
memory is an ideal solution for storing large quantities of data very inexpensively, and more
importantly, transporting that data between computer devices.
I. CD:-
circular discs
4.75 in (12 cm) in diameter
developed by Philips and Sony in 1980
Initially for audio
1985 CD-ROM (Compact Disc Read Only Memory)
can hold 700 MB (80 min audio), roughly 500 floppy disks or 200,000 pages of text.
Advantages of CD-ROM:
o Large Storage Capacity
o Portability
o Sturdiness
Disadvantages of CD-ROM:
o cannot be updated
o access time longer
II.DVD:-
Digital Versatile Disk (Formerly Digital Video Disk)
More capacity than CDs while having the same dimensions.
developed by Philips, Sony, Toshiba, and Panasonic in 1995.
An extremely high capacity compact disc capable of storing from 4.7 GB to 17
GB
III. Blu-ray Disc:-
Blu-ray Disc (official abbreviation BD) is an optical disc storage medium
designed to replace the DVD format.
The standard physical medium is a 12 cm plastic optical disc, the same size as
DVDs and CDs.
Blu-Ray Discs contain 25 GB per layer, with dual layer discs (50 GB) the
norm for feature-length video discs and additional layers possible later.
CCDs
Stands for "Charge-Coupled Device." CCDs are sensors used in digital cameras and video
cameras to record still and moving images. The CCD captures light and converts it to digital
data that is recorded by the camera. For this reason, a CCD is often considered the digital
version of film.
The quality of an image captured by a CCD depends on the resolution of the sensor. In digital
cameras, the resolution is measured in megapixels (millions of pixels). Therefore, an
8MP digital camera can capture twice as much information as a 4MP camera. The result is a
larger photo with more detail.
CCDs in video cameras are usually measured by physical size. For example, most consumer
digital cameras use a CCD around 1/6 or 1/5 of an inch in size. More expensive cameras may
have CCDs 1/3 of an inch in size or larger. The larger the sensor, the more light it can
capture, meaning it will produce better video in low light settings. Professional digital video
cameras often have three sensors, referred to as "3CCD," which use separate CCDs for
capturing red, green, and blue hues.
BUBBLE MEMORY
Bubble memory is a type of non-volatile computer memory that uses a thin film of a
magnetic material to hold small magnetized areas, known as bubbles or domains, each storing
one bit of data. Andrew Bobeck invented the Bubble Memory in 1970. His development of
the magnetic core memory and the development of the twistor memory put him in a good
position for the development of Bubble Memory.
It is conceptually a stationary disk with spinning bits. The unit, only a couple of square inches
in size, contains a thin film magnetic recording layer. Globular-shaped bubbles (bits) are
electromagnetically generated in circular strings inside this layer. In order to read or write the
bubbles, they are rotated past the equivalent of a read/write head.
One of the limitations of bubble memory was its slow access. A large bubble memory would
require large loops, so accessing a bit requires cycling through a huge number of other bits
first.
Levels of RAID:
RAID 0:-
Lack of data redundancy means there is no failover support with this configuration.
In the diagram, the odd blocks are written to disk 0 and the even blocks to disk 1 such that
A1, A2, A3, A4, … would be the order of blocks read if read sequentially from the
beginning.
RAID 0 analysis:-
Failure Rate:
Performance:
The fragments are written to their respective disks simultaneously on the same sector.
This allows smaller sections of the entire chunk of data to be read off the drive in
parallel, hence good performance.
RAID 1:-
Two copies of the data are held on two physical disks, and the data is always identical.
Twice as many disks are required to store the same data when compared to RAID 0.
Failure Rate:
If Pr(disk fail) = 5%, then the probability of both drives failing in a 2-disk array is
P(both fail) = (0.05)^2 = 0.0025, i.e. 0.25%.
Performance:
If we use independent disk controllers for each disk, then we can increase the read or
write speeds by doing operations in parallel.
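The failure-rate arithmetic above can be checked in a few lines (assuming independent per-disk failures at 5%):

```python
p_disk = 0.05                      # per-disk failure probability (5%)

# RAID 0: any single disk failure loses data (striping, no redundancy).
p_raid0 = 1 - (1 - p_disk) ** 2

# RAID 1: data is lost only if BOTH mirrored disks fail.
p_raid1 = p_disk ** 2

print(f"RAID 0 loses data with probability {p_raid0:.2%}")
print(f"RAID 1 loses data with probability {p_raid1:.2%}")
```

RAID 0 comes out at 9.75% and RAID 1 at 0.25%, showing why mirroring buys reliability at the cost of doubling the number of disks.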
RAID 5:-
RAID 5 is an ideal combination of good performance, good fault tolerance and high capacity
and storage efficiency.
An arrangement of parity and CRC to help rebuilding drive data in case of disk failures.
RAID 5 analysis :-
MTBF is slightly better than RAID 0, because the failure of a single disk does not cause
data loss; data is lost only if 2 or more disks fail.
Performance is also as good as RAID 0, if not better. We can read and write parallel
blocks of data.
One of the drawbacks is that the write involves heavy parity calculations by the RAID
controller. Write operations are slower compared to RAID 0.
Pretty useful for general purpose uses where reads are more frequent than writes.
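The parity idea behind RAID 5 can be sketched with XOR (a simplification: real RAID 5 also rotates the parity block across the disks, which is omitted here):

```python
def parity(blocks):
    """XOR all blocks together; this is the RAID parity block."""
    p = 0
    for b in blocks:
        p ^= b
    return p

data = [0b1010, 0b0110, 0b1111]   # blocks on three data disks
p = parity(data)                  # parity block stored on a fourth disk

# Disk 1 fails: rebuild its block from the survivors plus the parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

Because XOR is its own inverse, XOR-ing the surviving blocks with the parity block reproduces the lost block exactly; this is also why every write must recompute parity, making writes slower than in RAID 0.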
RAID 10:-
Combines RAID 1 and RAID 0.
This gives the benefit of both: good performance and good failover handling.
RAID 6:-
It is seen as the best way to guarantee data integrity as it uses double parity.
The expanded use of RAID-6 and other dual-parity schemes is a virtual certainty.
RAID vendors are expected to support "fast rebuild" features that can restore hundreds of
gigabytes in just an hour or so.
Striping (of data) would extend across RAID groups -- not just across drives within a
group.
Improved disk diagnostic features should offer more reliable predictions of impending
drive failures, allowing the rebuild process to begin before an actual fault occurs.
Hot Spares!!
IMPLEMENTATIONS
CACHE MEMORY
The cache is a very high speed, expensive piece of memory, which is used to speed up the
memory retrieval process. Due to its higher cost, the CPU comes with a relatively small
amount of cache compared with the main memory. Without cache memory, every time the
CPU requests for data, it would send the request to the main memory which would then be
sent back across the system bus to the CPU. This is a slow process. The idea of introducing
cache is that this extremely fast memory would store data that is frequently accessed and if
possible, the data that is around it. This is to achieve the quickest possible response time to
the CPU.
In early PCs, the various components had one thing in common: they were all really slow.
Now processors run much faster than everything else in the computer. This means that one of
the key goals in modern system design is to ensure that to whatever extent possible, the
processor is not slowed down by the storage devices it works with. Slowdowns mean wasted
processor cycles, where the CPU can't do anything because it is sitting and waiting for
information it needs.
• Memory Cache: A memory cache, sometimes called a cache store or RAM cache, is a
portion of memory made of high-speed static RAM (SRAM) instead of the slower and
cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective
because most programs access the same data or instructions over and over. By keeping as
much of this information as possible in SRAM, the computer avoids accessing the slower
DRAM.
• Disk Cache: Disk caching works under the same principle as memory caching, but instead
of using high-speed SRAM, a disk cache uses conventional main memory. The most
recently accessed data from the disk (as well as adjacent sectors) is stored in a memory
buffer. When a program needs to access data from the disk, it first checks the disk cache to
see if the data is there. Disk caching can dramatically improve the performance of
applications, because accessing a byte of data in RAM can be thousands of times faster
than accessing a byte on a hard disk.
LEVELS OF CACHE:
Cache memory is categorized in levels based on its closeness and accessibility to the
microprocessor. There are three levels of a cache.
Level 1(L1) Cache: This cache is built into the processor and is made of SRAM (Static
RAM). Each time the processor requests information from memory, the cache controller on
the chip uses special circuitry to first check if the memory data is already in the cache. If it
is present, then the system is spared from time consuming access to the main memory. In a
typical CPU, primary cache ranges in size from 8 to 64 KB, with larger amounts on the
newer processors. This type of Cache Memory is very fast because it runs at the speed of
the processor since it is integrated into it.
Level 2(L2) Cache: The L2 cache is larger but slower than the L1 cache. It is used to
hold recent accesses that are not picked up by the L1 cache and is usually 64 KB to 2 MB
in size. An L2 cache is also often found on the CPU. If L1 and L2 cache are used together,
then the missing
information that is not present in L1 cache can be retrieved quickly from the L2 cache.
Like L1 caches, L2 caches are composed of SRAM but they are much larger. L2 is usually
a separate static RAM (SRAM) chip and it is placed between the CPU & DRAM(Main
Memory)
Level 3(L3) Cache: L3 Cache memory is an enhanced form of memory present on the
motherboard of the computer. It is an extra cache built into the motherboard between the
processor and main memory to speed up processing operations. It reduces the gap between a
request and the retrieval of data and instructions, serving them much more quickly than main
memory. L3 cache is used with modern processors, typically offering more than 3 MB of
storage.
Diagram showing different types of cache and their position in the computer system
The reason that this happens is due to a computer science principle called locality of
reference. It states basically that even within very large programs with several megabytes of
instructions, only small portions of this code generally get used at once. Programs tend to
spend large periods of time working in one small area of the code, often performing the same
work many times over and over with slightly different data, and then move to another area.
This occurs because of "loops", which are what programs use to do work many times in rapid
succession.
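Locality of reference is what makes a small cache effective. A toy direct-mapped cache (the line count and line size are chosen arbitrarily for illustration) makes the effect visible:

```python
LINES, WORDS = 8, 4   # 8 cache lines of 4 words each (illustrative sizes)

def hit_rate(addresses):
    """Simulate a direct-mapped cache and return the fraction of hits."""
    cache = [None] * LINES                       # one tag per cache line
    hits = 0
    for a in addresses:
        line = (a // WORDS) % LINES              # which line the address maps to
        tag = a // (WORDS * LINES)               # identifies the memory block
        if cache[line] == tag:
            hits += 1                            # already cached: fast access
        else:
            cache[line] = tag                    # miss: fetch line from memory
    return hits / len(addresses)

# A loop touching the same 16 words over and over (high locality).
loop = [base for _ in range(100) for base in range(16)]
print(f"loopy access pattern: {hit_rate(loop):.1%} hits")
```

After the first pass warms the cache, every subsequent access hits, so the hit rate approaches 100%; a program with no reuse of addresses would see far lower rates on the same hardware.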
IMPORTANCE OF CACHE
Cache is responsible for a great deal of the system performance improvement of today's PCs.
The cache is a buffer of sorts between the very fast processor and the relatively slow memory
that serves it. The presence of the cache allows the processor to do its work while waiting for
memory far less often than it otherwise would. Without cache, the computer would be very
slow and all our work would get delayed. So cache is a very important part of the computer
system.
MEMORY INTERLEAVING
Memory interleaving is a technique used to increase memory throughput. The core idea is to
split the memory system into independent banks, which can answer read or write requests
independently and in parallel.
Usually, this is done by interleaving the address space: consecutive cells in the address
space are assigned to different memory banks. An example of four-way interleaved
memory, and the mapping of consecutive data cells to banks, is shown in the figure.
There are two address formats for interleaving the memory address space:
Low-order interleaving spreads contiguous memory locations across the modules horizontally.
This implies that the low-order bits of the memory address are used to identify the memory
module, while the high-order bits give the word address (displacement) within each module.
High order interleaving uses the high order bits as the module address and the low order bits
as the word address within each module.
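The two address splits can be written out directly (the bank and word counts below are illustrative, not from the text):

```python
BANKS, WORDS = 4, 16   # 4 memory modules, 16 words per module (illustrative)

def low_order(addr):
    """Low-order interleaving: the low bits pick the module, so
    consecutive addresses land in DIFFERENT banks."""
    return addr % BANKS, addr // BANKS          # (module, word-in-module)

def high_order(addr):
    """High-order interleaving: the high bits pick the module, so
    consecutive addresses stay in the SAME bank."""
    return addr // WORDS, addr % WORDS          # (module, word-in-module)

print([low_order(a)[0] for a in range(8)])    # [0, 1, 2, 3, 0, 1, 2, 3]
print([high_order(a)[0] for a in range(8)])   # [0, 0, 0, 0, 0, 0, 0, 0]
```

Low-order interleaving is what lets sequential accesses be pipelined across banks; high-order interleaving keeps each module a contiguous region, which is simpler but gives no parallelism on sequential access.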
Various organizations of the physical memory are included in this section, in order to close
up the speed gap between cache and main memory. An interleaving technique is presented
that allows pipelined access to the parallel memory modules.
The memory design goal (interleaving goal) is to broaden the effective memory bandwidth
so that more memory words can be accessed per unit time.
The ultimate purpose is to match the memory bandwidth with the bus bandwidth and with
the processor bandwidth.
ASSOCIATIVE MEMORY
Write operation:
Read operation:
• When a word is to be read from an associative memory, the contents of the word, or a part
of the word is specified.
• The memory locates all the words which match the specified content and marks them for
reading.
HARDWARE ORGANISATION
Argument register(A): It contains the word to be searched. It has n bits(one for each bit of
the word).
Key Register(K): It provides mask for choosing a particular field or key in the argument
word. It also has n bits.
Associative memory array: It contains the words which are to be compared with the
argument word.
Match Register(M):It has m bits, one bit corresponding to each word in the memory array .
After the matching process, the bits corresponding to matching words in match register are
set to 1.
MATCHING PROCESS
• The entire argument word is compared with each memory word, if the key register
contains all 1’s. Otherwise, only those bits in the argument that have 1’s in their
corresponding position of the key register are compared.
• Thus the key provides a mask or identifying piece of information which specifies how the
reference to memory is made.
• To illustrate with a numerical example, suppose that the argument register A and the key
register K have the bit configuration as shown below.
• Only the three left most bits of A are compared with the memory words because K has 1’s
in these three positions only.
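The masked comparison described above can be expressed with bit operations (4-bit words for brevity; the function name and the example values are illustrative):

```python
def match(argument, key, words):
    """Produce one match bit per memory word: compare only the bit
    positions where the key register K holds a 1 (the mask)."""
    return [int((w & key) == (argument & key)) for w in words]

A = 0b1010                         # argument register
K = 0b1110                         # key register: mask the three leftmost bits
memory = [0b1010, 0b1011, 0b0101]  # associative memory array
M = match(A, K, memory)
print(M)  # [1, 1, 0] -- the first two words agree with A in the masked bits
```

The second word differs from A only in its rightmost bit, which K masks out, so it still matches; in hardware this comparison happens in every word simultaneously, which is what makes the memory "content addressable".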
DISADVANTAGES
• An associative memory is more expensive than a random access memory because each
cell must have an extra storage capability as well as logic circuits for matching its content
with an external argument.
• For this reason, associative memories are used in applications where the search time is
very critical and must be very short.
***
1. Input peripherals : Allows user input, from the outside world to the computer. Example:
Keyboard, Mouse etc.
2. Output peripherals: Allows information output, from the computer to the outside world.
Example: Printer, Monitor etc
3. Input-Output peripherals: Allow both input (from the outside world to the computer) as
well as output (from the computer to the outside world). Example: Touch screen etc.
INTERFACES
Interface is a shared boundary between two separate components of the computer system
which can be used to attach two or more components to the system for communication
purposes.
There are two types of interface:
1. CPU Interface
2. I/O Interface
Input-Output Interface:-
Peripherals connected to a computer need special communication links for interfacing with
CPU. In computer system, there are special hardware components between the CPU and
peripherals to control or manage the input-output transfers. These components are
called input-output interface units because they provide communication links between
processor bus and peripherals. They provide a method for transferring information between
internal system and input-output devices.
Modes of I/O Data Transfer
Data transfer between the central unit and I/O devices can be handled in generally three types
of modes which are given below:
1. Programmed I/O
2. Interrupt Initiated I/O
3. Direct Memory Access
Programmed I/O:-
Programmed I/O transfers are the result of I/O instructions written in the computer program.
Each data item transfer is initiated by an instruction in the program.
Usually the program controls data transfer to and from CPU and peripheral. Transferring data
under programmed I/O requires constant monitoring of the peripherals by the CPU.
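The constant monitoring can be sketched as a polling loop (a sketch only: the `Device` class below is a stand-in for real device status and data registers):

```python
import collections

class Device:
    """Toy peripheral with a status check and a data register."""
    def __init__(self, data):
        self.buf = collections.deque(data)
    def ready(self):
        return bool(self.buf)      # status register: data available?
    def read(self):
        return self.buf.popleft()  # data register: next item

def programmed_io(dev):
    """CPU-driven transfer: the processor repeatedly checks the device
    status itself and moves each data item with its own instructions."""
    received = []
    while dev.ready():             # busy-wait polling of the status flag
        received.append(dev.read())
    return received

print(programmed_io(Device([7, 8, 9])))  # [7, 8, 9]
```

Every iteration of that loop is CPU time spent checking status rather than computing, which is precisely the inefficiency that interrupt-driven I/O and DMA are designed to remove.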
Interrupt Initiated I/O:-
In the programmed I/O method the CPU stays in the program loop until the I/O unit indicates
that it is ready for data transfer. This is time consuming process because it keeps the
processor busy needlessly.
This problem can be overcome by using interrupt initiated I/O. In this when the interface
determines that the peripheral is ready for data transfer, it generates an interrupt. After
receiving the interrupt signal, the CPU stops the task it is processing, services the
I/O transfer, and then returns to its previous processing task.
Direct Memory Access:-
Removing the CPU from the path and letting the peripheral device manage the memory buses
directly would improve the speed of transfer. This technique is known as DMA.
In this, the interface transfer data to and from the memory through memory bus. A DMA
controller manages to transfer data between peripherals and memory unit.
Many hardware systems use DMA such as disk drive controllers, graphic cards, network
cards and sound cards etc. It is also used for intra chip data transfer in multicore processors.
In DMA, CPU would initiate the transfer, do other operations while the transfer is in progress
and receive an interrupt from the DMA controller when the transfer has been completed.
The communication between the IOP and the devices is similar to the program control
method of transfer. And the communication with the memory is similar to the direct memory
access method.
In large scale computers, each processor is independent of other processors and any processor
can initiate the operation.
The CPU can act as master and the IOP act as slave processor. The CPU assigns the task of
initiating operations but it is the IOP, who executes the instructions, and not the CPU. CPU
instructions provide operations to start an I/O transfer. The IOP asks for CPU through
interrupt.
Instructions that are read from memory by an IOP are also called commands to distinguish
them from instructions that are read by CPU. Commands are prepared by programmers and
are stored in memory. Command words make the program for IOP. CPU informs the IOP
where to find the commands in memory.
INTERRUPTS
Data transfer between the CPU and the peripherals is initiated by the CPU. But the CPU
cannot start the transfer unless the peripheral is ready to communicate with the CPU. When a
device is ready to communicate with the CPU, it generates an interrupt signal. A number of
input-output devices are attached to the computer and each device is able to generate an
interrupt request.
The main job of the interrupt system is to identify the source of the interrupt. There is also a
possibility that several devices will request simultaneously for CPU communication. Then,
the interrupt system has to decide which device is to be serviced first.
Priority Interrupt:-
A priority interrupt is a system which decides the order in which various devices, which
generate interrupt signals at the same time, will be serviced by the CPU. The system has
authority to decide which conditions are allowed to interrupt the CPU, while some other
interrupt is being serviced. Generally, devices with high speed transfer such as magnetic
disks are given high priority and slow devices such as keyboards are given low priority.
When two or more devices interrupt the computer simultaneously, the computer services the
device with the higher priority first.
Types of Interrupts:-
Following are some different types of interrupts:
Hardware Interrupts
When the signal for the processor comes from an external device or hardware, the interrupt
is known as a hardware interrupt.
Let us consider an example: when we press any key on our keyboard to do some action, then
this pressing of the key will generate an interrupt signal for the processor to perform certain
action. Such an interrupt can be of two types:
Maskable Interrupt
The hardware interrupts which can be delayed when a much higher priority interrupt has
occurred at the same time.
Non-Maskable Interrupt
The hardware interrupts which cannot be delayed and should be processed by the
processor immediately.
Software Interrupts
The interrupt that is caused by any internal system of the computer system is known as
a software interrupt. It can also be of two types:
Normal Interrupt
The interrupts that are caused by software instructions are called normal software
interrupts.
Exception
Unplanned interrupts which are produced during the execution of some program are
called exceptions, such as division by zero.
***
INSTRUCTION SET:
The instruction set, also called ISA (instruction set architecture) is part of a computer that
pertains to programming, which is basically machine language. The instruction set provides
commands to the processor, to tell it what it needs to do. The instruction set consists of
addressing modes, instructions, native data types, registers, memory architecture, interrupt
and exception handling, and external I/O.
An example of an instruction set is the x86 instruction set, which is common to find on
computers today. Different computer processors can use almost the same instruction set while
still having very different internal design. Both the Intel Pentium and AMD Athlon
processors use nearly the same x86 instruction set. An instruction set can be built into the
hardware of the processor, or it can be emulated in software, using an interpreter. The
hardware design is more efficient and faster for running programs than the emulated software
version.
Examples of instruction set:-
INSTRUCTION CODES
A program, as we all know, is a set of instructions that specify the operations,
operands, and the sequence by which processing has to occur. An instruction code is a group
of bits that tells the computer to perform a specific operation.
Instruction Code: Operation Code
The operation code of an instruction is a group of bits that defines operations such as add,
subtract, multiply, shift and complement. The number of bits required for the operation code
depends on the total number of operations available on the computer: the operation code
must consist of at least n bits for 2^n operations. The operation part of an instruction code
specifies the operation to be performed.
Direct Addressing Mode
In this mode, the address field of the instruction directly gives the effective address of the operand.
For example: ADD R1, 4000 - here 4000 is the effective address of the operand.
NOTE: The effective address is the location where the operand is present.
Indirect Addressing Mode
In this mode, the address field of the instruction gives the address at which the effective
address is stored in memory. This slows down execution, since finding the operand requires
multiple memory lookups.
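The cost difference between the two modes comes down to the number of memory accesses. A minimal sketch (Python; the toy memory contents are invented for illustration):

```python
# Toy memory: address -> contents.
memory = {4000: 25, 5000: 4000}

def operand_direct(address_field):
    """Direct addressing: the address field IS the effective
    address, so one memory access fetches the operand."""
    return memory[address_field]

def operand_indirect(address_field):
    """Indirect addressing: the address field points at the location
    holding the effective address, so two memory accesses are needed."""
    effective_address = memory[address_field]
    return memory[effective_address]

print(operand_direct(4000))    # 25, one lookup
print(operand_indirect(5000))  # 25, two lookups (5000 -> 4000 -> 25)
```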
INSTRUCTION REPRESENTATION:
Within the computer, each instruction is represented by a sequence of bits. The instruction is
divided into fields corresponding to the constituent elements of the instruction. The
instruction format is highly machine specific and depends mainly on the machine
architecture. Assume a 16-bit CPU in which 4 bits are used for the operation code: we may
have up to 16 (2^4 = 16) different instructions. Each instruction has two operands, and 6 bits
are used to specify each operand, so each operand reference can select one of 64
(2^6 = 64) different operands.
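This 16-bit layout (4-bit opcode, two 6-bit operand references) can be packed and unpacked as a sketch; the exact bit positions are an assumption for illustration:

```python
def encode(opcode, op1, op2):
    """Pack a 4-bit opcode and two 6-bit operand references into a
    16-bit word (assumed layout: opcode in bits 15-12, op1 in bits
    11-6, op2 in bits 5-0)."""
    assert 0 <= opcode < 16 and 0 <= op1 < 64 and 0 <= op2 < 64
    return (opcode << 12) | (op1 << 6) | op2

def decode(word):
    """Recover (opcode, op1, op2) from a 16-bit instruction word."""
    return (word >> 12) & 0xF, (word >> 6) & 0x3F, word & 0x3F

word = encode(0b0001, 13, 57)
print(f"{word:016b}")  # 0001001101111001
print(decode(word))    # (1, 13, 57)
```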
It is difficult to deal with binary representation of machine instructions. Thus, it has become
common practice to use a symbolic representation of machine instructions.
Opcodes are represented by abbreviations, called mnemonics, that indicate the operations.
Common examples include:
ADD: Add
SUB: Subtract
MULT: Multiply
DIV: Divide
LOAD: Load data from memory into the CPU
STORE: Store data from the CPU to memory
RISC Processor
RISC stands for Reduced Instruction Set Computer. It is a type of microprocessor with a
limited number of instructions. RISC processors can execute their instructions very fast
because the instructions are small and simple.
RISC chips require fewer transistors, which makes them cheaper to design and produce. In
RISC, the instruction set contains simple, basic instructions from which more complex
instructions can be composed. Most instructions complete in one cycle, which allows the
processor to handle many instructions at the same time.
Instructions are register based, and data transfer takes place from register to register.
CISC Processor
CISC stands for Complex Instruction Set Computer. The two designs compare as follows:
Instruction size and format:
- CISC: large set of instructions with variable formats (16-64 bits per instruction).
- RISC: small set of instructions with a fixed format (32 bit).
CPU control:
- CISC: mostly microcoded using control memory (ROM), though modern CISC processors use hardwired control.
- RISC: mostly hardwired, without control memory.
***
PROCESSOR ORGANIZATION
REQUIREMENTS PLACED ON THE PROCESSOR
Fetch instruction: The processor reads an instruction from memory (register, cache, or
main memory).
Interpret instruction: The instruction is decoded to determine what action is
required.
Fetch data: The execution of an instruction may require reading data from memory
or an I/O module.
Process data: The execution of an instruction may require performing some
arithmetic or logical operation on data.
Write data: The results of an execution may require writing data to memory or an I/O
module.
SIMPLIFIED VIEW OF PROCESSOR
COMPONENTS OF PROCESSOR
The major components of the processor are an arithmetic and logic unit (ALU) and
a control unit (CU).
The ALU does the actual computation or processing of data.
The control unit controls the movement of data and instructions into and out of the
processor and controls the operation of the ALU.
The register file consists of a set of storage locations.
REGISTER ORGANIZATION
The registers in the processor perform two roles:
1. User-visible register: Enable the machine- or assembly language programmer to
minimize main memory references by optimizing use of registers.
2. Control and status registers: Used by the control unit to control the operation of the
processor and by privileged, operating system programs to control the execution of
programs.
USER-VISIBLE REGISTERS
General Purpose:-
General-purpose registers can be assigned to a variety of functions by the
programmer.
Mostly these registers contain the operands for instructions.
In some cases they are used for addressing purposes.
Data Registers:-
Data registers hold data and cannot be employed in the calculation of an
operand address.
E.g. the accumulator.
Address Registers:-
Address registers may be devoted to a particular addressing mode. Examples:
Segment pointers: a segment register holds the address of the base of the
segment.
Index registers: used for indexed addressing; may be auto-indexed.
Stack pointer: if there is user-visible stack addressing, then typically there is
a dedicated register that points to the top of the stack.
Condition Codes:-
Condition codes are bits set by the processor hardware as the result of
operations.
MICRO-OPERATION
In the multiplexer-based transfer circuit, the multiplexer has two selection lines, S0 and S1,
which determine which bits of a register are selected.
ARITHMETIC MICRO-OPERATION
The basic arithmetic micro-operations are
– Addition
– Subtraction
– Increment
– Decrement
The additional arithmetic micro-operations are
– Add with carry
– Subtract with borrow
– Transfer/Load
etc. …
Summary of Typical Arithmetic Micro-Operations
R3 ← R1 + R2       Contents of R1 plus R2 transferred to R3
R3 ← R1 - R2       Contents of R1 minus R2 transferred to R3
R2 ← R2'           Complement the contents of R2 (1's complement)
R2 ← R2' + 1       2's complement the contents of R2 (negate)
R3 ← R1 + R2' + 1  R1 plus the 2's complement of R2 (subtraction)
R1 ← R1 + 1        Increment
R1 ← R1 - 1        Decrement
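The micro-operations in the table can be modeled as bitwise functions on a fixed register width (a sketch; 4-bit registers are assumed to match the examples in this section):

```python
MASK = 0xF  # 4-bit registers

def add(r1, r2):      return (r1 + r2) & MASK          # R3 <- R1 + R2
def complement(r):    return ~r & MASK                 # R2 <- R2'
def negate(r):        return (~r + 1) & MASK           # R2 <- R2' + 1 (2's complement)
def subtract(r1, r2): return (r1 + negate(r2)) & MASK  # R3 <- R1 + R2' + 1
def increment(r):     return (r + 1) & MASK            # R1 <- R1 + 1
def decrement(r):     return (r - 1) & MASK            # R1 <- R1 - 1

print(subtract(0b1010, 0b0011))  # 7, i.e. 10 - 3 done as add-with-2's-complement
```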
In the adder circuit, four full-adder circuits are used; each adder has three inputs, Cin, X
and Y. The X input is fed directly to the full adder, while the Y input is fed to the full
adder through a 4x1 multiplexer.
LOGICAL MICRO-OPERATION
Logic micro-operations specify binary operations on strings of bits stored in registers.
These operations consider each bit of the register separately and treat it as a binary
variable. For example, the exclusive-OR micro-operation
P: R1 ← R1 ⊕ R2
1010 Content of R1
1100 Content of R2
0110 Content of R1 after P = 1
The 16 possible logic functions F0 through F15 of two binary variables x and y are defined by the columns of the truth table:

x y  F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15
0 0   0  0  0  0  0  0  0  0  1  1  1   1   1   1   1   1
0 1   0  0  0  0  1  1  1  1  0  0  0   0   1   1   1   1
1 0   0  0  1  1  0  0  1  1  0  0  1   1   0   0   1   1
1 1   0  1  0  1  0  1  0  1  0  1  0   1   0   1   0   1
• SELECTIVE SET:
sets the bits of A to 1 where there are corresponding 1's in B (obtained with OR).
Example:
1010 A before
1100 B (logic operand)
1110 A after
• SELECTIVE COMPLEMENT:
complements the bits of A where there are corresponding 1's in B.
Example:
1010 A before
1100 B
0110 A after
As can be seen, selective complement is obtained with exclusive-OR.
• SELECTIVE CLEAR:
clears the bits of A to 0 where there are corresponding 1's in B.
Example:
1010 A before
1100 B
0010 A after
(it is obtained by the micro-operation A ← A · B')
• MASKING:
similar to selective clear, except that the bits of A are cleared where there are
corresponding 0's in B (obtained with AND).
Example:
1010 A before
1100 B
1000 A after
• INSERT:
inserts a new value into a group of bits.
This is done by first masking the bits to be replaced and then ORing in the new value.
Example - first mask:
0110 1010 A before
0000 1111 B (mask)
0000 1010 A after
then insert the new value:
0000 1010 A before
1001 0000 B (insert)
1001 1010 A after
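All five operations above reduce to single bitwise logic micro-operations; a sketch over the same example values (Python; the 4-bit width is assumed to match the examples):

```python
def selective_set(a, b):        return a | b  # OR: set bits of A where B is 1
def selective_complement(a, b): return a ^ b  # XOR: flip bits of A where B is 1
def selective_clear(a, b, width=4):
    return a & ~b & ((1 << width) - 1)        # A AND B': clear bits of A where B is 1
def mask_op(a, b):              return a & b  # AND: clear bits of A where B is 0
def insert(a, keep_mask, value):
    return (a & keep_mask) | value            # mask first, then OR in the new value

assert selective_set(0b1010, 0b1100)        == 0b1110
assert selective_complement(0b1010, 0b1100) == 0b0110
assert selective_clear(0b1010, 0b1100)      == 0b0010
assert mask_op(0b1010, 0b1100)              == 0b1000
assert insert(0b01101010, 0b00001111, 0b10010000) == 0b10011010
print("all selective operations match the worked examples")
```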
SHIFT MICRO-OPERATION
Shift micro-operations are used for serial transfer of data. The contents of a register
can be shifted to the left or to the right. As the bits are shifted, the first flip-flop
receives its binary information from the serial input, and the information transferred
through the serial input determines the type of shift.
There are three types of shift:
I. Logical shift
II. Circular shift
III. Arithmetic shift
Logical shift:-
A logical shift micro-operation transfers a 0 (zero) through the serial input, from the left
or the right depending on the direction. For a logical shift left, the 0 enters at the right
end of the data; for a logical shift right, the 0 enters at the left end, as shown in the
figures below.
Register Transfer Language (RTL) for the logical shift micro operations can be written as:
R ← shl R (shift left register (R)).
R ← shr R (shift right register (R)).
Below is the diagram showing logical shift left micro operation on the data in a register.
Arithmetic shift:-
An arithmetic shift operation shifts signed (positive or negative) binary numbers left or
right, multiplying or dividing them by 2. An arithmetic shift left multiplies the value in
the register by 2, whereas an arithmetic shift right divides it by 2.
In RTL, these arithmetic shift micro-operations are represented as
R ← ashl R (arithmetic shift left R (register))
R ← ashr R (arithmetic shift right R (register))
Diagram showing Arithmetic shift left operation is as follows:
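The three shift types can be sketched on an 8-bit register (the register width is an assumption; the document's shift diagrams are not reproduced here):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def shl(r):  return (r << 1) & MASK                         # logical shift left: 0 enters on the right
def shr(r):  return r >> 1                                  # logical shift right: 0 enters on the left
def cil(r):  return ((r << 1) | (r >> (WIDTH - 1))) & MASK  # circular shift left
def cir(r):  return (r >> 1) | ((r & 1) << (WIDTH - 1))     # circular shift right
def ashr(r):
    """Arithmetic shift right: replicate the sign bit (divide by 2)."""
    sign = r & (1 << (WIDTH - 1))
    return (r >> 1) | sign

r = 0b10010110
print(f"{shl(r):08b}")   # 00101100
print(f"{ashr(r):08b}")  # 11001011 (sign bit preserved)
print(f"{cil(r):08b}")   # 00101101 (MSB wraps around to the LSB)
```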
INSTRUCTION EXECUTION
Instruction Execution Steps
1. Fetch the next instruction from memory into the instruction register.
2. Change the program counter to point to the following instruction.
3. Determine the type of instruction just fetched.
4. If the instruction uses a word in memory, determine where it is.
5. Fetch the word, if needed, into a CPU register.
6. Execute the instruction.
7. Go to step 1 to begin executing the following instruction.
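The steps above can be sketched as a loop over a toy machine (Python; the two-instruction accumulator ISA and memory contents are invented for illustration):

```python
# Toy memory: instructions at addresses 0..2, data at 100..101.
memory = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("HALT", None),
          100: 7, 101: 5}

pc, acc = 0, 0
while True:
    opcode, addr = memory[pc]   # fetch into the "instruction register" and decode
    pc += 1                     # advance the program counter
    if opcode == "HALT":
        break
    data = memory[addr]         # fetch the operand word from memory
    if opcode == "LOAD":        # execute the instruction
        acc = data
    elif opcode == "ADD":
        acc += data
print(acc)  # 12 (7 loaded, then 5 added)
```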
PIPELINING
Pipelining is the technique of feeding instructions through the processor via a pipeline. It
allows instructions to be stored and executed in an orderly, overlapped fashion. It is also
known as pipeline processing.
Pipelining is a technique where multiple instructions are overlapped during execution.
Pipeline is divided into stages and these stages are connected with one another to form a pipe
like structure. Instructions enter from one end and exit from another end.
Pipelining increases the overall instruction throughput. Pipelines are of two types:
1. Arithmetic Pipeline
2. Instruction Pipeline
Arithmetic Pipeline
Arithmetic pipelines are found in most computers. They are used for floating-point
operations, multiplication of fixed-point numbers, etc. For example, the inputs to a
floating-point adder pipeline are:
X = A * 2^a
Y = B * 2^b
Here A and B are mantissas (the significant digits of the floating-point numbers), while a and
b are exponents.
Floating-point addition and subtraction is done in 4 parts:
1. Compare the exponents.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalize the result.
Registers are used for storing the intermediate results between the above operations.
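The 4 parts can be sketched one per stage (Python; decimal mantissas and a base-10 exponent are used for readability, which is a simplification of the binary hardware):

```python
def fp_add(A, a, B, b):
    """Staged floating-point addition of X = A*10^a and Y = B*10^b,
    with mantissas assumed normalized to 0.1 <= |m| < 1."""
    # Stage 1: compare the exponents.
    diff = a - b
    # Stage 2: align the mantissa with the smaller exponent.
    if diff >= 0:
        B, b = B / 10**diff, a
    else:
        A, a = A / 10**(-diff), b
    # Stage 3: add the mantissas.
    S = A + B
    # Stage 4: normalize the result.
    while abs(S) >= 1:
        S, a = S / 10, a + 1
    return S, a

S, e = fp_add(0.9504, 3, 0.8200, 2)
print(round(S, 5), e)  # 0.10324 4, i.e. the sum is 0.10324 * 10^4
```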
Instruction Pipeline
In an instruction pipeline, a stream of instructions is executed by overlapping the fetch,
decode and execute phases of the instruction cycle. This technique is used to increase the
throughput of the computer system.
An instruction pipeline reads instruction from the memory while previous instructions are
being executed in other segments of the pipeline. Thus we can execute multiple instructions
simultaneously. The pipeline will be more efficient if the instruction cycle is divided into
segments of equal duration.
Pipeline Conflicts
There are some factors that cause the pipeline to deviate from its normal performance. Some
of these factors are given below:
1. Timing Variations
All stages cannot take the same amount of time. This problem generally occurs in instruction
processing, where different instructions have different operand requirements and thus
different processing times.
2. Data Hazards
When several instructions are in partial execution, a problem arises if they reference the
same data. We must ensure that a later instruction does not attempt to access data before the
current instruction has finished with it, because this would lead to incorrect results.
3. Branching
In order to fetch and execute the next instruction, we must know what that instruction is. If
the present instruction is a conditional branch, and its result will lead us to the next
instruction, then the next instruction may not be known until the current one is processed.
4. Interrupts
Interrupts inject unwanted instructions into the instruction stream and thereby affect the
execution of instructions.
5. Data Dependency
It arises when an instruction depends upon the result of a previous instruction that is not yet
available.
Advantages of Pipelining
Disadvantages of Pipelining
Performance
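A first-order model for pipeline performance: a k-stage pipeline finishes n instructions in k + n - 1 cycles instead of n*k, so the speedup approaches k for large n. This is the textbook idealization, assuming equal stage delays and no stalls:

```python
def speedup(k: int, n: int) -> float:
    """Ideal k-stage pipeline speedup over a non-pipelined machine
    for n instructions: S = n*k / (k + n - 1)."""
    return n * k / (k + n - 1)

print(speedup(4, 1))     # 1.0  (a single instruction gains nothing)
print(speedup(4, 10))    # ~3.08
print(speedup(4, 1000))  # ~3.99, approaching the stage count 4
```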
***
Organization of ALU:
Various circuits required to process data or perform arithmetic operations are connected to
the microprocessor's ALU. The accumulator and data buffer store data temporarily. These
data are processed as per the control instructions to solve problems such as addition and
multiplication.
Functions of ALU:
Functions of ALU or Arithmetic & Logic Unit can be categorized into following 3 categories:
1. Arithmetic Operations:
Additions, multiplications, etc. are examples of arithmetic operations. Finding whether one
number is greater than, smaller than, or equal to another by using subtraction is also a form
of arithmetic operation.
2. Logical Operations:
Operations like AND, OR, NOR, NOT, etc., carried out using logical circuitry, are examples
of logical operations.
3. Data Manipulations:
***
Decoder
The decoder is used to decode the instructions that make up a program as they are being
processed, and to determine what actions must be taken in order to process them. These
decisions are normally made by looking at the opcode of the instruction, together with the
addressing mode used.
Timer or clock
The timer or clock ensures that all processes and instructions are carried out and completed at
the right time. Pulses are sent to the other areas of the CPU at regular intervals (related to the
processor clock speed), and actions only occur when a pulse is detected. This ensures that the
actions themselves also occur at these same regular intervals, meaning that the operations of
the CPU are synchronised.
The control logic circuits create the control signals themselves, which are then sent around
the processor. These signals inform the arithmetic and logic unit and the register array what
actions and steps they should be performing, what data they should use to perform those
actions, and what should be done with the results.
The control unit performs functions such as:
1. Fetching instructions one by one from primary memory and gathering the required data
and operands to perform those instructions.
2. Sending instructions to ALU to perform additions, multiplication etc.
3. Receiving results of operations from the ALU and sending them to primary memory
4. Fetching programs from input and secondary memory and bringing them to primary
memory
5. Sending results from ALU stored in primary memory to output
Hardwired Control Unit
It is implemented with gates, flip-flops, decoders and other hardware circuits. The inputs to
the control unit are the instruction register, flags, timing signals, etc. This organization
becomes very complicated if the control unit has to be large.
If the design has to be modified or changed, all the combinational circuits have to be
modified which is a very difficult task.
Microprogrammed Control Unit
It is implemented by using programming approach. A sequence of micro operations is carried
out by executing a program consisting of micro-instructions. In this organization any
modifications or changes can be done by updating the micro program in the control memory
by the programmer.
Wilkes Control
• 1951
• Matrix partially filled with diodes
• During cycle, one row activated
— Generates signals where diode present
— First part of row generates control
— Second generates address for next cycle
The Wilkes control unit consists of a control memory address register (CMAR), a decoder,
and a control store. Data from the instruction register is entered into the CMAR, and its
output is fed to the decoder, which drives the control store (a matrix of control fields,
condition bits and address lines).
MICRO-INSTRUCTION
Microinstruction: A single instruction in microcode. It is the most elementary instruction in
the computer, such as moving the contents of a register to the arithmetic logic unit (ALU). It
takes several microinstructions to carry out one complex machine instruction (CISC).
Information in a Microinstruction
- Control information
- Sequencing information
- Constants: information that is useful when fed into the system
This information needs to be organized in some way for
- Efficient use of the microinstruction bits
- Fast decoding
Field Encoding
- Encoding the microinstruction bits
- Encoding slows down the execution speed due to the decoding delay
- Encoding also reduces the flexibility due to the decoding hardware
Microinstruction Encoding: Direct Encoding
MICRO-INSTRUCTION TYPES
Vertical micro-programming: each micro-instruction specifies a single (or a few) micro-
operations to be performed.
The width is narrow.
n control signals are encoded into log2 n bits.
Limited ability to express parallelism.
The considerable encoding of control information requires an external memory word decoder
to identify the exact control line being manipulated.
Diagram (fields of a vertical micro-instruction): micro-instruction address, function codes,
jump condition.
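The log2 n encoding trade-off can be illustrated numerically (a sketch; the signal count is an invented example):

```python
import math

# Suppose 40 mutually exclusive control signals need to be issued.
n_signals = 40
unencoded_bits = n_signals                        # one bit per signal, no decoder needed
encoded_bits = math.ceil(math.log2(n_signals))    # vertical encoding, needs an external decoder
print(unencoded_bits, encoded_bits)  # 40 6
```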
MICRO-INSTRUCTION FORMATS
The microinstruction format consists of 128 bits, broken down into 30 functional fields.
Each field consists of one or more bits, and the fields are grouped into five major
categories:
1) Control of board
4) 8818 micro-sequencer
- Selecting condition codes for sequencer control. The first bit of field 1 indicates whether
the condition flag is to be set to 1 or 0, and the remaining 4 bits indicate which flag is to be
set.
- Determining the unit driving the system Y bus. One of the four devices attached to the bus
is selected.
A combinational control unit can be specified by a truth table, although this is not efficient
in terms of size. A read-only memory (ROM) can store such a truth table.
The ROM can then output a fixed sequence of control signals simply by cycling through its
addresses. The content of this ROM is a microprogram, comparable to a straight-line
program (no transfer of control). Each entry in the ROM is called a microword, and a
microprogram counter is used to cycle through the sequence of control words.
Condition bits are used to determine the flow of the microprogram, and the next-address
field determines the next microword to be executed.
A microword contains two fields: the control bits (field 0) and the next address (field 1). A
microprogram is executed by issuing the control bits of the current microword and then
fetching the microword selected by the next-address field.
Advantage
Making a change to a hardwired control unit implies a global change: the circuit must be
almost totally redesigned. Hence it is costly and time consuming, although present CAD
tools reduce much of the burden in this area. For a microprogrammed control unit, by
contrast, making a change means only changing the microprogram, i.e., the bit pattern in the
micromemory. Tools exist to generate this bit content from a human-readable microprogram,
so changing the microprogram is similar to the edit-compile cycle of an ordinary program.
The control unit circuit does not change. This makes it easy to add new instructions, modify
addressing modes, or update the version of the control behavior.
Disadvantage
Microprogramming relies on fast micromemory; it requires high-speed memory. In fact, the
architects of the early microprogrammed machines, the IBM S/360 family, depended on this
crucial technology, which was still in development at the time. The breakthrough in memory
technology came, and the S/360 became the most successful family of computers. A
hardwired control unit is much faster. Microprogramming is inherently very low level,
making it hard to get absolutely correct. It is also by nature concurrent: many events occur
at the same time, so it is difficult to develop and debug. (For a good account of this process,
read Tracy Kidder's "The Soul of a New Machine".)
***